url
stringlengths
15
1.13k
text
stringlengths
100
1.04M
metadata
stringlengths
1.06k
1.1k
https://en.wikipedia.org/wiki/Ford-Fulkerson_algorithm
# Ford–Fulkerson algorithm (Redirected from Ford-Fulkerson algorithm) The Ford–Fulkerson method or Ford–Fulkerson algorithm (FFA) is a greedy algorithm that computes the maximum flow in a flow network. It is called a "method" instead of an "algorithm" as the approach to finding augmenting paths in a residual graph is not fully specified[1] or it is specified in several implementations with different running times.[2] It was published in 1956 by L. R. Ford, Jr. and D. R. Fulkerson.[3] The name "Ford–Fulkerson" is often also used for the Edmonds–Karp algorithm, which is a fully defined implementation of the Ford–Fulkerson method. The idea behind the algorithm is as follows: as long as there is a path from the source (start node) to the sink (end node), with available capacity on all edges in the path, we send flow along one of the paths. Then we find another path, and so on. A path with available capacity is called an augmenting path. ## Algorithm Let ${\displaystyle G(V,E)}$ be a graph, and for each edge from u to v, let ${\displaystyle c(u,v)}$ be the capacity and ${\displaystyle f(u,v)}$ be the flow. We want to find the maximum flow from the source s to the sink t. After every step in the algorithm the following is maintained: Capacity constraints Skew symmetry ${\displaystyle \forall (u,v)\in E\ f(u,v)\leq c(u,v)}$ The flow along an edge can not exceed its capacity. ${\displaystyle \forall (u,v)\in E\ f(u,v)=-f(v,u)}$ The net flow from u to v must be the opposite of the net flow from v to u (see example). ${\displaystyle \forall u\in V:u\neq s{\text{ and }}u\neq t\Rightarrow \sum _{w\in V}f(u,w)=0}$ The net flow to a node is zero, except for the source, which "produces" flow, and the sink, which "consumes" flow. ${\displaystyle \sum _{(s,u)\in E}f(s,u)=\sum _{(v,t)\in E}f(v,t)}$ The flow leaving from s must be equal to the flow arriving at t. This means that the flow through the network is a legal flow after each round in the algorithm. We define the residual network ${\displaystyle G_{f}(V,E_{f})}$ to be the network with capacity ${\displaystyle c_{f}(u,v)=c(u,v)-f(u,v)}$ and no flow. Notice that it can happen that a flow from v to u is allowed in the residual network, though disallowed in the original network: if ${\displaystyle f(u,v)>0}$ and ${\displaystyle c(v,u)=0}$ then ${\displaystyle c_{f}(v,u)=c(v,u)-f(v,u)=f(u,v)>0}$. Algorithm Ford–Fulkerson Inputs Given a Network ${\displaystyle G=(V,E)}$ with flow capacity c, a source node s, and a sink node t Output Compute a flow f from s to t of maximum value 1. ${\displaystyle f(u,v)\leftarrow 0}$ for all edges ${\displaystyle (u,v)}$ 2. While there is a path p from s to t in ${\displaystyle G_{f}}$, such that ${\displaystyle c_{f}(u,v)>0}$ for all edges ${\displaystyle (u,v)\in p}$: 1. Find ${\displaystyle c_{f}(p)=\min\{c_{f}(u,v):(u,v)\in p\}}$ 2. For each edge ${\displaystyle (u,v)\in p}$ 1. ${\displaystyle f(u,v)\leftarrow f(u,v)+c_{f}(p)}$ (Send flow along the path) 2. ${\displaystyle f(v,u)\leftarrow f(v,u)-c_{f}(p)}$ (The flow might be "returned" later) • "←" denotes assignment. For instance, "largestitem" means that the value of largest changes to the value of item. • "return" terminates the algorithm and outputs the following value. The path in step 2 can be found with for example a breadth-first search (BFS) or a depth-first search in ${\displaystyle G_{f}(V,E_{f})}$. If you use the former, the algorithm is called Edmonds–Karp. When no more paths in step 2 can be found, s will not be able to reach t in the residual network. If S is the set of nodes reachable by s in the residual network, then the total capacity in the original network of edges from S to the remainder of V is on the one hand equal to the total flow we found from s to t, and on the other hand serves as an upper bound for all such flows. This proves that the flow we found is maximal. See also Max-flow Min-cut theorem. If the graph ${\displaystyle G(V,E)}$ has multiple sources and sinks, we act as follows: Suppose that ${\displaystyle T=\{t|t{\text{ is a sink}}\}}$ and ${\displaystyle S=\{s|s{\text{ is a source}}\}}$. Add a new source ${\displaystyle s^{*}}$ with an edge ${\displaystyle (s^{*},s)}$ from ${\displaystyle s^{*}}$ to every node ${\displaystyle s\in S}$, with capacity ${\displaystyle c(s^{*},s)=d_{s}\;(d_{s}=\sum _{(s,u)\in E}c(s,u))}$. And add a new sink ${\displaystyle t^{*}}$ with an edge ${\displaystyle (t,t^{*})}$ from every node ${\displaystyle t\in T}$ to ${\displaystyle t^{*}}$, with capacity ${\displaystyle c(t,t^{*})=d_{t}\;(d_{t}=\sum _{(v,t)\in E}c(v,t))}$. Then apply the Ford–Fulkerson algorithm. Also, if a node u has capacity constraint ${\displaystyle d_{u}}$, we replace this node with two nodes ${\displaystyle u_{in},u_{out}}$, and an edge ${\displaystyle (u_{in},u_{out})}$, with capacity ${\displaystyle c(u_{in},u_{out})=d_{u}}$. Then apply the Ford–Fulkerson algorithm. ## Complexity By adding the flow augmenting path to the flow already established in the graph, the maximum flow will be reached when no more flow augmenting paths can be found in the graph. However, there is no certainty that this situation will ever be reached, so the best that can be guaranteed is that the answer will be correct if the algorithm terminates. In the case that the algorithm runs forever, the flow might not even converge towards the maximum flow. However, this situation only occurs with irrational flow values. When the capacities are integers, the runtime of Ford–Fulkerson is bounded by ${\displaystyle O(Ef)}$ (see big O notation), where ${\displaystyle E}$ is the number of edges in the graph and ${\displaystyle f}$ is the maximum flow in the graph. This is because each augmenting path can be found in ${\displaystyle O(E)}$ time and increases the flow by an integer amount of at least ${\displaystyle 1}$, with the upper bound ${\displaystyle f}$. A variation of the Ford–Fulkerson algorithm with guaranteed termination and a runtime independent of the maximum flow value is the Edmonds–Karp algorithm, which runs in ${\displaystyle O(VE^{2})}$ time. ## Integral example The following example shows the first steps of Ford–Fulkerson in a flow network with 4 nodes, source ${\displaystyle A}$ and sink ${\displaystyle D}$. This example shows the worst-case behaviour of the algorithm. In each step, only a flow of ${\displaystyle 1}$ is sent across the network. If breadth-first-search were used instead, only two steps would be needed. Path Capacity Resulting flow network Initial flow network ${\displaystyle A,B,C,D}$ {\displaystyle {\begin{aligned}&\min(c_{f}(A,B),c_{f}(B,C),c_{f}(C,D))\\=&\min(c(A,B)-f(A,B),c(B,C)-f(B,C),c(C,D)-f(C,D))\\=&\min(1000-0,1-0,1000-0)=1\end{aligned}}} ${\displaystyle A,C,B,D}$ {\displaystyle {\begin{aligned}&\min(c_{f}(A,C),c_{f}(C,B),c_{f}(B,D))\\=&\min(c(A,C)-f(A,C),c(C,B)-f(C,B),c(B,D)-f(B,D))\\=&\min(1000-0,0-(-1),1000-0)=1\end{aligned}}} After 1998 more steps … Final flow network Notice how flow is "pushed back" from ${\displaystyle C}$ to ${\displaystyle B}$ when finding the path ${\displaystyle A,C,B,D}$. ## Non-terminating example Consider the flow network shown on the right, with source ${\displaystyle s}$, sink ${\displaystyle t}$, capacities of edges ${\displaystyle e_{1}}$, ${\displaystyle e_{2}}$ and ${\displaystyle e_{3}}$ respectively ${\displaystyle 1}$, ${\displaystyle r=({\sqrt {5}}-1)/2}$ and ${\displaystyle 1}$ and the capacity of all other edges some integer ${\displaystyle M\geq 2}$. The constant ${\displaystyle r}$ was chosen so, that ${\displaystyle r^{2}=1-r}$. We use augmenting paths according to the following table, where ${\displaystyle p_{1}=\{s,v_{4},v_{3},v_{2},v_{1},t\}}$, ${\displaystyle p_{2}=\{s,v_{2},v_{3},v_{4},t\}}$ and ${\displaystyle p_{3}=\{s,v_{1},v_{2},v_{3},t\}}$. Step Augmenting path Sent flow Residual capacities ${\displaystyle e_{1}}$ ${\displaystyle e_{2}}$ ${\displaystyle e_{3}}$ 0 ${\displaystyle r^{0}=1}$ ${\displaystyle r}$ ${\displaystyle 1}$ 1 ${\displaystyle \{s,v_{2},v_{3},t\}}$ ${\displaystyle 1}$ ${\displaystyle r^{0}}$ ${\displaystyle r^{1}}$ ${\displaystyle 0}$ 2 ${\displaystyle p_{1}}$ ${\displaystyle r^{1}}$ ${\displaystyle r^{2}}$ ${\displaystyle 0}$ ${\displaystyle r^{1}}$ 3 ${\displaystyle p_{2}}$ ${\displaystyle r^{1}}$ ${\displaystyle r^{2}}$ ${\displaystyle r^{1}}$ ${\displaystyle 0}$ 4 ${\displaystyle p_{1}}$ ${\displaystyle r^{2}}$ ${\displaystyle 0}$ ${\displaystyle r^{3}}$ ${\displaystyle r^{2}}$ 5 ${\displaystyle p_{3}}$ ${\displaystyle r^{2}}$ ${\displaystyle r^{2}}$ ${\displaystyle r^{3}}$ ${\displaystyle 0}$ Note that after step 1 as well as after step 5, the residual capacities of edges ${\displaystyle e_{1}}$, ${\displaystyle e_{2}}$ and ${\displaystyle e_{3}}$ are in the form ${\displaystyle r^{n}}$, ${\displaystyle r^{n+1}}$ and ${\displaystyle 0}$, respectively, for some ${\displaystyle n\in \mathbb {N} }$. This means that we can use augmenting paths ${\displaystyle p_{1}}$, ${\displaystyle p_{2}}$, ${\displaystyle p_{1}}$ and ${\displaystyle p_{3}}$ infinitely many times and residual capacities of these edges will always be in the same form. Total flow in the network after step 5 is ${\displaystyle 1+2(r^{1}+r^{2})}$. If we continue to use augmenting paths as above, the total flow converges to ${\displaystyle \textstyle 1+2\sum _{i=1}^{\infty }r^{i}=3+2r}$, while the maximum flow is ${\displaystyle 2M+1}$. In this case, the algorithm never terminates and the flow doesn't even converge to the maximum flow.[4] ## Python implementation of Edmonds-Karp algorithm import collections # This class represents a directed graph using adjacency matrix representation class Graph: def __init__(self,graph): self.graph = graph # residual graph self.ROW = len(graph) def BFS(self,s, t, parent): '''Returns true if there is a path from source 's' to sink 't' in residual graph. Also fills parent[] to store the path ''' # Mark all the vertices as not visited visited = [False] * (self.ROW) # Create a queue for BFS queue = collections.deque() # Mark the source node as visited and enqueue it queue.append(s) visited[s] = True # Standard BFS Loop while queue: u = queue.popleft() # Get all adjacent vertices's of the dequeued vertex u # If a adjacent has not been visited, then mark it # visited and enqueue it for ind, val in enumerate(self.graph[u]): if visited[ind] == False and val > 0 : queue.append(ind) visited[ind] = True parent[ind] = u # If we reached sink in BFS starting from source, then return # true, else false return visited[t] # Returns the maximum flow from s to t in the given graph def EdmondsKarp(self, source, sink): # This array is filled by BFS and to store path parent = [-1] * (self.ROW) max_flow = 0 # There is no flow initially # Augment the flow while there is path from source to sink while self.BFS(source, sink, parent) : # Find minimum residual capacity of the edges along the # path filled by BFS. Or we can say find the maximum flow # through the path found. path_flow = float("Inf") s = sink while s != source: path_flow = min (path_flow, self.graph[parent[s]][s]) s = parent[s] # Add path flow to overall flow max_flow += path_flow # update residual capacities of the edges and reverse edges # along the path v = sink while v != source: u = parent[v] self.graph[u][v] -= path_flow self.graph[v][u] += path_flow v = parent[v] return max_flow ## Notes 1. ^ Laung-Terng Wang, Yao-Wen Chang, Kwang-Ting (Tim) Cheng (2009). Electronic Design Automation: Synthesis, Verification, and Test. Morgan Kaufmann. p. 204. ISBN 0080922007. 2. ^ Thomas H. Cormen; Charles E. Leiserson; Ronald L. Rivest; Clifford Stein (2009). Introduction to Algorithms. MIT Press. p. 714. ISBN 0262258102. 3. ^ Ford, L. R.; Fulkerson, D. R. (1956). "Maximal flow through a network" (PDF). Canadian Journal of Mathematics. 8: 399–404. doi:10.4153/CJM-1956-045-5. 4. ^ Zwick, Uri (21 August 1995). "The smallest networks on which the Ford–Fulkerson maximum flow procedure may fail to terminate". Theoretical Computer Science. 148 (1): 165–170. doi:10.1016/0304-3975(95)00022-O.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 116, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8810535073280334, "perplexity": 678.6207996713437}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647660.83/warc/CC-MAIN-20180321141313-20180321161313-00189.warc.gz"}
https://socratic.org/questions/if-the-radius-of-the-earth-s-orbit-around-the-sun-is-150-million-km-what-is-the-
Astronomy Topics # If the radius of the Earth's orbit around the Sun is 150 million km, what is the speed of the Earth? Mar 20, 2018 The gravitation force grant the centripetal force. #### Explanation: ${F}_{g} = {F}_{c}$ ${F}_{g} = \frac{G {M}_{s u n} {M}_{e a r t h}}{{R}^{2}}$ Newton gravitation law where $G = 6.67 \cdot {10}^{- 11} {m}^{3} / \left(k g \cdot s\right)$, $R = 1.5 \cdot {10}^{11} m , {M}_{s u n} = 2 \cdot {10}^{30}$ $\frac{G {M}_{s u n} {M}_{e a r t h}}{{R}^{2}} = {M}_{e a r t h} {v}^{2} / R$ ${v}^{2} = G {M}_{s u n} / R$ $v = 29.8 \frac{k m}{s}$ ${v}_{m e a n} = 29 , 783 \frac{k m}{s}$ sorce: https://en.wikipedia.org/wiki/Earth ##### Impact of this question 1648 views around the world
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 8, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9394540786743164, "perplexity": 2898.3305865162947}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578531994.14/warc/CC-MAIN-20190421160020-20190421182020-00121.warc.gz"}
https://www.math.ias.edu/seminars/abstract?event=21658
# Noisy Binary Search and Applications COMPUTER SCIENCE/DISCRETE MATH I Topic: Noisy Binary Search and Applications Speaker: Avinatan Hassidim Affiliation: Hebrew University Date: Monday, January 21 Time/Room: 11:15am - 12:15pm/S-101 We use a Bayesian approach to optimally solve problems in noisy binary search. We deal with two variants: 1. Each comparison can be erroneous with some probability 1 - p. 2. At each stage k comparisons can be performed in parallel and a noisy answer is returned We present a (classic) algorithm which optimally solves both variants together, up to an additive term of O(log log (n)), and prove matching information theoretic lower bounds. We use the algorithm with the results of Farhi et al. (FGGS99) presenting a quantum search algorithm in an ordered list of expected complexity less than log(n)/3, and some improved quantum lower bounds on noisy search, and search with an error probability Joint work with Michael Ben-Or.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8510426878929138, "perplexity": 1646.1897722203503}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267162563.97/warc/CC-MAIN-20180925222545-20180926002945-00508.warc.gz"}
https://quant.stackexchange.com/questions/17526/how-to-differentiate-a-brownian-motion/17532
# How to differentiate a brownian motion? By definition a wiener process cannot be differentiated. But when we use Ito's lemma on $F = X^2$, where X is wiener process we have total change in $$dF = 2XdX + dt$$ How can we calculate $\frac{dF}{dX}$ when by definition it cannot be differentiated? Isin't this contradiction by definition? • Have you looked at this Wikipedia page? – Bob Jansen Apr 26 '15 at 9:14 • yes...wikipedia doesn't answer my question...see my comment below. Even for writing the integral form, how do I get dF/dX. – Animesh Saxena Apr 27 '15 at 11:11 • I think you are mixing things here. The function $F: x \mapsto x^2$ is differentiable but not the Wiener process itself :) – byouness May 15 '18 at 15:51 In order to apply Ito's lemma, your function needs to be a twice-differentiable function. There is no issue with the non-differentiability of the Wiener process. $\frac{dF}{dX}$ involves differentiating F, not the Wiener process X. Using a simple analogy: instantaneous velocity ($\frac{dD}{dt}$) is the derivative of position (D) over time; what is differentiated is not time, but distance. I believe this is where your confusion stems from. We write the differential form of Ito formula for simplification. Actually, the differential form for Ito formula $$dF(W(t)) = 2W(t)dW(t) + dt$$ means the integral form for Ito formula, $$\int{dF} = \int{2W(t)dW(t)} + \int{dt}$$ which make sense in mathemaitcs. • We get $2W(t)$ by using Ito formula. And I think ocstl answered your question. – logistic Apr 27 '15 at 13:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.919688880443573, "perplexity": 680.658783108554}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400203096.42/warc/CC-MAIN-20200922031902-20200922061902-00096.warc.gz"}
https://socratic.org/questions/how-do-you-factor-the-expression-6x-2-23x-15
Algebra Topics # How do you factor the expression 6x^2 - 23x + 15? May 8, 2016 $6 {x}^{2} - 23 x + 15 = \left(6 x - 5\right) \left(x - 3\right)$ #### Explanation: Use an AC method: Find a pair of factors of $A C = 6 \cdot 15 = 90$ with sum $B = 23$ The pair $18 , 5$ works in that $18 \times 5 = 90$ and $18 + 5 = 23$. Use this pair to split the middle term and factor by grouping: $6 {x}^{2} - 23 x + 15$ $= 6 {x}^{2} - 18 x - 5 x + 15$ $= \left(6 {x}^{2} - 18 x\right) - \left(5 x - 15\right)$ $= 6 x \left(x - 3\right) - 5 \left(x - 3\right)$ $= \left(6 x - 5\right) \left(x - 3\right)$ $\textcolor{w h i t e}{}$ Alternatively, you can complete the square and use the difference of squares identity: ${a}^{2} - {b}^{2} = \left(a - b\right) \left(a + b\right)$ with $a = \left(12 x - 23\right)$ and $b = 13$ as follows: I will multiply by $4 \cdot 6 = 24$ first to avoid some fractions: $24 \left(6 {x}^{2} - 23 x + 15\right)$ $= 144 {x}^{2} - 552 x + 360$ $= {\left(12 x\right)}^{2} - 2 \left(12 x\right) \left(23\right) + 360$ $= {\left(12 x - 23\right)}^{2} - {23}^{2} + 360$ $= {\left(12 x - 23\right)}^{2} - 529 + 360$ $= {\left(12 x - 23\right)}^{2} - 169$ $= {\left(12 x - 23\right)}^{2} - {13}^{2}$ $= \left(\left(12 x - 23\right) - 13\right) \left(\left(12 x - 23\right) + 13\right)$ $= \left(12 x - 36\right) \left(12 x - 10\right)$ $= \left(12 \left(x - 3\right)\right) \left(2 \left(6 x - 5\right)\right)$ $= 24 \left(x - 3\right) \left(6 x - 5\right)$ Dividing both ends by $24$ we find: $6 {x}^{2} - 23 x + 15 = \left(x - 3\right) \left(6 x - 5\right)$ ##### Impact of this question 1117 views around the world You can reuse this answer
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 29, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9130581617355347, "perplexity": 732.8143486267979}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572517.50/warc/CC-MAIN-20190916100041-20190916122041-00117.warc.gz"}
https://math.stackexchange.com/questions/3192181/uncoditional-formula-for-unsigned-area-under-a-linesegment-w-r-t-y-0
# Uncoditional formula for unsigned area under a linesegment w.r.t. $y=0$ I'm interested in finding the unsigned area under a line segment with respect to $$y=0$$. The line segment is defined by start point $$(s_x, s_y)$$ and end point $$(e_x, e_y)$$ Without loss of generality, I translate the line segment along the $$x$$-axis, equating $$s_x$$ to $$0$$ and decreasing $$e_x$$ by $$s_x$$. The line equation then becomes: $$l(x) = \frac{\textrm{rise}}{\textrm{run}}x + y\textrm{-intercept} = \frac{e_y-s_y}{e_x}x + s_y$$ In order to find the unsigned area under the curve, I take the definite integral of the absolute value of $$l$$: $$A = \int_{0}^{e_x} |l(x)| dx = \int_{0}^{e_x} \left \lvert \frac{e_y-s_y}{e_x}x + s_y \right \rvert dx$$ I used an online tool (https://www.integral-calculator.com/, I believe it uses Maxima) to evaluate this integral (Wolfram Alpha was unable to compute it for free), and it evaluates to: $$A = \frac{(e_y|e_y|-s_y|s_y|)e_x}{2(e_y-s_y)}$$ I've checked the answer for several values and this formula seems to be correct for all combinations of sign for the variables. Now comes my problem. This formula has a discontinuity at $$e_y = s_y$$, which requires me to make this formula conditional. I can remove the discontintuity as follows: If $$e_y = s_y \neq 0$$, the expression becomes: $$A = \frac{(e_y^2-s_y^2)e_x}{2(e_y-s_y)}=\frac{(e_y-s_y)(e_y+s_y)e_x}{2(e_y-s_y)}=\frac{(e_y+s_y)e_x}{2}$$ which is just the familiar formula for the area of a quadrilateral. My questions: 1. Is there an unconditional expression which covers all cases? 2. The formula is commutative in $$e_y$$ and $$s_y$$ (i.e. $$A(e_y, s_y) = A(s_y, e_y)$$ for all $$e_x$$). I would therefore suspect that this formula can be expressed by commutative operators (e.g. summation and multiplication). This conjecture is proven above for the case where $$e_y$$ and $$s_y$$ are of the same sign. Is this correct for the general problem? • When translating along the $y$-axis, isn't it the $y$ coordinate that changes? Also, both points get translated, and the slope will still be \fract{e_y - s_y}{e_x - s_x}.$– Marwan Mizuri Apr 18 at 11:31 • @MarwanMizuri You're right, I meant "translating along the$x$-axis" (updated the question). Also added that$e_x$gets translated by the same amount. – user3072337 Apr 18 at 11:43 • @MarwanMizuri And yes, the slope is still$\frac{e_y - s_y}{e_x - s_x}$, but as$s_x = 0\$, it reduces to the form I mention. – user3072337 Apr 18 at 11:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 21, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9551639556884766, "perplexity": 265.2259520438056}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256426.13/warc/CC-MAIN-20190521142548-20190521164548-00161.warc.gz"}
http://mathoverflow.net/questions/3299/does-the-super-temperley-lieb-algebra-have-a-z-form
# Does the super Temperley-Lieb algebra have a Z-form? Background Let V denote the standard (2-dimensional) module for the Lie algebra sl2(C), or equivalently for the universal envelope U = U(sl2(C)). The Temperley-Lieb algebra TLd is the algebra of intertwiners of the d-fold tensor power of V. TLd = EndU(V⊗…⊗V) Now, let the symmetric group, and hence its group algebra CSd, act on the right of V⊗…⊗V by permuting tensor factors. According to Schur-Weyl duality, V⊗…⊗V is a (U,CSd)-bimodule, with the image of each algebra inside EndC(V⊗…⊗V) being the centralizer of the other. In other words, TLd is a quotient of CSd. The kernel is easy to describe. First decompose the group algebra into its Wedderburn components, one matrix algebra for each irrep of Sd. These are in bijection with partitions of d, which we should picture as Young diagrams. The representation is faithful on any component indexed by a diagram with at most 2 rows and it annihilates all other components. So far, I have deliberately avoided the description of the Temperley-Lieb algebra as a diagram algebra in the sense that Kauffman describes it. Here's the rub: by changing variables in Sd to ui = si + 1, where si = (i i+1), the structure coefficients in TLd are all integers so that one can define a ℤ-form TLd(ℤ) by these formulas. TLd = C ⊗ TLd(ℤ) As product of matrix algebras (as in the Wedderburn decomposition), TLd has a ℤ-form, as well: namely, matrices of the same dimensions over ℤ. These two rings are very different, the latter being rather trivial from the point of view of knot theory. They only become isomorphic after a base change to C. There is a super-analog of this whole story. Let U = U(gl1|1(C)), let V be the standard (1|1)-dimensional module, and let the symmetric group act by signed permutations (when two odd vectors cross, a sign pops up). An analogous Schur-Weyl duality statement holds, and so, by analogy, I call the algebra of intertwiners the super-Temperley-Lieb algebra, or STLd. Over the complex numbers, STLd is a product of matrix algebras corresponding to the irreps of Sd indexed by hook partitions. Young diagrams are confined to one row and one column (super-row!). In that sense, STLd is understood. However, idempotents involved in projecting onto these Wedderburn components are nasty things that cannot be defined over ℤ Question 1: Does STLd have a ℤ-form that is compatible with the standard basis for CSd? Question 2: I am pessimistic about Q1; hence, the follow up: why not? I suspect that this has something to do with cellularity. Question 3: I care about q-deformations of everything mentioned: Uq and the Hecke algebra, respectively. What about here? I am looking for a presentation of STLd,q defined over ℤ[q,q-1]. - Wow, great questions! –  Scott Morrison Oct 29 '09 at 19:41 Though boy did it ever confuse me that you're using d here to denote the number of strings instead of the value of the circle! –  Noah Snyder Oct 30 '09 at 1:28 An important difference between these two cases. For sl_2 the defining representation is self-dual, so the TL algebra captures the full representation theory. For gl(1|1), on the other hand, you're only seeing some of the representation theory. In particular, you only get to see the semisimple part of the representation theory. –  Noah Snyder Oct 30 '09 at 1:32 The question is great --- if I could vote it up more than once, I would. I have no useful answers, though. But your question suggests related ones, which I've posted at [mathoverflow.net/questions/3366/]. –  Theo Johnson-Freyd Oct 30 '09 at 1:36 @Noah: Sorry for the confusion. I think I choose a different letter to index these things every time I talk about them. I like to use m and n for things like gl(m|n). Perhaps, d is poor choice, since $\delta$ is so often the loop value, as you say... –  Sammy Black Oct 30 '09 at 4:31 It depends what you mean by "compatible." For any Z-form of a finite-dimensional C-algebra, there's a canonical Z-form for any quotient just given by the image (the image is a finitely generated abelian subgroup, and thus a lattice). I'll note that the integral form Bruce suggests below is precisely the one induced this way by the Kazhdan-Lusztig basis, since his presentation is the presentation of the Hecke algebra via the K-L basis vectors for reflections, with the additional relations. What you could lose when you take quotients is positivity (which I presume is one of things you are after). The Hecke algebra of S_n has a basis so nice I would call it "canonical" but usually called Kazhdan-Lusztig. This basis has a a very strong positivity property (its structure coefficients are Laurent polynomials with positive integer coefficients). I would argue that this is the structure you are interested in preserving in the quotient. If you want a basis of an algebra to descend a quotient, you'd better hope that the intersection of the basis with the kernel is a basis of the kernel (so that the image of the basis is a basis and a bunch of 0's). An ideal in the Hecke algebra which has a basis given by a subset of the KL basis is called "cellular." The kernel of the map to TLd, and more generally to EndU_q(sl_n)(V⊗d) for any n and d, is cellular. Basically, this is because the parititions corresponding to killed representations form an upper order ideal in the dominance poset of partitions. However, the kernel of the map to STLd is not cellular. In particular, every cellular ideal contains the alternating representation, so any quotient where the alternating representation survives is not cellular. So, while STLd inherits a perfectly good Z-form, it doesn't inherit any particular basis from the Hecke algebra. I'm genuinely unsure if this is really a problem from your standpoint. I mean, the representation V⊗d still has a basis on which the image of any positive integral linear combination of KL basis vectors acts with positive integral coefficients. However, I don't think this guarantees any kind of positivity of structure coefficients. Also, Stroppel and Mazorchuk have a categorification of the Artin-Wedderburn basis of S_n, so maybe it's not as bad as you thought. Anyways, if people want to have a real discussion about this, I suggest we retire to the nLab. I've started a relevant page there. - Minor point: both here and on the nLab page, you refer to the algebra $STL_d$ as the q-Schur algebra, but I believe that they are centralizers of one another. Either way, thanks. Your comments are helpful. –  Sammy Black Mar 23 '10 at 23:08 You're right. I'll fix that. –  Ben Webster Mar 24 '10 at 2:26 I would define this algebra by a presentation. This algebra is over $\mathbb{Z}[\delta]$ but you can specialise to $\mathbb{Z}[q,q^{-1}]$ by taking $\delta\mapsto q+q^{-1}$. The generators are $u_1,\ldots ,u_{n-1}$ (I have $n$ where OP has $d$). Then defining relations are $$u_i^2=\delta u_i$$ $$u_iu_j=u_ju_i\qquad\text{if |i-j|>1}$$ $$u_iu_{i+1}u_i-u_i=u_{i+1}u_iu_{i+1}-u_{i+1}$$ $$u_{i-1}u_{i+1}u_i(\delta-u_{i-1})(\delta-u_{i+1})=0$$ $$(\delta-u_{i-1})(\delta-u_{i+1})u_iu_{i-1}u_{i+1}=0$$ Does this count? If you want to move this to nLab that's fine by me. - I don't understand your final comment. When $n=4$, the algebra is 20-dimensional, but only 2 transpositions in $S_4$ have the transposition $(i−1, i+2) = (1 \;4)$, namely $(4,2,3,1)$ and $(4,3,2,1)$ in one-line notation. –  Sammy Black Mar 23 '10 at 20:07 I have withdrawn the final comment. (I meant something different but that's irrelevant as it was mistaken). –  Bruce Westbury Mar 23 '10 at 21:06 Bruce- the KL basis vectors corresponding to hook partitions under RS definitely project to a basis. Unfortunately, this basis won't have the kind of positivity one gets from the KL basis. –  Ben Webster Mar 23 '10 at 21:12 It is not clear to me whether other (elementary) bases also have this property. If you insist on positivity (presumably because your ambition is to categorify this algebra) then is there a Kazhdan-Lusztig basis? –  Bruce Westbury Mar 23 '10 at 22:02 I would be very surprised if there were anything positive. After some discussions last week, I have some ideas about how categorifying this story works, and it definitely has differentials in it. –  Ben Webster Mar 23 '10 at 22:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8831586241722107, "perplexity": 745.7361059345086}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645367702.92/warc/CC-MAIN-20150827031607-00333-ip-10-171-96-226.ec2.internal.warc.gz"}
http://math.eretrandre.org/tetrationforum/showthread.php?tid=424&pid=5021
• 1 Vote(s) - 5 Average • 1 • 2 • 3 • 4 • 5 using sinh(x) ? sheldonison Long Time Fellow Posts: 639 Threads: 22 Joined: Oct 2008 07/21/2010, 03:19 AM (This post was last modified: 07/21/2010, 05:06 PM by sheldonison.) (07/20/2010, 08:31 PM)tommy1729 Wrote: how do you know the superfunction of 2sinh is periodic in the imaginary direction ? that is intresting.It is interesting. The period is based on the limiting behavior of 2sinh at its fixed point of zero, where the slope=2. If -real(z) is large enough, then 2^z becomes an excellent approximation of the SuperFunction of 2sinh. The limit equation for the superfunction of 2sinh in the complex plane is: $\operatorname{SuperFunction}(z) = \lim_{n \to \infty} \operatorname{2sinh}^{[n]}(2^{z-n})$ Since 2^z is periodic in the imaginary, with period = i*2pi/ln(2), then the SuperFunction of 2sinh is also periodic with that period. This leads to some interesting behavior (pointed out in earlier posts). The SuperFunction grows superexponentially negative at img(z)=i*pi/ln(2), with img(f)=0. At img(z)=0.5i*pi/ln(2), the SuperFunction has real(f)=0, and converges to an imaginary fixed point. - Sheldon tommy1729 Ultimate Fellow Posts: 1,370 Threads: 335 Joined: Feb 2009 07/21/2010, 10:34 PM sheldon , you write " if -real(z) is large enough ..." do you mean if real(z) is a large negative number ? but then it isnt periodic for all real(z) so its rather " pseudoperiodic ". then we approximate periodicity in the section with large negative real parts in the imaginary direction. weird , in comparison with kouznetsov sexp which approximates periodicity in the section with large imaginary parts in the real direction. thats the same property rotated by 90 degrees ?? hmm wouldnt by that logic almost every superfunction be periodic ?? i mean => af = f ' (0) * f(x) lim n-> oo af^[n] ( a^(z-n) ) real(z) << -10 since a^z is periodic in direction Q , with period i*2pi/ln(a) , then the superfunction of af is also periodic with that period. ... by analogue ... ???? sorry if im mistaken in advance. regards tommy1729 sheldonison Long Time Fellow Posts: 639 Threads: 22 Joined: Oct 2008 07/22/2010, 12:23 PM (This post was last modified: 07/22/2010, 12:31 PM by sheldonison.) (07/21/2010, 10:34 PM)tommy1729 Wrote: hmm wouldnt by that logic almost every superfunction be periodic ?? i mean => af = f ' (0) * f(x) lim n-> oo af^[n] ( a^(z-n) ) real(z) << -10 since a^z is periodic in direction Q , with period i*2pi/ln(a) , then the superfunction of af is also periodic with that period.Mostly correct, with period i*2pi/ln(a). But not true for every superfunction. Knesser's solution for sexp_e is not periodic, but is pseudo periodic. It starts with the superfunction, which is complex valued at the real axis, and then does a conformal mapping, to put real values back on the real axis, along with a Schwarz reflection. For i(z)>=1i, sexp_e is fairly close to the (linearly shifted) complex valued superfunction. You can see the complex pseudo periodicity in the contour graphs in Kouznetsov's paper on the sexp_e. - Sheldon tommy1729 Ultimate Fellow Posts: 1,370 Threads: 335 Joined: Feb 2009 07/28/2010, 02:24 PM exp(x) plotted in red. ln ln ln 2sinh exp exp exp (x) plotted in black. this is 3 iterations of my formula for the approximation of exp(x). as you can see that is already pretty good. it is clear that for large x the approximation with my formula is better than for smaller x. so i plotted a section of negative x. regards tommy1729 Attached Files   save1.svg (Size: 6.88 KB / Downloads: 293)   save3.svg (Size: 6.59 KB / Downloads: 296)   save2.svg (Size: 7.49 KB / Downloads: 305) tommy1729 Ultimate Fellow Posts: 1,370 Threads: 335 Joined: Feb 2009 07/28/2010, 02:53 PM as said before exp exp exp ... (z) does *not converge* in a neighbourhood of z when im(z) =/= 0. i believe my algoritm is coo on the reals and by analytic continuation / mittag leffler expansion we can construct " the " (imho i.e. "my") sexp. by * not converge* i mean it is chaotic , since of course exp exp ... (2) also diverges. this is so because e^(x + yi) = e^x ( cos y + sin y i ) and large y can become arbitrary close to a multiple of 2pi. although analytic continuation solves the problem - assuming it works and it has period 2pi i - i would like to investigate further the behaviour of the exp iteration. for instance , is it true that the neighbourhood of z always contains exactly one non-chaotic value ? call it hp. assuming we can find hp for every z as a limiting sequence going to zero in the correct way. is it then possible to extend my formula to : exp^[z1,z] = ln ln ln ... 2sinh^[z] exp exp exp ... ( z1 + hp(z1) ) regards tommy1729 tommy1729 Ultimate Fellow Posts: 1,370 Threads: 335 Joined: Feb 2009 12/04/2010, 11:09 PM (This post was last modified: 12/04/2010, 11:20 PM by tommy1729.) $\operatorname{TommySexp_e}(z,x)= \lim_{n \to \infty } \ln^{[n]} (\operatorname{2sinh}^{[z]}(\exp^{[n]}(x)))$ to compute a taylor series , we need to know that it is justified. concretely that means we need to prove that $\ D^m {TommySexp_e}(z,x)= D^m \lim_{n \to \infty } \ln^{[n]} (\operatorname{2sinh}^{[z]}(\exp^{[n]}(x)))$ , where D^m is the mth derivative with respect to x(*) , holds. ( with respect to z should give the same result , but seems harder at first sight ) analytic continuation preserves periodicity , which is 2pi i. that is also the reason why we apparantly cant extent this method to bases between eta and e^(1/2). ( hints : fourier , entire , think about it ) a further remark and actually request is that i would love to see a plot of all the solutions , including branches of exp(x) = t exp(exp(x)) = t ... exp^[n] = t where t = 0 +/- 1.895494239i is one of the nonzero fixpoint of 2sinh. it might give me new insight or ideas. thanks in advance. regards tommy1729 tommy1729 Ultimate Fellow Posts: 1,370 Threads: 335 Joined: Feb 2009 12/08/2010, 02:31 PM (12/04/2010, 11:09 PM)tommy1729 Wrote: a further remark and actually request is that i would love to see a plot of all the solutions , including branches of exp(x) = t exp(exp(x)) = t ... exp^[n] = t where t = 0 +/- 1.895494239i is one of the nonzero fixpoint of 2sinh. any volunteers ? sheldonison Long Time Fellow Posts: 639 Threads: 22 Joined: Oct 2008 01/17/2011, 07:43 PM (This post was last modified: 01/17/2011, 08:34 PM by sheldonison.) (12/08/2010, 02:31 PM)tommy1729 Wrote: (12/04/2010, 11:09 PM)tommy1729 Wrote: a further remark and actually request is that i would love to see a plot of all the solutions , including branches .... any volunteers ?I am re-posting this reply after fixing a minor error. There is still some interest in the TommySexp solution (wikipedia talk), which as suggested previously, is infinitely differentiable, but probably nowhere analytic at the real axis. The goal of this post is to show the location of the singularities that limit the radius of convergence, for n=3 and n=4, to bolster the argument that the TommySexp function is nowhere analytic. The singularities occur wherever the 2sinh superfunction(z) has a value of 0 + 2nPi*I, where n is a positive integer. Start with the n=3 case, renormalized with a bias=0.06783836607, so that TommySexp(0)=1. n=3 is sufficiently large to get more than double precision accurate results at the real axis, centered around z=0. Here is a plot of TommySexp(z) with a radius of 0.45, with z centered at the origin, for the n=3 case, which requires three logarithms. ${\text{TommySexp}(z)= \ln(\ln(\ln(\text{2sinh}^{[z+\text{bias}+3]})))=\ln(\ln(\ln(\text{superf2sinh(z+\text{bias}+3))))$ For this plot with red=real, and green=imaginary. For this plot, n=3, with z=0.45*exp(I*theta), plotting TommySexp(z). Inside this region of the complex plane, the TommySexp function is analytic, with no singularities.     But now, we increase the sample radius merely 0.008, from 0.450 to 0.458. And now the TommySexp function becomes poorly behaved, because there are singularities inside the circle. Following this path to theta(Pi*I), the imag(TommySexp(-0.458 )) is no longer even zero, due to the singularities! This means the function's value has becomes dependent on which path around the singularity is chosen. Again, red is real, and green is imaginary.     What's going on, is that there are singularities, so that the TommySexp function has a limited radius of convergence. Next, I post the first fifty singularities of TommySexp, for the n=3 case. The fourth singularity is closest to the origin (after iterating three logarithms), with an absolute value of 0.457, which is why the second plot of the TommySexp(z), centered at the origin, with a radius=0.458 is poorly behaved. Code:First fifty singularities 1 2.007675616 + 0.7110234703*I 2 2.156738932 + 0.4901228794*I 3 2.234549232 + 0.4054846386*I 4 2.285107737 + 0.3580209487*I 5 2.321752935 + 0.3267267408*I 6 2.350104098 + 0.3041253984*I 7 2.373010280 + 0.2868168927*I 8 2.392097990 + 0.2730084001*I 9 2.408376026 + 0.2616549026*I 10 2.422509626 + 0.2521013150*I 11 2.434958702 + 0.2439136307*I 12 2.446053388 + 0.2367915138*I 13 2.456038031 + 0.2305196835*I 14 2.465098133 + 0.2249392776*I 15 2.473377583 + 0.2199301716*I 16 2.480990056 + 0.2153996258*I 17 2.488026798 + 0.2112747481*I 18 2.494562072 + 0.2074973457*I 19 2.500657065 + 0.2040203194*I 20 2.506362741 + 0.2008050829*I 21 2.511721961 + 0.1978196799*I 22 2.516771090 + 0.1950373863*I 23 2.521541226 + 0.1924356567*I 24 2.526059153 + 0.1899953187*I 25 2.530348091 + 0.1876999500*I 26 2.534428293 + 0.1855353899*I 27 2.538317521 + 0.1834893535*I 28 2.542031431 + 0.1815511233*I 29 2.545583889 + 0.1797113001*I 30 2.548987229 + 0.1779616016*I 31 2.552252468 + 0.1762946966*I 32 2.555389481 + 0.1747040687*I 33 2.558407155 + 0.1731839033*I 34 2.561313512 + 0.1717289930*I 35 2.564115816 + 0.1703346579*I 36 2.566820664 + 0.1689966790*I 37 2.569434067 + 0.1677112411*I 38 2.571961511 + 0.1664748842*I 39 2.574408018 + 0.1652844619*I 40 2.576778197 + 0.1641371059*I 41 2.579076287 + 0.1630301948*I 42 2.581306193 + 0.1619613273*I 43 2.583471521 + 0.1609282993*I 44 2.585575608 + 0.1599290830*I 45 2.587621547 + 0.1589618092*I 46 2.589612207 + 0.1580247520*I 47 2.591550261 + 0.1571163144*I 48 2.593438195 + 0.1562350166*I 49 2.595278330 + 0.1553794846*I 50 2.597072833 + 0.1545484411*IThe singularities occur wherever the 2sinh superfunction(z) has a value of 0 + 2nPi*I, where n is an integer>=1, because superf2sinh(z+1)=0, and the logarithm of zero is a singularity. For an example, consider the fourth singularity on the list. Let z= 2.285107737 + 0.3580209487*I superf2sinh(z)=25.13274123*I=8*Pi*I. superf2sinh(z+1)=0. Now iterate the logarithm of superf2sinh(z+1), three times. We get a singularity on the first iteration. The logarithm, on a path circling the singularity gives different results, depending on the path. The logarithm of the logarithm of a singularity is also a singularity, which explains the second graph. All of these particular singularities line on a contour line, where real(superf2sinh(z))=0. So, now I plot this curve (red), superimposing on it a radius (z=0.458 ) half circle, showing how the sampling circle now includes the first singularity. This curve also shows that for the n=4 case, the radius of convergence is much much smaller still. It turns out the 600 thousandth singularity is at z=4.0025+0.0347i, so the radius of convergence for n=4 drops to 0.035. Using this definition, as n (number of iterated logarithms used to generate TommySexp) grows larger, the radius of convergence for TommySexp(z) at the origin gets arbitrarily small. Also, as the singularities approach the real axis, they also get arbitrarily close together. Thus, even though it can be shown that all of TommySexp's derivatives at the real axis are continuous, it is nonetheless probably nowhere analytic, just like the base change sexp definition discussed in an earlier post. - Sheldon     tommy1729 Ultimate Fellow Posts: 1,370 Threads: 335 Joined: Feb 2009 01/17/2011, 11:36 PM thanks for your reply sheldon. in fact my method is intended for the real line. for the complex plane it will not work * in its limit form *. i fact , i wont converge for most nonreal numbers * in its limit form *. ( fixpoints L and L* will work ) to sketch some of the reasons , apart from yours , exp exp exp ... exp(z) does not converge for the neighbourhood of any nonreal z. the identity like functions that commutes with * the limit form * is id(x) = log log ... id( exp exp ... ) however x = log log exp exp (x) but this does not hold for complex z z =/= log log exp exp (z) which has ofcourse great consequences ( again for the limit form ! ) i dont have much time , but i think i made some ideas clear. although slightly on different paths , i think our ideas will merge. i think i can show that my limit form can be transformed to work for all of z. later when i have more time. i had some intuition about those singularities , so it doesnt surprise me. on the other hand , we might be able to learn more about them , and i thank sheldon for the post and pics. regards tommy1729 ps : im still thinking about the base change too , despite not personally appealing to me and ( imho ? ) missing important properties ...[/font] tommy1729 Ultimate Fellow Posts: 1,370 Threads: 335 Joined: Feb 2009 07/02/2011, 10:32 PM $\operatorname{TommySexp_e}(z,x)= \lim_{n \to \infty } \ln^{[n]} (\operatorname{2sinh}^{[z]}(\exp^{[n]}(x)))$ i can now prove a positive radius in x when expanded at certain points and Coo for real z. i assume that can be strengthened to analytic in both x and z with some effort (and perhaps a good book). ( see lemma 1 * add link later * ) however i wont go into details here yet , im currently more intrested in other things about tetration and perhaps this should make a good paper. in particular i still dont know why 1.729 i ^1.729 i ^ ... 0.5 + 0.5 i gives a circle and that bothers me. « Next Oldest | Next Newest » Possibly Related Threads... Thread Author Replies Views Last Post exp^[3/2](x) > sinh^[1/2](exp(x)) ? tommy1729 7 6,076 10/26/2015, 01:07 AM Last Post: tommy1729 2*sinh(3^h*asinh(x/2)) is the superfunction of (...) ? Gottfried 5 5,598 09/11/2013, 08:32 PM Last Post: Gottfried zeta and sinh tommy1729 0 2,020 05/30/2011, 12:07 PM Last Post: tommy1729 Users browsing this thread: 1 Guest(s)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 5, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9164678454399109, "perplexity": 2663.5790427207726}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145767.72/warc/CC-MAIN-20200223093317-20200223123317-00452.warc.gz"}
https://www.gradesaver.com/textbooks/math/calculus/calculus-3rd-edition/chapter-12-parametric-equations-polar-coordinates-and-conic-sections-12-1-parametric-equations-exercises-page-603/36
## Calculus (3rd Edition) The parametric equation after the translation is $c(t)=(7+5\cos t,4+12\sin t)$ The ellipse of Exercise 28 is parametrized by $c(t)=(5\cos t,12\sin t)$ for $-\pi \leq t\leq \pi$. The center is at the origin. To translate the center of $c(t)$ at $(0,0)$ to $(7,4)$, we replace $c(t)=(5\cos t,12\sin t)$ by $c(t)=(7+5\cos t,4+12\sin t)$ for $-\pi \leq t\leq \pi$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9388898611068726, "perplexity": 156.6512908697784}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401624636.80/warc/CC-MAIN-20200929025239-20200929055239-00287.warc.gz"}
https://insightmaker.com/insight/111574/Migration-and-infection-propagation
Migration and infection propagation This insight is about infection propagation and  population migration influence on this propagation. For this, we defined a world population size and a percentage of it who’s infected. Then, we created an agent where we simulated possible states of an individual. So, he can be healthy, infected (with an infection rate) or immunized ( with a certain rate of immunization). If the individual is infected, he can be alive or dead. Then, we simulated different continents (North-America, Asia and Europe) with a migration between these with a certain rate of migration (we tried to approach reality). Then, thanks to our move action which represents a circular permutation between the different continents with a random probability, the agent will be applied to every individual of the world population. How does the program work ? In order to use this insight, we need to define a size of world population and a probability of every individual to reproduce himself. Every individual of this population can have three different state (healthy, infected or immunized) and infected people can be alive or dead. We need to define a percentage of infection to healthy people and a percentage of death for infected people and also a percentage of immunization. Finally there is Migration Part of the program, in this one, we need to define three different continents, states or whatever you want. We also need to define a migration probability between each continent to move these person. With this moving people, we can study the influence of migration on the propagation of a disease. Vincent Cochet, Julien Platel, Jordan Béguet
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9091640710830688, "perplexity": 1428.9781815095098}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655895944.36/warc/CC-MAIN-20200707204918-20200707234918-00550.warc.gz"}
https://en.wikipedia.org/wiki/Principal_component_analysis
# Principal component analysis Principal component analysis (PCA) is a popular technique for analyzing large datasets containing a high number of dimensions/features per observation, increasing the interpretability of data while preserving the maximum amount of information, and enabling the visualization of multidimensional data. Formally, PCA is a statistical technique for reducing the dimensionality of a dataset. This is accomplished by linearly transforming the data into a new coordinate system where (most of) the variation in the data can be described with fewer dimensions than the initial data. Many studies use the first two principal components in order to plot the data in two dimensions and to visually identify clusters of closely related data points. Principal component analysis has applications in many fields such as Population Genetics, Microbiome studies, Atmospheric Science etc. [1] PCA of a multivariate Gaussian distribution centered at (1,3) with a standard deviation of 3 in roughly the (0.866, 0.5) direction and of 1 in the orthogonal direction. The vectors shown are the eigenvectors of the covariance matrix scaled by the square root of the corresponding eigenvalue, and shifted so their tails are at the mean. The principal components of a collection of points in a real coordinate space are a sequence of ${\displaystyle p}$ unit vectors, where the ${\displaystyle i}$-th vector is the direction of a line that best fits the data while being orthogonal to the first ${\displaystyle i-1}$ vectors. Here, a best-fitting line is defined as one that minimizes the average squared perpendicular distance from the points to the line. These directions constitute an orthonormal basis in which different individual dimensions of the data are linearly uncorrelated. Principal component analysis (PCA) is the process of computing the principal components and using them to perform a change of basis on the data, sometimes using only the first few principal components and ignoring the rest. In data analysis, the first principal component of a set of ${\displaystyle p}$ variables, presumed to be jointly normally distributed, is the derived variable formed as a linear combination of the original variables that explains the most variance. The second principal component explains the most variance in what is left once the effect of the first component is removed, and we may proceed through ${\displaystyle p}$ iterations until all the variance is explained. PCA is most commonly used when many of the variables are highly correlated with each other and it is desirable to reduce their number to an independent set. PCA is used in exploratory data analysis and for making predictive models. It is commonly used for dimensionality reduction by projecting each data point onto only the first few principal components to obtain lower-dimensional data while preserving as much of the data's variation as possible. The first principal component can equivalently be defined as a direction that maximizes the variance of the projected data. The ${\displaystyle i}$-th principal component can be taken as a direction orthogonal to the first ${\displaystyle i-1}$ principal components that maximizes the variance of the projected data. For either objective, it can be shown that the principal components are eigenvectors of the data's covariance matrix. Thus, the principal components are often computed by eigendecomposition of the data covariance matrix or singular value decomposition of the data matrix. PCA is the simplest of the true eigenvector-based multivariate analyses and is closely related to factor analysis. Factor analysis typically incorporates more domain specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related to canonical correlation analysis (CCA). CCA defines coordinate systems that optimally describe the cross-covariance between two datasets while PCA defines a new orthogonal coordinate system that optimally describes variance in a single dataset.[2][3][4][5] Robust and L1-norm-based variants of standard PCA have also been proposed.[6][7][8][5] ## History PCA was invented in 1901 by Karl Pearson,[9] as an analogue of the principal axis theorem in mechanics; it was later independently developed and named by Harold Hotelling in the 1930s.[10] Depending on the field of application, it is also named the discrete Karhunen–Loève transform (KLT) in signal processing, the Hotelling transform in multivariate quality control, proper orthogonal decomposition (POD) in mechanical engineering, singular value decomposition (SVD) of X (invented in the last quarter of the 19th century[11]), eigenvalue decomposition (EVD) of XTX in linear algebra, factor analysis (for a discussion of the differences between PCA and factor analysis see Ch. 7 of Jolliffe's Principal Component Analysis),[12] Eckart–Young theorem (Harman, 1960), or empirical orthogonal functions (EOF) in meteorological science, empirical eigenfunction decomposition (Sirovich, 1987), empirical component analysis (Lorenz, 1956), quasiharmonic modes (Brooks et al., 1988), spectral decomposition in noise and vibration, and empirical modal analysis in structural dynamics. ## Intuition PCA can be thought of as fitting a p-dimensional ellipsoid to the data, where each axis of the ellipsoid represents a principal component. If some axis of the ellipsoid is small, then the variance along that axis is also small. To find the axes of the ellipsoid, we must first center the values of each variable in the dataset on 0 by subtracting the mean of the variable's observed values from each of those values. These transformed values are used instead of the original observed values for each of the variables. Then, we compute the covariance matrix of the data and calculate the eigenvalues and corresponding eigenvectors of this covariance matrix. Then we must normalize each of the orthogonal eigenvectors to turn them into unit vectors. Once this is done, each of the mutually-orthogonal unit eigenvectors can be interpreted as an axis of the ellipsoid fitted to the data. This choice of basis will transform the covariance matrix into a diagonalized form, in which the diagonal elements represent the variance of each axis. The proportion of the variance that each eigenvector represents can be calculated by dividing the eigenvalue corresponding to that eigenvector by the sum of all eigenvalues. Biplots and scree plots (degree of explained variance) are used to explain findings of the PCA. The above picture is of a scree plot that is meant to help interpret the PCA and decide how many components to retain. The start of the bend in the line (point of inflexion) should indicate how many components are retained, hence in this example, three factors should be retained. ## Details PCA is defined as an orthogonal linear transformation that transforms the data to a new coordinate system such that the greatest variance by some scalar projection of the data comes to lie on the first coordinate (called the first principal component), the second greatest variance on the second coordinate, and so on.[12] Consider an ${\displaystyle n\times p}$ data matrix, X, with column-wise zero empirical mean (the sample mean of each column has been shifted to zero), where each of the n rows represents a different repetition of the experiment, and each of the p columns gives a particular kind of feature (say, the results from a particular sensor). Mathematically, the transformation is defined by a set of size ${\displaystyle l}$ of p-dimensional vectors of weights or coefficients ${\displaystyle \mathbf {w} _{(k)}=(w_{1},\dots ,w_{p})_{(k)}}$ that map each row vector ${\displaystyle \mathbf {x} _{(i)}}$ of X to a new vector of principal component scores ${\displaystyle \mathbf {t} _{(i)}=(t_{1},\dots ,t_{l})_{(i)}}$, given by ${\displaystyle {t_{k}}_{(i)}=\mathbf {x} _{(i)}\cdot \mathbf {w} _{(k)}\qquad \mathrm {for} \qquad i=1,\dots ,n\qquad k=1,\dots ,l}$ in such a way that the individual variables ${\displaystyle t_{1},\dots ,t_{l}}$ of t considered over the data set successively inherit the maximum possible variance from X, with each coefficient vector w constrained to be a unit vector (where ${\displaystyle l}$ is usually selected to be strictly less than ${\displaystyle p}$ to reduce dimensionality). ### First component In order to maximize variance, the first weight vector w(1) thus has to satisfy ${\displaystyle \mathbf {w} _{(1)}=\arg \max _{\Vert \mathbf {w} \Vert =1}\,\left\{\sum _{i}(t_{1})_{(i)}^{2}\right\}=\arg \max _{\Vert \mathbf {w} \Vert =1}\,\left\{\sum _{i}\left(\mathbf {x} _{(i)}\cdot \mathbf {w} \right)^{2}\right\}}$ Equivalently, writing this in matrix form gives ${\displaystyle \mathbf {w} _{(1)}=\arg \max _{\left\|\mathbf {w} \right\|=1}\left\{\left\|\mathbf {Xw} \right\|^{2}\right\}=\arg \max _{\left\|\mathbf {w} \right\|=1}\left\{\mathbf {w} ^{\mathsf {T}}\mathbf {X} ^{\mathsf {T}}\mathbf {Xw} \right\}}$ Since w(1) has been defined to be a unit vector, it equivalently also satisfies ${\displaystyle \mathbf {w} _{(1)}=\arg \max \left\{{\frac {\mathbf {w} ^{\mathsf {T}}\mathbf {X} ^{\mathsf {T}}\mathbf {Xw} }{\mathbf {w} ^{\mathsf {T}}\mathbf {w} }}\right\}}$ The quantity to be maximised can be recognised as a Rayleigh quotient. A standard result for a positive semidefinite matrix such as XTX is that the quotient's maximum possible value is the largest eigenvalue of the matrix, which occurs when w is the corresponding eigenvector. With w(1) found, the first principal component of a data vector x(i) can then be given as a score t1(i) = x(i)w(1) in the transformed co-ordinates, or as the corresponding vector in the original variables, {x(i)w(1)} w(1). ### Further components The k-th component can be found by subtracting the first k − 1 principal components from X: ${\displaystyle \mathbf {\hat {X}} _{k}=\mathbf {X} -\sum _{s=1}^{k-1}\mathbf {X} \mathbf {w} _{(s)}\mathbf {w} _{(s)}^{\mathsf {T}}}$ and then finding the weight vector which extracts the maximum variance from this new data matrix ${\displaystyle \mathbf {w} _{(k)}=\mathop {\operatorname {arg\,max} } _{\left\|\mathbf {w} \right\|=1}\left\{\left\|\mathbf {\hat {X}} _{k}\mathbf {w} \right\|^{2}\right\}=\arg \max \left\{{\tfrac {\mathbf {w} ^{\mathsf {T}}\mathbf {\hat {X}} _{k}^{\mathsf {T}}\mathbf {\hat {X}} _{k}\mathbf {w} }{\mathbf {w} ^{T}\mathbf {w} }}\right\}}$ It turns out that this gives the remaining eigenvectors of XTX, with the maximum values for the quantity in brackets given by their corresponding eigenvalues. Thus the weight vectors are eigenvectors of XTX. The k-th principal component of a data vector x(i) can therefore be given as a score tk(i) = x(i)w(k) in the transformed coordinates, or as the corresponding vector in the space of the original variables, {x(i)w(k)} w(k), where w(k) is the kth eigenvector of XTX. The full principal components decomposition of X can therefore be given as ${\displaystyle \mathbf {T} =\mathbf {X} \mathbf {W} }$ where W is a p-by-p matrix of weights whose columns are the eigenvectors of XTX. The transpose of W is sometimes called the whitening or sphering transformation. Columns of W multiplied by the square root of corresponding eigenvalues, that is, eigenvectors scaled up by the variances, are called loadings in PCA or in Factor analysis. ### Covariances XTX itself can be recognized as proportional to the empirical sample covariance matrix of the dataset XT.[12]: 30–31 The sample covariance Q between two of the different principal components over the dataset is given by: {\displaystyle {\begin{aligned}Q(\mathrm {PC} _{(j)},\mathrm {PC} _{(k)})&\propto (\mathbf {X} \mathbf {w} _{(j)})^{\mathsf {T}}(\mathbf {X} \mathbf {w} _{(k)})\\&=\mathbf {w} _{(j)}^{\mathsf {T}}\mathbf {X} ^{\mathsf {T}}\mathbf {X} \mathbf {w} _{(k)}\\&=\mathbf {w} _{(j)}^{\mathsf {T}}\lambda _{(k)}\mathbf {w} _{(k)}\\&=\lambda _{(k)}\mathbf {w} _{(j)}^{\mathsf {T}}\mathbf {w} _{(k)}\end{aligned}}} where the eigenvalue property of w(k) has been used to move from line 2 to line 3. However eigenvectors w(j) and w(k) corresponding to eigenvalues of a symmetric matrix are orthogonal (if the eigenvalues are different), or can be orthogonalised (if the vectors happen to share an equal repeated value). The product in the final line is therefore zero; there is no sample covariance between different principal components over the dataset. Another way to characterise the principal components transformation is therefore as the transformation to coordinates which diagonalise the empirical sample covariance matrix. In matrix form, the empirical covariance matrix for the original variables can be written ${\displaystyle \mathbf {Q} \propto \mathbf {X} ^{\mathsf {T}}\mathbf {X} =\mathbf {W} \mathbf {\Lambda } \mathbf {W} ^{\mathsf {T}}}$ The empirical covariance matrix between the principal components becomes ${\displaystyle \mathbf {W} ^{\mathsf {T}}\mathbf {Q} \mathbf {W} \propto \mathbf {W} ^{\mathsf {T}}\mathbf {W} \,\mathbf {\Lambda } \,\mathbf {W} ^{\mathsf {T}}\mathbf {W} =\mathbf {\Lambda } }$ where Λ is the diagonal matrix of eigenvalues λ(k) of XTX. λ(k) is equal to the sum of the squares over the dataset associated with each component k, that is, λ(k) = Σi tk2(i) = Σi (x(i)w(k))2. ### Dimensionality reduction The transformation T = X W maps a data vector x(i) from an original space of p variables to a new space of p variables which are uncorrelated over the dataset. However, not all the principal components need to be kept. Keeping only the first L principal components, produced by using only the first L eigenvectors, gives the truncated transformation ${\displaystyle \mathbf {T} _{L}=\mathbf {X} \mathbf {W} _{L}}$ where the matrix TL now has n rows but only L columns. In other words, PCA learns a linear transformation ${\displaystyle t=W_{L}^{\mathsf {T}}x,x\in \mathbb {R} ^{p},t\in \mathbb {R} ^{L},}$ where the columns of p × L matrix ${\displaystyle W_{L}}$ form an orthogonal basis for the L features (the components of representation t) that are decorrelated.[13] By construction, of all the transformed data matrices with only L columns, this score matrix maximises the variance in the original data that has been preserved, while minimising the total squared reconstruction error ${\displaystyle \|\mathbf {T} \mathbf {W} ^{T}-\mathbf {T} _{L}\mathbf {W} _{L}^{T}\|_{2}^{2}}$ or ${\displaystyle \|\mathbf {X} -\mathbf {X} _{L}\|_{2}^{2}}$. A principal components analysis scatterplot of Y-STR haplotypes calculated from repeat-count values for 37 Y-chromosomal STR markers from 354 individuals. PCA has successfully found linear combinations of the markers that separate out different clusters corresponding to different lines of individuals' Y-chromosomal genetic descent. Such dimensionality reduction can be a very useful step for visualising and processing high-dimensional datasets, while still retaining as much of the variance in the dataset as possible. For example, selecting L = 2 and keeping only the first two principal components finds the two-dimensional plane through the high-dimensional dataset in which the data is most spread out, so if the data contains clusters these too may be most spread out, and therefore most visible to be plotted out in a two-dimensional diagram; whereas if two directions through the data (or two of the original variables) are chosen at random, the clusters may be much less spread apart from each other, and may in fact be much more likely to substantially overlay each other, making them indistinguishable. Similarly, in regression analysis, the larger the number of explanatory variables allowed, the greater is the chance of overfitting the model, producing conclusions that fail to generalise to other datasets. One approach, especially when there are strong correlations between different possible explanatory variables, is to reduce them to a few principal components and then run the regression against them, a method called principal component regression. Dimensionality reduction may also be appropriate when the variables in a dataset are noisy. If each column of the dataset contains independent identically distributed Gaussian noise, then the columns of T will also contain similarly identically distributed Gaussian noise (such a distribution is invariant under the effects of the matrix W, which can be thought of as a high-dimensional rotation of the co-ordinate axes). However, with more of the total variance concentrated in the first few principal components compared to the same noise variance, the proportionate effect of the noise is less—the first few components achieve a higher signal-to-noise ratio. PCA thus can have the effect of concentrating much of the signal into the first few principal components, which can usefully be captured by dimensionality reduction; while the later principal components may be dominated by noise, and so disposed of without great loss. If the dataset is not too large, the significance of the principal components can be tested using parametric bootstrap, as an aid in determining how many principal components to retain.[14] ### Singular value decomposition The principal components transformation can also be associated with another matrix factorization, the singular value decomposition (SVD) of X, ${\displaystyle \mathbf {X} =\mathbf {U} \mathbf {\Sigma } \mathbf {W} ^{T}}$ Here Σ is an n-by-p rectangular diagonal matrix of positive numbers σ(k), called the singular values of X; U is an n-by-n matrix, the columns of which are orthogonal unit vectors of length n called the left singular vectors of X; and W is a p-by-p whose columns are orthogonal unit vectors of length p and called the right singular vectors of X. In terms of this factorization, the matrix XTX can be written {\displaystyle {\begin{aligned}\mathbf {X} ^{T}\mathbf {X} &=\mathbf {W} \mathbf {\Sigma } ^{\mathsf {T}}\mathbf {U} ^{\mathsf {T}}\mathbf {U} \mathbf {\Sigma } \mathbf {W} ^{\mathsf {T}}\\&=\mathbf {W} \mathbf {\Sigma } ^{\mathsf {T}}\mathbf {\Sigma } \mathbf {W} ^{\mathsf {T}}\\&=\mathbf {W} \mathbf {\hat {\Sigma }} ^{2}\mathbf {W} ^{\mathsf {T}}\end{aligned}}} where ${\displaystyle \mathbf {\hat {\Sigma }} }$ is the square diagonal matrix with the singular values of X and the excess zeros chopped off that satisfies ${\displaystyle \mathbf {{\hat {\Sigma }}^{2}} =\mathbf {\Sigma } ^{\mathsf {T}}\mathbf {\Sigma } }$. Comparison with the eigenvector factorization of XTX establishes that the right singular vectors W of X are equivalent to the eigenvectors of XTX, while the singular values σ(k) of ${\displaystyle \mathbf {X} }$ are equal to the square-root of the eigenvalues λ(k) of XTX. Using the singular value decomposition the score matrix T can be written {\displaystyle {\begin{aligned}\mathbf {T} &=\mathbf {X} \mathbf {W} \\&=\mathbf {U} \mathbf {\Sigma } \mathbf {W} ^{\mathsf {T}}\mathbf {W} \\&=\mathbf {U} \mathbf {\Sigma } \end{aligned}}} so each column of T is given by one of the left singular vectors of X multiplied by the corresponding singular value. This form is also the polar decomposition of T. Efficient algorithms exist to calculate the SVD of X without having to form the matrix XTX, so computing the SVD is now the standard way to calculate a principal components analysis from a data matrix[citation needed], unless only a handful of components are required. As with the eigen-decomposition, a truncated n × L score matrix TL can be obtained by considering only the first L largest singular values and their singular vectors: ${\displaystyle \mathbf {T} _{L}=\mathbf {U} _{L}\mathbf {\Sigma } _{L}=\mathbf {X} \mathbf {W} _{L}}$ The truncation of a matrix M or T using a truncated singular value decomposition in this way produces a truncated matrix that is the nearest possible matrix of rank L to the original matrix, in the sense of the difference between the two having the smallest possible Frobenius norm, a result known as the Eckart–Young theorem [1936]. ## Further considerations The singular values (in Σ) are the square roots of the eigenvalues of the matrix XTX. Each eigenvalue is proportional to the portion of the "variance" (more correctly of the sum of the squared distances of the points from their multidimensional mean) that is associated with each eigenvector. The sum of all the eigenvalues is equal to the sum of the squared distances of the points from their multidimensional mean. PCA essentially rotates the set of points around their mean in order to align with the principal components. This moves as much of the variance as possible (using an orthogonal transformation) into the first few dimensions. The values in the remaining dimensions, therefore, tend to be small and may be dropped with minimal loss of information (see below). PCA is often used in this manner for dimensionality reduction. PCA has the distinction of being the optimal orthogonal transformation for keeping the subspace that has largest "variance" (as defined above). This advantage, however, comes at the price of greater computational requirements if compared, for example, and when applicable, to the discrete cosine transform, and in particular to the DCT-II which is simply known as the "DCT". Nonlinear dimensionality reduction techniques tend to be more computationally demanding than PCA. PCA is sensitive to the scaling of the variables. If we have just two variables and they have the same sample variance and are completely correlated, then the PCA will entail a rotation by 45° and the "weights" (they are the cosines of rotation) for the two variables with respect to the principal component will be equal. But if we multiply all values of the first variable by 100, then the first principal component will be almost the same as that variable, with a small contribution from the other variable, whereas the second component will be almost aligned with the second original variable. This means that whenever the different variables have different units (like temperature and mass), PCA is a somewhat arbitrary method of analysis. (Different results would be obtained if one used Fahrenheit rather than Celsius for example.) Pearson's original paper was entitled "On Lines and Planes of Closest Fit to Systems of Points in Space" – "in space" implies physical Euclidean space where such concerns do not arise. One way of making the PCA less arbitrary is to use variables scaled so as to have unit variance, by standardizing the data and hence use the autocorrelation matrix instead of the autocovariance matrix as a basis for PCA. However, this compresses (or expands) the fluctuations in all dimensions of the signal space to unit variance. Mean subtraction (a.k.a. "mean centering") is necessary for performing classical PCA to ensure that the first principal component describes the direction of maximum variance. If mean subtraction is not performed, the first principal component might instead correspond more or less to the mean of the data. A mean of zero is needed for finding a basis that minimizes the mean square error of the approximation of the data.[15] Mean-centering is unnecessary if performing a principal components analysis on a correlation matrix, as the data are already centered after calculating correlations. Correlations are derived from the cross-product of two standard scores (Z-scores) or statistical moments (hence the name: Pearson Product-Moment Correlation). Also see the article by Kromrey & Foster-Johnson (1998) on "Mean-centering in Moderated Regression: Much Ado About Nothing". Since covariances are correlations of normalized variables (Z- or standard-scores) a PCA based on the correlation matrix of X is equal to a PCA based on the covariance matrix of Z, the standardized version of X. PCA is a popular primary technique in pattern recognition. It is not, however, optimized for class separability.[16] However, it has been used to quantify the distance between two or more classes by calculating center of mass for each class in principal component space and reporting Euclidean distance between center of mass of two or more classes.[17] The linear discriminant analysis is an alternative which is optimized for class separability. ## Table of symbols and abbreviations Symbol Meaning Dimensions Indices ${\displaystyle \mathbf {X} =\{X_{ij}\}}$ data matrix, consisting of the set of all data vectors, one vector per row ${\displaystyle n\times p}$ ${\displaystyle i=1\ldots n}$ ${\displaystyle j=1\ldots p}$ ${\displaystyle n}$ the number of row vectors in the data set ${\displaystyle 1\times 1}$ scalar ${\displaystyle p}$ the number of elements in each row vector (dimension) ${\displaystyle 1\times 1}$ scalar ${\displaystyle L}$ the number of dimensions in the dimensionally reduced subspace, ${\displaystyle 1\leq L\leq p}$ ${\displaystyle 1\times 1}$ scalar ${\displaystyle \mathbf {u} =\{u_{j}\}}$ vector of empirical means, one mean for each column j of the data matrix ${\displaystyle p\times 1}$ ${\displaystyle j=1\ldots p}$ ${\displaystyle \mathbf {s} =\{s_{j}\}}$ vector of empirical standard deviations, one standard deviation for each column j of the data matrix ${\displaystyle p\times 1}$ ${\displaystyle j=1\ldots p}$ ${\displaystyle \mathbf {h} =\{h_{i}\}}$ vector of all 1's ${\displaystyle 1\times n}$ ${\displaystyle i=1\ldots n}$ ${\displaystyle \mathbf {B} =\{B_{ij}\}}$ deviations from the mean of each column j of the data matrix ${\displaystyle n\times p}$ ${\displaystyle i=1\ldots n}$ ${\displaystyle j=1\ldots p}$ ${\displaystyle \mathbf {Z} =\{Z_{ij}\}}$ z-scores, computed using the mean and standard deviation for each row m of the data matrix ${\displaystyle n\times p}$ ${\displaystyle i=1\ldots n}$ ${\displaystyle j=1\ldots p}$ ${\displaystyle \mathbf {C} =\{C_{jj'}\}}$ covariance matrix ${\displaystyle p\times p}$ ${\displaystyle j=1\ldots p}$ ${\displaystyle j'=1\ldots p}$ ${\displaystyle \mathbf {R} =\{R_{jj'}\}}$ correlation matrix ${\displaystyle p\times p}$ ${\displaystyle j=1\ldots p}$ ${\displaystyle j'=1\ldots p}$ ${\displaystyle \mathbf {V} =\{V_{jj'}\}}$ matrix consisting of the set of all eigenvectors of C, one eigenvector per column ${\displaystyle p\times p}$ ${\displaystyle j=1\ldots p}$ ${\displaystyle j'=1\ldots p}$ ${\displaystyle \mathbf {D} =\{D_{jj'}\}}$ diagonal matrix consisting of the set of all eigenvalues of C along its principal diagonal, and 0 for all other elements ( note ${\displaystyle \mathbf {\Lambda } }$ used above ) ${\displaystyle p\times p}$ ${\displaystyle j=1\ldots p}$ ${\displaystyle j'=1\ldots p}$ ${\displaystyle \mathbf {W} =\{W_{jl}\}}$ matrix of basis vectors, one vector per column, where each basis vector is one of the eigenvectors of C, and where the vectors in W are a sub-set of those in V ${\displaystyle p\times L}$ ${\displaystyle j=1\ldots p}$ ${\displaystyle l=1\ldots L}$ ${\displaystyle \mathbf {T} =\{T_{il}\}}$ matrix consisting of n row vectors, where each vector is the projection of the corresponding data vector from matrix X onto the basis vectors contained in the columns of matrix W. ${\displaystyle n\times L}$ ${\displaystyle i=1\ldots n}$ ${\displaystyle l=1\ldots L}$ ## Properties and limitations of PCA ### Properties Some properties of PCA include:[12][page needed] Property 1: For any integer q, 1 ≤ qp, consider the orthogonal linear transformation ${\displaystyle y=\mathbf {B'} x}$ where ${\displaystyle y}$ is a q-element vector and ${\displaystyle \mathbf {B'} }$ is a (q × p) matrix, and let ${\displaystyle \mathbf {\Sigma } _{y}=\mathbf {B'} \mathbf {\Sigma } \mathbf {B} }$ be the variance-covariance matrix for ${\displaystyle y}$. Then the trace of ${\displaystyle \mathbf {\Sigma } _{y}}$, denoted ${\displaystyle \operatorname {tr} (\mathbf {\Sigma } _{y})}$, is maximized by taking ${\displaystyle \mathbf {B} =\mathbf {A} _{q}}$, where ${\displaystyle \mathbf {A} _{q}}$ consists of the first q columns of ${\displaystyle \mathbf {A} }$ ${\displaystyle (\mathbf {B'} }$ is the transpose of ${\displaystyle \mathbf {B} )}$. Property 2: Consider again the orthonormal transformation ${\displaystyle y=\mathbf {B'} x}$ with ${\displaystyle x,\mathbf {B} ,\mathbf {A} }$ and ${\displaystyle \mathbf {\Sigma } _{y}}$ defined as before. Then ${\displaystyle \operatorname {tr} (\mathbf {\Sigma } _{y})}$ is minimized by taking ${\displaystyle \mathbf {B} =\mathbf {A} _{q}^{*},}$ where ${\displaystyle \mathbf {A} _{q}^{*}}$ consists of the last q columns of ${\displaystyle \mathbf {A} }$. The statistical implication of this property is that the last few PCs are not simply unstructured left-overs after removing the important PCs. Because these last PCs have variances as small as possible they are useful in their own right. They can help to detect unsuspected near-constant linear relationships between the elements of x, and they may also be useful in regression, in selecting a subset of variables from x, and in outlier detection. Property 3: (Spectral decomposition of Σ) ${\displaystyle \mathbf {\Sigma } =\lambda _{1}\alpha _{1}\alpha _{1}'+\cdots +\lambda _{p}\alpha _{p}\alpha _{p}'}$ Before we look at its usage, we first look at diagonal elements, ${\displaystyle \operatorname {Var} (x_{j})=\sum _{k=1}^{P}\lambda _{k}\alpha _{kj}^{2}}$ Then, perhaps the main statistical implication of the result is that not only can we decompose the combined variances of all the elements of x into decreasing contributions due to each PC, but we can also decompose the whole covariance matrix into contributions ${\displaystyle \lambda _{k}\alpha _{k}\alpha _{k}'}$ from each PC. Although not strictly decreasing, the elements of ${\displaystyle \lambda _{k}\alpha _{k}\alpha _{k}'}$ will tend to become smaller as ${\displaystyle k}$ increases, as ${\displaystyle \lambda _{k}\alpha _{k}\alpha _{k}'}$ is nonincreasing for increasing ${\displaystyle k}$, whereas the elements of ${\displaystyle \alpha _{k}}$ tend to stay about the same size because of the normalization constraints: ${\displaystyle \alpha _{k}'\alpha _{k}=1,k=1,\dots ,p}$. ### Limitations As noted above, the results of PCA depend on the scaling of the variables. This can be cured by scaling each feature by its standard deviation, so that one ends up with dimensionless features with unital variance.[18] The applicability of PCA as described above is limited by certain (tacit) assumptions[19] made in its derivation. In particular, PCA can capture linear correlations between the features but fails when this assumption is violated (see Figure 6a in the reference). In some cases, coordinate transformations can restore the linearity assumption and PCA can then be applied (see kernel PCA). Another limitation is the mean-removal process before constructing the covariance matrix for PCA. In fields such as astronomy, all the signals are non-negative, and the mean-removal process will force the mean of some astrophysical exposures to be zero, which consequently creates unphysical negative fluxes,[20] and forward modeling has to be performed to recover the true magnitude of the signals.[21] As an alternative method, non-negative matrix factorization focusing only on the non-negative elements in the matrices, which is well-suited for astrophysical observations.[22][23][24] See more at Relation between PCA and Non-negative Matrix Factorization. PCA is at a disadvantage if the data has not been standardized before applying the algorithm to it. PCA transforms original data into data that is relevant to the principal components of that data, which means that the new data variables cannot be interpreted in the same ways that the originals were. They are linear interpretations of the original variables. Also, if PCA is not performed properly, there is a high likelihood of information loss.[25] PCA relies on a linear model. If a dataset has a pattern hidden inside it that is nonlinear, then PCA can actually steer the analysis in the complete opposite direction of progress.[26][page needed] Researchers at Kansas State University discovered that the sampling error in their experiments impacted the bias of PCA results. "If the number of subjects or blocks is smaller than 30, and/or the researcher is interested in PC's beyond the first, it may be better to first correct for the serial correlation, before PCA is conducted".[27] The researchers at Kansas State also found that PCA could be "seriously biased if the autocorrelation structure of the data is not correctly handled".[27] ### PCA and information theory Dimensionality reduction results in a loss of information, in general. PCA-based dimensionality reduction tends to minimize that information loss, under certain signal and noise models. Under the assumption that ${\displaystyle \mathbf {x} =\mathbf {s} +\mathbf {n} ,}$ that is, that the data vector ${\displaystyle \mathbf {x} }$ is the sum of the desired information-bearing signal ${\displaystyle \mathbf {s} }$ and a noise signal ${\displaystyle \mathbf {n} }$ one can show that PCA can be optimal for dimensionality reduction, from an information-theoretic point-of-view. In particular, Linsker showed that if ${\displaystyle \mathbf {s} }$ is Gaussian and ${\displaystyle \mathbf {n} }$ is Gaussian noise with a covariance matrix proportional to the identity matrix, the PCA maximizes the mutual information ${\displaystyle I(\mathbf {y} ;\mathbf {s} )}$ between the desired information ${\displaystyle \mathbf {s} }$ and the dimensionality-reduced output ${\displaystyle \mathbf {y} =\mathbf {W} _{L}^{T}\mathbf {x} }$.[28] If the noise is still Gaussian and has a covariance matrix proportional to the identity matrix (that is, the components of the vector ${\displaystyle \mathbf {n} }$ are iid), but the information-bearing signal ${\displaystyle \mathbf {s} }$ is non-Gaussian (which is a common scenario), PCA at least minimizes an upper bound on the information loss, which is defined as[29][30] ${\displaystyle I(\mathbf {x} ;\mathbf {s} )-I(\mathbf {y} ;\mathbf {s} ).}$ The optimality of PCA is also preserved if the noise ${\displaystyle \mathbf {n} }$ is iid and at least more Gaussian (in terms of the Kullback–Leibler divergence) than the information-bearing signal ${\displaystyle \mathbf {s} }$.[31] In general, even if the above signal model holds, PCA loses its information-theoretic optimality as soon as the noise ${\displaystyle \mathbf {n} }$ becomes dependent. ## Computing PCA using the covariance method The following is a detailed description of PCA using the covariance method (see also here) as opposed to the correlation method.[32] The goal is to transform a given data set X of dimension p to an alternative data set Y of smaller dimension L. Equivalently, we are seeking to find the matrix Y, where Y is the Karhunen–Loève transform (KLT) of matrix X: ${\displaystyle \mathbf {Y} =\mathbb {KLT} \{\mathbf {X} \}}$ Organize the data set Suppose you have data comprising a set of observations of p variables, and you want to reduce the data so that each observation can be described with only L variables, L < p. Suppose further, that the data are arranged as a set of n data vectors ${\displaystyle \mathbf {x} _{1}\ldots \mathbf {x} _{n}}$ with each ${\displaystyle \mathbf {x} _{i}}$ representing a single grouped observation of the p variables. • Write ${\displaystyle \mathbf {x} _{1}\ldots \mathbf {x} _{n}}$ as row vectors, each with p elements. • Place the row vectors into a single matrix X of dimensions n × p. Calculate the empirical mean • Find the empirical mean along each column j = 1, ..., p. • Place the calculated mean values into an empirical mean vector u of dimensions p × 1. ${\displaystyle u_{j}={\frac {1}{n}}\sum _{i=1}^{n}X_{ij}}$ Calculate the deviations from the mean Mean subtraction is an integral part of the solution towards finding a principal component basis that minimizes the mean square error of approximating the data.[33] Hence we proceed by centering the data as follows: • Subtract the empirical mean vector ${\displaystyle \mathbf {u} ^{T}}$ from each row of the data matrix X. • Store mean-subtracted data in the n × p matrix B. ${\displaystyle \mathbf {B} =\mathbf {X} -\mathbf {h} \mathbf {u} ^{T}}$ where h is an n × 1 column vector of all 1s: ${\displaystyle h_{i}=1\,\qquad \qquad {\text{for }}i=1,\ldots ,n}$ In some applications, each variable (column of B) may also be scaled to have a variance equal to 1 (see Z-score).[34] This step affects the calculated principal components, but makes them independent of the units used to measure the different variables. Find the covariance matrix • Find the p × p empirical covariance matrix C from matrix B: ${\displaystyle \mathbf {C} ={1 \over {n-1}}\mathbf {B} ^{*}\mathbf {B} }$ where ${\displaystyle *}$ is the conjugate transpose operator. If B consists entirely of real numbers, which is the case in many applications, the "conjugate transpose" is the same as the regular transpose. • The reasoning behind using n − 1 instead of n to calculate the covariance is Bessel's correction. Find the eigenvectors and eigenvalues of the covariance matrix • Compute the matrix V of eigenvectors which diagonalizes the covariance matrix C: ${\displaystyle \mathbf {V} ^{-1}\mathbf {C} \mathbf {V} =\mathbf {D} }$ where D is the diagonal matrix of eigenvalues of C. This step will typically involve the use of a computer-based algorithm for computing eigenvectors and eigenvalues. These algorithms are readily available as sub-components of most matrix algebra systems, such as SAS,[35] R, MATLAB,[36][37] Mathematica,[38] SciPy, IDL (Interactive Data Language), or GNU Octave as well as OpenCV. • Matrix D will take the form of an p × p diagonal matrix, where ${\displaystyle D_{k\ell }=\lambda _{k}\qquad {\text{for }}k=\ell }$ is the jth eigenvalue of the covariance matrix C, and ${\displaystyle D_{k\ell }=0\qquad {\text{for }}k\neq \ell .}$ • Matrix V, also of dimension p × p, contains p column vectors, each of length p, which represent the p eigenvectors of the covariance matrix C. • The eigenvalues and eigenvectors are ordered and paired. The jth eigenvalue corresponds to the jth eigenvector. • Matrix V denotes the matrix of right eigenvectors (as opposed to left eigenvectors). In general, the matrix of right eigenvectors need not be the (conjugate) transpose of the matrix of left eigenvectors. Rearrange the eigenvectors and eigenvalues • Sort the columns of the eigenvector matrix V and eigenvalue matrix D in order of decreasing eigenvalue. • Make sure to maintain the correct pairings between the columns in each matrix. Compute the cumulative energy content for each eigenvector • The eigenvalues represent the distribution of the source data's energy[clarification needed] among each of the eigenvectors, where the eigenvectors form a basis for the data. The cumulative energy content g for the jth eigenvector is the sum of the energy content across all of the eigenvalues from 1 through j: ${\displaystyle g_{j}=\sum _{k=1}^{j}D_{kk}\qquad {\text{for }}j=1,\dots ,p}$[citation needed] Select a subset of the eigenvectors as basis vectors • Save the first L columns of V as the p × L matrix W: ${\displaystyle W_{kl}=V_{k\ell }\qquad {\text{for }}k=1,\dots ,p\qquad \ell =1,\dots ,L}$ where ${\displaystyle 1\leq L\leq p.}$ • Use the vector g as a guide in choosing an appropriate value for L. The goal is to choose a value of L as small as possible while achieving a reasonably high value of g on a percentage basis. For example, you may want to choose L so that the cumulative energy g is above a certain threshold, like 90 percent. In this case, choose the smallest value of L such that ${\displaystyle {\frac {g_{L}}{g_{p}}}\geq 0.9}$ Project the data onto the new basis • The projected data points are the rows of the matrix ${\displaystyle \mathbf {T} =\mathbf {B} \cdot \mathbf {W} }$ That is, the first column of ${\displaystyle \mathbf {T} }$ is the projection of the data points onto the first principal component, the second column is the projection onto the second principal component, etc. ## Derivation of PCA using the covariance method Let X be a d-dimensional random vector expressed as column vector. Without loss of generality, assume X has zero mean. We want to find ${\displaystyle (\ast )}$ a d × d orthonormal transformation matrix P so that PX has a diagonal covariance matrix (that is, PX is a random vector with all its distinct components pairwise uncorrelated). A quick computation assuming ${\displaystyle P}$ were unitary yields: {\displaystyle {\begin{aligned}\operatorname {cov} (PX)&=\operatorname {E} [PX~(PX)^{*}]\\&=\operatorname {E} [PX~X^{*}P^{*}]\\&=P\operatorname {E} [XX^{*}]P^{*}\\&=P\operatorname {cov} (X)P^{-1}\\\end{aligned}}} Hence ${\displaystyle (\ast )}$ holds if and only if ${\displaystyle \operatorname {cov} (X)}$ were diagonalisable by ${\displaystyle P}$. This is very constructive, as cov(X) is guaranteed to be a non-negative definite matrix and thus is guaranteed to be diagonalisable by some unitary matrix. ## Covariance-free computation In practical implementations, especially with high dimensional data (large p), the naive covariance method is rarely used because it is not efficient due to high computational and memory costs of explicitly determining the covariance matrix. The covariance-free approach avoids the np2 operations of explicitly calculating and storing the covariance matrix XTX, instead utilizing one of matrix-free methods, for example, based on the function evaluating the product XT(X r) at the cost of 2np operations. ### Iterative computation One way to compute the first principal component efficiently[39] is shown in the following pseudo-code, for a data matrix X with zero mean, without ever computing its covariance matrix. r = a random vector of length p r = r / norm(r) do c times: s = 0 (a vector of length p) for each row x in X s = s + (x ⋅ r) x λ = rTs // λ is the eigenvalue error = |λ ⋅ r − s| r = s / norm(s) exit if error < tolerance return λ, r This power iteration algorithm simply calculates the vector XT(X r), normalizes, and places the result back in r. The eigenvalue is approximated by rT (XTX) r, which is the Rayleigh quotient on the unit vector r for the covariance matrix XTX . If the largest singular value is well separated from the next largest one, the vector r gets close to the first principal component of X within the number of iterations c, which is small relative to p, at the total cost 2cnp. The power iteration convergence can be accelerated without noticeably sacrificing the small cost per iteration using more advanced matrix-free methods, such as the Lanczos algorithm or the Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) method. Subsequent principal components can be computed one-by-one via deflation or simultaneously as a block. In the former approach, imprecisions in already computed approximate principal components additively affect the accuracy of the subsequently computed principal components, thus increasing the error with every new computation. The latter approach in the block power method replaces single-vectors r and s with block-vectors, matrices R and S. Every column of R approximates one of the leading principal components, while all columns are iterated simultaneously. The main calculation is evaluation of the product XT(X R). Implemented, for example, in LOBPCG, efficient blocking eliminates the accumulation of the errors, allows using high-level BLAS matrix-matrix product functions, and typically leads to faster convergence, compared to the single-vector one-by-one technique. ### The NIPALS method Non-linear iterative partial least squares (NIPALS) is a variant the classical power iteration with matrix deflation by subtraction implemented for computing the first few components in a principal component or partial least squares analysis. For very-high-dimensional datasets, such as those generated in the *omics sciences (for example, genomics, metabolomics) it is usually only necessary to compute the first few PCs. The non-linear iterative partial least squares (NIPALS) algorithm updates iterative approximations to the leading scores and loadings t1 and r1T by the power iteration multiplying on every iteration by X on the left and on the right, that is, calculation of the covariance matrix is avoided, just as in the matrix-free implementation of the power iterations to XTX, based on the function evaluating the product XT(X r) = ((X r)TX)T. The matrix deflation by subtraction is performed by subtracting the outer product, t1r1T from X leaving the deflated residual matrix used to calculate the subsequent leading PCs.[40] For large data matrices, or matrices that have a high degree of column collinearity, NIPALS suffers from loss of orthogonality of PCs due to machine precision round-off errors accumulated in each iteration and matrix deflation by subtraction.[41] A Gram–Schmidt re-orthogonalization algorithm is applied to both the scores and the loadings at each iteration step to eliminate this loss of orthogonality.[42] NIPALS reliance on single-vector multiplications cannot take advantage of high-level BLAS and results in slow convergence for clustered leading singular values—both these deficiencies are resolved in more sophisticated matrix-free block solvers, such as the Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) method. ### Online/sequential estimation In an "online" or "streaming" situation with data arriving piece by piece rather than being stored in a single batch, it is useful to make an estimate of the PCA projection that can be updated sequentially. This can be done efficiently, but requires different algorithms.[43] ## PCA and qualitative variables In PCA, it is common that we want to introduce qualitative variables as supplementary elements. For example, many quantitative variables have been measured on plants. For these plants, some qualitative variables are available as, for example, the species to which the plant belongs. These data were subjected to PCA for quantitative variables. When analyzing the results, it is natural to connect the principal components to the qualitative variable species. For this, the following results are produced. • Identification, on the factorial planes, of the different species, for example, using different colors. • Representation, on the factorial planes, of the centers of gravity of plants belonging to the same species. • For each center of gravity and each axis, p-value to judge the significance of the difference between the center of gravity and origin. These results are what is called introducing a qualitative variable as supplementary element. This procedure is detailed in and Husson, Lê & Pagès 2009 and Pagès 2013. Few software offer this option in an "automatic" way. This is the case of SPAD that historically, following the work of Ludovic Lebart, was the first to propose this option, and the R package FactoMineR. ## Applications ### Intelligence The earliest application of factor analysis was in locating and measuring components of human intelligence. it was believed that intelligence had various uncorrelated components such as spatial intelligence, verbal intelligence, induction, deduction etc and that scores on these could be adduced by factor analysis from results on various tests, to give a single index known as the Intelligence Quotient (IQ). The pioneering statistical psychologist Spearman actually developed factor analysis in 1904 for his two-factor theory of intelligence, adding a formal technique to the science of psychometrics. In 1924 Thurstone looked for 56 factors of intelligence, developing the notion of Mental Age. Standard IQ tests today are based on this early work.[44] ### Residential differentiation In 1949, Shevky and Williams introduced the theory of factorial ecology, which dominated studies of residential differentiation from the 1950s to the 1970s.[45] Neighbourhoods in a city were recognizable or could be distinguished from one another by various characteristics which could be reduced to three by factor analysis. These were known as 'social rank' (an index of occupational status), 'familism' or family size, and 'ethnicity'; Cluster analysis could then be applied to divide the city into clusters or precincts according to values of the three key factor variables. An extensive literature developed around factorial ecology in urban geography, but the approach went out of fashion after 1980 as being methodologically primitive and having little place in postmodern geographical paradigms. One of the problems with factor analysis has always been finding convincing names for the various artificial factors. In 2000, Flood revived the factorial ecology approach to show that principal components analysis actually gave meaningful answers directly, without resorting to factor rotation. The principal components were actually dual variables or shadow prices of 'forces' pushing people together or apart in cities. The first component was 'accessibility', the classic trade-off between demand for travel and demand for space, around which classical urban economics is based. The next two components were 'disadvantage', which keeps people of similar status in separate neighbourhoods (mediated by planning), and ethnicity, where people of similar ethnic backgrounds try to co-locate.[46] About the same time, the Australian Bureau of Statistics defined distinct indexes of advantage and disadvantage taking the first principal component of sets of key variables that were thought to be important. These SEIFA indexes are regularly published for various jurisdictions, and are used frequently in spatial analysis.[47] ### Development indexes PCA has been the only formal method available for the development of indexes, which are otherwise a hit-or-miss ad hoc undertaking. The City Development Index was developed by PCA from about 200 indicators of city outcomes in a 1996 survey of 254 global cities. The first principal component was subject to iterative regression, adding the original variables singly until about 90% of its variation was accounted for. The index ultimately used about 15 indicators but was a good predictor of many more variables. Its comparative value agreed very well with a subjective assessment of the condition of each city. The coefficients on items of infrastructure were roughly proportional to the average costs of providing the underlying services, suggesting the Index was actually a measure of effective physical and social investment in the city. The country-level Human Development Index (HDI) from UNDP, which has been published since 1990 and is very extensively used in development studies,[48] has very similar coefficients on similar indicators, strongly suggesting it was originally constructed using PCA. ### Population genetics In 1978 Cavalli-Sforza and others pioneered the use of principal components analysis (PCA) to summarise data on variation in human gene frequencies across regions. The components showed distinctive patterns, including gradients and sinusoidal waves. They interpreted these patterns as resulting from specific ancient migration events. Since then, PCA has been ubiquitous in population genetics, with thousands of papers using PCA as a display mechanism. Genetics varies largely according to proximity, so the first two principal components actually show spatial distribution and may be used to map the relative geographical location of different population groups, thereby showing individuals who have wandered from their original locations.[49] PCA in genetics has been technically controversial, in that the technique has been performed on discrete non-normal variables and often on binary allele markers. The lack of any measures of standard error in PCA are also an impediment to more consistent usage. In August 2022, the molecular biologist Eran Elhaik published a theoretical paper in Scientific Reports analyzing 12 PCA applications. He concluded that it was easy to manipulate the method, which, in his view, generated results that were 'erroneous, contradictory, and absurd.' Specifically, he argued, the results achieved in population genetics were characterized by cherry-picking and circular reasoning.[50] ### Market research and indexes of attitude Market research has been an extensive user of PCA. It is used to develop customer satisfaction or customer loyalty scores for products, and with clustering, to develop market segments that may be targeted with advertising campaigns, in much the same way as factorial ecology will locate geographical areas with similar characteristics.[51] PCA rapidly transforms large amounts of data into smaller, easier-to-digest variables that can be more rapidly and readily analyzed. In any consumer questionnaire, there are series of questions designed to elicit consumer attitudes, and principal components seek out latent variables underlying these attitudes. For example, the Oxford Internet Survey in 2013 asked 2000 people about their attitudes and beliefs, and from these analysts extracted four principal component dimensions, which they identified as 'escape', 'social networking', 'efficiency', and 'problem creating'.[52] Another example from Joe Flood in 2008 extracted an attitudinal index toward housing from 28 attitude questions in a national survey of 2697 households in Australia. The first principal component represented a general attitude toward property and home ownership. The index, or the attitude questions it embodied, could be fed into a General Linear Model of tenure choice. The strongest determinant of private renting by far was the attitude index, rather than income, marital status or household type.[53] ### Quantitative finance In quantitative finance, principal component analysis can be directly applied to the risk management of interest rate derivative portfolios.[54] Trading multiple swap instruments which are usually a function of 30–500 other market quotable swap instruments is sought to be reduced to usually 3 or 4 principal components, representing the path of interest rates on a macro basis. Converting risks to be represented as those to factor loadings (or multipliers) provides assessments and understanding beyond that available to simply collectively viewing risks to individual 30–500 buckets. PCA has also been applied to equity portfolios in a similar fashion,[55] both to portfolio risk and to risk return. One application is to reduce portfolio risk, where allocation strategies are applied to the "principal portfolios" instead of the underlying stocks.[56] A second is to enhance portfolio return, using the principal components to select stocks with upside potential.[citation needed] ### Neuroscience A variant of principal components analysis is used in neuroscience to identify the specific properties of a stimulus that increases a neuron's probability of generating an action potential.[57] This technique is known as spike-triggered covariance analysis. In a typical application an experimenter presents a white noise process as a stimulus (usually either as a sensory input to a test subject, or as a current injected directly into the neuron) and records a train of action potentials, or spikes, produced by the neuron as a result. Presumably, certain features of the stimulus make the neuron more likely to spike. In order to extract these features, the experimenter calculates the covariance matrix of the spike-triggered ensemble, the set of all stimuli (defined and discretized over a finite time window, typically on the order of 100 ms) that immediately preceded a spike. The eigenvectors of the difference between the spike-triggered covariance matrix and the covariance matrix of the prior stimulus ensemble (the set of all stimuli, defined over the same length time window) then indicate the directions in the space of stimuli along which the variance of the spike-triggered ensemble differed the most from that of the prior stimulus ensemble. Specifically, the eigenvectors with the largest positive eigenvalues correspond to the directions along which the variance of the spike-triggered ensemble showed the largest positive change compared to the varince of the prior. Since these were the directions in which varying the stimulus led to a spike, they are often good approximations of the sought after relevant stimulus features. In neuroscience, PCA is also used to discern the identity of a neuron from the shape of its action potential. Spike sorting is an important procedure because extracellular recording techniques often pick up signals from more than one neuron. In spike sorting, one first uses PCA to reduce the dimensionality of the space of action potential waveforms, and then performs clustering analysis to associate specific action potentials with individual neurons. PCA as a dimension reduction technique is particularly suited to detect coordinated activities of large neuronal ensembles. It has been used in determining collective variables, that is, order parameters, during phase transitions in the brain.[58] ## Relation with other methods ### Correspondence analysis Correspondence analysis (CA) was developed by Jean-Paul Benzécri[59] and is conceptually similar to PCA, but scales the data (which should be non-negative) so that rows and columns are treated equivalently. It is traditionally applied to contingency tables. CA decomposes the chi-squared statistic associated to this table into orthogonal factors.[60] Because CA is a descriptive technique, it can be applied to tables for which the chi-squared statistic is appropriate or not. Several variants of CA are available including detrended correspondence analysis and canonical correspondence analysis. One special extension is multiple correspondence analysis, which may be seen as the counterpart of principal component analysis for categorical data.[61] ### Factor analysis The above picture is an example of the difference between PCA and Factor Analysis. In the top diagram the "factor" (e.g., career path) represents the three observed variables (e.g., doctor, lawyer, teacher) whereas in the bottom diagram the observed variables (e.g., pre-school teacher, middle school teacher, high school teacher) are reduced into the component of interest (e.g., teacher). Principal component analysis creates variables that are linear combinations of the original variables. The new variables have the property that the variables are all orthogonal. The PCA transformation can be helpful as a pre-processing step before clustering. PCA is a variance-focused approach seeking to reproduce the total variable variance, in which components reflect both common and unique variance of the variable. PCA is generally preferred for purposes of data reduction (that is, translating variable space into optimal factor space) but not when the goal is to detect the latent construct or factors. Factor analysis is similar to principal component analysis, in that factor analysis also involves linear combinations of variables. Different from PCA, factor analysis is a correlation-focused approach seeking to reproduce the inter-correlations among variables, in which the factors "represent the common variance of variables, excluding unique variance".[62] In terms of the correlation matrix, this corresponds with focusing on explaining the off-diagonal terms (that is, shared co-variance), while PCA focuses on explaining the terms that sit on the diagonal. However, as a side result, when trying to reproduce the on-diagonal terms, PCA also tends to fit relatively well the off-diagonal correlations.[12]: 158  Results given by PCA and factor analysis are very similar in most situations, but this is not always the case, and there are some problems where the results are significantly different. Factor analysis is generally used when the research purpose is detecting data structure (that is, latent constructs or factors) or causal modeling. If the factor model is incorrectly formulated or the assumptions are not met, then factor analysis will give erroneous results.[63] ### K-means clustering It has been asserted that the relaxed solution of k-means clustering, specified by the cluster indicators, is given by the principal components, and the PCA subspace spanned by the principal directions is identical to the cluster centroid subspace.[64][65] However, that PCA is a useful relaxation of k-means clustering was not a new result,[66] and it is straightforward to uncover counterexamples to the statement that the cluster centroid subspace is spanned by the principal directions.[67] ### Non-negative matrix factorization Fractional residual variance (FRV) plots for PCA and NMF;[24] for PCA, the theoretical values are the contribution from the residual eigenvalues. In comparison, the FRV curves for PCA reaches a flat plateau where no signal are captured effectively; while the NMF FRV curves are declining continuously, indicating a better ability to capture signal. The FRV curves for NMF also converges to higher levels than PCA, indicating the less-overfitting property of NMF. Non-negative matrix factorization (NMF) is a dimension reduction method where only non-negative elements in the matrices are used, which is therefore a promising method in astronomy,[22][23][24] in the sense that astrophysical signals are non-negative. The PCA components are orthogonal to each other, while the NMF components are all non-negative and therefore constructs a non-orthogonal basis. In PCA, the contribution of each component is ranked based on the magnitude of its corresponding eigenvalue, which is equivalent to the fractional residual variance (FRV) in analyzing empirical data.[20] For NMF, its components are ranked based only on the empirical FRV curves.[24] The residual fractional eigenvalue plots, that is, ${\displaystyle 1-\sum _{i=1}^{k}\lambda _{i}{\Big /}\sum _{j=1}^{n}\lambda _{j}}$ as a function of component number ${\displaystyle k}$ given a total of ${\displaystyle n}$ components, for PCA has a flat plateau, where no data is captured to remove the quasi-static noise, then the curves dropped quickly as an indication of over-fitting and captures random noise.[20] The FRV curves for NMF is decreasing continuously[24] when the NMF components are constructed sequentially,[23] indicating the continuous capturing of quasi-static noise; then converge to higher levels than PCA,[24] indicating the less over-fitting property of NMF. ### Iconography of correlations It is often difficult to interpret the principal components when the data include many variables of various origins, or when some variables are qualitative. This leads the PCA user to a delicate elimination of several variables. If observations or variables have an excessive impact on the direction of the axes, they should be removed and then projected as supplementary elements. In addition, it is necessary to avoid interpreting the proximities between the points close to the center of the factorial plane. Iconography of correlations - Geochemistry of marine aerosols The iconography of correlations, on the contrary, which is not a projection on a system of axes, does not have these drawbacks. We can therefore keep all the variables. The principle of the diagram is to underline the "remarkable" correlations of the correlation matrix, by a solid line (positive correlation) or dotted line (negative correlation). A strong correlation is not "remarkable" if it is not direct, but caused by the effect of a third variable. Conversely, weak correlations can be "remarkable". For example, if a variable Y depends on several independent variables, the correlations of Y with each of them are weak and yet "remarkable". ## Generalizations ### Sparse PCA A particular disadvantage of PCA is that the principal components are usually linear combinations of all input variables. Sparse PCA overcomes this disadvantage by finding linear combinations that contain just a few input variables. It extends the classic method of principal component analysis (PCA) for the reduction of dimensionality of data by adding sparsity constraint on the input variables. Several approaches have been proposed, including • a regression framework,[68] • a convex relaxation/semidefinite programming framework,[69] • a generalized power method framework[70] • an alternating maximization framework[71] • forward-backward greedy search and exact methods using branch-and-bound techniques,[72] • Bayesian formulation framework.[73] The methodological and theoretical developments of Sparse PCA as well as its applications in scientific studies were recently reviewed in a survey paper.[74] ### Nonlinear PCA Linear PCA versus nonlinear Principal Manifolds[75] for visualization of breast cancer microarray data: a) Configuration of nodes and 2D Principal Surface in the 3D PCA linear manifold. The dataset is curved and cannot be mapped adequately on a 2D principal plane; b) The distribution in the internal 2D non-linear principal surface coordinates (ELMap2D) together with an estimation of the density of points; c) The same as b), but for the linear 2D PCA manifold (PCA2D). The "basal" breast cancer subtype is visualized more adequately with ELMap2D and some features of the distribution become better resolved in comparison to PCA2D. Principal manifolds are produced by the elastic maps algorithm. Data are available for public competition.[76] Software is available for free non-commercial use.[77] Most of the modern methods for nonlinear dimensionality reduction find their theoretical and algorithmic roots in PCA or K-means. Pearson's original idea was to take a straight line (or plane) which will be "the best fit" to a set of data points. Trevor Hastie expanded on this concept by proposing Principal curves[78] as the natural extension for the geometric interpretation of PCA, which explicitly constructs a manifold for data approximation followed by projecting the points onto it, as is illustrated by Fig. See also the elastic map algorithm and principal geodesic analysis.[79] Another popular generalization is kernel PCA, which corresponds to PCA performed in a reproducing kernel Hilbert space associated with a positive definite kernel. In multilinear subspace learning,[80] PCA is generalized to multilinear PCA (MPCA) that extracts features directly from tensor representations. MPCA is solved by performing PCA in each mode of the tensor iteratively. MPCA has been applied to face recognition, gait recognition, etc. MPCA is further extended to uncorrelated MPCA, non-negative MPCA and robust MPCA. N-way principal component analysis may be performed with models such as Tucker decomposition, PARAFAC, multiple factor analysis, co-inertia analysis, STATIS, and DISTATIS. ### Robust PCA While PCA finds the mathematically optimal method (as in minimizing the squared error), it is still sensitive to outliers in the data that produce large errors, something that the method tries to avoid in the first place. It is therefore common practice to remove outliers before computing PCA. However, in some contexts, outliers can be difficult to identify. For example, in data mining algorithms like correlation clustering, the assignment of points to clusters and outliers is not known beforehand. A recently proposed generalization of PCA[81] based on a weighted PCA increases robustness by assigning different weights to data objects based on their estimated relevancy. Outlier-resistant variants of PCA have also been proposed, based on L1-norm formulations (L1-PCA).[6][4] Robust principal component analysis (RPCA) via decomposition in low-rank and sparse matrices is a modification of PCA that works well with respect to grossly corrupted observations.[82][83][84] ## Similar techniques ### Independent component analysis Independent component analysis (ICA) is directed to similar problems as principal component analysis, but finds additively separable components rather than successive approximations. ### Network component analysis Given a matrix ${\displaystyle E}$, it tries to decompose it into two matrices such that ${\displaystyle E=AP}$. A key difference from techniques such as PCA and ICA is that some of the entries of ${\displaystyle A}$ are constrained to be 0. Here ${\displaystyle P}$ is termed the regulatory layer. While in general such a decomposition can have multiple solutions, they prove that if the following conditions are satisfied : 1. ${\displaystyle A}$ has full column rank 2. Each column of ${\displaystyle A}$ must have at least ${\displaystyle L-1}$ zeroes where ${\displaystyle L}$ is the number of columns of ${\displaystyle A}$ (or alternatively the number of rows of ${\displaystyle P}$). The justification for this criterion is that if a node is removed from the regulatory layer along with all the output nodes connected to it, the result must still be characterized by a connectivity matrix with full column rank. 3. ${\displaystyle P}$ must have full row rank. then the decomposition is unique up to multiplication by a scalar.[85] ### Discriminant analysis of principal components Discriminant analysis of principal components (DAPC) is a multivariate method used to identify and describe clusters of genetically related individuals. Genetic variation is partitioned into two components: variation between groups and within groups, and it maximizes the former. Linear discriminants are linear combinations of alleles which best separate the clusters. Alleles that most contribute to this discrimination are therefore those that are the most markedly different across groups. The contributions of alleles to the groupings identified by DAPC can allow identifying regions of the genome driving the genetic divergence among groups[86] In DAPC, data is first transformed using a principal components analysis (PCA) and subsequently clusters are identified using discriminant analysis (DA). ## Software/source code • ALGLIB - a C++ and C# library that implements PCA and truncated PCA • Analytica – The built-in EigenDecomp function computes principal components. • ELKI – includes PCA for projection, including robust variants of PCA, as well as PCA-based clustering algorithms. • Gretl – principal component analysis can be performed either via the pca command or via the princomp() function. • Julia – Supports PCA with the pca function in the MultivariateStats package • KNIME – A java based nodal arranging software for Analysis, in this the nodes called PCA, PCA compute, PCA Apply, PCA inverse make it easily. • Mathematica – Implements principal component analysis with the PrincipalComponents command using both covariance and correlation methods. • MathPHP – PHP mathematics library with support for PCA. • MATLAB - The SVD function is part of the basic system. In the Statistics Toolbox, the functions princomp and pca (R2012b) give the principal components, while the function pcares gives the residuals and reconstructed matrix for a low-rank PCA approximation. • MatplotlibPython library have a PCA package in the .mlab module. • mlpack – Provides an implementation of principal component analysis in C++. • NAG Library – Principal components analysis is implemented via the g03aa routine (available in both the Fortran versions of the Library). • NMath – Proprietary numerical library containing PCA for the .NET Framework. • GNU Octave – Free software computational environment mostly compatible with MATLAB, the function princomp gives the principal component. • OpenCV • Oracle Database 12c – Implemented via DBMS_DATA_MINING.SVDS_SCORING_MODE by specifying setting value SVDS_SCORING_PCA • Orange (software) – Integrates PCA in its visual programming environment. PCA displays a scree plot (degree of explained variance) where user can interactively select the number of principal components. • Origin – Contains PCA in its Pro version. • Qlucore – Commercial software for analyzing multivariate data with instant response using PCA. • RFree statistical package, the functions princomp and prcomp can be used for principal component analysis; prcomp uses singular value decomposition which generally gives better numerical accuracy. Some packages that implement PCA in R, include, but are not limited to: ade4, vegan, ExPosition, dimRed, and FactoMineR. • SAS – Proprietary software; for example, see[87] • Scikit-learn – Python library for machine learning which contains PCA, Probabilistic PCA, Kernel PCA, Sparse PCA and other techniques in the decomposition module. • SPSS - Proprietary software most commonly used by social scientists for PCA, factor analysis and associated cluster analysis. • Weka – Java library for machine learning which contains modules for computing principal components. ## References 1. ^ Jolliffe, Ian T.; Cadima, Jorge (2016-04-13). "Principal component analysis: a review and recent developments". Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 374 (2065): 20150202. doi:10.1098/rsta.2015.0202. PMC 4792409. PMID 26953178. 2. ^ Barnett, T. P. & R. Preisendorfer. (1987). "Origins and levels of monthly and seasonal forecast skill for United States surface air temperatures determined by canonical correlation analysis". Monthly Weather Review. 115 (9): 1825. Bibcode:1987MWRv..115.1825B. doi:10.1175/1520-0493(1987)115<1825:oaloma>2.0.co;2. 3. ^ Hsu, Daniel; Kakade, Sham M.; Zhang, Tong (2008). A spectral algorithm for learning hidden markov models. arXiv:0811.4413. Bibcode:2008arXiv0811.4413H. 4. ^ a b Markopoulos, Panos P.; Kundu, Sandipan; Chamadia, Shubham; Pados, Dimitris A. (15 August 2017). "Efficient L1-Norm Principal-Component Analysis via Bit Flipping". IEEE Transactions on Signal Processing. 65 (16): 4252–4264. arXiv:1610.01959. Bibcode:2017ITSP...65.4252M. doi:10.1109/TSP.2017.2708023. S2CID 7931130. 5. ^ a b Chachlakis, Dimitris G.; Prater-Bennette, Ashley; Markopoulos, Panos P. (22 November 2019). "L1-norm Tucker Tensor Decomposition". IEEE Access. 7: 178454–178465. arXiv:1904.06455. doi:10.1109/ACCESS.2019.2955134. 6. ^ a b Markopoulos, Panos P.; Karystinos, George N.; Pados, Dimitris A. (October 2014). "Optimal Algorithms for L1-subspace Signal Processing". IEEE Transactions on Signal Processing. 62 (19): 5046–5058. arXiv:1405.6785. Bibcode:2014ITSP...62.5046M. doi:10.1109/TSP.2014.2338077. S2CID 1494171. 7. ^ Zhan, J.; Vaswani, N. (2015). "Robust PCA With Partial Subspace Knowledge". IEEE Transactions on Signal Processing. 63 (13): 3332–3347. arXiv:1403.1591. Bibcode:2015ITSP...63.3332Z. doi:10.1109/tsp.2015.2421485. S2CID 1516440. 8. ^ Kanade, T.; Ke, Qifa (June 2005). Robust L1 Norm Factorization in the Presence of Outliers and Missing Data by Alternative Convex Programming. 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05). Vol. 1. IEEE. p. 739. CiteSeerX 10.1.1.63.4605. doi:10.1109/CVPR.2005.309. ISBN 978-0-7695-2372-9. S2CID 17144854. 9. ^ Pearson, K. (1901). "On Lines and Planes of Closest Fit to Systems of Points in Space". Philosophical Magazine. 2 (11): 559–572. doi:10.1080/14786440109462720. 10. ^ Hotelling, H. (1933). Analysis of a complex of statistical variables into principal components. Journal of Educational Psychology, 24, 417–441, and 498–520. Hotelling, H (1936). "Relations between two sets of variates". Biometrika. 28 (3/4): 321–377. doi:10.2307/2333955. JSTOR 2333955. 11. ^ Stewart, G. W. (1993). "On the early history of the singular value decomposition". SIAM Review. 35 (4): 551–566. doi:10.1137/1035134. 12. Jolliffe, I. T. (2002). Principal Component Analysis. Springer Series in Statistics. New York: Springer-Verlag. doi:10.1007/b98835. ISBN 978-0-387-95442-4. 13. ^ Bengio, Y.; et al. (2013). "Representation Learning: A Review and New Perspectives". IEEE Transactions on Pattern Analysis and Machine Intelligence. 35 (8): 1798–1828. arXiv:1206.5538. doi:10.1109/TPAMI.2013.50. PMID 23787338. S2CID 393948. 14. ^ Forkman J., Josse, J., Piepho, H. P. (2019). "Hypothesis tests for principal component analysis when variables are standardized". Journal of Agricultural, Biological, and Environmental Statistics. 24 (2): 289–308. doi:10.1007/s13253-019-00355-5.{{cite journal}}: CS1 maint: multiple names: authors list (link) 15. ^ A. A. Miranda, Y. A. Le Borgne, and G. Bontempi. New Routes from Minimal Approximation Error to Principal Components, Volume 27, Number 3 / June, 2008, Neural Processing Letters, Springer 16. ^ Fukunaga, Keinosuke (1990). Introduction to Statistical Pattern Recognition. Elsevier. ISBN 978-0-12-269851-4. 17. ^ Alizadeh, Elaheh; Lyons, Samanthe M; Castle, Jordan M; Prasad, Ashok (2016). "Measuring systematic changes in invasive cancer cell shape using Zernike moments". Integrative Biology. 8 (11): 1183–1193. doi:10.1039/C6IB00100A. PMID 27735002. 18. ^ Leznik, M; Tofallis, C. 2005 Estimating Invariant Principal Components Using Diagonal Regression. 19. ^ Jonathon Shlens, A Tutorial on Principal Component Analysis. 20. ^ a b c Soummer, Rémi; Pueyo, Laurent; Larkin, James (2012). "Detection and Characterization of Exoplanets and Disks Using Projections on Karhunen-Loève Eigenimages". The Astrophysical Journal Letters. 755 (2): L28. arXiv:1207.4197. Bibcode:2012ApJ...755L..28S. doi:10.1088/2041-8205/755/2/L28. S2CID 51088743. 21. ^ Pueyo, Laurent (2016). "Detection and Characterization of Exoplanets using Projections on Karhunen Loeve Eigenimages: Forward Modeling". The Astrophysical Journal. 824 (2): 117. arXiv:1604.06097. Bibcode:2016ApJ...824..117P. doi:10.3847/0004-637X/824/2/117. S2CID 118349503. 22. ^ a b Blanton, Michael R.; Roweis, Sam (2007). "K-corrections and filter transformations in the ultraviolet, optical, and near infrared". The Astronomical Journal. 133 (2): 734–754. arXiv:astro-ph/0606170. Bibcode:2007AJ....133..734B. doi:10.1086/510127. S2CID 18561804. 23. ^ a b c Zhu, Guangtun B. (2016-12-19). "Nonnegative Matrix Factorization (NMF) with Heteroscedastic Uncertainties and Missing data". arXiv:1612.06037 [astro-ph.IM]. 24. Ren, Bin; Pueyo, Laurent; Zhu, Guangtun B.; Duchêne, Gaspard (2018). "Non-negative Matrix Factorization: Robust Extraction of Extended Structures". The Astrophysical Journal. 852 (2): 104. arXiv:1712.10317. Bibcode:2018ApJ...852..104R. doi:10.3847/1538-4357/aaa1f2. S2CID 3966513. 25. ^ "What are the Pros and cons of the PCA?". i2tutorials. September 1, 2019. Retrieved June 4, 2021. 26. ^ Abbott, Dean (May 2014). Applied Predictive Analytics. Wiley. ISBN 9781118727966. 27. ^ a b Jiang, Hong; Eskridge, Kent M. (2000). "Bias in Principal Components Analysis Due to Correlated Observations". Conference on Applied Statistics in Agriculture. doi:10.4148/2475-7772.1247. ISSN 2475-7772. 28. ^ Linsker, Ralph (March 1988). "Self-organization in a perceptual network". IEEE Computer. 21 (3): 105–117. doi:10.1109/2.36. S2CID 1527671. 29. ^ Deco & Obradovic (1996). An Information-Theoretic Approach to Neural Computing. New York, NY: Springer. ISBN 9781461240167. 30. ^ Plumbley, Mark (1991). Information theory and unsupervised neural networks.Tech Note 31. ^ Geiger, Bernhard; Kubin, Gernot (January 2013). "Signal Enhancement as Minimization of Relevant Information Loss". Proc. ITG Conf. On Systems, Communication and Coding. arXiv:1205.6935. Bibcode:2012arXiv1205.6935G. 32. ^ "Engineering Statistics Handbook Section 6.5.5.2". Retrieved 19 January 2015. 33. ^ A.A. Miranda, Y.-A. Le Borgne, and G. Bontempi. New Routes from Minimal Approximation Error to Principal Components, Volume 27, Number 3 / June, 2008, Neural Processing Letters, Springer 34. ^ Abdi. H. & Williams, L.J. (2010). "Principal component analysis". Wiley Interdisciplinary Reviews: Computational Statistics. 2 (4): 433–459. arXiv:1108.4372. doi:10.1002/wics.101. S2CID 122379222. 35. ^ 36. ^ eig function Matlab documentation 37. ^ "Face Recognition System-PCA based". www.mathworks.com. 38. ^ Eigenvalues function Mathematica documentation 39. ^ Roweis, Sam. "EM Algorithms for PCA and SPCA." Advances in Neural Information Processing Systems. Ed. Michael I. Jordan, Michael J. Kearns, and Sara A. Solla The MIT Press, 1998. 40. ^ Geladi, Paul; Kowalski, Bruce (1986). "Partial Least Squares Regression:A Tutorial". Analytica Chimica Acta. 185: 1–17. doi:10.1016/0003-2670(86)80028-9. 41. ^ Kramer, R. (1998). Chemometric Techniques for Quantitative Analysis. New York: CRC Press. ISBN 9780203909805. 42. ^ Andrecut, M. (2009). "Parallel GPU Implementation of Iterative PCA Algorithms". Journal of Computational Biology. 16 (11): 1593–1599. arXiv:0811.1081. doi:10.1089/cmb.2008.0221. PMID 19772385. S2CID 1362603. 43. ^ Warmuth, M. K.; Kuzmin, D. (2008). "Randomized online PCA algorithms with regret bounds that are logarithmic in the dimension" (PDF). Journal of Machine Learning Research. 9: 2287–2320. 44. ^ Kaplan, R.M., & Saccuzzo, D.P. (2010). Psychological Testing: Principles, Applications, and Issues. (8th ed.). Belmont, CA: Wadsworth, Cengage Learning. 45. ^ Shevky, Eshref; Williams, Marilyn (1949). The Social Areas of Los Angeles: Analysis and Typology. University of California Press. 46. ^ Flood, J (2000). Sydney divided: factorial ecology revisited. Paper to the APA Conference 2000, Melbourne,November and to the 24th ANZRSAI Conference, Hobart, December 2000.[1] 47. ^ "Socio-Economic Indexes for Areas". Australian Bureau of Statistics. 2011. Retrieved 2022-05-05. 48. ^ Human Development Reports. "Human Development Index". United Nations Development Programme. Retrieved 2022-05-06. 49. ^ Novembre, John; Stephens, Matthew (2008). "Interpreting principal component analyses of spatial population genetic variation". Nat Genet. 40 (5): 646–49. doi:10.1038/ng.139. PMC 3989108. PMID 18425127. 50. ^ Elhaik, Eran (2022). "Principal Component Analyses (PCA)‑based findings in population genetic studies are highly biased and must be reevaluated". Scientific Reports. 12. 14683. doi:10.1038/s41598-022-14395-4. PMID 36038559. S2CID 251932226. 51. ^ DeSarbo, Wayne; Hausmann, Robert; Kukitz, Jeffrey (2007). "Restricted principal components analysis for marketing research". Journal of Marketing in Management. 2: 305–328 – via Researchgate. 52. ^ Dutton, William H; Blank, Grant (2013). Cultures of the Internet: The Internet in Britain (PDF). Oxford Internet Institute. p. 6. 53. ^ Flood, Joe (2008). "Multinomial Analysis for Housing Careers Survey". Paper to the European Network for Housing Research Conference, Dublin. Retrieved 6 May 2022. 54. ^ The Pricing and Hedging of Interest Rate Derivatives: A Practical Guide to Swaps, J H M Darbyshire, 2016, ISBN 978-0995455511 55. ^ Giorgia Pasini (2017); Principal Component Analysis for Stock Portfolio Management. International Journal of Pure and Applied Mathematics. Volume 115 No. 1 2017, 153–167 56. ^ Libin Yang. An Application of Principal Component Analysis to Stock Portfolio Management. Department of Economics and Finance, University of Canterbury, January 2015. 57. ^ Brenner, N., Bialek, W., & de Ruyter van Steveninck, R.R. (2000). 58. ^ Jirsa, Victor; Friedrich, R; Haken, Herman; Kelso, Scott (1994). "A theoretical model of phase transitions in the human brain". Biological Cybernetics. 71 (1): 27–35. doi:10.1007/bf00198909. PMID 8054384. S2CID 5155075. 59. ^ Benzécri, J.-P. (1973). L'Analyse des Données. Volume II. L'Analyse des Correspondances. Paris, France: Dunod. 60. ^ Greenacre, Michael (1983). Theory and Applications of Correspondence Analysis. London: Academic Press. ISBN 978-0-12-299050-2. 61. ^ Le Roux; Brigitte and Henry Rouanet (2004). Geometric Data Analysis, From Correspondence Analysis to Structured Data Analysis. Dordrecht: Kluwer. ISBN 9781402022357. 62. ^ Timothy A. Brown. Confirmatory Factor Analysis for Applied Research Methodology in the social sciences. Guilford Press, 2006 63. ^ Meglen, R.R. (1991). "Examining Large Databases: A Chemometric Approach Using Principal Component Analysis". Journal of Chemometrics. 5 (3): 163–179. doi:10.1002/cem.1180050305. S2CID 120886184. 64. ^ H. Zha; C. Ding; M. Gu; X. He; H.D. Simon (Dec 2001). "Spectral Relaxation for K-means Clustering" (PDF). Neural Information Processing Systems Vol.14 (NIPS 2001): 1057–1064. 65. ^ Chris Ding; Xiaofeng He (July 2004). "K-means Clustering via Principal Component Analysis" (PDF). Proc. Of Int'l Conf. Machine Learning (ICML 2004): 225–232. 66. ^ Drineas, P.; A. Frieze; R. Kannan; S. Vempala; V. Vinay (2004). "Clustering large graphs via the singular value decomposition" (PDF). Machine Learning. 56 (1–3): 9–33. doi:10.1023/b:mach.0000033113.59016.96. S2CID 5892850. Retrieved 2012-08-02. 67. ^ Cohen, M.; S. Elder; C. Musco; C. Musco; M. Persu (2014). Dimensionality reduction for k-means clustering and low rank approximation (Appendix B). arXiv:1410.6801. Bibcode:2014arXiv1410.6801C. 68. ^ Hui Zou; Trevor Hastie; Robert Tibshirani (2006). "Sparse principal component analysis" (PDF). Journal of Computational and Graphical Statistics. 15 (2): 262–286. CiteSeerX 10.1.1.62.580. doi:10.1198/106186006x113430. S2CID 5730904. 69. ^ Alexandre d'Aspremont; Laurent El Ghaoui; Michael I. Jordan; Gert R. G. Lanckriet (2007). "A Direct Formulation for Sparse PCA Using Semidefinite Programming" (PDF). SIAM Review. 49 (3): 434–448. arXiv:cs/0406021. doi:10.1137/050645506. S2CID 5490061. 70. ^ Michel Journee; Yurii Nesterov; Peter Richtarik; Rodolphe Sepulchre (2010). "Generalized Power Method for Sparse Principal Component Analysis" (PDF). Journal of Machine Learning Research. 11: 517–553. arXiv:0811.4724. Bibcode:2008arXiv0811.4724J. CORE Discussion Paper 2008/70. 71. ^ Peter Richtarik; Martin Takac; S. Damla Ahipasaoglu (2012). "Alternating Maximization: Unifying Framework for 8 Sparse PCA Formulations and Efficient Parallel Codes". arXiv:1212.4137 [stat.ML]. 72. ^ Baback Moghaddam; Yair Weiss; Shai Avidan (2005). "Spectral Bounds for Sparse PCA: Exact and Greedy Algorithms" (PDF). Advances in Neural Information Processing Systems. Vol. 18. MIT Press. 73. ^ Yue Guan; Jennifer Dy (2009). "Sparse Probabilistic Principal Component Analysis" (PDF). Journal of Machine Learning Research Workshop and Conference Proceedings. 5: 185. 74. ^ Hui Zou; Lingzhou Xue (2018). "A Selective Overview of Sparse Principal Component Analysis". Proceedings of the IEEE. 106 (8): 1311–1320. doi:10.1109/JPROC.2018.2846588. 75. ^ A. N. Gorban, A. Y. Zinovyev, Principal Graphs and Manifolds, In: Handbook of Research on Machine Learning Applications and Trends: Algorithms, Methods and Techniques, Olivas E.S. et al Eds. Information Science Reference, IGI Global: Hershey, PA, USA, 2009. 28–59. 76. ^ Wang, Y.; Klijn, J. G.; Zhang, Y.; Sieuwerts, A. M.; Look, M. P.; Yang, F.; Talantov, D.; Timmermans, M.; Meijer-van Gelder, M. E.; Yu, J.; et al. (2005). "Gene expression profiles to predict distant metastasis of lymph-node-negative primary breast cancer". The Lancet. 365 (9460): 671–679. doi:10.1016/S0140-6736(05)17947-1. PMID 15721472. S2CID 16358549. Data online 77. ^ Zinovyev, A. "ViDaExpert – Multidimensional Data Visualization Tool". Institut Curie. Paris. (free for non-commercial use) 78. ^ Hastie, T.; Stuetzle, W. (June 1989). "Principal Curves" (PDF). Journal of the American Statistical Association. 84 (406): 502–506. doi:10.1080/01621459.1989.10478797. 79. ^ A.N. Gorban, B. Kegl, D.C. Wunsch, A. Zinovyev (Eds.), Principal Manifolds for Data Visualisation and Dimension Reduction, LNCSE 58, Springer, Berlin – Heidelberg – New York, 2007. ISBN 978-3-540-73749-0 80. ^ Lu, Haiping; Plataniotis, K.N.; Venetsanopoulos, A.N. (2011). "A Survey of Multilinear Subspace Learning for Tensor Data" (PDF). Pattern Recognition. 44 (7): 1540–1551. Bibcode:2011PatRe..44.1540L. doi:10.1016/j.patcog.2011.01.004. 81. ^ Kriegel, H. P.; Kröger, P.; Schubert, E.; Zimek, A. (2008). A General Framework for Increasing the Robustness of PCA-Based Correlation Clustering Algorithms. Scientific and Statistical Database Management. Lecture Notes in Computer Science. Vol. 5069. pp. 418–435. CiteSeerX 10.1.1.144.4864. doi:10.1007/978-3-540-69497-7_27. ISBN 978-3-540-69476-2. 82. ^ Emmanuel J. Candes; Xiaodong Li; Yi Ma; John Wright (2011). "Robust Principal Component Analysis?". Journal of the ACM. 58 (3): 11. arXiv:0912.3599. doi:10.1145/1970392.1970395. S2CID 7128002. 83. ^ T. Bouwmans; E. Zahzah (2014). "Robust PCA via Principal Component Pursuit: A Review for a Comparative Evaluation in Video Surveillance". Computer Vision and Image Understanding. 122: 22–34. doi:10.1016/j.cviu.2013.11.009. 84. ^ T. Bouwmans; A. Sobral; S. Javed; S. Jung; E. Zahzah (2015). "Decomposition into Low-rank plus Additive Matrices for Background/Foreground Separation: A Review for a Comparative Evaluation with a Large-Scale Dataset". Computer Science Review. 23: 1–71. arXiv:1511.01245. Bibcode:2015arXiv151101245B. doi:10.1016/j.cosrev.2016.11.001. S2CID 10420698. 85. ^ Liao, J. C.; Boscolo, R.; Yang, Y.-L.; Tran, L. M.; Sabatti, C.; Roychowdhury, V. P. (2003). "Network component analysis: Reconstruction of regulatory signals in biological systems". Proceedings of the National Academy of Sciences. 100 (26): 15522–15527. Bibcode:2003PNAS..10015522L. doi:10.1073/pnas.2136632100. PMC 307600. PMID 14673099. 86. ^ Liao, T.; Jombart, S.; Devillard, F.; Balloux (2010). "Discriminant analysis of principal components: a new method for the analysis of genetically structured populations". BMC Genetics. 11: 11:94. doi:10.1186/1471-2156-11-94. PMC 2973851. PMID 20950446. 87. ^ "Principal Components Analysis". Institute for Digital Research and Education. UCLA. Retrieved 29 May 2018.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 172, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8148967623710632, "perplexity": 1598.5505981030874}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00631.warc.gz"}
http://nctr.pmel.noaa.gov/benchmark/Analytical/Analytical_IVP/index.html
The nonlinear evolution of a wave over a sloping beach is theoretically and numerically challenging due to the moving boundary singularity. Yet, it is important to have a good estimate of the shoreline velocity and associated runup-rundown motion, since they are crucial for the planning of coastal flooding and of coastal structures. As explained in the previous section, Synolakis (1987) solved this problem as a boundary value problem considering canonical bathymetry. Kânoglu (2004) solved nonlinear evolution of any given wave form over a sloping beach as an initial value problem (Fig. 1). It is proposed that any initial waveform can first be represented in the transform space using the linearized form of the Carrier-Greenspan transformation for the spatial variable, then the nonlinear evolutions of these initial waveforms can be directly evaluated. Later, Kânoglu and Synolakis (2006) solved the similar problem considering a more general initial condition, i.e., initial wave with velocity. Kânoglu (2004) considers NSW equations () with slightly different nondimensionalization than (), i.e., using the reference length as a scaling parameter, the dimensionless variables are introduced as (1) Using the original Carrier-Greenspan transformation--without coefficient in () and ()--it is possible to reduce the NSW equations to the following single second-order linear equation the same as (): (2) using the Riemann invariants of the hyperbolic system (Carrier and Greenspan, 1958). The Carrier-Greenspan transformation not only reduces the nonlinear shallow water-wave equations into a second-order linear equation, but also fixes the instantaneous shoreline to in the -space as explained previously. Furthermore, a bounded solution at the shoreline combined with a given initial condition in terms of a wave profile at in the -space, implies the following solution in the transform space, (3) where . Further, given the initial waveform , the evolution of the water-surface elevation is now given by (4) where, again, Since it is important for coastal planning, simple expressions for shoreline runup-rundown motion and velocity are useful. Considering the shoreline corresponds to in the -space, (4) reduces to the following equation for the runup-rundown motion: (5) Here and represent shoreline velocity and motion, respectively. The singularity of the at the shoreline () is removed with the consideration of the The difficulty of deriving an initial condition in the -space is overcome by simply using the linearized form of the hodograph transformation for a spatial variable in the definition of initial condition. It is proposed that any initial waveform can first be represented in the transform space using the linearized form of the Carrier-Greenspan transformation for the spatial variable (() without coefficient), then the nonlinear evolutions of these initial waveforms can be directly evaluated. Once an initial value problem is specified in the -space as , the linearized hodograph transformation is used directly to define the initial waveform in the -space, . Thus is found, and follows directly through a simple integration, as in (4). Then it becomes possible to investigate any realistic initial waveform such as Gaussian and N-wave shapes employed in Carrier et al. (2003) and the isosceles and general N-waves defined by Tadepalli and Synolakis (1994). Again, solution in the physical space can be found using the Newton-Raphson algorithm proposed by Synolakis (1987) and later used by Kânoglu (2004), as presented in (A24a, b). The shoreline runup-rundown motion and velocity are presented for one of the initial profiles given by Carrier et al. (2003): (6) The following initial profile can be obtained in the transform space after using the linearized form of the transformation for the spatial variable: (7) which leads to the definition of the initial condition : (8) Figure 2a compares the initial waveforms defined in the physical space as in (6) with the one resulting from the approximation of it, i.e., (calculated through (4)). The linearized form of the spatial variable in the definition of the initial waveforms gives satisfactory comparison. Figures 2b and 2c present the shoreline runup-rundown motion and velocity, respectively, calculated from equation (5) using the corresponding parts. It should be added that the solution presented here cannot be evaluated when the Jacobian of the transformation, , breaks down. Even though the transformation might become singular at certain points, the solution can still be obtained at other points, since local integration can be performed without prior knowledge of the dependent variables, unlike in numerical methods. This feature is discussed in detail in Synolakis (1987) and Carrier et al. (2003), and is not explained further in here. References: Carrier, G.F., T.T. Wu, and H. Yeh (2003): Tsunami runup and drawdown on a sloping beach. J. Fluid Mech., 475, 79-99. Kânoglu, U. (2004): Nonlinear evolution and runup-rundown of long waves over a sloping beach. J. Fluid Mech., 513, 363-372. Kânoglu, U., and C. Synolakis (2006): Initial value problem solution of nonlinear shallow water-wave equations. Phys. Rev. Lett., 97, 148501-148504.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9815182089805603, "perplexity": 984.6911700375872}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936462751.85/warc/CC-MAIN-20150226074102-00161-ip-10-28-5-156.ec2.internal.warc.gz"}
http://mathhelpforum.com/statistics/219482-im-i-right.html
1. ## Im I right?? I solve these two equations and I got x=51.65degree but in my book x=128.35 so Iam confused the equations are: 54.684=F Cos x...........(1) 69.313=F Sin x.............(2) 2. ## Re: Im I right?? IF the question is as you stated it, "128.35" is impossible because if F is positive F cos x would be negative and if F is negative, F sin x would be negative. 3. ## Re: Im I right?? sorry the first equation is negative 4. ## Re: Im I right?? So your equations are F cos(x)= -54.684 and F sin(x)= 69.313? Then clearly 51.65 degrees cannot be correct. I presume neglecting the negative was your error in doing the calculation and you see now that 128.35 degrees is correct? (Actually, I get 128.27 degrees.) 5. ## Re: Im I right?? why?? I got tan(x)=-1.264 and when you take the enverse of tan the answer will be x=-51.65 so can explain how did you solve it,please 6. ## Re: Im I right?? Surely you know that if sine is positive and cosine is negative then your angle is in the second quadrant. The arctangent function is defined to only give angles in the fourth and first quadrants to make it a one to one function.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9638432860374451, "perplexity": 2993.8596715783565}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424931.1/warc/CC-MAIN-20170724222306-20170725002306-00472.warc.gz"}
https://papers.nips.cc/paper/2020/file/b7ae8fecf15b8b6c3c69eceae636d203-Review.html
NeurIPS 2020 ### Review 1 Summary and Contributions: The paper gives a general condition to make NN have the constant tangent kernel, i.e., the training of the NN can be described as a training of NTK. It is a generalization/summarization of many previous NTK papers. Strengths: - The theory in this paper is sound, nicely written and easy to understand. I believe the proofs are correct. - The experiments in the paper are convincing. - It is definitely relevant to the NeurIPS community. Weaknesses: - Despite the fact the theory is sound, the authors didn't acknowledge previous works properly. For example, Theorem 3.2 and results in Appendix G has been proved previously (e.g., [1]). The paper fails to cite and/or compare some of the previous works. - Given the fact that the applications of Theorem 3.1 are mostly proved previously, the contribution of the paper is not so significant. I tend to think of this paper as a summarization of the previous line of NTK papers. [1] Sanjeev Arora, Simon S. Du, Wei Hu, Zhiyuan Li, Ruslan Salakhutdinov, Ruosong Wang. On Exact Computation with an Infinitely Wide Neural Net. Correctness: I believe the claims and the experiments are correct in this paper. Clarity: The paper is well written and easy to understand. Minor: - line 144, it is better to mention x bounded earlier. Relation to Prior Work: The paper fails to cite many closely related works, not to mention a discussion between its results and previous results, especially in techinical aspects. ---- post author response comments ---- The authors have addressed my main concerns, and I have changed the score accordingly. I still strongly recommend the authors to acknowledge and compare previous paper that used different proofs but have the same conclusion. (e.g., people have proved infinitely wide FC/CNN is kernel, let readers know the differences in proof. ) Reproducibility: Yes Additional Feedback: I think many NTK works has the spirit of this paper, can you give an application of Theorem 3.1 that haven't been studied yet. ### Review 2 Summary and Contributions: The paper sheds light on recent theoretical development of optimization of overparameterized neural networks through the lens of constancy of neural tangent kernel. First, the authors relate the constancy of a tangent kernel to a linear model in weights (Proposition 2.1.). With this, the authors could identify conditions for the constant kernel with respect to the spectral norm of the Hessian. Given this tool, the paper pinpoints a few key assumptions, linearity of output and sparse dependence of activation function, and no-bottleneck for guaranteeing constancy of the tangent kernel. Strengths: I believe the work provides fundamental insights on recent developments in theory of deep learning pioneered by NTK. Various strong breakthroughs including convergence of (S)GD for overparameterized networks has been developed relying on the constancy of NTK during training. This work sheds light on the conditions when those assumptions could break down and even offer solutions when it could be remedied despite non-constancy of NTK. Weaknesses: It would be useful to have perspective on how far current theoretical developments are to the actual practical case. For example, the paper mostly focuses on squared loss while widely applied NNs use softmax-cross entropy loss. I would encourage putting discussion on how one should think about optimization of those networks in the context of this paper’s result. Another point to raise is that while the claim of Section 5 and Theorem 5.1 is interesting and powerful, most discussion is based on a separate paper included in supplementary material. It does have significant overlap with current submission, so if provided submission is published in another archival venue or is another submission to NeurIPS it may be subject to dual submission (https://neurips.cc/Conferences/2020/CallForPapers). Otherwise, one should extract relevant parts such as proof of Theorem 5.1, or just cite a separate paper distinguishing contribution. Correctness: Caveat: I have not followed detailed proof in the appendix but theoretical claims sound firm and consistent with other known works. Also in section 6 claims are reasonably backed well by empirical support. Clarity: Paper is well written and easy to identify core statements. Relation to Prior Work: Discussion on recent developments and prior work is clearly discussed and mostly does a good job of stating the contribution related to the prior work. (Caveat: I am more familiar with literature related to NTK but not familiar with general non-convex optimization literature.) Reproducibility: Yes Additional Feedback: [Post Author Response] I thank the authors for responding to concerns and questions, which made me appreciate the paper better. As clarified by the authors there won't be issues with dual submission. I think the submission is good submission and will be general interest to NeurIPS community and suggest accepting. As regards to softmax, I agree with the authors when the output is softmax that current paper analysis holds. It would be interesting what would happen with softmax nonlinearities that appears in self-attention layers of Transformer architectures. -------------------------------------------------------- Few comments and nits: 1) Specify or define that Hessian is not of the loss but of the function F. In general, I believe readers at first pass may confuse Hessian to be that of the loss since it is more commonly referred to in deep learning literature. 2) L16-17: NTK is not constant in all w but should specify a domain. If you choose w far from initialization you could find w* where K deviates by a lot 3) L66: is -> as 4) It may be beneficial to provide examples of popular activation functions that satisfy (and do not satisfy) \beta_{\sigma} - smooth conditions used in the assumptions. Questions to the authors: 1) What is the important point of emphasizing Euclidean norm change is O(1)? It is also implicitly discussed in [1] where small parameter change to O(1) function change is discussed. Should I understand the point to be while literature casually talks about “small weight change”, one should be aware that in Euclidean norm it could be O(1) due to large dimensions? 2) There are few non-sparse non-linearity used in deep learning. Few examples I can think of are softmax, maxout. In the context of self-attention layers, softmax is becoming increasingly important. Is the condition (b) in Theorem 3.1 violated for these non-linearities and does not have a constancy guarantee? Do authors believe these networks would not have constant tangent kernels? 3) L251: logically violation of conditions in Theorem 3.1 does not necessarily lead to breakdown of linearity since Theorem 3.1 is not if and only if statement, correct? [1] Lee et al., Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent, NeurIPS 2019 ### Review 3 Summary and Contributions: The authors analyze under what conditions the Neural Tangent Kernel(NTK) remains constant, offering an important insight into the theoretical analyses of asymptotically wide neural networks. The crucial insights are twofold, first showing that a Riemannian manifold with metric defined by the NTK has constant gradients if and only if the manifold is linear. Secondly, the authors use elementary results from analysis to show that a vanishing spectral norm of the Hessian tensor inextricably implies linearity of the model. Technically proving the result requires showing the (approximate) sparsity of the Hessian wrt. to preactivations, and chaining the upper bound on the spectral norm across the layers. In conjunction with results from random matrix theory, it allows the authors to show that the spectral norm of a general L layer network without bottlenecks and with linear output is of order O(\sqrt{m}) where m is the width of all the layers. The theoretical result importantly hinges on the linearity of the output layer and the absence of bottleneck layers. Under these conditions, gradient descent is guaranteed to converge to a global minimizer of the loss function. The authors support their theoretical results with numerical experiments showing that a) models satisfying the aforesaid conditions the achieve exponential convergence to a global optimum b) Models with non-linear output and/or with bottleneck layer have do not have a constant kernel during training Strengths: The submission offers a novel, theoretical sound condition that guarantees the behavior of the NTK limit throughout training. This, as pointed out by the authors, is in stark contrast to the so called ‘lazy training regime’ that assumes infinitesimal changes in weights due to the infinitesimal learning rate (gradient descent vs gradient flow over finite horizon). The results presented in this paper therefore offer an important theoretical contribution to the understanding of ‘infinitely wide’ neural networks. In addition to the theoretical contribution, the work is also of potential practical interest. While NTK have not been widely adopted by machine learning practitioners, this work gives clear conditions under which the randomized kernel defined by a neural network remains constant and therefore might enjoy performance guarantees known for kernel machines; i.e. this includes generalization, but also the superfluousness of using gradient flows in place of standard gradient descent. The empirical evaluation leaves nothing to be desired and offers good support for the authors’ theoretical results. Weaknesses: None that are apparent. See additional feedback for potential ways of broadening the practical impact of these results. Correctness: The reviewer carefully read the proofs, and the empirical methodology and both are to the best of the reviewers’ understanding correct and well executed Clarity: The paper is very clearly written, subjectively striking a balance between rigor and readability. The flow of the submission is impeccable. Relation to Prior Work: The authors acknowledge all the relevant literature, and their work very extends previous results. As mentioned before it offers very clear, and insightful conditions under which the previously analyze Neural Tangent Kernel remains constant. To the best of the reviewers knowledge that question has remained open since the original publication by Jacot et al. in 2018. The discussion of the differences between Chizat et al and the current work is satisfactory, but perhaps stands to be elaborated more – i.e. while the current submission makes little assumptions about the readers’ familiarity with the relevant material, it could perhaps position itself better and why the difference between the reported results and the lazy training regime are significant. Reproducibility: Yes Additional Feedback: I would like to raise two suggestions that might broaden the practical relevance of these results: 1) The original derivation by Jacot as well as the results from lazy training by Chizat require that the NTK be optimized using gradient flows. It seems that the current results imply that (discrete-time) gradient descent ought to be sufficient, thereby removing the need for integrating a high-dimensional ODE. It seems that this might lower the entry point for practitioners and make NTKs are more readily available tool. 2) This suggestion is somewhat conjectural, but it seems that alternative losses for common classification tasks might enjoy additional theoretical benefits. Beyond using the Brier score, could the authors comment on the applicability of their results assuming a hinge loss? It seems that the hinge loss ought to satisfy conditions in Eq(13) and Eq(14), given that its second derivative is zero almost everywhere.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8784230947494507, "perplexity": 808.9070544430259}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057388.12/warc/CC-MAIN-20210922193630-20210922223630-00483.warc.gz"}
http://mathhelpforum.com/calculus/65987-continues-prove-question.html
1. ## continues prove question.. there is a function f(x) which is continues and every rational number "r" is its period of f(x) prove that f(x)=const ?? 2. Originally Posted by transgalactic there is a function f(x) which is continues and every rational number "r" is its period of f(x) prove that f(x)=const ?? First let us establish a lemma. If $\displaystyle f$ is continuous at $\displaystyle c$ then there exists a neighborhood $\displaystyle (c-\delta,c+\delta)~~\delta>0$ such that for any $\displaystyle x_1,x_2\in (c-\delta,c+\delta)$ the following must be true, $\displaystyle |c-x_1|<|c-x_2|\implies|f(c)-f(x)|\leqslant|f(c)-f(x_1)|$. Proof: Suppose the above is false, so that for any neighborhood of $\displaystyle c$ there exists points $\displaystyle x_1,x_2$ in that neighborhood such that $\displaystyle |x_1-c|<|x-c_2|$ and $\displaystyle |f(c)-f(x_1)|>|f(c)-f(x_2)|{\color{red}(*)}$. It is clear then from $\displaystyle \color{red}(*)$ that $\displaystyle |f(c)-f(x_1)|$ may be written as $\displaystyle |f(c)-f(x_2)|+e~~e>0$. So now in the definition of continuity choose $\displaystyle \varepsilon=|f(c)-f(x_1)|+d~~0<d<e$ so there must exist a $\displaystyle \delta>0$ such that $\displaystyle |x-c|<\delta\implies |f(c)-f(x)|<\varepsilon$ but this is a conradiction since $\displaystyle |c-x_1|<|c-x_2|<\delta$ but $\displaystyle |f(c)-f(x_1)|=|f(c)-f(x_2)|+e>\varepsilon$ So now back to the problem. Let us talk about our function on an interval $\displaystyle [a,b]$. Let $\displaystyle c\in[a,b]$ and $\displaystyle f(c)=\xi$. So by the above lemma we must have that there is an interval such that $\displaystyle |c-x_1|<|c-x_2|\implies |\xi-f(x_1)|\leqslant|\xi-f(x_2)|$ with $\displaystyle x_1,x_2$ elements of that interval. Now let $\displaystyle x_2=c\pm r~~r\in\mathbb{Q}\implies f(x_2)=\xi$. So we must then have that $\displaystyle |c-x_1|<|c-x_2|\implies |f(c)-f(x_1)|\leqslant |\xi-\xi|=0$ or that $\displaystyle f(x_1)=\xi$. So for any point of $\displaystyle [a,b]$ there is a neighborhood such that $\displaystyle f=\xi$, thus we must have that $\displaystyle f=\xi~~\forall x\in[a,b]$ Note: I forgot to mention one important fact...this construction is possible that since no matter how small the neighborhood in question is it must contain at least one rational 3. Originally Posted by Mathstud28 First let us establish a lemma. If $\displaystyle f$ is continuous at $\displaystyle c$ then there exists a neighborhood $\displaystyle (c-\delta,c+\delta)~~\delta>0$ such that for any $\displaystyle x_1,x_2\in (c-\delta,c+\delta)$ the following must be true, $\displaystyle |c-x_1|<|c-x_2|\implies|f(c)-f(x)|\leqslant|f(c)-f(x_1)|$. This lemma somehow tells that a continuous function is locally monotonous. This fails to be true... Think of $\displaystyle x\mapsto x\sin\frac{1}{x}$ for instance, at 0. It is continuous and it oscillates so the conclusion of the lemma can't be satisfied. The problem in the proof comes from the fact that there is no reason why $\displaystyle x_1,x_2$ would be in $\displaystyle [c-\delta,c+\delta]$. ($\displaystyle \delta$ is defined given $\displaystyle x_1,x_2$) There's much simpler (and correct): notice that for any rational $\displaystyle q$, since $\displaystyle f$ is $\displaystyle q$-periodic, $\displaystyle f(q)=f(0)$. In other words, $\displaystyle f$ is constant on $\displaystyle \mathbb{Q}$. Let's say $\displaystyle f(0)=c$. Now, let $\displaystyle x$ be any real number, rational or not. It is well known that there exists a sequence $\displaystyle (q_n)_n$ of rational numbers that converges toward $\displaystyle x$. Since $\displaystyle f$ is continuous, we conclude $\displaystyle f(x)=\lim_n f(q_n)=c$ (the sequence $\displaystyle (f(q_n))_n$ is constant, equal to $\displaystyle c$). qed. 4. Originally Posted by transgalactic there is a function f(x) which is continues and every rational number "r" is its period of f(x) prove that f(x)=const ?? Originally Posted by Laurent This lemma somehow tells that a continuous function is locally monotonous. This fails to be true... Think of $\displaystyle x\mapsto x\sin\frac{1}{x}$ for instance, at 0. It is continuous and it oscillates so the conclusion of the lemma can't be satisfied. The problem in the proof comes from the fact that there is no reason why $\displaystyle x_1,x_2$ would be in $\displaystyle [c-\delta,c+\delta]$. ($\displaystyle \delta$ is defined given $\displaystyle x_1,x_2$) There's much simpler (and correct): notice that for any rational $\displaystyle q$, since $\displaystyle f$ is $\displaystyle q$-periodic, $\displaystyle f(q)=f(0)$. In other words, $\displaystyle f$ is constant on $\displaystyle \mathbb{Q}$. Let's say $\displaystyle f(0)=c$. Now, let $\displaystyle x$ be any real number, rational or not. It is well known that there exists a sequence $\displaystyle (q_n)_n$ of rational numbers that converges toward $\displaystyle x$. Since $\displaystyle f$ is continuous, we conclude $\displaystyle f(x)=\lim_n f(q_n)=c$ (the sequence $\displaystyle (f(q_n))_n$ is constant, equal to $\displaystyle c$). qed. Once I was asked a question by my professor in real analysis lesson, now I feel that it is very similar to this problem. Let $\displaystyle f$ be a continuous function, if $\displaystyle f$ is of fixed value at irrational numbers, then $\displaystyle f$ takes that fixed value on rational numbers too, and hence it is a constant function. 5. Originally Posted by Laurent This lemma somehow tells that a continuous function is locally monotonous. This fails to be true... Think of $\displaystyle x\mapsto x\sin\frac{1}{x}$ for instance, at 0. It is continuous and it oscillates so the conclusion of the lemma can't be satisfied. The problem in the proof comes from the fact that there is no reason why $\displaystyle x_1,x_2$ would be in $\displaystyle [c-\delta,c+\delta]$. ($\displaystyle \delta$ is defined given $\displaystyle x_1,x_2$) There's much simpler (and correct): notice that for any rational $\displaystyle q$, since $\displaystyle f$ is $\displaystyle q$-periodic, $\displaystyle f(q)=f(0)$. In other words, $\displaystyle f$ is constant on $\displaystyle \mathbb{Q}$. Let's say $\displaystyle f(0)=c$. Now, let $\displaystyle x$ be any real number, rational or not. It is well known that there exists a sequence $\displaystyle (q_n)_n$ of rational numbers that converges toward $\displaystyle x$. Since $\displaystyle f$ is continuous, we conclude $\displaystyle f(x)=\lim_n f(q_n)=c$ (the sequence $\displaystyle (f(q_n))_n$ is constant, equal to $\displaystyle c$). qed. Uh, but $\displaystyle f:x\mapsto x\sin\left(\frac{1}{x}\right)$ is not continuous at zero. And I think I stated my Lemma wrong. It was meant to be that there exists a neighborhood of c such that there is a subset of that neighborhood that such that the above is true. Supposing that this suffices it still shows the result of the question and I believe it works for your example. 6. Originally Posted by Mathstud28 Uh, but $\displaystyle f:x\mapsto x\sin\left(\tfrac{1}{x}\right)$ is not continuous at zero. It is continuous at 0 (provided that you define f(0)=0). Originally Posted by Mathstud28 And I think I stated my Lemma wrong. It was meant to be that there exists a neighborhood of c such that there is a subset of that neighborhood that such that the above is true. Supposing that this suffices it still shows the result of the question and I believe it works for your example. The "subset" would have to be highly disconnected for such a result to be true. In any case, such an elaborate result is not needed here, as Laurent's simple proof shows. 7. Sorry Opalg and transgalactic for the fallacious proof. I always make things harder than they are. How about this alternate proof (this is more for my curiosity then for the OP) Suppose the claim is true and $\displaystyle f$ is constant. We must merely show that $\displaystyle f$ is differentiable everywhere and $\displaystyle f'=0$, in other words for all $\displaystyle x$ we must show that $\displaystyle \lim_{t\to x}\frac{f(t)-f(x)}{t-x}=0$. But now consider letting $\displaystyle t=x+ r~~r\in\mathbb{Q}$. So the above is equivalent to showing $\displaystyle \lim_{r\to{0}}\frac{f(x+r)-f(x)}{r}=0$ but $\displaystyle f(x+r)=f(x)$ so $\displaystyle \frac{f(x+r)-f(x)}{r}=0$ which finally in turn implies $\displaystyle \lim_{r\to0}\frac{f(x+r)-f(x)}{r}=0$ 8. Originally Posted by Mathstud28 Sorry Opalg and transgalactic for the fallacious proof. I always make things harder than they are. How about this alternate proof (this is more for my curiosity then for the OP) We must merely show that $\displaystyle f$ is differentiable everywhere and $\displaystyle f'=0$, in other words for all $\displaystyle x$ we must show that $\displaystyle \lim_{t\to x}\frac{f(t)-f(x)}{t-x}=0$. But now consider letting $\displaystyle t=x+ r~~r\in\mathbb{Q}$. So the above is equivalent to showing $\displaystyle \lim_{r\to{0}}\frac{f(x+r)-f(x)}{r}=0$ but $\displaystyle f(x+r)=f(x)$ so $\displaystyle \frac{f(x+r)-f(x)}{r}=0$ which finally in turn implies $\displaystyle \lim_{r\to0}\frac{f(x+r)-f(x)}{r}=0$ Just for your curiosity . There is one step that would require a justification: if $\displaystyle \lim_{r\to{0},\ r\in\mathbb{Q}}\frac{f(x+r)-f(x)}{r}$ exists, why would $\displaystyle \lim_{r\to{0}}\frac{f(x+r)-f(x)}{r}$ exist as well? This is quickly proved from the definition of the limit (cf. right below), but I think it should be underlined since this is where the continuity is needed, and thus this is the core of the proof. Suppose that a continuous function $\displaystyle \psi$ on $\displaystyle \mathbb{R}\setminus\{0\}$ is such that $\displaystyle \lim_{r\to 0,\ r\in\mathbb{Q}} \psi(x)=\ell$. Let $\displaystyle \varepsilon>0$. There is $\displaystyle \delta>0$ such that if $\displaystyle |r|<\delta$ and $\displaystyle r\in\mathbb{Q}\setminus\{0\}$ then $\displaystyle |\psi(r)-\ell|\leq \varepsilon$. Now, if $\displaystyle |x|< \delta$ and $\displaystyle x\neq 0$, $\displaystyle x$ is a limit of non-zero rational numbers $\displaystyle r_n$ with $\displaystyle |r_n|<\delta$, so that $\displaystyle |\psi(x)-\ell|=\lim_n |\psi(r_n)-\ell|\leq \varepsilon$ (since this inequality holds for every $\displaystyle n$). This proves that $\displaystyle \lim_{x\to 0}\psi(x)=\ell$. 9. Originally Posted by Laurent Just for your curiosity . There is one step that would require a justification: if $\displaystyle \lim_{r\to{0},\ r\in\mathbb{Q}}\frac{f(x+r)-f(x)}{r}$ exists, why would $\displaystyle \lim_{r\to{0}}\frac{f(x+r)-f(x)}{r}$ exist as well? This is quickly proved from the definition of the limit (cf. right below), but I think it should be underlined since this is where the continuity is needed, and thus this is the core of the proof. Suppose that a continuous function $\displaystyle \psi$ on $\displaystyle \mathbb{R}\setminus\{0\}$ is such that $\displaystyle \lim_{r\to 0,\ r\in\mathbb{Q}} \psi(x)=\ell$. Let $\displaystyle \varepsilon>0$. There is $\displaystyle \delta>0$ such that if $\displaystyle |r|<\delta$ and $\displaystyle r\in\mathbb{Q}\setminus\{0\}$ then $\displaystyle |\psi(r)-\ell|\leq \varepsilon$. Now, if $\displaystyle |x|< \delta$ and $\displaystyle x\neq 0$, $\displaystyle x$ is a limit of non-zero rational numbers $\displaystyle r_n$ with $\displaystyle |r_n|<\delta$, so that $\displaystyle |\psi(x)-\ell|=\lim_n |\psi(r_n)-\ell|\leq \varepsilon$ (since this inequality holds for every $\displaystyle n$). This proves that $\displaystyle \lim_{x\to 0}\psi(x)=\ell$. Thank you Laurent. Two of my books had proven this result and I did not know whether or not to include it. I should have been more prudent...thank you 10. Originally Posted by Laurent This lemma somehow tells that a continuous function is locally monotonous. This fails to be true... Think of $\displaystyle x\mapsto x\sin\frac{1}{x}$ for instance, at 0. It is continuous and it oscillates so the conclusion of the lemma can't be satisfied. The problem in the proof comes from the fact that there is no reason why $\displaystyle x_1,x_2$ would be in $\displaystyle [c-\delta,c+\delta]$. ($\displaystyle \delta$ is defined given $\displaystyle x_1,x_2$) There's much simpler (and correct): notice that for any rational $\displaystyle q$, since $\displaystyle f$ is $\displaystyle q$-periodic, $\displaystyle f(q)=f(0)$. In other words, $\displaystyle f$ is constant on $\displaystyle \mathbb{Q}$. Let's say $\displaystyle f(0)=c$. Now, let $\displaystyle x$ be any real number, rational or not. It is well known that there exists a sequence $\displaystyle (q_n)_n$ of rational numbers that converges toward $\displaystyle x$. Since $\displaystyle f$ is continuous, we conclude $\displaystyle f(x)=\lim_n f(q_n)=c$ (the sequence $\displaystyle (f(q_n))_n$ is constant, equal to $\displaystyle c$). qed. you say f(0)=c why $\displaystyle (q_n)_n$ converges to x x is a variable it is not a contstant to which the sequence could converge to ?? 11. Of course I could be wrong so wait for Laurent or another senior member's confirmation, but what I think Laurent's solution points to is that if $\displaystyle f:X\longmapsto\mathbb{R}$ with $\displaystyle X\subset\mathbb{R}$ then $\displaystyle \lim_{x\to p}f(x)=q\Longleftrightarrow \lim_{n\to\infty} f(p_n)=q$ for all $\displaystyle p_n\subset X$ and $\displaystyle p_n\to p\qquad p_n\ne p$. Now using this result every number rational or irational is the limit of a sequence of rationals. 12. Originally Posted by transgalactic you say f(0)=c why $\displaystyle (q_n)_n$ converges to x x is a variable it is not a contstant to which the sequence could converge to ?? I'm not sure I get what you mean. In the proof, I wrote "Let $\displaystyle x$ be (...)" so that, from this point on, $\displaystyle x$ is fixed, you can think of it as a constant, and I am allowed to introduce a sequence converging to $\displaystyle x$. More specifically, I use the fact that any real number (hence for instance, $\displaystyle x$) is the limit of a sequence of rational numbers. Finally, since I chose $\displaystyle x$ to be any number, what I prove for $\displaystyle x$ holds for any number.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9966186285018921, "perplexity": 109.9446018145553}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645513.14/warc/CC-MAIN-20180318032649-20180318052649-00253.warc.gz"}
https://www.physicsforums.com/threads/magnetic-field-near-a-ferromagnetic-object.655229/
# Magnetic Field near a Ferromagnetic Object 1. Nov 27, 2012 ### comm Considering a bar of ferromagnetic (FM) material (eg, Nickel) placed in a uniform magnetic field (eg, B = 1kG along x). I know the H field produced by a given magnetization M of the ferromagnet (ie, I have $\vec{H}=M \cdot \vec{\alpha}$ for a calculated alpha). How do I determine the total magnetic field near the ferromagnet? I'm having difficulty determining how M relates to the input B (I know of the hysteresis, but I don't quite understand even what saturation magnetization relates to), and reconciling H and B in and outside of the ferromagnet. Can you offer guidance or do you also need help? Draft saved Draft deleted Similar Discussions: Magnetic Field near a Ferromagnetic Object
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8233979344367981, "perplexity": 1459.977752044132}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948589177.70/warc/CC-MAIN-20171216201436-20171216223436-00699.warc.gz"}
http://math.stackexchange.com/questions/195697/summability-of-a-function
# Summability of a function If $u\in W^{1,p}(\Omega)$ where $\Omega$ is an open subset of $\mathbb{R}^n$ and $\xi$ is a smooth compactly supported function in $\Omega$, is it true that $\xi u^{\beta-p+1} \in W^{1,p}_0$ if $\beta >p-1$? (In the end my problem is to say if $u^{\beta-p+1} \in W^{1,p}$ (from this I know it follows the result).) I think if $\Omega$ is not bounded we can't say, but if it is bounded, then we know the function $u$ belongs to $L^r$ for $r<p$, but $\beta>p-1$ could be also greater than p. Maybe if I add the hypotesys $u\in L^{\infty}$ I could conclude? Thanks for any help. - I can't understand the role of $\beta$. Of course you cannot expect a function from $L^2$ to belong to $L^\beta$ for any $\beta$. –  Siminore Sep 14 '12 at 13:36 How do you define $u^{\beta-p+1}$ -- is $u$ nonnegative (or strictly positive)? –  user31373 Sep 14 '12 at 17:27 @LVK: let's assume it is strictly positive. –  balestrav Sep 15 '12 at 20:36 @ Siminore: but if $\Omega$ is bounded we can say something, and that was my question: are there any other assumptions (the set is bounded, the function is essentially bounded) which can make us conclude something about other exponents of summability? –  balestrav Sep 15 '12 at 20:39 The presence of $\xi$ helps only by reducing the region of integration to a compact subset of $\Omega$. The question is equivalent to asking whether $u^{\beta-p+1}\in W^{1,p}$ locally. When $p>n$ we are in good shape. Indeed, $u$ has a continuous representative by the Morrey-Sobolev embedding which means that it is locally bounded away from both $0$ and $\infty$. The function $\phi(t)=t^{\beta-p+1}$ is Lipschitz on any interval $[\alpha,\beta]$ with $0<\alpha<\beta<\infty$. It is a standard fact that composition with a Lipschitz function preserves Sobolev spaces of first order. Thus, $\phi\circ u\in W^{1,p}$ locally. The assumption $p>n$ was needed only to have two-sided bounds on $u$. If you are willing to impose such bounds artificially, then any $p\ge 1$ works. Otherwise there are counterexamples. Indeed, $u(x)=|x|^r$ belongs to $W^{1,p}$ in a neighborhood of origin exactly when $p(r-1)>-n$, equivalently $pr>p-n$. If $p<n$, this may hold with negative $r$, but then raising $u$ to a sufficiently high power breaks the inequality.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.967668354511261, "perplexity": 174.85729570913873}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645323734.78/warc/CC-MAIN-20150827031523-00294-ip-10-171-96-226.ec2.internal.warc.gz"}
http://mathhelpforum.com/advanced-statistics/191767-poisson-regression-soccer-scores.html
## Poisson regression and soccer scores I have a data set of soccer matches, their respective results and the respective pre-game odds. I would like to study a simple strategy based on historical information. I would like create a simple model that would predict the goals scored by a team from the implied probability of the odds. So I would like to carry out poisson / negative binomial regression to estimate the mean number of goals scored by a team in a match with given bookmaker odds for various outcomes. Then I could plug these lambdas into poisson probability functions to estimate (roughly) probabilities of certain scores. Now, many authors have done similar modelling but lacking proper knowledge of stats, I do not fully understand how the regressions should be carried out. E.g. D. Dyte and S. R. Clarke (2000) study whether FIFA rankings explain goals scored by estimating a model ln(m) = a + bTR + cOR +v where m is the expected number of goals scored, TR is the team's FIFA ranking, OR is the opponents FIFA ranking and v is a parameter that measures the venue (home, away, neutral) *. What I am actually wondering then is this: Isn't it so that the explanatory variables in poisson regression model the the mean of the response variable? Therefore I cannot just regress the goals scored by e.g. the home team to the implied probability of the home team. What should be my dependent variable? e.g. ln(m) = a + bPR where m is the expected number of goals scored and PR is the implied probability of the team. Does anyone have a clue what is 'the expected number of goals scored' because it cannot be the actual number goals scored? Or is the equation just an expression and you actually regress the realized scores with the explanatory variables? Sorry about a confusing question and thanks for everyone who's willing to help! * D. Dyte and S. R. Clarke (2000). A ratings based Poisson model for World Cup soccer simulation. Journal of the Operational Research Society. Vol. 51., pp. 993-998.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8189465999603271, "perplexity": 1222.2326169126213}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131298660.78/warc/CC-MAIN-20150323172138-00004-ip-10-168-14-71.ec2.internal.warc.gz"}
http://tex.stackexchange.com/questions/103060/left-margin-minipage-two-figures
# Left margin minipage two figures I have a minipage with two figures, the problem is, that I want to have them shifted to the left, so the left margin should be smaller, the code of my minipage: \begin{figure}[H] \begin{minipage}[b]{0.6\linewidth} \centering \centerline{\includegraphics[scale=0.4]{simplevaralv.png}} \caption[VaR assuming normal distribution, Allianz]{VaR assuming normal distribution, Allianz} \label{sva} \end{minipage} \hspace{0.5cm} \begin{minipage}[b]{0.6\linewidth} \centering \centerline{ \includegraphics[scale=0.4]{simplevarbasf.png}} \caption[VaR assuming normal distribution, Basf]{VaR assuming normal distribution, BASF} \label{svb} \end{minipage} \end{figure} The result looks like this: And I want to have it shifted to the left, how can I do this? - You can use a box and trick latex to think that the box has zero width and then center it: \documentclass{article} \usepackage{graphicx} \usepackage{showframe} %% just to show frames. \begin{document} \begin{figure}[htb] \centering \makebox[0pt][c]{% \begin{minipage}[b]{0.6\linewidth} \centering \includegraphics[scale=0.4]{example-image-a} \caption[VaR assuming normal distribution, Allianz]{VaR assuming normal distribution, Allianz} \label{sva} \end{minipage}% \hspace{0.5cm} \begin{minipage}[b]{0.6\linewidth} \centering \includegraphics[scale=0.4]{example-image-b} \caption[VaR assuming normal distribution, Basf]{VaR assuming normal distribution, BASF} \label{svb} \end{minipage}% }% \end{figure} \end{document} This can also be done with adjustbox package. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9941681623458862, "perplexity": 2491.33606854764}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207932737.93/warc/CC-MAIN-20150521113212-00112-ip-10-180-206-219.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/89188-derivative-function.html
# Thread: Derivative of This Function 1. ## Derivative of This Function $20[(x/12) - ln(x/12)] +30$ ?? I'm thinking... rearrange $ln(x/12)$ to be $ln (x) - ln (12)$ so derivative = $20[(1/12) - (1/x) - (1/12)]$ $= -20/x$ ?? 2. Originally Posted by Equality $20[(x/12) - ln(x/12)] +30$ You are on the right track $f(x)=20[(x/12) - ln(x/12)] +30 = \frac{20}{12}x -20ln(x) + 20 ln(12) + 30$ $f'(x)=\frac{20}{12}-\frac{20}{x}$ All those terms without x's in them are just real numbers, so their derivative is 0, so there are really only two terms you have to worry about so it looks as though where i went wrong is calculating $ln(12)$ i used $1/12$ instead of (derivative of 12)/12 i.e. $0/12$ i.e. $0$ 4. Yeah, I guess that is one way of thinking about it. Alternatively you know ln(12) is just a number you know? Like the derivative of 3 is 0. The derivative of $\pi$ is 0. The derivative of $\sqrt2$ is 0. So The derivative of $ln(12)=ln(2^23)=2ln(2)+ln(3) \approx 2(.69314) + 1.09861=2.48489$ is just 0 as well. Sometimes these sorts of things can cause confusion, but for every $c\in \mathbb{R}$, we have $\frac{d}{dx}c=0$. 5. got it! thanks heaps Gamma!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 17, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9545255303382874, "perplexity": 535.9365613085179}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988717954.1/warc/CC-MAIN-20161020183837-00136-ip-10-171-6-4.ec2.internal.warc.gz"}
http://www.ck12.org/book/CK-12-Chemistry-Intermediate/section/13.0/
<meta http-equiv="refresh" content="1; url=/nojavascript/"> States of Matter | CK-12 Foundation # Chapter 13: States of Matter Created by: CK-12 0  0  0 Dry ice (solid carbon dioxide, CO2) is quite an interesting substance. To remain in its solid form, dry ice must be kept very cold, below about -80°C. Your experience tells you that a solid substance will melt when its temperature is raised. However, dry ice instead changes directly from a solid to a gas in a process called sublimation. Dry ice is used frequently in “fog machines,” where large chunks of dry ice are dropped into water. The rapid warming generates large amounts of gaseous CO2, which immediately sinks because it is more dense than air. In this chapter, you will learn about the kinetic-molecular theory, which is a series of assumptions that provide a general description of the particles of matter when they are in the solid, liquid, or gas states. Along the way, you will discover how these particles behave when they undergo changes from one state of matter to another. Aug 21, 2013
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8278956413269043, "perplexity": 1111.7797097388845}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663587.40/warc/CC-MAIN-20140930004103-00069-ip-10-234-18-248.ec2.internal.warc.gz"}
https://2010.igem.org/Team:Aberdeen_Scotland/Curve_Fitting
# Team:Aberdeen Scotland/Curve Fitting University of Aberdeen - ayeSwitch - iGEM 2010 iGEM 2010 # Determination of the the Hill Coefficient of the CFP/MS2 loop Association (n2) Based on a graph in a paper by Witherell et al.[1] which showed the binding curves of the MS2 stem loop we could calculate more accurately the value for n2. The initial estimate for n2 was between 1 and 3 (see Parameter Space Analysis). Our two MS2 stem loops (see Fig 1 in Equations) are 19 nucleotides apart, so our binding curve will most closely resemble that of the 8-16 construct, shown in figure 1A (filled squares). Unfortunately we did not have a table of data available to us, so we had to estimate the values directly from the graph (figure 1A). These values were entered into MATLAB, and using MATLAB’s curve fitting tool we fit the Hill function for activators to the curve (figure 1B). The parameters for the best fit curve was then returned. However, the value returned is based on having one MS2 stem loop and we have two MS2 stem loops. Therefore, the value returned needs to be multiplied by two to give the final result for n2. To recap, the equation for the Hill function for activators is , where β is equivalent to our λ values, x is the environmental signal (GAL or METH), K is the dissociation constant and n is the Hill coefficient. Figure 1. A. Graph from paper by Witherell et al.[1] showing the binding curves of the MS2 stem loop. The filled squares are the 8-16 construct which closely resembles the binding curves of our MS2 stems. B. The binding curve for the 8-16 construct was reproduced in MATLAB and the Hill function for activators equation fitted to it (red line). The curve fitting tool gave the following estimated parameters for β(a), K(b) and n(c): Note that the R-square value is close to one which suggests that the fit of the curve to the data is very good. The Hill coefficient is estimated to be 1.302 with a lower limit of 1.135 and an upper limit of 1.469. However, this is just for one MS2 stem loop and we have two. Multiplying this value by 2 we get 2.604 with a lower limit of 2.270 and an upper limit of 2.938. A value greater than 2 suggests that a protein binding to the first stem loop will make it easier for a protein to bind to the second stem loop. We can say that co-operativity has been increased. ### Conclusion We have n2=2.6 and n4=1. So, the equations of our system can be written as follows: Table 1 on the Parameter Space Analysis page shows that with n2=2.6 and n4=1, between 0.96% and 2.03% of the parameter combinations tested gave bistability. However, the ideal scenario is that in table 4 on the Parameter Space Analysis page. Here, with n2=2.6 and n4=1, 51.27% to 58.04% of parameter combinations tested gave bistability. The reason we are not getting bistability 100% of the time, or 0% of the time, is because we have such a large range around each parameter. Each parameter value is randomly chosen from this range each time the program runs. Therefore, some combinations of parameters will give bistability and others will not. Ideally we would have a precise value for each parameter (no uncertainty). In this scenario, each time the program runs the parameters would be exactly the same. The result would either be 100% bistability or 0% bistability – either the switch always works or it doesn’t. ### References [1] Witherell, G.W., et al. (1990), ‘Cooperative Binding of R17 Coat Protein to RNA’, Biochemistry, Vol. 29, pp. 11051-11057 Back to the Top
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8382492661476135, "perplexity": 1428.9195133436892}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103620968.33/warc/CC-MAIN-20220629024217-20220629054217-00636.warc.gz"}
https://cstheory.stackexchange.com/questions/33930/analogues-of-the-berman-hartmanis-conjecture-and-the-creativity-hypothesis
# Analogues of the Berman Hartmanis conjecture and the Creativity Hypothesis The Berman Hartmanis conjecture which formally states that there is an isomorphism for two $NP$ complete languages $L_{1}$, and $L_{2}$, the isomorphism is a bijective function $f()$ such that $f()$ maps strings from $L_{1}$ to $L_{2}$, such that $x \in L_{1}$ iff $f(x) \in L_{2}$. The isomorphism holds for all $NP$-complete languages under polynomial time assumptions(p-isomorphisms). Special conditions are: p-isomorphisms, polynomial run time $f()$, and so also the inverse of $f()$ is computable in polynomial time. The conjecture if true, would imply no $NP$-complete language is sparse. What are the analogues to the conjecture in various types of reductions, for example under $AC^{0}$ reductions the conjecture is true: Under $AC^{0}$ many-one reductions, all $NP$ complete languages do show an $AC^{0}$ isomorphism. This leads one to the creativity conjecture, which is: Are all NP-creative sets languages the same as NP-complete sets under $\leq^{m}_{p}$ reductions. Formally, creativity for a set is defined as: A set $A$ is creative if its complement is "productive". By "productivity" of a set $A$, over a class of languages $C$, we mean there is a function(not necessarily polynomial), that witnesses $A \notin C$. So, it is natural to think of problems to be creative under different conditions, eg, a polynomial time function $f$, or even under log time, or oracular function restrictions. So, while $AC^{0}$ many-one reductions give a resolution to the Berman-Hartmanis conjecture, is there resolution to the creativity conjecture under $AC^{0}$ reductions(or any other reasonable assumptions)? Creativity(k-creativity), like the Berman-Hartmanis conjecture(truth of which shows that there are no sparse $NP$-complete languages), shows us another fine grained level nature of $NP$ complete problems. Are there $NP$ complete problems that show creative(k-creativity, for one) properties, and some don't. Are there natural problems that that one would consider creativity, such that the $NP$-complete language is known to be creative (any variants of SAT)? First of all, Mahaney's Theorem says that merely assuming $\mathsf{P} \neq \mathsf{NP}$, there are no sparse $\mathsf{NP}$-complete sets. (Historically, Mahaney was motivated to study this precisely because of Berman-Hartmanis, but the theorem is independent of BH Isomorphism.) The $\mathsf{AC}^0$ version of the isomorphism conjecture (with various restrictions on uniformity) is a theorem. See the following paper and references therein:
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9186892509460449, "perplexity": 309.3011733448805}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578582736.31/warc/CC-MAIN-20190422215211-20190423001211-00159.warc.gz"}
https://ceafel.tecnico.ulisboa.pt/publication-type/paper/page/2/
# On mixed norm Bergman-Orlicz-Morrey spaces Published Oct 28, 2020 in Paper # A C*-algebra of singular integral operators with shifts and piecewise quasicontinuous coefficients Published Oct 28, 2020 in Paper # Paired operators in asymmetric space setting Published Oct 28, 2020 in Paper # Polyharmonic Bergman spaces and Bargmann type transforms Published Oct 28, 2020 in Paper # On the kernel of a singular integral operator with shift Published Oct 28, 2020 in Paper # Approximation sequences to operators on Banach spaces Published Oct 28, 2020 in Paper # The index of weighted singular integral operators with shifts and slowly oscillating data Published Oct 28, 2020 in Paper # Necessary Fredholm conditions for weighted singular integral operators with shifts and slowly oscillating data Published Oct 28, 2020 in Paper # On boundedness of Bergman projection operators in Banach spaces of holomorphic functions in half plane and harmonic functions in half space Published Oct 28, 2020 in Paper # Mixed norm spaces of analytic functions as spaces of generalized fractional derivatives of functions in Hardy type spaces Published Oct 28, 2020 in Paper # Fredholm criteria for pseudodifferential operators and induced representations of groupoid algebras Published Oct 28, 2020 in Paper # Quasi-classical limit of the open Jordanian XXX spin chain Published Oct 28, 2020 in Paper # Embeddings of local generalized Morrey spaces between weighted Lebesgue Spaces Published Oct 28, 2020 in Paper
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9532135725021362, "perplexity": 2543.3288439370745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988966.82/warc/CC-MAIN-20210509092814-20210509122814-00081.warc.gz"}
http://link.springer.com/article/10.1007%2Fs11009-010-9207-6
Methodology and Computing in Applied Probability , Volume 14, Issue 2, pp 383–403 # Simulation and Estimation for the Fractional Yule Process Article DOI: 10.1007/s11009-010-9207-6 Cahoy, D.O. & Polito, F. Methodol Comput Appl Probab (2012) 14: 383. doi:10.1007/s11009-010-9207-6 ## Abstract In this paper, we propose some representations of a generalized linear birth process called fractional Yule process (fYp). We also derive the probability distributions of the random birth and sojourn times. The inter-birth time distribution and the representations then yield algorithms on how to simulate sample paths of the fYp. We also attempt to estimate the model parameters in order for the fYp to be usable in practice. The estimation procedure is then tested using simulated data as well. We also illustrate some major characteristics of fYp which will be helpful for real applications. ### Keywords Yule–Furry processFractional calculusMittag–LefflerWrightPoisson processBirth process 37A5062M8697K60
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9376441240310669, "perplexity": 2069.090485618751}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660342.42/warc/CC-MAIN-20160924173740-00288-ip-10-143-35-109.ec2.internal.warc.gz"}
https://ftp.aimsciences.org/article/doi/10.3934/jmd.2015.9.147
# American Institute of Mathematical Sciences 2015, 9: 147-167. doi: 10.3934/jmd.2015.9.147 ## Ergodicity and topological entropy of geodesic flows on surfaces 1 Faculty of Mathematics, Ruhr University Bochum, Universitätsstraße 150, 44780 Bochum, Germany Received  February 2015 Revised  May 2015 Published  August 2015 We consider reversible Finsler metrics on the 2-sphere and the 2-torus, whose geodesic flow has vanishing topological entropy. Following a construction of A. Katok, we discuss examples of Finsler metrics on both surfaces with large ergodic components for the geodesic flow in the unit tangent bundle. On the other hand, using results of J. Franks and M. Handel, we prove that ergodicity and dense orbits cannot occur in the full unit tangent bundle of the 2-sphere, if the Finsler metric has conjugate points along every closed geodesic. In the case of the 2-torus, we show that ergodicity is restricted to strict subsets of tubes between flow-invariant tori in the unit tangent bundle. The analogous result applies to monotone twist maps. Citation: Jan Philipp Schröder. Ergodicity and topological entropy of geodesic flows on surfaces. Journal of Modern Dynamics, 2015, 9: 147-167. doi: 10.3934/jmd.2015.9.147 ##### References: [1] S. Alpern and V. S. Prasad, Typical Dynamics of Volume Preserving Homeomorphisms, Cambridge Tracts in Mathematics, 139, Cambridge University Press, Cambridge, 2000.  Google Scholar [2] S. Angenent, Parabolic equations for curves on surfaces: Part II. Intersections, blow-up and generalized solutions, Ann. of Math. (2), 133 (1991), 171-215. doi: 10.2307/2944327.  Google Scholar [3] S. Angenent, A remark on the topological entropy and invariant circles of an area preserving twistmap, in Twist Mappings and Their Applications, IMA Vol. Math. Appl., 44, Springer, New York, 1992, 1-5.  Google Scholar [4] S. Angenent, Self-intersecting geodesics and entropy of the geodesic flow, Acta Math. Sin. (Engl. Ser.), 24 (2008), 1949-1952. doi: 10.1007/s10114-008-6439-2.  Google Scholar [5] V. Bangert, On the existence of closed geodesics on two-spheres, Internat. J. Math., 4 (1993), 1-10. doi: 10.1142/S0129167X93000029.  Google Scholar [6] D. Bao, S.-S. Chern and Z. Shen, An Introduction to Riemann-Finsler Geometry, Graduate Texts in Mathematics, 200, Springer-Verlag, New York, 2000. doi: 10.1007/978-1-4612-1268-3.  Google Scholar [7] P. Bernard and C. Labrousse, An entropic characterization of the flat metrics on the two torus, to appear in Geometriae Dedicata, (2015). doi: 10.1007/s10711-015-0098-0.  Google Scholar [8] G. D. Birkhoff, Dynamical Systems, American Mathematical Society Colloquium Publications, Vol. IX, American Mathematical Society, Providence, R.I., 1927.  Google Scholar [9] A. V. Bolsinov and A. T. Fomenko, Integrable Hamiltonian Systems. Geometry, Topology, Classification, Chapman & Hall/CRC, Boca Raton, FL, 2004. doi: 10.1201/9780203643426.  Google Scholar [10] M. Bonino, Around Brouwer's theory of fixed point free planar homeomorphisms, Notes de cours de l'École d'été "Méthodes topologiques en dynamique des surfaces,'' Université Grenoble I, 2006. Available from: http://www.math.univ-paris13.fr/ bonino/travaux.html. Google Scholar [11] M. Brown, A new proof of Brouwer's lemma on translation arcs, Houston J. Math., 10 (1984), 35-41.  Google Scholar [12] E. I. Dinaburg, On the relations among various entropy characteristics of dynamical systems, Math. USSR Izv., 5 (1971), 337-378. doi: 10.1070/IM1971v005n02ABEH001050.  Google Scholar [13] V. J. Donnay, Geodesic flow on the two-sphere. II. Ergodicity, in Dynamical Systems, Lecture Notes in Mathematics, 1342, Springer, Berlin, 1988, 112-153. doi: 10.1007/BFb0082827.  Google Scholar [14] H. Duan and Y. Long, A remark on the existence of closed geodesics on symmetric Finsler 2-spheres,, 2012. Available from: , (): 201202.   Google Scholar [15] J. Franks, Geodesics on $\mathbbS^2$ and periodic points of annulus homeomorphisms, Invent. Math., 108 (1992), 403-418. doi: 10.1007/BF02100612.  Google Scholar [16] J. Franks and M. Handel, Entropy zero area preserving diffeomorphisms of $\mathbbS^2$, Geom. Topol., 16 (2012), 2187-2284. doi: 10.2140/gt.2012.16.2187.  Google Scholar [17] E. Glasmachers and G. Knieper, Characterization of geodesic flows on $\mathbbT^2$ with and without positive topological entropy, Geom. Funct. Anal., 20 (2010), 1259-1277. doi: 10.1007/s00039-010-0087-2.  Google Scholar [18] E. Glasmachers and G. Knieper, Minimal geodesic foliation on $\mathbbT^2$ in case of vanishing topological entropy, J. Topol. Anal., 3 (2011), 511-520. doi: 10.1142/S1793525311000623.  Google Scholar [19] M. A. Grayson, Shortening embedded curves, Ann. of Math. (2), 129 (1989), 71-111. doi: 10.2307/1971486.  Google Scholar [20] A. Harris and G. P. Paternain, Dynamically convex Finsler metrics and $J$-holomorphic embedding of asymptotic cylinders, Ann. Global Anal. Geom., 34 (2008), 115-134. doi: 10.1007/s10455-008-9111-2.  Google Scholar [21] G. A. Hedlund, Geodesics on a two-dimensional Riemannian manifold with periodic coefficients, Ann. of Math. (2), 33 (1932), 719-739. doi: 10.2307/1968215.  Google Scholar [22] M. W. Hirsch, Differential Topology, Graduate Texts in Mathematics, 33, Springer-Verlag, New York, 1976.  Google Scholar [23] A. Katok, Ergodic perturbations of degenerate integrable Hamiltonian systems, Math. USSR Izv., 7 (1973), 535-571.  Google Scholar [24] A. Katok, Lyapunov exponents, entropy and periodic orbits for diffeomorphisms, Inst. Hautes Études Sci. Publ. Math., 51 (1980), 137-173.  Google Scholar [25] A. Katok and B. Hasselblatt, Introduction to the modern theory of dynamical systems, Encyclopedia of Mathematics and its Applications, 54, Cambridge University Press, Cambridge, 1995. doi: 10.1017/CBO9780511809187.  Google Scholar [26] G. P. Paternain, Entropy and completely integrable Hamiltonian systems, Proc. Amer. Math. Soc., 113 (1991), 871-873. doi: 10.1090/S0002-9939-1991-1059632-7.  Google Scholar [27] J. P. Schröder, Invariant tori and topological entropy in Tonelli Lagrangian systems on the 2-torus, to appear in Ergodic Theory and Dynamical Systems, (2015). doi: 10.1017/etds.2014.137.  Google Scholar [28] J. P. Schröder, Global minimizers for Tonelli Lagrangians on the 2-torus, J. Topol. Anal., 7 (2015), 261-291. doi: 10.1142/S1793525315500090.  Google Scholar [29] Y. Yomdin, Volume growth and entropy, Israel J. Math., 57 (1987), 285-300. doi: 10.1007/BF02766215.  Google Scholar show all references ##### References: [1] S. Alpern and V. S. Prasad, Typical Dynamics of Volume Preserving Homeomorphisms, Cambridge Tracts in Mathematics, 139, Cambridge University Press, Cambridge, 2000.  Google Scholar [2] S. Angenent, Parabolic equations for curves on surfaces: Part II. Intersections, blow-up and generalized solutions, Ann. of Math. (2), 133 (1991), 171-215. doi: 10.2307/2944327.  Google Scholar [3] S. Angenent, A remark on the topological entropy and invariant circles of an area preserving twistmap, in Twist Mappings and Their Applications, IMA Vol. Math. Appl., 44, Springer, New York, 1992, 1-5.  Google Scholar [4] S. Angenent, Self-intersecting geodesics and entropy of the geodesic flow, Acta Math. Sin. (Engl. Ser.), 24 (2008), 1949-1952. doi: 10.1007/s10114-008-6439-2.  Google Scholar [5] V. Bangert, On the existence of closed geodesics on two-spheres, Internat. J. Math., 4 (1993), 1-10. doi: 10.1142/S0129167X93000029.  Google Scholar [6] D. Bao, S.-S. Chern and Z. Shen, An Introduction to Riemann-Finsler Geometry, Graduate Texts in Mathematics, 200, Springer-Verlag, New York, 2000. doi: 10.1007/978-1-4612-1268-3.  Google Scholar [7] P. Bernard and C. Labrousse, An entropic characterization of the flat metrics on the two torus, to appear in Geometriae Dedicata, (2015). doi: 10.1007/s10711-015-0098-0.  Google Scholar [8] G. D. Birkhoff, Dynamical Systems, American Mathematical Society Colloquium Publications, Vol. IX, American Mathematical Society, Providence, R.I., 1927.  Google Scholar [9] A. V. Bolsinov and A. T. Fomenko, Integrable Hamiltonian Systems. Geometry, Topology, Classification, Chapman & Hall/CRC, Boca Raton, FL, 2004. doi: 10.1201/9780203643426.  Google Scholar [10] M. Bonino, Around Brouwer's theory of fixed point free planar homeomorphisms, Notes de cours de l'École d'été "Méthodes topologiques en dynamique des surfaces,'' Université Grenoble I, 2006. Available from: http://www.math.univ-paris13.fr/ bonino/travaux.html. Google Scholar [11] M. Brown, A new proof of Brouwer's lemma on translation arcs, Houston J. Math., 10 (1984), 35-41.  Google Scholar [12] E. I. Dinaburg, On the relations among various entropy characteristics of dynamical systems, Math. USSR Izv., 5 (1971), 337-378. doi: 10.1070/IM1971v005n02ABEH001050.  Google Scholar [13] V. J. Donnay, Geodesic flow on the two-sphere. II. Ergodicity, in Dynamical Systems, Lecture Notes in Mathematics, 1342, Springer, Berlin, 1988, 112-153. doi: 10.1007/BFb0082827.  Google Scholar [14] H. Duan and Y. Long, A remark on the existence of closed geodesics on symmetric Finsler 2-spheres,, 2012. Available from: , (): 201202.   Google Scholar [15] J. Franks, Geodesics on $\mathbbS^2$ and periodic points of annulus homeomorphisms, Invent. Math., 108 (1992), 403-418. doi: 10.1007/BF02100612.  Google Scholar [16] J. Franks and M. Handel, Entropy zero area preserving diffeomorphisms of $\mathbbS^2$, Geom. Topol., 16 (2012), 2187-2284. doi: 10.2140/gt.2012.16.2187.  Google Scholar [17] E. Glasmachers and G. Knieper, Characterization of geodesic flows on $\mathbbT^2$ with and without positive topological entropy, Geom. Funct. Anal., 20 (2010), 1259-1277. doi: 10.1007/s00039-010-0087-2.  Google Scholar [18] E. Glasmachers and G. Knieper, Minimal geodesic foliation on $\mathbbT^2$ in case of vanishing topological entropy, J. Topol. Anal., 3 (2011), 511-520. doi: 10.1142/S1793525311000623.  Google Scholar [19] M. A. Grayson, Shortening embedded curves, Ann. of Math. (2), 129 (1989), 71-111. doi: 10.2307/1971486.  Google Scholar [20] A. Harris and G. P. Paternain, Dynamically convex Finsler metrics and $J$-holomorphic embedding of asymptotic cylinders, Ann. Global Anal. Geom., 34 (2008), 115-134. doi: 10.1007/s10455-008-9111-2.  Google Scholar [21] G. A. Hedlund, Geodesics on a two-dimensional Riemannian manifold with periodic coefficients, Ann. of Math. (2), 33 (1932), 719-739. doi: 10.2307/1968215.  Google Scholar [22] M. W. Hirsch, Differential Topology, Graduate Texts in Mathematics, 33, Springer-Verlag, New York, 1976.  Google Scholar [23] A. Katok, Ergodic perturbations of degenerate integrable Hamiltonian systems, Math. USSR Izv., 7 (1973), 535-571.  Google Scholar [24] A. Katok, Lyapunov exponents, entropy and periodic orbits for diffeomorphisms, Inst. Hautes Études Sci. Publ. Math., 51 (1980), 137-173.  Google Scholar [25] A. Katok and B. Hasselblatt, Introduction to the modern theory of dynamical systems, Encyclopedia of Mathematics and its Applications, 54, Cambridge University Press, Cambridge, 1995. doi: 10.1017/CBO9780511809187.  Google Scholar [26] G. P. Paternain, Entropy and completely integrable Hamiltonian systems, Proc. Amer. Math. Soc., 113 (1991), 871-873. doi: 10.1090/S0002-9939-1991-1059632-7.  Google Scholar [27] J. P. Schröder, Invariant tori and topological entropy in Tonelli Lagrangian systems on the 2-torus, to appear in Ergodic Theory and Dynamical Systems, (2015). doi: 10.1017/etds.2014.137.  Google Scholar [28] J. P. Schröder, Global minimizers for Tonelli Lagrangians on the 2-torus, J. Topol. Anal., 7 (2015), 261-291. doi: 10.1142/S1793525315500090.  Google Scholar [29] Y. Yomdin, Volume growth and entropy, Israel J. Math., 57 (1987), 285-300. doi: 10.1007/BF02766215.  Google Scholar [1] Ping-Liang Huang, Youde Wang. Periodic solutions of inhomogeneous Schrödinger flows into 2-sphere. Discrete & Continuous Dynamical Systems - S, 2016, 9 (6) : 1775-1795. doi: 10.3934/dcdss.2016074 [2] Tifei Qian, Zhihong Xia. Heteroclinic orbits and chaotic invariant sets for monotone twist maps. Discrete & Continuous Dynamical Systems, 2003, 9 (1) : 69-95. doi: 10.3934/dcds.2003.9.69 [3] Misha Bialy, Andrey E. Mironov. Rich quasi-linear system for integrable geodesic flows on 2-torus. Discrete & Continuous Dynamical Systems, 2011, 29 (1) : 81-90. doi: 10.3934/dcds.2011.29.81 [4] Sergei Agapov, Alexandr Valyuzhenich. Polynomial integrals of magnetic geodesic flows on the 2-torus on several energy levels. Discrete & Continuous Dynamical Systems, 2019, 39 (11) : 6565-6583. doi: 10.3934/dcds.2019285 [5] Charles Pugh, Michael Shub. Periodic points on the $2$-sphere. Discrete & Continuous Dynamical Systems, 2014, 34 (3) : 1171-1182. doi: 10.3934/dcds.2014.34.1171 [6] Pedro Duarte, Silvius Klein. Topological obstructions to dominated splitting for ergodic translations on the higher dimensional torus. Discrete & Continuous Dynamical Systems, 2018, 38 (11) : 5379-5387. doi: 10.3934/dcds.2018237 [7] Marie-Claude Arnaud. A nondifferentiable essential irrational invariant curve for a $C^1$ symplectic twist map. Journal of Modern Dynamics, 2011, 5 (3) : 583-591. doi: 10.3934/jmd.2011.5.583 [8] Qiudong Wang. The diffusion time of the connecting orbit around rotation number zero for the monotone twist maps. Discrete & Continuous Dynamical Systems, 2000, 6 (2) : 255-274. doi: 10.3934/dcds.2000.6.255 [9] Lei Wang, Quan Yuan, Jia Li. Persistence of the hyperbolic lower dimensional non-twist invariant torus in a class of Hamiltonian systems. Communications on Pure & Applied Analysis, 2016, 15 (4) : 1233-1250. doi: 10.3934/cpaa.2016.15.1233 [10] Francisco Balibrea, J.L. García Guirao, J.I. Muñoz Casado. A triangular map on $I^{2}$ whose $\omega$-limit sets are all compact intervals of $\{0\}\times I$. Discrete & Continuous Dynamical Systems, 2002, 8 (4) : 983-994. doi: 10.3934/dcds.2002.8.983 [11] Yujun Ju, Dongkui Ma, Yupan Wang. Topological entropy of free semigroup actions for noncompact sets. Discrete & Continuous Dynamical Systems, 2019, 39 (2) : 995-1017. doi: 10.3934/dcds.2019041 [12] Patrick Foulon, Vladimir S. Matveev. Zermelo deformation of finsler metrics by killing vector fields. Electronic Research Announcements, 2018, 25: 1-7. doi: 10.3934/era.2018.25.001 [13] Luigi Ambrosio, Federico Glaudo, Dario Trevisan. On the optimal map in the $2$-dimensional random matching problem. Discrete & Continuous Dynamical Systems, 2019, 39 (12) : 7291-7308. doi: 10.3934/dcds.2019304 [14] Yuri Chekanov, Felix Schlenk. Notes on monotone Lagrangian twist tori. Electronic Research Announcements, 2010, 17: 104-121. doi: 10.3934/era.2010.17.104 [15] Ser Peow Tan, Yan Loi Wong and Ying Zhang. The SL(2, C) character variety of a one-holed torus. Electronic Research Announcements, 2005, 11: 103-110. [16] José S. Cánovas. Topological sequence entropy of $\omega$–limit sets of interval maps. Discrete & Continuous Dynamical Systems, 2001, 7 (4) : 781-786. doi: 10.3934/dcds.2001.7.781 [17] Silvére Gangloff, Alonso Herrera, Cristobal Rojas, Mathieu Sablik. Computability of topological entropy: From general systems to transformations on Cantor sets and the interval. Discrete & Continuous Dynamical Systems, 2020, 40 (7) : 4259-4286. doi: 10.3934/dcds.2020180 [18] Gerard Thompson. Invariant metrics on Lie groups. Journal of Geometric Mechanics, 2015, 7 (4) : 517-526. doi: 10.3934/jgm.2015.7.517 [19] Dean Crnković, Bernardo Gabriel Rodrigues, Sanja Rukavina, Loredana Simčić. Self-orthogonal codes from orbit matrices of 2-designs. Advances in Mathematics of Communications, 2013, 7 (2) : 161-174. doi: 10.3934/amc.2013.7.161 [20] José Ginés Espín Buendía, Víctor Jiménez Lopéz. A topological characterization of the $\omega$-limit sets of analytic vector fields on open subsets of the sphere. Discrete & Continuous Dynamical Systems - B, 2019, 24 (3) : 1143-1173. doi: 10.3934/dcdsb.2019010 2020 Impact Factor: 0.848 ## Metrics • HTML views (0) • Cited by (2) • on AIMS
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8604634404182434, "perplexity": 3191.282754995832}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056476.66/warc/CC-MAIN-20210918123546-20210918153546-00148.warc.gz"}
http://mfat.imath.kiev.ua/article/?id=382
Open Access # Spectral measure of commutative Jacobi field equipped with multiplication structure ### Abstract The article investigates properties of the spectral measure of the Jacobi field constructed over an abstract Hilbert rigging $H_-\supset H\supset L\supset H_+.$ Here $L$ is a real commutative Banach algebra that is dense in $H.$ It is shown that with certain restrictions, the Fourier transform of the spectral measure can be found in a similar way as it was done for the case of the Poisson field with the zero Hilbert space $L^2(\Delta,d u).$ Here $\Delta$ is a Hausdorff compact space and $u$ is a probability measure defined on the Borel $\sigma$-algebra of subsets of $\Delta.$ The article contains a formula for the Fourier transform of a spectral measure of the Jacobi field that is constructed over the above-mentioned abstract rigging. ### Article Information Title Spectral measure of commutative Jacobi field equipped with multiplication structure Source Methods Funct. Anal. Topology, Vol. 13 (2007), no. 1, 28-42 MathSciNet MR2308577 Copyright The Author(s) 2007 (CC BY-SA) ### Authors Information Oleksii Mokhonko Taras Shevchenko Kyiv National University, Kyiv, Ukraine ### Citation Example Oleksii Mokhonko, Spectral measure of commutative Jacobi field equipped with multiplication structure, Methods Funct. Anal. Topology 13 (2007), no. 1, 28-42. ### BibTex @article {MFAT382, AUTHOR = {Mokhonko, Oleksii}, TITLE = {Spectral measure of commutative Jacobi field equipped with multiplication structure}, JOURNAL = {Methods Funct. Anal. Topology}, FJOURNAL = {Methods of Functional Analysis and Topology}, VOLUME = {13}, YEAR = {2007}, NUMBER = {1}, PAGES = {28-42}, ISSN = {1029-3531}, URL = {http://mfat.imath.kiev.ua/article/?id=382}, }
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8787944912910461, "perplexity": 960.974923741853}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267159165.63/warc/CC-MAIN-20180923075529-20180923095929-00335.warc.gz"}
http://www.actucation.com/college-physics-2/dipole
Informative line ### Dipole Practice to calculate torque on a dipole in a uniform field and Potential Energy due to Dipole-Dipole Interaction. Learn definition and formula of dipole moment with examples. # Dipole Definition- • Two equal and opposite point charges separated by a very small distance is called dipole. ## Dipole Moment • Dipole moment is the multiplication of one charge and distance between them. $$\vec p=q(\vec d)$$ Unit:- C-m ### Direction of dipole moment • Direction of dipole moment is always taken from negative charge to positive charge. • Dipole moment is a vector quantity. #### Which one of the following dipoles does not have  magnitude of dipole moment p=24 nC-m? A B C D × For option (A), Dipole moment p = q(d) $$\Rightarrow 2\times10^{-6}\times12\times10^{-3}=24\,\text{nC-m}$$ For option (B), Dipole moment p = q(d) $$\Rightarrow3\times10^{-6}\times8\times10^{-3}=24\;\text{nC-m}$$ For option (C), Dipole moment p = q(d) $$\Rightarrow12\times10^{-6}\times2\times10^{-3}=24\;\text{nC-m}$$ For option (D), Dipole moment p = q(d) $$\Rightarrow1\times10^{-6}\times3\times10^{-3}=3\;\text{nC-m}$$ ### Which one of the following dipoles does not have  magnitude of dipole moment p=24 nC-m? A B C D Option D is Correct # Electric Field on the Axis and Equator of Dipole • Consider a dipole shown in figure. ## Calculation of Electric Field on Axial Line • Consider a point P on the axis of dipole at a distance r from the mid-point of the dipole. $$\vec E_{-q}=\dfrac {1}{4\pi\epsilon_0}\dfrac {q}{(r+\ell)^2}$$    (in negative X-axis) $$\vec E_{+q}=\dfrac {1}{4\pi\epsilon_0}\dfrac {q}{(r-\ell)^2}$$    ( in positive X-axis) Since, $$(r+\ell)^2>(r-\ell)^2$$ so, $$|\vec E_{-q}|<|\vec E_{+q}|$$ • So, at point P net field will be in positive X-axis. $$\vec E_{net}=E_{+q}-E_{-q}$$       (towards positive X-axis) $$\vec E_{net}= \dfrac {1}{4\pi\epsilon_0} \dfrac {q}{(r-\ell)^2}- \dfrac {1}{4\pi\epsilon_0} \dfrac {q}{(r+\ell)^2}$$     (towards positive X-axis) $$\vec E_{net}= \dfrac {q}{4\pi\epsilon_0} \left [ \dfrac {1}{(r-\ell)^2}- \dfrac {1}{(r+\ell)^2} \right]$$   (towards positive X-axis) $$\vec E_{net}= \dfrac {q}{4\pi\epsilon_0} \left [ \dfrac {4r\ell}{(r^2-\ell^2)^2} \right]$$ (towards positive X-axis) • Dipole moment $$\vec p=q(2 \ell)$$ (towards positive X-axis) $$\vec E_{net}=\dfrac {2\vec pr}{4\pi\epsilon_0(r^2-\ell^2)^2}$$ • If $$\ell <<r$$ $$\vec E_{net}=\dfrac {2\vec pr}{4\pi\epsilon_0(r^2)^2}$$ $$\vec E_{net}=\dfrac {2\vec p}{4\pi\epsilon_0\;r^3}$$ ## Calculation of Electric Field on Equatorial Line of Dipole • By superposition principle, total electric field at R is the vector sum of electric field of -q and +q at R. $$E_{net}=2\;E cos\;\theta$$ (in opposite direction to $$\vec p$$) $$E_{net}=2. \dfrac {1}{4\pi\epsilon_0} \dfrac {q}{(r^2+\ell^2)} cos\theta$$ $$cos\theta = \dfrac {\ell}{\sqrt{r^2+\ell^2}}$$ $$E_{net}=\dfrac {1}{4\pi\epsilon_0} \dfrac {2q\ell}{(r^2+\ell^2)^{3/2}}$$ (opposite direction to $$\vec d$$) Since $$\ell <<r$$ $$\therefore r^2+\ell^2\simeq r^2$$ $$\vec E_{net}=\dfrac {1}{4\pi\epsilon_0}\times\dfrac {(-\vec p)}{r^3}$$ $$\vec E_{net}=\dfrac {-\vec p}{4\pi\epsilon_0\;r^3}$$ #### Choose the incorrect expression for electric field at a point on the axial  and equatorial line if, charges are +q and –q and distance between them is d. A $$(\vec E_{P\,})_{Axial}=-\dfrac {1}{4\pi\epsilon_0} \dfrac{q\vec d}{r^3}$$,  $$(\vec E_{P\,})_{Equatorial}=\dfrac {+\vec p}{4\pi\epsilon_0r^3}$$ where, $$\vec p=q\,\vec d$$ B $$(\vec E_{P\,})_{Axial}=+\dfrac {1}{4\pi\epsilon_0}\times \dfrac{2q\vec d}{r^3}$$,  $$(\vec E_{P\,})_{Equatorial}=\dfrac {-q\vec d}{4\pi\epsilon_0r^3}$$ where, $$\vec p=q\,\vec d$$ C $$(\vec E_{P\,})_{Axial}=\dfrac {1}{4\pi\epsilon_0}\times \dfrac{2\vec p}{r^3}$$,  $$(\vec E_{P\,})_{Equatorial}=\dfrac {-\vec p}{4\pi\epsilon_0r^3}$$ where, $$\vec p=q\,\vec d$$ D $$(\vec E_{P\,})_{Axial}=\dfrac {1}{2\pi\epsilon_0}\times \dfrac{q\vec d}{r^3}$$,  $$(\vec E_{P\,})_{Equatorial}=\dfrac {-\vec p}{4\pi\epsilon_0r^3}$$ where, $$\vec p=q\,\vec d$$ × Electric field at a point on axial line $$(\vec E_P)_{Axial}=\dfrac {1}{4\pi\epsilon_0}\dfrac {2\vec p}{r^3}$$ Electric field at a point on equatorial line $$(\vec E_P)_{Equatorial}=\dfrac {-1}{4\pi\epsilon_0}\dfrac {\vec p}{r^3}$$(where $$\vec p = q\vec d$$ ) Hence, option (A) is incorrect. ### Choose the incorrect expression for electric field at a point on the axial  and equatorial line if, charges are +q and –q and distance between them is d. A $$(\vec E_{P\,})_{Axial}=-\dfrac {1}{4\pi\epsilon_0} \dfrac{q\vec d}{r^3}$$ $$(\vec E_{P\,})_{Equatorial}=\dfrac {+\vec p}{4\pi\epsilon_0r^3}$$ where, $$\vec p=q\,\vec d$$ . B $$(\vec E_{P\,})_{Axial}=+\dfrac {1}{4\pi\epsilon_0}\times \dfrac{2q\vec d}{r^3}$$, $$(\vec E_{P\,})_{Equatorial}=\dfrac {-q\vec d}{4\pi\epsilon_0r^3}$$ where, $$\vec p=q\,\vec d$$ C $$(\vec E_{P\,})_{Axial}=\dfrac {1}{4\pi\epsilon_0}\times \dfrac{2\vec p}{r^3}$$, $$(\vec E_{P\,})_{Equatorial}=\dfrac {-\vec p}{4\pi\epsilon_0r^3}$$ where, $$\vec p=q\,\vec d$$ D $$(\vec E_{P\,})_{Axial}=\dfrac {1}{2\pi\epsilon_0}\times \dfrac{q\vec d}{r^3}$$, $$(\vec E_{P\,})_{Equatorial}=\dfrac {-\vec p}{4\pi\epsilon_0r^3}$$ where, $$\vec p=q\,\vec d$$ Option A is Correct # Electric Potential on the Axis of a Dipole • Consider a dipole as shown in figure. • Total electric potential at point P is the scalar sum of potential due to –q and +q charge. $$V_p=V_{-q}+V_{+q}$$ $$V_p= \dfrac {1}{4\pi\epsilon_0} \dfrac {-q}{(r+\ell)} + \dfrac {1}{4\pi\epsilon_0} \dfrac {q}{(r-\ell)}$$ $$V_p= \dfrac {1}{4\pi\epsilon_0}q \left [ \dfrac {1}{(r-\ell)} - \dfrac {1}{(r+\ell)} \right]$$ $$V_p= \dfrac {1}{4\pi\epsilon_0}q \left [ \dfrac {(r+\ell)-{(r-\ell)}} {{(r+\ell)}(r-\ell)} \right]$$ $$V_p= \dfrac {1}{4\pi\epsilon_0}q \left [ \dfrac {2\ell} {r^2-\ell^2} \right]$$ Since, $$\ell <<r$$  ($$r^2-\ell^2\simeq r^2$$) $$V_P=\dfrac {1}{4\pi\epsilon_0} \dfrac {|\vec p|}{r^2}$$ Note- A dipole having zero potential at equatorial line. #### Calculate potential at point P due to dipole as shown in figure. Given, d = 1mm, Q=2$$\mu$$C and r = 2m. A 65.4 Volt B 60 Volt C 15 Volt D 4.5 Volt × Potential at point P, $$V_P=\dfrac {1}{4\pi\epsilon_0} \dfrac {|\vec p|}{r^2}$$ where, p = dipole moment r = separation from mid point of dipole to point P $$\dfrac {1}{4\pi\epsilon_0} = 9\times10^9\;\text{Nm}^2/\text C^2$$ $$|\vec p|=q\times d = (2\times 10^{-6})\times (1\times 10^{-3})$$ $$= 2\times 10^{-9}$$ $$r = 2\,m$$ $$V_P=\dfrac {9\times 10^9\times2\times10^{-9}}{(2)^2}$$ $$V_P=4.5\;\text{ Volt}$$ ### Calculate potential at point P due to dipole as shown in figure. Given, d = 1mm, Q=2$$\mu$$C and r = 2m. A 65.4 Volt . B 60 Volt C 15 Volt D 4.5 Volt Option D is Correct # Torque Experienced by a Dipole in a Uniform Electric Field • Consider a dipole in a uniform electric field ($$\vec E$$), as shown in figure. • The dipole is placed, making an angle $$\theta$$ with the field vector. • By analyzing forces on both the point charges, it is clear that the net force on the dipole is zero. $$\vec F_{Net}=\vec F_{+q}+\vec F _{-q}$$ $$\vec F_{Net}=q\vec E+(-q\vec E)$$ $$\vec F_{Net}=0$$ • But due to the force couple, a torque is created. $$\tau$$= Couple force × Couple arm $$\tau= qE (d sin\,\theta)$$ $$\tau=(qd)\;(E)sin\;\theta$$ $$\vec \tau=q\vec d\times \vec E$$ $$\vec \tau=\vec p\times \vec E$$ ## Maxima and Minima Condition of Torque ### Maxima • Torque will be maximum when dipole is perpendicularly placed in an electric field. $$\theta = 90°$$ $$\tau_{maximum}=q\;Ed=pE$$ ### Minima • Torque will be minimum when dipole is placed in the direction of electric field or in the direction opposite to electric field. • In figure, $$\tau_{minimum}=0$$ . Note :  If $$\vec E$$ is non-uniform, expression of torque experienced by dipole changes. #### A uniform electric field of intensity $$\vec E=10^8$$ Volt/m is directed in X-Y plane along X-axis. Calculate the torque experienced by a dipole as shown in figure. (Given d = 2 mm, $$\theta$$=30°, Q = 2 $$\mu$$C) A 390 N-m B 1200 N-m C 3.9 N-m D 0.2 N-m × Net torque on dipole, $$\tau=(q\;d)E\;sin\theta$$ $$\tau =(2\times10^{-6})\;(2\times10^{-3})\;10^8\left(\dfrac {1}{2}\right)$$ $$\tau =2\times10^{-1}$$ $$\tau =0.2\,\text{N-m}$$ ### A uniform electric field of intensity $$\vec E=10^8$$ Volt/m is directed in X-Y plane along X-axis. Calculate the torque experienced by a dipole as shown in figure. (Given d = 2 mm, $$\theta$$=30°, Q = 2 $$\mu$$C) A 390 N-m . B 1200 N-m C 3.9 N-m D 0.2 N-m Option D is Correct # Electric Field at a Point • To find electric field at a point (P) which lies neither on the axis nor on the equatorial line, electric field due to -q and +q at P can be added vectorially to get net field at P. Note: This method is very complicated. • Other method is to take components of dipole moments of the dipole such that point P lies on the axis of one component of dipole and on the equator of other component of dipole. • Electric field due to component of dipole moment is in direction of dipole moment component. • Now at point P, Reason$$\rightarrow$$ (1) Axial line (2) Equator Line • Net electric field at P, • Net electric field at point P, $$\vec E = \dfrac {p}{4\pi\epsilon_0r^3}(2cos^2\theta-sin^2\theta)\hat i+\dfrac {3p\,sin\theta \,cos\theta}{4\pi\epsilon_0r^3}\hat j$$ #### Calculate electric field at point A, at a distance r = 5 m and making an angle $$\theta$$= 30°, from the mid point of dipole having charge q = 2 $$\mu$$ C and distance between charges d = 2 mm. A $$(39\hat i +37\hat j)\,V/m$$ B $$(10\hat i +15\hat j)\,V/m$$ C $$(0.35\hat i +0.325\hat j)\,V/m$$ D $$(18\hat i +12\hat j)\,V/m$$ × Dipole moment $$\vec p=2\times10^{-6}\times2\times 10^{-3}(\hat i)\,\text{C-m}$$ $$\vec p=4\times10^{-9}(\hat i)\,\text{C-m}$$ Taking components of this dipole moment such that point A lies on the axis of one component and the equator of other component. Net field at point A , At point A along equatorial line , $$E\;psin\,\theta=\dfrac {1\times p}{4\pi\epsilon_0r^3}sin30°$$ $$E\;psin\,\theta=\dfrac {4\times 10^{-9}\times 9\times10^{9}}{125}\times\dfrac {1}{2}$$ $$=\dfrac {18}{125}=0.15$$ At point A along axial line , $$\vec E\;pcos\,\theta=\dfrac {2\times p}{4\pi\epsilon_0r^3}cos30°$$ $$Ep\,cos\,\theta=\dfrac {2\times4\times 10^{-9}} {125}\times\dfrac {\sqrt3}{2}\times9\times10^9$$ $$=\dfrac {36\sqrt3}{125}=0.5$$ Net field at point A, Net field at A, Net field at A, $$\vec E=(0.35\hat i+0.325 \hat j)\,V/m$$ ### Calculate electric field at point A, at a distance r = 5 m and making an angle $$\theta$$= 30°, from the mid point of dipole having charge q = 2 $$\mu$$ C and distance between charges d = 2 mm. A $$(39\hat i +37\hat j)\,V/m$$ . B $$(10\hat i +15\hat j)\,V/m$$ C $$(0.35\hat i +0.325\hat j)\,V/m$$ D $$(18\hat i +12\hat j)\,V/m$$ Option C is Correct # Potential at a Point • Total potential at P is the scalar sum of potential due to –q and +q. $$V_P=(V_P)_{-q} +(V_P)_{+q}$$ Note : Above method is very complicated. • Other way is to take components of dipole moment of the dipole such that point P lies on the axis of one component and on the equatorial line of other component. • For $$p\;cos\theta$$, point P lies on the axial. So, potential at point P, $$(V_P)_{p\,cos\theta}=\dfrac {1}{4\pi\epsilon_0}\times\dfrac {p\, cos\,\theta}{r^2}$$ • For $$p\,\sin\theta$$, point P lies on the equatorial line so, potential at P due to p sin $$\theta$$ $$(V_P)_{p\,sin\theta}=0$$ • Total potential at point P, $$V=\dfrac {1}{4\pi\epsilon_0}\times\dfrac {p\,cos\,\theta}{r^2}$$ #### Calculate electric potential at  point A, as shown in figure. Given $$\theta =$$60°, r = 5 m, d = 2 mm, q = 5 $$\mu$$C, –q =–5 $$\mu$$C. A –4.6 Volt B 1.8 Volt C 3.9 Volt D 9.2 Volt × Dipole moment ($$(\vec p)=q\times d$$) $$\vec p=5\times10^{-6}\times2\times 10^{-3}\hat i$$ $$\vec p=10\times10^{-9}\hat i$$ C-m Taking components of dipole moment Dipole moment along axial line = p cos 60° $$=10 \times 10^{-9}\times \dfrac {1}{2}=\dfrac {10^{-8}}{2}$$ Dipole moment along equatorial line = p sin 60° $$=10 \times 10^{-9}\times \dfrac {\sqrt3}{2}=5\sqrt 3\times 10^{-9}$$ $$(V_P)_{cos\;60°_A}=\dfrac {1}{4\pi\epsilon_0}\times\dfrac {p\,cos\,60°}{r^2}$$ $$=\dfrac {9\times10^9\times10^{-8}}{(5)^2}\times\dfrac {1}{2}$$ $$= 1.8 \,Volt$$ $$(V_P)_{sin\;60°_A}=0$$ Total potential = 1.8 + 0 = 1.8 Volt ### Calculate electric potential at  point A, as shown in figure. Given $$\theta =$$60°, r = 5 m, d = 2 mm, q = 5 $$\mu$$C, –q =–5 $$\mu$$C. A –4.6 Volt . B 1.8 Volt C 3.9 Volt D 9.2 Volt Option B is Correct # Potential Energy of a Dipole • Consider a dipole of dipole moment ($$\vec p$$) in a uniform electric field $$\vec E$$. • The dipole experiences a torque. $$\tau = \vec p \times \vec E$$ • External work is done in rotating the dipole in opposite direction of the torque. This work is stored as potential energy of dipole. $$W_{external}=-W_{electric}$$ • To rotate this dipole from an angle $$\theta$$ to $$\phi$$, work done by electrical force $$d\,W_{electric}=-\tau \,d\theta$$ $$d\,W_{electric}=-pE\,sin\,\theta\,d\theta$$ $$\displaystyle\,W_{electric}=-\int \limits ^{\phi}_{\theta}pE\,sin\,\theta\,d\theta$$ $$W_{electric}=-pE\,[-cos\,\theta]^\phi_{\theta}$$ $$W_{electric}=pE\,[cos\,\phi-cos\,\theta]$$ • Change in potential energy $$\Delta U=-W_{conservative\,force}$$ $$\Delta U=-W_{electric}$$ $$\Delta U=-pE\,[cos\,\phi-cos\,\theta]$$ $$\Rightarrow\;\Delta U=pE\,[cos\,\theta-cos\,\phi]$$ $$\Rightarrow U_\theta-U_\phi=pE[cos\theta-cos\phi]$$ • From above equation, $$U=-pE\,cos\,\theta$$ $$U=-\vec p.\vec E$$ Note: Expression of potential energy of dipole is same in uniform and non-uniform electric field. #### Calculate potential energy of dipole from given figure. Given 60°, – q = – 2 $$\mu$$C,   , q = 2 $$\mu$$C and d = 2 mm, E=1010 V/m. A 20 Joule B –20 Joule C 200 Joule D –200 Joule × Dipole moment $$(\vec p)=q×\vec d$$ $$=2\times10^{-6}\times2\times10^{-3}$$ $$p=4×10^{-9}$$ C-m Potential energy is given as, $$U=-\vec p.\vec E$$ $$U=-(4\times10^{-9})\times(10^{10})\;cos60°$$ $$U=-2\times10$$ $$U=-20\;Joule$$ ### Calculate potential energy of dipole from given figure. Given 60°, – q = – 2 $$\mu$$C,   , q = 2 $$\mu$$C and d = 2 mm, E=1010 V/m. A 20 Joule . B –20 Joule C 200 Joule D –200 Joule Option B is Correct # Potential Energy due to Dipole-Dipole Interaction • Potential energy of a dipole in the effect of other dipole can be explained as a dipole placed in an electric field. • Potential energy of two dipoles placed in an electric field is given as $$U=-\vec p.\vec E$$ • Two dipoles are situated in space in four configuration. • Potential energy in any of these configuration is determined as • It means potential energy of dipole B is because of electric field of dipole A. • Potential energy of any dipole-dipole interaction  is defined as the negative of dot product of dipole moment of dipole itself and the field due to second dipole at a point where first dipole is placed. $$U=-\vec p. \vec E$$ #### Calculate potential energy of the system. Given $$\vec p_1=2\hat i$$ C-m, $$d = 1\,m$$ , $$\vec p_2=2\hat i$$ C-m A 3000 Joule B 145×106 Joule C –140 Joule D –144×109 Joule × Electric field on the axial line of dipole is given as, $$\vec E = \dfrac {2p}{4\pi\epsilon_0r^3} \hat r$$ Electric field on dipole 1 due to dipole 2, $$E_{1/2}=\dfrac {2\;\vec p_2}{4\pi\epsilon_0r^3}$$ Given, $$\vec p=2\hat i\;\text{C-m}$$ , $$\theta = 0°$$ $$\vec E_{1/2}=\dfrac {9\times10^9\times2\times 2\hat i }{(1)^3}$$ $$\vec E_{1/2}=(36\times10^9\;\hat i)\,V/m$$ Electric field on dipole 2 due to dipole 1, $$E_{2/1}=\dfrac {2}{4\pi\epsilon_0}\dfrac {\vec p_1}{r^3}$$ Given  $$\vec p_1=2\hat i, \;\theta=0°$$ $$E_{2/1}=\dfrac {9\times10^9\times 2\times 2\hat i}{(1)^3}$$ $$E_{2/1}=36\hat i\times10^9 \;V/m$$ $$U=-\vec p.\vec E = -pE\,cos\theta$$                        $$(\theta = 0°)$$ $$U=(-\vec p_1.\vec E_{1/2})+( -\vec p_2. \vec E_{2/1})$$ $$U=(-2\times36\times10^9)+( -2\times36\times10^9)$$ $$U = –144×10^9 \,\,Joule$$ ### Calculate potential energy of the system. Given $$\vec p_1=2\hat i$$ C-m, $$d = 1\,m$$ , $$\vec p_2=2\hat i$$ C-m A 3000 Joule . B 145×106 Joule C –140 Joule D –144×109 Joule Option D is Correct
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9357995390892029, "perplexity": 1543.369711515431}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825308.77/warc/CC-MAIN-20171022150946-20171022170946-00729.warc.gz"}
http://physics.stackexchange.com/questions/3635/radiation-from-a-pair-of-charged-objects-orbiting-each-other
# Radiation from a pair of charged objects orbiting each other This question on binary black hole solutions, led to me think about the similar question from the perspective of what we know about the Hydrogen atom. Prior to quantum mechanics, it was not understood what led to the stability of the Hydrogen atom against collapse of the electron orbits due to Bremsstrahlung, i.e. the emission of radiation due to the fact that it is in a non-inertial (accelerated) frame of reference. Bohr and Sommerfeld came up with a somewhat ad-hoc procedure - the first ever instance of quantization - according to which in the quantum theory only those classical orbits are allowed the value of whose action is quantized in units of $\hbar$. $$\int_i p dq = 2 \pi n_i$$ where the integral is over the $i^{th}$ (closed) classical orbit. Now what I'm thinking of next has probably been thought of before but I haven't done a literature review to find out. Classically, we expect the accelerating electron to radiate resulting in the catastrophic collapse of its orbit. However, in a complete description we must also take the proton into consideration. It is also a charged object and as is well known from the two-body, inverse square law, central force problem (see eg. Goldstein) both the proton and electron orbit each other. Therefore the proton, being a charged object, must also radiate if we don't ignore its (accelerated) motion around the electron. An observer sitting at a distance $d \gg r$, where $r$ is the mean size of the two-body system, will measure radiation which is a superposition of that coming from both the electron and the proton. The question is this: a). What is the phase difference between the two contributions ($\mathbf{E}_e$ and $\mathbf{E}_p$) to the net electric field $\mathbf{E}$ as seen by this observer?, b). What is the value of the total energy $E$ emitted by the electron-proton system, given by the integral of the Poynting vector across a closed surface enclosing the system, as seen by this observer? [ My motivation is to see if we can learn more about the Bohr-Sommerfeld quantization condition by considering the classical electrodynamics of the full electron-proton system. The quantity $E$ will depend on the size and shape of the classical orbits of the two charged objects or more simply of their mean separation $E:=E(r)$. As we vary $r$ from $0$ to some value $r_{max} \ll d$ we would expect $E(r)$ to oscillate and have local minima for some classical orbits. If these classical minima occur for orbits which satisfy the Bohr-Sommerfeld condition then we would have established a connection between the full classical problem and its quantization. ] - Although a point charge does radiate when accelerated, this isn't necessarily true of all charge distritutions, where the radiation fields of all the charge elements can cancel in some cases. This nonradiation condition has been known since 1910 when Paul Ehrenfest published a paper: Ungleichförmige Elektrizitätsbewegungen ohne Magnet- und Strahlungsfeld: Phys. Z. 11 (1910), 708-709) Recent classic papers are: Goedecke, G. H. (1964). "Classically Radiationless Motions and Possible Implications for Quantum Theory". Physical Review 135: B281–B288. doi:10.1103/PhysRev.135.B281 Haus, H. A. (1986). "On the radiation from point charges". American Journal of Physics 54: 1126 - Thanks @John. That is the gist of the my question. I'm looking at the papers you've mentioned. –  user346 Jan 23 '11 at 10:08 I would guess that the contribution of the proton would be fairly negligible, because of the difference in mass. The classical radiated field amplitude will depend on the magnitude of the acceleration, and while the proton and electron both experience the same force, the proton is 1836 times heavier, so the acceleration is reduced by a factor of 1836. Which means that the field amplitude will presumably be reduced by the same factor, in which case the phase relation isn't that important, because even if they were perfectly out of phase, the reduction of the total field due to the proton's contribution would be minimal. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9105238914489746, "perplexity": 303.07859856808415}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510274866.27/warc/CC-MAIN-20140728011754-00310-ip-10-146-231-18.ec2.internal.warc.gz"}
https://www.causes.com/profiles/189553918/activity
# Sherry Robinson ### Activity Sherry Robinson added a reason to Jkkh, Out Of Egypt: A Memoir --------------------------------------- DOWNLOAD: http://urllio.com/r8z50 --------------------------------------- . . . . . . . . . . . . . . . . . . . . . . . . . . . . d5d9c27ca9 http://www.miestokate.lt/en/news/view/id/317302 http://graph.org/Minecraft-Pocket-Edition-076-IPA-IOS-Full-Download-Free-09-20…Read More Sherry Robinson added a reason to Jkkh, In Hindi Download --------------------------------------- In Hindi Download: http://urllio.com/r6f6j --------------------------------------- . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646f9e108c https://www.causes.com/posts/4993901…Read More Sherry Robinson added a reason to Jkkh, Sim N Blanco Malayalam Movie Download --------------------------------------- Sim N Blanco Malayalam Movie Download: http://urllio.com/r62v7 --------------------------------------- . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646f9e108c http://poemreader.ning.com/profiles/blogs/war-of-the-worlds-goliath-telugu-full-movie-download…Read More Sherry Robinson added a reason to Jkkh, Kidnapping Donald Trump Full Movie Hd Download --------------------------------------- Kidnapping Donald Trump Full Movie Hd Download: http://urllio.com/r60od --------------------------------------- . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646f9e108c …Read More Sherry Robinson added a reason to Jkkh, Download Operation Mekong --------------------------------------- Download Operation Mekong: http://urllio.com/r2n2l --------------------------------------- . . . . . . . . . . . . . . . . . . . . . . . . . . . . a5c7b9f00b Inspired by the true story known as the Mekong Massacre--two Chinese commercial vessels are ambushed while traveling down…Read More Sherry Robinson added a reason to Jkkh, The Subtle Caricature Of The Would Be Assassin Online Free --------------------------------------- The Subtle Caricature Of The Would Be Assassin Online Free: http://urllio.com/r2m1n --------------------------------------- . . . . . . . . . . . . . . . . . . . . . . . . . . . . a5c7b9f00b In this wild absurdist comedy, a seemingly amnesiac man…Read More Sherry Robinson added a reason to Jkkh, Pride And Prejudice And Zombies Full Movie Hd 1080p --------------------------------------- Pride And Prejudice And Zombies Full Movie Hd 1080p: http://urllio.com/r1xkj --------------------------------------- . . . . . . . . . . . . . . . . . . . . . . . . . . . . a5c7b9f00b Five sisters in 19th century England must cope with the pressures to…Read More Sherry Robinson added a reason to Jkkh, Tamil Movie Six Free Download --------------------------------------- Tamil Movie Six Free Download: http://urllio.com/r155u --------------------------------------- . . . . . . . . . . . . . . . . . . . . . . . . . . . . a5c7b9f00b A gramophone breaks the silence , they are getting dressed to meet the coming visitors , she is searching for a…Read More Sherry Robinson added a reason to Jkkh, Escort Full Movie In Hindi Free Download Mp4 --------------------------------------- Escort Full Movie In Hindi Free Download Mp4: http://urllio.com/r0ehc --------------------------------------- . . . . . . . . . . . . . . . . . . . . . . . . . . . . a5c7b9f00b Escort tells the story of the escort driver Alex. Alex' job is to drive call girls…Read More Sherry Robinson added a reason to Jkkh, Smokey And The Bandit 720p Movies --------------------------------------- Smokey And The Bandit 720p Movies: http://urllio.com/qzafx --------------------------------------- . . . . . . . . . . . . . . . . . . . . . . . . . . . . a5c7b9f00b In the summer of 1976 "Big Enos" Burdette, a flamboyant Texan aspiring to political office in Georgia,…Read More
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8596510887145996, "perplexity": 757.2085014066448}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347413901.34/warc/CC-MAIN-20200601005011-20200601035011-00441.warc.gz"}
http://www.ams.org/joursearch/servlet/PubSearch?f1=msc&onejrnl=jams&pubname=one&v1=55%5C-XX&startRec=1
# American Mathematical Society My Account · My Cart · Customer Services · FAQ Publications Meetings The Profession Membership Programs Math Samplings Policy and Advocacy In the News About the AMS You are here: Home > Publications AMS eContent Search Results Matches for: msc=(55\\\-XX) AND publication=(jams) Sort order: Date Format: Standard display Results: 1 to 30 of 32 found      Go to page: 1 2 [1] Aravind Asok and Jean Fasel. Splitting vector bundles outside the stable range and ${\mathbb A}^1$-homotopy sheaves of punctured affine spaces. J. Amer. Math. Soc. 28 (2015) 1031-1062. Abstract, references, and article information    View Article: PDF [2] Daniel Juteau, Carl Mautner and Geordie Williamson. Parity sheaves. J. Amer. Math. Soc. 27 (2014) 1169-1212. Abstract, references, and article information    View Article: PDF [3] Robert Lipshitz and Sucharit Sarkar. A Khovanov stable homotopy type. J. Amer. Math. Soc. 27 (2014) 983-1042. Abstract, references, and article information View Article: PDF [4] John Pardon. The Hilbert--Smith conjecture for three-manifolds. J. Amer. Math. Soc. 26 (2013) 879-899. Abstract, references, and article information    View Article: PDF [5] Carles Broto, Jesper M. Møller and Bob Oliver. Equivalences between fusion systems of finite groups of Lie type. J. Amer. Math. Soc. 25 (2012) 1-20. MR 2833477. Abstract, references, and article information    View Article: PDF [6] Peter Fiebig. Sheaves on affine Schubert varieties, modular representations, and Lusztig’s conjecture. J. Amer. Math. Soc. 24 (2011) 133-181. MR 2726602. Abstract, references, and article information    View Article: PDF [7] David Ben-Zvi, John Francis and David Nadler. Integral transforms and Drinfeld centers in derived algebraic geometry. J. Amer. Math. Soc. 23 (2010) 909-966. MR 2669705. Abstract, references, and article information    View Article: PDF [8] Kasper K. S. Andersen and Jesper Grodal. The classification of $2$-compact groups. J. Amer. Math. Soc. 22 (2009) 387-436. MR 2476779. Abstract, references, and article information    View Article: PDF This article is available free of charge [9] Peter Linnell and Thomas Schick. Finite group extensions and the Atiyah conjecture. J. Amer. Math. Soc. 20 (2007) 1003-1051. MR 2328714. Abstract, references, and article information    View Article: PDF This article is available free of charge [10] Soren Galatius, Ib Madsen and Ulrike Tillmann. Divisibility of the stable Miller-Morita-Mumford classes. J. Amer. Math. Soc. 19 (2006) 759-779. MR 2219303. Abstract, references, and article information    View Article: PDF This article is available free of charge [11] Charles Rezk. The units of a ring spectrum and a logarithmic cohomology operation. J. Amer. Math. Soc. 19 (2006) 969-1014. MR 2219307. Abstract, references, and article information    View Article: PDF This article is available free of charge [12] A. J. Berrick, F. R. Cohen, Y. L. Wong and J. Wu. Configurations, braids, and homotopy groups. J. Amer. Math. Soc. 19 (2006) 265-326. MR 2188127. Abstract, references, and article information    View Article: PDF This article is available free of charge [13] Curtis T. McMullen. Minkowski's conjecture, well-rounded lattices and topological dimension. J. Amer. Math. Soc. 18 (2005) 711-734. MR 2138142. Abstract, references, and article information    View Article: PDF This article is available free of charge [14] Carles Broto, Ran Levi and Bob Oliver. The homotopy theory of fusion systems. J. Amer. Math. Soc. 16 (2003) 779-856. MR 1992826. Abstract, references, and article information    View Article: PDF This article is available free of charge [15] James E. McClure and Jeffrey H. Smith. Multivariable cochain operations and little $n$-cubes. J. Amer. Math. Soc. 16 (2003) 681-704. MR 1969208. Abstract, references, and article information    View Article: PDF This article is available free of charge [16] Igor Belegradek and Vitali Kapovitch. Obstructions to nonnegative curvature and rational homotopy theory. J. Amer. Math. Soc. 16 (2003) 259-284. MR 1949160. Abstract, references, and article information    View Article: PDF This article is available free of charge [17] Michael J. Hopkins, Nicholas J. Kuhn and Douglas C. Ravenel. Generalized group characters and complex oriented cohomology theories. J. Amer. Math. Soc. 13 (2000) 553-594. MR 1758754. Abstract, references, and article information    View Article: PDF This article is available free of charge [18] Mark Hovey, Brooke Shipley and Jeff Smith. Symmetric spectra. J. Amer. Math. Soc. 13 (2000) 149-208. MR 1695653. Abstract, references, and article information    View Article: PDF This article is available free of charge [19] Wilfried Schmid and Kari Vilonen. Two geometric character formulas for reductive Lie groups. J. Amer. Math. Soc. 11 (1998) 799-867. MR 1612634. Abstract, references, and article information    View Article: PDF This article is available free of charge [20] Hélène Esnault, Bruno Kahn, Marc Levine and Eckart Viehweg. The Arason invariant and mod 2 algebraic cycles. J. Amer. Math. Soc. 11 (1998) 73-118. MR 1460391. Abstract, references, and article information    View Article: PDF This article is available free of charge [21] Burt Totaro. Torsion algebraic cycles and complex cobordism. J. Amer. Math. Soc. 10 (1997) 467-493. MR 1423033. Abstract, references, and article information    View Article: PDF This article is available free of charge [22] Amnon Neeman. The Grothendieck duality theorem via Bousfield's techniques and Brown representability. J. Amer. Math. Soc. 9 (1996) 205-236. MR 1308405. Abstract, references, and article information    View Article: PDF This article is available free of charge [23] Ruth Charney and Michael W. Davis. The $K(\pi,1)$-problem for hyperplane complements associated to infinite reflection groups . J. Amer. Math. Soc. 8 (1995) 597-627. MR 1303028. Abstract, references, and article information    View Article: PDF This article is available free of charge [24] A. K. Bousfield. Localization and periodicity in unstable homotopy theory . J. Amer. Math. Soc. 7 (1994) 831-873. MR 1257059. Abstract, references, and article information    View Article: PDF This article is available free of charge [25] W. G. Dwyer and C. W. Wilkerson. A new finite loop space at the prime two . J. Amer. Math. Soc. 6 (1993) 37-64. MR 1161306. Abstract, references, and article information    View Article: PDF This article is available free of charge [26] Jürgen Rohlfs and Joachim Schwermer. Intersection numbers of special cycles . J. Amer. Math. Soc. 6 (1993) 755-778. MR 1186963. Abstract, references, and article information    View Article: PDF This article is available free of charge [27] Richard P. Stanley. Subdivisions and local $h$-vectors . J. Amer. Math. Soc. 5 (1992) 805-851. MR 1157293. Abstract, references, and article information    View Article: PDF This article is available free of charge [28] Frances Kirwan. The cohomology rings of moduli spaces of bundles over Riemann surfaces . J. Amer. Math. Soc. 5 (1992) 853-906. MR 1145826. Abstract, references, and article information    View Article: PDF This article is available free of charge [29] Sylvain E. Cappell and Julius L. Shaneson. Stratifiable maps and topological invariants . J. Amer. Math. Soc. 4 (1991) 521-551. MR 1102578. Abstract, references, and article information    View Article: PDF This article is available free of charge [30] Xianzhe Dai. Adiabatic limits, nonmultiplicativity of signature, and Leray spectral sequence . J. Amer. Math. Soc. 4 (1991) 265-321. MR 1088332. Abstract, references, and article information    View Article: PDF This article is available free of charge Results: 1 to 30 of 32 found      Go to page: 1 2
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9228814840316772, "perplexity": 2045.7515813687999}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398447043.45/warc/CC-MAIN-20151124205407-00206-ip-10-71-132-137.ec2.internal.warc.gz"}
https://cs3511.wordpress.com/course-topics/feb-23-matchings-in-bipartite-graphs/
# Feb 23. Matchings in bipartite graphs Matching A matching $M$ is a subset of $E$ such that no two edges $e, e' \in M$ share a vertex, i.e. for any two edges $e, e' \in M$ we have $e \cap e' = \phi$. Perfect Matching A perfect matching is a matching that meets every vertex of $G$, i.e. for every $v \in V$ there is an edge $(u,v) \in M$. Problem: Given a bipartite graph $G$ we wish to know whether $G$ has a perfect matching. Note that the yes answer has a short certificate, namely the perfect matching itself. The certificate for the no answer follows from Hall’s theorem which we give below. Hall’s Theorem: $G=(V=A \cup B, E)$ has a perfect matching if and only if $|A| = |B|$ and $\forall X \subseteq A, |N(X)| \ge |X|$. Proof: Note that the forward direction is trivial. Suppose for every $X \subsetneq A$, $|N(X)| > |X|$ then we are done by induction on the number of vertices. Consider an edge $e=(u, v)$ and let $G' = G\setminus \{u,v\}$. Let $V(G') = A' \cup B'$. Then clearly $|A'| = |B'|$ and $\forall X \subseteq A$, $|N(X)| \ge |X|$. Thus by our inductive hypothesis we are done. Now suppose there is a set $X \subsetneq A$ such that $|N(X)| = |X|$. Consider the graph $H$ induced on $X \cup N(X)$. Since $|X| < |A|$ and so by induction we know that there is a perfect matching in $H$. Let us consider the induced graph $H'$ on $(A\setminus N(X)) \cup (B \setminus N(X))$. We claim that for every $Y \subseteq B \setminus N(X)$, $|N(Y)| \ge |Y|$. This is because if $|A|=|B|$ then $\forall X \subseteq A$, $|N(X)| \ge |X|$ if and only if $\forall Y \subseteq B$, $|N(Y)| \ge |Y|$. Thus applying the inductive hypothesis we get a perfect matching in $H'$. Patching the perfect matchings in $H$ and $H'$ gives us a perfect matching in $G$. Algorithm Alternating path: A path $P$ in a graph is an $M$ alternating path if the edges in the path alternate between $M$ and non $M$ edges. Augmenting path: An $M$ augmenting path is an $M$ alternating path such that its two end points is $M$ unmatched. Suppose we have an $M$ augmenting path $P$. Then we can construct a new matching $M' = M \triangle P$. Note that $|M'| > |M|$. This step is called augmenting the augmenting path $P$. Thus if we have the following algorithm: • Start with an arbitrary matching $M$. • While there exists an $M$ augmenting path $P$: • Augment path $P$ • Set $M = M \triangle P$ • EndWhile Lemma: $latex M$ is a maximum matching in $G$ if and only if $G$ has no $M$ augmenting paths. Proof: Clearly if $M$ is a maximum matching then $G$ can have no $M$ augmenting path since augmenting it would produce a matching $M'$ such that $|M'| > |M|$. Suppose $M$ has no augmenting path and let $N$ be a maximum matching. Consider $M \triangle N$. Notice that the degree of any vertex $v$ in $M \triangle N$ is at most $2$. This is because any vertex can have at most $1$ $M$ edge and at most $1$ $N$ edge incident on it. Observe that this implies $M \triangle N$ must be a disjoint collection of paths and cycles. Note that every cycle must be of even length since the cycle edges alternate between $M$ and $N$. Thus the number of $M$ edges in cycles is the same as the number of $N$ edges. In even paths the number of $M$ edges is the same as the number of $N$ edges since they are alternating. Suppose we have an odd length path. There are two possibilities – either it ends with an $M$ edge in which case it is $N$ augmenting (contradiction since $N$ is maximum) or it ends with an $N$ edge in which case it is $M$ augmenting (contradicting our hypothesis). Thus $M\triangle N$ consists of only even cycles and even paths. Thus $|M| = |N|$. Finding augmenting paths Start with all the unmatched vertices and consider a BFS like procedure from them. We grow the forest around them by adding unmatched and matched edges alternatively. Notice that there cannot be an unmatched edge connecting two matched vertices in the same component – since that gives us an odd cycle. Whenever two components are connected – we get an augmenting path.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 89, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9907272458076477, "perplexity": 133.46409893722307}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320707.69/warc/CC-MAIN-20170626101322-20170626121322-00566.warc.gz"}
https://quant.stackexchange.com/questions/35015/i-am-trying-to-fit-an-garchp-q-model-to-fx-volatility-should-i-be-interested
# I am trying to fit an GARCH(p,q) model to FX volatility. Should I be interested in the t-value of GARCH parameters? So what I am trying to do is to model the volatility for different currencies by fitting a GARCH(p, q) model. I am selecting the values of (p, q) by iteratively going through p & q such that max(p, q) = 5. After going through the 24 different combinations of p & q (excluding where p = q = 0), I select the model which has the lowest Akaike Information Criterion. I am doing all this in R using the rugarch package. For the "best" model selected for some currencies, I observed that the t-value is extremely low. I have observed t-value of omega in a GARCH(1, 1) model as low as 0.09. My question is that can I still use the model if the GARCH parameters are statistically insignificant? Or am I mistaken and the usual "statistical significance of model parameters" which I learned for linear regression doesn't apply for GARCH type models. I am a newbie to this field but I do not mind trying to learn from the start so any pointers would be greatly appreciated. The post became kind of lengthy, so if you're only interested in what you should do, just skip to the summary in the end. However, I think most of the confusion about these tests in general stems from a lack of knowledge about what one is actually doing, when applying the test. First, let's take a look at what we are testing. As you said, you want to see, whether the GARCH parameters are "significant", according to the test, in order to choose a lag order. Generally, for a parameter estimate $\hat\beta$, we want to test the null hypothesis $\mathcal{H}_0:\hat\beta=\beta_0$ against the alternative $\hat\beta\ne\beta_0$. As we want to see, whether our parameters are "significantly different from zero", we apply the test with $\beta_0=0$. Then, the t-statistic is given by \begin{align}\label{52} t_{\hat\beta}=\frac{\hat\beta-\beta_0}{\sigma_{\hat\beta}}=\frac{\hat\beta}{\sigma_{\hat\beta}}, \end{align} where $\sigma_{\hat\beta}$ denotes the standard error of our estimate $\hat\beta$, i.e. the standard deviation of its sampling distribution. Now, one of the main assumptions of the test comes into play, namely that the estimator is normally distributed. As you use the rugarch package, estimation is done by MLE. Thus, one can note that the asymptotic distribution of $\hat\beta$ is $\mathcal{N}(0,\sigma_{\hat\beta}^2)$ under the null hypothesis and some regularity conditions. Hence, in this case the t-statistic is asymptotically standard normally distributed. Considering that the standard normal distribution is symmetric, we can approximate the p-value by \begin{align}\label{53} p=2\cdot(1-\Phi(|t_{\hat\beta}|)), \end{align} with $\Phi(\cdot)$ denoting the standard normal cumulative distribution function. Numerically, to use this test, we firstly have to compute the standard error $\sigma_{\hat\beta}$. This can be (and usually is) done for every estimated parameter at once, by approximating the covariance matrix $\Sigma$ with the inverse observed information matrix ${\mathcal I}^{-1}$. The latter is given by the negative hessian of the likelihood $-\hat{\mathbf{H}}$, or the hessian of the negative likelihood, evaluated at the maximum likelihood estimate. Altogether, this means \begin{align}\label{54} \Sigma\approx\mathcal{I}^{-1}=(-\hat{\mathbf{H}})^{-1}, \end{align} which yields the standard errors for every parameter estimate, by calculating \begin{align}\label{55} \sqrt{\text{diag}\left((-\hat{\mathbf{H}})^{-1}\right)}. \end{align} Now, let's look into some of the possible issues one might encounter: • The estimator could be non-normally distributed. In your case, as you're using MLE, the asymptotic gaussian distribution could be inaccurate for small sample sizes. • The inversion of the hessian may prove to be difficult and require high numerical precision, specifically for highly (in)significant parameters. • It should be kept in mind that, inherently, statistical tests don't give you the probability that the null hypothesis is true, but the probability of observing the data, given that the null hypothesis is true. In summary, what I wanted to say is: The test certainly has its issues and shouldn't be used as the only deciding factor. However, it can provide some insight into the impact that a parameter has on a model's estimates. Thus, when deciding the lag order of models, one should always additionally consult other model selection measures, like e.g. information criteria (AIC/BIC). The t-values can be used as pointers, which parameters could be omitted for a potentially better fit.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 4, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.932981014251709, "perplexity": 342.15974170379025}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347391923.3/warc/CC-MAIN-20200526222359-20200527012359-00568.warc.gz"}
https://library.kiwix.org/wikipedia_en_top_maxi/A/Schiehallion_experiment
# Schiehallion experiment The Schiehallion experiment was an 18th-century experiment to determine the mean density of the Earth. Funded by a grant from the Royal Society, it was conducted in the summer of 1774 around the Scottish mountain of Schiehallion, Perthshire. The experiment involved measuring the tiny deflection of the vertical due to the gravitational attraction of a nearby mountain. Schiehallion was considered the ideal location after a search for candidate mountains, thanks to its isolation and almost symmetrical shape. Schiehallion's isolated position and symmetrical shape lent well to the experiment The experiment had previously been considered, but rejected, by Isaac Newton as a practical demonstration of his theory of gravitation; however, a team of scientists, notably Nevil Maskelyne, the Astronomer Royal, was convinced that the effect would be detectable and undertook to conduct the experiment. The deflection angle depended on the relative densities and volumes of the Earth and the mountain: if the density and volume of Schiehallion could be ascertained, then so could the density of the Earth. Once this was known, it would in turn yield approximate values for those of the other planets, their moons, and the Sun, previously known only in terms of their relative ratios. ## Background A pendulum hangs straight downwards in a symmetrical gravitational field. However, if a sufficiently large mass such as a mountain is nearby, its gravitational attraction should pull the pendulum's plumb-bob slightly out of true (in the sense that it doesn't point to the centre of mass of the Earth). The change in plumb-line angle against a known object—such as a star—could be carefully measured on opposite sides of the mountain. If the mass of the mountain could be independently established from a determination of its volume and an estimate of the mean density of its rocks, then these values could be extrapolated to provide the mean density of the Earth, and by extension, its mass. Isaac Newton had considered the effect in the Principia,[1] but pessimistically thought that any real mountain would produce too small a deflection to measure.[2] Gravitational effects, he wrote, were only discernible on the planetary scale.[2] Newton's pessimism was unfounded: although his calculations had suggested a deviation of less than 2 minutes of arc (for an idealised three-mile high [5 km] mountain), this angle, though very slight, was within the theoretical capability of instruments of his day.[3] An experiment to test Newton's idea would both provide supporting evidence for his law of universal gravitation, and estimates of the mass and density of the Earth. Since the masses of astronomical objects were known only in terms of relative ratios, the mass of the Earth would provide reasonable values to the other planets, their moons, and the Sun. The data were also capable of determining the value of Newton's gravitational constant G, though this was not a goal of the experimenters; references to a value for G would not appear in the scientific literature until almost a hundred years later.[4] ## Finding the mountain Chimborazo, the subject of the French 1738 experiment ### Chimborazo, 1738 A pair of French astronomers, Pierre Bouguer and Charles Marie de La Condamine, were the first to attempt the experiment, conducting their measurements on the 6,268-metre (20,564 ft) volcano Chimborazo in the Viceroyalty of Peru in 1738.[5] Their expedition had left France for South America in 1735 to try to measure the meridian arc length of one degree of latitude near the equator, but they took advantage of the opportunity to attempt the deflection experiment. In December 1738, under very difficult conditions of terrain and climate, they conducted a pair of measurements at altitudes of 4,680 and 4,340 m.[6] Bouguer wrote in a 1749 paper that they had been able to detect a deflection of 8 seconds of arc, but he downplayed the significance of their results, suggesting that the experiment would be better carried out under easier conditions in France or England.[3][6] He added that the experiment had at least proved that the Earth could not be a hollow shell, as some thinkers of the day, including Edmond Halley, had suggested.[5] ### Schiehallion, 1774 The symmetrical ridge of Schiehallion viewed across Loch Rannoch Between 1763 and 1767, during operations to survey the Mason–Dixon line between the states of Pennsylvania and Maryland, British astronomers found many more systematic and non-random errors than might have been expected, extending the work longer than planned.[7] When this information reached the members of the Royal Society, Henry Cavendish realized that the phenomenon may have been due to the gravitational pull of the nearby Allegheny Mountains, which had probably diverted the plumb lines of the theodolites and the liquids inside spirit levels.[8] Prompted by this news, a further attempt on the experiment was proposed to the Royal Society in 1772 by Nevil Maskelyne, Astronomer Royal.[9] He suggested that the experiment would "do honour to the nation where it was made"[3] and proposed Whernside in Yorkshire, or the Blencathra-Skiddaw massif in Cumberland as suitable targets. The Royal Society formed the Committee of Attraction to consider the matter, appointing Maskelyne, Joseph Banks and Benjamin Franklin amongst its members.[10] The Committee dispatched the astronomer and surveyor Charles Mason to find a suitable mountain.[1] After a lengthy search over the summer of 1773, Mason reported that the best candidate was Schiehallion (then spelled Schehallien), a 1,083 m (3,553 ft) peak lying between Loch Tay and Loch Rannoch in the central Scottish Highlands.[10] The mountain stood in isolation from any nearby hills, which would reduce their gravitational influence, and its symmetrical east–west ridge would simplify the calculations. Its steep northern and southern slopes would allow the experiment to be sited close to its centre of mass, maximising the deflection effect. Coincidentally, the summit lies almost exactly at the latitudinal and longitudinal centre of Scotland.[11] Mason declined to conduct the work himself for the offered commission of one guinea per day.[10] The task therefore fell to Maskelyne, for which he was granted a temporary leave of his duties as Astronomer Royal. He was aided in the task by mathematician and surveyor Charles Hutton, and Reuben Burrow who was a mathematician of the Royal Greenwich Observatory. A workforce of labourers was engaged to construct observatories for the astronomers and assist in the surveying. The science team was particularly well-equipped: its astronomical instruments included a 12 in (30 cm) brass quadrant from Cook's 1769 transit of Venus expedition, a 10 ft (3.0 m) zenith sector, and a regulator (precision pendulum clock) for timing the astronomical observations.[12] They also acquired a theodolite and Gunter's chain for surveying the mountain, and a pair of barometers for measuring altitude.[12] Generous funding for the experiment was available due to underspend on the transit of Venus expedition, which had been turned over to the Society by King George III of the United Kingdom.[1][3] ## Measurements ### Astronomical The deflection is the difference between the true zenith Z as determined by astrometry, and the apparent zenith Z as determined by a plumb-line Observatories were constructed to the north and south of the mountain, plus a bothy to accommodate equipment and the scientists.[6] The ruins of these structures remain on the mountainside. Most of the workforce was housed in rough canvas tents. Maskelyne's astronomical measurements were the first to be conducted. It was necessary for him to determine the zenith distances with respect to the plumb line for a set of stars at the precise time that each passed due south (astronomic latitude).[3][13][14] Weather conditions were frequently unfavourable due to mist and rain. However, from the south observatory, he was able to take 76 measurements on 34 stars in one direction, and then 93 observations on 39 stars in the other. From the north side, he then conducted a set of 68 observations on 32 stars and a set of 100 on 37 stars.[6] By conducting sets of measurements with the plane of the zenith sector first facing east and then west, he successfully avoided any systematic errors arising from collimating the sector.[1] To determine the deflection due to the mountain, it was necessary to account for the curvature of the Earth: an observer moving north or south will see the local zenith shift by the same angle as any change in geodetic latitude. After accounting for observational effects such as precession, aberration of light and nutation, Maskelyne showed that the difference between the locally determined zenith for observers north and south of Schiehallion was 54.6 arc seconds.[6] Once the surveying team had provided a difference of 42.94″ latitude between the two stations, he was able to subtract this, and after rounding to the accuracy of his observations, announce that the sum of the north and south deflections was 11.6″.[3][6][15] Maskelyne published his initial results in the Philosophical Transactions of the Royal Society in 1775,[15] using preliminary data on the mountain's shape and hence the position of its center of gravity. This led him to expect a deflection of 20.9″ if the mean densities of Schiehallion and the Earth were equal.[3][16] Since the deflection was about half this, he was able to make a preliminary announcement that the mean density of the Earth was approximately double that of Schiehallion. A more accurate value would have to await completion of the surveying process.[15] Maskelyne took the opportunity to note that Schiehallion exhibited a gravitational attraction, and thus all mountains did; and that Newton's inverse square law of gravitation had been confirmed.[15][17] An appreciative Royal Society presented Maskelyne with the 1775 Copley Medal; the biographer Chalmers later noting that "If any doubts yet remained with respect to the truth of the Newtonian system, they were now totally removed".[18] ### Surveying The work of the surveying team was greatly hampered by the inclemency of the weather, and it took until 1776 to complete the task.[16][lower-alpha 1] To find the volume of the mountain, it was necessary to divide it into a set of vertical prisms and compute the volume of each. The triangulation task falling to Charles Hutton was considerable: the surveyors had obtained thousands of bearing angles to more than a thousand points around the mountain.[19] Moreover, the vertices of his prisms did not always conveniently coincide with the surveyed heights. To make sense of all his data, he hit upon the idea of interpolating a series of lines at set intervals between his measured values, marking points of equal height. In doing so, not only could he easily determine the heights of his prisms, but from the swirl of the lines one could get an instant impression of the form of the terrain. Hutton thus used contour lines, which became in common use since for depicting cartographic relief.[6][19] Hutton's solar system density table Body Density, kg·m−3 Hutton, 1778[20][lower-alpha 2] Modern value[21] Sun1,1001,408 Mercury9,2005,427 Venus5,8005,204 Earth4,5005,515 Moon3,1003,340 Mars3,3003,934 Jupiter1,1001,326 Saturn  410  687 Hutton had to compute the individual attractions due to each of the many prisms that formed his grid, a process which was as laborious as the survey itself. The task occupied his time for a further two years before he could present his results, which he did in a hundred-page paper to the Royal Society in 1778.[20] He found that the attraction of the plumb-bob to the Earth would be 9,933 times that of the sum of its attractions to the mountain at the north and south stations, if the density of the Earth and Schiehallion had been the same.[19] Since the actual deflection of 11.6″ implied a ratio of 17,804:1 after accounting for the effect of latitude on gravity, he was able to state that the Earth had a mean density of ${\displaystyle {\tfrac {17,804}{9,933}}}$, or about ${\displaystyle {\tfrac {9}{5}}}$ that of the mountain.[16][19][20] The lengthy process of surveying the mountain had not therefore greatly affected the outcome of Maskelyne's calculations. Hutton took a density of 2,500 kg·m−3 for Schiehallion, and announced that the density of the Earth was ${\displaystyle {\tfrac {9}{5}}}$ of this, or 4,500 kg·m−3.[19] In comparison with the modern accepted figure of 5,515 kg·m−3,[21] the density of the Earth had been computed with an error of less than 20%. That the mean density of the Earth should so greatly exceed that of its surface rocks naturally meant that there must be more dense material lying deeper. Hutton correctly surmised that the core material was likely metallic, and might have a density of 10,000 kg·m−3.[19] He estimated this metallic portion to occupy some 65% of the diameter of the Earth.[20] With a value for the mean density of the Earth, Hutton was able to set some values to Jérôme Lalande's planetary tables, which had previously only been able to express the densities of the major solar system objects in relative terms.[20] ## Repeat experiments A more accurate measurement of the mean density of the Earth was made 24 years after Schiehallion, when in 1798 Henry Cavendish used an exquisitely sensitive torsion balance to measure the attraction between large masses of lead. Cavendish's figure of 5,448 ± 33 kg·m−3 was only 1.2% from the currently accepted value of 5,515 kg·m−3, and his result would not be significantly improved upon until 1895 by Charles Boys.[lower-alpha 3] The care with which Cavendish conducted the experiment and the accuracy of his result has led his name to since be associated with it.[22] John Playfair carried out a second survey of Schiehallion in 1811; on the basis of a rethink of its rock strata, he suggested a density of 4,560 to 4,870 kg·m−3,[23] though the then elderly Hutton vigorously defended the original value in an 1821 paper to the Society.[3][24] Playfair's calculations had raised the density closer towards its modern value, but was still too low and significantly poorer than Cavendish's computation of some years earlier. Arthur's Seat, the site of Henry James's 1856 experiment The Schiehallion experiment was repeated in 1856 by Henry James, director-general of the Ordnance Survey, who instead used the hill Arthur's Seat in central Edinburgh.[6][14][25] With the resources of the Ordnance Survey at his disposal, James extended his topographical survey to a 21-kilometre radius, taking him as far as the borders of Midlothian. He obtained a density of about 5,300 kg·m−3.[3][16] An experiment in 2005 undertook a variation of the 1774 work: instead of computing local differences in the zenith, the experiment made a very accurate comparison of the period of a pendulum at the top and bottom of Schiehallion. The period of a pendulum is a function of g, the local gravitational acceleration. The pendulum is expected to run more slowly at altitude, but the mass of the mountain will act to reduce this difference. This experiment has the advantage of being considerably easier to conduct than the 1774 one, but to achieve the desired accuracy, it is necessary to measure the period of the pendulum to within one part in one million.[13] This experiment yielded a value of the mass of the Earth of 8.1 ± 2.4 × 1024 kg,[26] corresponding to a mean density of 7,500 ± 1,900 kg·m−3.[lower-alpha 4] A modern re-examination of the geophysical data was able to take account of factors the 1774 team could not. With the benefit of a 120-km radius digital elevation model, greatly improved knowledge of the geology of Schiehallion, and in particular a computer, a 2007 report produced a mean Earth density of 5,480 ± 250 kg·m−3.[27] When compared to the modern figure of 5,515 kg·m−3, it stood as a testament to the accuracy of Maskelyne's astronomical observations.[27] ## Mathematical procedure Schiehallion force diagram Consider the force diagram to the right, in which the deflection has been greatly exaggerated. The analysis has been simplified by considering the attraction on only one side of the mountain.[23] A plumb-bob of mass m is situated a distance d from P, the centre of mass of a mountain of mass MM and density ρM. It is deflected through a small angle θ due to its attraction F towards P and its weight W directed towards the Earth. The vector sum of W and F results in a tension T in the pendulum string. The Earth has a mass ME, radius rE and a density ρE. The two gravitational forces on the plumb-bob are given by Newton's law of gravitation: ${\displaystyle F={\frac {GmM_{M}}{d^{2}}},\quad W={\frac {GmM_{E}}{r_{E}^{2}}}}$ where G is Newton's gravitational constant. G and m can be eliminated by taking the ratio of F to W: ${\displaystyle {\frac {F}{W}}={\frac {GmM_{M}/d^{2}}{GmM_{E}/r_{E}^{2}}}={\frac {M_{M}}{M_{E}}}\left({\frac {r_{E}}{d}}\right)^{2}={\frac {\rho _{M}}{\rho _{E}}}{\frac {V_{M}}{V_{E}}}\left({\frac {r_{E}}{d}}\right)^{2}}$ where VM and VE are the volumes of the mountain and the Earth. Under static equilibrium, the horizontal and vertical components of the string tension T can be related to the gravitational forces and the deflection angle θ: ${\displaystyle W=T\cos \theta ,\quad F=T\sin \theta }$ Substituting for T: ${\displaystyle \tan \theta ={\frac {F}{W}}={\frac {\rho _{M}}{\rho _{E}}}{\frac {V_{M}}{V_{E}}}\left({\frac {r_{E}}{d}}\right)^{2}}$ Since VE, VM and rE are all known, θ has been measured and d has been computed, then a value for the ratio ρE : ρM can be obtained:[23] ${\displaystyle {\frac {\rho _{E}}{\rho _{M}}}={\frac {V_{M}}{V_{E}}}\left({\frac {r_{E}}{d}}\right)^{2}{\frac {1}{\tan \theta }}}$ ## Notes 1. During a drunken party to celebrate the end of the surveying, the northern observatory was accidentally burned to the ground, taking with it a fiddle belonging to Duncan Robertson, a junior member of the surveying team. In gratitude for the entertainment Robertson's playing had provided Maskelyne during the four months of astronomical observations, he compensated him by replacing the lost violin with one that is now called The Yellow London Lady. 2. Hutton's values are expressed as common fractions as a multiple of the density of water, e.g. Mars ${\displaystyle {\tfrac {10}{3}}}$. They have been expressed here as two significant-digit integer, multiplied by a water density of 1000 kg·m−3 3. A value of 5,480 kg·m−3 appears in Cavendish's paper. He had however made an arithmetical error: his measurements actually led to a value of 5,448 kg·m−3; a discrepancy that was not found until 1821 by Francis Baily. 4. Taking the volume of the Earth to be 1.0832 × 1012 km3. ## References 1. Davies, R.D. (1985). "A Commemoration of Maskelyne at Schiehallion". Quarterly Journal of the Royal Astronomical Society. 26 (3): 289–294. Bibcode:1985QJRAS..26..289D. 2. Newton (1972). Philosophiæ Naturalis Principia Mathematica. II. p. 528. ISBN 0-521-07647-1. Translated: Andrew Motte, First American Edition. New York, 1846 3. Sillitto, R.M. (31 October 1990). "Maskelyne on Schiehallion: A Lecture to The Royal Philosophical Society of Glasgow". Retrieved 28 December 2008. 4. Cornu, A.; Baille, J. B. (1873). "Mutual determination of the constant of attraction and the mean density of the earth". Comptes rendus de l'Académie des sciences. 76: 954–958. 5. Poynting, J.H. (1913). The Earth: its shape, size, weight and spin. Cambridge. pp. 50–56. 6. Poynting, J. H. (1894). The mean density of the earth (PDF). pp. 12–22. 7. Mentzer, Robert (August 2003). "How Mason & Dixon Ran Their Line" (PDF). Professional Surveyor Magazine. Retrieved 3 August 2021. 8. Tretkoff, Ernie. "This Month in Physics History June 1798: Cavendish weighs the world". American Physical Society. Retrieved 3 August 2021. 9. Maskelyne, N. (1772). "A proposal for measuring the attraction of some hill in this Kingdom". Philosophical Transactions of the Royal Society. 65: 495–499. Bibcode:1775RSPT...65..495M. doi:10.1098/rstl.1775.0049. 10. Danson, Edwin (2006). Weighing the World. Oxford University Press. pp. 115–116. ISBN 978-0-19-518169-2. 11. Hewitt, Rachel (2010). Map of a Nation: A Biography of the Ordnance Survey. Granta Books. ISBN 9781847084521. 12. Danson, Edwin (2006). Weighing the World. Oxford University Press. p. 146. ISBN 978-0-19-518169-2. 13. "The "Weigh the World" Challenge 2005" (PDF). countingthoughts. 23 April 2005. Retrieved 28 December 2008. 14. Poynting, J.H. (1913). The Earth: its shape, size, weight and spin. Cambridge. pp. 56–59. 15. Maskelyne, N. (1775). "An Account of Observations Made on the Mountain Schiehallion for Finding Its Attraction". Philosophical Transactions of the Royal Society. 65: 500–542. doi:10.1098/rstl.1775.0050. 16. Poynting, J. H.; Thomson, J. J. (1909). A text-book of physics (PDF). pp. 33–35. ISBN 1-4067-7316-6. 17. Mackenzie, A.S. (1900). The laws of gravitation; memoirs by Newton, Bouguer and Cavendish, together with abstracts of other important memoirs (PDF). pp. 53–56. 18. Chalmers, A. (1816). The General Biographical Dictionary. 25. p. 317. 19. Danson, Edwin (2006). Weighing the World. Oxford University Press. pp. 153–154. ISBN 978-0-19-518169-2. 20. Hutton, C. (1778). "An Account of the Calculations Made from the Survey and Measures Taken at Schehallien". Philosophical Transactions of the Royal Society. 68. doi:10.1098/rstl.1778.0034. 21. "Planetary Fact Sheet". Lunar and Planetary Science. NASA. Retrieved 2 January 2009. 22. Jungnickel, Christa; McCormmach, Russell (1996). Cavendish. American Philosophical Society. pp. 340–341. ISBN 978-0-87169-220-7. 23. Ranalli, G. (1984). "An Early Geophysical Estimate of the Mean Density of the Earth: Schehallien, 1774". Earth Sciences History. 3 (2): 149–152. doi:10.17704/eshi.3.2.k43q522gtt440172. 24. Hutton, Charles (1821). "On the mean density of the earth". Proceedings of the Royal Society. 25. James (1856). "On the Deflection of the Plumb-Line at Arthur's Seat, and the Mean Specific Gravity of the Earth". Proceedings of the Royal Society. 146: 591–606. doi:10.1098/rstl.1856.0029. JSTOR 108603. 26. "The "Weigh the World" Challenge Results". countingthoughts. Retrieved 28 December 2008. 27. Smallwood, J.R. (2007). "Maskelyne's 1774 Schiehallion experiment revisited". Scottish Journal of Geology. 43 (1): 15–31. doi:10.1144/sjg43010015. S2CID 128706820.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 9, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8616755604743958, "perplexity": 1853.9737806913452}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057347.80/warc/CC-MAIN-20210922102402-20210922132402-00221.warc.gz"}
http://mathhelpforum.com/pre-calculus/184673-plotting-equation-print.html
# plotting an equation • July 16th 2011, 11:31 PM nimishak plotting an equation I usually have problem in plotting the equations, eg y=2ux/u-x (plotting y wrt x). Can someone suggest a link where i can get a quick idea of how to do it? Thanks! • July 17th 2011, 04:17 AM skeeter Re: plotting an equation Quote: Originally Posted by nimishak I usually have problem in plotting the equations, eg y=2ux/u-x (plotting y wrt x). Can someone suggest a link where i can get a quick idea of how to do it? Thanks! what does "u" represent? • July 17th 2011, 04:31 AM nimishak Re: plotting an equation 'u' is a constant. • July 17th 2011, 04:48 AM skeeter Re: plotting an equation Quote: Originally Posted by nimishak 'u' is a constant. what you have is a rational function of the form $y = \frac{ax}{b-x}$ where $\, a,b \,$ are both constants. vertical asymptote $x = b$ and horizontal asymptote $y = -a$. here is a link to a series of lessons about graphing rational functions ... Graphing Rational Functions: Introduction
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8222142457962036, "perplexity": 2900.040109807731}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430456861731.32/warc/CC-MAIN-20150501050741-00045-ip-10-235-10-82.ec2.internal.warc.gz"}
https://www.sangakoo.com/en/unit/logarithms-definition-and-properties
# Logarithms: definition and properties It is known that $$5^3=125$$, but what happens in case that the unknown is the exponent? $$5^x=125$$ In the previous example, it is enough to multiply $$5$$ by itself until we obtain $$125$$. $$5\cdot5\cdot5=125$$ After multiplying $$5$$ three times, $$125$$ is obtained, so the value of the exponent is $$3$$. In the following example: $$3^x=2187$$ $$3\cdot3\cdot3\cdot3\cdot3\cdot3\cdot3\cdot=2.187$$ So the exponent of 3 to obtain $$2.187$$ is $$7$$. There is a more practical way of finding out the exponents without having to multiply until finding the number: the logarithms. In the first example $$5^3=125$$, if we apply a logarithm, we obtain the following expression: $$log_5 125=3$$$where $$5$$ is the base of the logarithm (as it was in the power), and the expression is read as logarithm of $$125$$ to base $$5$$. If we apply logarithms in the second example: $$log_3 2.187=7$$$ Namely logarithm of $$2.187$$ to base $$3$$. Bearing in mind that the general expression of a power is $$a^n=x$$$the general expression of a logarithm is: $$log_a x=n$$$ This expression allows us to calculate the number $$n$$ to which the number $$a$$ must be raised in order to produce the number $$x$$. It is only possible to calculate the logarithm of a positive number $$> 0$$ and its base must be $$> 0$$ and not equal to $$1$$. $$log_3 0$$ It is not possible to express $$0$$ as a power of $$3$$. In fact, there is no such number that multiplied by himself results in $$0$$, therefore it is not possible to calculate. $$log_1 20$$ There is no way of expressing $$20$$ as a power with base $$1$$ because $$1^n=1$$ Raising a number to $$1$$ does not really make sense, therefore it makes no sense to calculate the logarithm to base $$1$$. We can deduce, therefore, that the base of a logarithm has to be a number greater than $$1$$. But, if it is only possible to calculate the logarithm of a number $$> 0$$, does the logarithm of $$1$$ exist? $$log_2 1$$ If we express $$1$$ as a power of base $$2$$: $$log_2 1=log_2 2^0$$ since $$2^0=1$$ For this reason $$log_2 1=log_2 2^0=0$$ The example allows to deduce that, in the general expression of a logarithm $$log_a x=n$$, when $$x=1$$, the value of the logarithm, no matter its base, it will always be $$0$$, since the only exponent to which it is possible to raise a number to obtain $$1$$ is $$0$$. In other words, since: $$a^0=1$$ then $$log_a 1=0$$. Calculating simple logarithms can be done immediately if we express the value of $$x$$ as a power of the same base as the logarithm. Continuing with the initial example: $$log_5 125=log_5 5^3=3$$$So, $$3$$ is the number to which it is necessary to raise $$5$$ to obtain $$125$$. More cases: $$log_2 4=log_2 2^2=2$$$ So that $$2$$ is the number to which it is necessary to raise $$2$$ to obtain $$4$$. $$log_{10} 1.000=log_{10} 10^3=3$$ Therefore $$3$$ is the number to which it is necessary to raise $$10$$ to obtain $$1.000$$. These examples introduce one of the properties of the logarithms: $$log_a x^y = y \cdot log_a x$$$But the logarithm of $$a$$ to base $$a$$ is always $$1$$. $$log_2 2=1$$$ because the number to which it is necessary to raise $$2$$ to obtain $$2$$ can be only $$1$$. So that $$log_a a^n=n\cdot 1=n$$\$ Before reaching the exercises, it is necessary to remember that, being related to the powers, the logarithms are also related with the roots, since: $$\sqrt[n]{a}=a^{\frac{1}{n}}=x$$ Then, in this case: $$log_a x=\dfrac{1}{n}$$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.990441620349884, "perplexity": 1735.394089973534}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145282.57/warc/CC-MAIN-20200220193228-20200220223228-00285.warc.gz"}
https://www.physicsforums.com/threads/de-sitter-space.315377/
# De Sitter space 1. May 20, 2009 ### ramparts Hey all - I'm trying to get a better qualitative understanding of de Sitter space, and I'm a bit confused about the de Sitter universe. I've always seen it mathematically as a hyperboloid embedded in a higher-dimensional Minkowski space (with some annoying metric I've forgotten) but I've read (alright, fine, on Wikipedia :P) that a patch of de Sitter universe can be expressed as an FRW cosmology with a(t) = exp(Ht) - So... under that description, why is the de Sitter metric not just an FRW metric with that scale factor? 2. May 21, 2009 ### George Jones Staff Emeritus 3. May 21, 2009 ### Mosis I always did understand de Sitter space through its cosmological description - a flat, vacuum filled universe that undergoes exponential expansion (equivalently, constant Hubble paramater). What about this description is unsatisfactory? 4. May 22, 2009 ### ramparts Thanks George! Don't have my Wald or Carroll with me so I couldn't look it up.... The problem was that I was used to seeing the de Sitter metric in a form similar to eqn 1.94 in George's link, so I was a bit confused by the description of it as a FRW cosmology with a=exp(Ht). The coordinate transformation makes sense (or will, when I think about it some more .
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8799985647201538, "perplexity": 1135.912065744422}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589536.40/warc/CC-MAIN-20180716232549-20180717012549-00413.warc.gz"}
https://www.ncbi.nlm.nih.gov/pubmed/8494558
Format Choose Destination Bioelectromagnetics. 1993;14(2):173-86. # Design and characterization of a system for exposure of cultured cells to extremely low frequency electric and magnetic fields over a wide range of field strengths. ### Author information 1 Center for Biomedical Engineering, Lexington, KY 40506-0070. ### Abstract A system is described that is capable of producing extremely low frequency (ELF) magnetic fields for relatively short-term exposure of cultured mammalian cells. The system utilizes a ferromagnetic core to contain and direct the magnetic field of a 1,000 turn solenoidal coil and can produce a range of flux densities and induced electric fields much higher than those produced by Helmholtz coils. The system can generate magnetic fields from the microtesla (microT) range up to 0.14 T with induced electric field strengths on the order of 1.0 V/m. The induced electric field can be accurately varied by changing the sample chamber configuration without changing the exposure magnetic field. This gives the system the ability to separate the bioeffects of magnetic and induced electric fields. In the frequency range of 4-100 Hz and magnetic flux density range of 0.005-0.14 T, the maximum total harmonic distortion of the induced electric field is typically less than 1.0%. The temperature of the samples is held constant to within 0.4 degrees C by constant perfusion of warmed culture medium through the sample chamber. PMID: 8494558 [Indexed for MEDLINE]
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9204484224319458, "perplexity": 1110.6931808940021}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267160337.78/warc/CC-MAIN-20180924090455-20180924110855-00397.warc.gz"}
https://www.hackingmath.com/author/rick/
Start with a graph that has a radius of $3$ you know how this graph looks like it's just a circle with a radius of $3$ » Graphing the trigonometric functions can be a bit tricky especially if you are going to be doing this by hand. I'm going to be showing you » The phase shift of a graph determines if the graph is going to be shifted left or right on the x-plane of the graph. Asin[B( » The period of a trigonometric function is closely related to the frequency of the function. They are related but not the exact same thing. Period vs » The frequency is closely related to the period of the base trigonometric functions. Since we are using the definition of the length of the given circle » To find the vertical shift of a trigonometric function you will need to take a close look at the function that you are viewing. Let's start » The amplitude of a trig function defines how much the graph is going to be getting stretched or compressed on the y-axis. Take for example the » This is the most basic of an idea but it's an important idea that you need to understand to be able to comprehend higher levels of » table { font-family: arial, sans-serif; border-collapse: collapse; width: 100%; } td, th { border: 1px solid #dddddd; text-align: left; padding: 8px; } Degrees Radians sin(x) cos(x) tan(x) » This post is not going to be about how to use your fingers or sing a song to memorize the common trigonometric angles. Instead of having » A right triangle is a triangle that contains one right angle and two acute angles which all add up to 180° degrees.right\ angle=90°acute\ » Question: The Giant Wheel at Cedar Point Amusement Park is a circle with diameter 128 feet which sits on an 8 foot tall platform making its » The circumference of a circle is the length that composes a circle. If you were to undo a circle and lay it down as a flat » Linear speed is how fast the arc of any given angle is growing. This is useful to determine the linear speed with the relationship to time. » To find any given theta angle you will need to know the arc length and the radius. Any given angle gives the rise to the arc, » Angular speed is how fast the central angle changes in respect to time. Below you will see an angle that is changing by $\theta$ distance. With » Two acute angles that equal to the sum of 90 degrees are complementary angles. $$\alpha =90°-\beta$$ » The DMS system also referred to as Degree-Minutes-Seconds is a system that is typically used by surveying a position that requires longitude and latitude. In the » The DMS system also referred to as Degree-Minutes-Seconds is a system that is typically used by surveying a position that requires longitude and latitude. In the » A ray is created based on an initial point and expanded out to n length. » Acute Angle $$cos(x)>0\Rightarrow 0°< x < 90°$$ Obtuse Angle $$cos(x)<0\Rightarrow 90°< x <180°$$ » Here is a list of degrees to radians conversions that you need to have memorized. You need to be able to recite each one of this » The function tan(x) $$tan(x)=\frac { opposite }{ adjacent }$$ The inverse of the function $$tan(x)=\frac { adjacent }{ opposite }$$ Or you can also represent tan(x) » The function cos(x) $$cos(x)=\frac { adjacent }{ hypotenuse }$$ The inverse of the function $$sec(x)=\frac { hypotenuse }{ adjacent }$$ » The function sin(x) $$sin(x)=\frac { opposite }{ hypotenuse }$$ The inverse of the function $$csc(x)=\frac { hypotenuse }{ opposite }$$ » The follow triangle shows the relationships between $x$ and the relating sides. Soh $$sin(x)=\frac { opposite }{ hypotenuse }$$ Cah $$cos(x)=\frac { adjacent }{ hypotenuse }$$ Toa $$tan( » To convert degrees to radians you will need to multiply degrees by pi and divide by 180.$$f(x)=\frac { \pi (x) }{ 180 } $$» To convert radians to degrees you will need to multiply the radians by 180 and divide by pi.$$ f(x)=\frac { 180(x) }{ \pi } $$»$$1°=\frac { 1 }{ 360 }$$»$$ y-{ y }{ 1 }=m(x-{ x }{ 1 }) $$» A radian is a unit measurement of an angle.$$2\pi = 360°\pi = 180°1radian=(\frac { 180 }{ \pi } )\approx 57.3°$$» An angle is composed of three different parts. Vertex Terminal Initial Side » This single one image shows you every single relationship from a triangle to a circle and the lines that join both of this shapes together. » Finding the vertical asymtote is the easiest at of all them. Yes! Don't make a big deal out of it it's simple. To locate the vertical » Finding the horizontal asymtote is a bit more tricky then just the simple vertical asymtote. You have to take to consideration a couple of things before » You have 200 yards of fencing and wish to enclose a rectangular area. Create a function such that it expresses width of the rectangular area. The »$$\frac { rise }{ run } =\frac { Δy }{ Δx } =\frac {y2-y1}{ x2-x1 }$$» To determine if a factor is part of a given equation you have to find out if the devision of that factor by the polynomial will » Find the equation of a hyperbola with following characteristics.$$Origin: (0,0)\ Focus: (3,0)\ Vertex: (-2,0)$$a is the distance from the origin of » Hyperbola is a collection of all points in a plane, where the difference of the distance from two fixed points is the foci is a constant. » Find the equation of an ellipse with the following.$$Center: (0,0)\ Focus: (3,0)\ Vertex: (-4,0) a is the distance from the origin of » An ellipse is a collection of all points in a plane which the sum of two fixed points is called the foci a constant. Using the » To analyze an equation of a parabola their special characteristics that can be looked at to determine what type of parabola it is. Example: Let's get » Find the equation of the parabola: Before getting started with the algebra the problem provides the following information. Focus: (-4,0) Vertex: (0,0) Now let's » To figure out the equation of a parabola we must first understand what is a parabola? The equation of a parabola has to have the following. » A parabola is a collection of a all points in a plane that are the same distance from a fixed point as they are from a » When working with logarithmic equations think about each one as a representative of and exponential equation because it is it's inverse. Example: » To solve exponential functions you must first get the equation to be the same base left and right in the LOWEST POSSIBLE BASE. Example » 1.The domain is the set of all real numbers, and the range is the set of positive numbers 2. There are no x-intercepts; the y »
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8902778029441833, "perplexity": 490.07248346788293}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526408.59/warc/CC-MAIN-20190720024812-20190720050812-00310.warc.gz"}
https://www.intechopen.com/books/vortex-dynamics-and-optical-vortices/spin-wave-dynamics-in-the-presence-of-magnetic-vortices/
InTechOpen uses cookies to offer you the best online experience. By continuing to use our site, you agree to our Privacy Policy. Physics » "Vortex Dynamics and Optical Vortices", book edited by Hector Perez-de-Tejada, ISBN 978-953-51-2930-1, Print ISBN 978-953-51-2929-5, Published: March 1, 2017 under CC BY 3.0 license. © The Author(s). # Spin-Wave Dynamics in the Presence of Magnetic Vortices By Sławomir Mamica DOI: 10.5772/66099 Article top ## Overview Figure 1. Exchange vs. dipolar interactions. Preferred configuration of magnetic moments depends on the sign of the exchange integral J for the exchange interaction while on the alignment of magnetic moments for the dipolar interaction. Figure 2. (a) Different preferred configurations of the in-plane magnetization component in dots of different shape. (b) Core-vortex vs. in-plane vortex. Figure 3. Schematic plots of two in-plane vortices typical for two types of rings: (a) a circular magnetization in a circular ring and (b) closure domains (Landau state) in a square ring. Both rings are based on a 2D square lattice with magnetic moments (represented by the arrows) arranged in the lattice sites. To the right in figure (a), the local coordinate system associated with the magnetic moment indicated by the arrow. Figure 4. (a) An exemplar spin-wave spectrum calculated for the circular dot of the diameter L=101 lattice constants consists of 8000 spins. The in-plane vortex configuration is assumed and the dipolar-to-exchange interaction ratio d is set to 0.42. The inset shows 14 lowest modes of the spectrum. (b) Spin-wave profiles of 14 lowest modes of the spectrum shown in (a). Figure 5. The frequency dependence on the dipolar-to-exchange interaction ratio d (in logarithmic scale) for 36 lowest modes in the spin-wave spectrum of the circular dot of the diameter L=101 in the in-plane vortex state. The color assignment is indicated at the right; the colors repeat cyclically for successive modes. There are no zero-frequency modes between two critical values d1 and d2 which is indicative for the stability of the assumed magnetic configuration. Figure 6. (a) Critical values d1 and d2 vs. the dot size (the number of magnetic moments within the dot, in logarithmic scale) for circular dot in the in-plane vortex state. (b) Exemplar profiles of the lowest mode in circular dots for d≈d1 (left profile) and for d≈d2 (right profile). Above each profile, its section along the indicated lines. Figure 7. (a) The dependence of the lowest mode frequency vs. dipolar-to-exchange interaction ratio d (in logarithmic scale) in circular dots of different diameter L with the in-plane vortex as a magnetic configuration. On every curve, the crossing between first- and second-order azimuthal modes is marked with a black square (if exists). (b, c) Evolution of the lowest mode profile with d in dots of diameters 51 and 101, respectively. Figure 8. (a) The frequency dependence on the dipolar-to-exchange interaction ratio d (in logarithmic scale) for 25 lowest modes in the spin-wave spectrum of the circular ring of the external diameter L=25 and the internal one L′=2 in the in-plane vortex state. (b) The evolution of the lowest mode profile. Profiles are calculated for six values of d marked with arrows in (a). Figure 9. (a) Critical value d1 vs. the internal diameter L′ of the circular ring for four external diameters L. The dashed line in (a) indicates the value of d for Co/Cu(001) calculated from experimental results reported in reference [38]. (b) Spin-wave profiles of the soft mode in circular rings calculated for d≈d1 for two internal diameters L′. The external diameter of rings is fixed at L=23. To the right of each profile, its section along the indicated lines. Figure 10. (a) Critical value d1 vs. the internal size L′ of the square ring for four external sizes L. (b) Spin-wave profiles of the soft mode in square rings calculated for d≈d1 for three internal sizes L′. The external size of rings is fixed at L=22. Above each profile, its section along the indicated lines. # Spin-Wave Dynamics in the Presence of Magnetic Vortices Sławomir Mamica Show details ## Abstract This chapter describes spin-wave excitations in nanosized dots and rings in the presence of the vortex state. The special attention is paid to the manifestation of the competition between exchange and dipolar interactions in the spin-wave spectrum as well as the correlation between the spectrum and the stability of the vortex. The calculation method uses the dynamic matrix for an all-discrete system, the numerical diagonalization of which yields the spectrum of frequencies and spin-wave profiles of normal modes of the dot. We study in-plane vortices of two types: a circular magnetization in circular dots and rings and the Landau state in square rings. We examine the influence of the dipolar-exchange competition and the geometry of the dot on the stability of the vortex and on the spectrum of spin waves. We show that the lowest-frequency mode profile proves to be indicative of the dipolar-to-exchange interaction ratio and the vortex stability is closely related to the spin-wave profile of the soft mode. The negative dispersion relation is also shown. Our results obtained for in-plane vortices are in qualitative agreement with results for core-vortices obtained from experiments, micromagnetic simulations, and analytical calculations. Keywords: magnetic dot, in-plane vortex, spin waves, stability, dipolar-exchange competition ## 1. Introduction One of the hottest topics nowadays are small magnetic dots and rings with a thickness in a range of few tens of nanometers and the diameter ranging from one hundred nanometers to a few micrometers. A strong interest in such systems originates from their potential applicability as well as rich physics [1]. The physical properties of magnetic nanodots are related mostly to the concurrence of two types of magnetic interactions, namely exchange and dipolar ones. Usually, the coexistence of long- and short-distance interactions leads to new phenomena, such as surface and subsurface localization of the spin waves in layered magnetic systems [2, 3], opening of the band gaps in magnonic crystals [4, 5], or splitting the spin-wave spectrum into subbands in patterned multilayers [6, 7]. In the case of exchange and dipolar interactions, the situation is even more interesting due to competitive effects of these two schematically shown in Figure 1. #### Figure 1. Exchange vs. dipolar interactions. Preferred configuration of magnetic moments depends on the sign of the exchange integral J for the exchange interaction while on the alignment of magnetic moments for the dipolar interaction. The favorable alignment of two magnetic moments (also called them spins) coupled via exchange interaction depends on the sign of the so-called exchange integral, J, regardless of their mutual position. If J>0 the spins are parallel (ferromagnetic, FM, coupling) while for J<0 the spins are antiparallel (antiferromagnetic, AFM, coupling). Dipolar coupling, on the other hand, depends on the mutual positions of spins being FM if the spins are aligned one after another and AFM for spins alongside one another (see Figure 1). As a result, the ferromagnetic exchange interaction forces parallel configuration of spins leading to the magnetic monodomain whereas pure dipolar interaction leads to the in-plane alignment of spins and so-called labyrinth magnetic structures [8]. Additionally, the dipolar interaction is a long range one and consequently very sensitive for size and shape of the sample while the exchange interaction is local. Thus, the competition between these two also depends on the size and shape of the system. The concurrence of these to competitive interactions is the origin of the variety of possible magnetic configurations and leads to the occurrence of magnetic vortices in nanosized dots and rings [912]. In the vortex configuration, a magnetization component lying in the plane of the dot forms a closure state. Depending on the shape of the system, this in-plane magnetization can be realized as a circular magnetization in circular dots and rings or as a Landau state (closure domain configuration) in square rings, as shown in Figure 2a. In square dots, according to the simulations [13], the magnetic configuration is a mixture of these two states: along borders Landau state appears, which is the effect of the minimization of the surface magnetic charges, while in the central part of the dot, the magnetization is circular as a result of the tendency to decrease the (local) exchange energy. The area of circular magnetization is relatively small; therefore, in large-square dots, the Landau state prevails in the major part of the dot. However, in small dots, the circular in-plane magnetization fails to fit the geometry of the system only in minor corner regions. #### Figure 2. (a) Different preferred configurations of the in-plane magnetization component in dots of different shape. (b) Core-vortex vs. in-plane vortex. For strong exchange interaction, the circular in-plane configuration is not enough to minimize the exchange energy at the vortex center (which is not necessary the dot center, however, for the stable vortex its center is in close vicinity to the center of the dot). As a consequence, spins at the center are rotated from their in-plane alignment (forced by dipolar interactions) forming so-called vortex core, a tiny region with a nonzero out-of-plane component of magnetization (Figure 2b). In typical ferromagnets, such as cobalt or permalloy, the exchange interaction is strong thus in experiments the vortex core is observed [1416]. In rings, the center of the vortex is removed from the sample, thus the magnetization lies in the plane of the dot throughout its volume [17] except rings with extremely small internal radius [18]. The potential applications of the magnetic vortex itself increase from the possibility of the switching of core polarity (up or down) and chirality (the direction of the in-plane magnetization: clockwise, CW, or counterclockwise, CCW), and these two can be switched independently [19, 20]. In square dots, beside the vortex core, domain walls appear as well at the borders between domains. Roughly speaking, there are two types of domain walls: with and without nonzero out-of-plane magnetization (Bloch and Néel type, respectively) [21]. Thus, in the first case, the total out-of-plane magnetization is not zero even without the vortex core. Consequently, the out-of-plane magnetization can differ from zero in square rings in which the core does not appear. As we will show later, the preferred type of domain walls depends on the competition between exchange and dipolar interactions. There are two types of magnetic excitations in magnetic dots in the vortex state. First one is a gyrotropic mode, i.e., the precession of the vortex core around the dot center. This is a low-frequency excitation with the frequency usually in the range of hundreds of MHz, and it can be utilized to microwave generation [22, 23]. The second type are spin waves; high-frequency excitations with the frequency of several GHz [24]. The spin-wave excitations are normal modes of the confined magnetic system similar to the vibration of the membrane. They prove to be of a key importance for the vortex switching [25], can be used to generate the higher harmonics of the microwave radiation [26], and have a significant influence on the vortex stability [27, 28]. In this chapter, we study the stability of the magnetic vortex state and the spin-wave excitations spectrum in two-dimensional (2D) nanosized dots and rings in their dependence on the competition between dipolar and exchange interactions. We use a very efficient method based on the discrete version of the Landau-Lifshitz equation. Our theoretical approach is described in Section 2. In next sections, we present our results starting with the circular dot in which the in-plane circular vortex is assumed as a magnetic state. In Section 3, we analyze an exemplar spin-wave spectrum of the dot showing typical effects such as the negative dispersion relation and the influence of the lattice symmetry on the spin-wave spectrum. In Section 4, we examine the stability of the in-plane vortex vs. the dipolar-to-exchange interaction ration (d) and the size of the dot. The influence of the competition between dipolar end exchange interactions on the spin-wave spectrum of a dot is studied in Section 5. In next two sections, we consider the influence of the spin-wave profile of the soft mode on the vortex stability in circular (Section 6) and square rings (Section 7). Finally, we provide some concluding remarks in Section 8. ## 2. The model The object of our study is a dot (ring) cut out of a 2D lattice of elementary magnetic moments (Figure 3). For circular dots, the external size L is defined as the number of lattice constants in the diameter of the circle used for cutting out the dot. The internal size of the ring, L, is the radius of the inner circle (in units of the lattice constant). For square rings, L means the number of lattice sites along the side of the square. Similarly, L means the side of the removed square. In linear approximation used in this work, the magnetic moment MR, where R is the position vector, can be expressed as a sum of two components: static, M0,R, and dynamic, mR, with the assumption that |mR||MR|, |M0,R||MR|, and mRM0,R. For any magnetic moment within the dot, we can define a local Cartesian coordinate system as follows: unit vector iR is parallel to the static component M0,R, unit vector jR is oriented toward the vortex center lying in the plane of the dot, and unit vector kR is the third Cartesian unit vector being perpendicular to the other two. In this coordinate system, a dynamic component of the magnetic moment is mR=mj,RjR+mk,RkR, where mj,R and mk,R we will refer to as in-plane and perpendicular coordinates of the magnetic moment, respectively. For in-plane vortices, the last component is always perpendicular to the plane of a dot. #### Figure 3. Schematic plots of two in-plane vortices typical for two types of rings: (a) a circular magnetization in a circular ring and (b) closure domains (Landau state) in a square ring. Both rings are based on a 2D square lattice with magnetic moments (represented by the arrows) arranged in the lattice sites. To the right in figure (a), the local coordinate system associated with the magnetic moment indicated by the arrow. The time evolution of any magnetic moment MR is described by the damping-free Landau-Lifshitz (LL) equation, which in the linear approximation reads: iωγμ0mR=M0,R×hR+mR×HR, (1) where i is the imaginary unit, γ is the gyromagnetic ratio, μ0 is the vacuum permeability, and ω is the frequency of harmonic oscillations of mR. HR and hR are static and dynamic components of the effective field HReff=HR+hR acting on the magnetic moment MR. In this work, we consider exchange-dipolar systems only thus the effective field consists of two components: HReff=2Jμ0(gμB)2RNNMR+14πa3RR(3(RR)(MR(RR))|RR|5MR|RR|3). The first term comes from the exchange interaction and can be derived from the Heisenberg Hamiltonian under the condition of uniform interactions. Since we restrict ourselves to nearest neighbor (NN) interactions the summation runs over NNs of the magnetic moment MR. Here J is the NN exchange integral, μB is the Bohr magneton, and g is the g-factor. The second term is a typical dipolar sum over all magnetic moments within the sample except MR. The position vectors R are expressed in the units of the lattice constant a. From Eq. (1) one can derive the system of equations of motion for dynamic components of all magnetic moments as follows: iΩmr,R=−∑R′∈NN(R)kR⋅mR′+mk,R∑R′∈NN(R)iR⋅iR′−d(∑R′≠R(3[(R′−R)⋅kR][(R′−R)⋅mR′]|R′−R|5−kR⋅mR′|R′−R|3)+mk,R∑R′≠R(3[(R′−R)⋅iR][(R′−R)⋅iR′]|R′−R|5−iR⋅iR′|R′−R|3))iΩmk,R=∑R′∈NN(R)jR⋅mR′−mr,R∑R′∈NN(R)iR⋅iR′+d(∑R′≠R(3[(R′−R)⋅jR][(R′−R)⋅mR′]|R′−R|5−jR⋅mR′|R′−R|3)−mr,R∑R′≠R(3[(R′−R)⋅iR][(R′−R)⋅iR′]|R′−R|5−iR⋅iR′|R′−R|3)), (2) where Ω=(gμBω)/(2γSJ) is the reduced frequency of a spin-wave excitation, S is the spin (we assume that all spins within the dot are the same thus any magnetic moment is equal MR=gμBS), and d is the only material parameter of the model referred to as a dipolar-to-exchange interaction ratio given by: d=(gμB)2μ08πa3J. (3) The above system of equations can be represented as an eigenvalue problem the matrix of which is called a dynamic matrix. The diagonalization of the dynamic matrix leads to the spectrum of frequencies and profiles of normal excitations of the dot. The spin-wave profile is a spatial distribution of the dynamic components of magnetic moments, i.e., the distribution of the amplitude of the magnetic moment precession. Dynamic components obtained from diagonalization are complex numbers with a phase shift π/2 between the real and imaginary part, which gives T/4 shift in time, where T=2π/ω is a period of oscillations for a given mode. Usually, the distribution of these components obtained for the same mode is similar and differ in the intensity only. Therefore, if the situation is clear, it is sufficient to provide one part (Re or Im) of the one component (in-plane or out-of-plane) to explain the character of the mode. The spin-wave profiles in circular dots are marked as (n, m) similarly to the vibrations of a membrane, i.e., accordingly to the number of nodal lines in the radial (n) and azimuthal (m) direction. The azimuthal modes occur in pairs: (n, −m) and (n, +m) with both modes of the same character; thus, in this work we denote them just (n, m), with m denoting |m|. ## 3. Spin-wave spectrum of a circular dot In Figure 4a shows an example of the spin-wave spectrum obtained for a circular dot of the diameter L=101. The dot is cut out from the square lattice and contains 8000 spins. A magnetic configuration is assumed to form an in-plane vortex. The spectrum is calculated for the dipolar-to-exchange interaction ratio d=0.42. The shape of the spectrum is typical for exchange-dipolar systems: for low-frequency modes, the shape of the spectrum is determined by the dipolar interaction, while for the high frequencies by the exchange one. Of course, the spectrum is discrete, which is clearly seen in the inset where frequencies of 14 lowest modes are presented. Among these modes, one can distinguish a pairs of modes of the same frequencies. For example, modes 1 and 2, 7 and 8, 9 and 10, or 13 and 14 are degenerate in pairs. On the other hand, modes 3–6, 11 and 12 have unique frequencies. #### Figure 4. (a) An exemplar spin-wave spectrum calculated for the circular dot of the diameter L=101 lattice constants consists of 8000 spins. The in-plane vortex configuration is assumed and the dipolar-to-exchange interaction ratio d is set to 0.42. The inset shows 14 lowest modes of the spectrum. (b) Spin-wave profiles of 14 lowest modes of the spectrum shown in (a). To investigate this feature, we provide spin-wave profiles of the lowest eight modes in Figure 4b. As we can see that these 14 modes include seven pairs of modes with the same absolute value of the azimuthal number. Degenerate modes are of the odd azimuthal number: (0,3) modes 1 and 2, (0,5) modes 7 and 8, (0,1) modes 9 and 10, and (0,7) modes 13 and 14. In contrast, for even azimuthal numbers, degeneration is lifted. This originates from the discreteness of the lattice the dot is cut out from. If the symmetry of the profile matches the symmetry of the lattice, the degeneration is removed. For example, mode 3 has two nodal lines coincide with high-spin density lines (along the x and y axes in Figure 3a). Its counterpart, i.e., mode 4 is rotated by π/4 having antinodal lines along the high-spin density lines. This situation is analogous to the boundary of the Brillouin zone in the periodic system where the energy gap appears between two excitations: one having nodes in the potential wells and the other one having antinodes. Indeed, if the dot is based on the square lattice it can be considered as a system periodic in the azimuthal direction. A unit cell in this case corresponds to a quarter of the dot and is delimited by high-spin density lines. In such picture, one-half of the wavelength of modes (0,2) fits the unit cell with nodes or antinodes at the unit cell boundary. The same rule holds for hexagonal lattice where the degeneration is lifted if the azimuthal number is divisible by 3 [29]. It is worth to noting that there is also another type of degeneracy lifting caused by the coupling of the azimuthal modes with the gyrotropic mode [30, 31] which is not related to the discreteness of the dot and appears even for first-order azimuthal modes. In our work, this is not the case since we assume coreless vortex as a magnetic configuration. For the dot under consideration, radial and azimuthal numbers are related to the wave vector in the corresponding direction. Thus, the spectrum shown in Figure 4 exhibits negative dispersion relation for modes (0,1), (0,2), and (0,3), i.e., for this modes, the frequency decreases with an increase of azimuthal number. Such negative dispersion was also observed for core-vortices in circular dots experimentally [32, 33] and by means of analytical calculations [34, 35]. It was found that in a dot of a fixed thickness the increase in the diameter will cause the mode order to change, namely it will cause the negative dispersion to be stronger (the modes with higher azimuthal numbers will descend the spectrum). We show that this effect originates in the influence of the dipolar interaction regardless it is enhanced by the size of the system or by change of the dipolar-to-exchange interaction ratio. ## 4. Stability of the in-plane vortex The dependence of the spin-wave spectrum on d is shown in Figure 5 for the dot under consideration (L=101, 8000 spins). For intermediate values of the dipolar-to-exchange interaction ratio, there are no zero-frequency modes in the spectrum which means that the assumed in-plane vortex is a (meta)stable magnetic configuration (see, e.g., our discussion in reference [36]). Going toward smaller values of d the exchange interaction gains the importance until d=d1. From this point, the frequency of the lowest mode is zero and the in-plane vortex is no more stable (or even metastable); the lowest mode becomes the nucleation mode responsible for the reorientation of the magnetic configuration. The profile of this mode reflects the tendency of the system to find a new stable state. Since this transition is forced by the exchange interaction we will call it the exchange-driven reorientation (transition). While d increases, which means the dipolar interaction gains the importance, another transition appears for d=d2. In this case, the reorientation is caused by prevailing dipolar interaction; thus, it is referred to as dipolar-driven reorientation (transition). This behavior reflects the origin of the vortex state: the competition between dipolar and exchange interaction. #### Figure 5. The frequency dependence on the dipolar-to-exchange interaction ratio d (in logarithmic scale) for 36 lowest modes in the spin-wave spectrum of the circular dot of the diameter L=101 in the in-plane vortex state. The color assignment is indicated at the right; the colors repeat cyclically for successive modes. There are no zero-frequency modes between two critical values d1 and d2 which is indicative for the stability of the assumed magnetic configuration. The importance of the dipolar interaction depends, besides its dependence on d, also on the size of the system. Therefore, the critical values of d should change with the dot size. Figure 6a shows critical values d1 and d2 vs. the number of spins, N, in which the dot consists of (which is equivalent to change of the dot diameter since the system is 2D). The critical value d2 (for the dipolar driven reorientation) clearly depends on N, especially for small dots. Surprisingly, for the exchange-driven reorientation the critical value d1=0.1115 and is constant in the whole range of the dot size, i.e., from 60 to 8000 spins (L=9101). (The same value and behavior of d1 is reported in reference [37] where circular dots are studied by means of Monte Carlo simulations.) #### Figure 6. (a) Critical values d1 and d2 vs. the dot size (the number of magnetic moments within the dot, in logarithmic scale) for circular dot in the in-plane vortex state. (b) Exemplar profiles of the lowest mode in circular dots for dd1 (left profile) and for dd2 (right profile). Above each profile, its section along the indicated lines. To address this behavior of critical values in Figure 6b, we provide profiles of the lowest mode for two values of d, for dd1 (left profile) and for dd2 (right profile), for L=23 (408 spins). Both profiles are localized at the vortex center but the localization near d1 is much stronger than for d2. The reorientation at d1 is forced by the exchange interaction which is local and thus the dynamic interaction (between dynamic components of magnetic moments), confined to the very center of the dot, is not sensitive to the size of the dot. (It is not sensitive to the shape of the dot as well [28]). The second transition (d2) is forced by the long-range dipolar interaction and the dynamic interaction of this type, although localized near the center, “feels” the dot size even for rather big dots. However, due to the localization this effect fades for larger dots, which is reflected in the d2 curve in Figure 6a. For typical ferromagnetic materials, the dipolar-to-exchange interaction ratio has very small value due to strong exchange. For example, using experimental data for ultrathin cobalt film [38] from the relationship (3) we obtain dCo=0.00043 which is far below d1. Consequently, in such materials, the in-plane vortex is unstable regardless the size of the dot (since d1 is size independent). ## 5. Competition between interactions As seen from Figure 5, for the majority of modes the frequency decreases with increasing d but with different rate. As a result, the order of modes in the spectrum changes with d; this effect is particularly intensive at the bottom of the spectrum. In particular, the mode of the lowest frequency has different symmetry of its profile in different ranges of d (compare Figure 7). The modes with the decreasing frequency can be divided into two groups: first one contains purely azimuthal modes (radial number equal zero). Within this group, the rate of the decreasing frequency grows with the increasing azimuthal number. However, above c.a. 55 GHz, this rate is visibly lower for another group of modes with the radial number 1. Within this second group, the situation repeats: for the mode (1,m) the frequency decrease rate is almost the same as for the mode (0,m) and it grows with increasing m. It shows that the impact of the dipolar-to-exchange interaction ratio on the mode frequency is determined mainly by its azimuthal number, the radial number being of little influence. #### Figure 7. (a) The dependence of the lowest mode frequency vs. dipolar-to-exchange interaction ratio d (in logarithmic scale) in circular dots of different diameter L with the in-plane vortex as a magnetic configuration. On every curve, the crossing between first- and second-order azimuthal modes is marked with a black square (if exists). (b, c) Evolution of the lowest mode profile with d in dots of diameters 51 and 101, respectively. Besides the localized mode, there is one more mode in Figure 5 the frequency of which acts in different way than the majority; in the broad range of d its frequency is almost constant. This mode, called fundamental mode, is an analogue of the uniform excitation [35]. Its profile is almost uniform within the dots without any nodal lines in azimuthal nor a radial direction thus it is labeled as (0,0). Highly uniform profile is the origin of the independence of the frequency of the fundamental mode on d. As we already noticed that the mode order in the spin-wave spectrum is influenced by the dipolar-to-exchange interaction ratio and by the size of the dot, thus influences the character of the lowest mode. Figure 7a shows the dependence of the lowest mode frequency on d for different size of the dot. Figure 7b and c provides mode profiles for some values of d for two dot diameters: 51 and 101, respectively. Close to the critical value d1 the profile is strongly localized at the vortex center regardless the dot size. This strong localization together with the short range of the exchange interaction (responsible for the magnetic reorientation below d1) results in not only the independence of d1 on the size and shape of the dot but also the frequency vs. d dependence is the same for the dot of any size. In this range of the dipolar-to-exchange interaction ratio, the lowest mode is a soft mode but with growing d its frequency increases rapidly and the mode ascend the spectrum very fast causing crossings with azimuthal modes of decreasing frequencies. After the first crossing, the azimuthal mode becomes the lowest in the spectrum. For small dots (L<100), the mode (0,1) is the lowest one after crossing with the localized mode. While d continues to increase till next crossing appears and (0,2) mode becomes the lowest. The point of crossing of these modes shifts to the smaller d with increasing size of the dot (see Figure 7a). Finally, for L=101, the crossing between modes (0,1) and (0,2) takes place for lower d than the crossing with the localized mode. In a consequence, the first-order azimuthal mode is not the lowest one for any d. On the other hand, higher order modes may have the lowest frequency while d is growing (compare Figure 7b and c). Here, we observe a general tendency of two interactions in question. The dipolar interaction favors higher order azimuthal modes. Thus, modes with the increasing azimuthal number m fall successively to the bottom of the spectrum as this type of interaction gains in importance regardless of whether their strengthen is due to the size (L) or material (d) of the dot. The exchange interaction in contrast favors modes with m = 1. Thus, the competition between the exchange and dipolar interaction manifests itself not only in the preferred magnetic configuration but also in the profile of the lowest frequency modes. This rule changes if the vortex is close to unstable, i.e., close to the critical value of d. In this case, the soft mode is strongly localized at the vortex center. But even here this localized mode has nodal lines in the azimuthal direction for strong dipolar interaction and uniform for strong exchange interaction. ## 6. Circular rings In circular rings, the central part of the dot is removed along with the vortex center. This causes significant reduction in the influence of the exchange interaction and consequently should result in the stabilization of the in-plane vortex for lower values of the dipolar-to-exchange interaction ratio. Figure 8a shows the typical dependence of the spin-wave spectrum in circular rings on d. The exemplar ring has external diameter L=25 and internal one L=2 which means that only four central magnetic moments are removed from the dot. The overall character of the picture is very similar to that for the dot shown in Figure 5 with two exceptions: the range of the in-plane vortex stability and the behavior of the soft mode above d1. (The decreasing of the frequency with growing d is much faster mostly due to the smaller external diameter.) #### Figure 8. (a) The frequency dependence on the dipolar-to-exchange interaction ratio d (in logarithmic scale) for 25 lowest modes in the spin-wave spectrum of the circular ring of the external diameter L=25 and the internal one L=2 in the in-plane vortex state. (b) The evolution of the lowest mode profile. Profiles are calculated for six values of d marked with arrows in (a). Just above d1 the frequency of the soft mode increases steeply, as a consequence of increasing stability of the system, but before first crossing with the azimuthal mode the frequency slows down and finally becomes almost independent on d. The profile of this mode is shown in Figure 8b for d=0.01; it is no more localized. Instead of this, the mode is a fundamental mode (0,0) being almost uniform within the ring. Due to the lack of the topological defect, there is no reason for the localization. Other profiles provided in Figure 8b illustrate the change of the character of the lowest mode. Even if the external diameter of the ring is rather small, higher order azimuthal modes are the lowest for large enough d: (0,3) for d=1.3 and (0,4) for d=2.0. In full dots these modes could be the lowest for c.a. 4 times larger diameter which reflects the change of the balance between exchange and dipolar interaction after removing only few magnetic moments from the center of the dot. The removing of these four central magnetic moments has also great impact on the stability of the in-plane vortex, as it should be expected. The critical value d1 decreases from 0.115 for the full dot down to 0.0052 for the ring under consideration. However, this new critical value is still much larger than the value of d in common ferromagnetic materials. Figure 9a shows the change of the critical value d1 with increasing the internal diameter of the ring for four external diameters: 23, 33, 43, and 63. In contrast to full dots in the rings d1 visibly depends on both internal and external diameters (though for very small internal diameter the influence of the external size is weak). The increase in any diameter of the ring enhances the stability of the in-plane vortex. As a result, this magnetic configuration is stable even for such a material as cobalt if the ring is large enough (d1<dCo). #### Figure 9. (a) Critical value d1 vs. the internal diameter L of the circular ring for four external diameters L. The dashed line in (a) indicates the value of d for Co/Cu(001) calculated from experimental results reported in reference [38]. (b) Spin-wave profiles of the soft mode in circular rings calculated for dd1 for two internal diameters L. The external diameter of rings is fixed at L=23. To the right of each profile, its section along the indicated lines. The enhancement of the in-plane vortex stability due to the increasing of its internal diameter is rather obvious if we notice that the local exchange interaction between neighboring magnetic moments increases with decreasing distance from the vortex center (due to the change in the angle between them). In this context, the removal of the bigger circle from the center of the dot means the decreasing of the exchange interaction at the internal edge of the ring. Of course, this change in the exchange energy at the border should be visible in spin-wave profiles. To illustrate this effect, we calculate the profiles of the lowest mode for dd1 for the ring of the external diameter L=23 and two different internal diameters, L=2 and L=8, shown in Figure 9b. Successive removing of the central part of the dot results in decreasing of the amplitude of the magnetic moments precession (smaller intensity of the profile) at the internal edge of the ring. On the other hand, the amplitude is slightly increased in the rest of the ring, especially at the outer edge. For larger hole in the ring, the profile is almost uniform in radial direction and d1 is very little dependent on L. This nonzero intensity of the spin-wave profile reaching the external edge of the ring explains the influence of the external diameter on the critical value d1. ## 7. Square rings In square rings, the in-plane vortex takes the form of the Landau state (closure domain configuration, see Figure 2). Unlike circular rings, here the magnetization along internal and external edges has the same conditions (no curvature). Another difference is the existence of domains walls. To see how these dissimilarities influence the in-plane vortex stability in Figure 10a we show the critical value d1 vs. the internal size L for square rings of different external size L. Similarly to the circular rings, the removal of the central part of the dot results in the drop of d1, i.e., the in-plane vortex becomes stable for stronger exchange interaction. The critical value changes from 0.115 to 0.049. This time, in contrast to the previous case, this value is constant for broad range of the internal size of the ring. Additionally, d1 does not depend on the external size of the ring as well. Therefore, the in-plane vortex (with the domain walls of Néel type) is not stable in square rings made from typical ferromagnetic materials. #### Figure 10. (a) Critical value d1 vs. the internal size L of the square ring for four external sizes L. (b) Spin-wave profiles of the soft mode in square rings calculated for dd1 for three internal sizes L. The external size of rings is fixed at L=22. Above each profile, its section along the indicated lines. To explain this behavior, Figure 10b shows spin-wave profiles of the lowest mode for dd1 for square rings of the external size L=22 and three different internal sizes: L=0 (full dot), L=2, and L=16. Removing of the central part of a dot, even just few magnetic moments, destroys the central localization of the lowest mode as it was in the case of circular dots, but now the localization is shifted to the corners of the resultant ring. Such corner-localized profile is not affected by the change of the size of the ring in large range of both, internal and external size. Again, the strongly localized spin-wave profile together with the local character of the exchange interaction causes the critical value of d for the exchange-driven reorientation to be independent on the size of the system. The high amplitude of the spin wave at the corners suggests also the increasing of the out-of-plane component of magnetization which means the formation of the Bloch-type domain walls. ## 8. Concluding remarks In this chapter, we have shown our results concerning spin wave normal modes in nanosized dots and rings in the presence of the in-plane magnetic vortex. In experiments, in-plane vortices are observed in rings while in full dots made from typical ferromagnetic materials (e.g., cobalt or permalloy) the vortex core is formed at the vortex center [30]. Our results obtained for circular dots are consistent with this observation: in-plane vortex is stable in such a system for very weak exchange interaction, much weaker than in usual ferromagnets. We obtain the critical dipolar-to-exchange interaction ratio d1=0.1115 (which corresponds to the exchange integral J=0.058 eV) and this value is the same as received from Monte Carlo simulations in reference [37]. This critical value does not depend on the size of a dot which is also in agreement with simulations [37]. An interesting finding is the stability of the in-plane vortex in rings. In circular rings, the removal of a central part of a dot brings the dependence of d1 on both diameters of the ring (external and internal) and, consequently, the in-plane vortex becomes stable even for strong exchange if the ring is large enough. In square rings, the situation is completely different: d1 does not depend on any size of the ring (except extremely narrow rings). The critical value d1 is reduced in comparison with full dots though not enough to stabilize the in-plane vortex. Therefore, in square rings made from usual ferromagnetic materials, the in-plane vortex is not stable (due to the preferred type of domain walls). For the in-plane vortex configuration in full dots we found similar phenomena as reported from experiments, micromagnetic simulations, and analytical calculations, except those which arise from the existence of the gyrotropic motion of the vortex core, e.g., the splitting of the spin-wave frequency due to the coupling to the gyrotropic mode [31]. The qualitative agreement between results for in-plane and core vortices is an effect of the existence of the vortex center. Even without the out-of-plane component of the magnetization, the center of the vortex plays the role of the topological defect in the same manner as the vortex core. This defect acts as a nucleation center if d reaches its critical value and cause the localization of the soft mode. On the other hand, the properties of the spin waves in the presence of the vortex originate from the competition between exchange and dipolar interaction; thus the effects such as negative dispersion relation or diversity of the lowest mode profiles are similar for both types of vortices: with and without the core. In our model, the dot is cut out from a discrete lattice which obviously has a consequence in the results. If the symmetry of azimuthal modes matches the symmetry of the lattice, the frequency of modes with opposite azimuthal numbers splits. Also the fundamental mode, an analogue of the uniform excitation, has nonuniform spin-wave profile whose symmetry reflects the symmetry of the lattice. (A similar effect was observed in micromagnetic simulations due to the artificial discretization of a sample [3941].) In the case of circular dots and rings based on the discrete lattice, the edges are not smooth circles and cannot be smoothed as it is in continuous systems with artificial discretization, e.g., in micromagnetic simulations [42]. With the size of the ring, the edge smoothness increases but even for rather small dots (a dozen of lattice constants in the diameter) we obtain self-consistent results. In this work, the method described in Section 2 is used for 2D dots and rings but its applicability is far beyond these simple systems. It can be used for 2D or 3D systems of an arbitrary shape, size, lattice, or magnetic configuration. Moreover, if the exchange interaction is neglected the method can be applied for nonperiodic systems too. Also interactions taken into account are not limited to dipolar and exchange only (the model with the anisotropy and the external field taken into account is derived in reference [43]). The main disadvantage of our approach is the lack of simulations; the assumption, instead of the simulation, of the magnetic configuration is useful for very simple magnetic configurations only. On the other hand, in comparison with time-domain simulations, the time of calculations is very short, and the spin-wave spectrum is obtained directly from diagonalization of the dynamic matrix (without the usage of the Fourier transformation). For simple magnetic configurations, our results are in perfect agreement with simulations [37, 13, 44]. In the case of more complicated systems, the simulations should be used for finding the stable magnetic configuration and for the simulated configuration the dynamical matrix method can be used to obtain the spin-wave spectrum. ## Acknowledgements The author thanks Jean-Claude Serge Lévy and Maciej Krawczyk for valuable discussions. The author received fund from Polish National Science Centre project DEC-2-12/07/E/ST3/00538 and from the EUs Horizon2020 research and innovation program under the Marie Sklodowska-Curie GA No644348. ## References 1 - Lau JW, Shaw JM. Magnetic nanostructures for advanced technologies: fabrication, metrology and challenges. J. Phys. D. 2011;44:303001. DOI: 10.1088/0022-3727/44/30/303001 2 - Puszkarski H, Lévy J-CS, Mamica S. Does the generation of surface spin-waves hinge critically on the range of neighbour interaction? Phys. Lett. A. 1998;246:347–352. DOI: 10.1016/S0375-9601(98)00518-0 3 - Mamica S, Puszkarski H, Lévy J-CS. The role of next-nearest neighbours for the existence conditions of subsurface spin waves in magnetic films. Phys. Status Solidi B. 2000;218:561–569. DOI: 10.1002/1521-3951(200004)218:2<561::AID-PSSB561>3.0.CO;2-Q 4 - Mamica S, Krawczyk M, Kłos JW. Spin-wave band structure in 2D magnonic crystals with elliptically shaped scattering centres. Adv. Cond. Mat. Phys. 2012;2012:161387. DOI: 10.1155/2012/161387 5 - Romero Vivas J, Mamica S, Krawczyk M, Kruglyak VV. Investigation of spin wave damping in three-dimensional magnonic crystals using the plane wave method. Phys. Rev. B. 2012;86:144417. DOI: 10.1103/PhysRevB.86.144417 6 - Krawczyk M, Mamica S, Kłos JW, Romero Vivas J, Mruczkiewicz M, Barman A. Calculation of spin wave spectra in magnetic nanograins and patterned multilayers with perpendicular anisotropy. J. Appl. Phys. 2011;109:113903. DOI: 10.1063/1.3586249 7 - Pal S, Rana B, Saha S, Mandal R, Hellwig O, Romero Vivas J, Mamica S, Kłos JW, Mruczkiewicz M, Sokolovskyy ML, Krawczyk M, Barman A. Time-resolved measurement of spin-wave spectra in CoO capped [Co(t)/Pt(7 angstrom)](n-1) Co(t) multilayer systems. J. Appl. Phys. 2012;111:07C507. DOI: 10.1063/1.3672857 8 - Vedmedenko EY, Oepen HP, Ghazali A, Lévy J-CS, Kirschner J. Magnetic microstructure of the spin reorientation transition: a computer experiment. Phys. Rev. Lett. 2000;84:5884–5887. DOI: 10.1103/PhysRevLett.84.5884 9 - Vaz CAF, Kläui M, Bland JAC, Heyderman LJ, David C, Nolting F. Fundamental magnetic states of disk and ring elements. Nucl. Instrum. and Meth. B. 2006;246:13–19. DOI: 10.1016/j.nimb.2005.12.006 10 - Metlov KL, Lee YP. Map of metastable states for thin circular magnetic nanocylinders. Appl. Phys. Lett. 2008;92:112506. DOI: 10.1063/1.2898888 11 - Zhang W, Singh R, Bray-Ali N, Haas S. Scaling analysis and application: phase diagram of magnetic nanorings and elliptical nanoparticles. Phys. Rev. B. 2008;77:144428. DOI: 10.1103/PhysRevB.77.144428 12 - Chung S-H, McMichael RD, Pierce DT, Unguris J. Phase diagram of magnetic nanodisks measured by scanning electron microscopy with polarization analysis. Phys. Rev. B. 2010;81:024410. DOI: 10.1103/PhysRevB.81.024410 13 - Depondt P, Lévy J-CS, Mamica S. Vortex polarization dynamics in a square magnetic nanodot. J. Phys. Condens. Matter. 2013;25:466001. DOI: 10.1088/0953-8984/25/46/466001 14 - Shinjo T, Okuno T, Hassdorf R, Shigeto K, Ono T. Magnetic vortex core observation in circular dots of permalloy. Science. 2000;289:930–932. DOI: 10.1126/science.289.5481.930 15 - Wachowiak A, Wiebe J, Bode M, Pietzsch O, Morgenstern M, Wiesendanger R. Direct observation of internal spin structure of magnetic vortex cores. Science. 2002;298:577–580. DOI: 10.1126/science.1075302 16 - Miltat J, Thiaville A. Vortex cores – smaller than small. Science. 2002;298:555. DOI: 10.1126/science.1077704 17 - Li SP, Peyrade D, Natali M, Lebib A, Chen Y, Ebels U, Buda LD, Ounadjela K. Flux closure structures in cobalt rings. Phys. Rev. Lett. 2001;86:1102–1105. DOI: 10.1103/PhysRevLett.86.1102 18 - Mamica S. Stabilization of the in-plane vortex state in two-dimensional circular nanorings. J. Appl. Phys. 2013;113:093901. DOI: 10.1063/1.4794004 19 - Vavassori P, Metlushko V, Ilic B. Domain wall displacement by current pulses injection in submicrometer permalloy square ring structures. Appl. Phys. Lett. 2007;91:093114. DOI: 10.1063/1.2777156 20 - Jain S, Adeyeye AO. Probing the magnetic states in mesoscopic rings by synchronous transport measurements in ring-wire hybrid configuration. Appl. Phys. Lett. 2008;92:202506. DOI: 10.1063/1.2936089 21 - Morrish AH. The Physical Principles of Magnetism. Wiley-IEEE Press. New York. 2001. ISBN: 978-0-7803-6029-7 22 - Pribiag VS, Krivorotov IN, Fuchs GD, Braganca PM, Ozatay O, Sankey JC, Ralph DC, Buhrman RA. Magnetic vortex oscillator driven by d.c. spin-polarized current. Nat. Phys. 2007;3:498–503. DOI: 10.1038/nphys619 23 - Guslienko KY. Spin torque induced magnetic vortex dynamics in layered nanopillars. J. Spintron. Magn. Nanomater. 2012;1:70–74. DOI: 10.1166/jsm.2012.1007 24 - Guslienko KY, Scholz W, Chantrell RW, Novosad V. Vortex-state oscillations in soft magnetic cylindrical dots. Phys. Rev. B. 2005;71:144407. DOI: 10.1103/PhysRevB.71.144407 25 - Bauer HG, Sproll M, Back CH, Woltersdorf G. Vortex core reversal due to spin wave interference. Phys. Rev. Lett. 2014;112:077201. DOI: 10.1103/PhysRevLett.112.077201 26 - Demidov VE, Ulrichs H, Urazhdin S, Demokritov SO, Bessonov V, Gieniusz R, Maziewski A. Resonant frequency multiplication in microscopic magnetic dots. Appl. Phys. Lett. 2011;99:012505. DOI: 10.1063/1.3609011 27 - Mozaffari MR, Esfarjani K. Spin dynamics characterization in magnetic dots. Phys. B. 2007;399:81–93. DOI: 10.1016/j.physb.2007.05.023 28 - Mamica S, Lévy J-CS, Depondt P, Krawczyk M. The effect of the single-spin defect on the stability of the in-plane vortex state in 2D magnetic nanodots. J. Nanopart. Res. 2011;13:6075–6083. DOI: 10.1007/s11051-011-0308-0 29 - Mamica S. Spin-wave spectra and stability of the in-plane vortex state in two-dimensional magnetic nanorings. J. Appl. Phys. 2013;114:233906. DOI: 10.1063/1.4851695 30 - Hoffmann F, Woltersdorf G, Perzlmaier K, Slavin AN, Tiberkevich VS, Bischof A, Weiss D, Back CH. Mode degeneracy due to vortex core removal in magnetic disks. Phys. Rev. B. 2007;76:014416. DOI: 10.1103/PhysRevB.76.014416 31 - Guslienko KY, Slavin AN, Tiberkevich V, Kim S-K. Dynamic origin of azimuthal modes splitting in vortex-state magnetic dots. Phys. Rev. Lett. 2008;101:247203. DOI: 10.1103/PhysRevLett.101.247203 32 - Buess M, Haug T, Scheinfein MR, Back CH. Micromagnetic dissipation, dispersion, and mode conversion in thin permalloy platelets. Phys. Rev. Lett. 2005;94:127205. DOI: 10.1103/PhysRevLett.94.127205 33 - Buess M, Knowles TPJ, Hollinger R, Haug T, Krey U, Weiss D, Pescia D, Scheinfein MR, Back CH. Excitations with negative dispersion in a spin vortex. Phys. Rev. B. 2005;71:104415. DOI: 10.1103/PhysRevB.71.104415 34 - Ivanov BA, Zaspel CE. High frequency modes in vortex-state nanomagnets. Phys. Rev. Lett. 2005;94:027205. DOI: 10.1103/PhysRevLett.94.027205 35 - Zivieri R, Nizzoli F. Theory of spin modes in vortex-state ferromagnetic cylindrical dots. Phys. Rev. B. 2005;71:014411. DOI: 10.1103/PhysRevB.71.014411 36 - Mamica S, Lévy J-CS, Krawczyk M, Depondt P. Stability of the Landau state in square two-dimensional magnetic nanorings. J. Appl. Phys. 2012;112:043901. DOI: 10.1063/1.4745875 37 - Rocha JCS, Coura PZ, Leonel SA, Dias RA, Costa BV. Diagram for vortex formation in quasi-two-dimensional magnetic dots. J. Appl. Phys. 2010;107:053903. DOI: 10.1063/1.3318605 38 - Vollmer R, Etzkorn M, Kumar PSA, Ibach H, Kirschner J. Spin-wave excitation in ultrathin Co and Fe films on Cu(001) by spin-polarized electron energy loss spectroscopy (invited). J. Appl. Phys. 2004;95:7435. DOI: 10.1063/1.1689774 39 - Giovannini L, Montoncello F, Zivieri R, Nizzoli F. Spin excitations in nanometric magnetic dots: calculations and comparison with light scattering measurements. J. Phys. Condens. Matter. 2007;19:225008. DOI: 10.1088/0953-8984/19/22/225008 40 - Montoncello F, Giovannini L, Nizzoli F, Zivieri R, Consolo G, Gubbiotti G. Spin-wave activation by spin-polarized current pulse in magnetic nanopillars. J. Magn. Magn. Mater. 2010;322:2330–2334. DOI: 10.1016/j.jmmm.2010.02.033 41 - Wang R, Dong X. Sub-nanosecond switching of vortex cores using a resonant perpendicular magnetic field. Appl. Phys. Lett. 2012;100:082402. DOI: 10.1063/1.3687909 42 - Usov NA, Peschany SE. Magnetization curling in a fine cylindrical particle. J. Magn. Magn. Mater. 1993;118:L290. DOI: 10.1016/0304-8853(93)90428-5 43 - Mamica S. Vortices in two-dimensional nanorings studied by means of the dynamical matrix method. Low Temp. Phys. 2015;41:806–816. DOI: 10.1063/1.4932355 44 - Mamica S, Lévy J-CS, Krawczyk M. Effects of the competition between the exchange and dipolar interactions in the spin-wave spectrum of two-dimensional circularly magnetized nanodots. J. Phys. D. 2014;47:015003. DOI: 10.1088/0022-3727/47/1/015003
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8410394787788391, "perplexity": 1624.0394190433396}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645405.20/warc/CC-MAIN-20180317233618-20180318013618-00333.warc.gz"}
https://udspace.udel.edu/handle/19716/1547
Chesapeake Bay Sediment Flux Model 1993-06 Authors DiToro, Dominic M. Fitzpatrick, James J. Description The Chesapeake Bay Model development project has as it goal the development of a comprehensive model of eutrophication in the estuary. It is a mass balance model that relates the inputs of nutrients to the growth and death of hytoplankton and the resulting extent and duration of the hypoxia and anoxia. The aim is to identify and quantify the causal chain that begins with nutrient inputs and ends with the dissolved oxygen distributions in space and time. The modeling framework is based on a mass balance of the carbon, nitrogen, phosphorus, silica, and dissolved oxygen in the bay. It requires a detailed specification of the transport that affects all these components and the kinetics that describe the growth and death of phytoplankton biomass, the nutrient cycling, and the resulting dissolved oxygen distribution in the bay and estuaries. A critical component of the model is the role of sediments in recycling nutrients and consuming oxygen. This report presents the formulation and calibration of a sediment model which quantifies these processes within the context of mass balances in the sediment compartment. Keywords Hypoxia , Anoxia , Nutrient cycling , Phytoplankton biomass
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.803105890750885, "perplexity": 2261.802227303593}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945030.59/warc/CC-MAIN-20230323065609-20230323095609-00063.warc.gz"}
https://www.researchgate.net/publication/24393712_Tunnelling_magnetic_resonances_Dynamic_nuclear_polarisation_and_the_diffusion_of_methyl_group_tunnelling_energy
# Tunnelling magnetic resonances: Dynamic nuclear polarisation and the diffusion of methyl group tunnelling energy ArticleinJournal of Magnetic Resonance 199(1):10-7 · May 2009with16 Reads Impact Factor: 2.51 · DOI: 10.1016/j.jmr.2009.03.013 · Source: PubMed Abstract The dynamic nuclear polarisation (DNP) of (1)H spins arising from methyl tunnelling magnetic resonances has been investigated in copper-doped zinc acetate dihydrate using field-cycling NMR spectroscopy at 4.2K. The tunnel resonances appear in the field range 20-50 mT and trace out the envelope of the electron spin resonance spectrum of the Cu(2+) ion impurities. By investigating the DNP line shapes as a function of time, the cooling of the methyl tunnel reservoir has been probed. The role of spectral diffusion of tunnelling energy in determining the DNP line shapes has been investigated through experiments and numerical simulations based on a theoretical model that describes the time evolution of the (1)H polarisation and the tunnelling temperature. The model is discussed in detail in comparison with the experiments. All effects have been studied as a function of Cu(2+) ion concentration. • ##### A dedicated spectrometer for dissolution DNP NMR spectroscopy [Hide abstract] ABSTRACT: Using low temperature dynamic nuclear polarisation (DNP) in conjunction with dissolution makes it possible to generate highly polarised nuclear spin systems for liquid state applications of nuclear magnetic resonance spectroscopy. However, in its current implementation, which requires the transfer of the solute between two different magnets, the hyperpolarisation strategy is limited to spin systems with relatively long longitudinal relaxation time constants. Here we describe the design and construction of a dedicated spectrometer for DNP applications that is based on a magnet with two isocentres. DNP enhancement is carried out in the upper compartment of this magnet in a low temperature environment at 3.35 T, while a 9.4 T isocentre in the lower compartment is used for high resolution NMR spectroscopy. The close proximity (85 cm) of the two isocentres makes it possible to transfer the sample in the solid state with very little loss of spin polarisation. In first performance tests this novel experimental set-up proved to be superior to the strategy involving two separate magnets. No preview · Article · Jun 2010 · Physical Chemistry Chemical Physics +5 more authors... • ##### Long-Lived Nuclear Spin States in Methyl Groups and Quantum-Rotor-Induced Polarization [Hide abstract] ABSTRACT: Long-lived nuclear spin states have a relaxation time much longer than the longitudinal relaxation time T 1. Long-lived states extend significantly the time scales that may be probed with magnetic resonance, with possible applications to transport and binding studies, and to hyperpolarised imaging. Rapidly rotating methyl groups in solution may support a long-lived state, consisting of a population imbalance between states of different spin exchange symmetries. Here, we expand the formalism for describing the behaviour of long-lived nuclear spin states in methyl groups, with special attention to the hyperpolarisation effects observed in 13CH3 groups upon rapidly converting a material with low-barrier methyl rotation from the cryogenic solid state to a room-temperature solution [M. Icker and S. Berger, J. Magn. Reson. 219, 1 (2012)]. We analyse the relaxation properties of methyl long-lived states using semi-classical relaxation theory. Numerical simulations are supplemented with a spherical-tensor analysis, which captures the essential properties of methyl long-lived states. Full-text · Article · Nov 2013 · Journal of the American Chemical Society +6 more authors... • ##### Spin-symmetry conversion in methyl rotors induced by tunnel resonance at low temperature [Hide abstract] ABSTRACT: Field-cycling NMR in the solid state at low temperature (4.2 K) has been employed to measure the tunneling spectra of methyl (CH3) rotors in phenylacetone and toluene. The phenomenon of tunnel resonance reveals anomalies in (1)H magnetization from which the following tunnel frequencies have been determined: phenylacetone, νt = 6.58 ± 0.08 MHz; toluene, νt(1) = 6.45 ± 0.06 GHz and νt(2) = 7.07 ± 0.06 GHz. The tunnel frequencies in the two samples differ by three orders of magnitude, meaning different experimental approaches are required. In phenylacetone the magnetization anomalies are observed when the tunnel frequency matches one or two times the (1)H Larmor frequency. In toluene, doping with free radicals enables magnetization anomalies to be observed when the tunnel frequency is equal to the electron spin Larmor frequency. Cross-polarization processes between the tunneling and Zeeman systems are proposed and form the basis of a thermodynamic model to simulate the tunnel resonance spectra. These invoke space-spin interactions to drive the changes in nuclear spin-symmetry. The tunnel resonance lineshapes are explained, showing good quantitative agreement between experiment and simulations. Full-text · Article · Feb 2014 · The Journal of Chemical Physics +1 more author...
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8904211521148682, "perplexity": 3414.4475323847378}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464051054181.34/warc/CC-MAIN-20160524005054-00072-ip-10-185-217-139.ec2.internal.warc.gz"}
http://math.wikia.com/wiki/Generalized_Stokes%27_theorem
## FANDOM 1,025 Pages In vector calculus and differential geometry, the generalized Stokes' theorem or just Stokes' theorem relates the integral of a function over the boundary of a manifold to the integral of the function's exterior derivative on the manifold itself. Mathematically, it is stated as $\int_{\partial \Omega} \omega = \int_{\Omega} d \omega$ The fundamental theorem of calculus, gradient theorem, Green's theorem, divergence theorem, and Kelvin–Stokes theorem are all special cases of Stokes' theorem.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9982667565345764, "perplexity": 243.3877865941583}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886126027.91/warc/CC-MAIN-20170824024147-20170824044147-00070.warc.gz"}
http://www.weizmann.ac.il/math/staff_scientists
You are here # Staff Scientists Name Department Phone Email Koppula, Venkata Department of Computer Science and Applied Mathematics email Marron, Assaf Department of Computer Science and Applied Mathematics +972-8-9344313 +972-3-6316063 email Mendelson Cohen, Netta Department of Computer Science and Applied Mathematics email Mukamel, Zohar Department of Computer Science and Applied Mathematics +972-8-9346959 email Ron, Dorit Department of Computer Science and Applied Mathematics +972-8-9342141 email Weinberger, Adina Department of Computer Science and Applied Mathematics +972-8-9343257 email
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9620680212974548, "perplexity": 2099.0794773982957}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250598800.30/warc/CC-MAIN-20200120135447-20200120164447-00542.warc.gz"}
https://stacks.math.columbia.edu/tag/0FN6
Lemma 21.35.6. Assume given a commutative diagram $\xymatrix{ (\mathop{\mathit{Sh}}\nolimits (\mathcal{C}'), \mathcal{O}_{\mathcal{C}'}) \ar[r]_{(g', (g')^\sharp )} \ar[d]_{(f', (f')^\sharp )} & (\mathop{\mathit{Sh}}\nolimits (\mathcal{C}), \mathcal{O}_\mathcal {C}) \ar[d]^{(f, f^\sharp )} \\ (\mathop{\mathit{Sh}}\nolimits (\mathcal{D}'), \mathcal{O}_{\mathcal{D}'}) \ar[r]^{(g, g^\sharp )} & (\mathop{\mathit{Sh}}\nolimits (\mathcal{D}), \mathcal{O}_\mathcal {D}) }$ of ringed topoi. Assume 1. $f$, $f'$, $g$, and $g'$ correspond to cocontinuous functors $u$, $u'$, $v$, and $v'$ as in Sites, Lemma 7.21.1, 2. $v \circ u' = u \circ v'$, 3. $v$ and $v'$ are continuous as well as cocontinuous, 4. for any object $V'$ of $\mathcal{D}'$ the functor ${}^{u'}_{V'}\mathcal{I} \to {}^{\ \ \ u}_{v(V')}\mathcal{I}$ given by $v$ is cofinal, 5. $g^{-1}\mathcal{O}_{\mathcal{D}} = \mathcal{O}_{\mathcal{D}'}$ and $(g')^{-1}\mathcal{O}_{\mathcal{C}} = \mathcal{O}_{\mathcal{C}'}$, and 6. $g'_! : \textit{Ab}(\mathcal{C}') \to \textit{Ab}(\mathcal{C})$ is exact1. Then we have $Rf'_* \circ (g')^* = g^* \circ Rf_*$ as functors $D(\mathcal{O}_\mathcal {C}) \to D(\mathcal{O}_{\mathcal{D}'})$. Proof. We have $g^* = Lg^* = g^{-1}$ and $(g')^* = L(g')^* = (g')^{-1}$ by condition (5). By Lemma 21.20.7 it suffices to prove the result on the derived category $D(\mathcal{C})$ of abelian sheaves. Choose an object $K \in D(\mathcal{C})$. Let $\mathcal{I}^\bullet$ be a K-injective complex of abelian sheaves on $\mathcal{C}$ representing $K$. By Derived Categories, Lemma 13.30.9 and assumption (6) we find that $(g')^{-1}\mathcal{I}^\bullet$ is a K-injective complex of abelian sheaves on $\mathcal{C}'$. By Modules on Sites, Lemma 18.40.3 we find that $f'_*(g')^{-1}\mathcal{I}^\bullet = g^{-1}f_*\mathcal{I}^\bullet$. Since $f_*\mathcal{I}^\bullet$ represents $Rf_*K$ and since $f'_*(g')^{-1}\mathcal{I}^\bullet$ represents $Rf'_*(g')^{-1}K$ we conclude. $\square$ [1] Holds if fibre products and equalizers exist in $\mathcal{C}'$ and $v'$ commutes with them, see Modules on Sites, Lemma 18.16.3. In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9953268766403198, "perplexity": 270.8919908790291}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496664437.49/warc/CC-MAIN-20191111191704-20191111215704-00165.warc.gz"}
https://developers.sherpa.ai/es/privacy-technology/tutorials/dp_basic_concepts/
# Basic Concepts There is a common situation in sociology where a researcher wants to develop a study about a certain property of a population. The point is that this property is very sensitive and must remain private because it could be embarrassing or illegal. However, the researcher can gain insight into the population, without compromising their privacy. There is a simple mechanism in which every person follows a randomization algorithm that provides "plausible deniability". Consider the following algorithm (see The Algorithmic Foundations of Differential Privacy, Section 2.3): 1. Flip a coin; 2. If tails, then respond truthfully; 3. If heads, then flip a second coin and respond "Yes" if heads and "No" if tails. Even if the user answers "Yes", they could argue that this was due to the randomization algorithm and that it is not a true answer. Now, imagine that the researcher wants to use the previous algorithm to estimate the average number of people that have broken a particular law. The interesting point here is that they can obtain a very good approximation of the value, while maintaining the privacy of every individual. Let's look at a specific example to get more familiar with the situation. First, we are going to generate a binary random vector with a concrete mean given by $p$. import numpy as np import matplotlib.pyplot as plt import shfl p = 0.2 data_size = 10000 array = np.random.binomial(1,p,size=data_size) np.mean(array) 0.2001 Let's suppose that every number represents whether the person has broken the law or not. Now, we are going to generate federated data from this array. Every node is assigned one value, simulating, for instance, a piece of information on their mobile device. from math import log, exp federated_array = shfl.private.federated_operation.federate_array(array, data_size) Now, we want every node to execute the previously defined algorithm and to return the result. from shfl.differential_privacy import RandomizedResponseCoins data_access_definition = RandomizedResponseCoins() federated_array.configure_data_access(data_access_definition) # Query data result = federated_array.query() Let's compare the mean of the original vector with the mean of the returned vector. print("Generated binary vector with mean: " + str(np.mean(array))) print("Differential query mean result: " + str(np.mean(result))) Generated binary vector with mean: 0.2001 Differential query mean result: 0.3507 What happened? Obviously, we have modified the true mean of the vector applying the randomization algorithm. We need to remove the influence of the latter to get the correct result. Fortunately, we can reverse the process to get a good estimate. We are introducing noise, half of the time, with mean 0.5. So, the expected value will be: Expected estimated mean = actual mean * 0.5 + random mean * 0.5 np.mean(array) * 0.5 + 0.5*0.5 0.35005 Pretty close, isn't it? To get the corrected estimated mean value, we just need to use the following expression: Corrected estimated mean = (estimated mean - random mean * 0.5) / 0.5 (np.mean(result) - 0.5*0.5)/0.5 0.20140000000000002 Right! We have obtained an estimate that which is very similar to the true mean. However, introducing randomness comes at a cost. In the following sections, we are going to formalize the error introduced in the estimation by the differential privacy mechanism and study the effect of the population size and the privacy on the algorithm's performance. ## Differential Privacy Definition We are now going to introduce the notion of differential privacy (see The Algorithmic Foundations of Differential Privacy, Definition 2.4). Let $\mathcal X$ be the set of possible rows in a database, so that we can represent a particular database using a histogram $x\in\mathbb N^{|\mathcal X|}$. A randomized algorithm is a collection of conditional probabilities $P(z| x)$ for all $x$ and $z\in \mathcal Z$, where $\mathcal Z$ is the response space. Differential privacy is a property of some randomized algorithms. In particular, a randomized algorithm is $\epsilon$-differentially private if $\frac{P(z|x)}{P(z|y)}\le e^\epsilon$ for all $z\in \mathcal Z$, and for all databases $x, y \in\mathbb N^{|\mathcal X|}$ such that $||x-y||_1=1$. In words, this definition means that for any pair of similar databases (i.e., differing in one element only), the probability of getting the same result after randomization is similar (i.e., probabilistically bounded). ## Digging into the Randomized Response Let's get back to sociological studies. Suppose a group of $N$ people take part in a study that wants to estimate the proportion of the population that commits fraud when paying their taxes. Since this is a crime, the participants might be worried about the consequences of telling the truth, so they are told to follow the algorithm described previously. This procedure is, in fact, an $\epsilon$-differentially private randomized mechanism. In particular, given this differential privacy mechanism, we have that $P(\text{respond yes} \,|\, \text{actual yes}) = \frac{3}{4}\qquad \qquad P(\text{respond no} \,|\, \text{actual yes}) = \frac{1}{4}$ $P(\text{respond yes} \,|\, \text{actual no}) = \frac{1}{4}\qquad \qquad P(\text{respond no} \,|\, \text{actual no}) = \frac{3}{4}\,,$ and a direct computation shows that $\epsilon = \log(3)$. The probability of responding "Yes" is given by $P(\text{respond yes}) = \frac{1}{4} + \frac{p}{2}$ where $p = P(\text{actual yes})$ is the quantity of interest in the study. Using that $P(\text{respond yes})$ is estimated to be $r_P = \frac{\#{\text{(respond yes)}}}{N}$, we have an estimate for the proportion of the population that commits fraud, namely $\hat p = 2r_P - \frac{1}{2}\,.$ ### Trade-off between accuracy and privacy Now, we are going to create a set of useful functions to execute some experiments and try to understand better the behavior of the previous expressions. The framework provides an implementation of this algorithm using two parameters, namely the probabilities of getting "heads" in each of the two coin tosses. For simplicity, we are going to consider the case in which the first coin is biased but the second one remains fair. More explicitly, we consider the conditional probabilities given by $P(\text{respond yes} | \text{actual yes}) = f$ $P(\text{respond yes} | \text{actual no}) = 1-f \,.$ with $f\in [1/2,1]$. This corresponds to taking the first coin with $p({\rm{heads}}) = 2(1-f)$ and the second coin to be unbiased. In this case, the amount of privacy depends on the bias of the first coin, $\epsilon = \log \frac{f}{1-f}\,.$ For $f=1/2$, we have that $\epsilon = 0$ and the algorithm is maximally private. On the other hand, for $f = 1$, $\epsilon$ tends to infinity, so the algorithm is not private at all. The relationship between the number of positive responses and $p$ is given by $\hat p = \frac{r_P + f - 1}{2f-1}\,.$ In order to properly see the trade-off between utility and privacy, we may look at the uncertainty of the estimate, which is given by $\Delta p = \frac{2}{2f-1}\sqrt{\frac{r_P(1-r_P)}{N}}\,.$ This expression shows that, as the privacy increases ($f$ approaches $1/2$), the uncertainty of the estimate increases. On the other hand, for $f=1$, the uncertainty of the estimate is purely due to the finite size of the sample $N$. In general, we see that the price we pay for being private is in terms of the accuracy of the estimate (for a fixed sample size). (The expression for the uncertainty is computed using the normal approximation to the error of a binomial distribution.) def get_prob_from_epsilon(epsilon): f = np.exp(epsilon) / (1 + np.exp(epsilon)) return 2*(1-f) We also define the uncertainty function. def uncertainty(dp_mean, n, epsilon): f = np.exp(epsilon) / (1 + np.exp(epsilon)) estimation_uncertainty = 2/(2*f-1) * np.sqrt(dp_mean*(1-dp_mean) / n) return estimation_uncertainty Finally, we define a function to execute the experiments $n_{\rm{runs}}$ times so that we can study the average behavior. def experiment(epsilon, p, size): array = np.random.binomial(1,p,size=size) federated_array = shfl.private.federated_operation.federate_array(array, size) prob = get_prob_from_epsilon(epsilon) federated_array.configure_data_access(data_access_definition) # Query data result = federated_array.query() estimated_mean = (np.mean(result) - prob * 0.5)/(1-prob) estimation_uncertainty = uncertainty(np.mean(result), size, epsilon) return estimated_mean, estimation_uncertainty def run_n_experiments(epsilon, p, size, n_runs): uncertainties = 0 p_est = 0 for i in range(n_runs): estimated_mean, uncertainty = experiment(epsilon,p,size) p_est = p_est + estimated_mean uncertainties = uncertainties + uncertainty p_est = p_est / n_runs uncertainties = uncertainties / n_runs return p_est, uncertainties Now, we are going to execute the experiment and save the results. epsilon_range = np.arange(0.001, 10, 0.1) n_range = [100, 500, 2000] p_est = np.zeros((len(n_range), len(epsilon_range))) uncertainties = np.zeros((len(n_range), len(epsilon_range))) for i_n in range(len(n_range)): for i_e in range(len(epsilon_range)): p_est_i, uncertainty_i = run_n_experiments(epsilon_range[i_e], p, n_range[i_n], 10) p_est[i_n, i_e] = p_est_i uncertainties[i_n, i_e] = uncertainty_i We can now see the results. plt.style.use('fivethirtyeight') fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(16,6)) for i_n in range(len(n_range)): ax[0].plot(epsilon_range, p_est[i_n], label = "N = " + str(n_range[i_n])) ax[0].set_xlabel('$\epsilon$') ax[0].set_ylabel('$\hat p$') ax[0].set_xlim([0., 10]) ax[0].set_ylim([0, 1]) caption="Left: The accuracy of the estimated proportion $\hat{p}$ decreases as privacy increases, \ i.e. with smaller $\epsilon$ (the true proportion is $p=0.2$) \n \ Moreover, larger sample sizes $N$ are associated to higher accuracy of the estimated proportion $\hat{p}$.\ \nRight: For a fixed sample size $N$, the uncertainty in the estimate $\Delta p$ \ grows as privacy increases, i.e. with smaller $\epsilon$. Moreover, \n \ larger sample sizes $N$ are associated to lower uncertainty $\Delta p$." ax[0].text(0.5, -.4, caption, ha='left') for i_n in range(len(n_range)): ax[1].plot(epsilon_range, uncertainties[i_n], label = "N = " + str(n_range[i_n])) ax[1].set_xlabel('$\epsilon$') ax[1].set_ylabel('$\Delta p$') ax[1].set_yscale('log') plt.legend(title = "", loc="upper right") plt.show()
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 39, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9127770066261292, "perplexity": 734.7816059511764}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107890273.42/warc/CC-MAIN-20201026031408-20201026061408-00344.warc.gz"}
https://www.shaalaa.com/question-bank-solutions/explain-why-maxima-become-weaker-weaker-increasing-n-fraunhofer-diffraction-due-single-slit_4382
# Explain Why the Maxima Become Weaker and Weaker with Increasing N - Physics Explain why the maxima at theta=(n+1/2)lambda/a become weaker and weaker with increasing n #### SolutionShow Solution On increasing the value of n, the part of slit contributing to the maximum decreases. Hence, the maximum becomes weaker. Concept: Fraunhofer Diffraction Due to a Single Slit Is there an error in this question or solution?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8533247709274292, "perplexity": 1110.7470422752508}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588398.42/warc/CC-MAIN-20211028162638-20211028192638-00316.warc.gz"}
http://mathhelpforum.com/differential-geometry/180057-differentiability-multivariable-function.html
# Thread: differentiability multivariable function 1. ## differentiability multivariable function Hi, I need help on the following: Let $f : \mathbb{R}^2 \to \mathbb{R}^2$ be defined by $f(x,y) = (y, x^2)$. Prove that it is differentiable at $(0,0)$. I know the partial derivatives exist at $(0,0)$. Thanks 2. Originally Posted by storchfire1X Hi, I need help on the following: Let $f : \mathbb{R}^2 \to \mathbb{R}^2$ be defined by $f(x,y) = (y, x^2)$. Prove that it is differentiable at $(0,0)$. I know the partial derivatives exist at $(0,0)$. Thanks So the derivative is going to be a 2 by 2 matrix. $\begin{bmatrix} 0 & 1 \\ 2x & 0 \end{bmatrix}$ If you evaluate this at zero we get $m=\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$ Now you need to calculate the limit $f'(\mathbf{0})=\lim_{\mathbf{h} \to \mathbf{0}}\frac{||f(\mathbf{0}+\mathbf{h})-f(\mathbf{0})-m\mathbf{h}||}{||\mathbf{h}||}=0$ and show that it is equal to zero! Where $\mathbf{h}=\begin{bmatrix}h_1 \\ h_2 \end{bmatrix}$ This should get you started. 3. Originally Posted by storchfire1X Hi, I need help on the following: Let $f : \mathbb{R}^2 \to \mathbb{R}^2$ be defined by $f(x,y) = (y, x^2)$. Prove that it is differentiable at $(0,0)$. I know the partial derivatives exist at $(0,0)$. Thanks You can also use the fact that if $\displaystyle \frac{\partial}{\partial x}f,\frac{\partial}{\partial y}f$ exist and are continuous on a neighborhood of $(0,0)$ then $f$ is differentiable there and $f'(0,0)=\text{Jac}_f(0,0)$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 20, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.993700385093689, "perplexity": 156.92832761360904}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00128-ip-10-171-10-70.ec2.internal.warc.gz"}
https://www.hepdata.net/search/?q=cmenergies%3A%5B1.3+TO+1.4%5D&author=Jaminion%2C+S.&page=1
Showing 25 of 91 results #### Backward electroproduction of pi0 mesons on protons in the region of nucleon resonances at four momentum transfer squared Q**2 = 1.0-GeV**2 The collaboration Laveissiere, G. ; Degrande, N. ; Jaminion, S. ; et al. Phys.Rev.C 69 (2004) 045203, 2004. Inspire Record 625669 Exclusive electroproduction of pi0 mesons on protons in the backward hemisphere has been studied at Q**2 = 1.0 GeV**2 by detecting protons in the forward direction in coincidence with scattered electrons from the 4 GeV electron beam in Jefferson Lab's Hall A. The data span the range of the total (gamma* p) center-of-mass energy W from the pion production threshold to W = 2.0 GeV. The differential cross sections sigma_T+epsilon*sigma_L, sigma_TL, and sigma_TT were separated from the azimuthal distribution and are presented together with the MAID and SAID parametrizations. 0 data tables match query #### Production of Four Prong Final States in Photon-photon Collisions The collaboration Aihara, H. ; Alston-Garnjost, M. ; Avery, R.E. ; et al. Phys.Rev.D 37 (1988) 28, 1988. Inspire Record 261630 Results are presented on the exclusive production of four-prong final states in photon-photon collisions from the TPC/Two-Gamma detector at the SLAC e+e− storage ring PEP. Measurement of dE/dx and momentum in the time-projection chamber (TPC) provides identification of the final states 2π+2π−, K+K−π+π−, and 2K+2K−. For two quasireal incident photons, both the 2π+2π− and K+K−π+π− cross sections show a steep rise from threshold to a peak value, followed by a decrease at higher mass. Cross sections for the production of the final states ρ0ρ0, ρ0π+π−, and φπ+π− are presented, together with upper limits for φρ0, φφ, and K*0K¯ *0. The ρ0ρ0 contribution dominates the four-pion cross section at low masses, but falls to nearly zero above 2 GeV. Such behavior is inconsistent with expectations from vector dominance but can be accommodated by four-quark resonance models or by t-channel factorization. Angular distributions for the part of the data dominated by ρ0ρ0 final states are consistent with the production of JP=2+ or 0+ resonances but also with isotropic (nonresonant) production. When one of the virtual photons has mass (mγ2=-Q2≠0), the four-pion cross section is still dominated by ρ0ρ0 at low final-state masses Wγγ and by 2π+2π− at higher mass. Further, the dependence of the cross section on Q2 becomes increasingly flat as Wγγ increases. 0 data tables match query #### Measurement of Pion Proton Bremsstrahlung for Pions at 299-{MeV} Meyer, C.A. ; Amsler, Claude ; Bosshard, A. ; et al. Phys.Rev.D 38 (1988) 754-767, 1988. Inspire Record 268867 We have measured the fivefold differential cross section d5σ/dΩπdΩγdEγ for the process π+p→π+pγ with incident pions of energy 299 MeV. The angular regions for the outgoing pions (55°≤θlabπ≤95°), and photons (θlabγ=241°±10°) in coplanar geometry are selected to maximize the sensitivity to the radiation from the magnetic dipole moment of the Δ++(1232) resonance. At low photon energies, the data agree with the soft-photon approximation to pion-proton bremsstrahlung. At forward pion angles the data agree with older data and with the latest theoretical calculations for 2.3μp≤μΔ≤3.3μp. However at more backward pion angles where no data existed, the predictions fail. 0 data tables match query #### Measurement of the pi- p ---> 3 pi0 n total cross-section from threshold to 0.75-GeV/c Starostin, A. ; Nefkens, B.M.K. ; Manley, D.M. ; et al. Phys.Rev.C 67 (2003) 068201, 2003. Inspire Record 620818 We report a new measurement of the π−p→3π0n total cross section from threshold to pπ=0.75GeV/c. The cross section near the N(1535)12− resonance is only a few μb after subtracting the large η→3π0 background associated with π−p→ηn. A simple analysis of our data results in the estimated branching fraction B[S11→πN(1440)12+]=(8±2)%. This is the first such estimate obtained with a three-pion production reaction. 0 data tables match query #### A Measurement of the electric form-factor of the neutron through polarized-d (polarized-e, e-prime n)p at Q**2 = 0.5-(GeV/c)**2 The collaboration Zhu, H. ; Ahmidouch, A. ; Anklin, H. ; et al. Phys.Rev.Lett. 87 (2001) 081801, 2001. Inspire Record 556212 We report the first measurement of the neutron electric form factor $G_E^n$ via $\vec{d}(\vec{e},e'n)p$ using a solid polarized target. $G_E^n$ was determined from the beam-target asymmetry in the scattering of longitudinally polarized electrons from polarized deuterated ammonia, $^{15}$ND$_3$. The measurement was performed in Hall C at Thomas Jefferson National Accelerator Facility (TJNAF) in quasi free kinematics with the target polarization perpendicular to the momentum transfer. The electrons were detected in a magnetic spectrometer in coincidence with neutrons in a large solid angle segmented detector. We find $G_E^n = 0.04632\pm0.00616 (stat.) \pm0.00341 (syst.)$ at $Q^2 = 0.495$ (GeV/c)$^2$. 0 data tables match query #### A Measurement of the Cross-Section for Four Pion Production in gamma gamma Collisions at SPEAR Burke, D.L. ; Abrams, G.S. ; Alam, M.S. ; et al. Phys.Lett.B 103 (1981) 153-156, 1981. Inspire Record 165016 We present a measurement of the cross section for the reaction e + e − → e + e − π + π − π + π − at SPEAR. This channel is found to be large and dominated by the process γγ → ϱ 0 ϱ 0 → π + π − π + π − . The cross section, which is small just above the four-pion threshold, exhibits a large enhancement near the ϱ 0 ϱ 0 threshold. 0 data tables match query #### Positive-Pion Production Asymmetry with Polarized Bremsstrahlung Near Second Resonance Liu, F.F. ; Vitale, S. ; Phys.Rev. 144 (1966) 1093-1100, 1966. Inspire Record 50917 The azimuthal asymmetry Σ=(σ⊥−σII)(σ⊥+σII) in π+ photoproduction by linearly polarized bremsstrahlung was measured at photon energies from 475 to 750 MeV at 90° and 135° in the center-of-mass system. The experimental results show that even in this energy region, π+ are produced predominantly in the plane of the magnetic vector. 0 data tables match query #### Total cross-section measurement for the three double pion production channels on the proton Braghieri, A ; Murphy, L.Y ; Ahrens, J ; et al. Phys.Lett.B 363 (1995) 46-50, 1995. Inspire Record 382744 The total cross sections for the three γp → Nππ reactions have been measured for photon energies from 400 to 800 MeV. The γ p → p π 0 π 0 and γ p → n π + π 0 cross sections have never been measured before while the γ p → p π + π − results are much improved compared to earlier data. These measurements were performed with the large acceptance hadronic detector DAPHNE, at the tagged photon beam facility of the MAMI microtron in Mainz. 0 data tables match query #### Negative Pion Production from Neutrons by Polarized gamma Rays Nishikawa, T. ; Hiramatsu, S. ; Kimura, Y. ; et al. Phys.Rev.Lett. 21 (1968) 1288-1291, 1968. Inspire Record 944914 The differential asymmetry ratio for the process γ+n→p+π− was measured at 90° in the center-of-mass system and for incident photon energies from 352 to 550 MeV. The observed asymmetries are larger than the values predicted from the theory by Berends, Donnachie, and Weaver. A smaller M1- amplitude gives better agreement between the experiment and the theory. 0 data tables match query #### Measurement of pi- p ---> pi0 pi0 n from threshold to p(pi-) 750-MeV/c The collaboration Prakhov, S. ; Nefkens, B.M.K. ; Allgower, C.E. ; et al. Phys.Rev.C 69 (2004) 045202, 2004. Inspire Record 647544 Reaction π−p→π0π0n has been measured with high statistics in the beam momentum range 270–750MeV∕c. The data were obtained using the Crystal Ball multiphoton spectrometer, which has 93% of 4π solid angle coverage. The dynamics of the π−p→π0π0n reaction and the dependence on the beam energy are displayed in total cross sections, Dalitz plots, invariant-mass spectra, and production angular distributions. Special attention is paid to the evaluation of the acceptance that is needed for the precision determination of the total cross section σt(π−p→π0π0n). The energy dependence of σt(π−p→π0π0n) shows a shoulder at the Roper resonance [i.e., the N(1440)12+], and there is also a maximum near the N(1520)32−. It illustrates the importance of these two resonances to the π0π0 production process. The Dalitz plots are highly nonuniform; they indicate that the π0π0n final state is dominantly produced via the π0Δ0(1232) intermediate state. The invariant-mass spectra differ much from the phase-space distributions. The production angular distributions are also different from the isotropic distribution, and their structure depends on the beam energy. For beam momenta above 550MeV∕c, the density distribution in the Dalitz plots strongly depends on the angle of the outgoing dipion system (or equivalently on the neutron angle). The role of the f0(600) meson (also known as the σ) in π0π0n production remains controversial. 0 data tables match query #### Production of $K \bar{K}$ Pairs in Photon-photon Collisions and the Excitation of the Tensor Meson F-prime (1515) The collaboration Althoff, M. ; Brandelik, R. ; Braunschweig, W. ; et al. Phys.Lett.B 121 (1983) 216-222, 1983. Inspire Record 181468 We have observed exclusive production of K + K − and K S O K S O pairs and the excitation of the f′(1515) tensor meson in photon-photon collisions. Assuming the f′ to be production in a helicity 2 state, we determine Λ( f ′ → γγ) B( f ′ → K K ) = 0.11 ± 0.02 ± 0.04 keV . The non-strange quark of the f′ is found to be less than 3% (95% CL). For the θ(1640) we derive an upper limit for the product Λ(θ rarr; γγ K K ) < 0.03 keV (95% CL ) . 0 data tables match query #### Polarized Target Asymmetry in $\pi^+$ Photoproduction Between 0.3-GeV and 1.0-GeV at 130° Feller, P. ; Fukushima, M. ; Horikawa, N. ; et al. Nucl.Phys.B 102 (1976) 207, 1976. Inspire Record 90055 The polarized target asymmetry for γ + p → π + + n was measured at c.m. angles around 130° for the energy range between 0.3 and 1.0 GeV. A magnetic spectrometer system was used to detect π + mesons from the polarized butanol target. The data show two prominent positive peaks at 0.4 and 0.8 GeV and a deep minimum at 0.6 GeV. These features are well reproduced by the phenomenological analysis made by us. 0 data tables match query #### Compton scattering by the proton through Theta(CMS) = 75-degrees and 90-degrees in the Delta resonance region Hünger, A ; Peise, J ; Robbiano, A ; et al. Nucl.Phys.A 620 (1997) 385-416, 1997. Inspire Record 458618 Differential cross sections for Compton scattering by the proton have been measured in the energy interval between 200 and 500 MeV at scattering angles of θ cms = 75° and θ cms = 90° using the CATS, the CATS/TRAJAN, and the COPP setups with the Glasgow Tagger at MAMI (Mainz). The data are compared with predictions from dispersion theory using photo-meson amplitudes from the recent VPI solution SM95. The experiment and the theoretical procedure are described in detail. It is found that the experiment and predictions are in agreement as far as the energy dependence of the differential cross sections in the Δ-range is concerned. However, there is evidence that a scaling down of the resonance part of the M 1+ 3 2 photo-meson amplitude by (2.8 ± 0.9)% is required in comparison with the VPI analysis. The deduced value of the M 1+ 3 2 - photoproduction amplitude at the resonance energy of 320 MeV is: |M 1+ 3 2 | = (39.6 ± 0.4) × 10 −3 m π + −1 . 0 data tables match query #### Measurement of the asymmetry for pi+ photoproduction from polarized protons between 300 and 900 mev Arai, S. ; Fukui, S. ; Horikawa, N. ; et al. Nucl.Phys.B 48 (1972) 397-414, 1972. Inspire Record 84444 The asymmetry of the cross section for π + photoproduction from a polarized butanol target has been measured at a c.m. angle 90° and photon energies between 300 and 900 MeV by a single-arm spectrometer detecting positive pions. Our results indicate that the asymmetry has clear positive peaks at photon energies 400 and 700 MeV with a deep valley at about 600 MeV. The general feature of the results is well reproduced by the phenomenological analyses made by Walker and ourselves; however, the best fit to the polarized target asymmetry data seems to give a somewhat different set of parameters from that given by Walker. 0 data tables match query #### Differential Cross-sections of the Proton Compton Scattering in the Resonance Region Ishii, T. ; Egawa, K. ; Kato, S. ; et al. Nucl.Phys.B 165 (1980) 189-208, 1980. Inspire Record 142130 Differential cross sections of proton Compton scattering have been measured in the energy range between 375 MeV and 1150 MeV in steps of 25 MeV at c.m. angles of 130°, 100° and 70°. The recoil proton was detected with a magnetic spectrometer. In coincidence with the proton, the scattered photon was detected with a lead-glass Čerenkov counter of the total absorption type. 0 data tables match query #### The Measurement of Polarized Target Asymmetry on gamma p --> pi0 p Below 1-GeV Fukushima, M. ; Horikawa, N. ; Kajikawa, R. ; et al. Nucl.Phys.B 136 (1978) 189-200, 1978. Inspire Record 119548 The polarized target asymmetry in the reaction γ p → π 0 p has been measured at c.m. angles of 30°, 80°, 105° and 120° for incident photon energies below 1 GeV. Two decay photons from π 0 were detected in coincidence at 30°, and at the other angles recoil protons and single photons from π 0 were detected. The results are compared with recent phenomenological analyses. 0 data tables match query #### Differential Cross-Sections of the Proton Compton Scattering in the Energy Between 450-MeV and 950-MeV Toshioka, K. ; Chiba, M. ; Kato, S. ; et al. Nucl.Phys.B 141 (1978) 364-378, 1978. Inspire Record 120614 The differential cross sections of the proton Compton scattering around the second resonance have been measured at a c.m. angle of 90° for incident photon energies between 450 MeV and 950 MeV in steps of 50 MeV, and at an angle of 60° for energies between 600 MeV and 800 MeV. The results show that the peak of the 2nd resonance agrees with that of the pion photoproduction process. We also calculated the proton Compton scattering based on unitarity and fixed- t dispersion relations. The calculation describes well the data of the cross section and the recoil proton polarization. 0 data tables match query #### Charged-pi photoproduction at 180 degress in the energy range between 300 and 1200 mev Fujii, T. ; Okuno, H. ; Orito, S. ; et al. Phys.Rev.Lett. 26 (1971) 1672-1675, 1971. Inspire Record 68981 The differential cross sections at 180° for the reactions γ+p→π++n and γ+n→π−+p were measured using a magnetic spectrometer to detect π± mesons. In order to reduce the spread of energy resolution due to the nucleon motion inside the deuteron, a photon difference method was employed with a 50-MeV step for the reaction γ+n→π−+p. The data show structures at the second- and the third-resonance regions for both reactions. A simple phenomenological analysis was made for fitting the data, and the results are compared with those of previous analyses. 0 data tables match query #### Polarized Target Asymmetry in pi0 Photoproduction Between 0.4-GeV and 1.0-GeV Around 100-Degrees Feller, P. ; Fukushima, M. ; Horikawa, N. ; et al. Phys.Lett.B 55 (1975) 241-244, 1975. Inspire Record 90929 The polarized target asymmetry in the reaction γp→π°p has been measured at c.m. angles around 100° for photon energies between 0.4 and 1.0 GeV by detecting both the recoil proton and the π°. The result is compared with recent analyses. 0 data tables match query #### Rho Production by Virtual Photons Joos, P. ; Ladage, A. ; Meyer, H. ; et al. Nucl.Phys.B 113 (1976) 53-92, 1976. Inspire Record 108749 The reaction γ V p → p π + π − was studied in the W , Q 2 region 1.3–2.8 GeV, 0.3–1.4 GeV 2 using the streamer chamber at DESY. A detailed analysis of rho production via γ V p→ ϱ 0 p is presented. Near threshold rho production has peripheral and non-peripheral contributions of comparable magnitude. At higher energies ( W > 2 GeV) the peripheral component is dominant. The Q 2 dependence of σ ( γ V p→ ϱ 0 p) follows that of the rho propagator as predicted by VDM. The slope of d σ /d t at 〈 Q 2 〉 = 0.4 and 0.8 GeV 2 is within errors equal to its value at Q 2 = 0. The overall shape of the ϱ 0 is t dependent as in photoproduction, but is independent of Q 2 . The decay angular distribution shows that longitudinal rhos dominate in the threshold region. At higher energies transverse rhos are dominant. Rho production by transverse photons proceeds almost exclusively by natural parity exchange, σ T N ⩾ (0.83 ± 0.06) σ T for 2.2 < W < 2.8 GeV. The s -channel helicity-flip amplitudes are small compared to non-flip amplitudes. The ratio R = σ L / σ T was determined assuming s -channel helicity conservation. We find R = ξ 2 Q 2 / M ϱ 2 with ξ 2 ≈ 0.4 for 〈 W 〉 = 2.45 GeV. Interference between rho production amplitudes from longitudinal and transverse photons is observed. With increasing energy the phase between the two amplitudes decreases. The observed features of rho electroproduction are consistent with a dominantly diffractive production mechanism for W > 2 GeV. 0 data tables match query #### The Measurement of Polarized Target Asymmetry on gamma p --> pi+ n Below 1.02-GeV Fukushima, M. ; Horikawa, N. ; Kajikawa, R. ; et al. Nucl.Phys.B 130 (1977) 486-504, 1977. Inspire Record 119547 The polarized target asymmetry for the process γ p → π + n has been measured for incident photon energies below 1.02 GeV over a range of c.m. angles from 40° to 160°. π + mesons from a polarized butanol target were detected by a magnetic spectrometer. The results are compared with predictions given by existing analyses. A tentative interpretation of the data is performed, and a larger contribution of S-wave resonances is suggested. The photocouplings of dominant resonances were hardly changed by the inclusion of new data and they seem to be almost uniquely determined. 0 data tables match query #### RECOIL PROTON POLARIZATION OF PROTON COMPTON SCATTERING IN THE RESONANCE REGION Wada, Y. ; Kato, S. ; Miyachi, T. ; et al. Nuovo Cim.A 63 (1981) 57-70, 1981. Inspire Record 170488 The recoil proton polarization of proton Compton scattering (γp→γp) was measured in the photon energy range from 500 MeV to 1000 MeV atθ∗=100° and from 400MeV to 800 MeV atθ∗=130°. A recoil proton and a scattered photon were detected in coincidence with a magnetic spectrometer and a photon detector. The recoil proton polarization was measured with a carbon polarimeter. The results are compared with a phenomenological analysis based on an isobar model and a dynamical analysis based on the dispersion relation. 0 data tables match query #### Measurement of the Polarization Parameter in $\pi^- p$ Scattering at 291.5-{MeV} and 308-{MeV} Alder, J.C. ; Perroud, J.P. ; Tran, M.T. ; et al. Lett.Nuovo Cim. 23 (1978) 381, 1978. Inspire Record 130236 0 data tables match query #### Measurement of Polarized Target Asymmetry on $\gamma n \to \pi^- p$ Around the Second Resonance Region Fujii, K. ; Hayashii, H. ; Iwata, S. ; et al. Nucl.Phys.B 187 (1981) 53-70, 1981. Inspire Record 156223 The polarized target asymmetry for γ n→ π − p was measured over the second resonance region from 0.55 to 0.9 GeV at pion c.m. angles between 60° and 120°. A double-arm spectrometer was used with a deuterated butanol target to detect both the pion and the proton, thus considerably improving the data quality. Including the new data in the amplitude analysis, the radiative decay widths of three resonances were determined more accurately than before. The results are compared with various quark models. 0 data tables match query #### Recoil Proton Polarization of Neutral Pion Photoproduction From Proton in the Energy Range Between 400-{MeV} and 1142-{MeV} Kato, S. ; Miyachi, T. ; Sugano, K. ; et al. Nucl.Phys.B 168 (1980) 1-16, 1980. Inspire Record 142131 The recoil proton polarization of the reaction γ p → π 0 p was measured at a c.m. angle of 100° for incident photon energies between 451 and 1106 MeV, and at an angle of 130° for energies from 400 to 1142 MeV. One photon, decayed from a π 0 meson, and a recoil proton were detected in coincidence. Two kinds of polarization analyzer were employed. In the range of proton kinetic energy less than 420 MeV and higher than 346 MeV, carbon plates and liquid hydrogen were used for determining the polarization, respectively. The data given by the two polarimeter systems are in good agreement. Results are compared with recent phenomenological analyses. From the comparison between the present data and the polarized target data, the invariant amplitude A 3 can be estimated to be small. 0 data tables match query
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9646481275558472, "perplexity": 3901.978846410961}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141195069.35/warc/CC-MAIN-20201128040731-20201128070731-00405.warc.gz"}
https://www.arxiv-vanity.com/papers/astro-ph/9907083/
# Radio Constraints on the Identifications and Redshifts of Submm Galaxies Ian Smail,1 2 R. J. Ivison,1 3 F. N. Owen,1 A. W. Blain1 & J.-P. Kneib1 1) Department of Physics, University of Durham, South Road, Durham DH1 3LE, UK 3) Department of Physics & Astronomy, University College London, Gower Street, London WC1E 6BT, UK 5) NRAO, P.O. Box 0, 1003 Lopezville Road, Socorro, NM 87801 6) Cavendish Laboratory, Madingley Road, Cambridge CB3 OHE, UK 7) Observatoire Midi-Pyrénées, CNRS-UMR5572, 14 Avenue E. Belin, 31400 Toulouse, France 1affiliationmark: 2affiliation: Royal Society University Research Fellow. ###### Abstract We present radio maps from the Very Large Array (VLA) for 16 sources detected in a sub-millimeter (submm) survey of the distant Universe. Our deep VLA 1.4-GHz maps allow us to identify radio counterparts or place stringent limits (Jy in the source plane) on the radio flux of the submm sources. We compare the spectral indices of our sources between 850 m and 1.4 GHz to empirical and theoretical models for distant starburst galaxies and active galactic nuclei (AGN) as a function of redshift. In this way we can derive redshift limits for the submm sources, even in the absence of an optical or near-infrared counterpart. We conclude that the submm population brighter than  mJy has a median redshift of at least , more probably –3, with almost all galaxies at . This estimate is a strong lower limit as both misidentification of the radio counterparts and non-thermal emission from an AGN will bias our redshift estimates to lower values. The high median redshift means that the submm population, if predominately powered by starbursts, contributes a substantial fraction of the total star formation density at high redshifts. A comparison of the spectral index limits with spectroscopic redshifts for proposed optical counterparts to individual submm galaxies suggests that half of the submm sources remain unidentified and thus their counterparts must be fainter than . cosmology: observations — galaxies: evolution — galaxies: formation — infrared: galaxies — radio: galaxies \submitted Received: July 01, 1999; Accepted August 20, 1999 ## 1 Introduction Faint submm sources are likely to be highly obscured starburst galaxies and AGN, within which optical/UV radiation from massive stars or an active nucleus is absorbed by dust and reradiated in the far-infrared. The dust emission peaks at m and thus long-wavelength observations of distant dusty galaxies can benefit as this peak is redshifted into their window. At 850 m, this increase balances the geometrical dimming at higher redshifts, resulting in a constant flux density out to –10 and the opportunity to select very high redshift galaxies. A number of deep submm surveys have been published (Smail, Ivison & Blain 1997; Barger et al. 1998, 1999b; Hughes et al. 1998; Blain et al. 1999a; Eales et al. 1999) providing counts which are in good agreement at 850-m flux densities above 2 mJy (the confusion limit of the blank-field surveys). The surface density of submm galaxies reaches per sq. arcmin by 1 mJy (Blain et al. 1999a). If these galaxies lie at , then they have bolometric luminosities of and they are the distant analogs of the local ultraluminous infrared galaxy (ULIRG) population. However, the observed surface density of submm galaxies is several orders of magnitude greater than that expected from the local ULIRG population (Smail et al. 1997) indicating very substantial evolution of these systems in the distant Universe. The integrated emission from this population can account for the bulk of the extragalactic background detected at 850 m by COBE (e.g. Fixsen et al. 1998), and hence confirms these galaxies as an important source of radiation in the Universe (Blain et al. 1999a, 1999b). To identify the era of obscured emission in the Universe, whether from AGN or starbursts, we have to measure the redshifts of a complete sample of submm galaxies. Several groups have attempted this (Hughes et al. 1998; Barger et al. 1999a; Lilly et al. 1999). Hughes et al. (1998) concluded that the bulk of the population is at –4, based on photometric redshift limits for the probable counterparts of five submm sources in the Hubble Deep Field (HDF, c.f. Richards 1999 and Downes et al. 1999). Barger et al. (1999a) undertook a spectroscopic survey of the same submm sample analysed here and concluded that the median redshift was –2, with the bulk of the population having –3. Finally, Lilly et al. (1999) used archival spectroscopy and broad-band photometry of submm sources from the Eales et al. (1999) survey to claim that the population spans –3, with a third at . The differences between these studies are significant and important for our understanding of the nature of submm galaxies. It is very difficult to achieve high completeness in optical spectroscopic surveys of submm galaxies (e.g. Barger et al. 1999a) due to the very different behaviour of the K corrections for distant galaxies between submm and optical passbands. However, even a crude estimate of the median redshift of a complete sample of submm galaxies would provide a powerful insight into the relative dominance of obscured and unobscured emission at different epochs (Blain et al. 1999b). In a recent paper, Carilli & Yun (1999, CY) demonstrated that using the spectral index between the submm (850 m) and radio (1.4 GHz) wavebands, , it was possible to obtain crude redshift limits for distant dusty galaxies, irrespective of the nature of the emission mechanism, AGN or starburst. CY employed a number of theoretical and empirical spectral energy distributions (SEDs) to investigate the range in for different assumed SEDs and showed that these models adequately described the small sample of high-redshift galaxies for which both radio and submm observations were available. As pointed out by Blain (1999, B99), if we adopt lower dust temperatures for the submm population than are seen in the local sources used in CY’s models, then the allowed range of redshifts is slightly lower for a given value of . Nevertheless, the modest scatter between the models in CY suggests that this technique can provide useful limits on the redshifts of submm galaxies in the absence of an optical counterpart. In this paper we apply the CY analysis to deep radio observations of a complete sample of submm galaxies selected from the SCUBA Cluster Lens Survey (Smail et al. 1998). Our aim is to constrain the redshift distribution of this population and in the process test the optical identifications and spectroscopic redshifts from Smail et al. (1998) and Barger et al. (1999a). We present the observations and their analysis in §2, discuss our results in §3 and give our main conclusions in §4. ## 2 Observations, Reduction and Analysis The 850-m maps on which our survey is based were obtained using the long-wavelength array of the Sub-millimeter Common-User Bolometer Array (SCUBA, Holland et al. 1999) on the James Clerk Maxwell Telescope (JCMT)666The JCMT is operated by the Joint Astronomy Centre on behalf of the United Kingdom Particle Physics and Astronomy Research Council, the Netherlands Organisation for Scientific Research, and the National Research Council of Canada.. The details of the observations, their reduction and analysis are given in Smail et al. (1997, 1998) and Ivison et al. (1998). Each field covers an area of 5.2 arcmin with a typical 1 sensitivity of 1.7 mJy, giving a total survey area of 0.01 deg. The median amplification by the cluster lenses for background sources detected in our fields is expected to be (Blain et al. 1999a; Barger et al. 1999a), and so we have effectively surveyed an area of about 15 arcmin in the source plane to an equivalent sensitivity of 0.7 mJy. The follow-up of these submm sources also benefits from the achromatic amplification, which boosts the apparent brightness of counterparts in all other wavebands. All the radio maps used in this work were obtained with the VLA777The VLA is run by NRAO and is operated by Associated Universities Inc., under a cooperative agreement with the National Science Foundation. at 1.4 GHz in A or B configuration, giving effective resolutions of 1.5 and 5, respectively. More details of the reduction and analysis of these maps are given in the following references (we list the VLA configuration and 1 map noise for each cluster): Morrison et al. (1999) for Cl 002416 (B/15 Jy), A 370 (B/10 Jy) and Cl 093947 (B/9 Jy); Ivison et al. (1999) for A 1835 (B/16 Jy) and Ivison et al. (2000) for MS 044002 (A/15 Jy) and Cl 224402 (A/17 Jy). No deep radio map is available for A 2390, although shallower observations were used to study the submm/radio spectral index of the central cluster galaxy (Edge et al. 1999). The 850-m and 1.4-GHz fluxes or limits for 16 of the sources in Smail et al. (1998), for which we have radio observations, are listed in Table 1 in order of their apparent submm fluxes, along with their proposed spectroscopic redshifts, , from Barger et al. (1999a). Where a radio counterpart is identified the spectroscopic redshift of the closest optical candidate is listed in the table. The errors on are calculated assuming the 1 flux uncertainties in each band (§2.1 and Smail et al. 1998). For non-detections at 1.4 GHz we use the 3 flux limit of the relevant radio map. The two central cluster galaxies in our sample are not included in our analysis. Using the values or limits, the redshift ranges, , are derived from the extremes of the predictions from the four CY models (two empirical SEDs representing Arp 220 and M 82, and two models with dust temperatures of –60 K and emissivities of or ) and a further model from B99, with and a  K to illustrate the minimum possible redshift assuming a very low . For a model SED at , a variation of or  K results in an change of , equivalent to an uncertainty in the derived redshift of . There are three caveats to bear in mind when using to estimate redshifts for distant galaxies. First, most of the distant galaxies which CY used to compare with their model predictions show some signs of AGN activity. If these AGN also contribute to the 1.4-GHz non-thermal emission of the galaxy they will lower the observed values (as some obviously do in Fig. 1a of CY). This will mean that any radio-quiet submm sources could lie at the high end of the predicted range at each epoch. Secondly, the CY and B99 models we use assume effective dust temperatures for the galaxies,  K. If the dust in distant obscured galaxies is much cooler then this, it will again shift the predicted redshifts systematically lower (see B99). Finally, as mentioned in CY, the effects of inverse Compton scattering of radio photons off the microwave background may reduce the radio luminosities of star-forming galaxies at and hence increase for the most distant galaxies. Nevertheless, the relatively good agreement shown in CY of the spectral indices of distant galaxies with the models is an important confirmation that indices can be used to derive robust lower limits to the redshifts of submm sources without reliable spectroscopic identifications. ## 3 Results and Discussion We show in Fig. 2 three cumulative redshift distributions for the population representing extreme interpretations of the limits from the spectral index models. We see that even making the most conservative assumptions about the likely redshifts from the indices we still predict a median redshift for the submm population above an intrinsic 850-m flux of 1 mJy of , and more likely closer to –3. Comparing the cumulative redshift distribution to that derived from the (incomplete) spectroscopic study of this sample (Barger et al. 1999a) we see broad similarities. However, comparison of redshifts for individual sources from the two studies (Table 1), while showing good agreement for those submm sources with reliable identifications (e.g. Ivison et al. 1998, 1999) also indicates that the majority of the uncertain spectroscopic IDs are likely to be incorrect. Barger et al. (1999a) obtained spectroscopy of most of the possible optical counterparts within each submm error-box. We can therefore state that the true submm sources must be fainter than the faintest spectroscopic target. Including the two optical blank-field sources already known (Smail et al. 1998), we conclude that approximately half of the submm population are therefore currently unidentified. These submm sources have no radio counterparts and are too faint for optical spectroscopy, , their identification will thus be very difficult. Our median redshift is compatible with the results of Hughes et al. (1998) and CY for the five submm sources in the HDF based on analyses of their SEDs and radio-submm indices. The only other submm survey for which spectroscopic redshift information has been published is by Lilly et al. (1999) for the Eales et al. (1999) sample — they find — similar to our median redshift. However, Lilly et al. (1999) claim that a third of the submm population lies at ; in contrast, we find no galaxies in our field sample at . This apparent contradication may result simply from the small sizes of the current samples or might indicate that foreground bright optical galaxies are lensing the distant submm sources detected in the field surveys (see Blain, Möller & Maller 1999; Hughes et al. 1998). The detection rate of radio counterparts to the submm sources is higher for the intrinsically brighter sources. All the submm sources with observed fluxes above  mJy (intrinsic fluxes of  mJy ) have radio counterparts, while the majority of the fainter sources do not (this is consistent with CY’s results in the HDF). The detections and astrometry of the fainter sources are sufficiently reliable that this result is not due to spurious detections (see Ivison et al. 1999). Making the conservative assumption of placing all of the non-detections at their lower bounds on , we find that this distribution is consistent with being drawn from the distribution of the brighter sources with a probability of . However, simply comparing the indices for the two subsamples we see that half of the radio-detected bright submm sources have values lower than the lowest limit on the undetected sources, the likelihood that this occurs by chance is only suggesting that there may be real differences between the values for the two subsamples. Several factors could cause this, most simply the apparently fainter submm sources may be at higher redshifts (a correlation which could naturally exist in a low density Universe). Alternatively, intrinsic differences in the spectral indices of fainter sources would occur if they contain a lower fraction of radio-loud AGN or have typically cooler dust temperatures (or higher emissivities). Both effects are plausible given what we know about the correlations of AGN fractions, dust temperature and emissivity with luminosity in local ULIRG samples (Sanders & Mirabel 1996). Further detailed observations of both distant and local ULIRGs are needed to distinguish between these possibilities. Fig. 2. The cumulative redshift distribution for the full submm sample. We have used the spectroscopic redshifts of those sources thought to be reliable (Table 1) and combined these with the probable redshift ranges of the remaining sources derived from their indices or limits. The solid line shows the cumulative distribution if we assume the minimum redshift distribution which is obtained if all sources are assumed to lie at their lower limit given in Table 1 (the dashed line is the equivalent analysis but restricted to just the CY models). The effect of non-thermal radio emission, which drives down the indices, means that this is a very conservative assumption if some fraction of the population harbor radio-loud AGN. The dot-dashed line assumes a flat probability distribution for the sources within their ranges and a maximum redshift of for those sources where we only have a lower limit on . Finally, the dotted line is the cumulative redshift distribution from Barger et al. (1999a) with two of the source identifications corrected as in Smail et al. (1999) and all blank-field/ERO candidates placed at . The relatively high median redshift we find for the submm population, –3, indicates that their equivalent star formation density at these epochs is around 0.5  yr Mpc (Blain et al. 1999a), roughly three times that seen in UV-selected samples (Steidel et al. 1999). Emission from dust heated by obscured AGN will reduce this estimate, but it is difficult not to conclude that the submm galaxies contain a substantial fraction of the star formation in the high redshift Universe. ## 4 Conclusions We present radio maps of 16 galaxies selected in a deep submm survey. We combine submm and radio fluxes (or limits) to determine the radio-submm spectral indices of these galaxies and interpret these using model predictions to derive the redshifts for a complete sample of faint submm galaxies. We find a median redshift for the submm population down to  mJy under conservative assumptions, and –3 for more reasonable assumptions. Median redshifts below are only possible if the bulk of the emission is coming from dust at  K (compared to the 40–50 K typically seen in well-studied, distant submm galaxies, or their low-redshift analogs: ULIRGs). As a result we find no evidence for a significant low-redshift, , tail in our distribution in contrast to Lilly et al. (1999). We compare the individual redshifts estimated from with the spectroscopic observations of proposed optical counterparts of the submm sources. We find that the majority of the ‘uncertain’ spectroscopic identifications from Barger et al. (1999a) are likely to be incorrect. We conclude that the true counterparts lie at higher redshifts and are intrinsically very faint, , making the prospects for a complete optical spectroscopic survey of the submm population bleak. ## Acknowledgements We thank Amy Barger, Chris Carilli, Len Cowie, Glenn Morrison, Jason Stevens and Min Yun for useful conversations and help.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9524023532867432, "perplexity": 2527.49374236157}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943695.23/warc/CC-MAIN-20230321095704-20230321125704-00477.warc.gz"}
https://labs.tib.eu/arxiv/?author=G.%20Kornakov
• ### Deep sub-threshold {\phi} production and implications for the K+/K- freeze-out in Au+Au collisions(1703.08418) Nov. 28, 2017 hep-ex, nucl-ex We present data on charged kaons (K+-) and {\phi} mesons in Au(1.23A GeV)+Au collisions. It is the first simultaneous measurement of K and {\phi} mesons in central heavy-ion collisions below a kinetic beam energy of 10A GeV. The {\phi}/K- multiplicity ratio is found to be surprisingly high with a value of 0.52 +- 0.16 and shows no dependence on the centrality of the collision. Consequently, the different slopes of the K+ and K- transverse-mass spectra can be explained solely by feed- down, which substantially softens the spectra of K- mesons. Hence, in contrast to the commonly adapted argumentation in literature, the different slopes do not necessarily imply diverging freeze- out temperatures of K+ and K- mesons caused by different couplings to baryons. • ### TRAGALDABAS. First results on cosmic ray studies and their relation with the solar activity, the Earth magnetic field and the atmospheric properties(1701.07277) Cosmic rays originating from extraterrestrial sources are permanently arriving at Earth atmosphere, where they produce up to billions of secondary particles. The analysis of the secondary particles reaching to the surface of the Earth may provide a very valuable information about the Sun activity, changes in the geomagnetic field and the atmosphere, among others. In this article, we present the first preliminary results of the analysis of the cosmic rays measured with a high resolution tracking detector, TRAGALDABAS, located at the Univ. of Santiago de Compostela, in Spain. • ### Inclusive {\Lambda} production in proton-proton collisions at 3.5 GeV(1611.01040) Nov. 3, 2016 nucl-ex The inclusive production of {\Lambda} hyperons in proton-proton collisions at $\sqrt{s}$ = 3.18 GeV was measured with HADES at the GSI Helmholtzzentrum f\"ur Schwerionenforschung in Darmstadt. The experimental data are compared to a data-based model for individual exclusive {\Lambda} production channels in the same reaction. The contributions of intermediate resonances such as {\Sigma}(1385), {\Delta}++ or N* are considered in detail. In particular, the result of a partial wave analysis is accounted for the abundant pK$^+${\Lambda} final state. Model and data show a reasonable agreement at mid rapidities, while a difference is found for larger rapidities. A total {\Lambda} production cross section in p+p collisions at $\sqrt{s}$ = 3.18 GeV of {\sigma}(pp $\to$ {\Lambda} + X) = 207.3 $\pm$ 1.3 +6.0 -7.3 (stat.) $\pm$ 8.4 (syst.) +0.4 -0.5 (model) {\mu}b is found. • ### The $\bf{\Lambda p}$ interaction studied via femtoscopy in p + Nb reactions at $\mathbf{\sqrt{s_{NN}}=3.18} ~\mathrm{\bf{GeV}}$(1602.08880) Feb. 29, 2016 nucl-ex We report on the first measurement of $p\Lambda$ and $pp$ correlations via the femtoscopy method in p+Nb reactions at $\mathrm{\sqrt{s_{NN}}=3.18} ~\mathrm{GeV}$, studied with the High Acceptance Di-Electron Spectrometer (HADES). By comparing the experimental correlation function to model calculations, a source size for $pp$ pairs of $r_{0,pp}=2.02 \pm 0.01(\mathrm{stat})^{+0.11}_{-0.12} (\mathrm{sys}) ~\mathrm{fm}$ and a slightly smaller value for $p\Lambda$ of $r_{0,\Lambda p}=1.62 \pm 0.02(\mathrm{stat})^{+0.19}_{-0.08}(\mathrm{sys}) ~\mathrm{fm}$ is extracted. Using the geometrical extent of the particle emitting region, determined experimentally with $pp$ correlations as reference together with a source function from a transport model, it is possible to study different sets of scattering parameters. The $p\Lambda$ correlation is proven sensitive to predicted scattering length values from chiral effective field theory. We demonstrate that the femtoscopy technique can be used as valid alternative to the analysis of scattering data to study the hyperon-nucleon interaction. • ### Statistical model analysis of hadron yields in proton-nucleus and heavy-ion collisions at SIS 18 energies(1512.07070) Dec. 22, 2015 hep-ex, nucl-ex, nucl-th The HADES data from p+Nb collisions at center of mass energy of $\sqrt{s_{NN}}$= 3.2 GeV are analyzed by employing a statistical model. Accounting for the identified hadrons $\pi^0$, $\eta$, $\Lambda$, $K^{0}_{s}$, $\omega$ allows a surprisingly good description of their abundances with parameters $T_{chem}=(99\pm11)$ MeV and $\mu_{b}=(619\pm34)$ MeV, which fits well in the chemical freeze-out systematics found in heavy-ion collisions. In supplement we reanalyze our previous HADES data from Ar+KCl collisions at $\sqrt{s_{NN}}$= 2.6 GeV with an updated version of the statistical model. We address equilibration in heavy-ion collisions by testing two aspects: the description of yields and the regularity of freeze-out parameters from a statistical model fit. Special emphasis is put on feed-down contributions from higher-lying resonance states which have been proposed to explain the experimentally observed $\Xi^-$ excess present in both data samples. • ### K*(892)+ production in proton-proton collisions at E_beam = 3.5 GeV(1505.06184) May 22, 2015 nucl-ex We present results on the K*(892)+ production in proton-proton collisions at a beam energy of E = 3.5 GeV, which is hitherto the lowest energy at which this mesonic resonance has been observed in nucleon-nucleon reactions. The data are interpreted within a two-channel model that includes the 3-body production of K*(892)+ associated with the Lambda- or Sigma-hyperon. The relative contributions of both channels are estimated. Besides the total cross section sigma(p+p -> K*(892)+ + X) = 9.5 +- 0.9 +1.1 -0.9 +- 0.7 mub, that adds a new data point to the excitation function of the K*(892)+ production in the region of low excess energy, transverse momenta and angular spectra are extracted and compared with the predictions of the two-channel model. The spin characteristics of K*(892)+ are discussed as well in terms of the spin-alignment. • ### Subthreshold Xi- Production in Collisions of p(3.5 GeV)+Nb(1501.03894) Jan. 16, 2015 nucl-ex Results on the production of the double-strange cascade hyperon $\mathrm{\Xi^-}$ are reported for collisions of p\,(3.5~GeV)\,+\,Nb, studied with the High Acceptance Di-Electron Spectrometer (HADES) at SIS18 at GSI Helmholtzzentrum for Heavy-Ion Research, Darmstadt. For the first time, subthreshold $\mathrm{\Xi^-}$ production is observed in proton-nucleus interactions. Assuming a $\mathrm{\Xi^-}$ phase-space distribution similar to that of $\mathrm{\Lambda}$ hyperons, the production probability amounts to $P_{\mathrm{\Xi^-}}=(2.0\,\pm0.4\,\mathrm{(stat)}\,\pm 0.3\,\mathrm{(norm)}\,\pm 0.6\,\mathrm{(syst)})\times10^{-4}$ resulting in a $\mathrm{\Xi^-/(\Lambda+\Sigma^0)}$ ratio of $P_{\mathrm{\Xi^-}}/\ P_{\mathrm{\Lambda+\Sigma^0}}=(1.2\pm 0.3\,\mathrm{(stat)}\pm0.4\,\mathrm{(syst)})\times10^{-2}$. Available model predictions are significantly lower than the estimated $\mathrm{\Xi^-}$ yield. • ### Partial Wave Analysis of the Reaction $p(3.5 GeV)+p \to pK^+\Lambda$ to Search for the "$ppK^-$" Bound State(1410.8188) Oct. 29, 2014 nucl-ex Employing the Bonn-Gatchina partial wave analysis framework (PWA), we have analyzed HADES data of the reaction $p(3.5GeV)+p\to pK^{+}\Lambda$. This reaction might contain information about the kaonic cluster "$ppK^-$" via its decay into $p\Lambda$. Due to interference effects in our coherent description of the data, a hypothetical $\overline{K}NN$ (or, specifically "$ppK^-$") cluster signal must not necessarily show up as a pronounced feature (e.g. a peak) in an invariant mass spectra like $p\Lambda$. Our PWA analysis includes a variety of resonant and non-resonant intermediate states and delivers a good description of our data (various angular distributions and two-hadron invariant mass spectra) without a contribution of a $\overline{K}NN$ cluster. At a confidence level of CL$_{s}$=95\% such a cluster can not contribute more than 2-12\% to the total cross section with a $pK^{+}\Lambda$ final state, which translates into a production cross-section between 0.7 $\mu b$ and 4.2 $\mu b$, respectively. The range of the upper limit depends on the assumed cluster mass, width and production process. • ### Medium effects in proton-induced $K^{0}$ production at 3.5 GeV(1404.7011) April 29, 2014 nucl-ex We present the analysis of the inclusive $K^{0}$ production in p+p and p+Nb collisions measured with the HADES detector at a beam kinetic energy of 3.5 GeV. Data are compared to the GiBUU transport model. The data suggest the presence of a repulsive momentum-dependent kaon potential as predicted by the Chiral Perturbation Theory (ChPT). For the kaon at rest and at normal nuclear density, the ChPT potential amounts to $\approx 35$ MeV. A detailed tuning of the kaon production cross sections implemented in the model has been carried out to reproduce the experimental data measured in p+p collisions. The uncertainties in the parameters of the model were examined with respect to the sensitivity of the experimental results from p+Nb collisions to the in-medium kaon potential. • ### Lambda hyperon production and polarization in collisions of p(3.5 GeV) + Nb(1404.3014) April 14, 2014 nucl-ex Results on $\Lambda$ hyperon production are reported for collisions of p(3.5 GeV) + Nb, studied with the High Acceptance Di-Electron Spectrometer (HADES) at SIS18 at GSI Helmholtzzentrum for Heavy-Ion Research, Darmstadt. The transverse mass distributions in rapidity bins are well described by Boltzmann shapes with a maximum inverse slope parameter of about $90\,$MeV at a rapidity of $y=1.0$, i.e. slightly below the center-of-mass rapidity for nucleon-nucleon collisions, $y_{cm}=1.12$. The rapidity density decreases monotonically with increasing rapidity within a rapidity window ranging from 0.3 to 1.3. The $\Lambda$ phase-space distribution is compared with results of other experiments and with predictions of two transport approaches which are available publicly. None of the present versions of the employed models is able to fully reproduce the experimental distributions, i.e. in absolute yield and in shape. Presumably, this finding results from an insufficient modelling in the transport models of the elementary processes being relevant for $\Lambda$ production, rescattering and absorption. The present high-statistics data allow for a genuine two-dimensional investigation as a function of phase space of the self-analyzing $\Lambda$ polarization in the weak decay $\Lambda\rightarrow p \pi^-$. Finite negative values of the polarization in the order of $5-20\,\%$ are observed over the entire phase space studied. The absolute value of the polarization increases almost linearly with increasing transverse momentum for $p_t>300\,$MeV/c and increases with decreasing rapidity for $y < 0.8$. • ### Searching a Dark Photon with HADES(1311.0216) Nov. 1, 2013 hep-ph, hep-ex We present a search for the e+e- decay of a hypothetical dark photon, also names U vector boson, in inclusive dielectron spectra measured by HADES in the p (3.5 GeV) + p, Nb reactions, as well as the Ar (1.756 GeV/u) + KCl reaction. An upper limit on the kinetic mixing parameter squared epsilon^{2} at 90% CL has been obtained for the mass range M(U) = 0.02 - 0.55 GeV/c2 and is compared with the present world data set. For masses 0.03 - 0.1 GeV/c^2, the limit has been lowered with respect to previous results, allowing now to exclude a large part of the parameter region favoured by the muon g-2 anomaly. Furthermore, an improved upper limit on the branching ratio of 2.3 * 10^{-6} has been set on the helicity-suppressed direct decay of the eta meson, eta-> e+e-, at 90% CL. • ### Inclusive pion and eta production in p+Nb collisions at 3.5 GeV beam energy(1305.3118) July 5, 2013 nucl-ex Data on inclusive pion and eta production measured with the dielectron spectrometer HADES in the reaction p+93Nb at a kinetic beam energy of 3.5 GeV are presented. Our results, obtained with the photon conversion method, supplement the rather sparse information on neutral meson production in proton-nucleus reactions existing for this bombarding energy regime. The reconstructed e+e-e+e- transverse-momentum and rapidity distributions are confronted with transport model calculations, which account fairly well for both pi0 and eta production. • ### Baryonic resonances close to the Kbar-N threshold: the case of Lambda(1405) in pp collisions(1208.0205) Feb. 5, 2013 nucl-ex We present an analysis of the Lambda(1405) resonance produced in the reaction p+p->Sigma^{pm}+pi^{mp}+K+p at 3.5 GeV kinetic beam energy measured with HADES at GSI. The two charged decay channels Lambda(1405) -> Sigma^{\pm}+pi^{\mp} have been reconstructed for the first time in p+p collisions. The efficiency and acceptance-corrected spectral shapes show a peak position clearly below 1400 MeV/c^2. We find a total production cross section of sigma_{Lambda(1405)}=9.2 +- 0.9 +- 0.7 +3.3-1.0 mub. The analysis of its polar angle distribution suggests that the Lambda(1405) is produced isotropically in the p-p center of mass system. • ### First measurement of low momentum dielectrons radiated off cold nuclear matter(1205.1918) Sept. 25, 2012 hep-ex, nucl-ex We present data on dielectron emission in proton induced reactions on a Nb target at 3.5 GeV kinetic beam energy measured with HADES installed at GSI. The data represent the first high statistics measurement of proton-induced dielectron radiation from cold nuclear matter in a kinematic regime, where strong medium effects are expected. Combined with the good mass resolution of 2%, it is the first measurement sensitive to changes of the spectral functions of vector mesons, as predicted by models for hadrons at rest or small relative momenta. Comparing the e+e invariant mass spectra to elementary p+p data, we observe for e+e momenta Pee < 0.8 GeV/c a strong modification of the shape of the spectrum, which we attribute to an additional rho-like contribution and a decrease of omega yield. These opposite trends are tentatively interpreted as a strong coupling of the rho meson to baryonic resonances and an absorption of the omega meson, which are two aspects of in-medium modification of vector mesons.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9667419791221619, "perplexity": 2355.7991685358143}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178369523.73/warc/CC-MAIN-20210304205238-20210304235238-00505.warc.gz"}
https://tex.stackexchange.com/questions/341846/split-equation-into-multiple-lines
# Split equation into multiple lines I am trying to split a long equation into lines as explained in one of the answers here using multline (amsmath package: \begin{multline} m(X)=1.00 \cdot 0.94 \cdot 1.30 \cdot 1.30 \cdot 1.21 \cdot 1.00 \cdot 1.07 \cdot 1.00 & 1.29 \cdot 0.86 \cdot 1.00 \cdot 0.95 \cdot 1.00 \cdot 0.91 \cdot 1.23 = 2.4262 \end{multline} However when compiling I am getting an error on the last line saying: Extra alignment tab has been changed to \cr. You have written too many alignment tabs in a table, causing one of them to be turned into a line break. Make sure you have specified the correct number of columns in your table. I can't figure out what's wrong. Also, a magic number (2.1) is being displayed right after the last term of the equation. multline doesn't use any alignment points, indicated by &, only linebreaks, indicated by \\. Replace the & in your code by \cdot{} \\. The \cdot was missing, the {} is to get proper spacing. About the number, I don't know why you say "magic". by default multline is a numbered equation, if you don't want numbering use multline*. \documentclass{article} \usepackage{amsmath} \begin{document} Numbered: \begin{multline} m(X)=1.00 \cdot 0.94 \cdot 1.30 \cdot 1.30 \cdot 1.21 \cdot 1.00 \cdot 1.07 \cdot 1.00 \cdot{} \\ 1.29 \cdot 0.86 \cdot 1.00 \cdot 0.95 \cdot 1.00 \cdot 0.91 \cdot 1.23 = 2.4262 \end{multline} Or an unnumbered version: \begin{multline*} m(X)=1.00 \cdot 0.94 \cdot 1.30 \cdot 1.30 \cdot 1.21 \cdot 1.00 \cdot 1.07 \cdot 1.00 \cdot{} \\ 1.29 \cdot 0.86 \cdot 1.00 \cdot 0.95 \cdot 1.00 \cdot 0.91 \cdot 1.23 = 2.4262 \end{multline*} \end{document}
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000100135803223, "perplexity": 284.48848811122286}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104683020.92/warc/CC-MAIN-20220707002618-20220707032618-00028.warc.gz"}
https://brilliant.org/discussions/thread/results-can-you-outsmart-everyone-else/
# Results: Can You Outsmart Everyone Else? Congrats to Mathh Mathh and Ahaan Rungta for being the closest! They will be receiving a Brilliant T-shirt for outsmarting everyone else. Number of participants: 25 Average: 18.944 W = 0.9 * ave : 17.0499 For the rules + actual game breakdown, see Can You Outsmart Everyone Else? Raw results: Values have been rounded down to 3 decimal places Name Entry Absolute Difference from W Math Man 8.539 8.510 Sharky Kesa 22.722 5.672 Daniel Liu 9.869 7.180 Mietantei Conan 10 7.049 Yannick Yao 31.006 13.956 Anthony Susevski 13.413 3.636 Tan Li Xuan 21.415 4.365 Ahaan Rungta 16 1.049 Aneesh Kundu 19.911 2.861 Victor Song 38.660 21.610 Andy Hayes 14.771 2.278 Mathh Mathh 16.581 0.468 Raj Magesh 14.106 2.943 Victor Martin 29.1 12.050 Enrique Naranjo Bejarano 42 24.950 Samuraiwarm Tsunayoshi 42.377 25.327 Zhijie Goh 28 10.950 Bogdan Simeonov 2.685 14.364 Tan Wee Kean 20.678 3.628 Daniel Ploch 0 17.049 Pranshu Gaba 15.154 1.895 Justin Wong 1 16.049 John Muradeli 0.616 16.433 Chung Kevin 10 7.049 Ajala Singh 45 27.950 Entries which did not follow the participation rules have been ignored. Here is a follow-up discussion: 1) No one voted above 45. Why? 2) Few people voted above 40. Does this make sense? Why, or why not? 3) Why didn't the rational strategy of "vote 0" win? It turned out to be the 5th worst performer. 4) If this game occurred again, how would your strategy change? 5) Is there an "ideal number" to submit? 6) What would be your best strategy in approaching this game? Note by Calvin Lin 4 years, 2 months ago MarkdownAppears as *italics* or _italics_ italics **bold** or __bold__ bold - bulleted- list • bulleted • list 1. numbered2. list 1. numbered 2. list Note: you must add a full line of space before and after lists for them to show up correctly paragraph 1paragraph 2 paragraph 1 paragraph 2 [example link](https://brilliant.org)example link > This is a quote This is a quote # I indented these lines # 4 spaces, and now they show # up as a code block. print "hello world" # I indented these lines # 4 spaces, and now they show # up as a code block. print "hello world" MathAppears as Remember to wrap math in $$...$$ or $...$ to ensure proper formatting. 2 \times 3 $$2 \times 3$$ 2^{34} $$2^{34}$$ a_{i-1} $$a_{i-1}$$ \frac{2}{3} $$\frac{2}{3}$$ \sqrt{2} $$\sqrt{2}$$ \sum_{i=1}^3 $$\sum_{i=1}^3$$ \sin \theta $$\sin \theta$$ \boxed{123} $$\boxed{123}$$ Sort by: I am happy I wasn't horribly off! Can we do another one of these? It was awesome! :D - 4 years, 2 months ago After playing this game, I Googled it and stumbled upon this interesting Wikipedia article on the Keynesian Beauty Contest, in which, similar to this game, A attempts to reason what B would believe C would think, ad infinitum (ouch!). Here's the link. Why didn't "Vote 0!" win? Possibly because you phrased the question as "Can you outsmart everyone else?", so the contest had two aims: to get the number closest to 0.9 times the average, and to make sure no one else got any closer than you, making 0 an irrational choice. If the aim had been simply to get the number closest to 0.9 times the average, sans any competition, the rational choice would have been zero, in which case everyone would have won and you would have run out of T-shirts! Also, since we were able to see the other participants' entries, we could guesstimate the value we would need to enter to win. I wonder if the results would have been more different if no one knew anyone else's entry (i.e. assuming everyone else was perfectly rational). Then this becomes analogous to the Keynesian beauty contest. What if, on the other hand, you gave us, say, 50 random nonzero values and told us that these would be included in calculating the average, in addition to our entries? What would the optimal strategy be then? After a lot of random Wikipedia surfing, I chanced upon the Monte Carlo method, and began wondering if it would make sense to apply it here. After all, if we repeatedly use large, random sets of data, eventually a small range of numbers would emerge as the clear winners, making it optimal to select these. But then again, if everyone starts doing this, the pattern is upset once more... We can't attempt to hack the system without messing up the system. - 4 years, 2 months ago The "official" name of this game is P-beauty, where "P" stands for the proportion of the average. In this case, we played a 0.9-Beauty game. Typically, the game involved blind bidding by everyone, so I had to tweak it slightly to suit our current system. I think that the random time cutoff added an interesting element to this game, where withholding your entry gave you more information, but you could potentially lose on the ability to submit an entry. In almost all simulations run by game theorists, they were unable to find a scenario where "Many people bid 0-1 (close to 0)", and there were few cases where a 0 to 1 bid actually won. There are various explanations for this, and it's worthwhile to find a "rational" explanation why a bid of 10 could make sense. Possible explanation: You may only assume that you are rational. You do not know to what degree someone else is rational. Possible explanation: Some people are just out to screw everyone else. Staff - 4 years, 2 months ago 1) In my opinion, when the first few numbers were put in, most people saw the scale of these numbers and went for smaller numbers. 2) Same as first. 3) I think that everyone wanted their own answer, unique and different. We had plenty of quirky as well as standard numbers. So really, we all overcame conformity. 4) I would have picked a slightly smaller number but who knows how if this game would be played if played again. A strategy everyone could have used, however, is to keep al the numbers roughly equivalent. 5) In my opinion, there really is no 'ideal number'. People picked completely random numbers which all happened to be mainly small. 6) Equivalence means that the average is closer to your number. However, Randomness means that you will get an average closer to you anyway. So really, there is no best strategy. They all work fine if you work in the same scale. - 4 years, 2 months ago That's much more a game of luck than of math! There could be some game changer person who doesn't want a Tshirt (already has one) could post an extreme value! Then that could change the winner from one person to the other! You need luck if you want a Tshirt, but moreover, you need luck to see that something like this has started ! For example me, I saw this discussion of results before I saw that original post i.e., after it has been closed ! $$:($$ - 4 years, 2 months ago It is partially luck, but remember, this is a game after all and it's for fun. You haven't lost anything so there's no need to sweat about it. I bet it would be less enjoyable if it was less luck-based. Also, such a game poses some nice questions, which you can't get if you make it extremely simple and 0% luck. - 4 years, 2 months ago Truly said, I agree ! You're right ! - 4 years, 2 months ago Well, you need luck to win the lottery, but if you do win, it's not like you have to divide it evenly between the people who were a number off. - 4 years, 2 months ago $$\Huge{\color{Red}{\textbf{LOL}}}$$ - 4 years, 2 months ago There were no stakes involved initially, other than "bragging rights" and the fun of participation. See my reply to Justin's comment. Check back often on Brilliant! You get exciting problems posted by your friends, and who knows what else you may see :) Staff - 4 years, 2 months ago ooh darn I thought it was every person posts 0.9 of what you post or something WHOA I would NOT have posted $$\frac{\pi^2}{16}$$! I would've posted 25. Oh well. - 4 years, 2 months ago I'm horrible at math, but this generally is my opinion. 1) And 2) The first few posts were around pi and e, thus nobody wanted to post anything much higher than that (since the aim was to get 0.9x the mean) 3) People are selfish. They don't want everyone to win, they want the whole thing to themselves 4) And 6) Hmm. I don't know, check the trend, if it happens to be around 10, play about 10 too, as not many people would want tout something much higher than the average 5) Hmm. Not really. At least I don't think so. - 4 years, 2 months ago Wait what... Oh it autocorrected the headings Should read: First line for 1 and 2 Second line for 3 Third line for 4 and 6 Fourth line for 5 - 4 years, 2 months ago When people type 1., 2., 3., we automatically assume that they want it to be a sequential list and rename it as such. I've edited your response to 1), 2), 3), to fool our system. Staff - 4 years, 2 months ago @mathh mathh , @Ahaan Rungta Can you email me your mailing address + T-shirt sizes? Thanks! Staff - 4 years, 2 months ago Wow. Just wow. - 4 years, 2 months ago I didn't participated in the contest but why "For those of you who were waiting to snipe at the last possible moment, tough luck."? - 2 years, 6 months ago Assuming that everyone is rational, it would make sense that they would take the average of everyone else's answers multiplied by 0.9 as their own answer, to have a higher chance of winning. However, when I did this simulation on excel ,(assuming that there were 25 people, and the first answer was 45),I found that the average was 33.85101, and when this was multiplied by 0.9, the result was 30.46591. This is way off from the results of the experiment , which shows that not everyone is rational. - 3 years, 10 months ago Hmm... I saw the first few posts and thought people would post things related to pi and e. So I thought the average would be around 16. Then, I remember seeing someone posting a number ~40 and I thought it would increase the average a lot. I guess the best strategy would be to get as much information as possible? Perhaps if we knew when @Calvin Lin is usually online, we could then guess when the discussion would be locked XD. I went to this website before entering my results. The average then was around ~14. - 4 years, 2 months ago 1) 45 was the first submission. In an attempt to be lower than the average at first sight (which started at 45) each consecutive person voted lower than the highest vote, creating a cascade of numbers securing the average around 15-20ish. I guess people weren't brave enough to vote higher because they feared there wouldn't be enough support in the higher tier - maybe that strategy could be easily foiled. Maybe early voters didn't predict future averages and followed the popular strategy of immediate averaging (at least I think that's what happened). - 4 years, 2 months ago :( Aww @Calvin Lin you overlooked my entry... (I thought it followed the rules?) Anyway it was terribly, terribly off by about $$8.574$$, so never mind :D - 4 years, 2 months ago Sorry, I decided to ignore your entry as it violated the rule of "you may not edit your entry". Staff - 4 years, 2 months ago Are these the only way to get Brilliant t-shirts? - 4 years, 2 months ago As for now, winning this competition is the only existing way to win Brilliant t-shirts; and now that it's done, there are no existing ways to win Brilliant t-shirts. I guess you'll have to wait for the next competition. @mathh mathh @Ahaan Rungta Congrats! But Ahaan, don't you already own a Brilliant.org t-shirt? I would expect you have already. - 4 years, 2 months ago Oh, I just forgot I had claimed a brilliant t-shirt around 8-9 months ago, but I haven't gotten it yet, ohh well. - 4 years, 2 months ago LOL, thanks. And yes, I owned a Brilliant shirt but I'm one size larger now. I was a baby when I got the previous one. =P - 4 years, 2 months ago
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8416118025779724, "perplexity": 2234.950301659927}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743913.6/warc/CC-MAIN-20181117230600-20181118012600-00181.warc.gz"}
https://matsci.org/t/the-interpretation-of-pressure-in-lammps-for-a-non-ideal-bonded-system/27686
The interpretation of pressure in LAMMPS for a non-ideal bonded system Dear LAMMPS community, Firstly, many thanks for this valuable mailing list, and for all the important answers that I have received from so many of you over the years. This time, I would like to ask a question about pressure that I came across in while trying to understand how the enthalpy of a system is calculated in LAMMPS. It is common knowledge that in MD, one evaluates pressure P as an ensemble of the instantaneous/microscopic pressure P*, that for a system of N particles in a volume V is as follows: P*=1/V(1/3\sum_i m_i v_i •v_i+1/3\sum_i r_i•f_i) where r is the position, v- velocity and f- force. The macroscopic pressure is P=<P*>, a statistical average of an ensemble. In case of pairwise interactions, the pressure is defined as P=<(N/V) k_BT>+<(1/3V)\sum_i sum_{j<i} r_{ij}•f_{ij} Where r_{ij} is the intermolecular vector between i and j and f the corresponding force. When we use NVE+Langevin (mimicking Brownian Dynamics), a simulation of an ensemble of ideal gas particles produces a linear relation between pressure and temperature upon cooling, as expected. However, what happens if we have the same NVE+Langevin dynamics whereby the monomers are connected through a bond, say harmonic or FENE? What if one has a collapsing polymer chain (gradually with a specific rate of cooling), where pressure is not the same as it is in the ideal gas case due to bonding interactions? If one wishes to study the behavior of enthalpy as a function of Temperature, for instance, how does one define the pressure that is part of the enthalpy calculation? With best wishes Anna Dear LAMMPS community, Firstly, many thanks for this valuable mailing list, and for all the important answers that I have received from so many of you over the years. This time, I would like to ask a question about pressure that I came across in while trying to understand how the enthalpy of a system is calculated in LAMMPS. It is common knowledge that in MD, one evaluates pressure P as an ensemble of the instantaneous/microscopic pressure P*, that for a system of N particles in a volume V is as follows: P*=1/V(1/3\sum_i m_i v_i •v_i+1/3\sum_i r_i•f_i) where r is the position, v- velocity and f- force. The macroscopic pressure is P=<P*>, a statistical average of an ensemble. In case of pairwise interactions, the pressure is defined as P=<(N/V) k_BT>+<(1/3V)\sum_i sum_{j<i} r_{ij}•f_{ij} Where r_{ij} is the intermolecular vector between i and j and f the corresponding force. When we use NVE+Langevin (mimicking Brownian Dynamics), a simulation of an ensemble of ideal gas particles produces a linear relation between pressure and temperature upon cooling, as expected. ​this last statement is not correct. particles in an ideal gas do not interact, so there are no forces. you only have the kinetic energy contribution and if you look at it closely, you will see that you can recover the ideal gas law from it.​ also the interactions implicitly contained in fix langevin do not apply to an ideal gas. ​the F <dot> R term only applies to interacting particles. However, what happens if we have the same NVE+Langevin dynamics whereby the monomers are connected through a bond, say harmonic or FENE? What if one has a collapsing polymer chain (gradually with a specific rate of cooling), where pressure is not the same as it is in the ideal gas case due to bonding interactions? If one wishes to study the behavior of enthalpy as a function of Temperature, for instance, how does one define the pressure that is part of the enthalpy calculation? ​the F <dot> R relation can be applied for bonded interactions just as well.​ it is easy to set up tests for that. i've done this for example last fall to validate two bugfixes: https://github.com/lammps/lammps/pull/213 ​axel.​ Thank you Axel, Dear Axel What remains a confusion, that all these terms are located where the polymer chain is - are different on “droplet” -the collapsed polymer- surface, and certainly zero outside of “droplet”. Whereas thermodynamic pressure must be constant across the system in equilibrium. Am I interpreting this correctly? Many thanks Anna Dear Axel What remains a confusion, that all these terms are located where the polymer chain is - are different on "droplet" -the collapsed polymer- surface, and certainly zero outside of "droplet". Whereas thermodynamic pressure must be constant across the system in equilibrium. Am I interpreting this correctly? ​i don't think so. pressure is a property of the entire observed system and the physical interpretation of (total) pressure is (the average) force per area on the bounding surface(s). thus the pressure of an isolated system is by definition zero, since​ the volume is infinite, i.e. unbounded. now, you *can* compute a "local" pressure, by subdividing the volume and computing a pressure for this, but there is nothing requiring a system to have equipartitioning on this kind of property in an inhomogeneous system. all that is required would be an (on average) zero net pressure on the dividing surfaces between two such subsystems. axel. I should also point out that if you are using the Langevin thermostat to model the presence of an implicit solvent, then the pressure reported by LAMMPS (based on velocities and forces of the solute “atoms”) is not capturing the dominant contribution to the pressure from the solvent. As an example, you could simulate polymer chains dissolved in water compressed to very high pressure using this approach, but the pressure reported by LAMMPS would not reflect the high pressure of the water molecules. The LAMMPS pressure is probably related to the osmotic pressure of the polymer. Aidan
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.846508800983429, "perplexity": 1297.2213817525458}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499871.68/warc/CC-MAIN-20230131122916-20230131152916-00341.warc.gz"}
https://www.aanda.org/articles/aa/full/2001/10/aah2468/aah2468.right.html
A&A 368, 292-297 (2001) DOI: 10.1051/0004-6361:20000545 ## Comets in full sky maps of the SWAN instrument ### I. Survey from 1996 to 1998 J. T. T. Mäkinen1 - J.-L. Bertaux2 - T. I. Pulkkinen1 - W. Schmidt1 - E. Kyrölä1 - T. Summanen1 - E. Quémerais2 - R. Lallement2 1 - Finnish Meteorological Institute, Geophysics Research, PO Box 503, 00101 Helsinki, Finland 2 - Service d'Aéronomie, BP 3, 91371 Verrières-le-Buisson, France Received 29 September 2000 / Accepted 8 December 2000 Abstract The SWAN instrument onboard the SOHO spacecraft is a Lyman scanning photometer cabable of mapping the whole sky with resolution. Since January 1996 the instrument has produced on average three full sky maps a week with the principal scientific objective of observing the distribution of heliospheric neutral hydrogen. In addition, these systematic observations are a valuable source for studying comets brighter than a visual magnitude of 7-11, the observing limit depending on the abundance ratios of produced radicals and the location of the comet relative to the galactic plane. When the data before the temporary loss of control of SOHO at the end of June 1998 were processed, altogether 18 comets were positively identified, of which one is a new discovery and another 5 can be detected on SWAN images before their actual discovery date. This demonstrates the feasibility of SWAN as an instrument for cometary surveys. The observations are used to estimate the water production rates of the detected comets near their perihelion passages. Key words: data analysis - surveys - comets - ultraviolet: solar system ### 1 Introduction The SOHO (Solar and Heliospheric Observatory) spacecraft has already been recognized as the most successful comet finder ever, the total number of discoveries exceeding 200 during less than five years of operation. Almost all new comets belong to the Kreutz sungrazer family and can be detected from LASCO (Large Angle and Spectrometric Coronagraph) images. While most of the SOHO instruments are studying the immediate surroundings of the Sun, one of them covers the rest of the sky. The SWAN (Solar Wind Anisotropies) instrument (Bertaux et al. 1995) onboard SOHO is a Lyman multianode scanning photometer with an instantaneous field of view (FOV) of with 25 pixels each. The instrument consists of two sensor heads each of which has an overall FOV of over steradians covering northern and southern ecliptic hemisphere, respectively. In the normal operation mode the instrument is capable of mapping the whole sky in one day but since the time has to be shared with other kinds of observations, the instrument has produced a complete sky map (Fig. 1) every three days on average. The observing activity has varied over the operational period (Fig. 2). In this respect the period from December 1996 to June 1998 represents the full capability of the instrument. At the end of the period the control of the spacecraft was lost for several months, and data gathered after the recovery show that the instrument was degraded through direct exposure to sunlight during the period of inactivity, decreasing its sensitivity and spectral resolution. Despite additional setbacks like the subsequent loss of attitude control gyroscopes of the spacecraft the observing campaign proceeds at full scale. Figure 1: SWAN full sky map in ecliptic coordinates. This resolution UV image recorded April 9, 1996, shows the spatial distribution of interplanetary hydrogen, the area of bright stars near the galactic plane and the comet C/1996 B2 Hyakutake, denoted by the white arrow. The view of the instrument is always obstructed in two directions: pointing the instrument too close to the Sun (s) might damage the sensors, and the antisolar direction (a) is partly obstructed by the spacecraft itself. The locations of the unobservable zones change as the spacecraft moves on its orbit around the Sun Open with DEXTER The primary use of the SWAN full sky UV maps is to study the latitudinal distribution of solar wind deductible from asymmetries in the cavity that it carves in the passing cloud of interstellar neutral hydrogen which resonantly scatters solar light (Bertaux et al. 1999b). Another contribution to neutral hydrogen in the solar system comes from the photodissociation of , the major volatile component of cometary nuclei. Several known comets have been observed separately, obtaining valuable results (Bertaux et al. 1998, 1999a; Combi et al. 2000). Since these observations cannot create a complete track record of comets possibly detectable by the instrument, especially new comets only discovered near their perihelion, a cometary survey was undertaken using the full sky images which constitute the most systematic set of measurements with best available coverage. Since the SWAN instrument was not designed primarily to detect comets, its performance in this respect is far from ideal. The spatial resolution is restricted to the of sensor pixels although this shortcoming is somewhat compensated by the fact that the hydrogen cloud of a comet is almost spherically symmetric and orders of magnitude larger than the visible dust tail. The point spread function (PSF) of the instrument has a standard deviation comparable to the pixel size near the line but grows significantly towards the limits of the observing window of 115-170 nm and has a slight dependence on observing geometry. This spreads out the images of hot UV stars on the sky maps. The situation is further complicated by line of sight (LOS) retrieval inaccuracies which contribute another of random diffusion to the observed signal. These effects together pose great difficulties for observing comets especially near the galactic plane where UV stars are abundant. Also the short exposure time of the normal mode - 13 s in any particular direction - gives only a modest signal to noise ratio despite the relatively high photometric sensitivity of about 0.84 counts per second per Rayleigh per pixel. All these complications are further accentuated on the post-recovery data but the systematic nature of observations combined with very high coverage and the location of the spacecraft at the L1 point between the Earth and the Sun, unpolluted by the Earth's exospheric emission still makes the SWAN instrument an important tool for cometary studies. Figure 2: Number of produced SWAN full sky maps per month from January 1996 to June 1998 Open with DEXTER ### 2 Survey method Figure 3: Detecting comets with neural network. The trail of C/1995 O1 (Hale-Bopp) as denoted by the white arrow, is easily discernible on the combined sky image for the period of May to July 1997 (above). The neural network tracks minute differences in subsequent images and highlights the trails of other comets as well (below). Denoted by arrows are the trails of (a) C/1997 O1 (Tilbrook), (b) 2P/Encke, (c) C/1995 O1, (d) C/1997 N1 (Tabur) and (e) C/1997 K2. The random noise around the data gap is caused by reflections from the spacecraft Open with DEXTER The aforementioned shortcomings in mind, a heuristic combined filtering and neural network method (Mäkinen et al. 2000b) was developed to detect all possible candidates for cometary objects. The single exposures of one observing session are binned into a grid in ecliptic coordinates. A variable-scale median difference filtering removes the smooth background caused by the interstellar neutral hydrogen cloud. The method is similar to the conventional overlaying of an image with its out-of-focus negative to reveal fine structure. The median filter was chosen because it preserves sharp changes in base intensity, e.g., around the borders of a data gap. Since the remaining comets and stars appear identical in shape and because of the mentioned inaccuracies the random deformations of stars prevent direct image subtraction, a 3-layer bidirectional neural network was developed to process several months worth of images in a batch, highlighting consistent motion of any image feature. If Skij is a set of T sky maps where the indices i and j denote longitude and latitude, respectively, and k the observing time, then the nodes in the successive layers A, B and C of the neural network are defined as (1) (2) where over connected i, j and b0 is a user-definable triggering threshold. The layer A and B node maps are thus two-dimensional and have one-to-one correspondence with the bins of the sky map. A layer A node fires when the corresponding bin receives its maximal value and a layer B node in the same location fires if the number of simultaneously firing adjacent layer A nodes exceeds the given threshold value. Then (3) where the summation is over a particular trajectory and g(i,j) is a geometric weight correction for spherical coordinates. The layer C nodes reside in the orbital parameters space and they are evaluated through initialization of potential traces of comets which are then followed over the layer B node map. The principle can be understood by visualizing an expanding and attenuating probability wave around every firing layer B node. Firing nodes are affected by the local probability field so that a nonlinear amplification of waves emitted by successive firings at suitable intervals soon forms a coherent pulse denoting the probable trajectory of a comet. The implementation of the network contains several parameters for restricting the evaluation of layer C nodes shortening the processing time considerably. After the layer C is completed the data need to be visualized. Because the node space is multidimensional, the flow is reversed by feeding the obtained weights through established connections back to the layer A from which the results can be read. The output from the neural network (Fig. 3) depicts cometary trails which can be immediately compared to the orbits of known comets. The sensitivity of the neural network has been tested with simulated data and found to be comparable or in some cases even better than visual inspection of time lapse series of filtered images. The largest problem with the algorithm so far has been that the observation times are only known to an accuracy of one day but this problem could be eliminated by improving the data preprocessing tools. ### 3 Survey results Comet Name m1 m1L Found First Last N 2P Encke 19970523.6 0.331 6 7.5 19970604 19970624 9 45P Honda-Mrkos-Pajdusakova 19951226.0 0.532 7 7.1 19960128 19960202 2 46P Wirtanen 19970314.2 1.064 10 10 19970125 19970313 20 55P Tempel-Tuttle 19980228.1 0.977 8 10 19980101 19980324 38 96P Machholz 1 19961015.1 0.125 4 19960928 19961012 4 103P Hartley 2 19971222.0 1.032 8 10 19971115 19980217 42 C/1995 O1 Hale-Bopp 19970401.1 0.914 -2 4.7 19950723 19960706 19970920 142 C/1995 Y1 Hyakutake 19960224.3 1.055 8 8.7 19951225 19960202 19960313 6 C/1996 B1 Szczepanski 19960206.9 1.449 7 8 19960127 19960121 19960305 7 C/1996 B2 Hyakutake 19960501.4 0.230 -1 7.1 19960130 19960328 19960710 18 C/1996 N1 Brewington 19960803.4 0.926 8 9.2 19960704 19960706 19960918 16 C/1996 Q1 Tabur 19961103.5 0.840 5 9.7 19960819 19960918 19961110 11 C/1997 K2 19970626.2 1.546 19970520 19970718 24 C/1997 N1 Tabur 19970815.5 0.396 10 19970702 19970701 19970811 18 C/1997 O1 Tilbrook 19970713.4 1.372 10 11.0 19970722 19970520 19970826 41 C/1997 T1 Utsunomiya 19971210.1 1.359 10 10 19971003 19970919 19971223 43 C/1998 H1 Stonehouse 19980414.4 1.324 11 19980422 19980423 19980507 9 C/1998 J1 SOHO 19980508.6 0.153 0 19980503 19980224 19980623 59 The SWAN images were processed by the neural network in quarterly sets. Altogether 18 comets as listed in Table 1 were identified from the SWAN full sky images from January 1996 to June 1998 of which C/1997 K2 proved to be a new discovery (Mäkinen et al. 2000a). The visibility period of every comet on SWAN images was determined by visually estimating the first and last images where the existence of a comet could be confirmed without a priori knowledge of its location. When the known orbital elements are used in combination with advanced processing methods these limits can be extended to yield valuable data. The list contains 6 short-period comets and 12 long-period comets. When the actual discovery dates of the long-period comets are compared to their visibility on SWAN it can be noticed that half of the long-period comets are visible on SWAN before their discovery. The situation is especially clear with comets C/1997 O1 (Tilbrook) and C/1998 J1 (SOHO) both of which were found near the perihelion but the SWAN instrument has recorded months of prediscovery data. The visual magnitude of a comet at the last time when it is detectable in SWAN images m1L is given in Table 1 where it is relevant. From the listed values it can be estimated that the limiting magnitude for the SWAN instrument is 7-8 in areas of high star density and 10-11 in voids. Furthermore, the southern hemisphere sensor is less sensitive than the northern one by a factor of about 2.6 and this affects the detection limit. Considering these values, the early disappearance of C/1995 O1 (Hale-Bopp) is quite peculiar but it can be explained by the fact that at the moment the comet was located in a densely populated part of the galactic plane in the direction of Puppis and moving relatively little and it was, at least by visual observation, lost among background star contamination. Figure 4: Spatial coverage of all comets detected by SWAN. The trails of detected comets are plotted on ecliptic coordinates with one day stepping. Open with DEXTER When the trails of the detected comets are depicted in the same plot (Fig. 4), it can be seen that the distribution is fairly homogeneous as a result of the high coverage of the SWAN instrument. When studying the list of known comets, it can be noticed that during the SWAN observing period two short-period comets, 22P/Kopff and 81P/Wild 2, were brighter than magnitude 11 for some time but nevertheless did not appear on the SWAN list of detected comets. It could be possible that the comets were relatively depleted in water thus making them that much dimmer in UV light but when one examines the orbits of these comets a more probable explanation can be seen. Both comets stay close to the ecliptic plane during the perihelion passage and once their locations are correlated with the SWAN visible area it can be noticed that both comets remain close to the part of the sky which is obstructed by the spacecraft. It can thus be concluded that the conducted survey is complete to the sensitivity limit of the instrument. ### 4 Water production rates for comets The water production rate of a comet is important because it is the most abundant product of cometary activity to which other production ratios are scaled. It can be derived from the observed neutral H distribution by assuming that the photodissociation of outside the collision sphere is the only noteworthy process. A simple stationary and monokinetic Haser model (Haser 1957; Festou 1981) is used to calculate the H column densities for a reference value of . The equations follow directly from recursively applying the relation for the density n where the production term P(r) for daughter products is determined by the destruction rate of parent molecules. Thus for the reaction (4) and for the reaction (5) where the subscripts denote hydrogen atoms produced in the first and second photodissociation, respectively, and the density is integrated over the entire column (6) and and vi are the inverse scale lengths and radial velocities of the respective particle populations and r is the shortest distance between the column and the nucleus. The intensity map is then calculated taking into account the effect of the radial velocity of the comet on the scattering efficiency and the spectral response of the instrument. The background contribution is eliminated by subtracting two observations from each other. The choice of observations is restricted by the conflicting requirements of sufficient separation between subsequent comet positions and minimal change in background signal possibly caused by variations in solar intensity or by the stereoscopic effect of SOHO orbiting around the Sun. In practice most comets do not move rapidly enough for the observations to be considered independent and thus a simultaneous least squares fit of models for both the positive and negative image of the comet along with a second degree polynomial for background residual is calculated with the Singular Value Decomposition (SVD) method. Since the model is linear in the fitting coefficients directly give the water production rates at both observations as well as an error estimation. For the purpose of this study, water production rate of each detected comet is calculated near the perihelion. The obtained values are listed in Table 2. Comet Date r 2P 970624 0.81 0.27 0.958 45P 960128 0.85 0.18 0.504 46P 970307 1.08 1.55 1.55 55P 980115 1.21 0.37 0.546 96P 961009 0.29 0.88 3.74 103P 971222 1.03 0.85 1.67 C/1995 O1 970401 0.91 1.34 1020 C/1995 Y1 960219 1.06 1.23 3.94 C/1996 B1 960202 1.45 0.74 1.88 C/1996 B2 960416 0.54 0.71 56.0 C/1996 N1 960807 0.93 0.90 0.774 C/1996 Q1 961015 0.92 0.47 4.22 C/1997 K2 970624 1.55 1.22 1.59 C/1997 N1 970710 0.97 1.17 0.913 C/1997 O1 970821 1.48 1.93 2.40 C/1997 T1 971206 1.36 1.86 3.86 C/1998 H1 980428 1.50 0.56 0.700 C/1998 J1 980516 0.32 0.88 71.4 ### 5 Discussion In the recent years the search for near-Earth objects (NEOs) has received considerable attention. The existing surveys, however, concentrate primarily on cataloguing all the potentially hazardous asteroids which is arguably a feasible objective since these objects are on orbits which bring them near enough to be observed every few years. The survey coverage is fairly limited which is demonstrated by the fact that amateurs still have a fair chance of discovering a new comet. Comets have much more variation in their orbital parameters and a large part of them may visit the inner solar system just once. Because of their higher kinetic energy and virtual invisibility before the nucleus is activated they pose a direct long term global threat. Collision probability estimates depend on the size distribution of comets which is still not adequately known. The late discovery of C/1997 K2 and other prediscovery data underline the advantage of an instrument with full sky coverage in detecting new comets. An instrument looking for OH emission as suggested by Brandt et al. (1996a, 1996b) would not be affected by the interstellar neutral hydrogen. On the other hand this is not the largest problem in the SWAN data. The binning method applied in producing full sky maps contributes to the degradation of spatial resolution since it is optimized for large bin sizes. With advanced processing methods higher resolution will be achieved. The use of the simple Haser model is justifiable in most situations. The largest comets provide ample data so that a more complex model can be used like with C/1995 O1 (Combi et al. 2000). Such models can also benefit from the rudimentary spectral measurement capacity of the instrument given by a H cell filter which can be used to derive the velocity of neutral H atoms. Full sky maps with H cell active are made as well although not as often as ordinary observations. Another shortcoming of the Haser model is apparent with comets whose undergoes rapid fluctuations. A dynamical model could not use the currently available full sky maps since their time resolution is too coarse. Thus, in combination with developing a time-dependent model one should use single exposures directly. The SWAN full sky maps are very useful for calculating systematic water production rates with some caveats. Besides that the random spreading of stars on SWAN full sky maps introduces uncertainties to the determination of cometary water production rates, other sources of error exist which are not compensated for in this study. The spatial and temporal solar intensity variations are considerable but in principle they could be tracked by observing the apparent background intensity changes at suitable areas over time. Furthermore, the instrument still has some calibration issues which must be addressed before a systematic record can be constructed. Especially the post-recovery data will need considerable calibrating effort. Once these issues have been adequately addressed, more comprehensive reviews can be produced concerning each major data set: the initial full sky observations, the post-recovery full sky observations and the comet specific observations. Acknowledgements SOHO is an international co-operative mission of ESA and NASA. SWAN was financed in France by CNES with support from CNRS and in Finland by TEKES and the Finnish Meteorological Institute. The work of J.T.T.M. and T.S. was supported by the Academy of Finland. ## References • Bertaux, J. L., Kyrölä, E., Quémerais, E., et al. 1995, Solar Phys., 162, 403 In the text NASA ADS • Bertaux, J. L., Costa, J., Quémerais, E., et al. 1998, Planet. Space Sci., 46, 555 In the text • Bertaux, J. L., Costa, J., Mäkinen, T., et al. 1999a, Planet. Space Sci., 47, 725 In the text • Bertaux, J. L., Kyrölä, E., Quémerais, E., Lallement, R., et al. 1999b, Space Sci. Rev., 87, 129 In the text NASA ADS • Brandt, J. C., A'Hearn, M. F., Randall, C. E., et al. 1996a, Earth, Moon, & Planets, 72, 243 In the text NASA ADS • Brandt, J. C., A'Hearn, M. F., Randall, C. E., et al. 1996b, Small Comets (SCs): An Unstudied Population in the Solar System Inventory, in: Completing the Inventory of the Solar System, ed. T. W. Rettig, & J. M. Hahn, ASP Conf. Ser., 107, 289 In the text • Combi, M. R., Reinard, A. A., Bertaux, J. L., et al. 2000, Icarus, 144, 191 In the text NASA ADS • Festou, M. C. 1981, A&A, 95, 69 In the text NASA ADS • Haser, L. 1957, B. Acad. R. Sci. Liège, 43, 740 In the text NASA ADS • Mäkinen, J. T. T., Bertaux, J. L., Laakso, H., et al. 2000a, Nature, 405, 321 In the text NASA ADS • Mäkinen, J. T. T., Syrjäsuo, M. T., & Pulkkinen, T. I. 2000b, A method for detecting moving fuzzy objects from SWAN sky images, in Proceedings of the IASTED International Conference, Signal and Image Processing, November 19-23, 2000, Las Vegas, USA, 151 In the text
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8155685067176819, "perplexity": 1913.653304192072}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247479729.27/warc/CC-MAIN-20190216004609-20190216030609-00336.warc.gz"}
http://mathhelpforum.com/geometry/47802-geometry-proof.html
# Math Help - Geometry Proof 1. ## Geometry Proof Complete this proof using axioms and already proven theorems. Given: Point P is equidistant from endpoints X and Y of line XY.. Prove: P is on the perpendicular bisector of XY Proof: Case 1: P is on XY. By the Given, P is the midpoint of XY so it is on the perpendicular bisector. Case 2: 1. Draw PX and PY. (On 2 points there is exactly 1 line) 2. Let M be the midpoint of XY. (Midpoint Thm) 3. Draw PM (On 2 points there is exactly 1 line) 4.... I'm stuck after that. Any help would be greatly appreciated. 2. Originally Posted by GoldendoodleMom Complete this proof using axioms and already proven theorems. Given: Point P is equidistant from endpoints X and Y of line XY.. Prove: P is on the perpendicular bisector of XY Proof: Case 1: P is on XY. By the Given, P is the midpoint of XY so it is on the perpendicular bisector. Case 2: 1. Draw PX and PY. (On 2 points there is exactly 1 line) 2. Let M be the midpoint of XY. (Midpoint Thm) 3. Draw PM (On 2 points there is exactly 1 line) 4.... I'm stuck after that. Any help would be greatly appreciated. i dont know if you have already proven the theorems i will be using.. 1. P is not on XY, there is a line passing P that is perpendicular to XY. (Parallel/Perpendicular Postulate) 2. Let M be the point of intersection of XY and the perpendicular passing P. (Non-parallel lines intersect) 3. PM is perpendicular to XY (By 1 and 2) 4. PMX and PMY form right triangles. (Def. of Right triangles) 5. PX is congruent to PY (Given) 6. PM is congruent to itself (Reflexive) 7. triangles PMX and PMY are congruent (Hypotenuse-Leg Theorem) 8. MX is congruent to MY. (CPCTC) 9. M is the midpoint of XY (Mdpt theorem (or def of mdpt of a segment)) 10. PM is the perpendicular bisector. (by 3 and 9) tell me what theorems are not yet proven so that i can revise it for you.. 3. Originally Posted by GoldendoodleMom Complete this proof using axioms and already proven theorems. Given: Point P is equidistant from endpoints X and Y of line XY.. Prove: P is on the perpendicular bisector of XY Proof: Case 1: P is on XY. By the Given, P is the midpoint of XY so it is on the perpendicular bisector. Case 2: 1. Draw PX and PY. (On 2 points there is exactly 1 line) 2. Let M be the midpoint of XY. (Midpoint Thm) 3. Draw PM (On 2 points there is exactly 1 line) 4. PX = PY (Given) 5. XM = YM (Definition of midpoint) 6. PM = PM (Reflexive Property of equality) 7. $\triangle XPM \cong \triangle YPM$ (SSS Postulate) 8. $\angle XPM \cong \angle YPM$ and $m\angle XPM = m\angle YPM$ (CPCTC and definition of congruency) 9. $\angle XPM \ \ and \ \ \angle YPM$ make up a Linear Pair (Definition of Linear Pair) 10. $m\angle XPM + m\angle YPM = 180$ (If two angles form a linear pair then they are supplementary) 11. $m\angle XPM + m\angle XPM = 180$ (Substitution using #8 and #10) 12. $2 m\angle XPM = 180$ (Addition) 13. $m\angle XPM = 90$ (Division) Similarly, you can show that $m\angle YPM = 90$ 14. $\angle XPM$ is a right angle. (Definition of right angle) 15. $\overline{PM} \perp \overline {XY}$ (If two lines meet to form right angles, then they are perpendicular) 16. P lies on $\overline{PM}$ (Step #3) Q.E.D. P lies on the perpendicular bisector of $\overline {XY}$ I'm stuck after that. Any help would be greatly appreciated. Here's another approach. I Don't know how much detail you need. Sometimes, geometry teachers can be pretty picky.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 13, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9078587889671326, "perplexity": 1852.318452010926}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00194-ip-10-147-4-33.ec2.internal.warc.gz"}
https://portalrecerca.uab.cat/en/publications/differential-equations-with-given-partial-and-first-integrals
# Differential equations with given partial and first integrals Jaume Llibre, Rafael Ramírez Research output: Chapter in BookChapterResearchpeer-review ## Abstract © Springer International Publishing Switzerland 2016. In this chapter we present two different kind of results. First, under very general assumptions we characterize the ordinary differential equations in ℝN which have a given set of either M ≤ N, or M >N partial integrals, or M <N first integrals, or M ≤ N partial and first integrals. Second, in ℝN we provide some results on integrability, in the sense that the characterized differential equations admit N −1 independent first integrals. Original language English Progress in Mathematics 1-40 39 313 2296-505X https://doi.org/10.1007/978-3-319-26339-7_1 Published - 1 Jan 2016 ## Fingerprint Dive into the research topics of 'Differential equations with given partial and first integrals'. Together they form a unique fingerprint.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8746044635772705, "perplexity": 2607.586952751392}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057589.14/warc/CC-MAIN-20210925021713-20210925051713-00407.warc.gz"}
http://www.mathematicalfoodforthought.com/2006/06/five-whole-numbers-topic_10.html
## Saturday, June 10, 2006 ### Five Whole Numbers. Topic: Algebra/Polynomials. Level: AIME. Problem: (1999 JBMO - #1) Let $a, b, c, x, y$ be five real numbers such that $a^3+ax+y = 0$, $b^3+bx+y = 0$, and $c^3+cx+y = 0$. If $a, b, c$ are all distinct numbers, prove that their sum is zero. Solution: Consider the polynomial $f(t) = t^3+tx+y$. Since it is a third degree polynomial, it can have at most three real roots. But we are given that $f(a) = 0$, $f(b) = 0$, $f(c) = 0$ so we know that it in fact has exactly three distinct real roots and no other ones. But by Vieta's Formulas, we know that the sum of the roots is the coefficient of the $t^2$ term, which is zero. Thus $a+b+c = 0$, as desired. QED. -------------------- Comment: A pretty simple problem to be on an olympiad, even if it is the Junior Balkan Math Olympiad (JBMO). The solution is almost immediate if you have worked with polynomials a lot. -------------------- Practice Problem: (2000 JBMO - #1) Let $x$ and $y$ be positive reals such that $x^3 + y^3 + (x + y)^3 + 30xy = 2000$. Show that $x + y = 10$. 1. Uh, x^3+y^3 = (x+y)^3 - 3xy(x+y) then just treat it as a polynomial in (x+y), and you're doneee. 2. Viewing the problem as a polynomial in t is a nice trick, I should learn to see those kinds of solutions XD Anonymous: Your solution method can be generalized. In the general case we are presented with a condition symmetric in all of its variables, so we may substitute the symmetric polynomials u = x + y, v = xy. In this case, we are left with u^3 - 3uv + u^3 + 30v = 2000 2u^3 + 3v(10-u) = 2000 u(2u^2 + 27v) = 2000 (Incidentally, I don't see how it's immediately obvious from here that u = 10 is the only real solution. Possibly I'm tired.) Let w = 10-u. Then 2(1000 - 300w + 30w^2 - w^3) + 3vw = 2000 2w^3 - 60w^2 + (600 - 3v) w = 0 w = 0 is an obvious root, and it's easy to show the other two are complex. 3. Actually, your expression factors into (u - 10)(2u^2 +20u + 200 - 3v) = 0, from which it isn't so difficult to show that the second term is always positive. Completing squares is one way to do it. 4. Whoa, okay, the third line of that first part is completely wrong. Not sure what I did there.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8699719309806824, "perplexity": 498.1199525557531}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221216718.53/warc/CC-MAIN-20180820160510-20180820180510-00002.warc.gz"}
http://math.stackexchange.com/questions/133411/orthogonal-polynomials-in-the-limit-n-rightarrow-infty
# Orthogonal polynomials in the limit $n \rightarrow \infty$ can we find a set of Orthogonal Polynomials so in the limit $n \rightarrow \infty$ satisfty $\frac{\sin(x)}{x}= \frac{p_{2n}(x)}{p_{2n}(0)}$ the set of orthogonal polynomials satisfy $\int_{-\infty}^{\infty}dx w(x) P_{n}(x)P_{m}(x)= \delta _{n}^{m}$ and the measure $w(x) \ge 0$ and $w(x)=w(-x)$ is this problem solvable - Are you sure, that $w(x)$ appears twice under the integral? –  draks ... Apr 18 '12 at 12:36 No it was a mistake :) sorry –  Jose Garcia Apr 18 '12 at 15:05 Maybe something like $p_n ~ 1-x^2/3!+x^4/5!...$. Of course you still have to find a weight $w(x)$. –  Alex R. Apr 18 '12 at 18:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9341704249382019, "perplexity": 501.2325839114386}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375097354.86/warc/CC-MAIN-20150627031817-00171-ip-10-179-60-89.ec2.internal.warc.gz"}
http://tex.stackexchange.com/questions/36052/verbatim-text-inside-italic-block
# verbatim text inside italic block I am trying the following: \begin{frame}[fragile]{OCP en Exceptions} If you throw a checked exception from a method in your code and the \verb!catch! is three levels above, \textit{you must declare that exception in the signature of each method between you and the \verb!catch!}'' \end{frame} Somehow I can't use the verbatim text inside the italic block. How do I solve this the right way? - You can't use verbatim material inside another macros argument. The already mentioned cprotect package works around that, but I would call it overkill using it here. Simply use a font declaration inside a group instead of a font macro, i.e. {\itshape text \verb+$%^+ text} instead of \textit{text \verb+$%^+ text}. \begin{frame}[fragile]{OCP en Exceptions} If you throw a checked exception from a method in your code and the \verb!catch! is three levels above, {\itshape you must declare that exception in the signature of each method between you and the \verb!catch!}'' \end{frame} Also using verbatim only to get the font effect is a misuse. Simply use tt font directly! For often used real verbatim material I would recommend to define them as macros using \verbdef of the verbdef package. (There is also \Verbdef of my newverbs package if you need it in an expandable form, but normally this is not the case.) - Use cprotect package. More details, see the manual. \documentclass{article} \usepackage{cprotect} \begin{document} \cprotect\textit{\verb|\LaTeX| is one of my primary weapons} \end{document} -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9150682687759399, "perplexity": 4255.170865872471}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824757.8/warc/CC-MAIN-20160723071024-00224-ip-10-185-27-174.ec2.internal.warc.gz"}
https://poritz.net/jonathan/share/OER4ColoMATYC/
## Spring 2020 ColoMATYC Conference Open Educational Resources for Mathematics: the First 2,500 Years ### Jonathan A. Poritz Eventually, I'll port this to HTML, but at the moment it is only available as PDF or, for those who want to remix, in the original LATEX source. And here are the files one needs to make this with LATEX: 1. The actual source: OER4ColoMATYC.tex. 2. Required image files: On my machine, which is running Linux Mint "18.3 (Sylvia)" and which has a quite complete suite of texlive packages (including, crucially, texlive-base and texlive-latex-base; other required LATEX packages will be obvious to anyone who knows LATEX when looking at the \includepackage commands in the OER4ColoMATYC.tex source file), at the command line, I do the following in a directory containing the above files: pdflatex OER4ColoMATYC pdflatex OER4ColoMATYC If you want just to download one archive with all of those files in it, you could use either of the following two choices:
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8931683897972107, "perplexity": 2593.4857759953657}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655886178.40/warc/CC-MAIN-20200704135515-20200704165515-00238.warc.gz"}
https://cs.stackexchange.com/questions/64717/are-lrk-languages-and-dcfls-equivalent?noredirect=1
# Are LR(k) languages and DCFLs equivalent? In the familiar book of Theory of Computation by M. Sipser, the author proved that for endmarked context-free languages, the set of languages having a LR(k) grammar for a predefined $k \in \mathbb{N}$ (denoted as LR(k) languages) is the set of deterministic context-free languages (denoted as DCFL). My question is also about the relation of those two sets, but in the broader field of all context-free languages. Specifically, are LR(k) languages and DCFLs equivalent? And, in which book can I find a proof? For now, I just have some surrounding facts as followed. Also in the book, the author proved that LR(0) languages strictly belong to DCFLs, and LR(k) languages belong to DCFLs for all $k \in \mathbb{N} \setminus \{0\}$. In addition, it's obvious that LR(a) languages belong to LR(b) languages for all $0 \le a < b$. Recently, I've got an unchecked fact from here: LR(1) languages and DCFLs are equivalent. • This question resp. its answer may be interesting for you. – Raphael Oct 17 '16 at 12:51 ## 1 Answer According to Wikipedia: • For every fixed $k \geq 1$: A language has an LR($k$) grammar iff it is DCFL. • A language has an LR(0) grammar iff it is DCFL and has the prefix property (no word is a prefix of another word). The first property is proved in Knuth's original paper, in Section V on page 628.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8224074244499207, "perplexity": 1386.4133748168672}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107922746.99/warc/CC-MAIN-20201101001251-20201101031251-00381.warc.gz"}
http://mathhelpforum.com/geometry/211126-assistance-sought-how-go-about-solving-graphing-problem-print.html
# Assistance sought on how to go about solving graphing problem. Printable View • January 10th 2013, 02:54 PM KhanDisciple Assistance sought on how to go about solving graphing problem. The text asks: Find the possible slopes of a line that passes through (4,3) so that the portion of the line in the first quadrant forms a triangle of area 27 with the positive coordinate axes. I know that the equation for finding the area of a triangle is $\frac{1}{2}bh$. How do i go about solving this algebraically? The answers in the book are $\frac{-3}{2}, \frac{-3}{8}$. I do not know how they arrived at these answers. I don't know how you could graph it without first determining more than one set of points. Any help would be greatly appreciated. Thanks. • January 10th 2013, 03:03 PM HallsofIvy Re: Assistance sought on how to go about solving graphing problem. Let (X, 0) and (0, Y) be the points at which such a line intercepts the x and y axes, respectively. Then the area of the triangle formed is (1/2)XY= 27 we we must have XY= 54. Any line through (4,3) can be written y= m(x- 4)+ 3. That means we must have y= m(0-4)+ 3= -4m+ 3= Y and 0= m(X- 4)+ 3 so that mX- 1= 0, mX= 1, and X= 1/m. XY= (-4m+ 3)(1/m)= 54. Solve that for m. • January 10th 2013, 05:10 PM Soroban Re: Assistance sought on how to go about solving graphing problem. Hello, KhanDisciple! Here's another approach . . . Quote: Find the possible slopes of a line that passes through (4,3) so that the portion of the line in the first quadrant forms a triangle of area 27 with the positive coordinate axes. Answers: $\text{-}\tfrac{3}{2},\;\text{-}\tfrac{3}{8}$ Code:       |     b*       |  *       |    *  (4,3)       |        o       |          *       |              *   - - + - - - - - - - - * - -       |                a A line through $(4,3)$ with slope $m$ . . has the equation: $y - 3\:=\:m(x-4) \quad\Rightarrow\quad y \:=\:mx + 3 - 4m$ It has x-intercept: $\left(\frac{4m-3}{m},\:0\right)$ It has y-intercept: $\big(0,\:-[4m-3]\big)$ The area of the triangle is: . $A \:=\:\tfrac{1}{2}bh \:=\:\tfrac{1}{2}\left(\frac{4m-3}{m}\right)\big(-[4m-3]\big)$ The area is 27: . $-\tfrac{1}{2}\frac{(4m-3)^2}{m} \;=\;27$ . . . . . . . . $(4m-3)^2 \:=\:-54m \quad\Rightarrow\quad 16m^2 - 24m + 9 \:=\:-54m$ . . . . . . . $16m^2 + 30m + 9 \:=\:0 \quad\Rightarrow\quad (2m+3)(8m+3) \:=\:0$ Therefore: . $\begin{Bmatrix}2m+3 \:=\:0 & \Rightarrow & m \:=\:\text{-}\frac{3}{2} \\ \\[-4mm] 8m+3 \:=\:0 & \Rightarrow & m \:=\:\text{-}\frac{3}{8} \end{Bmatrix}$ • January 10th 2013, 11:33 PM KhanDisciple Re: Assistance sought on how to go about solving graphing problem. Thanks a ton hallsofivy, and soroban, you guys are the best. Thank you for showing me how to solve this algebraically (the image of the graph helped also), although in hallsofivy's answer I don't see how solving for the x-intercept 0=m(x-4)+3 can be reduced to mx-1=0. I really appreciate it thank you guys.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 13, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8614447116851807, "perplexity": 1126.5252335591015}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802770400.105/warc/CC-MAIN-20141217075250-00074-ip-10-231-17-201.ec2.internal.warc.gz"}
http://aas.org/archives/BAAS/v27n2/aas186/abs/S503.html
Gamma Ray Bursts from Relativistic Outflows: Spectral Characteristics of Multiple Delayed Bursts Session 5 -- Gamma Ray Astronomy Display presentation, Monday, June 12, 1995, 9:20am - 6:30pm ## [5.03] Gamma Ray Bursts from Relativistic Outflows: Spectral Characteristics of Multiple Delayed Bursts Hara Papathanassiou and Peter M\'esz\'aros (Penn State) We present spectral calculations for Gamma Ray Bursts (GRB) capable of explaining the bulk-part of the bursts' spectral properties. Relativistic outflows are invoked by many GRB models. Here, we consider the unsteady relativistic wind (Rees and M\'esz\'aros, 1994) in a cosmological framework. We stress its ability to produce up to three bursts with distinct spectra, and examine them, in the context of explaining bursts like the exceptional burst of 2/17/1994, as well as the more typical ones. The optically thick wind, that lasts for \$t_{w}\$ and varies on \$t_{var}\$, consists mainly of radiation with some baryonic contamination. When relativistic expansion (\$\Gamma \sim 10^{2}\$) turns the flow optically thin, a burst approximating a black body spectrum of \$T_{eff} \sim 0.1- 10 keV\$ with \$m_{bol} > 9\$ occurs, lasts for \$t_{w}\$ and goes undetected (in most cases). The bulk of the energy will be dissipated and radiated away later, partly when,due to the flow being unsteady, internal shocks develop and partly when the swept-up surrounding material decelerates the flow causing the formation of a blast wave and a reverse shock. The former burst lasts for \$t_{w}\$ while the latter takes the typical expansion time-scale (\$t_{ex}\$). Those two bursts will occur with a time difference of \$2 t_{ex}\$ and will both have non-thermal spectra. The shocks accelerate electrons and carry frozen-in magnetic field and/or turbulently generate it, thus giving rise to radiation via synchrotron and inverse Compton scattering processes. The power indices and spectral break frequencies are in good agreement with observations. The delayed burst will be, in general, brighter and more energetic (up to \$10^{2} GeV\$) than the one due to internal shocks (\$10^{2} keV- GeV\$). The latter only, or both bursts, are expected to have low energy tails (X-ray down to UV) that may be detectable. A steady wind will not produce the 'internal shock' burst, and a flow that is very poor in baryonic contaminants will result in a thermal burst only thus covering the full range of the bursts' observed spectral characteristics.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8875175714492798, "perplexity": 4112.221572097119}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422119446463.10/warc/CC-MAIN-20150124171046-00074-ip-10-180-212-252.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/68732/is-the-problem-asking-to-show-that-r-times-nabla-psi-satisfies-wave-equation
# Is the problem asking to show that $r\times \nabla \psi$ satisfies wave equation wrong? Considering the wave equation in spherical coordinates, if we know that $\psi(\vec{r})$ is a solution, then $\vec{r}\times \nabla \psi$ is also a solution. (The hint is to take the difference between $\psi(r,\theta,\phi)$ and $\psi(r,\theta ',\phi ')$) If I interpreted it correctly, it says that if $\psi$ solves $$\nabla^2\psi - \frac{1}{c^2}\frac{\partial^2 \psi}{\partial t^2} =0$$ Then show that: $$\nabla^2(\vec{r}\times \nabla\psi) - \frac{1}{c^2}\frac{\partial^2 (\vec{r}\times \nabla\psi)}{\partial t^2} =0$$ This question seems outright wrong. As the argument of the Laplacian is a vector. Or am I misinterpreting it? - Sure, you can have a vector Laplacian - it's just the Laplacian of each component (in Cartesian coordinates). Observe (with some notational sleuth, and using $\Delta$ in place of $\nabla^2$), $$\frac{\partial}{\partial x_i} (\;\vec{r}\times\nabla\psi)=\vec{e}_i\times\nabla\psi+\vec{r}\times\left(\frac{\partial}{\partial x_i}\nabla\psi\right),$$ $$\implies\frac{\partial^2}{\partial x_i^2} (\;\vec{r}\times\nabla\psi)=2\,\left(\frac{\partial}{\partial x_i}\vec{e}_i\right)\times\nabla\psi+\vec{r}\times\left(\frac{\partial^2}{\partial x_i^2} \nabla\psi\right)$$ $$\implies \Delta(\;\vec{r}\times\nabla\psi)=2\nabla\times\nabla\psi+\vec{r}\times\nabla(\Delta\psi).$$ Note how we move around the partial derivatives in suggestive and loose but legal and meaningful ways. And $\nabla\times\nabla=\vec{0}$ (it kills any function - basic vector calculus identity), so that drops off. Putting the vector function $\vec{r}\times\nabla\psi$ into the LHS of the differential equation gives $$\vec{r}\times\nabla(\Delta\psi)-\frac{1}{c^2}\frac{\partial^2}{\partial t^2}(\;\vec{r}\times\nabla\psi)=\vec{r}\times\nabla\left(\Delta\psi-\frac{1}{c^2}\frac{\partial^2 \psi}{\partial t^2}\right)$$ $$=\vec{r}\times\nabla(0)=\vec{0}.$$ Thus the vector function does indeed satisfy the differential equation. This may not be the sort of derivation your text or homework desires - it might want you to exploit rules specific to polar coordinates, hence the hint, but I can't off the top of my head figure out a heading in that direction.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9575444459915161, "perplexity": 200.97206547719827}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736679281.15/warc/CC-MAIN-20151001215759-00004-ip-10-137-6-227.ec2.internal.warc.gz"}
http://physics.stackexchange.com/questions/62277/smallest-force-to-move-a-brick
Smallest force to move a brick Having a brick lying on a table, I can exert horizontal force equal to $\mu m g$ to a middle of it's side, and it will start moving (assume $\mu$ is the friction coefficient). However, can I make the brick moving with less horizontal force? May be, applying it not to a middle of a side can help? I have no idea of how to calculate or estimate it, but it would be interesting to know. - Suppose that you exert the force with angle $\theta$ (with respect to ground). Then you will have: $$\mu(mg-F\sin(\theta))=F\cos(\theta)\text{, so }F=\frac{{\mu}mg}{\cos(\theta)+{\mu}\sin(\theta)}.$$ Now, if you minimize this function with respect to $\theta$ you will find that $$\tan(\theta)=\mu.$$ Replacing this $\theta$ (a function of $\mu$) for $\sin(\theta)$ and $\cos(\theta)$ in the second formula (for $F$) you will have: $$F_{min}=\frac{\mu}{\sqrt{1+\mu^2}}mg.$$ - This may be a bit of an eye-rolly answer, but since you specifically state that all you care about is minimizing the horizontal force: All you need to do is lower the normal force since the horizontal force you need to apply to get the brick moving needs to overcome the force of friction. Since friction, in the static case, is the coefficient of static friction times the normal force - and assuming that you can't change the interface between the surface and brick and therefore can't change the coefficient of static frction - lowering the normal force lowers the amount of horizontal force you need to apply. Therefore, apply a vertical force to the brick to lower the normal force and - ZOOM - you're off to the races! Ahem. You know what I mean... -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9262105226516724, "perplexity": 257.8472534279938}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997879037.61/warc/CC-MAIN-20140722025759-00059-ip-10-33-131-23.ec2.internal.warc.gz"}
http://www.maplesoft.com/support/help/Maple/view.aspx?path=transform(deprecated)/standardscore
transform(deprecated)/standardscore - Help stats[transform, standardscore] replace each item by its standard score Calling Sequence stats[transform, standardscore[n_constraints]](data) transform[standardscore[n_constraints]](data) Parameters n_constraints - (optional, default=0) use 0 for population, 1 for sample data - statistical list Description • Important: The stats package has been deprecated. Use the superseding package Statistics instead. • The function standardscore of the subpackage stats[transform, ...] replaces each item in data by its standard score. • The standard score of a quantity $x$ is $\frac{x-\mathrm{mean}}{\mathrm{standarddeviation}}$, where mean and $\mathrm{standarddeviation}$ are the mean and the standard deviation  of data, respectively. • Standard scores are also known as zscores, or z-scores. • The quantity n_constraints  is explained in more detail in the description of stats[describe,standarddeviation]. • The standard score is very useful in comparing distributions. For example, a student can compare her relative standing between two courses if she knows her mark, the courses averages and standard deviations. • Results expressed in terms of standard score are also known as being expressed in standard units. • By definition, the set of standard scores of a list of statistical data will have mean equal to 0 and standard deviation equal to 1. • Missing items remain unchanged. Weighted data and class data are recognized. Examples Important: The stats package has been deprecated. Use the superseding package Statistics instead. > $\mathrm{with}\left(\mathrm{stats}\right):$ > $\mathrm{data}≔\left[\mathrm{Weight}\left(3,10\right),\mathrm{missing},4,\mathrm{Weight}\left(11..12,3\right)\right]$ ${\mathrm{data}}{≔}\left[{\mathrm{Weight}}{}\left({3}{,}{10}\right){,}{\mathrm{missing}}{,}{4}{,}{\mathrm{Weight}}{}\left({11}{..}{12}{,}{3}\right)\right]$ (1) The standard scores for the given data are > $\mathrm{transform}[\mathrm{standardscore}]\left(\mathrm{data}\right):$$\mathrm{transform}[\mathrm{apply}[\mathrm{evalf}]]\left(\right)$ $\left[{\mathrm{Weight}}{}\left({3.}{,}{10}\right){,}{\mathrm{missing}}{,}{4.}{,}{\mathrm{Weight}}{}\left({11.}{..}{12.}{,}{3}\right)\right]$ (2) Here is another way of computing the standard scores. > $\mathrm{transform}[\mathrm{divideby}[\mathrm{standarddeviation}]]\left(\mathrm{transform}[\mathrm{subtractfrom}[\mathrm{mean}]]\left(\mathrm{data}\right)\right)$ $\left[{\mathrm{Weight}}{}\left({-}\frac{{53}}{{9385}}{}\sqrt{{9385}}{,}{10}\right){,}{\mathrm{missing}}{,}{-}\frac{{5}}{{1877}}{}\sqrt{{9385}}{,}{\mathrm{Weight}}{}\left(\frac{{171}}{{9385}}{}\sqrt{{9385}}{..}\frac{{199}}{{9385}}{}\sqrt{{9385}}{,}{3}\right)\right]$ (3) > $\mathrm{transform}[\mathrm{apply}[\mathrm{evalf}]]\left(\right)$ $\left[{\mathrm{Weight}}{}\left({-}{0.5470899427}{,}{10}\right){,}{\mathrm{missing}}{,}{-}{0.2580612937}{,}{\mathrm{Weight}}{}\left({1.765139249}{..}{2.054167898}{,}{3}\right)\right]$ (4) And here is a third way. > $\mathrm{the_sd}≔\mathrm{describe}[\mathrm{standarddeviation}]\left(\mathrm{data}\right)$ ${\mathrm{the_sd}}{≔}\frac{{1}}{{28}}{}\sqrt{{9385}}$ (5) > $\mathrm{the_mean}≔\mathrm{describe}[\mathrm{mean}]\left(\mathrm{data}\right)$ ${\mathrm{the_mean}}{≔}\frac{{137}}{{28}}$ (6) > $\mathrm{transform}[\mathrm{apply}[\mathrm{unapply}\left(\frac{x-\mathrm{the_mean}}{\mathrm{the_sd}},x\right)]]\left(\mathrm{data}\right)$ $\left[{\mathrm{Weight}}{}\left({-}\frac{{53}}{{9385}}{}\sqrt{{9385}}{,}{10}\right){,}{\mathrm{missing}}{,}{-}\frac{{5}}{{1877}}{}\sqrt{{9385}}{,}{\mathrm{Weight}}{}\left(\frac{{171}}{{9385}}{}\sqrt{{9385}}{..}\frac{{199}}{{9385}}{}\sqrt{{9385}}{,}{3}\right)\right]$ (7) > $\mathrm{transform}[\mathrm{apply}[\mathrm{evalf}]]\left(\right)$ $\left[{\mathrm{Weight}}{}\left({-}{0.5470899427}{,}{10}\right){,}{\mathrm{missing}}{,}{-}{0.2580612937}{,}{\mathrm{Weight}}{}\left({1.765139249}{..}{2.054167898}{,}{3}\right)\right]$ (8)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 21, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8924838304519653, "perplexity": 2128.665618529794}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320209.66/warc/CC-MAIN-20170624013626-20170624033626-00503.warc.gz"}
https://api.queryxchange.com/q/21_3387368/inequality-using-continuity-of-the-function/
# Inequality using continuity of the function by Random-generator   Last Updated October 09, 2019 21:20 PM Suppose $$f\in C[0,1]$$ and $$f(a)>0$$ for an $$a\in [0,1]$$. To show that there is a closed interval $$[c,d]\subseteq [0,1]$$(which contains $$a$$) such that $$f(x)\geq f(a)/2$$ for all $$x\in [c,d].$$ My try: From continuity of $$f$$ at $$a$$, for all $$\epsilon>0$$, there exists a $$\delta>0$$ such that $$|x-a|<\epsilon \implies |f(x)-f(a)|<\delta.$$ Can we say that there is some $$\epsilon>0$$ for which $$0<\delta< f(a)/2$$, and hence $$f(x)\geq f(a)/2$$ the statement is true? Thanks in advance for any help! Tags : First of all, it should be $$|x-a|<\delta\implies |f(x)-f(a)|<\epsilon$$ (your $$\epsilon$$ and $$\delta$$ were switched). Now, see what happens if you take $$\epsilon=\frac{f(a)}{2}.$$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 19, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9509707093238831, "perplexity": 97.48326351602935}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347436828.65/warc/CC-MAIN-20200604001115-20200604031115-00510.warc.gz"}
https://kb.osu.edu/dspace/handle/1811/19238
# A BEHIND THE SCENES LOOK AT INTRAMOLECULAR VIBRATIONAL ENERGY REDISTRIBUTION Please use this identifier to cite or link to this item: http://hdl.handle.net/1811/19238 Files Size Format View 1999-MA-03.jpg 160.3Kb JPEG image Title: A BEHIND THE SCENES LOOK AT INTRAMOLECULAR VIBRATIONAL ENERGY REDISTRIBUTION Creators: Pate, Brooks H. Issue Date: 1999 Publisher: Ohio State University Abstract: The flow of vibrational energy in polyatomic molecules is the fundamental physical process of chemical kinetics. Statistical theories of reaction rates, such as Rice-Ramsperger-Kassel-Marcus (RRKM) theory, assume that the redistribution of vibrational energy is rapid compared to reaction times. Over the past decade high-resolution infrared spectroscopy techniques in molecular beams have been used to quantitatively determine the time scale for energy redistribution in isolated molecules. We have developed high-sensitivity infrared-microwave double-resonance techniques that permit rapid and accurate assignment of these complex spectra. However, single photon infrared spectroscopy still provides limited information on the intramolecular dynamics because the dynamical information is filtered through a single vibrational state. One important problem in intramolecular dynamics that is difficult to study through infrared spectroscopy is conformational isomerization. To study this fundamental chemical process we have developed the theory for the rotational spectrum of a single molecular quantum state in an energy region where vibrational energy flow and isomerization occur. The theory is an extension of the exchange (or motional) narrowing theories first formulated for NMR spectroscopy. To make the measurements of the rotational spectrum of a highly excited quantum state we employ infrared-microwave double-resonance and infrared-microwave-microwave triple resonance techniques that exploit the Autler-Towness splitting of states (or AC Stark effect). The theory of this type of spectroscopy and useful features of our spectroscopy techniques will be illustrated through the spectra we have measured for propargyl alcohol, allyl fluoride, 2-fluoroethanol, and 4-chlorobut-1-yne. Description: Author Institution: Department of Chemistry, University of Virginia URI: http://hdl.handle.net/1811/19238 Other Identifiers: 1999-MA-03
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8113115429878235, "perplexity": 3191.2885391255027}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736678520.45/warc/CC-MAIN-20151001215758-00052-ip-10-137-6-227.ec2.internal.warc.gz"}
https://wiki.contextgarden.net/index.php?title=Command/definetypeface&oldid=22064
# Command/definetypeface (diff) ← Older revision | Latest revision (diff) | Newer revision → (diff) # \definetypeface ## Syntax \definetypeface[...][...][...][...][...][...] [...] TEXT (typescript identifier) [...] rm ss tt mm hw cg ("basic style") [...] IDENTIFIER (existing font set) [...] IDENTIFIER (existing font set) [...] IDENTIFIER (?) [...] features = IDENTIFIER rscale = NUMBER encoding = IDENTIFIER text = IDENTIFIER ## Description \definetypeface sets up a typeface for use within a typescript. The third and fourth arguments to \definetypeface are pointers to already declared font sets; these are defined elsewhere. Table 5.8 gives the full list of predefined typescripts (the first argument of \starttypescript) and font sets that are attached to the styles (the third and fourth argument of each \definetypeface). The names in the third argument (like serif and sans) do not have the same meaning as the names used in \setupbodyfont. Inside \setupbodyfont, they were keywords that were in- ternally remapped to one of the two-letter internal styles. Inside \definetypeface, they are nothing more than convenience names that are attached to a group of fonts by the person that wrote the font definition. They only reflect a grouping that the person believed that could be a single font style. Oftentimes, these names are identical to the official style keywords, just as the typescript and typeface names are often the same, but there can be (and sometimes are) different names altogether. How to define your own font sets is explained in the reference manual, but there are quite a few predefined font sets that come with ConTEXt; these are all listed in the four tables 5.9, 5.10, 5.11, and 5.12. For everything to work properly in MkII, the predefined font sets also have to have an encoding attached, you can look those up in the relevant tables as well. The fifth argument to \definetypeface specifies specific font size setups (if any), these will be covered in section ?? in the next chapter. Almost always, specifying default will suffice. The optional sixth argument is used for tweaking font settings like the specification of font features or adjusting parameters. In this case, the two modern font sets are loaded with a small magnification, this evens out the visual heights of the font styles. ## Example ```\starttypescript [palatino] [texnansi,ec,qx,t5,default] \definetypeface[palatino] [rm] [serif][palatino] [default] \definetypeface[palatino] [ss] [sans] [modern] [default] [rscale=1.075] \definetypeface[palatino] [tt] [mono] [modern] [default] [rscale=1.075] \definetypeface[palatino] [mm] [math] [palatino] [default] \stoptypescript ``` This defines a typescript named palatino in five different encodings. When this typescript is executed via \usetypescript, it will define four typefaces, one of each of the four basic styles rm, ss, tt, and mm.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9239875078201294, "perplexity": 4667.380688557192}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703565376.63/warc/CC-MAIN-20210125061144-20210125091144-00469.warc.gz"}
https://www.mathworks.com/help/econ/unit-root-nonstationarity.html
Documentation ## Unit Root Nonstationarity ### What Is a Unit Root Test? A unit root process is a data-generating process whose first difference is stationary. In other words, a unit root process yt has the form yt = yt–1 + stationary process. A unit root test attempts to determine whether a given time series is consistent with a unit root process. The next section gives more details of unit root processes, and suggests why it is important to detect them. ### Modeling Unit Root Processes There are two basic models for economic data with linear growth characteristics: • Trend-stationary process (TSP): yt = c + δt + stationary process • Unit root process, also called a difference-stationary process (DSP): Δyt = δ + stationary process Here Δ is the differencing operator, Δyt = yt – yt–1 = (1 – L)yt, where L is the lag operator defined by Liyt = yt – i. The processes are indistinguishable for finite data. In other words, there are both a TSP and a DSP that fit a finite data set arbitrarily well. However, the processes are distinguishable when restricted to a particular subclass of data-generating processes, such as AR(p) processes. After fitting a model to data, a unit root test checks if the AR(1) coefficient is 1. There are two main reasons to distinguish between these types of processes: #### Forecasting A TSP and a DSP produce different forecasts. Basically, shocks to a TSP return to the trend line c + δt as time increases. In contrast, shocks to a DSP might be persistent over time. For example, consider the simple trend-stationary model y1,t = 0.9y1,t – 1 + 0.02t + ε1,t and the difference-stationary model y2,t = 0.2 + y2,t – 1 + ε2,t. In these models, ε1,t and ε2,t are independent innovation processes. For this example, the innovations are independent and distributed N(0,1). Both processes grow at rate 0.2. To calculate the growth rate for the TSP, which has a linear term 0.02t, set ε1(t) = 0. Then solve the model y1(t) = c + δt for c and δ: c + δt = 0.9(c + δ(t–1)) + 0.02t. The solution is c = –1.8, δ = 0.2. A plot for t = 1:1000 shows the TSP stays very close to the trend line, while the DSP has persistent deviations away from the trend line. ```T = 1000; % Sample size t = (1:T)'; % Period vector rng(5); % For reproducibility randm = randn(T,2); % Innovations y = zeros(T,2); % Columns of y are data series % Build trend stationary series y(:,1) = .02*t + randm(:,1); for ii = 2:T y(ii,1) = y(ii,1) + y(ii-1,1)*.9; end % Build difference stationary series y(:,2) = .2 + randm(:,2); y(:,2) = cumsum(y(:,2)); figure plot(y(:,1),'b') hold on plot(y(:,2),'g') plot((1:T)*0.2,'k--') legend('Trend Stationary','Difference Stationary',... 'Trend Line','Location','NorthWest') hold off``` Forecasts based on the two series are different. To see this difference, plot the predicted behavior of the two series using `varm`, `estimate`, and `forecast`. The following plot shows the last 100 data points in the two series and predictions of the next 100 points, including confidence bounds. ```AR = {[NaN 0; 0 NaN]}; % Independent response series trend = [NaN; 0]; % Linear trend in first series only Mdl = varm('AR',AR,'Trend',trend); EstMdl = estimate(Mdl,y); EstMdl.SeriesNames = ["Trend stationary" "Difference stationary"]; [ynew,ycov] = forecast(EstMdl,100,y); % This generates predictions for 100 time steps seY = sqrt(diag(EstMdl.Covariance))'; % Extract standard deviations of y CIY = zeros([size(y) 2]); % In-sample intervals CIY(:,:,1) = y - seY; CIY(:,:,2) = y + seY; extractFSE = cellfun(@(x)sqrt(diag(x))',ycov,'UniformOutput',false); seYNew = cell2mat(extractFSE); CIYNew = zeros([size(ynew) 2]); % Forecast intervals CIYNew(:,:,1) = ynew - seYNew; CIYNew(:,:,2) = ynew + seYNew; tx = (T-100:T+100); hs = 1:2; figure; for j = 1:Mdl.NumSeries hs(j) = subplot(2,1,j); hold on; h1 = plot(tx,tx*0.2,'k--'); axis tight; ha = gca; h2 = plot(tx,[y(end-100:end,j); ynew(:,j)]); h3 = plot(tx(1:101),squeeze(CIY(end-100:end,j,:)),'r:'); plot(tx(102:end),squeeze(CIYNew(:,j,:)),'r:'); h4 = fill([tx(102) ha.XLim([2 2]) tx(102)],ha.YLim([1 1 2 2]),[0.7 0.7 0.7],... 'FaceAlpha',0.1,'EdgeColor','none'); title(EstMdl.SeriesNames{j}); hold off; end legend(hs(1),[h1 h2 h3(1) h4],... {'Trend','Process','Interval estimate','Forecast horizon'},'Location','Best');``` Examine the fitted parameters by executing `summarize(EstMdl)` and you find `estimate` did an excellent job. The TSP has confidence intervals that do not grow with time, whereas the DSP has confidence intervals that grow. Furthermore, the TSP goes to the trend line quickly, while the DSP does not tend towards the trend line y = 0.2t asymptotically. #### Spurious Regression The presence of unit roots can lead to false inferences in regressions between time series. Suppose xt and yt are unit root processes with independent increments, such as random walks with drift xt = c1 + xt–1 + ε1(t) yt = c2 + yt–1 + ε2(t), where εi(t) are independent innovations processes. Regressing y on x results, in general, in a nonzero regression coefficient, and significant coefficient of determination R2. This result holds despite xt and yt being independent random walks. If both processes have trends (ci ≠ 0), there is a correlation between x and y because of their linear trends. However, even if the ci = 0, the presence of unit roots in the xt and yt processes yields correlation. For more information on spurious regression, see Granger and Newbold [1]. ### Available Tests There are four Econometrics Toolbox™ tests for unit roots. These functions test for the existence of a single unit root. When there are two or more unit roots, the results of these tests might not be valid. #### Dickey-Fuller and Phillips-Perron Tests `adftest` performs the augmented Dickey-Fuller test. `pptest` performs the Phillips-Perron test. These two classes of tests have a null hypothesis of a unit root process of the form yt = yt–1 + c + δt + εt, which the functions test against an alternative model yt = γyt–1 + c + δt + εt, where γ < 1. The null and alternative models for a Dickey-Fuller test are like those for a Phillips-Perron test. The difference is `adftest` extends the model with extra parameters accounting for serial correlation among the innovations: yt = c + δt + γyt – 1 + ϕ1Δyt – 1 + ϕ2Δyt – 2 +...+ ϕpΔytp + εt, where • L is the lag operator: Lyt = yt–1. • Δ = 1 – L, so Δyt = ytyt–1. • εt is the innovations process. Phillips-Perron adjusts the test statistics to account for serial correlation. There are three variants of both `adftest` and `pptest`, corresponding to the following values of the `'model'` parameter: • `'AR'` assumes c and δ, which appear in the preceding equations, are both `0`; the `'AR'` alternative has mean 0. • `'ARD'` assumes δ is `0`. The `'ARD'` alternative has mean c/(1–γ). • `'TS'` makes no assumption about c and δ. For information on how to choose the appropriate value of `'model'`, see Choose Models to Test. #### KPSS Test The KPSS test, `kpsstest`, is an inverse of the Phillips-Perron test: it reverses the null and alternative hypotheses. The KPSS test uses the model: yt = ct + δt + ut, with ct = ct–1 + vt. Here ut is a stationary process, and vt is an i.i.d. process with mean 0 and variance σ2. The null hypothesis is that σ2 = 0, so that the random walk term ct becomes a constant intercept. The alternative is σ2 > 0, which introduces the unit root in the random walk. #### Variance Ratio Test The variance ratio test, `vratiotest`, is based on the fact that the variance of a random walk increases linearly with time. `vratiotest` can also take into account heteroscedasticity, where the variance increases at a variable rate with time. The test has a null hypotheses of a random walk: Δyt = εt. ### Testing for Unit Roots #### Transform Data Transform your time series to be approximately linear before testing for a unit root. If a series has exponential growth, take its logarithm. For example, GDP and consumer prices typically have exponential growth, so test their logarithms for unit roots. If you want to transform your data to be stationary instead of approximately linear, unit root tests can help you determine whether to difference your data, or to subtract a linear trend. For a discussion of this topic, see What Is a Unit Root Test? #### Choose Models to Test • For `adftest` or `pptest`, choose `model` in as follows: • If your data shows a linear trend, set `model` to `'TS'`. • If your data shows no trend, but seem to have a nonzero mean, set `model` to `'ARD'`. • If your data shows no trend and seem to have a zero mean, set `model` to `'AR'` (the default). • For `kpsstest`, set `trend` to `true` (default) if the data shows a linear trend. Otherwise, set `trend` to `false`. • For `vratiotest`, set `IID` to `true` if you want to test for independent, identically distributed innovations (no heteroscedasticity). Otherwise, leave `IID` at the default value, `false`. Linear trends do not affect `vratiotest`. #### Determine Appropriate Lags Setting appropriate lags depends on the test you use: • `adftest` — One method is to begin with a maximum lag, such as the one recommended by Schwert [2]. Then, test down by assessing the significance of the coefficient of the term at lag pmax. Schwert recommends a maximum lag of where $⌊x⌋$ is the integer part of x. The usual t statistic is appropriate for testing the significance of coefficients, as reported in the `reg` output structure. Another method is to combine a measure of fit, such as SSR, with information criteria such as AIC, BIC, and HQC. These statistics also appear in the `reg` output structure. Ng and Perron [3] provide further guidelines. • `kpsstest` — One method is to begin with few lags, and then evaluate the sensitivity of the results by adding more lags. For consistency of the Newey-West estimator, the number of lags must go to infinity as the sample size increases. Kwiatkowski et al. [4] suggest using a number of lags on the order of T1/2, where T is the sample size. For an example of choosing lags for `kpsstest`, see Test Time Series Data for Unit Root. • `pptest` — One method is to begin with few lags, and then evaluate the sensitivity of the results by adding more lags. Another method is to look at sample autocorrelations of yt – yt–1; slow rates of decay require more lags. The Newey-West estimator is consistent if the number of lags is O(T1/4), where T is the effective sample size, adjusted for lag and missing values. White and Domowitz [5] and Perron [6] provide further guidelines. For an example of choosing lags for `pptest`, see Test Time Series Data for Unit Root. • `vratiotest` does not use lags. #### Conduct Unit Root Tests at Multiple Lags Run multiple tests simultaneously by entering a vector of parameters for `lags`, `alpha`, `model`, or `test`. All vector parameters must have the same length. The test expands any scalar parameter to the length of a vector parameter. For an example using this technique, see Test Time Series Data for Unit Root. ## References [1] Granger, C. W. J., and P. Newbold. “Spurious Regressions in Econometrics.” Journal of Econometrics. Vol 2, 1974, pp. 111–120. [2] Schwert, W. “Tests for Unit Roots: A Monte Carlo Investigation.” Journal of Business and Economic Statistics. Vol. 7, 1989, pp. 147–159. [3] Ng, S., and P. Perron. “Unit Root Tests in ARMA Models with Data-Dependent Methods for the Selection of the Truncation Lag.” Journal of the American Statistical Association. Vol. 90, 1995, pp. 268–281. [4] Kwiatkowski, D., P. C. B. Phillips, P. Schmidt, and Y. Shin. “Testing the Null Hypothesis of Stationarity against the Alternative of a Unit Root.” Journal of Econometrics. Vol. 54, 1992, pp. 159–178. [5] White, H., and I. Domowitz. “Nonlinear Regression with Dependent Observations.” Econometrica. Vol. 52, 1984, pp. 143–162. [6] Perron, P. “Trends and Random Walks in Macroeconomic Time Series: Further Evidence from a New Approach.” Journal of Economic Dynamics and Control. Vol. 12, 1988, pp. 297–332.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8577672839164734, "perplexity": 1399.69167970818}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251773463.72/warc/CC-MAIN-20200128030221-20200128060221-00258.warc.gz"}
http://mathhelpforum.com/calculus/55811-bounded-set-question-real-analysis.html
## Bounded Set Question - Real Analysis Let a > 0 and n >= 3 be an integer. Define the set S = {x > 0 : x^n <= a}. 1. Show that S is bounded above and thus that b = sup S exists. 2. Show that b^n = a. To do this, show that it cannot be true that b^n < a or b^n > a.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.958884596824646, "perplexity": 573.5973427958323}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320841.35/warc/CC-MAIN-20170626170406-20170626190406-00517.warc.gz"}
http://mathoverflow.net/questions/36443/innocent-question-on-tensor-products-of-modular-representations
# Innocent question on tensor products of modular representations Let $K$ be a field (of course, of positive characteristic, unless you want a trivial question). Let $G$ be a finite group, and $V$ and $W$ be two completely reducible (finite-dimensional) representations of $G$ over $K$. Is the (interior) tensor product $V\otimes_K W$ a completely reducible representation of $G$ ? I know that this holds for exterior tensor products: Let $K$ be a field. Let $G$ and $H$ be two finite groups, and $V$ and $W$ be two completely reducible (finite-dimensional) representations of $G$ and $H$, respectively, over $K$. Then, the tensor product $V\otimes_K W$ is a completely reducible representation of $G\times H$. (Proof: Combine Curtis/Reiner "Methods of Representation Theory I" Theorems 7.10 and 10.38 (i).) Note that my above question is equivalent to the Jacobson radical of the group ring $KG$ being a coideal (the coalgebra structure on $KG$ is the canonical one, of course: $\Delta g=g\otimes g$). It may be total nonsense but unfortunately I don't have any nontrivial examples of modular representations to check with. - Not sure why you're mentioning "coideals" or "coalgebras". If $A = K[G]$ is the group algebra (or any finite-dim. assoc. $K$-algebra) and $J$ is its Jacobson radical then a left $A$-module is semisimple if and only if $J$ acts as 0 on it (see Lang's "Algebra", possibly the exercises, in the section on semisimplicity or Jacobson radical, or both). Your $V \otimes_K W$ as a left $A$-module also has $J$ acting as 0, hence it is semisimple. (Main point is that $J$ is 2-sided ideal and $A/J$ is semisimple ring.) So answer is "yes". –  BCnrd Aug 23 '10 at 13:08 have a look at Example 4.10 of math.rwth-aachen.de/~Gerhard.Hiss/Preprints/projsum.pdf which gives a group with two simple modules whose tensor product has a non-simple projective summand. Also, shouldn't your comultiplication be g--> g\otimes g? –  M T Aug 23 '10 at 14:24 When I become Emperor of the Universe, one of my first decrees will be the one establishing, once and for all eternity, the term 'simple' as the unique way to refer to things without subthings. –  Mariano Suárez-Alvarez Aug 23 '10 at 17:04 After they finally classified the finite simple groups, some emperor comes along and wants to redefine "simple" to prohibit sub-things (presumably meaning nontrivial sub-things) rather than (non-trivial) quotient things. I suppose I should rejoice that, in this empire, even I can classify the simple groups. –  Andreas Blass Aug 23 '10 at 19:53 The non-abelian group of order 6 is a counterexample in characteristic 2. It is probably wise to have at least worked with this example. –  Jack Schmidt Aug 24 '10 at 1:07 Here is a hopefully instructive example. Let $k$ be an alg. closed field of pos. char $p$, and let $G = SL_2(k)$. Write $V = k^2$ for the "natural" 2-dimensional representation of $G$ say with basis $e_1,e_2$. Let $W = S^pV$ be the $p$-th symmetric power of $V$. Then $W$ contains a 2 dimensional submodule $A$ spanned by the $p$-th powers $e_1^p$ and $e_2^p$; the module $A$ is isom. to the "first Frobenius twist" of $V$. It is an exercise to check that there is no $G$-stable complement to $A$ in $W$; i.e. the SES $$0 \to A \to W \to W/A \to 0$$ is not split. Thus $W$ is not completely reducible. Evidently there is a surjective mapping $V^{\otimes p} \to W$, thus also the $p$-th tensor power $V^{\otimes p}$ is not completely reducible. But $V$ is a simple (hence completely reducible) $G$-module; thus tensor powers of a completely reducible module are not in general completely reducible. In fact, the $(p-1)$-th tensor power $V^{\otimes p-1}$ is completely reducible; arguing as before, one sees that $V \otimes (V^{\otimes p-1})$ is not completely reducible; thus in general the tensor product of two completely reducible modules is not completely reducible. I gave some further remarks about semisimplicity of tensor products in an answer to this question. - Actually, the symmetric powers were the thing I was interested in first. But your group is not finite ;) –  darij grinberg Aug 23 '10 at 19:54 The story shouldn't change upon replacing alg closed $k$ by the finite field $F=\mathbf{F}_p$ (or $F = \mathbf{F}_q$...). The $F$-points $V(F)$ of $V$ form an abs irred module (over $F$) for the finite group $G(F)$, and the $F$-points of $W$ given by $W(F) = S^p V(F)$ are not completely reducible as $G(F)$-module, at least if #$F$ >> 0. (Well, $W$ is indecomposable as $G$-module, hence for #$F$ large, $W(F)$ is indec as $G(F)$-module. Since I haven't thought this through recently, I fret a bit about the indecomposability of $W(F)$ as $G(F)$-module for tiny $F$.). –  George McNinch Aug 23 '10 at 20:25 The answer to your question is usually no (which is fortunate because the lack of complete reducibility gives modular representation theorists something to do), starting for example with the tensor product of two irreducible representations of $G$ over an algebraically closed field whose prime characteristic divides the group order. Examples for finite groups of Lie type are legion and come up naturally when you tensor the Steinberg representation with an arbitrary one: then you get a projective module whose indecomposable direct summands are rarely irreducible. Textbooks like those by Jon Alperin, Curtis-Reiner, Serre, or me on modular representations illustrate such outcomes of tensoring. ADDED: Concerning failure of complete reducibility in general, see also the related MO question 18280. For references to some older literature on tensoring with the Steinberg representation, see the third section of my 1987 AMS Bulletin survey here. http://www.ams.org/journals/bull/1987-16-02/S0273-0979-1987-15512-1/S0273-0979-1987-15512-1.pdf">here. - Mea culpa for giving a bogus argument for the wrong answer above. Jim, apart from the rationale I gave in terms of poor behavior of Jacobson radical with respect to the comultiplication $K[G] \rightarrow K[G] \otimes K[G]$ to explain how it can fail, is there a better "ring-theoretic" explanation for this ubiquitous phenomenon? –  BCnrd Aug 23 '10 at 17:43 @BCnrd: I haven't seen a ring-theoretic approach to this, which may get complicated: some fairly nontrivial tensor products do turn out to be completely reducible. (In other words, "ubiquitous" is tricky here.) Historically, the fact that tensor products of modules for group algebras arise from Hopf algebra structure wasn't so explicit. By now there are other interesting classes of Hopf algebras for which complete reducibility is also an issue. –  Jim Humphreys Aug 23 '10 at 19:07 I claim that semisimple KG-modules are closed under tensor product in the modular setting iff G has a unique p-Sylow subgroup where p is the characteristic. Pf. Let P be the p-radical of G. That is P is the largest normal p-subgroup of G. It is well known that P is the intersection of the kernels of all irreps of G over K. So we have $$KG\to K[G/P]\to KG/Rad(KG).$$ If P is a p-Sylow then $K[G/P]$ is semisimple by Maschke and so the last map is an isomorphism. Thus Rad(KG) is a Hopf ideal and so the completely reducible reps are closed under tensor product. On the other hand if the completely reducible reps are closed under tensor, the radical is a Hopf ideal. Since the Hopf algebra quotients of a group algebra are the algebras of quotient groups it follows the last map is an iso (since g-1 is in the radical iff g is in P). But then by Maschke p does not divide the order of G/P so P is a p-Sylow. - Oh, so what you call the $p$-radical is the $p$-core, as far as I understand. I fear I need some more proofs or references here. I've got a reference for the fact that completely reducible reps are closed under tensor products if and only if the Jacobson radical is a Hopf ideal (Satz 5.3 in Theresia Nolte's diploma thesis math.rwth-aachen.de/~Gerhard.Hiss/Students/… ). But I'm missing a proof that $P$ is the intersection of the kernels of all irreps of $G$ over $K$. (This generalizes the fact that all irreps of a $p$-group over $K$ are trivial, but ... –  darij grinberg Dec 28 '12 at 2:51 ... the proof of this fact that I know doesn't carry over.) –  darij grinberg Dec 28 '12 at 2:51 There are 2 proofs on mathoverflow.net/questions/69039/… that every normal p-subgroup is contained in the kernel of each irrep. Now consider the regular rep of G. It can be written in block triangular form with the diagonal blocks irreducible reps. The kernel of the projection to the diagonal is precisely the intersection of the kernels of the irreps. But the kernel is unitriangular hence a p-group. –  Benjamin Steinberg Dec 28 '12 at 5:17 The last map follows because g-1 is in the radical iff g is in the kernel of each irrep. By above this occurs iff g is in P. The kernel of $KG\rightarrow K[G/P]$ is generated by the elements g-1 with g in P. –  Benjamin Steinberg Dec 28 '12 at 5:20 math.wisc.edu/~passman/balgebra.pdf is a good reference for the Hopf ideal result and the fact that each Hopf ideal in a group algebra is generated by the elements g-1 ranging over some g in some normal subgroup N. The moral is that the largest Hopf ideal contained on the radical is generated by g-1 with g in the p-radical. –  Benjamin Steinberg Dec 28 '12 at 5:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9500657916069031, "perplexity": 341.0384910867116}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246637364.20/warc/CC-MAIN-20150417045717-00268-ip-10-235-10-82.ec2.internal.warc.gz"}
https://www.oxygenxml.com/doc/versions/22.0/ug-developer/topics/add-font-to-builtin-FOP.html
# Add a Font to the Built-in FO Processor - Advanced VersionEdit online .edit-link { display: table-cell; font-size: 12px; opacity: 0.6; text-align: right; vertical-align: "middle" } .edit-link-container { display: table-cell; margin-top: 0 } If an XML document is transformed to PDF using the built-in Apache FOP processor but it contains some Unicode characters that cannot be rendered by the default PDF fonts, then a special font that is capable to render these characters must be configured and embedded in the PDF result. Important: On Windows, fonts are located into the C:\Windows\Fonts directory. On Mac, they are placed in /Library/Fonts. To install a new font on your system, it is enough to copy it in the Fonts directory. If a special font is installed in the operating system, there is a simple way of telling FOP to look for it. See the simplified procedure for adding a font to FOP. 1. Locate the font. First, find out the name of a font that has the glyphs for the special characters you used. One font that covers most characters, including Japanese, Cyrillic, and Greek, is Arial Unicode MS. 2. Register the font in the FOP configuration. Note: DITA PDF transformations have their own fop.xconf (DITA-OT-DIR/plugins/org.dita.pdf2.fop/fop/conf/fop.xconf). If the font is not installed in the system, it needs to be referenced in the fop.xconf. 1. For information about registering the font in the FOP Configuration, see: https://xmlgraphics.apache.org/fop/2.3/fonts.html. 2. Open the Preferences dialog box (Options > Preferences), go to XML > XSLT/FO/XQuery > FO Processors, and enter the path of the FOP configuration file in the Configuration file text field. 3. Set the font on the document content. This is usually done with XSLT stylesheet parameters and depends on the document type processed by the stylesheet. DocBook Example: For DocBook documents, you can start with the built-in scenario called DocBook PDF, edit the XSLT parameters, and set the font name (for example, Arialuni) to the `body.font.family` and `title.font.family` parameters. TEI Example: For TEI documents, you can start with the built-in scenario called TEI PDF, edit the XSLT parameters, and set the font name (for example, Arialuni) to the `bodyFont` and `sansFont` parameters. DITA Example: For DITA to PDF transformations using DITA-OT modify the following two files: • DITA-OT-DIR/plugins/org.dita.pdf2/cfg/fo/font-mappings.xml - The `<font-face>` element included in each `<physical-font>` element that has the `char-set="default"` attribute must contain the name of the font. • DITA-OT-DIR/plugins/org.dita.pdf2/fop/conf/fop.xconf - A `<font>` element must be inserted in the `<fonts>` element, which is inside the `<renderer>` element that has the `mime="application/pdf"` attribute.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9197403192520142, "perplexity": 4544.031375701354}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347396300.22/warc/CC-MAIN-20200527235451-20200528025451-00347.warc.gz"}
https://socratic.org/questions/how-do-you-simplify-sqrt-7-sqrt-5-4-sqrt-7-sqrt-5-4
Algebra Topics # How do you simplify (sqrt(7) + sqrt(5))^4 + (sqrt(7) - sqrt(5))^4? Feb 7, 2015 Okay i should thank you for such a question OK lets start Call $\sqrt{7} = \sqrt{a} \mathmr{and} \sqrt{5} = \sqrt{b}$ So lets rewrite you equation as ${\left\{{\left(\sqrt{a} + \sqrt{b}\right)}^{2}\right\}}^{2} + {\left\{{\left(\sqrt{a} - \sqrt{b}\right)}^{2}\right\}}^{2}$ So lets take the first part and simplify ${\left(\sqrt{a} + \sqrt{b}\right)}^{2} = a + 2 \sqrt{a b} + b$ So lets substitute $7 + 5 + 2 \sqrt{35} = 12 + \sqrt{35}$ Square this gain and you will get that${\left\{{\left(\sqrt{a} + \sqrt{b}\right)}^{2}\right\}}^{2} = 144 + 35 + 24 \sqrt{35} = 179 + 24 \sqrt{35}$ Similarly repeat the above steps for ${\left\{{\left(\sqrt{a} - \sqrt{b}\right)}^{2}\right\}}^{2}$ After substituting you should get that the above equation is $= 179 - 24 \sqrt{35}$ So finally lets put the puzzle together $h e n c e {\left\{{\left(\sqrt{a} + \sqrt{b}\right)}^{2}\right\}}^{2} + {\left\{{\left(\sqrt{a} - \sqrt{b}\right)}^{2}\right\}}^{2} = 179 + 24 \sqrt{35} + 179 - 24 \sqrt{35}$ (sqrt(7) + sqrt(5))^4 + (sqrt(7) - sqrt(5))^4 = 358 Hope that this is what you wanted if this is what you wanted please write down in the comments ##### Impact of this question 1852 views around the world
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 9, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9020001292228699, "perplexity": 2222.610564260636}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487617599.15/warc/CC-MAIN-20210615053457-20210615083457-00531.warc.gz"}
https://brilliant.org/problems/mod-4/
# Mod 4 I want to fill in each cell of a $$5\times 5$$ with exactly one positive integer, such that • the product of the 5 numbers in each row makes 1 remainder, when it is divided by 4, • the product of the 5 numbers in each column makes 3 remainder, when it is divided by 4. Is this possible? ×
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.843040943145752, "perplexity": 316.0798548065026}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645830.10/warc/CC-MAIN-20180318165408-20180318185408-00527.warc.gz"}
https://www.physicsoverflow.org/8690/ricci-tensor-of-the-orthogonal-space
# Ricci tensor of the orthogonal space + 5 like - 0 dislike 178 views While reading this article I got stuck with Eq.$(54)$. I've been trying to derive it but I can't get their result. I believe my problem is in understanding their hints. They say that they get the result from the Gauss embedding equation and the Ricci identities for the 4-velocity, $u^a$. Is the Gauss equation they refer the one in the wiki article? Looking at the terms that appear in their equation it looks like the Raychaudhuri equation or the Einstein field equations are to be used in the derivation in order to get the density and the cosmological constant, but even though I realize this I can't really get their result. Can anyone point me in the right direction? $Note:$The reason why I'm trying so hard to prove their result is because I wanted to know if it would still be valid if the orthogonal space were 2 dimensional (aside some constants). It appears to be the case but to be sure I needed to be able to prove it. This post imported from StackExchange Physics at 2014-03-22 17:15 (UCT), posted by SE-user PML retagged Mar 25, 2014 Is the Gauss equation they refer the one in the wiki article?: I think it is, according to page 610 (or 30 of the file) of link.springer.com/article/10.1007%2Fs10714-009-0760-7. This post imported from StackExchange Physics at 2014-03-22 17:15 (UCT), posted by SE-user Taiki @Taiki nice reference. It appears to be so. I think I might be on the right track, after several pages of calculations. Your reference has been very helpful. Thank you. This post imported from StackExchange Physics at 2014-03-22 17:15 (UCT), posted by SE-user PML + 4 like - 0 dislike I don't have time to do the full calculation (it will be rather long!), but I'll indicate what I think are the steps that get you there: We start with our congruence given by the normalized vector field $u^a$, $u_au^a=-1$ The covariant derivative of $u^a$ splits into a part parallel to the congruence and a part orthogonal to it: $$\nabla_au_b=-u_a\dot{u_b}+{\tilde{\nabla}}_au_b$$ Where the tilde derivative is defined by projecting orthogonal to $u^a$ $${\tilde{\nabla}}_au_b=h_a^ch_b^d\nabla_cu_d$$ $$h_a^b=\delta_a^b+u_au^b$$Now we can decompose ${\tilde{\nabla}}_au_b$ into its irreducible parts $${\tilde{\nabla}}_au_b = \omega_{ab}+\frac{1}{3}\Theta h_{ab}+\sigma_{ab}$$ Where $\omega_{ab}$ is the antisymmetric part, $\Theta$ is the trace part, and $\sigma_{ab}$ is the trace-free symmetric part. In most derivations of the Gauss-Codazzi equations, they assume that $u_a$ is vorticity-free ($\omega$ is the vorticity). Here we can't make that assumption. We wish to investigate curvature orthogonal to the congruence so we want to calculate $$({\tilde{\nabla}}_a{\tilde{\nabla}}_b-{\tilde{\nabla}}_b{\tilde{\nabla}}_a)X_c$$ where X is a vector field orthogonal to the congruence. Directly substituting for the ${\tilde{\nabla}}$ factors a couple of pages of calculation got me to $$({\tilde{\nabla}}_a{\tilde{\nabla}}_b-{\tilde{\nabla}}_b{\tilde{\nabla}}_a)X_c$$ $$=2\omega_{ab}{\dot{X}}_{<c>}+(^{\perp}R_{abcd})X_d+(K_{cb}K_{da}-K_{ca}K_{db})X^d$$where$$K_{ab}={\tilde{\nabla}}_bu_a$$ (I'm using the angle brackets and time derivatives defined in eqns (9) and (10) of your reference and the perp just means project all free indices using the $h$'s. Also the Gauss Codazzi section in Wald is useful here. Oh BTW I can't guarantee signs and factors of two!). I believe the next step would be to contact this equation to obtain the desired three-Ricci tensor. It contains all the ingredients in your desired equation (54). The only problem is that you still have the (projected) Riemann tensor involved. To get rid of that you would have to use the field equations - this will bring in ingredients like $\pi_{ab}$ Sorry - it's more of a long hint than an answer, but it is a rather messy calculation! (you may have already completed it yourself now...) This post imported from StackExchange Physics at 2014-03-22 17:15 (UCT), posted by SE-user twistor59 answered Jun 1, 2013 by (2,500 points) Ah, your answer/hint (=)) is spot on. I wasn't sure of one definition and an incorrect use was making me run in circles. Thank you very much for your effort. I just checked Wald's book and indeed is very (very!) useful. Thank you once again. I didn't know that the calculation was so messy or I should have given a greater bounty...=/ Thank you once again. This post imported from StackExchange Physics at 2014-03-22 17:15 (UCT), posted by SE-user PML Please use answers only to (at least partly) answer questions. To comment, discuss, or ask for clarification, leave a comment instead. To mask links under text, please type your text, highlight it, and click the "link" button. You can then enter your link URL. Please consult the FAQ for as to how to format your post. This is the answer box; if you want to write a comment instead, please use the 'add comment' button. Live preview (may slow down editor)   Preview Your name to display (optional): Email me at this address if my answer is selected or commented on: Privacy: Your email address will only be used for sending these notifications. Anti-spam verification: If you are a human please identify the position of the character covered by the symbol $\varnothing$ in the following word:p$\hbar$ysicsOverflo$\varnothing$Then drag the red bullet below over the corresponding character of our banner. When you drop it there, the bullet changes to green (on slow internet connections after a few seconds). To avoid this verification in future, please log in or register.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8031514883041382, "perplexity": 594.3999504363554}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738573.99/warc/CC-MAIN-20200809192123-20200809222123-00286.warc.gz"}
https://www.groundai.com/project/parametric-polynomial-minimal-surfaces-of-arbitrary-degree6533/
Parametric polynomial minimal surfaces of arbitrary degree # Parametric polynomial minimal surfaces of arbitrary degree Gang Xu Guozhao Wang Galaad, INRIA Sophia-Antipolis, 2004 Route des Lucioles, 06902 Cedex, France Department of Mathematics, Zhejiang University, Hangzhou, China ###### Abstract Weierstrass representation is a classical parameterization of minimal surfaces. However, two functions should be specified to construct the parametric form in Weierestrass representation. In this paper, we propose an explicit parametric form for a class of parametric polynomial minimal surfaces of arbitrary degree. It includes the classical Enneper surface for cubic case. The proposed minimal surfaces also have some interesting properties such as symmetry, containing straight lines and self-intersections. According to the shape properties, the proposed minimal surface can be classified into four categories with respect to , and . The explicit parametric form of corresponding conjugate minimal surfaces is given and the isometric deformation is also implemented. ###### keywords: minimal surface; parametric polynomial minimal surface; Enneper surface ; conjugate minimal surface journal: Elsevier Science ## 1 Introduction Minimal surface is a kind of surface with vanishing mean curvature Nitsche:lecture (). As the mean curvature is the variation of area functional, minimal surfaces include the surfaces minimizing the area with a fixed boundary Morgan1 (); Morgan2 (). There have been many literatures on minimal surface in classical differential geometry Osserman (); Meeks1 (); Meeks2 (). Because of their attractive properties, minimal surfaces have been extensively employed in many areas such as architecture, material science, aviation, ship manufacture, biology and so on. For instance, the shape of the membrane structure, which has appeared frequently in modern architecture, is mainly based on minimal surfaces Blet:minimal (). Furthermore, triply periodic minimal surfaces naturally arise in a variety of systems, including nano-composites, lipid-water systems and certain cell membranes Jung:levelset (). In CAD systems, parametric polynomial representation is the standard form. For parametric polynomial minimal surface, plane is the unique quadratic parametric polynomial minimal surface, Enneper surface is the unique cubic parametric polynomial minimal surface. There are few research work on the parametric form of polynomial minimal surface with higher degree. Weierstrass representation is a classical parameterization of minimal surfaces. However, two functions should be specified to construct the parametric form in Weierestrass representation. In this paper, we discuss the answer to the following questions: What are the possible explicit parametric form of polynomial minimal surface of arbitrary degree and how about their properties? The proposed minimal surfaces include the classical Enneper surface for cubic case, and also have some interesting properties such as symmetry, containing straight lines and self-intersections. According to the shape properties, the proposed minimal surface can be classified into four categories with respect to , and . The explicit parametric form of corresponding conjugate minimal surfaces is given and the isometric deformation is also implemented. The paper includes five sections. Preliminary introduces some notations and lemmas. Main Results presents the explicit parametric formula of parametric polynomial minimal surface of arbitrary degree. The next section, Properties and Classification presents the corresponding properties and classification of the proposed minimal surfaces. The following section focuses on corresponding conjugate counterpart of proposed minimal surface. Finally, in Conclusions, we summarize the main results. ## 2 Preliminary In this section, we introduce the following two notations. Pn = ⌈n−12⌉∑k=0(−1)k(n2k)un−2kv2k, (1) Qn = ⌊n−12⌋∑k=0(−1)k(n2k+1)un−2k−1v2k+1 (2) and have the following properties: Lemma 1 ∂Pn∂u=nPn−1,∂Pn∂v=−nQn−1 ∂Qn∂u=nQn−1,∂Qn∂v=nPn−1 Lemma 2 Pn = uPn−1−vQn−1 Qn = vPn−1+uQn−1 Lemma 2 can be proved by using the following equation: (n2k)+(n2k+1)=(n+12k+1). ## 3 Main Results Theorem 1 If the parametric representation of polynomial surface with arbitrary degree , is given by , where X(u,v) = −Pn+ωPn−2, Y(u,v) = Qn+ωQn−2, (3) Z(u,v) = 2√n(n−2)ωn−1Pn−1, then is a minimal surface. Proof of Theorem 1. From Lemma 2, we have ∂2\emphr(u,v)∂2u+∂2\emphr(u,v)∂2v=0 Hence, is harmonic surface. By using Lemma 2, we have F = ∂\emphr(u,v)∂u∂\emphr(u,v)∂v = 2n(n−2)ω(Qn−3Pn−1+Pn−3Qn−1−2Qn−2Pn−2) From Lemma 2, we have Pn−2 = uPn−3−vQn−3, (5) Qn−2 = vPn−3+uQn−3, (6) Pn−1 = (u2−v2)Pn−3−2uvQn−3, (7) Qn−1 = (u2−v2)Qn−3+2uvPn−3, (8) Substituting (5)(6)(7)(8) into (4), we can obtain . Similarly, we have E−G = ∂\emphr(u,v)∂u∂\emphr(u,v)∂u−∂\emphr(u,v)∂v∂\emphr(u,v)∂v = 4n(n−2)ω(Qn−1Qn−3−Pn−3Pn−1+P2n−2−Q2n−2) = 0 Hence, r(u,v) is a parametric surface with isothermal parameterization. From Nitsche:lecture (), if a parametric surface with isothermal parameterization is harmonic, then it is a minimal surface. The proof is completed. ## 4 Properties and Classification From Theorem 1, if , we can get the Enneper surface, which is the unique cubic parametric polynomial minimal surface. It has the following parametric form \emph{E}(u,v)=(−(u3−3uv2)+ωu,−(v3−3vu2)+ωv,√3ω(u2−v2)). Enneper surface has several interesting properties, such as symmetry, self-intersection, and containing orthogonal straight lines on it. For the new proposed minimal surface, we can also prove that it has also has these properties. If , a kind of quintic polynomial minimal surface proposed in Gang1 () can be obtained as follows \emph{Q}(u,v)=(X(u,v),Y(u,v),Z(u,v)). (9) where X(u,v) = −(u5−10u3v2+5uv4)+ωu(u2−3v2), Y(u,v) = −(v5−10v3u2+5vu4)+ωv(v2−3u2), Z(u,v) = √15ω2(u4−6u2v2+v4). According to the shape properties, the proposed minimal surface in Theorem 1 can be classified into four classes with , , . Proposition 1. In case of , the corresponding proposed minimal surface has the following properties: • is symmetric about the plane and the plane , • contains two orthogonal straight lines on the plane Fig. 1(a) shows an example of Enneper surface, Fig. 1 (b) shows an example of proposed minimal surface with . The symmetry plane and straight lines of minimal surface in Fig. 1 (b) are shown in Fig. 1 (c) and Fig. 1 (d). Proposition 2. In case of , the corresponding proposed minimal surface is symmetric about the plane and the plane . Fig. 2 (a) present an example of proposed quartic minimal surface and the corresponding symmetry planes are shown in Fig. 2 (b). Proposition 3. In case of , the corresponding proposed minimal surface has the following properties: • is symmetric about the plane , the plane , the plane and the plane . • Self-intersection points of are only on the symmetric planes, i.e., there are no other self-intersection points on , and the self-intersection curve has the same symmetric plane as the minimal surface. Fig. 3 (a) present an example of proposed quintic minimal surface and the corresponding symmetry planes are shown in Fig. 3 (b). Proposition 4. In case of , the corresponding proposed minimal surface is symmetric about the plane and the plane . For the case of , it has been studied in Gang2 (). Fig.4 (a) present an example of proposed minimal surface with and the corresponding symmetry planes are shown in Fig. 4 (b). ## 5 Conjugate Minimal Surface Definition 1. If two differentiable functions satisfy the Cauchy-Riemann equations ∂p∂u=∂q∂v,∂p∂v=−∂q∂u, and both are harmonic, then the functions are said to be harmonic conjugate. Definition 2. If and are with isothermal parameterizations such that and are harmonic conjugate for , then P and Q are said to be parametric conjugate minimal surfaces. Helicoid and catenoid are a pair of conjugate minimal surfaces. For , we can find out a new pair of conjugate minimal surfaces as follows. Theorem 2 The conjugate minimal surface of has the following parametric form s(u,v)=(Xs(u,v),Ys(u,v),Zs(u,v)) where Xs(u,v) = −Qn+ωQn−2, Ys(u,v) = −Pn−ωPn−2, (10) Zs(u,v) = 2√n(n−2)ωn−1Qn−1, It can be proved directly by Lemma 2. From [2], the surfaces of one-parametric family \emph{C}t(u,v)=(cost)\emphr(u,v)+(sint)\emphs(u,v) are minimal surfaces with the same first fundamental form. These minimal surfaces are isometric and have the same Gaussian curvature at corresponding points. Fig. 5 illustrates the isometric deformation between and . It is similar with the isometric deformation between helicoid and catenoid. ## 6 Conclusion The explicit parametric formula of polynomial minimal surface is presented. It can be considered as the generalization of Enneper surface in cubic case. The corresponding properties and classification of the proposed minimal surface are investigated. The corresponding conjugate minimal surface are constructed and the dynamic isometric deformation between them are also implemented. Acknowlegments This work was partially supported by the National Nature Science Foundation of China (No.60970079, 60933008), Foundation of State Key Basic Research 973 Development Programming Item of China (No.2004CB318000), and the Nature Science Foundation of Zhejiang Province, China(No. Y1090718). ## References • (1) Nitsche J.C.C. 1989. Lectures on minimal surfaces, vol. 1. Cambridge Univ. Press, Cambridge • (2) Frank Morgan. Minimal surfaces, crystals, and norms on Rn, Proc. 7th Annual Symp. on Computational Geom., June, 1991, N. Conway, NH. • (3) Frank Morgan. Area-minimizing surfaces, faces of Grassmannians, and calibrations. The Am. Math. Monthly 95 (1988), 813-822 • (4) Osserman R.. 1986. A survey of minimal surfaces, Dover publ. 2nd ed., New York • (5) W. H. Meeks III, L. Simon and S. T. Yau. Embedded minimal surfaces, exotic spheres, and manifolds with positive Ricci curvature . Ann. of Math. (2) 116 (1982), no. 3, 621-659. • (6) W. H. Meeks III and S. T. Yau. The existence of embedded minimal surfaces and the problem of uniqueness . Math. Z. 179(1982), 151-168. • (7) T.H. Colding and W.P.Minicozzi II. Shapes of embedded minimal surfaces, Proc. Nat. Acad. Sciences, July 25, 2006, vol. 103, no. 30, 11106-11111 • (8) Bletzinger K.W.. 1997. Form finding of membrane structures and minimal surfaces by numerical continuation. Proceeding of the IASS Congress on Structural Morphology: Towards the New Millennium, Nottingham. ISBN 0-85358-064-2, 68-75 • (9) Jung K., Chu, K.T., Torquato, S.. 2007. A variational level set approach for surface area minimization of triply-periodic surfaces. Journal of Computational Physics, 2, 711-730 • (10) G. Xu, G.Z Wang. Quintic parametric polynomial minimal surfaces and their properties. Differential Geometry and its Applications, to appear • (11) G. Xu, G.Z Wang. Parametric polynomial minimal surfaces of degree six with isothermal parameter Proc. of Geometric Modeling and Processing (GMP 2008), 2008, LNCS 4975, 329-343 You are adding the first comment! How to quickly get a good reply: • Give credit where it’s due by listing out the positive aspects of a paper before getting into which changes should be made. • Be specific in your critique, and provide supporting evidence with appropriate references to substantiate general statements. • Your comment should inspire ideas to flow and help the author improves the paper. The better we are at sharing our knowledge with each other, the faster we move forward. The feedback must be of minimum 40 characters and the title a minimum of 5 characters
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8657755851745605, "perplexity": 1078.5840902613252}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251778272.69/warc/CC-MAIN-20200128122813-20200128152813-00230.warc.gz"}
https://www.physicsforums.com/threads/numerical-integration-verlet-algorithm-accuracy.788806/
# Numerical integration - verlet algorithm - accuracy • Start date • #1 2 0 In my computational physics textbook, three different velocity estimators are derived for a problem with equation of motion: $\ddot x = F(x)$ where the positions are found by using the Verlet algorithm: $x(t+h) = 2 x(t) - x(t-h) + h^2 F[x(t)]$ The three velocity estimators are: $v(t) = \frac{x(t+h) - x(t-h)}{2h} + \mathcal{O}(h^2)$ $v_{improved}(t) = \frac{x(t+h) - x(t-h)}{2h} - \frac{h}{12}\left( F[x(t+h)] - F[x(t-h)] \right) + \mathcal{O}(h^3)$ $v_{leapfrog}(t + h/2) = \frac{x(t+h) - x(t)}{h} + \mathcal{O}(h^2)$ I have no problems deriving these equations, so far everything is clear. But, in the textbook they apply the methods for the 1D harmonic oscillator and they conclude: The leap-frog energy estimator is an order of magnitude worse than the other two. This is not surprising since the fact that the velocity is not calculated at the same time instants as the position results in deviation of the energy from the continuum value of order h instead of h^2. So, just because the time instants are different, the leapfrog's results are 1 orde worse than the other two? I can't find an explanation/reasoning for this... Can someone help me? Regards, Jan • #2 mfb Mentor 35,988 12,850 So, just because the time instants are different, the leapfrog's results are 1 orde worse than the other two? Wrong time also means wrong position and therefore wrong potential. As the potential is monotonically increasing / decreasing for many steps at a time, you get a consistent direction of the error there. • #3 2 0 Okay, but I still don't see why the leapfrog is one order worse than the first estimator. Both use 2 positions, which are calculated using the verlet algorithm. I understand what you're saying about the potential, but I don't get why this would result in such a difference between the 2 estimators... Replies 1 Views 876 Replies 0 Views 2K Replies 2 Views 16K • Last Post Replies 2 Views 3K Replies 8 Views 4K • Last Post Replies 22 Views 1K • Last Post Replies 5 Views 3K • Last Post Replies 8 Views 809 • Last Post Replies 5 Views 1K • Last Post • Optics Replies 5 Views 1K
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8081713318824768, "perplexity": 793.0665448015316}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104576719.83/warc/CC-MAIN-20220705113756-20220705143756-00648.warc.gz"}
http://math.gatech.edu/node/16416
## Global well-posedness for some cubic dispersive equations Series: PDE Seminar Tuesday, March 24, 2015 - 3:05pm 1 hour (actually 50 minutes) Location: Skiles 006 , Johns Hopkins University In this talk we examine the cubic nonlinear wave and Schrodinger equations. In three dimensions, each of these equations is H^{1/2} critical. It has been showed that such equations are well-posed and scattering when the H^{1/2} norm is bounded, however, there is no known quantity that controls the H^{1/2} norm. In this talk we use the I-method to prove global well posedness for data in H^{s}, s > 1/2.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8012655377388, "perplexity": 1435.475533642153}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583510749.55/warc/CC-MAIN-20181016113811-20181016135311-00457.warc.gz"}
https://slideplayer.com/slide/5979294/
# Number Theory – Introduction (1/22) Very general question: What is mathematics? Possible answer: The search for structure and patterns in the universe. ## Presentation on theme: "Number Theory – Introduction (1/22) Very general question: What is mathematics? Possible answer: The search for structure and patterns in the universe."— Presentation transcript: Number Theory – Introduction (1/22) Very general question: What is mathematics? Possible answer: The search for structure and patterns in the universe. Question: What is Number Theory? Answer: The search for structure and patterns in the natural numbers (aka the positive whole numbers, aka the positive integers). Note: In general, in this course, when we say “number”, we mean natural number (as opposed to rational number, real number, complex number, etc.). Some Sample Problems in Number Theory Can any number be written a sum of square numbers? Can any number be written as sum of just 2 square numbers? Experiment! See any patterns? Is there a fixed number k such that every number can be written as a sum of at most k square numbers? Same question as the last for cubes, quartics (i.e., 4 th powers), etc. This general problem is called the Waring Problem. More problems.... Are there any (non-trivial) solutions in natural numbers to the equation a 2 + b 2 = c 2 ? If so, are there only finitely many, or are the infinitely many? Are there any (non-trivial) solutions in natural numbers to the equation a 3 + b 3 = c 3 ? If so, are there only finitely many, or are the infinitely many? For any k > 2, are there any (non-trivial) solutions in natural numbers to the equation a k + b k = c k ? If so, are there only finitely many, or are the infinitely many? This last problem is called Fermat’s Last Theorem. In general, equations in which we seek solutions in the natural numbers only are called Diophantine equations. And yet more... Primes! Definition. A natural number > 1 is called prime if.... Are there infinitely many primes? About how many primes are there below a given number n? (The answer is called the Prime Number Theorem.) Definition. Two primes are called twins if they differ by 2. (Examples?) Are there infinitely many twin primes? (This is, of course, called the Twin Primes Problem.) Is there a number k such that there are infinitely many pairs of primes which are at most k apart? The existence of such a number k was proved this past summer!!! More with primes Definition. We say two numbers a and b are congruent modulo m if m divides b – a. Are there infinitely many primes which are congruent to 1 modulo 4? To 2 modulo 4? To 3 modulo 4? Can every even number be written as a sum of two primes? (This is called the Goldbach Problem.) Can every odd number be written as a sum of three primes? (This – sort of - is called Vinogradov’s Theorem.) Assignment for Friday Obtain the text. Read the Introduction and Chapter 1. In Chapter 1 try out Exercises 1, 2, 3 and 5. Download ppt "Number Theory – Introduction (1/22) Very general question: What is mathematics? Possible answer: The search for structure and patterns in the universe." Similar presentations
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9642158150672913, "perplexity": 401.64948609677765}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250601615.66/warc/CC-MAIN-20200121044233-20200121073233-00320.warc.gz"}
https://en.wikisource.org/wiki/Page:TolmanEmission.djvu/3
Page:TolmanEmission.djvu/3 138 RICHARD C. TOLMAN. the only difference in length of path occurs, before reflection, i. e., ${\displaystyle AB=L_{1}>AC=L_{2}}$ Consider first a stationary source, and let τ be the period of the source which produces a bright line at D. For the production of such a line, it is evident that light impulses coming over the two paths ABD and ACD must arrive at D in the same phase. If Δt is the time interval between the departures from the source of two light impulses which arrive simultaneously at D, the condition necessary for their arrival in phase is evidently given by the equation ${\displaystyle i\tau =\Delta t={\frac {L_{1}-L_{2}}{c}}}$ (I) where i is a whole number. (Note that ${\displaystyle L_{2}-L_{1}}$ with the apparatus as arranged.) Consider now a source of light approaching the slit with the velocity v. If τ' is the period of the source which now produces a bright line at D and Δt' the time interval between departures from the source of two light impulses which now arrive simultaneously at D we evidently have the relation ${\displaystyle i\tau '=\Delta t'}$. In order to obtain an expression for Δt' in terms of ${\displaystyle L_{1}}$ and ${\displaystyle L_{2}}$, we must note that the source moves toward the slit the distance vΔt' during the interval of time between the departures of the two light impulses, and hence the difference in path which was ${\displaystyle L_{1}-L_{2}}$ for a stationary source has now become ${\displaystyle L_{1}-L_{2}+v\Delta t'}$. Furthermore we must remember that according to the theory which we are investigating the light before reflection will have the velocity c+v,[1] and hence ${\displaystyle i\tau '=\Delta t'={\frac {L_{1}-L_{2}+v\Delta t'}{c+v}},}$ ${\displaystyle i\tau '={\frac {L_{1}-L_{2}}{c}},}$ (2) which by comparison with equation (i) gives us ${\displaystyle \tau '=\tau }$. In other words if the first of the above emission theories of light is true, both before and after the source of light is set in motion, light produced by the same period of the source gives a bright line at the point D, that is, the expected Doppler effect or shifting of the lines does not occur. In interpreting actual experimental results, it must be borne in mind that the adjustment of the grating was assumed to be such that the 1. The slight difference in direction between the rays AB and AC and the motion of the source may be neglected.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 11, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9109588265419006, "perplexity": 343.99439290102373}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171271.48/warc/CC-MAIN-20170219104611-00162-ip-10-171-10-108.ec2.internal.warc.gz"}
https://dsp.stackexchange.com/tags/image-processing/hot?filter=year
# Tag Info 12 If I understand your method 1 correctly, with it, if you used a circularly symmetrical region and did the rotation about the center of the region, you would eliminate the region's dependency on the rotation angle and get a more fair comparison by the merit function between different rotation angles. I will suggest a method that is essentially equivalent to ... 6 Common Approaches for Commercial Denoisers Commercial denoisers are different than what you'd see on most papers. While on papers the results are mostly using objective metrics (PSNR / SSIM) and are evaluated vs. Additive White Gaussian Noise (AWGN) with high level of noise real world images are mostly with moderate level of noise with Mixed Poisson ... 5 There is a similar DSP trick here, but I don't remember the details exactly. I read about it somewhere, some while ago. It has to do with figuring out fabric pattern matches regardless of the orientation. So you may want to research on that. Grab a circle sample. Do sums along spokes of the circle to get a circumference profile. Then they did a DFT on ... 5 Digital image processing is an extension of digital signal processing and linear system theory into two dimensional signals. Image processing involves all low level tasks such as filter design and filtering, spatial scaling, sampling, intensity manipulations, geometry manipulations, Fourier analysis and spectrum analysis, motion estimation, noise reduction, ... 4 The result of a convolution of a data vector of length M with a kernel of length G is of length M + G - 1. (the maximum length of the non-zero portion, even though the limits of integration is sometimes written as from -infinity to +infinity) This is clearly (G - 1) elements longer than the original data vector. So where do these new, "extra", additional ... 4 I've went ahead and basically adjusted the Hough transform example of opencv to your use case. The idea is nice, but since your image already has plenty of edges due to its edgy nature, the edge detection shouldn't have much benefit. So, what I did above said example was Omit the edge detection decompose your input image into color channels and process ... 4 This is a go at the first suggested extension of my previous answer. Ideal circularly symmetric band-limiting filters We construct an orthogonal bank of four filters bandlimited to inside a circle of radius $\omega_c$ on the frequency plane. The impulse responses of these filters can be linearly combined to form directional edge detection kernels. An ... 4 Approximation by the real part of a weighted sum of separable complex Gaussian component kernels Figure 1. The proposed scheme illustrated as 1-d real convolutions ($*$) and additions ($+$), for cut-off frequency $\omega_c = \pi/4$ and kernel width $N=41$. Each the upper and the lower half of the diagram is equivalent to taking the real part of a 1-d ... 4 I myself recently graduated from Applied Mathematics and began PhD in signal processing. I do Stochastic Geometry modeling of wireless networks in particular, which is quite mathematical subject. It involves measure theory, probability theory, Fourier Analysis etc. etc. The area of Signal Processing is very broad indeed. It of course depends if you want to ... 4 In general, the time derivative property of the Fourier Transform is given as $$\mathscr{F}[\frac{d}{dt}x(t)] = j\omega X(j\omega)$$ Notice that we can simply multiply by the frequency index in the Fourier Transform result. For the 2D FT result: $$\mathscr{F}[f(x,y)]= F(u,v)$$ Using the same property results in: \mathscr{F}[\frac{d}{dx}f(x,y)]= uF(u,... 4 Sampling is the process of making the x-axis (time) discrete and quantization is the process of making the y-axis (magnitude) discrete. You can sample without quantization (such as done with an analog sample and hold circuit). Quantization is introduced through rounding or truncation when the sampled analog signal is mapped to a digital representation. ... 3 In my StackExchange Signal Processing Q38542 GitHub Repository you will be able to see a code which implements 2D Circular Convolution both in Spatial and Frequency Domain. Pay attention to the function CircularExtension2D(). This function align the axis origin between the image and the kernel before working in the Frequency Domain. Remember that for ... 3 We need to assume the reader knows some basic stuff to answer that. Let's give it a try. Lets understand the sentence - Zero / First Order Hold. We have the Zero / First Order and the Hold. Zero / First Order hold means the order of the Taylor Series of the function we use to interpolate. In other words, the degree of the Polynomial we can write the ... 3 Why Does 2D FFT of Gaussian Looks More Sharper than Gaussian Itself? Have a look at the Fourier Transfrom of a Gaussian Signal. \mathcal{F}_{x} \left\{ {e}^{-a {x}^{2} } \right\} \left( \omega \right) = \sqrt{\frac{\pi}{a}} {e}^{- {\pi}^{2} \frac{ {\omega}^{2} }{a} } $$First, Gaussian Signal stays Gaussian under Fourier Transform. As you can see, the ... 3 This is a good question and something that I remember asking myself when I first learned about impulse responses and convolution. To understand this, it is first necessary to understand the significance of impulses and impulse responses. Referring to the image below, you can see that an impulse is an instantaneous like input and the impulse response is the ... 3 In the Total Variation framework we define 2 flavors:$$ \text{Isotropic TV} \; {TV}_{ {L}_{2} } \left( X \right) = \sum_{ij} \sqrt{ { \left( {D}_{h} X \right) }_{ij}^{2} + { \left( {D}_{v} X \right) }_{ij}^{2} } \text{Anisotropic TV} \; {TV}_{ {L}_{1} } \left( X \right) = \sum_{ij} \sqrt{ { \left( {D}_{h} X \right) }_{ij}^{2} } + \sqrt{{ \left( {D}_{... 3 Focus on the first equation for EY. Back in the day when color television was being developed, the color signal had to be compatible with black and white TVs and vice versa. So the compatible brightness signal (luma Y) has to be calculated from the three primary color signals (R, G B) for transmission. Human visual system does not perceive brightnesses of ... 3 Rather performance intensive, but should get you accuracy as wanted: Edge detect the image Hough transform to a space where you have enough pixels for the wanted accuracy. Because there are enough orthogonal lines; the image in the hough space will contain maxima lying on two lines. These are easily detectable and give you the desired angle. 3 Excerpted from Jae S.Lim 2D signal and image processing ch.1, as an example of $2$-D circularly symmetric lowpass filter with a cutoff frequency of $\omega_c$ radians per sample, whose impulse response is given by: $$h[n_1,n_2] = \frac{\omega_c}{2\pi \sqrt{n_1^2 + n_2^2} } J_1 \big( \omega_c \sqrt{n_1^2 + n_2^2} \big)$$ where $J_1$ is the Bessel function ... 3 In the context of image processing (and machine vision as well), blurring is an operation that reduces the sharpness of an image by some lowpass filtering applied on it. There are different causes of blurring such as lens blur, motion blur, or just LSI (linear shift invariant) lowpass filtering. Deblurring refers to any restoration performed on the image ... 3 Let me present the following Diagram: So, both Deblurring and Deconvolution are operations within the family of Image Restoration (Which is a subset of Inverse Problem set). Basically we build the Image Restoration set by different Degradation Models. The one related to the question are: Linear Degradation Model Namely, the degradation is made by a Linear ... 3 General Idea The general idea of Principal Component Analysis (PCA) is as following (Intuition over formalism): Given a set of points in space (Inner Product Space) find a set of vectors (Directions) which are uncorrelated which span the data in the most energy preserving manner. The tricky part is explaining "most energy preserving manner". So we're ... 3 Answer to the first post I guess that is pretty much dependent on the problem that you are dealing with. Boundaries are always a problem that needs extra care. In most applications, it would be an option to set those values to zero (and handle normalization factors of the filter appropriately). Other options would be to reflect the data, so that the index ... 3 Questioner's answer... sigma have the same units as x and y i.e. number of pixels. In multi-scale filtering, the size of the filter must change when the sigma changes. Obtain the number of pixels per one millimeter or the vice-versa. (I did this using the property of pixel spacing included in the DICOM metadata in Matlab you can do this as info=dicominfo('... 3 In support of Comparable mixin, a default <=> or spaceship operator for pixels is defined in the function Pixel_spaceship in rmpixel.c. However, in your use of the sort method, you define your own code block that overrides the <=> operator, and yours takes a single argument rather than two which would be correct, so the definition is broken and ... 3 Your interpretation is correct: directional derivation operators highlight variation in a given direction. Here, you use the $2$-point discrete derivative in the $x$-direction (along image rows). It may emphasize vertical features. First, such operators indeed extend the initial image range. However, one often uses the absolute value of the derivative to ... 3 MATLAB is one of the most important software inventions of the twentieth century, from a DSP point of view its syntax is simply the best in the world. And image processing is one of its strongest parts. However it's mainly of academic focus and if you look for industrial output you should consider having a number of additional tools. LabView is one such ... 3 Well, look at your original picture: it's constant for all points but the edges, which means your derivative is zero for all points but these edges. By applying a "rounding, smoothing" filter to it, you "smear" the edges enough to make the derivative be non-zero for multiple pixels, in every direction. Only top voted, non community-wiki answers of a minimum length are eligible
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8521244525909424, "perplexity": 821.7690741836537}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143635.54/warc/CC-MAIN-20200218055414-20200218085414-00382.warc.gz"}
http://mathhelpforum.com/differential-equations/59784-second-order-differential-equation-print.html
# Second Order Differential Equation • November 16th 2008, 01:08 AM panda* Second Order Differential Equation Hi! How do I determine the particular integral for second order differential equations with mixed f(x)s? Like, it's neither strictly polynomial, trigonometry or exponential. Example of questions, (i) $y'' - 4y' + 5y = (16x + 4)e^{3x}$ (ii) $y'' + 3y' = (10x + 6)sin x$ (iii) $y'' - 2y' + 4y = 541e^{2x}cos5x$ Thank you! (: • November 16th 2008, 08:14 AM shawsend Hey, each of those have right members which are particular solutions to some homogeneous differential equation. For example, the first one has a right member which is a solution to the equation: $(D-3)^2 y=0$ So the operator $(D-3)^2$ becomes an "annihilation" operator that we can apply to both side of the equation to convert it to a homogeneous equation: $(D-3)^2 (D^2-4D+5)y=0$ This is the method of undetermined coefficients. Are you familiar with that method? Try it first on some simple ones. Any DE book should have a section on this subject. • November 16th 2008, 08:49 PM panda* Sorry but I do not understand what are you talking about at all. :( • November 16th 2008, 08:56 PM Chris L T521 Quote: Originally Posted by shawsend Hey, each of those have right members which are particular solutions to some homogeneous differential equation. For example, the first one has a right member which is a solution to the equation: $(D-3)^2 y=0$ So the operator $(D-3)^2$ becomes an "annihilation" operator that we can apply to both side of the equation to convert it to a homogeneous equation: $(D-3)^2 (D^2-4D+5)y=0$ This is the method of undetermined coefficients. Are you familiar with that method? Try it first on some simple ones. Any DE book should have a section on this subject. Aha!! I'm not the only one to know about the Annihilator approach :D Quote: Originally Posted by panda* Sorry but I do not understand what are you talking about at all. :( Read post #6 and #7 here to see how to tackle equations like these. --Chris • November 16th 2008, 09:05 PM Math_Helper 1. First you must solve the homogeneous equations (i) y''-4y-+5y=0 (ii) y''+3y'=0 (iii) y''-2y'+4y=0 and you will find the general solutions for them. 2. For non homogeneous equations you have to find the partial solutions according to what is on the right side of the equations (i) http://www.mathhelpforum.com/math-he...9a1005f4-1.gif (ii) http://www.mathhelpforum.com/math-he...abc676a5-1.gif (iii) http://www.mathhelpforum.com/math-he...fa9942ab-1.gif 3. The final solution of your equations is the sum of the general solutions from 1st point and partial solutions from the 2nd point. • November 17th 2008, 12:36 AM panda* Thank you for referrals of posts and helps. But I still don't really get it after I read the posts :( Generally, I do know how to handle second order differential equations if f(x) was one of the normal terms like, purely exponential, or trig or polynomials. But when it is multiplied together like that, I can't immediately identify the pattern of the particular integral. For example, the particular integral form for an exponential function of $f(x) = e^{kx}$ would be $y_p = pe^{2x}$ and we differentiate and sub it back in to compare coefficients in order to obtain p. How do I deal when f(x) is not purely of those forms but multiplied together? Thank you. • November 17th 2008, 02:53 AM shawsend I'll work the first one using undetermined coefficients; differential equations open a unique window into the universe such that all her secrets are revealed: $(D-3)^2(D^2-4D+5)y=0$ The general solution from the auxiliary equation is then: $y=c_1 e^{3x}+c_2 e^{3x}+c_4 e^{2x}\cos(x)+c_5 e^{2x}\sin(x)$ with the desired solution (I'm taking this right out of Rainville almost word for word): $y=y_c+y_p$ where $y_c=c_4 e^{2x}\cos(x)+c_5 e^{2x}\sin(x)$. Then there must be a particular solution of the original equation containing at most the remaining terms: $y_p=Ae^{3x}+Bxe^{3x}$. That's the undetermined coefficients which can be determined by substituting this $y_p$ into the original DE: When I do that I get: $2Ae^{3x}+2Bxe^{3x}+2Be^{3x}=16xe^{3x}+4e^{3x}$ Equating coefficients, I get $B=8$ and $A=-6$. Then the general solution of $y''-4y'+5y=(16x+4)e^{3x}$ is: $y(x)=-6e^{3x}+8xe^{3x}+c_1e^{2x}\cos(x)+c_2e^{2x}\sin(x)$ • November 17th 2008, 06:57 PM panda* Although I still do not understand the method you recommended, but I'm still thankful for your generous help and time! I figured out a different approach to do it already :) But still thanks anyway! • November 18th 2008, 04:23 AM shawsend Quote: Originally Posted by panda* I figured out a different approach to do it already Close enough. :)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 22, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8245669603347778, "perplexity": 501.55453352513257}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395160.19/warc/CC-MAIN-20160624154955-00196-ip-10-164-35-72.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/19440-suvat-equation-help.html
1. ## SUVAT equation help!! Q. Two clay pidgeons are launched vertically upwards from exactly the same spot at 1 s intervals. Each clay pidgeon has an initial speed 30 m/s and acceleration 10 m/s downwards. How high above the ground do they collide? I know i need to find out when both the pidgeons have the same height using a suvat, but im not sure how i would use an equation as i only have the acceleration and the initial speed. Can anyone give me a hint, or even tell me the whole method. Thanks in advanced. 2. SUVAT? I know Math is a language, but suvat equations? Wikipedia says, "The SUVAT equations are five basic equations used to describe motion of a classical system under constant acceleration. They are named SUVAT equations after the five variables that they contain." s--- Displacement. Units of m (meters, i.e distance and direction from start. It is a vector quantity). u--- Initial velocity. Units of ms - 1 (meters per second, i.e speed and direction. Is a vector quantity). v--- Final velocity. Units of ms - 1 (meters per second, i.e speed and direction. Is a vector quantity). a--- Acceleration. Units of ms - 2 (meters per second squared, i.e rate of change of speed, and direction. Is a vector quantity). t ---Time. Units of s (seconds, i.e an amount of time. Is a scalar quantity). v = u +at -------------(1) s = [(u +v)/2]*t -------(2) And 3 more formulas ---(two for s and one for v^2)---that can be easily derived from the two mentioned above. I have memorized (1) and (2), in different forms, long time ago, and they are all I need to take care of these so-called SUVAT equations now. Final velocity, Vf = Vo +at --------------------------(1') distance travelled, d = [(Vo +Vf)/2]*t ---------------(2') ------------------------------------------ Q. Two clay pidgeons are launched vertically upwards from exactly the same spot at 1 s intervals. Each clay pidgeon has an initial speed 30 m/s and acceleration 10 m/s downwards. How high above the ground do they collide? The two clay pigeons will collide when the first one is already going down while the second is still rising up ----at the same height above the ground. s1 = s2 ------(i) Given: u1 = u2 = 30 m/sec upwards a1 = a2 = 10 m/sec/sec downwards. ----so it is -10 m/sec/sec as we take upwards to be positive. If we make the time the first one is fired as our reference, then t1 = t seconds t2 = (t-1) seconds -----because t2 is less than t1. We use the suvat equation, s = ut +a(t^2)/2: s1 = 30t +(-10)(t^2)/2 s2 = 30(t-1) +(-10)[(t-1)^2]/2 s1 = s2 So, 30t +(-10)(t^2)/2 = 30(t-1) +(-10)[(t-1)^2]/2 30t -5t^2 = 30t -30 -5[t^2 -2t +1] -5t^2 = -30 -5t^2 +10t -5 0 = 10t -35 t = 35/10 = 3.5 sec Hence, s1 = 30(3.5) -5(3.5)^2 = 43.75 m s2 = 30(3.5 -1) -5(3.5 -1)^2 = 43.75 m Therefore, the two clay pigeons will collide at 43.75 meters above the ground.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8223432302474976, "perplexity": 2678.0143776425134}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818686983.8/warc/CC-MAIN-20170920085844-20170920105844-00450.warc.gz"}
http://mathoverflow.net/questions/105477/what-are-the-limits-of-the-erd%c5%91s-rankin-method-for-covering-intervals-by-arithme
# What are the limits of the Erdős-Rankin method for covering intervals by arithmetic progressions? To construct gaps between primes which are marginally larger than average, Erdős and Rankin covered an interval $[1,y]$ with arithmetic progressions with prime differences. A nice short exposition is here, but I'll summarize. The classic construction $z!+2, z!+3, ...z!+z$ constructs an interval of composite numbers which is embarrassingly short. It is shorter than the average distance between primes of size $z!$, $\log z! \sim z \log z,$ but we only constructed an interval of length $z.$ Slightly better is to replace $z!$ with the product of primes up to $z$, but this only produces a gap of about the average distance between primes. The method of Erdős and Rankin constructs a slightly larger gap, not quite $g \log g$ where $g$ is the size of an average gap. The point of covering an interval with arithmetic progressions is as follows. If you have a covering of $[1,y]$ by arithmetic progressions $a_p \mod p$ with $p\lt z \lt y$ then by the Chinese Remainder Theorem choose $n$ between $z$ and $z+\prod_{p\lt z} p$ so that $n \equiv -a_p \mod p.$ Then $n+k\in \lbrace n+1, n+2, ... n+y \rbrace$ is divisible by the difference of whichever arithmetic progression covers $k$. If you can cover an interval of length $y$ which is much longer than $z$, then you can construct a gap which is much longer than average. One ingredient in the construction is to choose $a_p = 0$ for primes between well-chosen $z_1 \lt z_2 \lt z$ with $z_1z_2 \gt y$. The point is that we get no collisions, so the arithmetic progression corresponding to each prime $p \in [z_1,z_2]$ covers a different $\lfloor \frac y p \rfloor$ integers in $[1,y].$ Second, use a greedy algorithm for small primes, choosing $a_p$ for $p \lt z_1$ so that each arithmetic progression covers as many uncovered integers in $[1,y]$ as possible. By the pigeonhole principle, you can reduce the number of uncovered integers by a factor of $(1-\frac 1 p).$ Use Mertens' Theorem, that $$\prod_{p\lt z_1}(1-\frac1p) \sim \frac{e^{-\gamma}}{\log z_1}.$$ Third, use the larger primes $p \gt z_2$ to eliminate the remaining uncovered integers, using each prime to cover at least one integer until everything is covered. Optimizing $z_1$ and $z_2$ is a bit messy, but using arithmetic progressions whose differences are primes up to $z$, they covered an interval of length at least $c \frac{z \log z \log\log\log z}{(\log\log z)^2} = o(z \log z).$ My question is what upper bounds are known for the effectiveness of this type of construction. I suspect that Erdős and Rankin couldn't have done much better by this technique. ## If you take arithmetic progressions whose differences are the primes up to $z$, must there be an integer smaller than $O(z^2)$ which is not covered by any arithmetic progression? $O(z^{3/2})?$ $O(z \log z)$? If there must be an uncovered integer smaller than $z^2$ then a different technique, perhaps not a constructive one, would be needed to establish that the existence of gaps of the conjectured size $z^2$ between primes of size about $\exp(z)$. - I am still working through the literature myself, so I don't know the answer. I take it you know of the further advances on prime gap lower bounds (Pomerance, Maier, Pintz, I think?), and that they bear no resemblance to Rankin's method? Also, have you checked Hagedorn's 2009 paper on computing Jacobsthal's function to make sure there is nothing you want there? Gerhard "Just Checking On The Obvious" Paseman, 2012.08.25 –  Gerhard Paseman Aug 25 '12 at 21:46 Also, Westzynthius uses a similar argument to get bounds close to what Rankin and Erdos have. I will review the paper and post something summarizing the differences between W's method and the one you outline above (which may very well be no difference). Gerhard "Ask Me About System Design" Paseman, 2012.08.25 –  Gerhard Paseman Aug 25 '12 at 22:18 If I remember correctly, Erdös himself was positive that this construction can't be improved easily (he called it hopeless even), which is the reason he offered a large prize for it. –  Woett Aug 25 '12 at 23:58 I wouldn't be shocked if it could be improved by a really clever trick, but I'd still like to know if there is some clear obstruction to improving it all of the way to $z^2$, say. If so, then to prove there are large prime gaps one has to use other techniques than covering intervals by arithmetic progressions, although that still looks like a natural problem on its own. @Gerhard, I'm not very familiar with the recent progress on this problem, and I'll look into the work you mention. Thanks. –  Douglas Zare Aug 26 '12 at 0:22 Maier-Pomerance indicate that they expect $z(\log z)^2$ as the limit (see 1.5), if one knew prime $k$-tuples. They basically use an on-average version of that (in AP), in the paper improving the constant. Thus for large primes, they can't show that any of them individually sieves out more than 1 number, but on average they can show at least 1.31, and Pintz 2. When knowing prime $k$-tuples, at least with uniformity enough, the large primes would then be shown to be more optimal, in sieving out. –  Junkie Oct 25 '12 at 2:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9343786239624023, "perplexity": 342.17310600073176}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657133033.29/warc/CC-MAIN-20140914011213-00331-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
http://www.scholarpedia.org/article/Talk:Kicked_top
# Talk:Kicked top Jump to: navigation, search ## Contents ##### Review of Reviewer A: Well written article which demonstrates why 'kicked top' can be considred as one of the standard dynamical models used to study quantum signatures of chaos. Of special importance is the discussion of various symmetries of the quantum system which determine which of three universality classes describe statistical fluctuations of the spectra of evolution operators. The paper covers the entire subject, provides links to the relevant literature and is illustrated with nice figures, so I find it appropriate for Scolarpedia. The only minor improvement which I could suggest concerns a sentence after eq (3), which reads "The factor (2j+1)^{-1} appears in the second exponent" but it should be 'the FIRST exponent' (or the equation should be modified accordingly). ## Author Kus: Thank you for a correct remark. The text of the article was changed accordingly ##### Review of Reviewer B: This article about the "Kicked top" is very interesting and important in the context classical and quantum chaos. I have some suggestions concerning the presentation and pedagogical aspects: 1) In the first section the authors introduce the kicked top in a general form with arbitary polynomials H_0 and H_1. I have the impression that there is somehow a "generic choice" of these polynomials as examples which is used in most of the studies of the Kicked Top. I think the authors should give from the very beginning very clearly this generic choice in a separate equation, i.e. provide in Eq. (2) the prefactors with parameters (and not only "\propto") which are (in my understanding ?) the parameters \alpha and \tau used in the Figures 1-3, or in other words provide the exact classical Hamiltonian corresponding to these figures. If possible this classical Hamiltonian should also correspond to the quantum version given in Eq. (3), eventually providing a translation between "classical" and "quantum" parameters (due to the factor (2j+1)^{-1}). 2) I think in section 1, below the (modified) Eq. (2) one should also provide an explicit expression for the classical map which is obtained from the time dependant Hamiltonian (for the "generic choice", and without an explicit derivation, only the result), i.e. the explicit equations relating J^{(n+1)} to J^{(n)} and which allows to reproduce the figures 1-3 for anybody with reasonable programming skills in the field. 3) I think there is a visibility problem concerning the figures 1-3, especially 2 and 3. They are still quite well visible on the computer screen (provided a reasonable resolution). However, when printed out on paper the dots are barely visible. If possible it would be nice to increase slightly the dot size of these figures and maybe this can even be done by changing a parameter in the corresponding (source) postscript files or the plot program used to produce these figures. This is a more optional suggestion from my part but I think it would be nice if the authors can consider this suggestion. 4) In Section 4 when the authors speak of different "symmetry classes" and the Wigner surmise for P(S) it would be appropriate to add some reference to Random matrix theory, for example the book of Mehta since many readers are not necessarily familiar with these things. Of course a Scholarpedia reference would be ideal but I have seen that for the moment there is only an unfinished version but which seems already visible by an automatic link. I am not sure how to handle this exactly. Maybe a reference to the book of Mehta for now and replace it later with a Scholarpedia reference ? 5) In the last section about the rotator limit it would be nice to provide the translation of the parameters, i.e. to give an equation "K=..." where K is the kicked rotator parameter and "..." is the expression of the kicked top parameters (always for the "generic choice"). There is already an implicit Scholoarpedia reference (by an automatic link) for the kicked rotator which explains the "K"-parameter. Therefore, I suppose it is not necessary to add an explicit additional reference about the kicked rotator. 6) Optionally one might add some explanation about the localization length, i.e. provide an additional equation of the type: l \propto D/\hbar^2 \approx K^2/(2\hbar^2) if K\gg 1 and then finally "l \propto ..." in terms of the kicked top parameters using the expression "K=..." given previously (according to point 5) and where D is the Diffusion constant (=> refering to kicked rotator article as explanation). This would also give some (simplified) expression of the resulting localization length in terms of the initial kicked top parameters. ## Author Kus : All improvements suggested by the Referee B in points 1), 2), 4), 5), and 6) were included. For the moment we did not find an effective method to change the visibility of the figures. ## Editor notes a) please, update the figs which are not visible b) it may be useful to quote other physical systems where the kicked top model naturally appears (e.g. see Phys. Rev. Lett. v.74, p.2098 (1995) with resonant tunneling diode) c) it will be useful to quote works of other groups which used the kicked top to study propeties of quantum chaos (e.g. P.Jacquod et al Phys. Rev. E v.64, p.055203(R) (2001))
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8961006999015808, "perplexity": 791.8000505976983}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948513784.2/warc/CC-MAIN-20171211164220-20171211184220-00608.warc.gz"}
https://brilliant.org/problems/choose-the-number/
# choose the number Algebra Level 3 there are 3 temples across the bridge you have cross the bridge to put a garlands in all 3 temples, each time you put a number of garland the remaining number of garland in your hand will double. how many number of garlands will you take so that you are able to put equal garlands in all temples and no garland is left in your hand? ×
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8192543983459473, "perplexity": 1386.5863703473046}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00241-ip-10-171-10-70.ec2.internal.warc.gz"}
https://www.quantumdiaries.org/2010/09/16/the-w-mass-from-fermilab/
## View Blog | Read Bio ### The W mass from Fermilab With the LHC running and experimentalists busy taking real data, one of the things left for theory grad students like me is to learn how to interpret the plots that we hope will be ready next summer. A side remark: the LHC will keep taking physics into next year, but in 2012 will shut down for a year to make the adjustments necessary to ramp up to its full 7 TeV/beam energy. Optimistic theorists are hoping that the summer conferences before the shutdown will share plots that offer some clues about the new physics we hope to see in the subsequent years. The most basic feature we can hope for is a resonance, as we described when we met the Z boson. The second most basic signal of a new particle is a little more subtle, it’s a bump in the transverse mass distribution. I was reminded of this last night because of a new conference proceeding paper on the arXiv last night (1009.2903) presenting the most recent fit to the W boson mass from the CDF and D0 collaborations at the Tevatron. The result isn’t “earth shattering” by any extent. We’ve known that the W mass is around 80 GeV for quite some time. The combined results with the most recent data is really an update about the precision with which we measure the value because it is so important for determining other Standard Model relations. Here’s the plot: It’s not the prettiest looking plot, but let’s see what we can learn before going into any physics. In fact, we won’t go into very much physics in this post. The art of understanding plots can be subtle and is worth a discussion in itself. • On the x-axis is some quantity called mT. The plot tells us that it is measured in GeV, which is a unit of energy (and hence mass). So somehow the x axis is telling us about the mass of the W. • On the y-axis is “Events per 0.5 GeV.” This tells us how many events they measured with a given mT value. • What is the significance of the “per 0.5 GeV” on the y-axis? This means, for example, that they count the number of events between 70 GeV and 70.5 GeV and plot that on the graph. This is called “binning” because it sets how “bins” you divide your data set into. If you have very small bins then you end up with more data points on the x axis, but you have much fewer data points per bin (worse statistics per bin). On the other hand, if you have very large bins you end up with lots of data per bin, but less ability to determine the overall shape of the plot. • The label for the plot tells us that we are looking at events where a W boson is produced and decays into a muon and a neutrino. This means (I assume?) that the experimentalists have already subtracted off the “background” events that mimic the signature of a muon and a neutrino in the detector. (In general this is a highly non-trivial step.) • The blue crosses are data: the center of the cross is the measured value and the length of the bars gives the error. • The values under the plot give us the summary of the statistical fit: it tells us that the W mass is 80.35 GeV and that the χ2/dof is reasonable. This latter value is a measure of how consistent the data is with your theory. Any value near 1 is pretty good, so this is indeed a good fit. • The red line is the expected simulated data using the statistical fit parameters. We can see visually that the fit is very good. You might wonder why it is necessary to simulate data—can’t the clever theorists just do the calculation and give an explicit plot? In general it is necessary to simulate data because of QCD which leads to effects that are intractable to calculate from first principles, but this is a [very interesting] story for another time. Now for the relevant question: what exactly are we plotting? In order to answer this, we should start by thinking about the big picture. We smash together some particles, somehow produce a W boson, which decays into a muon and a neutrino. We would like to measure the mass of the W boson from the “final states” of the decay. The primary quantities we need to reconstruct the W mass are the energies and momenta of the muon and neutrino. Then we can use energy and momentum conservation to figure out the W‘s rest mass. (There’s some special relativity involved in here which I won’t get into.) Homework: for those of you with some background in high school or college physics, think about how you would solve for the W mass if you had a measurement for the muon energy and momentum. For this “toy calculation” you don’t need special relativity, just use E = (rest mass energy) + (kinetic energy) and assume that the neutrino is massless. [The discussion below isn’t too technical, but it will help if you think about this problem a little before reading on.] The first point is that we cannot measure the neutrino: it’s so weakly interacting that it just shoots out of our detector without any direct signals… like a ninja. That’s okay! Conservation of energy and momentum tells us that it is sufficient to determine the energy and momentum of the muon. We know that the neutrino momentum has to be ‘equal and opposite’ and from this we can reconstruct its energy (knowing that it has negligible mass). … except that this too is a little simplified. This would be absolutely true if the W boson were produced at rest, such as at electron-positron colliders like LEP or SLAC. However, at the Tevatron we’re colliding protons and antiprotons…. which means we’re accelerating protons and antiprotons to equal and opposite energies, but the actual stuff that’s colliding are quarks, which each carry an unknown fraction of the proton energy and momentum! Thus the W boson could end up having some nonzero momentum along the axis of the beam and this spoils our ability to use a simple calculation based on energy/momentum conservation to determine the W mass. This is where things get slick—but I’ll have to be heuristic because the kinematics involved would be more trouble than they’re worth. The idea is to ignore the momentum along the beam direction: it’s worthless information because we don’t know what the initial momentum in that direction was. We only look at the transverse momenta, which we know should be conserved and was initially zero. If we use conservation of energy/momentum on only the transverse information, we can extract a “fake” mass. Let us call this the transverse mass, mT. (Technically this is not yet the “transverse mass,” but since we’re not giving rigorous mathematical definitions, it won’t matter.) This fake mass is exactly equal to the real mass when the W has no initial longitudinal momentum. This is a problem: we have no way to know the initial longitudinal momentum for any particular event… we just know that sometimes it is close to zero and other times its not. The trick, then, is to take a bunch of events. Up to this point, in principle you didn’t need more than one event to determine the W mass as long as you knew that the one event had zero longitudinal momentum. Now that we don’t know this, we can plot a distribution of events. For the events where the longitudinal momentum of the W is zero, we expect that our transverse mass measurements are close to the true W mass. For the events with a non-negligible longitudinal momentum, part of the “energy” of the W goes into the longitudinal direction which we’re ignoring, and thus we end up measuring a transverse mass which is less (never greater!) than the true W mass. Thus we have a strategy: if we can measure a bunch of events, we can look at the distribution and the largest possible value that we measure should represent those events with the smallest longitudinal momentum, and hence should give the correct W mass. This is almost right. It turns out that there are a few quantum effects that spoil this. During the production of the W, nature can conspire to pollute even the transverse momentum data: the W might emit a photon that shifts its transverse momentum a little, or the quarks and gluons might produce some hadrons that also give the W some transverse momentum kick. This ends up smearing out the distribution. It turns out that these can be taken into account in a very clever—but essentially mathematical—way, and the result is the plot above. You can see that the distribution is still smeared out a little bit towards the tail, but that there is a sharp-ish edge at the true W boson mass. This is what experimentalists look for to fit their data to get extract the W mass. (For more discussion on the W mass and a CMS perspective, see this post by Tommaso a few months ago.) I really like this story—there’s a lot of intuition and physics that goes into the actual calculations. It turns out, however, that for the LHC things can get a lot more complicated. Instead of single W bosons, we hope to produce pairs of exotic particles. These can each end up decaying into things that are visible and invisible, just like the muon–neutrino system that the W decays into. However, now that there are two such decays, the kinematics ends up becoming much trickier. Recently some very clever theorists from Cambridge, Korea, and Florida have made lots of progress on this problem and have developed an industry for so-called “transverse mass” variables. For those interested in the technical details, there’s now an excellent review article (arXiv:1004.2732). [These sorts of analyses will probably not be very important until after the LHC 2012 shutdown when more data can be collected, but they offer a lot of promise for how we can connect models of new physics to data from experiments.] Cheers, Flip
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8716284036636353, "perplexity": 422.14283689370114}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991269.57/warc/CC-MAIN-20210516105746-20210516135746-00427.warc.gz"}
http://mathhelpforum.com/algebra/197965-simplifying-complex-fraction.html
# Math Help - Simplifying this complex fraction 1. ## Simplifying this complex fraction $\frac{m-\frac{1}{2m+1}}{1-\frac{m}{2m+1}}$ I've done all the problems in my homework but this one, I don't even find the process difficult but something about this one is eluding me, I can't make my answer match up with the book's no matter how I work it out. Can anyone lend a hand please? 2. ## Re: Simplifying this complex fraction Simplification gives you : $\frac{2m^2+m-1}{m+1}=\frac{(2m-1)(m+1)}{m+1}=2m-1$ 3. ## Re: Simplifying this complex fraction I would start by multiplying by $\frac{2m+1}{2m+1}$ 4. ## Re: Simplifying this complex fraction Thanks guys! Really appreciate the help. I see where I was going wrong now.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9435566663742065, "perplexity": 1097.9280370341166}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207932182.89/warc/CC-MAIN-20150521113212-00008-ip-10-180-206-219.ec2.internal.warc.gz"}
https://www.sparrho.com/item/branes-from-free-fields-to-general-backgrounds/93e21f/
# Branes: from free fields to general backgrounds Research paper by J. Fuchs, C. Schweigert Indexed on: 28 Jan '98Published on: 28 Jan '98Published in: High Energy Physics - Theory #### Abstract Motivated by recent developments in string theory, we study the structure of boundary conditions in arbitrary conformal field theories. A boundary condition is specified by two types of data: first, a consistent collection of reflection coefficients for bulk fields on the disk; and second, a choice of an automorphism $\omega$ of the fusion rules that preserves conformal weights. Non-trivial automorphisms $\omega$ correspond to D-brane configurations for arbitrary conformal field theories. The choice of the fusion rule automorphism $\omega$ amounts to fixing the dimension and certain global topological features of the D-brane world volume and the background gauge field on it. We present evidence that for fixed choice of $\omega$ the boundary conditions are classified as the irreducible representations of some commutative associative algebra, a generalization of the fusion rule algebra. Each of these irreducible representations corresponds to a choice of the moduli for the world volume of the D-brane and the moduli of the flat connection on it.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9111477136611938, "perplexity": 415.355077790707}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250611127.53/warc/CC-MAIN-20200123160903-20200123185903-00458.warc.gz"}
https://scicomp.stackexchange.com/questions/26529/gaussian-numerical-differentiation/26532
# Gaussian Numerical Differentiation Gaussian quadrature improves on Newton-Cotes formulas by allowing the abscissas to vary along with the weights in order to integrate higher order polynomials. Can this idea be extended to numerical differentiation? To wit, can I choose a set $\{h_{i}\}$ such that $f'$ is much better approximated by a weighted sum of evaluations at $x+h_{i}$ than it could be at equally spaced data point? For example, maybe the following relation could hold for some $j$: \begin{align*} f'(x) = \sum_{i=1}^{n}\frac{w_i}{h_i}f(x+h_{i}) + \mathcal{O}(\left\| hf^{(n+j)}\right\|^{n+j}) \end{align*} • Short answer: Yes, look at Chebyshev collocation methods (in particular, Nick Trefethen's book Approximation Theory and Approximation Practice and the Chebfun software). – Christian Clason Mar 31 '17 at 19:24 • In particular, you might want to see Trefethen's short script for computing a Chebyshev differentiation matrix. – J. M. Apr 1 '17 at 9:47 ## 1 Answer Yes. As you may know, numerical differentiation and integration is closely related to (polynomial) interpolation: The idea to approximately differentiate or integrate a given function is to approximate it with a function (often an interpolating polynomial) that can be differentiated or integrated exactly. For example, the standard central difference quotient formulas for $f(x)$ can be derived from differentiating a quadratic interpolating polynomial through the points $x-h, x, x+h$. The benefit ist that the error in approximating the derivative or integral is only determined by the error in approximating the function -- which is well understood for polynomial interpolation. In particular, it turns out that uniform interpolation points are a poor choice in general, and interpolation points based on roots of orthogonal polynomials are much better. In the context of quadrature, this leads to the different variants of Gaussian quadrature (Legendre, Chebyshev, Jacobi, Laguerre, Hermite...); in the context of differentiation, this is referred to as spectral collocation. Since it is the basis of spectral methods for solving differential equations, rather than a finite-difference quotient this is usually realized by a differentiation matrix that maps the values of $f$ at a selection of collocation points to the corresponding values of $f'$. (In contrast to the standard finite-difference matrices, these are usually dense; spectral methods are therefore global methods.) The deep relations between these concepts are explained in Trefethen, Lloyd N., Approximation theory and approximation practice, Other Titles in Applied Mathematics 128. Philadelphia, PA: Society for Industrial and Applied Mathematics (SIAM) (ISBN 978-1-611972-39-9/pbk). 305~p. (2013). ZBL1264.41001. In particular, chapter 21 is concerned with spectral collocation, and Theorem 21.1 is precisely the kind of error estimate you wrote. The standard choice for Gaussian quadrature are the roots of the Legendre polynomials, and you can in fact use the same points for differentiation as well (aptly called Legendre collocation); here's a Matlab script that sets up the corresponding differentiation matrix. More commonly used are the roots of Chebyshev polynomials leading to Chebyshev collocation, which is the basis of the remarkable Chebfun Matlab package (and its Julia child, ApproxFun.jl).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9767636060714722, "perplexity": 462.76122206505454}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038072175.30/warc/CC-MAIN-20210413062409-20210413092409-00127.warc.gz"}
http://mathhelpforum.com/math-topics/28265-locus-vertex.html
# Thread: Locus of The Vertex 1. ## Locus of The Vertex Hi, I got a question asking "Find the equation of the locus of the vertex of f(x)=x^2-2mx+2m+1 "What is locus is and how can i write an equation of a locus i made a search but didnt really get anything useful. Thanks 2. Originally Posted by JohnDoe Hi, I got a question asking "Find the equation of the locus of the vertex of f(x)=x^2-2mx+2m+1 "What is locus is and how can i write an equation of a locus i made a search but didnt really get anything useful. Thanks 1. Calculate the coordinates of the vertex: $f(x) = x^2 - 2mx + 2m +1~\iff~f(x) = x^2-2mx + m^2 - m^2 + 2m +1$ $= (x-m)^2-m^2+2m+1$ . Therefore the coordinates of the vertex are: V(m, -m^2+2m+1) 2. From the coordinates you know: $\left|\begin{array}{l}x = m\\ y = -m^2 + 2m +1\end{array}\right.$ 3. Substitute the term for m from the 1rst equation into the 2nd equation. You get: $y = -x^2+2x+1$ . And that's the equation of the curve on which all vertices of all parabola are placed. (Red line) Attached Thumbnails
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8723440170288086, "perplexity": 539.575068202146}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163055862/warc/CC-MAIN-20131204131735-00017-ip-10-33-133-15.ec2.internal.warc.gz"}
https://brilliant.org/problems/an-algebra-problem-by-jaydee-lucero/
# Two by Two Algebra Level 3 Consider a 2 by 2 matrix $$C$$ given by $\begin{bmatrix} a & c \\ b & d \end{bmatrix}$ where $$a$$, $$b$$, $$c$$ and $$d$$ are real numbers. If $$C$$ has the property that its inverse is equal to its transpose, i.e. $$C^{-1}=C^{T}$$, then what is the value of $$a^2+b^2+c^2+d^2$$? ×
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8769832849502563, "perplexity": 92.44672279637439}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00572-ip-10-171-10-70.ec2.internal.warc.gz"}
http://mathhelpforum.com/discrete-math/73435-solved-proof-involving-natural-numbers-print.html
# [SOLVED] Proof involving natural numbers • Feb 12th 2009, 09:14 PM jzellt [SOLVED] Proof involving natural numbers Suppose m,n e N. Show that m < n and n < m cannot both occur. This is obvious, but I having trouble proving it mathematically. • Feb 12th 2009, 09:20 PM Prove It Quote: Originally Posted by jzellt Suppose m,n e N. Show that m < n and n < m cannot both occur. This is obvious, but I having trouble proving it mathematically. If $m < n$ then $n = m + c, c > 0$. Assume $n < m$. Then $m + c < m \implies c < 0$. But we said $c > 0$. So $n < m$ can not occur.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9907851815223694, "perplexity": 1850.8151625268197}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720615.90/warc/CC-MAIN-20161020183840-00090-ip-10-171-6-4.ec2.internal.warc.gz"}
http://mathoverflow.net/questions/130945/c-c-infty0-tv-is-dense-in-c-c10-tv/130961
# $C_c^{\infty}([0,T];V)$ is dense in $C_c^{1}([0,T];V)$? Is it true that the space $C_c^{\infty}([0,T];V)$ is dense in $C_c^{1}([0,T];V)$? These are compactly supported functions that are $V$ valued, where $V$ is a Banach or Hilbert space. - Convolution with a smooth compactly supported bump function? –  André Henriques May 17 '13 at 11:57 Isn't it superfluous to require compact support, given that the domain is a compact interval? –  MTS May 17 '13 at 13:16 @MTS: $C_c^\infty(K) = \lbrace f\in C^\infty(\mathbb R): \mathrm{supp}(f) \subseteq K\rbrace$. –  Jochen Wengenroth May 17 '13 at 13:47 As Andre suggests: Convolution with smooth bump function with very small support will give you an approximation by a smooth function which however need not have support in $[0,T]$. However, you may first squeeze the support of the given function you want to approximate in order to make its support a compact subset of $(0,T)$. Then the support of the convolution stays in $[0,T]$ if the support of the bump function is small enough.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8849651217460632, "perplexity": 292.6557750048224}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500826025.8/warc/CC-MAIN-20140820021346-00413-ip-10-180-136-8.ec2.internal.warc.gz"}
http://physics.ucr.edu/~wudka/Physics7/Notes_www/node98.html
Next: Gravitational red-shift. Up: Tests of general relativity. Previous: Tests of general relativity. ## Precession of the perihelion of Mercury A long-standing problem in the study of the Solar System was that the orbit of Mercury did not behave as required by Newton's equations. To understand what the problem is let me describe the way Mercury's orbit looks. As it orbits the Sun, this planet follows an ellipse...but only approximately: it is found that the point of closest approach of Mercury to the sun does not always occur at the same place but that it slowly moves around the sun (see Fig. 7.20). This rotation of the orbit is called a precession. The precession of the orbit is not peculiar to Mercury, all the planetary orbits precess. In fact, Newton's theory predicts these effects, as being produced by the pull of the planets on one another. The question is whether Newton's predictions agree with the amount an orbit precesses; it is not enough to understand qualitatively what is the origin of an effect, such arguments must be backed by hard numbers to give them credence. The precession of the orbits of all planets except for Mercury's can, in fact, be understood using Newton;s equations. But Mercury seemed to be an exception. As seen from Earth the precession of Mercury's orbit is measured to be 5600 seconds of arc per century (one second of arc=1/3600 degrees). Newton's equations, taking into account all the effects from the other planets (as well as a very slight deformation of the sun due to its rotation) and the fact that the Earth is not an inertial frame of reference, predicts a precession of 5557 seconds of arc per century. There is a discrepancy of 43 seconds of arc per century. This discrepancy cannot be accounted for using Newton's formalism. Many ad-hoc fixes were devised (such as assuming there was a certain amount of dust between the Sun and Mercury) but none were consistent with other observations (for example, no evidence of dust was found when the region between Mercury and the Sun was carefully scrutinized). In contrast, Einstein was able to predict, without any adjustments whatsoever, that the orbit of Mercury should precess by an extra 43 seconds of arc per century should the General Theory of Relativity be correct. Next: Gravitational red-shift. Up: Tests of general relativity. Previous: Tests of general relativity. Jose Wudka 9/24/1998
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.941008985042572, "perplexity": 467.33758759427434}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257828010.15/warc/CC-MAIN-20160723071028-00011-ip-10-185-27-174.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/104303/matrices-with-entries-in-c-algebra
# Matrices with entries in $C^*$-algebra Let $\mathcal{A}$ be a $C^*$-algebra. Consider vector space of matrices of size $n\times n$ whose entries in $\mathcal{A}$. Denote this vector space $M_{n,n}(\mathcal{A})$. We can define involution on $M_{n,n}(\mathcal{A})$ by equality $$[a_{ij}]^*=[a_{ji}^*],\qquad\text{where}\quad [a_{ij}]\in M_{n,n}(\mathcal{A}).$$ Thus we have an involutive algebra $M_{n,n}(\mathcal{A})$. It is well known that there exist at most one norm on $M_{n,n}(\mathcal{A})$ making it a $C^*$-algebra. This norm does exist. Indeed take universal representation $\pi:\mathcal{A}\to\mathcal{B}(H)$ and define linear injective $^*$-homomorphism $$\Pi:M_{n,n}(\mathcal{A})\to\mathcal{B}\left(\bigoplus\limits_{k=1}^n H\right):[a_{ij}]\mapsto\left((x_1,\ldots,x_n)\mapsto\left(\sum\limits_{j=1}^n\pi(a_{1j})x_j,\ldots,\sum\limits_{j=1}^n\pi(a_{nj})x_j\right)\right)$$ Hence we can define norm on $M_{n,n}(\mathcal{A})$ as $\left\Vert[a_{ij}]\right\Vert_{M_{n,n}(\mathcal{A})}=\Vert\Pi([a_{ij}])\Vert$. At first sight this definition depends on the choice of representation, but in fact it does not. My question This norm on $M_{n,n}(\mathcal{A})$ can be defined internally. Namely $$\Vert[a_{ij}]\Vert_{M_{n,n}(\mathcal{A})}=\sup\left\Vert\sum\limits_{i=1}^n\sum\limits_{j=1}^n x_i a_{ij}y_j^*\right\Vert$$ where supremum is taken over all tuples $\{x_i\}_{i=1}^n\subset\mathcal{A}$, $\{y_i\}_{i=1}^n\subset\mathcal{A}$ such that $\left\Vert\sum\limits_{i=1}^n x_i x_i^*\right\Vert\leq 1$, $\left\Vert\sum\limits_{i=1}^n y_i y_i^*\right\Vert\leq 1$. Is there proof of this fact without usage of structural theorem for $C^*$-algebras, a straightforward proof which can be made by simple checking axioms of $C^*$-algebras? P.S. There is another answer on this question on mathoverflow.net - This doesn't answer the question, but for further reference this fact is proved (using irreducible representations) as Lemma 2.3 (i) in "Norming C*-algebras by C*-subalgebras" by Pop, Sinclair, and Smith. The norm also has the internal characterization $\|A\|=r(\sqrt{A^*A})$, where $r$ denotes the spectral radius, as for any C*-algebra. – Jonas Meyer Jan 31 '12 at 23:00 I think it would be good manners for you to now link to the copy of the question that you have posted on MathOverflow – user16299 Feb 1 '12 at 16:22 In fact, since the question has now been answered on both sites, I suggest that it be closed here. – user16299 Feb 1 '12 at 16:46 Actually, I don't think either should be closed. In many ways, Jonas's answer below is complimentary to mine over at MO, so why not leave them both? – Matthew Daws Feb 1 '12 at 17:08 @Yemon: I do see the logic. I don't feel like I know the community views here as I do over at MO, so I think I won't make any further comments... – Matthew Daws Feb 1 '12 at 20:48 This norm comes from considering $M_n(\mathcal A)$ as acting as operators on the Hilbert C*-module $\mathcal A^n$, and no Hilbert space representation is required. I will try to give a fairly minimal overview of the situation in this special case, and more details can be found in the first chapter of Lance's Hilbert C*-modules: a toolkit for operator algebraists. How straightforward it is depends on your familiarity with these objects. Please feel free to ask for elaboration. Define on the direct sum $\mathcal A^n$ the $\mathcal A$-valued inner product $\langle\cdot,\cdot\rangle:\mathcal A^n\times\mathcal A^n\to\mathcal A$, given by $$\langle (x_i),(y_i)\rangle=\sum_{i=1}^n x_i^*y_i.$$ The norm on $\mathcal A^n$ is $\|(x_i)\|=\sqrt{\|\langle(x_i),(x_i)\rangle\|}$ (the Cauchy-Schwarz inequality for Hilbert C*-modules, Proposition 1.1 on page 3 of Lance, gives one way to see that this is in fact a norm). Let $\mathcal L(\mathcal A^n)$ denote the set of adjointable operators on $\mathcal A^n$. These are the maps $T:\mathcal A^n\to\mathcal A^n$ such that there exists a map $T^*:\mathcal A^n\to\mathcal A^n$ satisfying $\langle T(x_i),(y_i)\rangle=\langle(x_i),T^*(y_i)\rangle$ for all $(x_i),(y_i)\in\mathcal A^n$. With the operator norm, $\mathcal L(\mathcal A^n)$ is a closed subalgebra of the Banach algebra of all bounded operators on the Banach space $\mathcal A^n$, so $\mathcal L(\mathcal A^n)$ is a Banach algebra. With the conjugate linear involutive anti-automorphism $T\mapsto T^*$, it is also a $*$-algebra. A straightforward computation shows that $\|T^*T\|=\|T\|^2$ for all $T$, so $\mathcal L(\mathcal A^n)$ is a C*-algebra. Let $\pi:M_n(\mathcal A)\to\mathcal L(\mathcal A^n)$ be defined by $\pi[a_{ij}](x_i)=\left(\sum\limits_{j=1}^n a_{ij}x_j\right)$; that is, $\pi$ is the action of $M_n(\mathcal A)$ on $\mathcal A^n$ by multiplying matrices with column vectors. The fact that $\pi([a_{ij}]^*)=(\pi[a_{ij}])^*$ shows that the codomain of $\pi$ is appropriate. Since $\pi$ is an injective $*$-homomorphism between $C^*$-algebras, it is isometric (alternatively, this could be used to define the unique C*-norm on $M_n(\mathcal A)$). (Incidentally, $\pi$ is surjective if and only if $\mathcal A$ is unital.) Let's see how this gives the characterization in question of the norm. Let $[a_{ij}]\in M_n(\mathcal A)$. From the definition of the norm on $\mathcal A^n$ and the Cauchy-Schwarz inequality for Hilbert C*-modules, $$\sup\limits_{\|(x_i)\|,\|(y_i)\|\leq 1}\left\Vert\sum\limits_{i=1}^n\sum\limits_{j=1}^n x_i^* a_{ij}y_j\right\Vert=\sup\limits_{\|(x_i)\|,\|(y_i)\|\leq 1}\|\langle (x_i),\pi[a_{ij}](y_i)\rangle\|=\|\pi[a_{ij}]\|=\|[a_{ij}]\|.$$ This is what you have, but with a slightly different appearance due to the convention I used (following Lance) for how the inner product on $\mathcal A^n$ is defined.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.98987877368927, "perplexity": 116.88494283735858}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391766.5/warc/CC-MAIN-20160624154951-00057-ip-10-164-35-72.ec2.internal.warc.gz"}
http://www.r-bloggers.com/harmonic-means-reciprocals-and-ratios-of-random-variables/
# Harmonic means, reciprocals, and ratios of random variables (This article was first published on ExploringDataBlog, and kindly contributed to R-bloggers) In my last few posts, I have considered “long-tailed” distributions whose probability density decays much more slowly than standard distributions like the Gaussian.  For these slowly-decaying distributions, the harmonic mean often turns out to be a much better (i.e., less variable) characterization than the arithmetic mean, which is generally not even well-defined theoretically for these distributions.  Since the harmonic mean is defined as the reciprocal of the mean of the reciprocal values, it is intimately related to the reciprocal transformation.  The main point of this post is to show how profoundly the reciprocal transformation can alter the character of a distribution, for better or worse.   One way that reciprocal transformations sneak into analysis results is through attempts to characterize ratios of random numbers.  The key issue underlying all of these ideas is the question of when the denominator variable in either a reciprocal transformation or a ratio exhibits non-negligible probability in a finite neighborhood of zero.  I discuss transformations in Chapter 12 of Exploring Data in Engineering, the Sciences and Medicine, with a section (12.7) devoted to reciprocal transformations, showing what happens when we apply them to six different distributions: Gaussian, Laplace, Cauchy, beta, Pareto, and lognormal. In the general case, if a random variable x has the density p(x), the distribution g(y) of the reciprocal y = 1/x has the density: g(y) = p(1/y)/y2 As I discuss in greater detail in Exploring Data, the consequence of this transformation is typically (though not always) to convert a well-behaved distribution into a very poorly behaved one.  As a specific example, the plot below shows the effect of the reciprocal transformation on a Gaussian random variable with mean 1 and standard deviation 2.  The most obvious characteristic of this transformed distribution is its strongly asymmetric, bimodal character, but another non-obvious consequence of the reciprocal transformation is that it takes a distribution that is completely characterized by its first two moments into a new distribution with Cauchy-like tails, for which none of the integer moments exist. The implications of the reciprocal transformation for many other distributions are equally non-obvious.  For example, both the badly-behaved Cauchy distribution (no moments exist) and the well-behaved lognormal distribution (all moments exist, but interestingly, do not completely characterize the distribution, as I have discussed in a previous post) are invariant under the reciprocal transformation.  Also, applying the reciprocal transformation to the long-tailed Pareto type I distribution (which exhibits few or no finite moments, depending on its tail decay rate) yields a beta distribution, all of whose moments are finite.  Finally, it is worth noting that the invariance of the Cauchy distribution under the reciprocal transformation lies at the heart of the following result, presented in the book Continuous Univariate Distributions by Johnson, Kotz, and Balakrishnan (Volume 1, 2nd edition, Wiley, 1994, page 319).  They note that if the density of x is positive, continuous, and differentiable at x = 0 – all true for the Gaussian case – the distribution of the harmonic mean of N samples approaches a Cauchy limit as N becomes infinitely large. As noted above, the key issue responsible for the pathological behavior of the reciprocal transformation is the question of whether the original data distribution exhibits nonzero probability of taking on values within a neighborhood around zero.  In particular, note that if x can only assume values larger than some positive lower limit L, it follows that 1/x necessarily lies between 0 and 1/L, which is enough to guarantee that all moments of the transformed distribution exist.  For the Gaussian distribution, even if the mean is large enough and the standard deviation is small enough that the probability of observing values less than some limit L > 0 is negligible, the fact that this probability is not zero means that the moments of any reciprocally-transformed Gaussian distribution are not finite.  As a practical matter, however, reciprocal transformations and related characterizations – like harmonic means and ratios – do become better-behaved as the probability of observing values near zero become negligibly small. To see this point, consider two reciprocally-transformed Gaussian examples.  The first is the one considered above: the reciprocal transformation of a Gaussian random variable with mean 1 and standard deviation 2.  In this case, the probability that x assumes values smaller than or equal to zero is non-negligible.  Specifically, this probability is simply the cumulative distribution function for the distribution evaluated at zero, easily computed in R as approximately 31%: > pnorm(0,mean=1,sd=2) [1] 0.3085375 In contrast, for a Gaussian random variable with mean 1 and standard deviation 0.1, the corresponding probability is negligibly small: > pnorm(0,mean=1,sd=0.1) [1] 7.619853e-24 If we consider the harmonic means of these two examples, we see that the first one is horribly behaved, as all of the results presented here would lead us to expect.  In fact, the qqPlot command in the car package  in R allows us to compute quantile-quantile plots for the Student’s t-distribution with one degree of freedom, corresponding to the Cauchy distribution, yielding the plot shown below.  The Cauchy-like tail behavior expected from the results presented by Johnson, Kotz and Balakrishnan is seen clearly in this Cauchy Q-Q plot, constructed from 1000 harmonic means, each computed from statistically independent samples drawn from a Gaussian distribution with mean 1 and standard deviation 2.  The fact that almost all of the observations fall within the – very wide – 95% confidence interval around the reference line suggest that the Cauchy tail behavior is appropriate here. To further confirm this point, compare the corresponding normal Q-Q plot for the same sequence of harmonic means, shown below.  There, the extreme non-Gaussian character of these harmonic means is readily apparent from the pronounced outliers evident in both the upper and lower tails. In marked contrast, for the second example with the mean of 1 as before but the much smaller standard deviation of 0.1, the harmonic mean is much better behaved, as the normal Q-Q plot below illustrates.  Specifically, this plot is identical in construction to the one above, except it was computed from samples drawn from the second data distribution.  Here, most of the computed harmonic mean values fall within the 95% confidence limits around the Gaussian reference line, suggesting that it is not unreasonable in practice to regard these values as approximately normally distributed, in spite of the pathologies of the reciprocal transformation. One reason the reciprocal transformation is important in practice – particularly in connection with the Gaussian distribution – is that the desire to characterize ratios of uncertain quantities does arise from time to time.  In particular, if we are interested in characterizing the ratio of two averages, the Central Limit Theorem would lead us to expect that, at least approximately, this ratio should behave like the ratio of two Gaussian random variables.  If these component averages are statistically independent, the expected value of the ratio can be re-written as the product of the expected value of the numerator average and the expected value of the reciprocal of the denominator average, leading us directly to the reciprocal Gaussian transformation discussed here.  In fact, if these two averages are both zero mean, it is a standard result that the ratio has a Cauchy distribution (this result is presented in the same discussion from Johnson, Kotz and Balakrishnan noted above).  As in the second harmonic mean example presented above, however, it turns out to be true that if the mean and standard deviation of the denominator variable are such that the probability of a zero or negative denominator are negligible, the distribution of the ratio may be approximated reasonably well as Gaussian.  A very readable and detailed discussion of this fact is given in the paper by George Marsaglia in the May 2006 issue of Journal of Statistical Software. Finally, it is important to note that the “reciprocally-transformed Gaussian distribution” I have been discussing here is not the same as the inverse Gaussian distribution, to which Johnson, Kotz and Balakrishnan devote a 39-page chapter (Chapter 15).  That distribution takes only positive values and exhibits moments of all orders, both positive and negative, and as a consequence, it has the interesting characteristic that it remains well-behaved under reciprocal transformations, in marked contrast to the Gaussian case.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9720470905303955, "perplexity": 368.6655723608658}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400378956.26/warc/CC-MAIN-20141119123258-00100-ip-10-235-23-156.ec2.internal.warc.gz"}
https://pureportal.strath.ac.uk/en/publications/simulation-methods-for-system-reliability-using-the-survival-sign
# Simulation methods for system reliability using the survival signature Edoardo Patelli, Geng Feng, Frank PA Coolen, Tahani Coolen-Maturi Research output: Contribution to journalArticle 25 Citations (Scopus) ## Abstract Recently, the survival signature has been presented as a summary of the structure function which is sufficient for computation of common reliability metrics and has the crucial advantage that it can be applied to systems with components whose failure times are not exchangeable. The survival signature provides a huge reduction in required information, e.g. for its storage, compared to the full structure function, its implementation to larger systems is still difficult in a purely analytical manner and simulations may be required to derive the reliability metrics of interest. Hence, the main question addressed in this paper is whether or not the survival signature provides sufficient information for efficient simulation to derive the system’s failure time distribution. We answer this question in the affirmative by presenting two algorithms for survival signature-based simulation. In addition, we present a third simulation algorithm that can be used in case of repairable components. It turns out that these algorithms are very efficient, beyond the initial advantage of requiring only the survival signature to be available, instead of the full structure function. Original language English 327-337 11 Reliability Engineering and System Safety 167 15 Jun 2017 https://doi.org/10.1016/j.ress.2017.06.018 Published - 30 Nov 2017 ## Keywords • reliability analysis • survival signature • Monte Carlo method • complex systems • multi state components
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8455401659011841, "perplexity": 1027.8904771830364}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141733120.84/warc/CC-MAIN-20201204010410-20201204040410-00316.warc.gz"}