http://mathhelpforum.com/discrete-math/126834-logical-equivalence-help-print.html
# Logical Equivalence Help

• Feb 2nd 2010, 12:30 PM (triathlete)

Hello, I have a problem in my computer science class that goes as follows: "Prove the following using logical proofs (not truth tables): (p --> q) V (p --> r) V p is a tautology."

• Feb 2nd 2010, 12:46 PM (Plato)

Quote, originally posted by triathlete: "Prove the following using logical proofs (not truth tables): (p --> q) V (p --> r) V p is a tautology."

$\begin{gathered} \left( {p \to q} \right) \vee \left( {p \to r} \right) \vee p \hfill \\ \left( {\neg p \vee q} \right) \vee \left( {\neg p \vee r} \right) \vee p \hfill \\ \neg p \vee \left( {q \vee r} \right) \vee p \hfill \\ \left( {\neg p \vee p} \right) \vee \left( {q \vee r} \right) \hfill \\ T \vee \left( {q \vee r} \right) \equiv T \hfill \\ \end{gathered}$

• Feb 2nd 2010, 01:06 PM (novice)

By propositional calculus:

$P$ (hypothesis)

$P\vee (P\rightarrow R)$ (disjunction introduction)

$(P\vee (P\rightarrow R))\vee (P\rightarrow Q)$ (disjunction introduction, and end of proof)
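For anyone who wants to double-check the claim mechanically (separate from the requested proof, since the exercise rules out truth tables as the proof method), a quick brute-force verification in Python:

```python
# Check (p --> q) V (p --> r) V p against all 2^3 truth assignments.
from itertools import product

def implies(a, b):
    return (not a) or b

assert all(implies(p, q) or implies(p, r) or p
           for p, q, r in product([False, True], repeat=3))
print("Tautology: true under all 8 assignments.")
```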
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9401774406433105, "perplexity": 4151.9516890234}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721606.94/warc/CC-MAIN-20161020183841-00516-ip-10-171-6-4.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/15129-sketch-area.html
# Math Help - Sketch Area

1. ## Sketch Area

Here is the question: Sketch the region* enclosed by the given curves. Decide whether to integrate with respect to x or y. Draw a typical approximating rectangle and label its height and width.

y = sqrt(x)
y = (1/3)x
x = 25

*Now here is where I am confused. It says "region" but not "regions". When I graphed this I saw two regions whose areas I could calculate. I am guessing it should be the one region that encompasses all of the given curves, but I am not 100% sure. Thanks!

-qbkr21

[Attached thumbnail]

2. Originally Posted by qbkr21: "Here is the question: Sketch the region* enclosed by the given curves. ..."

Hello, to me it looks like one region which is a little bit "distorted". I've attached a screenshot.

b = 25.33 is the bigger part of the area
c = 29.83 is the complete area

[Attached thumbnail]
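For the record, the two pieces can be checked exactly: y = sqrt(x) and y = (1/3)x intersect at x = 0 and x = 9, with sqrt(x) on top over [0, 9] and (1/3)x on top over [9, 25]. A quick verification with sympy (assuming it is available):

```python
# Verify the two pieces of the region and the total area.
import sympy as sp

x = sp.symbols("x")
small = sp.integrate(sp.sqrt(x) - x/3, (x, 0, 9))    # sqrt(x) on top
big   = sp.integrate(x/3 - sp.sqrt(x), (x, 9, 25))   # x/3 on top
print(small, big, small + big)  # 9/2, 76/3, 179/6  (4.5, 25.33..., 29.83...)
```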
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.888434112071991, "perplexity": 801.2704804446295}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802768561.127/warc/CC-MAIN-20141217075248-00084-ip-10-231-17-201.ec2.internal.warc.gz"}
https://jeremykun.com/tag/coding-theory/
The Codes of Solomon, Reed, and Muller

Last time we defined the Hamming code. We also saw that it meets the Hamming bound, which is a measure of how densely a code can be packed inside an ambient space and still maintain a given distance. This time we'll define the Reed-Solomon code, which optimizes a different bound called the Singleton bound, and then generalize it to a larger class of codes called Reed-Muller codes. In future posts we'll consider the algorithmic issues behind decoding the codes; for now we just care about their existence and optimality properties.

The Singleton bound

Recall that a code $C$ is a set of strings called codewords, and that the parameters of a code $C$ are written $(n,k,d)_q$. Remember $n$ is the length of a codeword, $k = \log_q |C|$ is the message length, $d$ is the minimum distance between any two codewords, and $q$ is the size of the alphabet used for the codewords. Finally, remember that for linear codes our alphabets were either just $\{ 0,1 \}$ where $q=2$, or more generally a finite field $\mathbb{F}_q$ for $q$ a prime power.

One way to motivate the Singleton bound goes like this. We can easily come up with codes for the following parameters. For $(n,n,1)_2$ the identity function works. And to get an $(n,n-1,2)_2$-code we can encode a binary string $x$ by appending the parity bit $\sum_i x_i \mod 2$ to the end (as an easy exercise, verify this has distance 2). An obvious question is: can we generalize this to an $(n, n-d+1, d)_2$-code for any $d$? Perhaps a more obvious question is: why can't we hope for better? A larger $d$ or $k \geq n-d+1$? Because the Singleton bound says so.

Theorem [Singleton 64]: If $C$ is an $(n,k,d)_q$-code, then $k \leq n-d+1$.

Proof. The proof is pleasantly simple. Let $\Sigma$ be your alphabet and look at the projection map $\pi : \Sigma^n \to \Sigma^{k-1}$ which projects $x = (x_1, \dots, x_n) \mapsto (x_1, \dots, x_{k-1})$. Remember that the size of the code is $|C| = q^k$, and because the codomain of $\pi$, i.e. $\Sigma^{k-1}$, has size $q^{k-1} < q^k$, it follows that $\pi$ is not an injective map. In particular, there are two codewords $x,y$ whose first $k-1$ coordinates are equal. Even if all of their remaining coordinates differ, this implies that $d(x,y) \leq n-k+1$, and hence $d \leq n-k+1$. $\square$

It's embarrassing that such a simple argument can prove that one can do no better. There are codes that meet this bound and they are called maximum distance separable (MDS) codes. One might wonder how MDS codes relate to perfect codes, but they are incomparable; there are perfect codes that are not MDS codes, and conversely MDS codes need not be perfect. The Reed-Solomon code is an example of the latter.

The Reed-Solomon Code

Irving Reed (left) and Gustave Solomon (right).

The Reed-Solomon code has a very simple definition, especially for those of you who have read about secret sharing. Given a prime power $q$ and integers $k \leq n \leq q$, the Reed-Solomon code with these parameters is defined by its encoding function $E: \mathbb{F}_q^k \to \mathbb{F}_q^n$ as follows.

1. Generate $\mathbb{F}_q$ explicitly.
2. Pick $n$ distinct elements $\alpha_i \in \mathbb{F}_q$.
3. A message $x \in \mathbb{F}_q^k$ is a list of elements $c_0 \dots c_{k-1}$. Represent the message as a polynomial $m(x) = \sum_j c_j x^j$.
4. The encoding of a message is the tuple $E(m) = (m(\alpha_1), \dots, m(\alpha_n))$. That is, we just evaluate $m(x)$ at our chosen locations $\alpha_i$.

Here's an example when $q=5, n=3, k=3$.
We'll pick the points $1,3,4 \in \mathbb{F}_5$, and let our message be $x = (4,1,2)$, which is encoded as the polynomial $m(x) = 4 + x + 2x^2$. Then the encoding of the message is

$\displaystyle E(m) = (m(1), m(3), m(4)) = (2, 0, 0)$

Decoding the message is a bit more difficult (more on that next time), but for now let's prove the basic facts about this code.

Fact: The Reed-Solomon code is linear. This is just because polynomials of a limited degree form a vector space. Adding polynomials is adding their coefficients, and scaling them is scaling their coefficients. Moreover, the evaluation of a polynomial at a point is a linear map, i.e. it's always true that $m_1(\alpha) + m_2(\alpha) = (m_1 + m_2)(\alpha)$, and scaling the coefficients is no different. So the codewords also form a vector space.

Fact: $d = n - k + 1$, or equivalently the Reed-Solomon code meets the Singleton bound. This follows from a simple fact: any two different single-variable polynomials of degree at most $k-1$ agree on at most $k-1$ points. Indeed, otherwise two such polynomials $f,g$ would give a new polynomial $f-g$ which has more than $k-1$ roots, but the fundamental theorem of algebra (the adaptation for finite fields) says the only polynomial of degree at most $k-1$ with this many roots is the zero polynomial. So the Reed-Solomon code is maximum distance separable. Neat!

One might wonder why one would want good codes with large alphabets. One reason is that with a large alphabet we can interpret a byte as an element of $\mathbb{F}_{256}$ to get error correction on bytes. So if you want to encode some really large stream of bytes (like a DVD) using such a scheme and you get bursts of contiguous errors in small regions (like a scratch), then you can do pretty powerful error correction. In fact, this is more or less the idea behind error correction for DVDs. So I hear. You can read more about the famous applications at Wikipedia.
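Since encoding is nothing but polynomial evaluation, it takes only a few lines. Here is a minimal sketch in Python restricted to prime $q$ (so arithmetic is plain mod-$q$ arithmetic; a prime power would need the explicit field construction from earlier posts), reproducing the example above:

```python
# Reed-Solomon encoding over F_q, q prime: evaluate the message polynomial
# m(x) = c_0 + c_1 x + ... + c_{k-1} x^{k-1} at n fixed points.
def rs_encode(message, points, q):
    """message: coefficients c_0..c_{k-1}; returns (m(a) for a in points)."""
    def m(a):
        return sum(c * pow(a, j, q) for j, c in enumerate(message)) % q
    return tuple(m(a) for a in points)

print(rs_encode([4, 1, 2], [1, 3, 4], 5))  # (2, 0, 0), matching the example
```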
The Reed-Muller code

The Reed-Muller code is a neat generalization of the Reed-Solomon code to multivariable polynomials. The reason they're so useful is not necessarily because they optimize some bound (if they do, I haven't heard of it), but because they specialize to all sorts of useful codes with useful properties. One of these properties is called local decodability, which has big applications in theoretical computer science.

Anyway, before I state the definition let me remind the reader about compact notation for multivariable polynomials. I can represent the variables $x_1, \dots, x_n$ used in the polynomial as a vector $\mathbf{x}$, and likewise a monomial $x_1^{\alpha_1} x_2^{\alpha_2} \dots x_n^{\alpha_n}$ by a "vector power" $\mathbf{x}^\alpha$, where $\sum_i \alpha_i = d$ is the degree of that monomial, and you'd write an entire polynomial as $\sum_\alpha c_\alpha \mathbf{x}^{\alpha}$ where $\alpha$ ranges over all exponents you want.

Definition: Let $m, l$ be positive integers and $q > l$ be a prime power. The Reed-Muller code with parameters $m,l,q$ is defined as follows:

1. The message is the list of coefficients of a polynomial of degree at most $l$ in $m$ variables, $f(\mathbf{x}) = \sum_{\alpha} c_\alpha \mathbf{x}^\alpha$.
2. You encode a message $f(\mathbf{x})$ as the tuple of all polynomial evaluations $(f(x))_{x \in \mathbb{F}_q^m}$.

Here the actual parameters of the code are $n=q^m$, and $k = \binom{m+l}{m}$, the number of possible coefficients. Finally $d = (1 - l/q)n$, and we can prove this in the same way as we did for the Reed-Solomon code, using a beefed-up fact about the number of roots of a multivariate polynomial:

Fact: Two distinct multivariate polynomials of degree at most $l$ over a finite field $\mathbb{F}_q$ agree on at most an $l/q$ fraction of $\mathbb{F}_q^m$.

For messages of desired length $k$, a clever choice of parameters gives a good code. Let $m = \log k / \log \log k$, $q = \log^2 k$, and pick $l$ such that $\binom{m+l}{m} = k$. Then the Reed-Muller code has polynomial length $n = q^m = k^2$, and because $l = o(q)$ we get that the distance of the code is asymptotically $d = (1-o(1))n$, i.e. it tends to $n$.

A fun fact about Reed-Muller codes: they were apparently used on the Voyager space missions to relay image data back to Earth.
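And the Reed-Muller encoding is just more of the same. Here is a toy sketch (again restricted to prime $q$, with monomials ordered as Python's itertools.product emits them; these choices are mine, not canonical):

```python
# Toy Reed-Muller encoding over F_q, q prime: evaluate a polynomial of
# degree at most l in m variables at every point of F_q^m (so n = q^m).
from itertools import product

def monomials(m, l):
    """All exponent vectors (a_1, ..., a_m) with a_1 + ... + a_m <= l."""
    return [a for a in product(range(l + 1), repeat=m) if sum(a) <= l]

def eval_monomial(x, a, q):
    r = 1
    for xi, ai in zip(x, a):
        r = r * pow(xi, ai, q) % q
    return r

def rm_encode(coeffs, m, l, q):
    mons = monomials(m, l)
    assert len(coeffs) == len(mons)  # k = C(m+l, m) coefficients
    def f(x):
        return sum(c * eval_monomial(x, a, q) for c, a in zip(coeffs, mons)) % q
    return [f(x) for x in product(range(q), repeat=m)]

# m=2, l=1, q=3: the monomials are 1, x2, x1, so f(x1, x2) = 1 + 2*x2 + x1.
print(rm_encode([1, 2, 1], 2, 1, 3))  # n = 9 symbols, one per point of F_3^2
```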
The Way Forward

So we defined Reed-Solomon and Reed-Muller codes, but we didn't really do any programming yet. The reason is that the encoding algorithms are very straightforward. If you've been following this blog you'll know we have already written code to explicitly represent polynomials over finite fields, and extending that code to multivariable polynomials, at least for the sake of encoding the Reed-Muller code, is straightforward.

The real interesting algorithms come when you're trying to decode. For example, in the Reed-Solomon code we'd take as input a bunch of points in a plane (over a finite field), only some of which are consistent with the underlying polynomial that generated them, and we have to reconstruct the unknown polynomial exactly. Even worse, for Reed-Muller we have to do it with many variables! We'll see exactly how to do that and produce working code next time. Until then!

Hamming's Code

Or how to detect and correct errors

Last time we made a quick tour through the main theorems of Claude Shannon, which essentially solved the following two problems about communicating over a digital channel.

1. What is the best encoding for information when you are guaranteed that your communication channel is error free?
2. Are there any encoding schemes that can recover from random noise introduced during transmission?

The answers to these questions were purely mathematical theorems, of course. But the interesting shortcoming of Shannon's accomplishment was that his solution for the noisy coding problem (2) was nonconstructive. The question remains: can we actually come up with efficiently computable encoding schemes? The answer is yes! Marcel Golay was the first to discover such a code in 1949 (just a year after Shannon's landmark paper), and Golay's construction was published on a single page! We're not going to define Golay's code in this post, but we will mention its interesting status in coding theory later. The next year Richard Hamming discovered another simpler and larger family of codes, and went on to do some of the major founding work in coding theory. For his efforts he won a Turing Award and played a major part in bringing about the modern digital age.

So we'll start with Hamming's codes. We will assume some basic linear algebra knowledge, as detailed in our first linear algebra primer. We will also use some basic facts about polynomials and finite fields, though the lazy reader can just imagine everything as binary $\{ 0,1 \}$ and still grok the important stuff.

Richard Hamming, inventor of Hamming codes.

What is a code?

The formal definition of a code is simple: a code $C$ is just a subset of $\{ 0,1 \}^n$ for some $n$. Elements of $C$ are called codewords.

This is deceptively simple, but here's the intuition. Say we know we want to send messages of length $k$, so that our messages are in $\{ 0,1 \}^k$. Then we're really viewing a code $C$ as the image of some encoding function $\textup{Enc}: \{ 0,1 \}^k \to \{ 0,1 \}^n$. We can define $C$ by just describing what the set is, or we can define it by describing the encoding function. Either way, we will make sure that $\textup{Enc}$ is an injective function, so that no two messages get sent to the same codeword. Then $|C| = 2^k$, and we can call $k = \log |C|$ the message length of $C$ even if we don't have an explicit encoding function.

Moreover, while in this post we'll always work with $\{ 0,1 \}$, the alphabet of your encoded messages could be an arbitrary set $\Sigma$. So then a code $C$ would be a subset of tuples in $\Sigma^n$, and we would call $q = |\Sigma|$.

So we have these parameters $n, k, q$, and we need one more. This is the minimum distance of a code, which we'll denote by $d$. This is defined to be the minimum Hamming distance between all distinct pairs of codewords, where by Hamming distance I just mean the number of coordinates in which two tuples differ. Recalling the remarks we made last time about Shannon's nonconstructive proof, when we decode an encoded message $y$ (possibly with noisy bits) we look for the (unencoded) message $x$ whose encoding $\textup{Enc}(x)$ is as close to $y$ as possible. This will only work in the worst case if all pairs of codewords are sufficiently far apart. Hence we track the minimum distance of a code.

So coding theorists turn this mess of parameters into notation.

Definition: A code $C$ is called an $(n, k, d)_q$-code if

• $C \subset \Sigma^n$ for some alphabet $\Sigma$,
• $k = \log |C|$,
• $C$ has minimum distance $d$, and
• the alphabet $\Sigma$ has size $q$.

The basic goals of coding theory are:

1. For which values of these four parameters do codes exist?
2. Fixing any three parameters, how can we optimize the other one?

In this post we'll see how simple linear-algebraic constructions can give optima for one of these problems, optimizing $k$ for $d=3$, and we'll state a characterization theorem for optimizing $k$ for a general $d$. Next time we'll continue with a second construction that optimizes a different bound called the Singleton bound.

Linear codes and the Hamming code

A code is called linear if it can be identified with a linear subspace of some finite-dimensional vector space. In this post all of our vector spaces will be $\{ 0,1 \}^n$, that is, tuples of bits under addition mod 2. But you can do the same constructions with any finite scalar field $\mathbb{F}_q$ for a prime power $q$, i.e. have your vector space be $\mathbb{F}_q^n$. We'll go back and forth between describing a binary code $q=2$ over $\{ 0,1 \}$ and a code in $\mathbb{F}_q^n$. So to say a code is linear means:

• The zero vector is a codeword.
• The sum of any two codewords is a codeword.
• Any scalar multiple of a codeword is a codeword.

Linear codes are the simplest kinds of codes, but already they give a rich variety of things to study. The benefit of linear codes is that you can describe them in a lot of different and useful ways besides just describing the encoding function. We'll use two that we define here. The idea is simple: you can describe everything about a linear subspace by giving a basis for the space.
Definition: A generator matrix of an $(n,k,d)_q$-code $C$ is a $k \times n$ matrix $G$ whose rows form a basis for $C$.

There are a lot of equivalent generator matrices for a linear code (we'll come back to this later), but the main benefit is that having a generator matrix allows one to encode messages $x \in \{0,1 \}^k$ by left multiplication $xG$. Intuitively, we can think of the bits of $x$ as describing the coefficients of the chosen linear combination of the rows of $G$, which uniquely describes an element of the subspace. Note that because a $k$-dimensional subspace of $\{ 0,1 \}^n$ has $2^k$ elements, we're not abusing notation by calling $k = \log |C|$ both the message length and the dimension.

For the second description of $C$, we'll remind the reader that every linear subspace $C$ has a unique orthogonal complement $C^\perp$, which is the subspace of vectors that are orthogonal to vectors in $C$.

Definition: Let $H^T$ be a generator matrix for $C^\perp$. Then $H$ is called a parity check matrix.

Note $H$ has the basis for $C^\perp$ as columns. This means it has dimensions $n \times (n-k)$. Moreover, it has the property that $x \in C$ if and only if the left multiplication $xH = 0$. Having zero dot product with all columns of $H$ characterizes membership in $C$.

The benefit of having a parity check matrix is that you can do efficient error detection: just compute $yH$ on your received message $y$, and if it's nonzero there was an error! What if there were so many errors, and just the right errors, that $y$ coincided with a different codeword than the one it started as? Then you're screwed. In other words, the parity check matrix is only guaranteed to detect errors if you have fewer errors than the minimum distance of your code.

So that raises an obvious question: if you give me the generator matrix of a linear code, can I compute its minimum distance? It turns out that this problem is NP-hard in general. In fact, you can show that this is equivalent to finding the smallest linearly dependent set of rows of the parity check matrix, and it is easier to see why such a problem might be hard. But if you construct your codes cleverly enough you can compute their distance properties with ease.

Before we do that, one more definition and a simple proposition about linear codes. The Hamming weight of a vector $x$, denoted $wt(x)$, is the number of nonzero entries in $x$.

Proposition: The minimum distance of a linear code $C$ is the minimum Hamming weight over all nonzero vectors $x \in C$.

Proof. Consider a nonzero $x \in C$. On one hand, the zero vector is a codeword and $wt(x)$ is by definition the Hamming distance between $x$ and zero, so it is an upper bound on the minimum distance. In fact, it's also a lower bound: if $x,y$ are two distinct codewords, then $x-y$ is a nonzero codeword and $wt(x-y)$ is the Hamming distance between $x$ and $y$. $\square$

So now we can define our first code, the Hamming code. It will be an $(n, k, 3)_2$-code. The construction is quite simple. We have fixed $d=3, q=2$, and we will also fix $l = n-k$. One can think of this as fixing $n$ and maximizing $k$, but it will only work for $n$ of a special form. We'll construct the Hamming code by describing a parity-check matrix $H$. In fact, we're going to see what conditions the minimum distance $d=3$ imposes on $H$, and find out that those conditions are actually sufficient to get $d=3$.

We'll start with 2. If we want to ensure $d \geq 2$, then we need it to be the case that no nonzero vector of Hamming weight 1 is a codeword.
Indeed, if $e_i$ is a vector with all zeros except a one in position $i$, then $e_i H = h_i$ is the $i$-th row of $H$. We need $e_i H \neq 0$, so this imposes the condition that no row of $H$ can be zero. It's easy to see that this is sufficient for $d \geq 2$.

Likewise for $d \geq 3$, given a vector $y = e_i + e_j$ for some positions $i \neq j$, the product $yH = h_i + h_j$ must not be zero. But because our sums are mod 2, saying that $h_i + h_j \neq 0$ is the same as saying $h_i \neq h_j$. Again it's an if and only if. So we have the two conditions.

• No row of $H$ may be zero.
• All rows of $H$ must be distinct.

That is, any parity check matrix with those two properties defines a distance 3 linear code. The only question that remains is how large $n$ can be if the vectors have length $n-k = l$. That's just the number of distinct nonzero binary strings of length $l$, which is $2^l - 1$. Picking any way to arrange these strings as the rows of a matrix (say, in lexicographic order) gives you a good parity check matrix.

Theorem: For every $l > 0$, there is a $(2^l - 1, 2^l - l - 1, 3)_2$-code called the Hamming code.

Since the Hamming code has distance 3, we can always detect if at most a single error occurs. Moreover, we can correct a single error using the Hamming code. If $x \in C$ and $wt(e) = 1$ is an error bit in position $i$, then the incoming message would be $y = x + e$. Now compute $yH = xH + eH = 0 + eH = h_i$ and flip bit $i$ of $y$. That is, whichever row of $H$ you get tells you the index of the error, so you can flip the corresponding bit and correct it. If you order the rows lexicographically like we said, then $h_i = i$ as a binary number. Very slick.

Before we move on, we should note one interesting feature of linear codes.

Definition: A code is called systematic if it can be realized by an encoding function that appends some number $n-k$ of "check bits" to the end of each message.

The interesting feature is that all linear codes are systematic. The reason is as follows. The generator matrix $G$ of a linear code has as rows a basis for the code as a linear subspace. We can perform Gaussian elimination on $G$ and get a new generator matrix that looks like $[I \mid A]$ where $I$ is the identity matrix of the appropriate size and $A$ is some junk. The point is that encoding using this generator matrix leaves the message unchanged, and adds a bunch of bits to the end that are determined by $A$. It's a different encoding function on $\{ 0,1\}^k$, but it has the same image in $\{ 0,1 \}^n$, i.e. the code is unchanged. Gaussian elimination just performed a change of basis.

If you work out the parameters of the Hamming code, you'll see that it is a systematic code which adds $\Theta(\log n)$ check bits to a message, and we're able to correct a single error in this code. An obvious question is whether this is necessary. Could we get away with adding fewer check bits? The answer is no, and a simple "information theoretic" argument shows this. A single index out of $n$ requires $\log n$ bits to describe, and being able to correct a single error is like identifying a unique index. Without logarithmically many bits, you just don't have enough information.
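Here's a small sanity check of all of this in Python (a sketch of my own; the code is found by brute force rather than via a generator matrix): build $H$ for $l=3$ with row $i$ equal to $i$ in binary, corrupt a codeword, and watch the syndrome name the flipped bit. The last line also confirms numerically the "perfect" property discussed in the next section.

```python
# The (7, 4, 3) Hamming code from its parity check matrix H, whose rows
# are the numbers 1..7 written in binary (l = 3, n = 2^l - 1 = 7).
from itertools import product

l, n = 3, 7
H = [[(i >> b) & 1 for b in range(l - 1, -1, -1)] for i in range(1, n + 1)]

def syndrome(y):
    """y*H over F_2: a vector of length l."""
    return tuple(sum(y[i] * H[i][b] for i in range(n)) % 2 for b in range(l))

# Brute-force the code: all y with zero syndrome. |C| = 2^4 = 16, so k = 4.
code = [y for y in product([0, 1], repeat=n) if syndrome(y) == (0,) * l]
assert len(code) == 16

# Corrupt bit i of a codeword; the syndrome is row i of H, i.e. i+1 in binary.
x, i = list(code[5]), 4
x[i] ^= 1
s = syndrome(x)
assert s == tuple(H[i])                    # the syndrome names the error
x[int("".join(map(str, s)), 2) - 1] ^= 1   # flip that bit back
assert tuple(x) == code[5]

# Perfect: the Hamming bound 2^k * V_n(1) <= 2^n holds with equality.
assert 16 * (1 + n) == 2 ** n
```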
The Hamming bound and perfect codes

One nice fact about Hamming codes is that they optimize a natural problem: the problem of maximizing $d$ given a fixed choice of $n$, $k$, and $q$. To state this, let $V_n(r)$ denote the volume of a ball of radius $r$ in the space $\mathbb{F}_2^n$. I.e., if you fix any string (it doesn't matter which) $x$, then $V_n(r)$ is the size of the set $\{ y : d(x,y) \leq r \}$, where $d(x,y)$ is the Hamming distance.

There is a theorem called the Hamming bound, which describes a limit on how many disjoint balls of radius $r$ can be packed inside $\mathbb{F}_2^n$.

Theorem: If an $(n,k,d)_2$-code exists, then

$\displaystyle 2^k V_n \left ( \left \lfloor \frac{d-1}{2} \right \rfloor \right ) \leq 2^n$

Proof. The proof is quite simple. To say a code $C$ has distance $d$ means that for every codeword $x \in C$ there is no other codeword within Hamming distance $d-1$ of $x$. In other words, the balls centered around any two codewords $x,y$ of radius $r = \lfloor (d-1)/2 \rfloor$ are disjoint. The floor handles odd $d$; e.g., when $d=3$ you need balls of radius 1 to guarantee no overlap. Now $|C| = 2^k$, so the total number of strings covered by all these balls is the left-hand side of the expression. But there are at most $2^n$ strings in $\mathbb{F}_2^n$, establishing the desired inequality. $\square$

Now a code is called perfect if it actually meets the Hamming bound exactly. As you probably guessed, the Hamming codes are perfect codes. It's not hard to prove this, and I'm leaving it as an exercise to the reader.

The obvious follow-up question is whether there are any other perfect codes. The answer is yes, some of which are nonlinear. But some of them are "trivial." For example, when $d=1$ you can just use the identity encoding to get the code $C = \mathbb{F}_2^n$. You can also just have a code which consists of a single codeword. There are also some codes that encode by repeating the message multiple times. These are called "repetition codes," and all three of these examples are called trivial (as a definition). Now there are some nontrivial and nonlinear perfect codes I won't describe here, but here is the nice characterization theorem.

Theorem [van Lint '71, Tietavainen '73]: Let $C$ be a nontrivial perfect $(n,k,d)_q$-code. Then the parameters must either be those of a Hamming code, or one of the two:

• A $(23, 12, 7)_2$-code
• A $(11, 6, 5)_3$-code

The last two examples are known as the binary and ternary Golay codes, respectively, which are also linear. In other words, every possible set of parameters for a perfect code can be realized as one of these three kinds of linear codes. So this theorem was a big deal in coding theory. The Hamming and Golay codes were both discovered within a year of each other, in 1949 and 1950, but the nonexistence of other perfect linear codes was open for twenty more years. This wrapped up a very neat package.

Next time we'll discuss the Singleton bound, which optimizes for a different quantity and is incomparable with perfect codes. We'll define the Reed-Solomon codes and show they optimize this bound. These codes are particularly famous for being the error correcting codes used in DVDs. We'll then discuss the algorithmic issues surrounding decoding, and more recent connections to complexity theory. Until then!

A Proofless Introduction to Information Theory

There are two basic problems in information theory that are very easy to explain. Two people, Alice and Bob, want to communicate over a digital channel over some long period of time, and they know the probability that certain messages will be sent ahead of time. For example, English language sentences are more likely than gibberish, and "Hi" is much more likely than "asphyxiation." The problems are:
1. Say communication is very expensive. Then the problem is to come up with an encoding scheme for the messages which minimizes the expected length of an encoded message and guarantees the ability to unambiguously decode a message. This is called the noiseless coding problem.

2. Say communication is not expensive, but error prone. In particular, each bit $i$ of your message is erroneously flipped with some known probability $p$, and all the errors are independent. Then the question is: how can one encode one's messages so as to guarantee (with high probability) the ability to decode any sent message? This is called the noisy coding problem.

There are actually many models of "communication with noise" that generalize (2), such as models based on Markov chains. We are not going to cover them here.

Here is a simple example for the noiseless problem. Say you are just sending binary digits as your messages, and you know that the string "00000000" (eight zeros) occurs half the time, and all other eight-bit strings occur equally likely in the other half. It would make sense, then, to encode the "eight zeros" string as a 0, and prefix all other strings with a 1 to distinguish them from zero. You would save on average $7 \cdot 1/2 + (-1) \cdot 1/2 = 3$ bits in every message.

One amazing thing about these two problems is that they were posed and solved in the same paper by Claude Shannon in 1948. One byproduct of his work was the notion of entropy, which in this context measures the "information content" of a message, or the expected "compressibility" of a single bit under the best encoding. For the extremely dedicated reader of this blog, note this differs from Kolmogorov complexity in that we're not analyzing the compressibility of a string by itself, but rather when compared to a distribution. So really we should think of (the domain of) the distribution as being compressed, not the string.

Claude Shannon. Image credit: Wikipedia

Entropy and noiseless encoding

Before we can state Shannon's theorems we have to define entropy.

Definition: Suppose $D$ is a distribution on a finite set $X$, and I'll use $D(x)$ to denote the probability of drawing $x$ from $D$. The entropy of $D$, denoted $H(D)$, is defined as

$H(D) = \sum_{x \in X} D(x) \log \frac{1}{D(x)}$

It is strange to think about this sum in the abstract, so let's suppose $D$ is a biased coin flip with bias $0 \leq p \leq 1$ of landing heads. Then we can plot the entropy as a function of $p$.

[Plot of the binary entropy function; image source: Wikipedia]

The horizontal axis is the bias $p$, and the vertical axis is the value of $H(D)$, which with some algebra is $- p \log p - (1-p) \log (1-p)$. From the graph above we can see that the entropy is maximized when $p=1/2$ and minimized at $p=0, 1$. You can verify all of this with calculus, and you can prove that the uniform distribution maximizes entropy in general as well.

So what is this saying? High entropy measures how incompressible something is, and low entropy gives us lots of compressibility. Indeed, if our message consisted of the results of 10 such coin flips, and $p$ was close to 1, we would be able to compress a lot by encoding strings with lots of 1's using few bits. On the other hand, if $p=1/2$ we couldn't get any compression at all. All strings would be equally likely.
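To put numbers to the "eight zeros" example from earlier (my arithmetic, not from the original post): that distribution has entropy just shy of 5 bits, and the ad hoc prefix scheme above has expected length exactly $1/2 \cdot 1 + 1/2 \cdot 9 = 5$ bits per message, so by the theorem stated next it is essentially optimal.

```python
# Entropy H(D) = sum_x D(x) * log2(1/D(x)), applied to the biased coin
# and to the "eight zeros" message distribution from the text.
from math import log2

def entropy(dist):
    return sum(p * log2(1 / p) for p in dist if p > 0)

print(entropy([0.5, 0.5]))     # fair coin: 1.0
print(entropy([0.9, 0.1]))     # biased coin: ~0.469
d = [0.5] + [0.5 / 255] * 255  # eight zeros half the time, rest uniform
print(entropy(d))              # ~4.997 bits
print(0.5 * 1 + 0.5 * 9)       # expected length of the ad hoc scheme: 5.0
```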
Shannon's famous theorem shows that the entropy of the distribution is actually all that matters. Some quick notation: $\{ 0,1 \}^*$ is the set of all binary strings.

Theorem (Noiseless Coding Theorem) [Shannon 1948]: For every finite set $X$ and distribution $D$ over $X$, there are encoding and decoding functions $\textup{Enc}: X \to \{0,1 \}^*, \textup{Dec}: \{ 0,1 \}^* \to X$ such that

1. The encoding/decoding actually works, i.e. $\textup{Dec}(\textup{Enc}(x)) = x$ for all $x$.
2. The expected length of an encoded message is between $H(D)$ and $H(D) + 1$.

Moreover, no encoding scheme can do better.

Item 2 and the last sentence are the magical parts. In other words, if you know your distribution over messages, you know precisely how long to expect your messages to be. And you know that you can't hope to do any better! As the title of this post says, we aren't going to give a proof here. Wikipedia has a proof if you're really interested in the details.

Noisy Coding

The noisy coding problem is more interesting because in a certain sense (one that was not settled by Shannon) it is still being studied today in the field of coding theory. The interpretation of the noisy coding problem is that you want to be able to recover from white noise errors introduced during transmission. The concept is called error correction. To restate what we said earlier, we want to recover from error with probability asymptotically close to 1, where the probability is over the errors.

It should be intuitively clear that you can't do so without your encoding "blowing up" the length of the messages. Indeed, if your encoding does not blow up the message length then a single error will confound you, since many valid messages would differ by only a single bit. So the question is: does such an encoding exist, and if so, how much do we need to blow up the message length? Shannon's second theorem answers both questions.

Theorem (Noisy Coding Theorem) [Shannon 1948]: For any constant noise rate $p < 1/2$, there is an encoding scheme $\textup{Enc} : \{ 0,1 \}^k \to \{0,1\}^{ck}, \textup{Dec} : \{ 0,1 \}^{ck} \to \{ 0,1\}^k$ with the following property. If $x$ is the message sent by Alice, and $y$ is the message received by Bob (i.e. $\textup{Enc}(x)$ with random noise), then $\Pr[\textup{Dec}(y) = x] \to 1$ as a function of $n=ck$. In addition, if we denote by $H(p)$ the entropy of the distribution of an error on a single bit, then choosing any $c > \frac{1}{1-H(p)}$ guarantees the existence of such an encoding scheme, and no scheme exists for any smaller $c$.

This theorem formalizes a "yes" answer to the noisy coding problem, but moreover it characterizes the blowup needed for such a scheme to exist. The deep fact is that it only depends on the noise rate.
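For a feel of the numbers (a quick computation from the theorem's formula; the particular values of $p$ are just examples), the required blowup grows quickly as the noise rate approaches $1/2$:

```python
# Minimal blowup factor c > 1/(1 - H(p)) from the noisy coding theorem.
from math import log2

def H(p):  # binary entropy of a single-bit error with flip probability p
    return -p * log2(p) - (1 - p) * log2(1 - p)

for p in [0.01, 0.1, 0.25]:
    print(f"p = {p}: H(p) = {H(p):.3f}, need c > {1 / (1 - H(p)):.3f}")
```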
A word about the proof: it's probabilistic. That is, Shannon proved such an encoding scheme exists by picking $\textup{Enc}$ to be a random function (!). Then $\textup{Dec}(y)$ finds (nonconstructively) the string $x$ such that the number of bits differing between $\textup{Enc}(x)$ and $y$ is minimized. This "number of bits that differ" measure is called the Hamming distance. He then showed, using relatively standard probability tools, that this scheme has the needed properties with high probability, the implication being that some scheme has to exist for such a probability to even be positive. The sharp threshold for $c$ takes a bit more work. If you want the details, check out the first few lectures of Madhu Sudan's MIT class.

The non-algorithmic nature of his solution is what opened the door to more research. The question moved past "Are there any encodings that work?" to the more interesting "What is the algorithmic cost of constructing such an encoding?" It became a question of complexity, not computability. Moreover, the guarantees people wanted were strengthened to worst-case guarantees. In other words, if I can guarantee at most 12 errors, is there an encoding scheme that will allow me to always recover the original message, and not just with high probability? One can imagine that if your message contains nuclear codes or your bank balance, you'd definitely want to have 100% recovery ability.

Indeed, two years later Richard Hamming spawned the theory of error correcting codes and defined codes that can always correct a single error. This theory has expanded and grown over the last sixty years, and these days the algorithmic problems of coding theory have deep connections to most areas of computer science, including learning theory, cryptography, and quantum computing. We'll cover Hamming's basic codes next time, and then move on to Reed-Solomon codes and others. Until then!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 312, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9040562510490417, "perplexity": 364.2439793844788}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687324.6/warc/CC-MAIN-20170920142244-20170920162244-00302.warc.gz"}
https://www.administrator.de/frage/standardprogramme-server-2008-dateitypen-verkn%C3%BCpfen-109790.html
SOLVED

# Default programs on Server 2008 (associating file types)

## Hello IT world,

the problem is as follows: one Terminal Server 2008. When users open a .jpg, Paint opens by default. If you install IrfanView in auto-install mode, it only gets set as the default program for the user who ran the installer. So I checked Control Panel, "Default Programs" --> that is also per-user only (Vista has a fourth, machine-wide setting there). So my question: how can I set a default program for a certain file extension across all users?

I already poked around in HKEY_CLASSES_ROOT and changed things, but it doesn't propagate to the other users. So I compared Regshot snapshots taken before and after the IrfanView installation:

Keys added:

HKLM\SOFTWARE\Classes\Applications\i_view32.exe
HKLM\SOFTWARE\Classes\Applications\i_view32.exe\shell
HKLM\SOFTWARE\Classes\Applications\i_view32.exe\shell\open
HKLM\SOFTWARE\Classes\Applications\i_view32.exe\shell\open\command
HKLM\SOFTWARE\Classes\IrfanView
HKLM\SOFTWARE\Classes\IrfanView\shell
HKLM\SOFTWARE\Classes\IrfanView\shell\open
HKLM\SOFTWARE\Classes\IrfanView\shell\open\command
HKLM\SOFTWARE\Classes\IrfanView.jpg
HKLM\SOFTWARE\Classes\IrfanView.jpg\DefaultIcon
HKLM\SOFTWARE\Classes\IrfanView.jpg\shell
HKLM\SOFTWARE\Classes\IrfanView.jpg\shell\open
HKLM\SOFTWARE\Classes\IrfanView.jpg\shell\open\command
HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\IrfanView
HKLM\SOFTWARE\IrfanView
HKLM\SOFTWARE\IrfanView\Capabilities
HKLM\SOFTWARE\IrfanView\Capabilities\FileAssociations
HKLM\SOFTWARE\IrfanView\shell
HKLM\SOFTWARE\IrfanView\shell\open
HKLM\SOFTWARE\IrfanView\shell\open\command
HKU\S-1-5-21-2780537013-3993079493-2141789299-1142\Software\Classes\Local Settings\Software\Microsoft\Windows\Shell\Bags\88
HKU\S-1-5-21-2780537013-3993079493-2141789299-1142\Software\Classes\Local Settings\Software\Microsoft\Windows\Shell\Bags\88\Shell
HKU\S-1-5-21-2780537013-3993079493-2141789299-1142\Software\Classes\Local Settings\Software\Microsoft\Windows\Shell\Bags\88\Shell\{7D49D726-3C21-4F05-99AA-FDC2C9474656}
HKU\S-1-5-21-2780537013-3993079493-2141789299-1142_Classes\Local Settings\Software\Microsoft\Windows\Shell\Bags\88
HKU\S-1-5-21-2780537013-3993079493-2141789299-1142_Classes\Local Settings\Software\Microsoft\Windows\Shell\Bags\88\Shell
HKU\S-1-5-21-2780537013-3993079493-2141789299-1142_Classes\Local Settings\Software\Microsoft\Windows\Shell\Bags\88\Shell\{7D49D726-3C21-4F05-99AA-FDC2C9474656}

Values added: 463

HKLM\SOFTWARE\Classes\.jpe\OpenWithProgids\IrfanView.JPG: ""
HKLM\SOFTWARE\Classes\.jpeg\OpenWithProgids\IrfanView.JPG: ""
HKLM\SOFTWARE\Classes\.jpg\OpenWithProgids\IrfanView.jpg: ""
HKLM\SOFTWARE\Classes\IrfanView.jpg\shell\open\command\: ""C:\Program Files (x86)\IrfanView\i_view32.exe" "%1""
HKLM\SOFTWARE\Classes\IrfanView.jpg\DefaultIcon\: "C:\Program Files (x86)\IrfanView\i_view32.exe,0"
HKLM\SOFTWARE\Classes\IrfanView.jpg\: "IrfanView JPG File"
HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\IrfanView\: ""
HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\IrfanView\DisplayName: "IrfanView (remove only)"
HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\IrfanView\UninstallString: "C:\Program Files (x86)\IrfanView\iv_uninstall.exe"
HKLM\SOFTWARE\IrfanView\Capabilities\FileAssociations\.JPG: "IrfanView.JPG"
HKLM\SOFTWARE\IrfanView\shell\open\command\: ""C:\Program Files (x86)\IrfanView\i_view32.exe""
HKLM\SOFTWARE\IrfanView\Capabilities\ApplicationDescription:
"Picture viewer, editor and converter - one of the most popular viewers worldwide" HKU\S-1-5-21-2780537013-3993079493-21417 HKU\S-1-5-21-2780537013-3993079493-2141789299-1142\Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts\.jpe\UserChoice\Progid: "IrfanView.JPG" HKU\S-1-5-21-2780537013-3993079493-2141789299-1142\Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts\.jpeg\UserChoice\Progid: "IrfanView.JPG" HKU\S-1-5-21-2780537013-3993079493-2141789299-1142\Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts\.jpg\UserChoice\Progid: "IrfanView.JPG" HKU\S-1-5-21-2780537013-3993079493-21417 HKU\S-1-5-21-2780537013-3993079493-2141789299-1142\Software\Classes\Local Settings\Software\Microsoft\Windows\Shell\Bags\88\Shell\{7D49D726-3C21-4F05-99AA-FDC2C9474656}\Rev: 0x00000000 HKU\S-1-5-21-2780537013-3993079493-2141789299-1142\Software\Classes\Local Settings\Software\Microsoft\Windows\Shell\Bags\88\Shell\{7D49D726-3C21-4F05-99AA-FDC2C9474656}\FFlags: 0x40200001 HKU\S-1-5-21-2780537013-3993079493-2141789299-1142\Software\Classes\Local Settings\Software\Microsoft\Windows\Shell\Bags\88\Shell\{7D49D726-3C21-4F05-99AA-FDC2C9474656}\HotKey: 0x00000000 HKU\S-1-5-21-2780537013-3993079493-2141789299-1142\Software\Classes\Local Settings\Software\Microsoft\Windows\Shell\Bags\88\Shell\{7D49D726-3C21-4F05-99AA-FDC2C9474656}\Buttons: 0x00000000 HKU\S-1-5-21-2780537013-3993079493-2141789299-1142\Software\Classes\Local Settings\Software\Microsoft\Windows\Shell\Bags\88\Shell\{7D49D726-3C21-4F05-99AA-FDC2C9474656}\Vid: "{137E7700-3573-11CF-AE69-08002B2E1262}" HKU\S-1-5-21-2780537013-3993079493-2141789299-1142\Software\Classes\Local Settings\Software\Microsoft\Windows\Shell\Bags\88\Shell\{7D49D726-3C21-4F05-99AA-FDC2C9474656}\Mode: 0x00000004 HKU\S-1-5-21-2780537013-3993079493-2141789299-1142\Software\Classes\Local Settings\Software\Microsoft\Windows\Shell\Bags\88\Shell\{7D49D726-3C21-4F05-99AA-FDC2C9474656}\ScrollPos1024x768(1).x: 0x00000000 HKU\S-1-5-21-2780537013-3993079493-2141789299-1142\Software\Classes\Local Settings\Software\Microsoft\Windows\Shell\Bags\88\Shell\{7D49D726-3C21-4F05-99AA-FDC2C9474656}\ScrollPos1024x768(1).y: 0x00000000 HKU\S-1-5-21-2780537013-3993079493-2141789299-1142\Software\Classes\Local Settings\Software\Microsoft\Windows\Shell\Bags\88\Shell\{7D49D726-3C21-4F05-99AA-FDC2C9474656}\IconSize: 0x00000010 HKU\S-1-5-21-2780537013-3993079493-2141789299-1142\Software\Classes\Local Settings\Software\Microsoft\Windows\Shell\Bags\88\Shell\{7D49D726-3C21-4F05-99AA-FDC2C9474656}\LogicalViewMode: 0x00000001 HKU\S-1-5-21-2780537013-3993079493-2141789299-1142\Software\Classes\Local Settings\Software\Microsoft\Windows\Shell\Bags\88\Shell\{7D49D726-3C21-4F05-99AA-FDC2C9474656}\GroupView: 0x00000000 HKU\S-1-5-21-2780537013-3993079493-2141789299-1142\Software\Classes\Local Settings\Software\Microsoft\Windows\Shell\Bags\88\Shell\{7D49D726-3C21-4F05-99AA-FDC2C9474656}\FMTID:GroupByKey: "{B725F130-47EF-101A-A5F1-02608C9EEBAC}" HKU\S-1-5-21-2780537013-3993079493-2141789299-1142\Software\Classes\Local Settings\Software\Microsoft\Windows\Shell\Bags\88\Shell\{7D49D726-3C21-4F05-99AA-FDC2C9474656}\PID:GroupByKey: 0x0000000A HKU\S-1-5-21-2780537013-3993079493-2141789299-1142\Software\Classes\Local Settings\Software\Microsoft\Windows\Shell\Bags\88\Shell\{7D49D726-3C21-4F05-99AA-FDC2C9474656}\GroupByGUID: "{00000000-0000-0000-0000-000000000000}" HKU\S-1-5-21-2780537013-3993079493-2141789299-1142\Software\Classes\Local 
Settings\Software\Microsoft\Windows\Shell\Bags\88\Shell\{7D49D726-3C21-4F05-99AA-FDC2C9474656}\GroupByDirection: 0x00000001 HKU\S-1-5-21-2780537013-3993079493-2141789299-1142\Software\Classes\Local Settings\Software\Microsoft\Windows\Shell\Bags\88\Shell\{7D49D726-3C21-4F05-99AA-FDC2C9474656}\ColInfo: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 FD DF DF FD 10 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 HKU\S-1-5-21-2780537013-3993079493-2141789299-1142\Software\Classes\Local Settings\Software\Microsoft\Windows\Shell\Bags\88\Shell\{7D49D726-3C21-4F05-99AA-FDC2C9474656}\Sort: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 01 00 00 00 30 F1 25 B7 EF 47 1A 10 A5 F1 02 60 8C 9E EB AC 0A 00 00 00 01 00 00 00 HKU\S-1-5-21-2780537013-3993079493-2141789299-1142\Software\Classes\Local Settings\Software\Microsoft\Windows\Shell\Bags\88\Shell\KnownFolderDerivedFolderType: "{57807898-8C4F-4462-BB63-71042380B109}" HKU\S-1-5-21-2780537013-3993079493-2141789299-1142\Software\Classes\Local Settings\Software\Microsoft\Windows\Shell\Bags\88\Shell\SniffedFolderType: "Documents" HKU\S-1-5-21-2780537013-3993079493-2141789299-1142_Classes\Local Settings\Software\Microsoft\Windows\Shell\Bags\88\Shell\{7D49D726-3C21-4F05-99AA-FDC2C9474656}\Rev: 0x00000000 HKU\S-1-5-21-2780537013-3993079493-2141789299-1142_Classes\Local Settings\Software\Microsoft\Windows\Shell\Bags\88\Shell\{7D49D726-3C21-4F05-99AA-FDC2C9474656}\FFlags: 0x40200001 HKU\S-1-5-21-2780537013-3993079493-2141789299-1142_Classes\Local Settings\Software\Microsoft\Windows\Shell\Bags\88\Shell\{7D49D726-3C21-4F05-99AA-FDC2C9474656}\HotKey: 0x00000000 HKU\S-1-5-21-2780537013-3993079493-2141789299-1142_Classes\Local Settings\Software\Microsoft\Windows\Shell\Bags\88\Shell\{7D49D726-3C21-4F05-99AA-FDC2C9474656}\Buttons: 0x00000000 HKU\S-1-5-21-2780537013-3993079493-2141789299-1142_Classes\Local Settings\Software\Microsoft\Windows\Shell\Bags\88\Shell\{7D49D726-3C21-4F05-99AA-FDC2C9474656}\Vid: "{137E7700-3573-11CF-AE69-08002B2E1262}" HKU\S-1-5-21-2780537013-3993079493-2141789299-1142_Classes\Local Settings\Software\Microsoft\Windows\Shell\Bags\88\Shell\{7D49D726-3C21-4F05-99AA-FDC2C9474656}\Mode: 0x00000004 HKU\S-1-5-21-2780537013-3993079493-2141789299-1142_Classes\Local Settings\Software\Microsoft\Windows\Shell\Bags\88\Shell\{7D49D726-3C21-4F05-99AA-FDC2C9474656}\ScrollPos1024x768(1).x: 0x00000000 HKU\S-1-5-21-2780537013-3993079493-2141789299-1142_Classes\Local Settings\Software\Microsoft\Windows\Shell\Bags\88\Shell\{7D49D726-3C21-4F05-99AA-FDC2C9474656}\ScrollPos1024x768(1).y: 0x00000000 HKU\S-1-5-21-2780537013-3993079493-2141789299-1142_Classes\Local Settings\Software\Microsoft\Windows\Shell\Bags\88\Shell\{7D49D726-3C21-4F05-99AA-FDC2C9474656}\IconSize: 0x00000010 HKU\S-1-5-21-2780537013-3993079493-2141789299-1142_Classes\Local Settings\Software\Microsoft\Windows\Shell\Bags\88\Shell\{7D49D726-3C21-4F05-99AA-FDC2C9474656}\LogicalViewMode: 0x00000001 HKU\S-1-5-21-2780537013-3993079493-2141789299-1142_Classes\Local Settings\Software\Microsoft\Windows\Shell\Bags\88\Shell\{7D49D726-3C21-4F05-99AA-FDC2C9474656}\GroupView: 0x00000000 HKU\S-1-5-21-2780537013-3993079493-2141789299-1142_Classes\Local Settings\Software\Microsoft\Windows\Shell\Bags\88\Shell\{7D49D726-3C21-4F05-99AA-FDC2C9474656}\FMTID:GroupByKey: "{B725F130-47EF-101A-A5F1-02608C9EEBAC}" HKU\S-1-5-21-2780537013-3993079493-2141789299-1142_Classes\Local Settings\Software\Microsoft\Windows\Shell\Bags\88\Shell\{7D49D726-3C21-4F05-99AA-FDC2C9474656}\PID:GroupByKey: 0x0000000A 
Values changed: 100

HKLM\SOFTWARE\Classes\.jpe\: "jpegfile" --> "IrfanView.JPG"
HKLM\SOFTWARE\Classes\.jpeg\: "jpegfile" --> "IrfanView.JPG"
HKLM\SOFTWARE\Classes\.jpg\: "jpegfile" --> "IrfanView.jpg"
HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\GlobalAssocChangedCounter: 0x00000021 --> 0x00000022
HKU\S-1-5-21-2780537013-3993079493-2141789299-1142\Software\Microsoft\Windows\CurrentVersion\Explorer\StartPage\ProgramsCache: [large binary Start Menu cache value; hex dump truncated]
00 74 00 5C 00 57 00 69 00 6E 00 64 00 6F 00 77 00 73 00 5C 00 53 00 74 00 61 00 72 00 74 00 20 00 4D 00 65 00 6E 00 75 00 5C 00 50 00 72 00 6F 00 67 00 72 00 61 00 6D 00 73 00 5C 00 41 00 63 00 63 00 65 00 73 00 73 00 6F 00 72 00 69 00 65 00 73 00 5C 00 53 00 79 00 73 00 74 00 65 00 6D 00 20 00 54 00 6F 00 6F 00 6C 00 73 00 5C 00 63 00 6F 00 6D 00 70 00 75 00 74 00 65 00 72 00 2E 00 6C 00 6E 00 6B 00 00 00 00 00 00 00 1C 00 00 00 01 8C 01 00 00 8A 01 32 00 E6 00 00 00 33 38 E5 71 80 00 43 4F 4E 54 52 4F 7E 31 2E 4C 4E 4B 00 00 74 00 07 00 04 00 EF BE 3F 3A 88 84 3F 3A 88 84 26 00 00 00 21 F4 00 00 00 00 06 00 00 00 00 00 00 00 00 00 4A 00 43 00 6F 00 6E 00 74 00 72 00 6F 00 6C 00 20 00 50 00 61 00 6E 00 65 00 6C 00 2E 00 6C 00 6E 00 6B 00 00 00 40 00 73 00 68 00 65 00 6C 00 6C 00 33 00 32 00 2E 00 64 00 6C 00 6C 00 2C 00 2D 00 31 00 32 00 37 00 31 00 32 00 00 00 1C 00 FA 00 01 00 0B 00 EF BE 00 00 00 00 00 00 00 00 43 00 3A 00 5C 00 55 00 73 00 65 00 72 00 73 00 5C 00 6E 00 74 00 61 00 64 00 6D 00 69 00 6E 00 5C 00 41 00 70 00 70 00 44 00 61 00 74 00 61 00 5C 00 52 00 6F 00 61 00 6D 00 69 00 6E 00 67 00 5C 00 4D 00 69 00 63 00 72 00 6F 00 73 00 6F 00 66 00 74 00 5C 00 57 00 69 00 6E 00 64 00 6F 00 77 00 73 00 5C 00 53 00 74 00 61 00 72 00 74 00 20 00 4D 00 65 00 6E 00 75 00 5C 00 50 00 72 00 6F 00 67 00 72 00 61 00 6D 00 73 00 5C 00 41 00 63 00 63 00 65 00 73 00 73 00 6F 00 72 00 69 00 65 00 73 00 5C 00 53 00 79 00 73 00 74 00 65 00 6D 00 20 00 54 00 6F 00 6F 00 6C 00 73 00 5C 00 43 00 6F 00 6E 00 74 00 72 00 6F 00 6C 00 20 00 50 00 61 00 6E 00 65 00 6C 00 2E 00 6C 00 6E 00 6B 00 00 00 00 00 00 00 1C 00 00 00 01 5C 01 00 00 5A 01 32 00 8D 03 00 00 3F 3A 8F 84 80 00 49 4E 54 45 52 4E 7E 31 2E 4C 4E 4B 00 00 BC 00 07 00 04 00 EF BE 3F 3A 8E 84 3F 3A 8E 84 26 00 00 00 26 F8 00 00 00 00 13 00 00 00 00 00 00 00 00 00 6C 00 49 00 6E 00 74 00 65 00 72 00 6E 00 65 00 74 00 20 00 45 00 78 00 70 00 6C 00 6F 00 72 00 65 00 72 00 20 00 28 00 4E 00 6F 00 20 00 41 00 64 00 64 00 2D 00 6F 00 6E 00 73 00 29 00 2E 00 6C 00 6E 00 6B 00 00 00 40 00 43 00 3A 00 5C 00 57 00 69 00 6E 00 64 00 6F 00 77 00 73 00 5C 00 53 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 69 00 65 00 34 00 75 00 69 00 6E 00 69 00 74 00 2E 00 65 00 78 00 65 00 2C 00 2D 00 37 00 33 00 37 00 00 00 1C 00 82 00 01 00 0B 00 EF BE 00 00 00 00 00 00 00 00 43 00 3A 00 5C 00 50 00 72 00 6F 00 67 00 72 00 61 00 6D 00 20 00 46 00 69 00 6C 00 65 00 73 00 20 00 28 00 78 00 38 00 36 00 29 00 5C 00 49 00 6E 00 74 00 65 00 72 00 6E 00 65 00 74 00 20 00 45 00 78 00 70 00 6C 00 6F 00 72 00 65 00 72 00 5C 00 69 00 65 00 78 00 70 00 6C 00 6F 00 72 00 65 00 2E 00 65 00 78 00 65 00 00 00 00 00 01 00 1C 00 00 00 00 D0 00 00 00 7A 00 31 00 00 00 00 00 57 3A 7D 4F 11 00 50 72 6F 67 72 61 6D 73 00 00 62 00 07 00 04 00 EF BE 3F 3A 88 84 57 3A 7D 4F 26 00 00 00 CA F3 00 00 00 00 06 00 00 00 00 00 00 00 00 00 38 00 50 00 72 00 6F 00 67 00 72 00 61 00 6D 00 73 00 00 00 40 00 73 00 68 00 65 00 6C 00 6C 00 33 00 32 00 2E 00 64 00 6C 00 6C 00 2C 00 2D 00 32 00 31 00 37 00 38 00 32 00 00 00 18 00 54 00 31 00 00 00 00 00 57 3A 7D 4F 10 00 49 52 46 41 4E 56 7E 31 00 00 3C 00 07 00 04 00 EF BE 57 3A 7D 4F 57 3A 7D 4F 26 00 00 00 7B 77 01 00 00 00 54 00 00 00 00 00 00 00 00 00 00 00 49 00 72 00 66 00 61 00 6E 00 56 00 69 00 65 00 77 00 00 00 18 00 00 00 01 DE 00 00 00 DC 00 32 00 D5 02 00 00 57 3A 7D 4F 20 00 41 42 4F 55 54 49 7E 31 2E 4C 4E 4B 00 00 50 00 07 00 04 00 EF BE 57 3A 7D 4F 57 3A 7D 4F 26 00 00 00 80 77 01 00 00 00 40 00 00 00 00 00 00 00 00 00 00 00 41 00 
62 00 6F 00 75 00 74 00 20 00 49 00 72 00 66 00 61 00 6E 00 56 00 69 00 65 00 77 00 2E 00 6C 00 6E 00 6B 00 00 00 1C 00 70 00 01 00 0B 00 EF BE 00 00 00 00 00 00 00 00 43 00 3A 00 5C 00 50 00 72 00 6F 00 67 00 72 00 61 00 6D 00 20 00 46 00 69 00 6C 00 65 00 73 00 20 00 28 00 78 00 38 00 36 00 29 00 5C 00 49 00 72 00 66 00 61 00 6E 00 56 00 69 00 65 00 77 00 5C 00 69 00 5F 00 61 00 62 00 6F 00 75 00 74 00 2E 00 74 00 78 00 74 00 00 00 00 00 00 00 1C 00 00 00 01 EE 00 00 00 EC 00 32 00 E7 02 00 00 57 3A 7D 4F 20 00 41 56 41 49 4C 41 7E 32 2E 4C 4E 4B 00 00 58 00 07 00 04 00 EF BE 57 3A 7D 4F 57 3A 7D 4F 26 00 00 00 84 77 01 00 00 00 40 00 00 00 00 00 00 00 00 00 00 00 41 00 76 00 61 00 69 00 6C 00 61 00 62 00 6C 00 65 00 20 00 4C 00 61 00 6E 00 67 00 75 00 61 00 67 00 65 00 73 00 2E 00 6C 00 6E 00 6B 00 00 00 1C 00 78 00 01 00 0B 00 EF BE 00 00 00 00 00 00 00 00 43 00 3A 00 5C 00 50 00 72 00 6F 00 67 00 72 00 61 00 6D 00 20 00 46 00 69 00 6C 00 65 00 73 00 20 00 28 00 78 00 38 00 36 00 29 00 5C 00 49 00 72 00 66 00 61 00 6E 00 56 00 69 00 65 00 77 00 5C 00 69 00 5F 00 6C 00 61 00 6E 00 67 00 75 00 61 00 67 00 65 00 73 00 2E 00 74 00 78 00 74 00 00 00 00 00 00 00 1C 00 00 00 01 E6 00 00 00 E4 00 32 00 DF 02 00 00 57 3A 7D 4F 20 00 41 56 41 49 4C 41 7E 31 2E 4C 4E 4B 00 00 54 00 07 00 04 00 EF BE 57 3A 7D 4F 57 3A 7D 4F 26 00 00 00 83 77 01 00 00 00 40 00 00 00 00 00 00 00 00 00 00 00 41 00 76 00 61 00 69 00 6C 00 61 00 62 00 6C 00 65 00 20 00 50 00 6C 00 75 00 67 00 49 00 6E 00 73 00 2E 00 6C 00 6E 00 6B 00 00 00 1C 00 74 00 01 00 0B 00 EF BE 00 00 00 00 00 00 00 00 43 00 3A 00 5C 00 50 00 72 00 6F 00 67 00 72 00 61 00 6D 00 20 00 46 00 69 00 6C 00 65 00 73 00 20 00 28 00 78 00 38 00 36 00 29 00 5C 00 49 00 72 00 66 00 61 00 6E 00 56 00 69 00 65 00 77 00 5C 00 69 00 5F 00 70 00 6C 00 75 00 67 00 69 00 6E 00 73 00 2E 00 74 00 78 00 74 00 00 00 00 00 00 00 1C 00 00 00 01 EC 00 00 00 EA 00 32 00 DF 02 00 00 57 3A 7D 4F 20 00 43 4F 4D 4D 41 4E 7E 31 2E 4C 4E 4B 00 00 5A 00 07 00 04 00 EF BE 57 3A 7D 4F 57 3A 7D 4F 26 00 00 00 82 77 01 00 00 00 40 00 00 00 00 00 00 00 00 00 00 00 43 00 6F 00 6D 00 6D 00 61 00 6E 00 64 00 20 00 6C 00 69 00 6E 00 65 00 20 00 4F 00 70 00 74 00 69 00 6F 00 6E 00 73 00 2E 00 6C 00 6E 00 6B 00 00 00 1C 00 74 00 01 00 0B 00 EF BE 00 00 00 00 00 00 00 00 43 00 3A 00 5C 00 50 00 72 00 6F 00 67 00 72 00 61 00 6D 00 20 00 46 00 69 00 6C 00 65 00 73 00 20 00 28 00 78 00 38 00 36 00 29 00 5C 00 49 00 72 00 66 00 61 00 6E 00 56 00 69 00 65 00 77 00 5C 00 69 00 5F 00 6F 00 70 00 74 00 69 00 6F 00 6E 00 73 00 2E 00 74 00 78 00 74 00 00 00 00 00 00 00 1C 00 00 00 01 EE 00 00 00 EC 00 32 00 5B 06 00 00 57 3A 7D 4F 20 00 49 52 46 41 4E 56 7E 32 2E 4C 4E 4B 00 00 5E 00 07 00 04 00 EF BE 57 3A 7D 4F 57 3A 7D 4F 26 00 00 00 7E 77 01 00 00 00 40 00 00 00 00 00 00 00 00 00 00 00 49 00 72 00 66 00 61 00 6E 00 56 00 69 00 65 00 77 00 20 00 2D 00 20 00 54 00 68 00 75 00 6D 00 62 00 6E 00 61 00 69 00 6C 00 73 00 2E 00 6C 00 6E 00 6B 00 00 00 1C 00 72 00 01 00 0B 00 EF BE 00 00 00 00 00 00 00 00 43 00 3A 00 5C 00 50 00 72 00 6F 00 67 00 72 00 61 00 6D 00 20 00 46 00 69 00 6C 00 65 00 73 00 20 00 28 00 78 00 38 00 36 00 29 00 5C 00 49 00 72 00 66 00 61 00 6E 00 56 00 69 00 65 00 77 00 5C 00 69 00 5F 00 76 00 69 00 65 00 77 00 33 00 32 00 2E 00 65 00 78 00 65 00 00 00 00 00 01 00 1C 00 00 00 01 DE 00 00 00 DC 00 32 00 DB 02 00 00 57 3A 7D 4F 20 00 49 52 46 41 4E 56 7E 31 2E 4C 4E 4B 00 00 4E 00 07 00 04 00 EF BE 57 3A 7D 4F 57 3A 7D 4F 26 00 00 00 7C 77 01 00 00 00 43 00 00 00 00 00 00 00 00 00 
00 00 49 00 72 00 66 00 61 00 6E 00 56 00 69 00 65 00 77 00 20 00 34 00 2E 00 32 00 33 00 2E 00 6C 00 6E 00 6B 00 00 00 1C 00 72 00 01 00 0B 00 EF BE 00 00 00 00 00 00 00 00 43 00 3A 00 5C 00 50 00 72 00 6F 00 67 00 72 00 61 00 6D 00 20 00 46 00 69 00 6C 00 65 00 73 00 20 00 28 00 78 00 38 00 36 00 29 00 5C 00 49 00 72 00 66 00 61 00 6E 00 56 00 69 00 65 00 77 00 5C 00 69 00 5F 00 76 00 69 00 65 00 77 00 33 00 32 00 2E 00 65 00 78 00 65 00 00 00 00 00 00 00 1C 00 00 00 01 DE 00 00 00 DC 00 32 00 DB 02 00 00 57 3A 7D 4F 20 00 49 52 46 41 4E 56 7E 33 2E 4C 4E 4B 00 00 4E 00 07 00 04 00 EF BE 57 3A 7D 4F 57 3A 7D 4F 26 00 00 00 85 77 01 00 00 00 E9 02 00 00 00 00 00 00 00 00 00 00 49 00 72 00 66 00 61 00 6E 00 56 00 69 00 65 00 77 00 20 00 48 00 65 00 6C 00 70 00 2E 00 6C 00 6E 00 6B 00 00 00 1C 00 72 00 01 00 0B 00 EF BE 00 00 00 00 00 00 00 00 43 00 3A 00 5C 00 50 00 72 00 6F 00 67 00 72 00 61 00 6D 00 20 00 46 00 69 00 6C 00 65 00 73 00 20 00 28 00 78 00 38 00 36 00 29 00 5C 00 49 00 72 00 66 00 61 00 6E 00 56 00 69 00 65 00 77 00 5C 00 69 00 5F 00 76 00 69 00 65 00 77 00 33 00 32 00 2E 00 63 00 68 00 6D 00 00 00 00 00 00 00 1C 00 00 00 01 F0 00 00 00 EE 00 32 00 EB 02 00 00 57 3A 7D 4F 20 00 55 4E 49 4E 53 54 7E 31 2E 4C 4E 4B 00 00 58 00 07 00 04 00 EF BE 57 3A 7D 4F 57 3A 7D 4F 26 00 00 00 7F 77 01 00 00 00 40 00 00 00 00 00 00 00 00 00 00 00 55 00 6E 00 69 00 6E 00 73 00 74 00 61 00 6C 00 6C 00 20 00 49 00 72 00 66 00 61 00 6E 00 56 00 69 00 65 00 77 00 2E 00 6C 00 6E 00 6B 00 00 00 1C 00 7A 00 01 00 0B 00 EF BE 00 00 00 00 00 00 00 00 43 00 3A 00 5C 00 50 00 72 00 6F 00 67 00 72 00 61 00 6D 00 20 00 46 00 69 00 6C 00 65 00 73 00 20 00 28 00 78 00 38 00 36 00 29 00 5C 00 49 00 72 00 66 00 61 00 6E 00 56 00 69 00 65 00 77 00 5C 00 69 00 76 00 5F 00 75 00 6E 00 69 00 6E 00 73 00 74 00 61 00 6C 00 6C 00 2E 00 65 00 78 00 65 00 00 00 00 00 00 00 1C 00 00 00 01 D8 00 00 00 D6 00 32 00 DF 02 00 00 57 3A 7D 4F 20 00 57 48 41 54 27 53 7E 31 2E 4C 4E 4B 00 00 46 00 07 00 04 00 EF BE 57 3A 7D 4F 57 3A 7D 4F 26 00 00 00 81 77 01 00 00 00 40 00 00 00 00 00 00 00 00 00 00 00 57 00 68 00 61 00 74 00 27 00 73 00 20 00 4E 00 65 00 77 00 2E 00 6C 00 6E 00 6B 00 00 00 1C 00 74 00 01 00 0B 00 EF BE 00 00 00 00 00 00 00 00 43 00 3A 00 5C 00 50 00 72 00 6F 00 67 00 72 00 61 00 6D 00 20 00 46 00 69 00 6C 00 65 00 73 00 20 00 28 00 78 00 38 00 36 00 29 00 5C 00 49 00 72 00 66 00 61 00 6E 00 56 00 69 00 65 00 77 00 5C 00 69 00 5F 00 63 00 68 00 61 00 6E 00 67 00 65 00 73 00 2E 00 74 00 78 00 74 00 00 00 00 00 00 00 1C 00 00 00 00 FC 00 00 00 7A 00 31 00 00 00 00 00 3F 3A 8F 84 11 00 50 72 6F 67 72 61 6D 73 00 00 62 00 07 00 04 00 EF BE 3F 3A 88 84 3F 3A 8F 84 26 00 00 00 CA F3 00 00 00 00 06 00 00 00 00 00 00 00 00 00 38 00 50 00 72 00 6F 00 67 00 72 00 61 00 6D 00 73 00 00 00 40 00 73 00 68 00 65 00 6C 00 6C 00 33 00 32 00 2E 00 64 00 6C 00 6C 00 2C 00 2D 00 32 00 31 00 37 00 38 00 32 00 00 00 18 00 80 00 31 00 00 00 00 00 33 38 E5 71 11 00 4D 41 49 4E 54 45 7E 31 00 00 68 00 07 00 04 00 EF BE 3F 3A 88 84 3F 3A 88 84 26 00 00 00 D1 F3 00 00 00 00 06 00 00 00 00 00 00 00 00 00 3E 00 4D 00 61 00 69 00 6E 00 74 00 65 00 6E 00 61 00 6E 00 63 00 65 00 00 00 40 00 73 00 68 00 65 00 6C 00 6C 00 33 00 32 00 2E 00 64 00 6C 00 6C 00 2C 00 2D 00 32 00 31 00 38 00 31 00 31 00 00 00 18 00 00 00 01 4A 01 00 00 48 01 32 00 E6 00 00 00 33 38 E5 71 80 00 48 65 6C 70 2E 6C 6E 6B 00 00 62 00 07 00 04 00 EF BE 3F 3A 88 84 3F 3A 88 84 26 00 00 00 0D F4 00 00 00 00 06 00 00 00 00 00 00 00 00 00 38 00 48 00 65 00 6C 00 70 00 2E 00 
6C 00 6E 00 6B 00 00 00 40 00 73 00 68 00 65 00 6C 00 6C 00 33 00 32 00 2E 00 64 00 6C 00 6C 00 2C 00 2D 00 31 00 32 00 37 00 30 00 39 00 00 00 18 00 CE 00 01 00 0B 00 EF BE 00 00 00 00 00 00 00 00 43 00 3A 00 5C 00 55 00 73 00 65 00 72 00 73 00 5C 00 6E 00 74 00 61 00 64 00 6D 00 69 00 6E 00 5C 00 41 00 70 00 70 00 44 00 61 00 74 00 61 00 5C 00 52 00 6F 00 61 00 6D 00 69 00 6E 00 67 00 5C 00 4D 00 69 00 63 00 72 00 6F 00 73 00 6F 00 66 00 74 00 5C 00 57 00 69 00 6E 00 64 00 6F 00 77 00 73 00 5C 00 53 00 74 00 61 00 72 00 74 00 20 00 4D 00 65 00 6E 00 75 00 5C 00 50 00 72 00 6F 00 67 00 72 00 61 00 6D 00 73 00 5C 00 4D 00 61 00 69 00 6E 00 74 00 65 00 6E 00 61 00 6E 00 63 00 65 00 5C 00 48 00 65 00 6C 00 70 00 2E 00 6C 00 6E 00 6B 00 00 00 00 00 00 00 18 00 00 00 00 02 01 00 00 7A 00 31 00 00 00 00 00 3F 3A 8F 84 11 00 50 72 6F 67 72 61 6D 73 00 00 62 00 07 00 04 00 EF BE 3F 3A 88 84 3F 3A 8F 84 26 00 00 00 CA F3 00 00 00 00 06 00 00 00 00 00 00 00 00 00 38 00 50 00 72 00 6F 00 67 00 72 00 61 00 6D 00 73 00 00 00 40 00 73 00 68 00 65 00 6C 00 6C 00 33 00 32 00 2E 00 64 00 6C 00 6C 00 2C 00 2D 00 32 00 31 00 37 00 38 00 32 00 00 00 18 00 86 00 31 00 00 00 00 00 3F 3A 8B 84 10 00 57 49 4E 44 4F 57 7E 31 00 00 6E 00 07 00 04 00 EF BE 3F 3A 8B 84 3F 3A 8B 84 26 00 00 00 E6 F7 00 00 00 00 06 00 00 00 00 00 00 00 00 00 00 00 57 00 69 00 6E 00 64 00 6F 00 77 00 73 00 20 00 53 00 6D 00 61 00 6C 00 6C 00 20 00 42 00 75 00 73 00 69 00 6E 00 65 00 73 00 73 00 20 00 53 00 65 00 72 00 76 00 65 00 72 00 20 00 32 00 30 00 30 00 38 00 00 00 18 00 00 00 01 80 01 00 00 7E 01 32 00 84 01 00 00 57 3A AF 66 20 00 49 4E 54 45 52 4E 7E 31 2E 4C 4E 4B 00 00 50 00 07 00 04 00 EF BE 3F 3A 8B 84 3F 3A 8B 84 26 00 00 00 E7 F7 00 00 00 00 05 00 00 00 00 00 00 00 00 00 00 00 49 00 6E 00 74 00 65 00 72 00 6E 00 65 00 20 00 57 00 65 00 62 00 73 00 69 00 74 00 65 00 2E 00 6C 00 6E 00 6B 00 00 00 1C 00 12 01 01 00 0B 00 EF BE 00 00 00 00 00 00 00 00 43 00 3A 00 5C 00 55 00 73 00 65 00 72 00 73 00 5C 00 6E 00 74 00 61 00 64 00 6D 00 69 00 6E 00 5C 00 41 00 70 00 70 00 44 00 61 00 74 00 61 00 5C 00 52 00 6F 00 61 00 6D 00 69 00 6E 00 67 00 5C 00 4D 00 69 00 63 00 72 00 6F 00 73 00 6F 00 66 00 74 00 5C 00 57 00 69 00 6E 00 64 00 6F 00 77 00 73 00 5C 00 53 00 74 00 61 00 72 00 74 00 20 00 4D 00 65 00 6E 00 75 00 5C 00 50 00 72 00 6F 00 67 00 72 00 61 00 6D 00 73 00 5C 00 57 00 69 00 6E 00 64 00 6F 00 77 00 73 00 20 00 53 00 6D 00 61 00 6C 00 6C 00 20 00 42 00 75 00 73 00 69 00 6E 00 65 00 73 00 73 00 20 00 53 00 65 00 72 00 76 00 65 00 72 00 20 00 32 00 30 00 30 00 38 00 5C 00 49 00 6E 00 74 00 65 00 72 00 6E 00 65 00 20 00 57 00 65 00 62 00 73 00 69 00 74 00 65 00 2E 00 6C 00 6E 00 6B 00 00 00 00 00 00 00 1C 00 00 00 02 19 57 11 A4 2E D6 1D 49 AA 7C E7 4B 8B E3 B0 67 00 02 00 00 00 00 00 01 0A 01 00 00 08 01 32 00 17 06 00 00 33 38 73 71 80 00 57 49 4E 44 4F 57 7E 31 2E 4C 4E 4B 00 00 96 00 07 00 04 00 EF BE 33 38 73 71 33 38 73 71 26 00 00 00 3D 2E 00 00 00 00 01 00 00 00 00 00 00 00 00 00 4C 00 57 00 69 00 6E 00 64 00 6F 00 77 00 73 00 20 00 55 00 70 00 64 00 61 00 74 00 65 00 2E 00 6C 00 6E 00 6B 00 00 00 40 00 43 00 3A 00 5C 00 57 00 69 00 6E 00 64 00 6F 00 77 00 73 00 5C 00 73 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 77 00 75 00 63 00 6C 00 74 00 75 00 78 00 2E 00 64 00 6C 00 6C 00 2C 00 2D 00 31 00 00 00 1C 00 56 00 01 00 0B 00 EF BE 00 00 00 00 00 00 00 00 25 00 53 00 79 00 73 00 74 00 65 00 6D 00 52 00 6F 00 6F 00 74 00 25 00 5C 00 73 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 77 00 75 00 61 00 70 00 
70 00 2E 00 65 00 78 00 65 00 00 00 00 00 00 00 1C 00 00 00 00 7C 00 00 00 7A 00 31 00 00 00 00 00 3F 3A B1 80 11 00 50 72 6F 67 72 61 6D 73 00 00 62 00 07 00 04 00 EF BE 33 38 6B 51 3F 3A B1 80 26 00 00 00 EE 00 00 00 00 00 01 00 00 00 00 00 00 00 00 00 38 00 50 00 72 00 6F 00 67 00 72 00 61 00 6D 00 73 00 00 00 40 00 73 00 68 00 65 00 6C 00 6C 00 33 00 32 00 2E 00 64 00 6C 00 6C 00 2C 00 2D 00 32 00 31 00 37 00 38 00 32 00 00 00 18 00 00 00 01 00 01 00 00 FE 00 32 00 47 06 00 00 33 38 D4 71 80 00 57 49 4E 44 4F 57 7E 31 2E 4C 4E 4B 00 00 7A 00 07 00 04 00 EF BE 33 38 B5 71 33 38 B5 71 26 00 00 00 3F 2E 00 00 00 00 01 00 00 00 00 00 00 00 00 00 50 00 57 00 69 00 6E 00 64 00 6F 00 77 00 73 00 20 00 43 00 6F 00 6E 00 74 00 61 00 63 00 74 00 73 00 2E 00 6C 00 6E 00 6B 00 00 00 40 00 73 00 68 00 65 00 6C 00 6C 00 33 00 32 00 2E 00 64 00 6C 00 6C 00 2C 00 2D 00 32 00 32 00 30 00 31 00 37 00 00 00 1C 00 68 00 01 00 0B 00 EF BE 00 00 00 00 00 00 00 00 25 00 50 00 72 00 6F 00 67 00 72 00 61 00 6D 00 46 00 69 00 6C 00 65 00 73 00 28 00 78 00 38 00 36 00 29 00 25 00 5C 00 57 00 69 00 6E 00 64 00 6F 00 77 00 73 00 20 00 4D 00 61 00 69 00 6C 00 5C 00 77 00 61 00 62 00 2E 00 65 00 78 00 65 00 00 00 00 00 00 00 1C 00 00 00 00 FC 00 00 00 7A 00 31 00 00 00 00 00 3F 3A B1 80 11 00 50 72 6F 67 72 61 6D 73 00 00 62 00 07 00 04 00 EF BE 33 38 6B 51 3F 3A B1 80 26 00 00 00 EE 00 00 00 00 00 01 00 00 00 00 00 00 00 00 00 38 00 50 00 72 00 6F 00 67 00 72 00 61 00 6D 00 73 00 00 00 40 00 73 00 68 00 65 00 6C 00 6C 00 33 00 32 00 2E 00 64 00 6C 00 6C 00 2C 00 2D 00 32 00 31 00 37 00 38 00 32 00 00 00 18 00 80 00 31 00 00 00 00 00 33 38 2B 72 11 00 41 43 43 45 53 53 7E 31 00 00 68 00 07 00 04 00 EF BE 33 38 6B 51 33 38 2B 72 26 00 00 00 EF 00 00 00 00 00 01 00 00 00 00 00 00 00 00 00 3E 00 41 00 63 00 63 00 65 00 73 00 73 00 6F 00 72 00 69 00 65 00 73 00 00 00 40 00 73 00 68 00 65 00 6C 00 6C 00 33 00 32 00 2E 00 64 00 6C 00 6C 00 2C 00 2D 00 32 00 31 00 37 00 36 00 31 00 00 00 18 00 00 00 01 E0 00 00 00 DE 00 32 00 FD 05 00 00 33 38 2B 72 80 00 43 41 4C 43 55 4C 7E 31 2E 4C 4E 4B 00 00 6E 00 07 00 04 00 EF BE 33 38 2B 72 33 38 2B 72 26 00 00 00 40 2E 00 00 00 00 01 00 00 00 00 00 00 00 00 00 44 00 43 00 61 00 6C 00 63 00 75 00 6C 00 61 00 74 00 6F 00 72 00 2E 00 6C 00 6E 00 6B 00 00 00 40 00 73 00 68 00 65 00 6C 00 6C 00 33 00 32 00 2E 00 64 00 6C 00 6C 00 2C 00 2D 00 32 00 32 00 30 00 31 00 39 00 00 00 1C 00 54 00 01 00 0B 00 EF BE 00 00 00 00 00 00 00 00 25 00 53 00 79 00 73 00 74 00 65 00 6D 00 52 00 6F 00 6F 00 74 00 25 00 5C 00 73 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 63 00 61 00 6C 00 63 00 2E 00 65 00 78 00 65 00 00 00 00 00 00 00 1C 00 00 00 01 D8 00 00 00 D6 00 32 00 11 06 00 00 33 38 10 72 80 00 50 61 69 6E 74 2E 6C 6E 6B 00 64 00 07 00 04 00 EF BE 33 38 F9 71 33 38 F9 71 26 00 00 00 42 2E 00 00 00 00 01 00 00 00 00 00 00 00 00 00 3A 00 50 00 61 00 69 00 6E 00 74 00 2E 00 6C 00 6E 00 6B 00 00 00 40 00 73 00 68 00 65 00 6C 00 6C 00 33 00 32 00 2E 00 64 00 6C 00 6C 00 2C 00 2D 00 32 00 32 00 30 00 35 00 34 00 00 00 18 00 5A 00 01 00 0B 00 EF BE 00 00 00 00 00 00 00 00 25 00 53 00 79 00 73 00 74 00 65 00 6D 00 52 00 6F 00 6F 00 74 00 25 00 5C 00 73 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 6D 00 73 00 70 00 61 00 69 00 6E 00 74 00 2E 00 65 00 78 00 65 00 00 00 00 00 00 00 18 00 00 00 01 22 01 00 00 20 01 32 00 EB 05 00 00 33 38 01 72 80 00 52 45 4D 4F 54 45 7E 31 2E 4C 4E 4B 00 00 AE 00 07 00 04 00 EF BE 33 38 E2 71 33 38 E2 71 26 00 00 00 43 2E 00 00 00 00 01 00 00 00 00 00 00 00 00 
00 62 00 52 00 65 00 6D 00 6F 00 74 00 65 00 20 00 44 00 65 00 73 00 6B 00 74 00 6F 00 70 00 20 00 43 00 6F 00 6E 00 6E 00 65 00 63 00 74 00 69 00 6F 00 6E 00 2E 00 6C 00 6E 00 6B 00 00 00 40 00 43 00 3A 00 5C 00 57 00 69 00 6E 00 64 00 6F 00 77 00 73 00 5C 00 73 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 6D 00 73 00 74 00 73 00 63 00 2E 00 65 00 78 00 65 00 2C 00 2D 00 34 00 30 00 30 00 30 00 00 00 1C 00 56 00 01 00 0B 00 EF BE 00 00 00 00 00 00 00 00 25 00 73 00 79 00 73 00 74 00 65 00 6D 00 72 00 6F 00 6F 00 74 00 25 00 5C 00 73 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 6D 00 73 00 74 00 73 00 63 00 2E 00 65 00 78 00 65 00 00 00 00 00 00 00 1C 00 00 00 01 FE 00 00 00 FC 00 32 00 EB 06 00 00 33 38 24 72 80 00 57 6F 72 64 70 61 64 2E 6C 6E 6B 00 68 00 07 00 04 00 EF BE 33 38 24 72 33 38 24 72 26 00 00 00 44 2E 00 00 00 00 01 00 00 00 00 00 00 00 00 00 3E 00 57 00 6F 00 72 00 64 00 70 00 61 00 64 00 2E 00 6C 00 6E 00 6B 00 00 00 40 00 73 00 68 00 65 00 6C 00 6C 00 33 00 32 00 2E 00 64 00 6C 00 6C 00 2C 00 2D 00 32 00 32 00 30 00 36 00 39 00 00 00 1A 00 7A 00 01 00 0B 00 EF BE 00 00 00 00 00 00 00 00 25 00 50 00 72 00 6F 00 67 00 72 00 61 00 6D 00 46 00 69 00 6C 00 65 00 73 00 25 00 5C 00 57 00 69 00 6E 00 64 00 6F 00 77 00 73 00 20 00 4E 00 54 00 5C 00 41 00 63 00 63 00 65 00 73 00 73 00 6F 00 72 00 69 00 65 00 73 00 5C 00 77 00 6F 00 72 00 64 00 70 00 61 00 64 00 2E 00 65 00 78 00 65 00 00 00 00 00 00 00 1A 00 00 00 00 7E 01 00 00 7A 00 31 00 00 00 00 00 3F 3A B1 80 11 00 50 72 6F 67 72 61 6D 73 00 00 62 00 07 00 04 00 EF BE 33 38 6B 51 3F 3A B1 80 26 00 00 00 EE 00 00 00 00 00 01 00 00 00 00 00 00 00 00 00 38 00 50 00 72 00 6F 00 67 00 72 00 61 00 6D 00 73 00 00 00 40 00 73 00 68 00 65 00 6C 00 6C 00 33 00 32 00 2E 00 64 00 6C 00 6C 00 2C 00 2D 00 32 00 31 00 37 00 38 00 32 00 00 00 18 00 80 00 31 00 00 00 00 00 33 38 2B 72 11 00 41 43 43 45 53 53 7E 31 00 00 68 00 07 00 04 00 EF BE 33 38 6B 51 33 38 2B 72 26 00 00 00 EF 00 00 00 00 00 01 00 00 00 00 00 00 00 00 00 3E 00 41 00 63 00 63 00 65 00 73 00 73 00 6F 00 72 00 69 00 65 00 73 00 00 00 40 00 73 00 68 00 65 00 6C 00 6C 00 33 00 32 00 2E 00 64 00 6C 00 6C 00 2C 00 2D 00 32 00 31 00 37 00 36 00 31 00 00 00 18 00 82 00 31 00 00 00 00 00 33 38 0F 72 11 00 53 59 53 54 45 4D 7E 31 00 00 6A 00 07 00 04 00 EF BE 33 38 6B 51 33 38 0F 72 26 00 00 00 F1 00 00 00 00 00 01 00 00 00 00 00 00 00 00 00 40 00 53 00 79 00 73 00 74 00 65 00 6D 00 20 00 54 00 6F 00 6F 00 6C 00 73 00 00 00 40 00 73 00 68 00 65 00 6C 00 6C 00 33 00 32 00 2E 00 64 00 6C 00 6C 00 2C 00 2D 00 32 00 31 00 37 00 38 00 38 00 00 00 18 00 00 00 01 FC 00 00 00 FA 00 32 00 37 06 00 00 33 38 0D 72 80 00 64 66 72 67 75 69 2E 6C 6E 6B 00 00 88 00 07 00 04 00 EF BE 33 38 F6 71 33 38 F6 71 26 00 00 00 4B 2E 00 00 00 00 01 00 00 00 00 00 00 00 00 00 3C 00 64 00 66 00 72 00 67 00 75 00 69 00 2E 00 6C 00 6E 00 6B 00 00 00 40 00 43 00 3A 00 5C 00 57 00 69 00 6E 00 64 00 6F 00 77 00 73 00 5C 00 73 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 64 00 66 00 72 00 67 00 75 00 69 00 2E 00 65 00 78 00 65 00 2C 00 2D 00 31 00 30 00 33 00 00 00 1A 00 58 00 01 00 0B 00 EF BE 00 00 00 00 00 00 00 00 25 00 53 00 79 00 73 00 74 00 65 00 6D 00 52 00 6F 00 6F 00 74 00 25 00 5C 00 73 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 64 00 66 00 72 00 67 00 75 00 69 00 2E 00 65 00 78 00 65 00 00 00 00 00 00 00 1A 00 00 00 01 1E 01 00 00 1C 01 32 00 1D 06 00 00 33 38 D5 71 80 00 53 59 53 54 45 4D 7E 31 2E 4C 4E 4B 00 00 A4 00 07 00 04 00 EF BE 33 38 BA 71 33 38 BA 71 26 00 00 00 4C 2E 
00 00 00 00 01 00 00 00 00 00 00 00 00 00 54 00 53 00 79 00 73 00 74 00 65 00 6D 00 20 00 49 00 6E 00 66 00 6F 00 72 00 6D 00 61 00 74 00 69 00 6F 00 6E 00 2E 00 6C 00 6E 00 6B 00 00 00 40 00 43 00 3A 00 5C 00 57 00 69 00 6E 00 64 00 6F 00 77 00 73 00 5C 00 73 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 6D 00 73 00 69 00 6E 00 66 00 6F 00 33 00 32 00 2E 00 65 00 78 00 65 00 2C 00 2D 00 31 00 30 00 30 00 00 00 1C 00 5C 00 01 00 0B 00 EF BE 00 00 00 00 00 00 00 00 25 00 53 00 79 00 73 00 74 00 65 00 6D 00 52 00 6F 00 6F 00 74 00 25 00 5C 00 53 00 79 00 73 00 57 00 4F 00 57 00 36 00 34 00 5C 00 6D 00 73 00 69 00 6E 00 66 00 6F 00 33 00 32 00 2E 00 65 00 78 00 65 00 00 00 00 00 00 00 1C 00 00 00 01 20 01 00 00 1E 01 32 00 37 06 00 00 33 38 0F 72 80 00 54 41 53 4B 53 43 7E 31 2E 4C 4E 4B 00 00 A6 00 07 00 04 00 EF BE 33 38 0F 72 33 38 0F 72 26 00 00 00 4D 2E 00 00 00 00 01 00 00 00 00 00 00 00 00 00 4C 00 54 00 61 00 73 00 6B 00 20 00 53 00 63 00 68 00 65 00 64 00 75 00 6C 00 65 00 72 00 2E 00 6C 00 6E 00 6B 00 00 00 40 00 43 00 3A 00 5C 00 57 00 69 00 6E 00 64 00 6F 00 77 00 73 00 5C 00 73 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 6D 00 69 00 67 00 75 00 69 00 72 00 65 00 73 00 6F 00 75 00 72 00 63 00 65 00 2E 00 64 00 6C 00 6C 00 2C 00 2D 00 32 00 30 00 31 00 00 00 1C 00 5C 00 01 00 0B 00 EF BE 00 00 00 00 00 00 00 00 25 00 53 00 79 00 73 00 74 00 65 00 6D 00 52 00 6F 00 6F 00 74 00 25 00 5C 00 73 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 74 00 61 00 73 00 6B 00 73 00 63 00 68 00 64 00 2E 00 6D 00 73 00 63 00 00 00 00 00 01 00 1C 00 00 00 00 0E 01 00 00 7A 00 31 00 00 00 00 00 3F 3A B1 80 11 00 50 72 6F 67 72 61 6D 73 00 00 62 00 07 00 04 00 EF BE 33 38 6B 51 3F 3A B1 80 26 00 00 00 EE 00 00 00 00 00 01 00 00 00 00 00 00 00 00 00 38 00 50 00 72 00 6F 00 67 00 72 00 61 00 6D 00 73 00 00 00 40 00 73 00 68 00 65 00 6C 00 6C 00 33 00 32 00 2E 00 64 00 6C 00 6C 00 2C 00 2D 00 32 00 31 00 37 00 38 00 32 00 00 00 18 00 92 00 31 00 00 00 00 00 33 38 38 72 11 00 41 44 4D 49 4E 49 7E 31 00 00 7A 00 07 00 04 00 EF BE 33 38 D1 6E 33 38 38 72 26 00 00 00 F2 00 00 00 00 00 01 00 00 00 00 00 00 00 00 00 50 00 41 00 64 00 6D 00 69 00 6E 00 69 00 73 00 74 00 72 00 61 00 74 00 69 00 76 00 65 00 20 00 54 00 6F 00 6F 00 6C 00 73 00 00 00 40 00 73 00 68 00 65 00 6C 00 6C 00 33 00 32 00 2E 00 64 00 6C 00 6C 00 2C 00 2D 00 32 00 31 00 37 00 36 00 32 00 00 00 18 00 00 00 01 16 01 00 00 14 01 32 00 05 06 00 00 33 38 38 72 80 00 43 4F 4D 50 4F 4E 7E 31 2E 4C 4E 4B 00 00 A0 00 07 00 04 00 EF BE 33 38 30 72 33 38 30 72 26 00 00 00 4E 2E 00 00 00 00 01 00 00 00 00 00 00 00 00 00 54 00 43 00 6F 00 6D 00 70 00 6F 00 6E 00 65 00 6E 00 74 00 20 00 53 00 65 00 72 00 76 00 69 00 63 00 65 00 73 00 2E 00 6C 00 6E 00 6B 00 00 00 40 00 43 00 3A 00 5C 00 57 00 69 00 6E 00 64 00 6F 00 77 00 73 00 5C 00 73 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 63 00 6F 00 6D 00 72 00 65 00 73 00 2E 00 64 00 6C 00 6C 00 2C 00 2D 00 36 00 30 00 32 00 00 00 1C 00 58 00 01 00 0B 00 EF BE 00 00 00 00 00 00 00 00 25 00 53 00 79 00 73 00 74 00 65 00 6D 00 52 00 6F 00 6F 00 74 00 25 00 5C 00 73 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 63 00 6F 00 6D 00 65 00 78 00 70 00 2E 00 6D 00 73 00 63 00 00 00 00 00 00 00 1C 00 00 00 01 20 01 00 00 1E 01 32 00 49 06 00 00 33 38 0B 72 80 00 43 4F 4D 50 55 54 7E 31 2E 4C 4E 4B 00 00 A6 00 07 00 04 00 EF BE 33 38 F4 71 33 38 F4 71 26 00 00 00 4F 2E 00 00 00 00 01 00 00 00 00 00 00 00 00 00 56 00 43 00 6F 00 6D 00 70 00 75 00 74 00 65 00 72 00 20 00 4D 00 61 00 6E 00 61 00 67 
00 65 00 6D 00 65 00 6E 00 74 00 2E 00 6C 00 6E 00 6B 00 00 00 40 00 43 00 3A 00 5C 00 57 00 69 00 6E 00 64 00 6F 00 77 00 73 00 5C 00 73 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 6D 00 79 00 63 00 6F 00 6D 00 70 00 75 00 74 00 2E 00 64 00 6C 00 6C 00 2C 00 2D 00 33 00 30 00 30 00 00 00 1C 00 5C 00 01 00 0B 00 EF BE 00 00 00 00 00 00 00 00 25 00 53 00 79 00 73 00 74 00 65 00 6D 00 52 00 6F 00 6F 00 74 00 25 00 5C 00 73 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 63 00 6F 00 6D 00 70 00 6D 00 67 00 6D 00 74 00 2E 00 6D 00 73 00 63 00 00 00 00 00 01 00 1C 00 00 00 01 20 01 00 00 1E 01 32 00 41 06 00 00 33 38 FD 71 80 00 44 41 54 41 53 4F 7E 31 2E 4C 4E 4B 00 00 A6 00 07 00 04 00 EF BE 33 38 E0 71 33 38 E0 71 26 00 00 00 50 2E 00 00 00 00 01 00 00 00 00 00 00 00 00 00 56 00 44 00 61 00 74 00 61 00 20 00 53 00 6F 00 75 00 72 00 63 00 65 00 73 00 20 00 28 00 4F 00 44 00 42 00 43 00 29 00 2E 00 6C 00 6E 00 6B 00 00 00 40 00 43 00 3A 00 5C 00 57 00 69 00 6E 00 64 00 6F 00 77 00 73 00 5C 00 73 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 6F 00 64 00 62 00 63 00 69 00 6E 00 74 00 2E 00 64 00 6C 00 6C 00 2C 00 2D 00 31 00 33 00 31 00 30 00 00 00 1C 00 5C 00 01 00 0B 00 EF BE 00 00 00 00 00 00 00 00 25 00 53 00 79 00 73 00 74 00 65 00 6D 00 52 00 6F 00 6F 00 74 00 25 00 5C 00 73 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 6F 00 64 00 62 00 63 00 61 00 64 00 33 00 32 00 2E 00 65 00 78 00 65 00 00 00 00 00 00 00 1C 00 00 00 01 1C 01 00 00 1A 01 32 00 5D 06 00 00 33 38 0F 72 80 00 45 56 45 4E 54 56 7E 31 2E 4C 4E 4B 00 00 A2 00 07 00 04 00 EF BE 33 38 0F 72 33 38 0F 72 26 00 00 00 52 2E 00 00 00 00 01 00 00 00 00 00 00 00 00 00 48 00 45 00 76 00 65 00 6E 00 74 00 20 00 56 00 69 00 65 00 77 00 65 00 72 00 2E 00 6C 00 6E 00 6B 00 00 00 40 00 43 00 3A 00 5C 00 57 00 69 00 6E 00 64 00 6F 00 77 00 73 00 5C 00 73 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 6D 00 69 00 67 00 75 00 69 00 72 00 65 00 73 00 6F 00 75 00 72 00 63 00 65 00 2E 00 64 00 6C 00 6C 00 2C 00 2D 00 31 00 30 00 31 00 00 00 1C 00 5C 00 01 00 0B 00 EF BE 00 00 00 00 00 00 00 00 25 00 53 00 79 00 73 00 74 00 65 00 6D 00 52 00 6F 00 6F 00 74 00 25 00 5C 00 73 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 65 00 76 00 65 00 6E 00 74 00 76 00 77 00 72 00 2E 00 6D 00 73 00 63 00 00 00 00 00 01 00 1C 00 00 00 01 2C 01 00 00 2A 01 32 00 8B 06 00 00 46 3A CE 41 80 00 49 49 53 4D 41 4E 7E 31 2E 4C 4E 4B 00 00 A4 00 07 00 04 00 EF BE 46 3A CE 41 46 3A CE 41 26 00 00 00 4F 4E 01 00 00 00 0A 00 00 00 00 00 00 00 00 00 46 00 49 00 49 00 53 00 20 00 4D 00 61 00 6E 00 61 00 67 00 65 00 72 00 2E 00 6C 00 6E 00 6B 00 00 00 40 00 43 00 3A 00 5C 00 57 00 69 00 6E 00 64 00 6F 00 77 00 73 00 5C 00 73 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 69 00 6E 00 65 00 74 00 73 00 72 00 76 00 5C 00 49 00 6E 00 65 00 74 00 4D 00 67 00 72 00 2E 00 65 00 78 00 65 00 2C 00 2D 00 31 00 30 00 31 00 00 00 1C 00 6A 00 01 00 0B 00 EF BE 00 00 00 00 00 00 00 00 25 00 53 00 79 00 73 00 74 00 65 00 6D 00 52 00 6F 00 6F 00 74 00 25 00 5C 00 73 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 69 00 6E 00 65 00 74 00 73 00 72 00 76 00 5C 00 49 00 6E 00 65 00 74 00 4D 00 67 00 72 00 2E 00 65 00 78 00 65 00 00 00 00 00 00 00 1C 00 00 00 01 1A 01 00 00 18 01 32 00 45 06 00 00 33 38 0C 72 80 00 49 53 43 53 49 49 7E 31 2E 4C 4E 4B 00 00 A0 00 07 00 04 00 EF BE 33 38 F4 71 33 38 F4 71 26 00 00 00 53 2E 00 00 00 00 01 00 00 00 00 00 00 00 00 00 4E 00 69 00 53 00 43 00 53 00 49 00 20 00 49 00 6E 00 69 00 74 00 69 00 61 00 74 00 6F 00 72 00 2E 00 
6C 00 6E 00 6B 00 00 00 40 00 43 00 3A 00 5C 00 57 00 69 00 6E 00 64 00 6F 00 77 00 73 00 5C 00 73 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 69 00 73 00 63 00 73 00 69 00 63 00 70 00 6C 00 2E 00 64 00 6C 00 6C 00 2C 00 2D 00 35 00 30 00 30 00 31 00 00 00 1C 00 5C 00 01 00 0B 00 EF BE 00 00 00 00 00 00 00 00 25 00 53 00 79 00 73 00 74 00 65 00 6D 00 52 00 6F 00 6F 00 74 00 25 00 5C 00 73 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 69 00 73 00 63 00 73 00 69 00 63 00 70 00 6C 00 2E 00 65 00 78 00 65 00 00 00 00 00 00 00 1C 00 00 00 01 26 01 00 00 24 01 32 00 3B 06 00 00 33 38 B9 71 80 00 4D 45 4D 4F 52 59 7E 31 2E 4C 4E 4B 00 00 AE 00 07 00 04 00 EF BE 33 38 B9 71 33 38 B9 71 26 00 00 00 54 2E 00 00 00 00 01 00 00 00 00 00 00 00 00 00 5E 00 4D 00 65 00 6D 00 6F 00 72 00 79 00 20 00 44 00 69 00 61 00 67 00 6E 00 6F 00 73 00 74 00 69 00 63 00 73 00 20 00 54 00 6F 00 6F 00 6C 00 2E 00 6C 00 6E 00 6B 00 00 00 40 00 43 00 3A 00 5C 00 57 00 69 00 6E 00 64 00 6F 00 77 00 73 00 5C 00 73 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 4D 00 64 00 53 00 63 00 68 00 65 00 64 00 2E 00 65 00 78 00 65 00 2C 00 2D 00 34 00 30 00 30 00 31 00 00 00 1C 00 5A 00 01 00 0B 00 EF BE 00 00 00 00 00 00 00 00 25 00 53 00 79 00 73 00 74 00 65 00 6D 00 52 00 6F 00 6F 00 74 00 25 00 5C 00 73 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 4D 00 64 00 53 00 63 00 68 00 65 00 64 00 2E 00 65 00 78 00 65 00 00 00 00 00 00 00 1C 00 00 00 01 20 01 00 00 1E 01 32 00 E9 05 00 00 46 3A D5 41 80 00 4E 45 54 57 4F 52 7E 31 2E 4C 4E 4B 00 00 B0 00 07 00 04 00 EF BE 46 3A D5 41 46 3A D5 41 26 00 00 00 A0 4F 01 00 00 00 01 00 00 00 00 00 00 00 00 00 5A 00 4E 00 65 00 74 00 77 00 6F 00 72 00 6B 00 20 00 50 00 6F 00 6C 00 69 00 63 00 79 00 20 00 53 00 65 00 72 00 76 00 65 00 72 00 2E 00 6C 00 6E 00 6B 00 00 00 40 00 43 00 3A 00 5C 00 57 00 69 00 6E 00 64 00 6F 00 77 00 73 00 5C 00 73 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 69 00 61 00 73 00 75 00 69 00 68 00 65 00 6C 00 70 00 65 00 72 00 2E 00 64 00 6C 00 6C 00 2C 00 2D 00 31 00 30 00 32 00 00 00 1C 00 52 00 01 00 0B 00 EF BE 00 00 00 00 00 00 00 00 25 00 53 00 79 00 73 00 74 00 65 00 6D 00 52 00 6F 00 6F 00 74 00 25 00 5C 00 73 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 6E 00 70 00 73 00 2E 00 6D 00 73 00 63 00 00 00 00 00 00 00 1C 00 00 00 01 38 01 00 00 36 01 32 00 07 06 00 00 33 38 FC 71 80 00 52 45 4C 49 41 42 7E 31 2E 4C 4E 4B 00 00 C0 00 07 00 04 00 EF BE 33 38 DC 71 33 38 DC 71 26 00 00 00 55 2E 00 00 00 00 01 00 00 00 00 00 00 00 00 00 76 00 52 00 65 00 6C 00 69 00 61 00 62 00 69 00 6C 00 69 00 74 00 79 00 20 00 61 00 6E 00 64 00 20 00 50 00 65 00 72 00 66 00 6F 00 72 00 6D 00 61 00 6E 00 63 00 65 00 20 00 4D 00 6F 00 6E 00 69 00 74 00 6F 00 72 00 2E 00 6C 00 6E 00 6B 00 00 00 40 00 43 00 3A 00 5C 00 57 00 69 00 6E 00 64 00 6F 00 77 00 73 00 5C 00 73 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 77 00 64 00 63 00 2E 00 64 00 6C 00 6C 00 2C 00 2D 00 31 00 30 00 30 00 32 00 31 00 00 00 1C 00 5A 00 01 00 0B 00 EF BE 00 00 00 00 00 00 00 00 25 00 53 00 79 00 73 00 74 00 65 00 6D 00 52 00 6F 00 6F 00 74 00 25 00 5C 00 73 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 70 00 65 00 72 00 66 00 6D 00 6F 00 6E 00 2E 00 6D 00 73 00 63 00 00 00 00 00 01 00 1C 00 00 00 01 38 01 00 00 36 01 32 00 15 06 00 00 33 38 38 72 80 00 53 45 43 55 52 49 7E 32 2E 4C 4E 4B 00 00 C2 00 07 00 04 00 EF BE 33 38 38 72 33 38 38 72 26 00 00 00 56 2E 00 00 00 00 01 00 00 00 00 00 00 00 00 00 72 00 53 00 65 00 63 00 75 00 72 00 69 00 74 00 79 00 20 00 43 00 6F 
00 6E 00 66 00 69 00 67 00 75 00 72 00 61 00 74 00 69 00 6F 00 6E 00 20 00 4D 00 61 00 6E 00 61 00 67 00 65 00 6D 00 65 00 6E 00 74 00 2E 00 6C 00 6E 00 6B 00 00 00 40 00 43 00 3A 00 5C 00 57 00 69 00 6E 00 64 00 6F 00 77 00 73 00 5C 00 73 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 77 00 73 00 65 00 63 00 65 00 64 00 69 00 74 00 2E 00 64 00 6C 00 6C 00 2C 00 2D 00 37 00 31 00 38 00 00 00 1C 00 58 00 01 00 0B 00 EF BE 00 00 00 00 00 00 00 00 25 00 53 00 79 00 73 00 74 00 65 00 6D 00 52 00 6F 00 6F 00 74 00 25 00 5C 00 73 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 73 00 65 00 63 00 70 00 6F 00 6C 00 2E 00 6D 00 73 00 63 00 00 00 00 00 01 00 1C 00 00 00 01 1C 01 00 00 1A 01 32 00 17 06 00 00 33 38 37 72 80 00 53 45 43 55 52 49 7E 31 2E 4C 4E 4B 00 00 AC 00 07 00 04 00 EF BE 33 38 30 72 33 38 30 72 26 00 00 00 57 2E 00 00 00 00 01 00 00 00 00 00 00 00 00 00 6A 00 53 00 65 00 63 00 75 00 72 00 69 00 74 00 79 00 20 00 43 00 6F 00 6E 00 66 00 69 00 67 00 75 00 72 00 61 00 74 00 69 00 6F 00 6E 00 20 00 57 00 69 00 7A 00 61 00 72 00 64 00 2E 00 6C 00 6E 00 6B 00 00 00 40 00 43 00 3A 00 5C 00 57 00 69 00 6E 00 64 00 6F 00 77 00 73 00 5C 00 73 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 73 00 63 00 77 00 2E 00 65 00 78 00 65 00 2C 00 2D 00 32 00 00 00 1C 00 52 00 01 00 0B 00 EF BE 00 00 00 00 00 00 00 00 25 00 53 00 79 00 73 00 74 00 65 00 6D 00 52 00 6F 00 6F 00 74 00 25 00 5C 00 73 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 73 00 63 00 77 00 2E 00 65 00 78 00 65 00 00 00 00 00 00 00 1C 00 00 00 01 26 01 00 00 24 01 32 00 63 06 00 00 33 38 37 72 80 00 53 45 52 56 45 52 7E 31 2E 4C 4E 4B 00 00 9C 00 07 00 04 00 EF BE 33 38 37 72 33 38 37 72 26 00 00 00 58 2E 00 00 00 00 01 00 00 00 00 00 00 00 00 00 4C 00 53 00 65 00 72 00 76 00 65 00 72 00 20 00 4D 00 61 00 6E 00 61 00 67 00 65 00 72 00 2E 00 6C 00 6E 00 6B 00 00 00 40 00 43 00 3A 00 5C 00 57 00 69 00 6E 00 64 00 6F 00 77 00 73 00 5C 00 73 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 73 00 76 00 72 00 6D 00 67 00 72 00 6E 00 63 00 2E 00 64 00 6C 00 6C 00 2C 00 2D 00 31 00 30 00 31 00 00 00 1C 00 6C 00 01 00 0B 00 EF BE 00 00 00 00 00 00 00 00 25 00 53 00 79 00 73 00 74 00 65 00 6D 00 52 00 6F 00 6F 00 74 00 25 00 5C 00 73 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 43 00 6F 00 6D 00 70 00 4D 00 67 00 6D 00 74 00 4C 00 61 00 75 00 6E 00 63 00 68 00 65 00 72 00 2E 00 65 00 78 00 65 00 00 00 00 00 00 00 1C 00 00 00 01 0C 01 00 00 0A 01 32 00 43 06 00 00 33 38 05 72 80 00 73 65 72 76 69 63 65 73 2E 6C 6E 6B 00 00 92 00 07 00 04 00 EF BE 33 38 E9 71 33 38 E9 71 26 00 00 00 59 2E 00 00 00 00 01 00 00 00 00 00 00 00 00 00 40 00 73 00 65 00 72 00 76 00 69 00 63 00 65 00 73 00 2E 00 6C 00 6E 00 6B 00 00 00 40 00 43 00 3A 00 5C 00 57 00 69 00 6E 00 64 00 6F 00 77 00 73 00 5C 00 73 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 66 00 69 00 6C 00 65 00 6D 00 67 00 6D 00 74 00 2E 00 64 00 6C 00 6C 00 2C 00 2D 00 32 00 32 00 30 00 34 00 00 00 1C 00 5C 00 01 00 0B 00 EF BE 00 00 00 00 00 00 00 00 25 00 53 00 79 00 73 00 74 00 65 00 6D 00 52 00 6F 00 6F 00 74 00 25 00 5C 00 73 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 73 00 65 00 72 00 76 00 69 00 63 00 65 00 73 00 2E 00 6D 00 73 00 63 00 00 00 00 00 00 00 1C 00 00 00 01 3C 01 00 00 3A 01 32 00 0F 06 00 00 33 38 38 72 80 00 53 48 41 52 45 41 7E 31 2E 4C 4E 4B 00 00 BC 00 07 00 04 00 EF BE 33 38 38 72 33 38 38 72 26 00 00 00 5A 2E 00 00 00 00 01 00 00 00 00 00 00 00 00 00 68 00 53 00 68 00 61 00 72 00 65 00 20 00 61 00 6E 00 64 00 20 00 53 00 74 00 6F 00 72 00 
61 00 67 00 65 00 20 00 4D 00 61 00 6E 00 61 00 67 00 65 00 6D 00 65 00 6E 00 74 00 2E 00 6C 00 6E 00 6B 00 00 00 40 00 43 00 3A 00 5C 00 57 00 69 00 6E 00 64 00 6F 00 77 00 73 00 5C 00 73 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 53 00 74 00 6F 00 72 00 61 00 67 00 65 00 52 00 65 00 73 00 2E 00 64 00 6C 00 6C 00 2C 00 2D 00 31 00 30 00 35 00 00 00 1C 00 62 00 01 00 0B 00 EF BE 00 00 00 00 00 00 00 00 25 00 73 00 79 00 73 00 74 00 65 00 6D 00 72 00 6F 00 6F 00 74 00 25 00 5C 00 73 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 53 00 74 00 6F 00 72 00 61 00 67 00 65 00 4D 00 67 00 6D 00 74 00 2E 00 6D 00 73 00 63 00 00 00 00 00 00 00 1C 00 00 00 01 1C 01 00 00 1A 01 32 00 45 06 00 00 33 38 33 72 80 00 53 54 4F 52 41 47 7E 31 2E 4C 4E 4B 00 00 A2 00 07 00 04 00 EF BE 33 38 2D 72 33 38 2D 72 26 00 00 00 5B 2E 00 00 00 00 01 00 00 00 00 00 00 00 00 00 50 00 53 00 74 00 6F 00 72 00 61 00 67 00 65 00 20 00 45 00 78 00 70 00 6C 00 6F 00 72 00 65 00 72 00 2E 00 6C 00 6E 00 6B 00 00 00 40 00 43 00 3A 00 5C 00 57 00 69 00 6E 00 64 00 6F 00 77 00 73 00 5C 00 73 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 73 00 74 00 6F 00 72 00 65 00 78 00 70 00 6C 00 2E 00 64 00 6C 00 6C 00 2C 00 2D 00 35 00 30 00 30 00 30 00 00 00 1C 00 5C 00 01 00 0B 00 EF BE 00 00 00 00 00 00 00 00 25 00 73 00 79 00 73 00 74 00 65 00 6D 00 72 00 6F 00 6F 00 74 00 25 00 5C 00 73 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 73 00 74 00 6F 00 72 00 65 00 78 00 70 00 6C 00 2E 00 6D 00 73 00 63 00 00 00 00 00 00 00 1C 00 00 00 01 22 01 00 00 20 01 32 00 19 06 00 00 33 38 B9 71 80 00 53 59 53 54 45 4D 7E 31 2E 4C 4E 4B 00 00 A8 00 07 00 04 00 EF BE 33 38 B9 71 33 38 B9 71 26 00 00 00 5C 2E 00 00 00 00 01 00 00 00 00 00 00 00 00 00 58 00 53 00 79 00 73 00 74 00 65 00 6D 00 20 00 43 00 6F 00 6E 00 66 00 69 00 67 00 75 00 72 00 61 00 74 00 69 00 6F 00 6E 00 2E 00 6C 00 6E 00 6B 00 00 00 40 00 43 00 3A 00 5C 00 57 00 69 00 6E 00 64 00 6F 00 77 00 73 00 5C 00 73 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 6D 00 73 00 63 00 6F 00 6E 00 66 00 69 00 67 00 2E 00 65 00 78 00 65 00 2C 00 2D 00 31 00 32 00 36 00 00 00 1C 00 5C 00 01 00 0B 00 EF BE 00 00 00 00 00 00 00 00 25 00 53 00 79 00 73 00 74 00 65 00 6D 00 52 00 6F 00 6F 00 74 00 25 00 5C 00 73 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 6D 00 73 00 63 00 6F 00 6E 00 66 00 69 00 67 00 2E 00 65 00 78 00 65 00 00 00 00 00 00 00 1C 00 00 00 01 20 01 00 00 1E 01 32 00 31 06 00 00 33 38 0F 72 80 00 54 41 53 4B 53 43 7E 31 2E 4C 4E 4B 00 00 A6 00 07 00 04 00 EF BE 33 38 0F 72 33 38 0F 72 26 00 00 00 5D 2E 00 00 00 00 01 00 00 00 00 00 00 00 00 00 4C 00 54 00 61 00 73 00 6B 00 20 00 53 00 63 00 68 00 65 00 64 00 75 00 6C 00 65 00 72 00 2E 00 6C 00 6E 00 6B 00 00 00 40 00 43 00 3A 00 5C 00 57 00 69 00 6E 00 64 00 6F 00 77 00 73 00 5C 00 73 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 6D 00 69 00 67 00 75 00 69 00 72 00 65 00 73 00 6F 00 75 00 72 00 63 00 65 00 2E 00 64 00 6C 00 6C 00 2C 00 2D 00 32 00 30 00 31 00 00 00 1C 00 5C 00 01 00 0B 00 EF BE 00 00 00 00 00 00 00 00 25 00 53 00 79 00 73 00 74 00 65 00 6D 00 52 00 6F 00 6F 00 74 00 25 00 5C 00 73 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00 5C 00 74 00 61 00 73 00 6B 00 73 00 63 00 68 00 64 00 2E 00 6D 00 73 00 63 00 00 00 00 00 01 00 1C 00 00 00 01 3A 01 00 00 38 01 32 00 23 06 00 00 33 38 02 72 80 00 57 49 4E 44 4F 57 7E 31 2E 4C 4E 4B 00 00 CC 00 07 00 04 00 EF BE 33 38 E4 71 33 38 E4 71 26 00 00 00 5E 2E 00 00 00 00 01 00 00 00 00 00 00 00 00 00 7E 00 57 00 69 00 6E 00 64 00 6F 00 77 00 73 00 20 
00 46 00 69 00 72 00 65 00 77 00 61 00 6C 00 6C 00 20 00 77 00 69 00 74 00 68 00 20 00 41 00 64 00 76 00 61 00 6E 00 63 00 65 00 64 00 20 00 53 00 65 00 63 00 75 00 72 00 69 00 74 00 79 00 2E 00 6C 00 6E 00 6B 00 00 00 40 00 43 00 3A 00 5C 00 57 00 69 00 6E 00 64 00 6F 00 77 00 73 00
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8787137866020203, "perplexity": 15.879630601812698}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687833.62/warc/CC-MAIN-20170921153438-20170921173438-00103.warc.gz"}
https://www.physicsforums.com/threads/simple-differential-equation.762752/
# Simple differential equation

1. Jul 22, 2014

### johann1301

1. The problem statement, all variables and given/known data

a) Write (x^2 - 1)y' + 2xy as the derivative of a product

b) Solve (x^2 - 1)y' + 2xy = e^(-x)

3. The attempt at a solution

a) I use the product rule backwards and get ((x^2 + 1)y)'

b) I exploit what I just found out...

(x^2 - 1)y' + 2xy = ((x^2 + 1)y)'

and get...

e^(-x) = ((x^2 + 1)y)'

integrate on both sides...

**∫e^(-x) dx = ∫((x^2 + 1)y)' dx**

-e^(-x) + C = (x^2 + 1)y

and get that...

y = (C - e^(-x))(x^2 + 1)^(-1)

This is the correct answer according to the book. What I am curious about is the step marked with bold text. I wrote dx at the end just by guessing. Why couldn't I just have written dy instead?

2. Jul 22, 2014

### slider142

You could have integrated both sides with respect to y if you really wanted to, but while the left side would then become ye^(-x), there is no theorem that helps us easily integrate the right side with respect to y. We are only able to integrate it with respect to x because we found in step one that it is the derivative with respect to x of (x^2 + 1)y, and the fundamental theorem of calculus then assures us that its integral with respect to x is (x^2 + 1)y plus an arbitrary constant.

3. Jul 22, 2014

### johann1301

Thank you!

4. Jul 22, 2014

### ehild

y is a function of x and the prime ' means derivative with respect to x: y' means dy/dx. If f = (x^2 + 1)y then f' = df/dx. You get f if you integrate f': ∫f' dx = f.

Formally you can handle the problem as if df/dx were a simple fraction. df/dx = e^(-x); multiply both sides by dx,

df = e^(-x) dx

and put the integral symbol at the front:

∫(df/dx) dx = ∫e^(-x) dx

ehild

5. Jul 23, 2014

### HallsofIvy

You mean (x^2 + 1)y'. That puzzled me for a while!

6. Jul 23, 2014

### johann1301

Sorry!

7. Jul 23, 2014

### johann1301

This is only true in this case, right? It's more just an assumption we make based on the look of the task?

8. Jul 23, 2014

### slider142

Yes. In most texts on ordinary differential equations, y is assumed to be the dependent variable and x is assumed to be the independent variable as a matter of notation only, usually established by the author in the first chapter. Therefore y' is assumed to always mean y'(x), or dy/dx, in these texts only.
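As a quick sanity check on the thread's final answer, here is a short SymPy sketch (my addition, not part of the thread; it uses the corrected coefficient (x^2 + 1) from post #5, and the symbol names are my own choices):

```python
# Verify y = (C - e^(-x)) / (x^2 + 1) solves (x^2 + 1)y' + 2xy = e^(-x).
import sympy as sp

x, C = sp.symbols('x C')
y = sp.Function('y')

ode = sp.Eq((x**2 + 1) * y(x).diff(x) + 2 * x * y(x), sp.exp(-x))
print(sp.dsolve(ode, y(x)))  # expect y(x) = (C1 - exp(-x))/(x**2 + 1)

# Substitute the book's answer back in; the residual should simplify to 0.
candidate = (C - sp.exp(-x)) / (x**2 + 1)
residual = (x**2 + 1) * candidate.diff(x) + 2 * x * candidate - sp.exp(-x)
print(sp.simplify(residual))  # 0
```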
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9662801027297974, "perplexity": 1209.6552456516242}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257646636.25/warc/CC-MAIN-20180319081701-20180319101701-00275.warc.gz"}
http://libros.duhnnae.com/2017/sep2/15053231923-Observational-upper-limits-on-the-gravitational-wave-production-of-core-collapse-supernovae-General-Relativity-and-Quantum-Cosmology.php
# Observational upper limits on the gravitational wave production of core collapse supernovae - General Relativity and Quantum Cosmology

Abstract: The upper limit on the energy density of a stochastic gravitational wave (GW) background obtained from the two-year science run (S5) of the Laser Interferometer Gravitational-wave Observatory (LIGO) is used to constrain the average GW production of core collapse supernovae (ccSNe). We assume that the ccSNe rate tracks the star formation history of the universe and show that the stochastic background energy density depends only weakly on the assumed average source spectrum. Using the ccSNe rate for $z \leq 10$, we scale the generic source spectrum to obtain an observation-based upper limit on the average GW emission. We show that the mean energy emitted in GWs can be constrained to within $0.49$ to $1.98\, M_{\odot} c^{2}$ depending on the average source spectrum. While these results are higher than the total available gravitational energy in a core collapse event, second and third generation GW detectors will enable tighter constraints to be set on the GW emission from such systems.

Authors: Xing-Jiang Zhu, Eric Howell, David Blair

Source: https://arxiv.org/
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9837702512741089, "perplexity": 2997.0442708374817}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865679.51/warc/CC-MAIN-20180523141759-20180523161759-00152.warc.gz"}
https://hypothes.is/search?q=tag%3Amath
104 Matching Annotations

1. May 2020

2. en.wikipedia.org

1. Related concepts in other fields are: In natural language, the coordinating conjunction "and". In programming languages, the short-circuit and control structure. In set theory, intersection. In predicate logic, universal quantification.

Strictly speaking, are these examples of dualities (https://en.wikipedia.org/wiki/Duality_(mathematics))? Or can I only, at strongest, say they are analogous (a looser connection)?

3. en.wikipedia.org

4. en.wikipedia.org

1. Mathematically speaking, necessity and sufficiency are dual to one another. For any statements S and N, the assertion that "N is necessary for S" is equivalent to the assertion that "S is sufficient for N".

5. en.wikipedia.org

6. en.wikipedia.org

1. This is an abstract form of De Morgan's laws, or of duality applied to lattices.

7. en.wikipedia.org

1. A plane graph is said to be self-dual if it is isomorphic to its dual graph.

8. en.wikipedia.org

1. In mathematical contexts, duality has numerous meanings[1] although it is "a very pervasive and important concept in (modern) mathematics"[2] and "an important general theme that has manifestations in almost every area of mathematics".[3]

9. www.shell-tips.com

1. `echo "scale=2; 2/3" | bc` is the right way to do math in bash.

10. Apr 2020

11. en.wikipedia.org

1. the phrase up to is used to convey the idea that some objects in the same class — while distinct — may be considered to be equivalent under some condition or transformation

2. "a and b are equivalent up to X" means that a and b are equivalent, if criterion X, such as rotation or permutation, is ignored

12. en.wikipedia.org

1. If solutions that differ only by the symmetry operations of rotation and reflection of the board are counted as one, the puzzle has 12 solutions. These are called fundamental solutions; representatives of each are shown below

13. 0.30000000000000004.com

1. Computers can only natively store integers, so they need some way of representing decimal numbers. This representation comes with some degree of inaccuracy. That's why, more often than not, .1 + .2 != .3

Computers make up their own way to store decimal numbers.

14. math.stackexchange.com

1. Suppose you have only two rolls of dice. Then your best strategy would be to take the first roll if its outcome is more than its expected value (i.e. 3.5) and to roll again if it is less.

Expected payoff of a dice game: Description: You have the option to throw a die up to three times. You will earn the face value of the die. You have the option to stop after each throw and walk away with the money earned. The earnings are not additive. What is the expected payoff of this game?

Rolling twice: $$\frac{1}{6}(6+5+4) + \frac{1}{2}\cdot 3.5 = 4.25.$$

Rolling three times: $$\frac{1}{6}(6+5) + \frac{2}{3}\cdot 4.25 = 4 + \frac{2}{3}.$$

15. math.stackexchange.com

1. Therefore, $E_n = 2^{n+1} - 2 = 2(2^n - 1)$

Simplified formula for the expected number of tosses $e_n$ to get $n$ consecutive heads ($n \geq 1$): $$e_n = 2(2^n - 1)$$

For example, to get 5 consecutive heads, we have to toss the coin 62 times on average: $$e_5 = 2(2^5 - 1) = 62$$

We can also start with the longer analysis of the 6 scenarios (a quick simulation check of both of these results follows at the end of this list):

1. If we get a tail immediately (probability 1/2) then the expected number is e+1.
2. If we get a head then a tail (probability 1/4), then the expected number is e+2.
3. If we get two heads then a tail (probability 1/8), then the expected number is e+3.
4. If we get three heads then a tail (probability 1/16), then the expected number is e+4.
5. If we get four heads then a tail (probability 1/32), then the expected number is e+5.
6. Finally, if our first 5 tosses are heads, then the expected number is 5.

Thus: $$e = \frac{1}{2}(e+1) + \frac{1}{4}(e+2) + \frac{1}{8}(e+3) + \frac{1}{16}(e+4) + \frac{1}{32}(e+5) + \frac{1}{32}\cdot 5,$$ which solves to $e = 62$.

106. Jan 2015

107. mathinsight.org

1. A function like f(x,y)=x+y is a function of two variables. It takes an element of R^2, like (2,1), and gives a value that is a real number (i.e., an element of R), like f(2,1)=3. Since f maps R^2 to R, we write f:R^2→R. We can also use this "mapping" notation to define the actual function. We could define the above f(x,y) by writing f:(x,y)↦x+y. To contrast a simple real number with a vector, we refer to the real number as a scalar. Hence, we can refer to f:R^2→R as a scalar-valued function of two variables or even just say it is a real-valued function of two variables. Everything works the same for scalar-valued functions of three or more variables. For example, f(x,y,z), which we can write f:R^3→R, is a scalar-valued function of three variables.

f:R^2→R means f(x,y)=z (scalar-valued f); f:R→R^2 means f(x)=(y,z) (vector-valued f)

2. f:R→R as a shorthand way of expressing that f is a function from R onto R.

108. Sep 2013

109. en.wikipedia.org

1. A computable Dedekind cut is a computable function which when provided with a rational number as input returns true or false,

This definition of computable Dedekind cut is wrong. The correct definition is that the lower and the upper cut be computably enumerable.
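The two expected values from the math.stackexchange annotations above (payoff 4 + 2/3 for the three-roll dice game, and 62 tosses for a run of 5 heads) are easy to confirm numerically. A minimal Monte Carlo sketch, my own addition rather than part of any annotation; the stopping thresholds encode the optimal strategy derived above:

```python
# Monte Carlo check of the two expected values quoted in the annotations.
import random

def dice_game():
    # Optimal play: continuing after roll 1 is worth 4.25, so keep a 5 or 6;
    # continuing after roll 2 is worth 3.5, so keep a 4, 5 or 6.
    r1 = random.randint(1, 6)
    if r1 >= 5:
        return r1
    r2 = random.randint(1, 6)
    if r2 >= 4:
        return r2
    return random.randint(1, 6)

def tosses_until_run(n=5):
    # Count fair-coin tosses until n consecutive heads appear.
    run = tosses = 0
    while run < n:
        tosses += 1
        run = run + 1 if random.random() < 0.5 else 0
    return tosses

N = 200_000
print(sum(dice_game() for _ in range(N)) / N)         # ~4.667 = 4 + 2/3
print(sum(tosses_until_run() for _ in range(N)) / N)  # ~62 = 2*(2**5 - 1)
```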
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8622028827667236, "perplexity": 1106.2266804057895}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348509264.96/warc/CC-MAIN-20200606000537-20200606030537-00219.warc.gz"}
https://mathhelpboards.com/threads/use-double-or-half-angle-formulas.25085/
# Use double or half angle formulas

#### Elissa89 ##### Member Oct 19, 2017 52 OK, so with this problem it says to use the double-angle or half-angle formula. I have the formulas in my notes, just not sure how to apply them to the problem. I feel like I should be using the double-angle formula, though. Here's the problem: cos(2*theta)+sin^2(theta)=0

#### Joppy ##### Well-known member MHB Math Helper Mar 17, 2016 256 Yes the double-angle formula for $\cos$ would be useful. There are a few different forms, but consider that $\cos (2 \theta) = \cos^2 (\theta) - \sin^2 (\theta)$. Substituting in, we get $\cos^2 (\theta) - \sin^2 (\theta) + \sin^2 (\theta) = \cos^2 (\theta) = 0$. Do you have any ideas on how to solve for $\theta$?
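For completeness, one way the thread's hint can be finished (this closing step is my addition, not part of the original posts):

```latex
\cos^2(\theta) = 0
\;\Longleftrightarrow\; \cos(\theta) = 0
\;\Longleftrightarrow\; \theta = \frac{\pi}{2} + k\pi, \quad k \in \mathbb{Z}.
```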
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9300530552864075, "perplexity": 374.78290980664053}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655887360.60/warc/CC-MAIN-20200705121829-20200705151829-00330.warc.gz"}
https://scicomp.stackexchange.com/questions/7711/algorithm-for-sparse-matrix-inverse
# Algorithm for Sparse-Matrix Inverse

I have a $50000\times 50000$ sparse matrix $A$ containing only 5 non-zero elements in each row. Now the problem is that the diagonal elements and the constants (in the $B$ matrix such that $AX=B$) get updated after every iteration. If I go by the usual Matlab function inv or via Gaussian elimination, it takes around 130 seconds for the solution to be computed for a single iteration, and the number of iterations required by the problem is somewhat of the order 100-500 to compute the final solution $X$. Need some suggestions on this one if I am not wanting to use parallel computation.

• Inverting your sparse matrix will inevitably yield a dense one, so not a good plan. You can use an iterative method instead. Speaking of which, you haven't mentioned if your matrix is symmetric (positive definite) or not... Jun 19 '13 at 14:57 • Do you update both A and B? – Jan Jun 19 '13 at 15:36 • Welcome to SciComp.SE! Just to make sure: You have a matrix $A\in\mathbb{R}^{n\times n}$ and a matrix $B\in\mathbb{R}^{n\times m}$, and are interested in computing the matrix $X\in\mathbb{R}^{n\times m}$ such that $AX=B$ (that is, all you really want is $X$, and not the inverse $A^{-1}$), and you are using Matlab? In this case, have you tried X=A\B (never use inv for solving linear systems, that's not what it is for!), which is the most efficient "black-box" way of solving your problem in Matlab? Jun 19 '13 at 17:05 • Also, an important piece of information is the size of B (same size as A or much smaller)? Jun 19 '13 at 17:07 • @clipper: Good point. I think without the help of the OP, there is little else we can suggest. I'm startled that none of our questions have gotten any answers -- does he no longer care about the question? Jun 20 '13 at 6:52

You may use a sparse factorization algorithm; that means computing matrices $P$, $L$, $U$, such that $M = PLU$ where $P$ is a permutation matrix, $L$ a sparse lower triangular matrix and $U$ a sparse upper triangular matrix. The permutation matrix is there and computed in such a way that $L$ and $U$ remain reasonably sparse (without it this is not the case in general, as indicated in the other answer). Then when $M$ is factorized in this form, it is trivial to solve a linear system with an arbitrary rhs. If $M$ is symmetric positive definite, then there is a sparse Cholesky factorization ($M = PLL^t$); if it is symmetric only, then it can be decomposed as $M = PLDL^t$, with $D$ a diagonal matrix. There are several available implementations of sparse factorization: SuperLU, CHOLMOD, MUMPS, TAUCS (depending on whether you need $LU$, $LL^t$ or $LDL^t$). There probably exist MATLAB bindings for most of them.
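Since the accepted advice boils down to "factor once, solve many times", here is a minimal sketch of that pattern in Python/SciPy; the matrix size, density, and the comments about the update loop are illustrative placeholders, not values from the question.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 5000
# Random sparse matrix with ~5 nonzeros per row, made safely nonsingular.
A = sp.random(n, n, density=5.0 / n, format="csc") + 10.0 * sp.eye(n, format="csc")
b = np.ones(n)

lu = spla.splu(A)   # sparse LU factorization, computed once
x = lu.solve(b)     # each solve is just cheap triangular substitutions

# If only b changes between iterations, keep reusing `lu`.
# If A's diagonal changes too, the factorization must be redone,
# or an iterative solver with a fixed preconditioner used instead.
```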
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8980673551559448, "perplexity": 289.9621482052491}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300934.87/warc/CC-MAIN-20220118152809-20220118182809-00233.warc.gz"}
http://sepwww.stanford.edu/sep/prof/pvi/rand/paper_html/node6.html
## Gabor's proof of the uncertainty principle

Although it is easy to verify the uncertainty principle in many special cases, it is not easy to deduce it. The difficulty begins from finding a definition of the width of a function that leads to a tractable analysis. One possible definition uses a second moment; that is, the width $\Delta t$ is defined by

$$(\Delta t)^2 = \frac{\int t^2\,|f(t)|^2\,dt}{\int |f(t)|^2\,dt} \qquad (2)$$

The spectral bandwidth $\Delta \omega$ is defined likewise. With these definitions, Dennis Gabor prepared a widely reproduced proof. I will omit his proof here; it is not an easy proof; it is widely available; and the definition (2) seems inappropriate for a function we often use, the sinc function, i.e., the FT of a step function. Since the sinc function drops off as $t^{-1}$, its width defined with (2) is infinity, which is unlike the more human measure of width, the distance to the first axis crossing.

Stanford Exploration Project 10/21/1998
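Spelling out the sinc example in one line (this calculation is implicit in the text above, not explicit on the original page):

```latex
|f(t)|^2 \sim t^{-2}
\quad\Longrightarrow\quad
(\Delta t)^2 \propto \int t^2\,|f(t)|^2\,dt \sim \int dt = \infty ,
```

even though the distance to the first axis crossing of the sinc function is finite.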
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9775689244270325, "perplexity": 1115.2396099544253}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376825916.52/warc/CC-MAIN-20181214140721-20181214162221-00483.warc.gz"}
https://www.thejournal.club/c/paper/162/
#### A Finite Semantics of Simply-Typed Lambda Terms for Infinite Runs of Automata

##### Klaus Aehlig

Model checking properties are often described by means of finite automata. Any particular such automaton divides the set of infinite trees into finitely many classes, according to which state has an infinite run. Building the full type hierarchy upon this interpretation of the base type gives a finite semantics for simply-typed lambda-trees. A calculus based on this semantics is proven sound and complete. In particular, for regular infinite lambda-trees it is decidable whether a given automaton has a run or not. As regular lambda-trees are precisely recursion schemes, this decidability result holds for arbitrary recursion schemes of arbitrary level, without any syntactical restriction.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9543836116790771, "perplexity": 1497.552905171599}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305266.34/warc/CC-MAIN-20220127133107-20220127163107-00713.warc.gz"}
http://math.stackexchange.com/questions/30914/common-algorithm-with-an-order-of-%ce%982n?answertab=oldest
# Common algorithm with an order of Θ(2^n)

What would be a common algorithm with an order of Θ(2^n)? When I say "order", I mean time complexity analysis. I was thinking exponential growth but are there any that are more computer science oriented? - Testing the satisfiability of a Boolean formula by enumerating all possible assignments? – joriki Apr 4 '11 at 16:02 Do you mean to ask for algorithms which run in $\Theta(2^n)$? (Any polynomial algorithm is $O(2^n)$.) Are you looking for best-known algorithms that run in $\Theta(2^n)$, or does any problem with a natural $\Theta(2^n)$ algorithm suffice? Do you care about additional polynomial factors? For example, the natural dynamic programming solution to the Traveling Salesman Problem runs in $\Theta(2^n \cdot n^2)$. – Zach Langley Apr 4 '11 at 16:17 any algorithm that enumerates the subsets of a set to find a subset with a certain property – joriki Apr 4 '11 at 16:22 @Trevor: Your question is ambiguous. You say $O(2^n)$. Even printf("hello world") is $O(2^n)$. Once someone points out a mistake (see Zach's comment), why don't you edit the question to correct it? Also, you are missing the model of computation. I presume you mean the word RAM model. – Aryabhata Apr 4 '11 at 17:05 @Trevor: I would say BigOh is most commonly misused in algorithm analysis. Take your question for instance. Were you looking for O(n) time algorithms? Probably not, but O(n) algorithms are also O(2^n)... – Aryabhata Apr 4 '11 at 18:48

According to Wikipedia, "Solving the traveling salesman problem using dynamic programming" is an exponential time problem. They write $2^{O(n)}$, I'm not sure it's the same as $O(2^n)$. Am I right in thinking that it is the same? - No. Try $2^{3n}$. – Did Apr 4 '11 at 16:26 @Didier Piau: right, thanks! – Matt N. Apr 4 '11 at 16:50

A typical algorithm with $\mathcal O(2^n)$ performance is the naive algorithm to calculate the Fibonacci numbers. They are defined as $$\begin{eqnarray*}F_0&=&0\\F_1&=&1\\F_n&=&F_{n-1}+F_{n-2}\end{eqnarray*}$$ - While what you stated is technically true ($\mathcal{O}(2^n)$), it is not $\Theta(2^n)$, it is $\Theta(\phi^n)$, where $\phi$ is the golden ratio. – Aryabhata Apr 4 '11 at 17:03 Hm... I'm not a mathematician. Thank you. – FUZxxl Apr 4 '11 at 17:08 I don't understand. $F_n$ grows as $\phi^n$, so we need of order $n$ digits to represent it, so we need time of order $n^2$ to carry out the $n$ additions required to get up to $F_n$ -- where does the exponential growth come from? – joriki Apr 4 '11 at 17:16 @joriki: Assuming integer addition is $O(1)$, if we do a naive recursion, then we get $\phi^n$. – Aryabhata Apr 4 '11 at 17:18 @Michael: The time complexity also satisfies $T(n) = T(n-1) + T(n-2)$. If it was $T(n) = 2T(n-1)$, I would agree with you. – Aryabhata Apr 4 '11 at 17:35

The naive algorithm for $3$-coloring takes time $2^n$, though this is not optimal (Wikipedia mentions a $1.3289^n$ algorithm). Lots of other NP-complete problems have $2^n$ algorithms (or in general $c^n$), and it is conjectured that some of them in fact require time $c^n$; if this is true then randomness doesn't help for efficient computation (BPP=P).
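A sketch of the naive Fibonacci recursion the answer has in mind (Python; illustrative only). Its running time satisfies $T(n) = T(n-1) + T(n-2) + O(1)$, which is why the call count grows as $\Theta(\phi^n)$ rather than $\Theta(2^n)$.

```python
def fib(n):
    # Naive recursion: subproblems are recomputed, so the call tree
    # has on the order of phi**n nodes (phi = golden ratio ~ 1.618).
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(30))  # already sluggish; fib(50) is impractical this way
```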
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9168278574943542, "perplexity": 758.6197263042676}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997893881.91/warc/CC-MAIN-20140722025813-00021-ip-10-33-131-23.ec2.internal.warc.gz"}
https://programmingpraxis.com/2010/01/12/
## Calculating Sines ### January 12, 2010 [ Today’s exercise was written by guest author Bill Cruise, who blogs at Bill the Lizard, where he is currently describing his adventures studying SICP. Feel free to contact me if you have an idea for an exercise, or if you wish to contribute your own exercise. ] We calculated the value of pi, and logarithms to seven digits, in two previous exercises. We continue that thread in the current exercise with a function that calculates sines. Sines were discovered by the Indian astronomer Aryabhata in the sixth century, further developed by the Persian mathematician Muhammad ibn Mūsā al-Khwārizmī (from whose name derives our modern word algorithm) in the ninth century. Sines were studied by European mathematicians Leibniz and Euler in the seventeenth and eighteenth centuries. It was Euler who coined the word “sine”, based on an earlier mis-translation (to the Latin “sinus”) of the word “jya” used by Aryabhata. The sine of an angle is the ratio of the length of the opposite side to the length of the hypotenuse in a right triangle. (You may remember the mnemonic SOHCAHTOA if you’ve ever taken a course in trigonometry.) One way to calculate the sine of an angle expressed in radians is by summing terms of the Taylor series: $\sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \ldots = \sum_{k=0}^\infty (-1)^k \frac{x^{2k+1}}{(2k+1)!}$ Another method of computing the sine comes from the triple-angle formula $\sin x = 3 \sin\frac{x}{3} - 4 \sin^3\frac{x}{3}$. Since the limit $\lim_{x \rightarrow\ 0} \frac{\sin\ x}{x} = 1$, a recursion that drives x to zero can calculate the sine of x. Your task is to write two functions to calculate the sine of an angle, one based on the Taylor series and the other based on the recursive formula. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
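One possible solution sketch in Python; the function names and the stopping tolerance are my own choices, not part of the exercise statement.

```python
import math

def sine_taylor(x, eps=1e-12):
    """Sum x - x^3/3! + x^5/5! - ... until terms are negligible."""
    term, total, k = x, 0.0, 0
    while abs(term) > eps:
        total += term
        k += 1
        term *= -x * x / ((2 * k) * (2 * k + 1))  # next Taylor term
    return total

def sine_triple(x, eps=1e-12):
    """Recurse on sin(x) = 3 sin(x/3) - 4 sin^3(x/3), driving x -> 0."""
    if abs(x) < eps:
        return x          # sin(x) ~ x near zero
    s = sine_triple(x / 3.0, eps)
    return 3.0 * s - 4.0 * s**3

print(sine_taylor(1.0), sine_triple(1.0), math.sin(1.0))
```

The triple-angle recursion only needs about $\log_3(|x|/\epsilon)$ levels, so it converges quickly.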
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 3, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8671897649765015, "perplexity": 692.1526156489316}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499468.22/warc/CC-MAIN-20230127231443-20230128021443-00338.warc.gz"}
http://math.stackexchange.com/questions/303777/finding-the-limit-of-an-expression-having-a-zero-in-the-denominator
Finding the limit of an expression having a zero in the denominator

I have a question about undefined rational expressions in calculus with zeros in the denominator. OK, how can $\lim_{x\to 2}$ of an expression that has $(x-2)$ in the denominator be undefined, when the notation $\lim_{x\to 2}$ implies $x\not= 2$? Thank you in advance.... - did you just edit my post? – codenamejupiterx Feb 14 '13 at 6:27

The way these sorts of questions crop up is when you're trying to compute $\displaystyle\lim_{x\rightarrow a}\frac{f(x)}{g(x)}$ where $\displaystyle\lim_{x\rightarrow a}g(x)=0$. Of course, if $\displaystyle\lim_{x\rightarrow a}f(x)=0$ as well, we can either reduce the expression $f(x)/g(x)$ or apply L'Hospital's rule. You're probably more interested in what to do when $\displaystyle\lim_{x\rightarrow a}f(x)\neq0$. When this is the case, the limit $\displaystyle\lim_{x\rightarrow a}\frac{f(x)}{g(x)}$ is either $\pm\infty$ or it does not exist. The way to determine the limit is by looking at the left and right-hand limits $\displaystyle\lim_{x\rightarrow a^-}\frac{f(x)}{g(x)}$ and $\displaystyle\lim_{x\rightarrow a^+}\frac{f(x)}{g(x)}$. Take $\displaystyle\lim_{x\rightarrow 2}\frac{1}{x-2}$ for example. When $x<2$ the expression $\displaystyle\frac{1}{x-2}$ is negative so we see that $\displaystyle\lim_{x\rightarrow 2^-}\frac{1}{x-2}=-\infty$. However, when $x>2$ the expression $\displaystyle\frac{1}{x-2}$ is positive so $\displaystyle\lim_{x\rightarrow 2^+}\frac{1}{x-2}=\infty$. The left-hand and right-hand limits do not agree, so the limit does not exist! This is apparent from the graph of $\displaystyle\frac{1}{x-2}$. If we were to alter the problem by looking instead at $\displaystyle\lim_{x\rightarrow 2}\frac{1}{\left|x-2\right|}$ we would see that both the left-hand and right-hand limits are $\infty$, so the original limit is infinity. -

The notation $\lim_{x\to2}$ does not mean "$x$ could equal $2.1$ or $1.9$." The notation $\lim_{x\to2}f(x)=L$ means for every positive $\epsilon$ there is a number $\delta$ such that if $0<|x-2|\lt\delta$ then $|f(x)-L|\lt\epsilon$. If you are not careful with definitions --- if you don't state them properly, and understand them fully --- you are up the creek without a paddle. - This answer referred to something in the original version of the question, before the edit by Zilliput. – Gerry Myerson Aug 1 at 3:49

If $$L=\lim_{x\to 2}f(x),$$ where $f(x)$ has $(x-2)$ as a factor in the denominator, then $L$ may or may not be defined. For example, $$L_1=\lim_{x\to 2}\frac{x^2-4}{x-2}=\lim_{x\to 2}\frac{(x-2)(x+2)}{x-2}=\lim_{x\to 2}(x+2)=4.$$ However, $$L_2=\lim_{x\to 2}\frac{1}{x-2}$$ is undefined. - Ok Thanks!!!!!! – codenamejupiterx Feb 14 '13 at 6:17
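A quick check of these one-sided limits with SymPy (an illustrative snippet of mine, not from the original thread):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.limit(1 / (x - 2), x, 2, dir='+'))    # oo
print(sp.limit(1 / (x - 2), x, 2, dir='-'))    # -oo
print(sp.limit((x**2 - 4) / (x - 2), x, 2))    # 4
```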
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9825085401535034, "perplexity": 162.9817543786136}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663167.8/warc/CC-MAIN-20140930004103-00196-ip-10-234-18-248.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/em-cp-violation-why-not.792442/
EM CP-Violation, why not?

• #1 Gold Member 3,331 438 Why can't there be a term in the SM lagrangian for the U(1)_Y of the form: $F_{\mu \nu} \tilde{F}^{\mu \nu}$? As there is for the strong interactions? (Although I've seen such terms appearing in axion models, such as the KSVZ model, where by introducing an additional very heavy quark Q with charge $e_Q$, you can have the coupling of the axion field $a$ with light quarks via the EM anomalies: $L_{EM-anom} = \frac{a}{f_a} 3 e_Q^2 \frac{\alpha_{fine-str}}{4 \pi} F_{\mu \nu} \tilde{F}^{\mu \nu}$)

• #2 Orodruin Staff Emeritus Homework Helper Gold Member 16,691 6,466 For an abelian theory, $F^{\mu\nu}\tilde F_{\mu\nu}$ is a total derivative: $$\partial_\mu \epsilon^{\mu\nu\rho\sigma} A_\nu \partial_\rho A_\sigma = \epsilon^{\mu\nu\rho\sigma} [(\partial_\mu A_\nu)(\partial_\rho A_\sigma) + A_\nu \partial_\mu \partial_\rho A_\sigma].$$ The last term disappears due to the derivatives commuting and $\epsilon$ being antisymmetric. The first term is proportional to $F^{\mu\nu}\tilde F_{\mu\nu}$.

• #3 Gold Member 3,331 438 The problem is not about total derivatives, because even in the strong CP-problem, the $\bar{\theta}$ term $G \tilde{G}$ is a total derivative/can be expressed as such. 't Hooft however showed that this total-derivative integral doesn't vanish for every gauge... so I guess it has to do with the gauge transformations in some way...

• #4 Haelfix 1,950 212 It's perfectly legitimate to worry about a CP violating theta term for, say, the electroweak theory, with the different group structure, although it turns out that such a term is unobservable, and can be rotated away by a suitable chiral transformation. However for the Abelian theory, post 2 is essentially all there is to it.

• #5 Gold Member 3,331 438 I will try to write it down in maths. In QCD, a resolution to the $U_A(1)$ problem is provided by the chiral anomaly for axial currents. The axial current associated with the $U_A(1)$ gets quantum corrections from the triangle graph which connects it to two gluon fields with quarks going around the loop. This anomaly gives a non-zero divergence of the axial current: $\partial_\mu J^\mu_5 = \frac{g_s^2 N}{32 \pi^2} G^{\mu \nu}_a \tilde{G}_{a \mu \nu} \ne 0$ This chiral anomaly affects the action: $\delta Z \propto \int d^4 x \,\partial_\mu J^\mu_5 = \frac{g_s^2 N}{32 \pi^2} \int d^4 x \,G^{\mu \nu}_a \tilde{G}_{a \mu \nu}$ And it can be further shown that $G \tilde{G}$ can be expressed in terms of a total divergence (just like the QED field strength tensors), $G^{\mu \nu}_a \tilde{G}_{a \mu \nu}= \partial_\mu K^\mu$ with $K^\mu = \epsilon^{\mu \rho \sigma \omega} A_{a \rho} [ G_{a \sigma \omega} - \frac{g_s}{3} f_{abc} A_{\sigma b} A_{\omega c} ]$ The problem then comes when you insert this in the action integral above and you reach: $\delta Z \propto \frac{g_s^2 N}{32 \pi^2} \int d\sigma_\mu K^\mu \ne 0$ The last was shown by 't Hooft, because the right boundary condition to use is that $A$ is a pure gauge field at spatial infinity: either A=0 or a gauge transformation of 0... Now what's the difference with the same thing you can obtain for the action in QED?
@Orodruin, in his post, showed exactly that $F \tilde{F} = c\, \partial_\mu T^\mu$. So in the action, you will have contributions of the form: $\delta Z' \propto \int d^4 x \,\partial_\mu T^\mu = \int d \sigma_\mu T^\mu$ Why in this case is the boundary value at infinity taken to be T=0 and not a general gauge transformation of T: $T' = T + \partial a$, i.e., a gauge transformation of 0? I hope I made my problem clear. Thanks...

• #6 samalkhaiat 1,657 875 If you do the integral $\int d \sigma^{ \mu } K_{ \mu }$ for $SU(2)$, the calculation will tell you why it vanishes for $U(1)$.

• #7 Haelfix 1,950 212 I will try to write it down in maths. I hope I made my problem clear. Thanks... Good! So my claim is that the surface term in the Abelian theory vanishes. To see this, you can try direct computation, you can show it by asymptotic analysis, or you can be really clever and argue it away by topological arguments. I will give you a hint on how to do it the second way. Note that to keep the action finite, we require that the (F Fbar) term decreases faster than O(1/r^2) where we set our boundary conditions to be the (euclidean) hypersphere as the radius r goes off to infinity. Show that this means that the total derivative goes as O(1/r^5) and that therefore the surface term vanishes.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.946792483329773, "perplexity": 742.276785589604}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371806302.78/warc/CC-MAIN-20200407214925-20200408005425-00330.warc.gz"}
https://arxiv.org/abs/1211.3500
cs.NA

# Title: Accelerated Canonical Polyadic Decomposition by Using Mode Reduction

Abstract: Canonical Polyadic (or CANDECOMP/PARAFAC, CP) decompositions (CPD) are widely applied to analyze high order tensors. Existing CPD methods use alternating least squares (ALS) iterations and hence need to unfold tensors to each of the $N$ modes frequently, which is one major efficiency bottleneck for large-scale data, especially when $N$ is large. To overcome this problem, in this paper we propose a new CPD method which first converts the original $N$th ($N>3$) order tensor to a 3rd-order tensor. Then the full CPD is realized by decomposing this mode-reduced tensor followed by a Khatri-Rao product projection procedure. This approach is quite efficient, as unfolding to each of the $N$ modes is avoided, and dimensionality reduction can also be easily incorporated to further improve the efficiency. We show that, under mild conditions, any $N$th-order CPD can be converted into a 3rd-order case without destroying the essential uniqueness, and that it theoretically gives the same results as direct $N$-way CPD methods. Simulations show that, compared with state-of-the-art CPD methods, the proposed method is more efficient and escapes from local solutions more easily.

Comments: 12 pages. Accepted by TNNLS Subjects: Numerical Analysis (cs.NA); Learning (cs.LG); Numerical Analysis (math.NA) DOI: 10.1109/TNNLS.2013.2271507 Cite as: arXiv:1211.3500 [cs.NA] (or arXiv:1211.3500v2 [cs.NA] for this version)

## Submission history

From: Guoxu Zhou [view email] [v1] Thu, 15 Nov 2012 05:50:30 GMT (435kb,D) [v2] Tue, 25 Jun 2013 03:06:52 GMT (820kb,D)
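The mode-reduction step rests on the fact that grouping tensor modes is just a reshape; a toy NumPy illustration follows. The particular grouping {1}, {2,3}, {4,5} is my own example, not necessarily the grouping used in the paper.

```python
import numpy as np

# A 5th-order tensor...
T = np.random.rand(4, 5, 6, 7, 8)

# ...viewed as 3rd-order by merging modes 2-3 and 4-5. If T has a rank-R
# CP decomposition with factors A1..A5, the merged modes correspond (up
# to index-ordering conventions) to Khatri-Rao products of the grouped
# factors, which is why the CP structure can be recovered afterwards.
T3 = T.reshape(4, 5 * 6, 7 * 8)
print(T3.shape)  # (4, 30, 56)
```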
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.826540470123291, "perplexity": 2329.0678315943437}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948581053.56/warc/CC-MAIN-20171216030243-20171216052243-00509.warc.gz"}
http://mattleifer.info/2006/02/03/what-is-the-point-of-quantum-foundations/comment-page-1/
# What is the point of Quantum Foundations?

A couple of months ago, there was an interesting debate in the Quantum Foundations group here at PI, with the above title. Unfortunately, I missed it, but it is an interesting question given that QF is becoming increasingly popular amongst young physicists, whilst remaining a relatively obscure and controversial subject in most of the mainstream physics community. Here are four possible answers to the question:

1. The goal of QF is to correctly predict the result of an experiment for which the standard approach to QM gives the wrong result. That is, we are in the business of providing alternative theories that will eventually supersede QM. Work on things like spontaneous collapse models or nonlinear modifications to the Schroedinger equation falls into this category.
2. The goal of QF is not to contradict QM within its domain of applicability, but it should suggest possible alternative approaches in cases where we are currently uncertain how to go about applying quantum theory. The archetypal example of this is quantum gravity, although to be fair it is more common to hear foundations people give this response than to find them actually working on it. Notable exceptions are the work of Gell-Mann, Hartle, Isham and collaborators, which draws on the Consistent Histories formalism, and the recent work of Lucien Hardy.
3. The goal of QF is not to contradict QM at all, but it should suggest a variety of different ways to conceptualize the subject, suggesting new possible experiments and theory that would have been difficult to imagine without considerable insight from QF. The main example we have of this is the field of quantum information. David Deutsch arrived at quantum computing by thinking about the many-worlds interpretation and Schumacher compression bears some similarity to the frequentist justifications of the quantum probability rule that began in Everett's thesis. More recently, the Bayesian viewpoint of Caves, Fuchs, Schack, et al. leads to new ways of doing quantum tomography and new variants of the quantum de Finetti theorem, which have applications in quantum cryptography.
4. The goal of QF is not to bother mainstream physics at all, but to come up with the most consistent and reasonable interpretation of QM possible, involving minimal unverifiable assumptions about the nature of reality.

In my view, all four points of view can be justified. However, I think it is very useful to spell out exactly what we are up to to the rest of the world. A large portion of the physics community is skeptical about QF and, in my opinion, this is probably because they think we are all doing 1 or 4. If this were the case, I think I would agree with them, since QM has withstood a vast array of experimental tests and most of the alternatives suggested under category 1 seem contrived at best to me. Also, it is difficult to see what 4 could ever contribute to the rest of physics. It is also a problem that is better left to philosophers, since they are better qualified to tackle it. To me, 2 and 3 seem like the most promising avenues of research for physicists who are interested in the field.

### 6 responses to "What is the point of Quantum Foundations?"

1. Michael B. Heaney Hi Matt, You say "The goal of QF is to correctly predict the result of an experiment for which the standard approach to QM gives the wrong result." Can you please give specific examples of experiments where the standard approach to QM gives the wrong results? Thanks, Michael 2.
I can't be 100% sure what I meant when I wrote this seven years ago, but I'll give it a shot. Firstly, you are slightly misquoting me, since I offered that as only one of four possibilities for the goal of QF and one that I considered the least valuable at that. Obviously, QM is enormously successful and there are no experiments that have been performed so far that contradict it. That is not what I meant. Instead, the idea is that QM will fail in experiments that are only slightly different from what we have done so far. For example, spontaneous collapse theories predict that there is a limit to the extent that we can maintain coherent superpositions of a large number of particles in two spatially separated locations. This limit is supposed to be fundamental and not due to environmental decoherence. Existing experiments with macroscopic superpositions, such as SQUID rings and BECs, don't contradict this because they do not involve a significant difference in position of the terms in the superposition, and the collapse mechanism is supposed to depend on this. However, future experiments with mechanical oscillators designed to test Penrose's ideas would test this. There is also a slightly bizarre suggestion due to Adrian Kent that Bell experiments might fail to violate a Bell inequality if the outcomes of the measurement are coupled to a difference in position of very massive objects and this is done quickly enough that a signal could not travel to the other wing of the experiment before the mass has been moved. This is based on a loophole in Bell's theorem to do with the idea that collapse might not occur until the results of the measurements are brought together and compared. A related suggestion due to Scarani and Suarez is that Bell violation will fail if the experiment is done with moving detectors such that the measurement of the other particle happens first according to the frames in which both of the measurement devices are moving, i.e. neither Alice nor Bob believes they are making the first measurement according to their own frames. This is based on the rather naive way of talking about collapse that we often use in which we say that Alice's measurement causes the collapse at Bob's side, or vice versa. Even if this is the case, I find the idea rather implausible because there is no reason why collapse should occur in the frame of the measuring device as opposed to some other natural frame like the frame of the particle itself. One can find several similar types of suggestion in the literature. As far as I am concerned they are all highly implausible, although they will lead to testing of quantum predictions in situations in which they have not been tested so far, which is a good thing. Such experiments may turn out to be technologically useful. However, if this is what most physicists think we are doing then I am unsurprised that Lubos Motl calls everyone who works on quantum foundations an "anti-quantum zealot". As I said in the post, I find goals 2 and 3 more promising. 3. Michael B. Heaney Thank you for the detailed answer. You say "Obviously, QM is enormously successful and there are no experiments that have been performed so far that contradict it." But Peres gives an example where the conventional interpretation of QM gives a wrong retrodiction [Peres, Asher. "Time asymmetry in quantum mechanics: a retrodiction paradox." Physics Letters A 194, no. 1 (1994): 21-25]. Penrose gives another example [Penrose, The Road to Reality, pp. 819-823].
Dyson gives several more examples [Dyson, Freeman J. "Thought-experiments in honor of John Archibald Wheeler." Science and Ultimate Reality (2004): 72-89.] Do you have rebuttals? 4. I don't have access to the book in which Dyson's paper appears, so I can't address his arguments specifically. The Peres and Penrose arguments are examples of ambiguities to do with how to use the quantum formalism retrodictively. They are not examples of experiments that contradict quantum theory because the conventional formalism of quantum theory is designed to only be used predictively. You are supposed to evolve quantum states forward in time and apply the Born rule, projection postulate etc. to obtain classical probabilities. Once you have those classical probabilities, you can use the rules of classical probabilistic inference, such as Bayes' theorem, to obtain retrodictive probabilities or any other kind of conditional inferences that you like. The results of all such inferences are in agreement with current experiments. I don't think that Peres or Penrose are disputing this. The question they are addressing is whether there is an appropriate way to use the quantum formalism itself retrodictively, rather than first computing the classical probabilities and then inverting them. Peres' argument seems designed as an argument against a realist reading of the two-state vector formalism of Aharonov et. al. On this point I agree with him. I think you quickly get into problems if you think that the results obtained in a pre- and post-selected experiment are somehow already "real" in between the pre- and post-selection. However, he is not saying that the probabilities obtained from quantum theory in those experiments are wrong. Penrose is trying to make an argument about the lack of time symmetry in the measurement process by arguing that if you apply the same reasoning backwards in time that we usually use in the forwards direction then you get an incorrect result. Again, the conventional formalism only mandates inferences forwards in time, so this is not actually a contradiction between quantum theory and experiment. Nevertheless, I believe that Penrose's argument is wrong because he has failed to correctly describe the time-reverse of the experiment under consideration. If you run the experiment back in time then there has to be a possibility for the photon to come from two places in superposition: the ceiling or the detector. These two components then interfere at the beamsplitter resulting in a single beam going back to the laser, so you get that the photon came from the laser with probability 1, as you should. It is a question of asymmetry between the events that you choose to condition on in the forward and reverse versions of the experiment. Even in classical physics, the issue of how to correctly time-reverse an experiment is subtle. You need to carefully ensure that you impose the correct time-reversed boundary conditions in addition to the time-reversed dynamics. It is easy to introduce apparent asymmetries by hand without noticing that you are doing it, and even the greatest minds of physics have fallen into this trap on occasion. Huw Price essentially wrote a whole book about this, which I recommend. Although retrodictive formalisms go beyond the conventional understanding of quantum theory, I believe they are useful and, when done properly, do not contradict quantum theory. For my take on how to do this, see this paper. However, I would put this type of work definitively in categories 2 and 3.
It is not an attempt to refute quantum theory, but an attempt to reformulate it in such a way that certain aspects of the theory, including the time-symmetry between prediction and retrodiction, become more clear.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8254905939102173, "perplexity": 377.9094746137178}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928754.15/warc/CC-MAIN-20150521113208-00022-ip-10-180-206-219.ec2.internal.warc.gz"}
https://web2.0calc.com/questions/help-please_69238
+0 0 105 1 Solution A is an 80% acid solution. Solution B is a 30% acid solution. (a) Find the amount of Solution A (in mL) that must be added to 500 mL of Solution B in order to produce a 70% acid solution. (b) Find the amount of Solution A and Solution B (in mL) that can be combined in order to form a 100 mL solution that is 50% acid. (c) Does there exist a combination of Solution A and Solution B that is 90% acid? Aug 23, 2020

#1 +130 0 THIS IS THE AOPS ANSWER SO IT'S NOT FROM ME!!!

(a) Let $x$ be the number of mL of Solution A that is added. So, the total amount of combined solution is $x + 500$ mL. Since Solution B is 30% acid, the 500 mL of Solution B has (0.3)(500) = 150 mL of acid. Since Solution A is 80% acid, the $x$ mL of Solution A has $0.8x$ mL of acid. Therefore, the combined solution has $0.8x + 150$ mL of acid. This must be 70% of the whole solution, so we have $0.8x + 150 = 0.7(x + 500)$. Expanding the right-hand side gives $0.8x + 150 = 0.7x + 350$, so $0.1x = 200$ and $x = 200/0.1 = 2000$. Therefore, we must add $$\boxed{2000 \text{ mL}}$$ of Solution A. Note that we need four times as much of Solution A as Solution B. Could we have figured that out just by comparing the percentages of acid in the two original solutions to the desired percentage of acid in the final solution?

(b) Let $x$ and $y$ be the desired amounts of Solution A and B, respectively, in mL. Since we want a total of 100 mL, $x+y=100$. Solution A contributes $0.8x$ mL of acid, and Solution B contributes $0.3y$ mL of acid, so $0.8x + 0.3y = 50$. Multiplying the equation $x+y=100$ by 0.3, we get $0.3x + 0.3y = 30$. Subtracting this equation from the equation $0.8x + 0.3y = 50$, we get $0.5x = 20$, so $x = 40$. Then $y = 100 - x = 60$. Therefore, we need to combine $$\boxed{40 \text{ mL}}$$ of Solution A and $$\boxed{60 \text{ mL}}$$ of Solution B.

(c) To see if we can create a combination of Solution A and Solution B that is 90% acid, we proceed the same way as in part (a). Let $V$ be the volume of solution that is 90% acid, and let $x$ and $y$ be the desired amounts of Solution A and B, respectively, all in mL. Then $x + y = V$, and as in part (a), Solution A contributes $0.8x$ mL of acid, and Solution B contributes $0.3y$ mL of acid, so $0.8x + 0.3y = 0.9V$. Multiplying the equation $x + y = V$ by 0.3, we get $0.3x + 0.3y = 0.3V$. Subtracting this equation from the equation $0.8x + 0.3y = 0.9V$, we get $0.5x = 0.6V$, so $x = 1.2V$. Then $y = V - x = -0.2V$. Since $y$ is negative, there is $$\boxed{\text{no combination}}$$ of Solution A and Solution B that is 90% acid. Aug 23, 2020
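A quick numeric check of parts (a) and (b) (illustrative Python of mine, not part of the thread):

```python
import numpy as np

# (a): 0.8x + 150 = 0.7(x + 500)  =>  0.1x = 200
print(200 / 0.1)  # 2000.0 mL of Solution A

# (b): x + y = 100 and 0.8x + 0.3y = 50
A = np.array([[1.0, 1.0],
              [0.8, 0.3]])
b = np.array([100.0, 50.0])
print(np.linalg.solve(A, b))  # [40. 60.]
```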
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9526341557502747, "perplexity": 986.0924336610573}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141735395.99/warc/CC-MAIN-20201204071014-20201204101014-00690.warc.gz"}
http://www.chegg.com/homework-help/questions-and-answers/car-traveling-around-horizontal-circular-track-radius-r-1800-m-shown-takes-car-t-670-s-go--q2952794
A car is traveling around a horizontal circular track with radius r = 180.0 m as shown. It takes the car t = 67.0 s to go around the track once. The angle θA = 21.0° above the x axis, and the angle θB = 58.0° below the x axis. 1) What is the magnitude of the car's acceleration? m/s2 1.58 Computed value: 1.58 Submitted: Monday, October 1 at 1:21 AM Feedback: Correct! 2) What is the x component of the car's velocity when it is at point A m/s -6.04 Computed value: -6.04 Submitted: Monday, October 1 at 1:27 AM Feedback: Correct! 3) What is the y component of the car's velocity when it is at point A m/s 15.75 Computed value: 15.75 Submitted: Monday, October 1 at 1:28 AM Feedback: Correct! 4) What is the x component of the car's acceleration when it is at point B m/s2 .021249 Computed value: .021249 Submitted: Monday, October 1 at 1:52 AM Feedback: 5) What is the y component of the car's acceleration when it is at point B m/s2
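The confirmed answers can be reproduced with a few lines of Python. The sign conventions below assume counterclockwise travel with A at +21° and B at -58° on the circle; those assumptions are inferred from the accepted answers, not from the (missing) figure. Note that the value submitted for part 4 received no "Correct!" feedback, and this sketch suggests a different answer.

```python
import math

r, t = 180.0, 67.0
v = 2 * math.pi * r / t      # speed ~ 16.88 m/s
a = v**2 / r                 # centripetal magnitude ~ 1.58 m/s^2

thA, thB = math.radians(21.0), math.radians(-58.0)

# Velocity at A is tangent to the circle (counterclockwise motion):
vx_A, vy_A = -v * math.sin(thA), v * math.cos(thA)   # ~ -6.04, 15.75

# Acceleration at B points from B toward the center:
ax_B, ay_B = -a * math.cos(thB), -a * math.sin(thB)  # ~ -0.84, +1.34
print(v, a, vx_A, vy_A, ax_B, ay_B)
```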
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.827472448348999, "perplexity": 1463.7270152386748}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802765584.21/warc/CC-MAIN-20141217075245-00032-ip-10-231-17-201.ec2.internal.warc.gz"}
https://www.arxiv-vanity.com/papers/cond-mat/9309016/
Anomalous Fluctuations of Directed Polymers in Random Media

Terence Hwa [*] and Daniel S. Fisher

Lyman Laboratory of Physics, Harvard University, Cambridge, MA 02138

Abstract

A systematic analysis of large scale fluctuations in the low temperature pinned phase of a directed polymer in a random potential is described. These fluctuations come from rare regions with nearly degenerate "ground states". The probability distribution of their sizes is found to have a power law tail. The rare regions in the tail dominate much of the physics. The analysis presented here takes advantage of the mapping to the noisy-Burgers' equation. It complements a phenomenological description of glassy phases based on a scaling picture of droplet excitations and a recent variational approach with "broken replica symmetry". It is argued that the power law distribution of large thermally active excitations is a consequence of the continuous statistical "tilt" symmetry of the directed polymer, the breaking of which gives rise to the large active excitations in a manner analogous to the appearance of Goldstone modes in pure systems with a broken continuous symmetry.

pacs: 05.50, 75.10N, 74.60G

I Introduction

The statistical mechanics of directed polymers in random media has attracted much attention in recent years [2, 3, 4, 5, 6]. This problem and related problems of higher dimensional manifolds are encountered in a variety of contexts, ranging from the fluctuations of domain walls in random magnets [2, 7, 8], to the dynamics of magnetic flux lines in dirty superconductors [9, 10]. In addition, the randomly-pinned directed polymer is one of the simplest models that contain many of the essential features of strongly frustrated random systems such as spin-glasses [11]. Understanding the behavior of the directed polymer is therefore important for developing intuition and testing theoretical ideas for more complicated random systems. Over the years, a variety of methods have been used to study random directed polymers. These include mapping [3, 4] to a hydrodynamic system: the noise-driven Burgers equation [12, 13], the exact solution on a Cayley tree [14], Migdal-Kadanoff approximate renormalization group calculations [15], a Bethe ansatz solution in 1+1 dimensions using replicas [16, 17], a gaussian variational ansatz in replica space [18], and finally, renormalization group arguments and phenomenology [19] in the spirit of the droplet (or scaling) theory of spin-glasses [20]. There have also been substantial numerical simulations; some recent studies can be found in Refs. [19, 21, 22, 23]. The qualitative phase diagram of the directed polymer is found to be quite simple: There is always a pinned phase dominated by disorder at low temperatures. For polymers in $1+d$ dimensions, only the pinned phase exists for $d \le 2$. But for $d > 2$, the polymer can undergo a continuous transition to a high-temperature phase where the disorder is irrelevant, as has been proved rigorously [24]. Until recently, most of the efforts have been focused on characterizing the scaling properties of the polymer displacements and free energy fluctuations in the pinned phase. With the exception of 1+1 dimensions for which the scaling exponents can be computed exactly, systematic and analytic computations of the exponents at the zero temperature fixed point that controls the pinned phase have not been possible so far.
Some of the other properties of the glassy pinned phase beyond the scaling exponents were explored numerically by Zhang [25], and more recently by Mézard [21]. These authors find very sensitive dependence of the polymer's low temperature configuration on the details of the particular random medium. Such sensitivity is associated with rare but singular dependence on the details of the random potential of the system. This type of behavior, including sensitivity to small temperature changes, has been predicted by Fisher and Huse [19] by phenomenological scaling and renormalization group arguments. As argued in Ref. [19] (and supported by numerical simulations in [19, 21, 25]), the physics of the low temperature phase is dominated by large scale, low energy excitations of rare regions, analogous to the droplet excitations proposed for Ising spin glasses [20]. Although the picture developed there is physically appealing and relatively complete, the phenomenological approach of Ref. [19] does not provide a systematic or quantitative way of calculating the properties of the pinned phase. On the other hand, various uncontrolled approximations can provide quantitative information. In particular, the Migdal-Kadanoff calculations of Derrida and Griffiths [15] could be used to study the properties predicted in Ref. [19] although this has not been carried out in detail. Mézard and Parisi [18] have recently proposed a very different approach: a variational method in replica space which can be used to study various aspects of the pinned phase. However, the method is limited by the gaussian Ansatz used whose physical significance is not clear, and the analysis based on replicas is haunted by the usual problem of the interchange of the $n \to 0$ and the thermodynamic limits. This is particularly problematic because within the gaussian Ansatz, one finds that the solution selected requires broken replica symmetry, the physical interpretation of which (if any) is unclear. (Note that the correct scaling exponents were obtained in 1+1 dimensions by using the Bethe Ansatz in replica space without breaking replica symmetry [16].)

In this paper, we explore the properties of the pinned phase of random directed polymers by using a more conventional approach based on a field-theoretic description without replicas, and the statistical symmetries of the problem. We will see that the existence of scaling forms for long wavelength, low frequency correlators of the noisy-Burgers' equation implies, without significant additional assumptions, the existence of rare large scale, low energy "droplet" excitations with a power-law distribution of their sizes. Although the large thermally active excitations are very rare, they dominate many thermodynamic properties and average correlation and response functions, as well as causing large variations in the properties of macroscopic systems. We therefore see that many of the properties of the pinned phase predicted by Fisher and Huse [19] and found in other uncontrolled approximations can be recovered from the existence of a fixed point which, in the hydrodynamic language, is rather conventional. A fundamental lesson from this is that the broken continuous statistical symmetry of the pinned phase of the directed polymer gives rise, quite generally, to a power law distribution of large, rare, low energy excitations; these are the analog for broken continuous statistical symmetries of the Goldstone modes associated with true broken continuous symmetries!
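For orientation, the mapping to the noisy-Burgers' equation invoked above can be summarized as follows; this is a standard textbook sketch written in terms of the restricted partition function $Z(\vec{x},t)$ defined in Section II below, not a quotation from the paper.

```latex
\partial_t Z = \frac{T}{2\kappa}\,\nabla^2 Z - \frac{1}{T}\,\eta(\vec{x},t)\,Z ,
\qquad
F \equiv -T\ln Z ,
\qquad
\partial_t F = \frac{T}{2\kappa}\,\nabla^2 F - \frac{1}{2\kappa}\,(\nabla F)^2 + \eta ,
```

so that the "velocity" field $\vec{v} \equiv \nabla F/\kappa$ obeys the noise-driven Burgers equation.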
This paper is presented in a somewhat pedagogical fashion. It is intended to introduce the ideas of rare fluctuations as well as providing an alternative perspective for those familiar with the scaling approach to directed polymers and spin glasses. The paper is organized as follows: In Section II, we define the directed polymer problem and some of the glassy properties of the pinned phase. We motivate the considerations of almost degenerate ground states that give rise to large scale, low energy excitations, and relate their statistics to the distribution functions of the end point of a polymer. The distribution functions are computed in Section III by using a free-energy functional (described in Appendices A and B) and by exploiting the statistical symmetries (Appendix C). The results are interpreted in terms of the large, rare fluctuations in the pinned phase, with a short discussion contained in Section IV. The results of this study are then compared with the variational approach of Ref. [18] and the phenomenological approach of Ref. [19]. We conclude that the various approaches point to the same physics governing the excitations of the directed polymer at low temperatures — the physics of large, rare fluctuations.

II Properties of the Pinned Phase

II.1 The Model

We consider a directed polymer of length $t$ (which will later be convenient to consider as "time") embedded in $1+d$ dimensions. Let the position vector $\vec{\xi}(z)$, with $0 \le z \le t$, describe the path of the polymer in the $d$ transverse dimensions. Then the statistical mechanics of this polymer is determined by the Hamiltonian

$$H[\vec{\xi},\eta]=\int_0^t dz\left[\frac{\kappa}{2}\left(\frac{d\vec{\xi}}{dz}\right)^2+\eta(\vec{\xi}(z),z)\right], \qquad (1)$$

where $\kappa$ is the line tension, and $\eta$ is a quenched random potential of the medium through which the polymer passes. The random potential can be taken to be uncorrelated and gaussian distributed, with $\overline{\eta}=0$ and

$$\overline{\eta(\vec{x},z)\,\eta(\vec{x}\,',z')}=2D\,\delta^d(\vec{x}-\vec{x}\,')\,\delta(z-z'). \qquad (2)$$

Here, the overbar denotes averages over $\eta$, and $D$ characterizes the strength of the random potential. Note that a cutoff on the spatial delta function is necessary for $d>1$. In order to make our subsequent discussions precise, it is useful to fix one end of the polymer at an arbitrary point, say at $(\vec{x},t)$. This is implemented by introducing the one-point restricted partition function,

$$Z(\vec{x},t)=\int \mathcal{D}[\vec{\xi}]\,\delta^d(\vec{\xi}(t)-\vec{x})\,e^{-H[\vec{\xi},\eta]/T}, \qquad (3)$$

where the arguments of $Z$ give the coordinate of the fixed end point. Unless otherwise indicated, thermal averages will be performed using this restricted partition function, and will be denoted by $\langle\cdots\rangle$. The partition function is, of course, a random variable because of its dependence on the realization of the random potential $\eta$. For a particular realization of $\eta$, the Hamiltonian has no symmetries. However, because of the translational invariance in both $\vec{x}$ and $z$ of the statistics of $\eta$, the distribution of $Z$ has statistical translational symmetry in $\vec{x}$ and $z$. Thus the statistics of $\vec{\xi}$ and other properties will also be translationally invariant. As we shall see, this symmetry and the closely related "tilt" symmetry — which corresponds to a weakly $z$-dependent translation in $\vec{x}$ — have dramatic consequences. One of the simplest ways to characterize the configuration of the polymer is the one-point distribution function:

$$P_1(\vec{y},t_0|\vec{x},t)\equiv\langle\delta^d(\vec{\xi}(t_0)-\vec{y})\rangle. \qquad (4)$$

This function is the conditional probability of finding the segment at $z=t_0$ of the polymer at position $\vec{y}$, given that the fixed end is at $(\vec{x},t)$. (We use "$|$" to separate the positions of the free and fixed ends of the polymer, as in conditional probability.)
In the absence of disorder, this distribution is easily calculated, yielding the usual random-walk result
$$G^{(0)}(\vec{x}-\vec{y},t-t_0)\equiv P_1^{(0)}(\vec{y},t_0|\vec{x},t)=\left(\frac{\kappa}{2\pi T(t-t_0)}\right)^{d/2}\exp\left[-\frac{\kappa}{2T}\,\frac{(\vec{x}-\vec{y})^{2}}{t-t_0}\right], \qquad (5)$$
where the superscript $(0)$ denotes the pure system with $D=0$. The second moment of this distribution gives the mean transverse displacement,
$$\langle|\vec{X}(t-t_0)|^{2}\rangle^{(0)}\equiv\langle|\vec{\xi}(t)-\vec{\xi}(t_0)|^{2}\rangle^{(0)}\equiv\int d^{d}\vec{r}\;\vec{r}^{\,2}\,G^{(0)}(\vec{r},t-t_0)=\frac{Td}{\kappa}(t-t_0). \qquad (6)$$
The existence of the random potential tends to increase the transverse wandering of the polymer, as it tries to take advantage of favorable regions of the random potential at low temperatures. After averaging over the disorder, translational symmetry in space is restored, and we have $\overline{P_1(\vec{y},t_0|\vec{x},t)}=G_t(\vec{x}-\vec{y},t-t_0)$. Note that a subscript $t$ is used to indicate the explicit dependence of $G_t$ on the polymer length; we have not put this subscript on $P_1$ since it already has the explicit $t$ dependence. Numerically, it is found that
$$\overline{\langle|\vec{X}_t(\tau)|^{2}\rangle}\equiv\int d^{d}\vec{r}\;\vec{r}^{\,2}\,G_t(\vec{r},\tau)=\tau^{2\zeta}f(\tau/t), \qquad (7)$$
with $f$ being a smooth scaling function which is finite throughout the range $0\leq\tau/t\leq 1$, and the wandering (or roughness) exponent $\zeta>1/2$, depending on the dimensionality [26]. In 1+1 dimensions, the exponent $\zeta=2/3$ has been obtained exactly by a number of methods [3, 16]. The power law dependence of the transverse displacement reflects the lack of intrinsic scales. This is possible because of the statistical translational symmetry of the Hamiltonian (1) as discussed above. In dimensions $d>2$, there is a high temperature or weak randomness phase in which $\zeta=1/2$, in addition to the low temperature pinned phase on which we shall focus. It has been conjectured [19] that $\zeta>1/2$ for all finite $d$ in the pinned phase, although this is controversial. The detailed form of the disorder-averaged end point function $G_t$ is much harder to obtain than the scaling exponent, even numerically [22, 23]. It is expected to have the general form
$$G_t(\vec{r},\tau)\approx\tau^{-d\zeta}\,\tilde{g}_{t/\tau}(r/\tau^{\zeta}) \qquad (8)$$
by simple scaling and normalization requirements, with the scaling function $\tilde{g}$ depending only weakly on $t/\tau$ but decaying rapidly for $r\gg\tau^{\zeta}$. Two limits of particular interest are
$$G(\vec{r},\tau)\equiv\lim_{t/\tau\to\infty}G_t(\vec{r},\tau)\approx\tau^{-d\zeta}\,\tilde{g}_{\infty}(r/\tau^{\zeta}), \qquad (9)$$
which describes the one-point distribution of a semi-infinite polymer, and
$$\tilde{G}(\vec{r},t)\equiv G_t(\vec{r},t)\approx t^{-d\zeta}\,\tilde{g}_{1}(r/t^{\zeta}), \qquad (10)$$
which describes the distribution of position of the free end of a finite polymer. As explained in the Appendices, analytical studies of the problem are much simplified in the limit $t/\tau\to\infty$. For instance, $G$ is given to a good approximation by a self-consistent integral equation in 1+1 dimensions (see Appendix B and Ref. [34]). Numerical solution of the integral equation [34] yields the form Eq. (9), with $\tilde{g}_{\infty}$ well approximated by a gaussian, although the precise shape of the tail is not known. However, in numerical simulations, it is most convenient to study the end-point distribution $\tilde{G}$. It is found [22, 23] that the distribution function obeys the scaling form Eq. (10). A comparison between $\tilde{g}_{\infty}$ and $\tilde{g}_{1}$ shows that the scaling functions in the two limits are quite similar: both are sharply decreasing functions whose widths are of order unity. Thus we see that the semi-infinite polymer result gives the correct scaling behavior and the qualitative form of the scaling function for the end-point distribution of a finite polymer. We shall use this approximation in the following sections, where we will compute various distribution functions for a semi-infinite polymer (i.e., for $t/\tau\to\infty$) and then apply them to the end point of a finite polymer with $\tau=t$.

II.2 Ground States

The description of the directed polymer given so far is rather conventional. The form of $G_t$ in Eq.
(8) describes a generic polymer. To appreciate the glassy nature of the pinned phase, it is necessary to go beyond the description in terms of the mean $G_t$. We first note that an exponent $\zeta>1/2$ in the pinned phase implies that the energy scales involved in the pinned phase are large. From the first term in (1), we see that a displacement of order $\tau^{\zeta}$ of the free end costs a minimum elastic energy of order $\kappa\tau^{2\zeta-1}$, which grows for long polymers. Growth of the characteristic energy scale for order parameter variation with length scale in conventional systems implies that the system is in an ordered phase governed by a zero temperature fixed point. Although there is no “order parameter” in the directed polymer, the displacement plays a similar role, and growth of the energy scale for variations of $\vec{\xi}$ with length scale — here $\tau^{2\zeta-1}$ — implies that the pinned phase is controlled by a zero temperature renormalization group fixed point whose properties control the scaling of various quantities such as $G_t$ in Eq. (8). By analogy with the ordered phase in conventional statistical mechanical systems, we expect that the large scale properties of the pinned phase can be described in terms of a “ground (or equilibrium) state” or “states”, and fluctuations about or between these “states”. Thus the configuration of the polymer selected by “thermal” averaging should be the equilibrium “state” that optimizes the total free energy for a given realization of the random potential. At zero temperature, for a finite polymer with one point fixed at $(\vec{x},t)$, there will be a unique preferred path (see Fig. 1(a)), which is the ground state, i.e., the state with the lowest energy. Here, the term “state” refers to an optimal path starting from the fixed end at $(\vec{x},t)$. A well-defined thermodynamic limit exists for the state of a semi-infinite polymer if the thermal mean position and all other properties of the polymer at fixed finite $\tau$ tend to a unique limit for a specific sample as $t\to\infty$. The conjecture that this holds for almost all samples was made and supported in Ref. [19]. The above definition of state only makes sense at $T=0$ after providing short distance cutoffs in the $\vec{x}$ and $z$ directions. At small but finite temperatures, thermal fluctuations wash out the effect of disorder at short length scales, and the polymer does not “feel” the random potential until the transverse and longitudinal scales $a_\xi(T)$ and $a_z(T)$ [19]. For example, in 1+1 dimensions we have
$$a_\xi(T)=T^{3}/\kappa D. \qquad (11)$$
We can then study the “states” of the polymer coarse-grained on the scales $a_\xi$ and $a_z$. A natural conjecture is that at finite temperature the equilibrium state is still unique. What does this mean? If the one-point distribution function $P_1(\vec{y},t_0|\vec{x},t)$ for a typical sample has the general behavior sketched in Fig. 1(b) (solid line), i.e., a sharply-peaked function of $\vec{y}$ centered about some position, with a width of order $a_\xi$, then this, together with similar behavior for all $t_0$, implies that there is a well-defined “state” at long length scales at finite temperature. As we shall see, this simple picture is roughly correct but with subtle modifications of the meaning of “typical” which crucially affect the physics. Note that the equilibrium state is the minimum of the coarse-grained Hamiltonian which includes effects of the entropy of small scale fluctuations. Thus, in general, as shown in Ref. [19], the equilibrium states for different temperatures of the same sample will be very different on long length scales. The peak in $P_1$ (solid line in Fig.
1(b)) should be located within a transverse distance of order $\tau^{\zeta}$ for most samples, since the disorder-average $G_t$, sketched as the dashed curve in Fig. 1(b), has the characteristic scale $\tau^{\zeta}$. The sharpness of $P_1$ for a typical sample can be revealed by probing disorder moments of $P_1$ for different end points. For instance, from the joint two-point correlation function,
$$Q_t(\vec{y}_1-\vec{x},\vec{y}_2-\vec{x},t-t_0)\equiv\overline{P_1(\vec{y}_1,t_0|\vec{x},t)\,P_1(\vec{y}_2,t_0|\vec{x},t)}, \qquad (12)$$
we can obtain the mean square of the (thermally averaged) displacement,
$$\overline{\langle\vec{X}_t(\tau)\rangle^{2}}=\int d^{d}\vec{y}_1\,d^{d}\vec{y}_2\;(\vec{y}_1-\vec{x})\cdot(\vec{y}_2-\vec{x})\;Q_t(\vec{y}_1-\vec{x},\vec{y}_2-\vec{x},\tau), \qquad (13)$$
which is expected to behave as
$$\overline{\langle\vec{X}_t(\tau)\rangle^{2}}\approx\tau^{2\zeta}f'(\tau/t), \qquad (14)$$
just like in Eq. (7) in the pinned phase of the directed polymer but with a different scaling function $f'$. From Eqs. (7) and (14), we obtain the mean square of the thermal fluctuations about the mean path,
$$\begin{aligned}\overline{C}_T(\tau,t)&\equiv\overline{\left\langle\left|\vec{\xi}(t-\tau)-\langle\vec{\xi}(t-\tau)\rangle\right|^{2}\right\rangle}=\overline{\langle|\vec{X}_t(\tau)|^{2}\rangle}-\overline{\langle\vec{X}_t(\tau)\rangle^{2}}\\ &=\frac{1}{2}\int d^{d}\vec{y}_1\,d^{d}\vec{y}_2\;|\vec{y}_1-\vec{y}_2|^{2}\,Q_t(\vec{y}_1-\vec{x},\vec{y}_2-\vec{x},\tau).\end{aligned} \qquad (15)$$
One would hope that this quantity would characterize the typical width of the one point distribution (solid line of Fig. 1(b)). From the above discussion, the conjectured uniqueness of the equilibrium states at finite $T$ would suggest that $\overline{C}_T^{1/2}$ would be the diameter of the “tube” in which the polymer fluctuates, with $\overline{C}_T$ not growing with $\tau$ or $t$. We shall see however, that in fact $\overline{C}_T\propto\tau$ in both the pinned and the high temperature phases. How is this consistent with the existence of an equilibrium state of the problem in the pinned phase? It has been argued [19] that $C_T$ for typical samples (in fact in almost all samples for large $\tau$) is of order unity, but rare samples have sufficiently large $C_T$ that they dominate the sample (or disorder) average. In this paper, we will see that this conclusion arises in a very natural way and can be characterized analytically by a power law tail in the distribution of various quantities that are related to $C_T$. In conventional critical phenomena, it is often sufficient to characterize a system by characterizing a few moments of the fluctuations. However, for many random systems, such averages do not give an adequate account of what actually goes on in a typical sample, since some properties are not “self-averaging”. The pinned phase of the directed polymer exhibits exactly this type of behavior [19, 21, 25] and thus cannot be well characterized by the knowledge of a few moments. For instance, we shall see that a small fraction of samples have nearly degenerate ground “states”, with segments (say the free ends) located far away from each other, as shown in Fig. 2(a). Such states would be manifested in $P_1$ as widely separated peaks (Fig. 2(b)) at low temperatures. The occurrence of such degenerate “states” may be rare. However, if they survive in the thermodynamic limit $t\to\infty$, then they can still have important physical consequences, because these are just the type of states that give rise to large scale fluctuations at low temperatures. Such almost degenerate states are the low dimensional analog of the “droplets” [19, 20] conjectured to govern much of the low temperature properties of spin glasses. The investigation of these large fluctuations is the focal point of this study. We are particularly interested in obtaining the statistics of the almost degenerate states and understanding the effects of these states on physical observables.
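The near-degenerate "states" discussed here can be probed directly at T = 0 with the analogous minimal-energy recursion. Again a hedged lattice sketch: the parameters and the separation threshold of 20 sites are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
L, t_max = 401, 200
eta = rng.normal(size=(t_max, L))  # lattice disorder realization

# E(x, t): minimal energy of a directed path from the fixed end to site x.
E = np.full(L, np.inf)
E[L // 2] = 0.0
for t in range(t_max):
    # zero-temperature transfer matrix: take the cheapest of the three parents
    E = np.minimum(np.minimum(np.roll(E, 1), E), np.roll(E, -1)) + eta[t]

x = np.arange(L) - L // 2
best = np.argmin(E)
# "droplet" candidate: the best end point far away from the optimal one
far = np.abs(x - x[best]) > 20
gap = E[far].min() - E[best]
print("optimal end point:", x[best])
print("energy gap to best far-away end point:", gap)
# samples where this gap is O(T) are the rare, nearly degenerate ones
```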
We shall give a detailed account of these effects in the following sections.

II.3 Distribution Functions

There are a number of ways one can characterize the rare fluctuations. To find the relative abundance of samples which behave as Figs. 1 and 2, we can, for example, consider the probability that $P_1$ has the structure shown in Fig. 2, with two peaks separated by a displacement $\vec{\Delta}$ at $t_0$, i.e., at a distance $\tau=t-t_0$ from the fixed end. It is useful to consider the larger peak to correspond to the optimal path and the other to correspond to an excitation with displacement $\vec{\Delta}$ away from the optimal path. For the second peak to have a reasonable amplitude, the excitation free energy must be of order $T$ or less, i.e., it is a thermally active excitation, which is analogous to an active “droplet” introduced for spin glasses. The probability of an active droplet excitation of displacement $\vec{\Delta}$ at a distance $\tau$ from the fixed end is obtained from the distribution $Q_t$ as [28]
$$W_t(\vec{\Delta},\tau)=\int d^{d}\vec{y}_1\,d^{d}\vec{y}_2\;\delta^{d}(\vec{y}_1-\vec{y}_2-\vec{\Delta})\,Q_t(\vec{y}_1-\vec{x},\vec{y}_2-\vec{x},\tau). \qquad (16)$$
Like the one-point distribution function discussed in Sec. II.A, it will be convenient to consider
$$W(\vec{\Delta},\tau)=\lim_{t/\tau\to\infty}W_t(\vec{\Delta},\tau), \qquad (17)$$
which, for large $\tau$, is the probability distribution of active droplets in the bulk of a semi-infinite polymer. Note that the somewhat different distribution of active droplets at the free end ($\tau=t$),
$$\tilde{W}(\vec{\Delta},t)=W_t(\vec{\Delta},t), \qquad (18)$$
is however the most readily measurable quantity numerically. In the absence of disorder, the “bare” distribution is just a gaussian,
$$W^{(0)}(\vec{\Delta},\tau)=\left(\frac{\kappa}{4\pi T\tau}\right)^{d/2}e^{-\frac{\kappa}{4T\tau}\Delta^{2}}, \qquad (19)$$
independent of $t$. [Note that $W_t$ is the probability that the distribution $P_1$ has weight at two points separated by $\vec{\Delta}$. Only if the samples in which this occurs behave as in Fig. 2 will the interpretation of this as the probability of a well-defined active excitation really be useful. In the absence of randomness, this interpretation is clearly incorrect.] With disorder, $W_t$ becomes highly nontrivial, and it is not easy in general to compute even the first few moments of $W_t$. The large-$\Delta$ tail of the distribution, which governs the large scale fluctuations at low temperature, is difficult to obtain both analytically and numerically. In Section III, we introduce a free-energy functional which enables us to compute the asymptotic form of $W$ explicitly. We will show that the tail of $W$ [and hence also the tail of $\tilde{W}$, if we ignore the dependence on $t/\tau$] has a power law form. One of the physical consequences of the nearly degenerate ground states is the behavior of the mean square thermal fluctuations of a given segment of the polymer, which has an average value
$$\overline{C}_T(\tau,t)=\frac{1}{2}\overline{\Delta^{2}}=\frac{1}{2}\int d^{d}\Delta\;\Delta^{2}\,W_t(\vec{\Delta},\tau). \qquad (20)$$
This quantity is related to the susceptibility of the polymer to a tilt, i.e., the response to a term added to the Hamiltonian (1),
$$H_{\vec{h}}[\vec{\xi},\eta]=H[\vec{\xi},\eta]-\int_{t_0}^{t}dz\;\vec{h}\cdot\frac{d\vec{\xi}}{dz}. \qquad (21)$$
Thermal fluctuations of $\vec{\xi}$ at a distance $\tau$ from the fixed end are simply proportional to the linear response [29] of the polymer by the fluctuation-susceptibility relation,
$$\chi[\eta]\equiv\frac{\partial}{\partial h_i}\langle\xi_i(t)-\xi_i(t_0)\rangle_{\vec{h}}\bigg|_{\vec{h}\to 0}=\frac{1}{Td}\,C_T(t-t_0,t), \qquad (22)$$
where $\xi_i$ is a component of $\vec{\xi}$, and the subscript $\vec{h}$ denotes thermal average taken with respect to the new partition function,
$$Z(\vec{x},t;\vec{h})=\int\mathcal{D}[\vec{\xi}]\;\delta^{d}(\vec{\xi}(t)-\vec{x})\,e^{-H_{\vec{h}}/T}. \qquad (23)$$
Since the random potential in the Hamiltonian (1) does not single out any preferred direction in the $(\vec{x},z)$ space, a statistical tilt symmetry — related to the translational symmetry — exists which is recovered upon disorder average. As a result, one can prove straightforwardly that on average, the applied tilt merely shifts the thermal mean of $\vec{\xi}(t)-\vec{\xi}(t_0)$ by $\vec{h}\,\tau/\kappa$ (see Appendix C and Ref.
[30]). The disorder-averaged susceptibility is thus just
$$\overline{\chi}=\frac{\tau}{\kappa}=\frac{1}{2Td}\overline{\Delta^{2}}, \qquad (24)$$
which is exactly the same as the susceptibility of the pure system. If the statistical properties of the ground states are complicated as illustrated in Figs. 1(a) and 2(a), then the mean susceptibility does not provide an adequate characterization. On the one hand, a sample such as the one shown in Fig. 1(a) will contribute very little to the average susceptibility at low temperatures since it is locked in a unique state, separated from the lowest excited state by a free energy difference large compared to $T$. On the other hand, a sample corresponding to Fig. 2(a) has nearly degenerate states. Thermal fluctuations will thus cause large scale “hopping” of the polymer from one state to the other. Thus the latter samples can give large contributions to the average susceptibility even though they occur rarely. The relative abundance of such samples is given by the susceptibility distribution function,
$$D_t(\chi,\tau)=\overline{\delta(\chi-\chi[\eta])}. \qquad (25)$$
A first principles calculation of $D_t$ would require the knowledge of the full distribution of the function $P_1$, or at least all of its correlations, $\overline{P_1(\vec{y}_1,t_0|\vec{x},t)\cdots P_1(\vec{y}_n,t_0|\vec{x},t)}$; this will not be attempted here. To obtain the qualitative behavior, we will instead assume that a sample with an active droplet of size $\Delta$ has a susceptibility of order $\Delta^{2}/(2Td)$. This assumption obviously will not be very good if there are many almost degenerate states that contribute equally to the susceptibility, but it should be reasonable if large scale degenerate states occur only rarely, as we will demonstrate is the actual case. Using this approximation, the susceptibility distribution can be simply obtained from the droplet distribution as
$$D_t(\chi,\tau)=\int\frac{d^{d}\vec{\Delta}}{\Delta^{2}/(2Td)}\;W_t(\vec{\Delta},\tau)\;u\!\left[\frac{\chi}{\Delta^{2}/(2Td)}\right], \qquad (26)$$
where $u$ is a sharply peaked scaling function whose precise shape depends on more knowledge of the distribution of $\chi$.

II.4 Free Energy Variations

The functions $W_t$ and $D_t$ introduced above describe the distribution of the polymer’s equilibrium fluctuations and susceptibilities. But in reality (experimentally or numerically), equilibration of a glassy system is often very difficult [31]. The reason is that the nearly degenerate states that give rise to large scale, low energy droplet excitations are typically separated by large energy barriers, and therefore have extremely long relaxation times. Knowledge of the distribution of the barriers is therefore crucial in understanding the dynamic properties of the glassy polymers. The details of the dynamics at low temperatures are very complex and beyond the scope of this paper. However, one can get a bound on the free energy barriers within the equilibrium theory. To do so, let us consider the nearly degenerate states shown in Fig. 2(a). To probe the free energy “landscape” between the two states, we would like to know the free energy of a polymer with both ends fixed, and then vary the position of the free end between the two nearly degenerate end positions (see Fig. 3). The maximum of this free energy over the intermediate positions gives a lower bound on the free energy barrier to move the end from one state to the other, since the polymer must pass through all intermediate states, i.e., a range of width of order $\Delta$. Note that this does not take into account the additional barriers the polymer may encounter at intermediate states (see Fig. 4). At this point, it is not clear whether or not the cumulative effects of such barriers will be much larger than this bound. Unfortunately, with two ends fixed, the free energy is difficult to compute analytically.
What can be obtained readily (see Appendices A and B) is the statistics of the free energy of a polymer of length $t$ with only one end fixed, which is just
$$F(\vec{x},t)=-T\log Z(\vec{x},t). \qquad (27)$$
As shown by the numerical studies of Kardar and Zhang [5], two polymers with end points fixed a distance $\Delta$ apart will merge a distance of order $\Delta^{1/\zeta}$ from the fixed ends and coincide for the rest of the way (see Fig. 5). Since the free energy difference between two polymers with different end point positions will only arise from the section over which they differ, we expect the typical free energy difference of an intermediate state in Fig. 3 to scale the same way as the difference for the two polymers in Fig. 5. The latter is characterized by the free energy correlation function
$$C_F(\vec{\Delta},t)=\overline{\left[F(\vec{x}+\vec{\Delta},t)-F(\vec{x},t)\right]^{2}}, \qquad (28)$$
which is conjectured to scale as
$$C_F(\vec{\Delta},t)\sim\Delta^{2\alpha} \qquad (29)$$
for $\Delta\ll t^{\zeta}$, with the exponent known to be exactly $\alpha=1/2$ in 1+1 dimensions. Thus $\Delta^{\alpha}$ provides the lower bound for the barrier of formation of an intermediate active droplet of size $\Delta$, and sets the typical scale for such free energy differences. On the other hand, two polymers with end points fixed at $\Delta\gg t^{\zeta}$ will not overlap at all and thus behave as if they are in independent samples. In this case,
$$C_F(\vec{\Delta},t)\sim t^{2\theta}, \qquad (30)$$
with $\theta$ being the exponent characterizing the sample-to-sample free energy variations. A simple scaling form, which we shall see arises naturally, links the two limits:
$$C_F(\vec{\Delta},t)=\Delta^{2\alpha}\,\tilde{c}(\Delta/t^{\zeta}), \qquad (31)$$
yielding the scaling relation
$$\alpha=\theta/\zeta. \qquad (32)$$
We will later derive the scaling law
$$\theta=2\zeta-1, \qquad (33)$$
which arises simply from the naive contribution to the free energy from the elastic part of the Hamiltonian with displacement of order $t^{\zeta}$.

III. Statistics of Rare Fluctuations

In Section II, we defined a number of distribution functions and disorder averaged correlation functions which are useful in probing the glassy nature of the randomly-pinned directed polymer. To obtain these functions in a systematic way, and to uncover the interconnections among them, we shall use field theoretic methods. The disorder average will be replaced by an average over a weighting functional, which describes the probability distribution of the free energy. This method is inspired by the Martin-Siggia-Rose dynamic field theory [32] developed in the context of stochastic dynamics, onto which the directed polymer can be mapped [3, 4]. The explicit mapping to the noisy-Burgers equation is derived in Appendix B. Note however, that the method may not be limited to the directed polymer and is described in Appendix A for an arbitrary dimensional manifold for which the mapping to stochastic dynamics cannot be performed. In this section, we shall use this field theoretic method to obtain the tails of the distribution functions introduced in Section II.C. We shall limit our discussions to the semi-infinite polymer ($t/\tau\to\infty$) problem for which the analysis is the simplest. We then interpret the results to characterize the statistics of the large rare fluctuations.

III.1 Derivation of the Distribution Functions

III.1.1 Formalism

To obtain the tails of the distribution functions $W$ and $D$ defined in Section II.C, we need (see Eq. (16)) the disorder-averaged function $Q$ defined in Eq. (12). As shown in Appendix A, various distribution functions can be generated by adding a source term to the Hamiltonian, e.g.,
$$H\to H+\int_0^t dz\,\tilde{J}(\vec{\xi}(z),z), \qquad (34)$$
and then differentiating the corresponding average free energy, $\overline{F}(\vec{x},t;\tilde{J})$, with respect to $\tilde{J}$. The resulting expressions are simple for the semi-infinite polymer.
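A one-line sketch of where the scaling law (33) comes from (this is the elastic-energy estimate the text alludes to, written out, not a rigorous derivation):

```latex
% Elastic-energy estimate behind \theta = 2\zeta - 1:
% a transverse excursion \Delta \sim t^{\zeta} over a length t costs
\delta F_{\mathrm{el}} \sim \kappa\,\frac{\Delta^{2}}{t}
  \sim \kappa\, t^{2\zeta-1}
\quad\Longrightarrow\quad
\theta = 2\zeta - 1
\qquad\text{(so } \zeta = \tfrac{2}{3} \text{ gives } \theta = \tfrac{1}{3}
\text{ in } 1+1 \text{ dimensions).}
```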
For instance,
$$\frac{\delta\overline{F}(\vec{x},t;\tilde{J})}{\delta\tilde{J}(\vec{y},t_0)}=\overline{P_1(\vec{y},t_0|\vec{x},t)}=G(\vec{x}-\vec{y},t-t_0), \qquad (35)$$
and
$$\begin{aligned} G_{2,1}(\vec{x}-\vec{y}_1,\vec{x}-\vec{y}_2,t-t_0)&\equiv\frac{\delta^{2}\overline{F}(\vec{x},t;\tilde{J})}{\delta\tilde{J}(\vec{y}_1,t_0)\,\delta\tilde{J}(\vec{y}_2,t_0)}\\ &=-\frac{1}{T}\left[\overline{\langle\delta^{d}(\vec{\xi}(t_0)-\vec{y}_1)\,\delta^{d}(\vec{\xi}(t_0)-\vec{y}_2)\rangle}-Q(\vec{y}_1-\vec{x},\vec{y}_2-\vec{x},t-t_0)\right].\end{aligned} \qquad (36)$$
In terms of the noisy-Burgers equation, the limit $t\to\infty$ corresponds to the statistical steady state, and $G$ and $G_{2,1}$ are the linear and nonlinear response functions respectively (see Appendix B). It is instructive to consider the meaning of $G_{2,1}$ in the limit of zero temperature (with an appropriate short distance cutoff on the random potential): $G_{2,1}$ is the limit of small $T$ of $1/T$ times the probability that the optimal paths passing through $\vec{y}_1$ and $\vec{y}_2$ a distance $\tau$ from the fixed end at $(\vec{x},t)$ both have energy within $T$ of the ground state. Since triple degeneracies are unlikely, this is non-zero when the ground state is doubly degenerate and is the “density of states” of degeneracies. In terms of $G_{2,1}$, the droplet distribution is
$$W(\vec{\Delta},\tau)=\delta^{d}(\vec{\Delta})+W'(\vec{\Delta},\tau), \qquad (37)$$
with
$$W'(\vec{\Delta},\tau)=T\int d^{d}\vec{y}_1\,d^{d}\vec{y}_2\;\delta^{d}(\vec{y}_1-\vec{y}_2-\vec{\Delta})\,G_{2,1}(\vec{x}-\vec{y}_1,\vec{x}-\vec{y}_2,\tau). \qquad (38)$$
The free-energy functional described in Appendix A allows us to express higher-order distribution functions such as $G_{2,1}$ in terms of the one-point function $G$ (see Eq. (78)). For the directed polymer, the relevant result is
$$\begin{aligned} G_{2,1}(\vec{x}-\vec{y}_1,\vec{x}-\vec{y}_2,t-t_0)=-\int & d^{d}\vec{x}'\,d^{d}\vec{y}'_1\,d^{d}\vec{y}'_2\,dt'\,dt_1\,dt_2\;G(\vec{x}-\vec{x}',\tau-t')\\ &\times\Gamma_{1,2}(\vec{x}'-\vec{y}'_1,t'-t_1;\vec{x}'-\vec{y}'_2,t'-t_2)\,G(\vec{y}_1-\vec{y}'_1,t_1-t_0)\,G(\vec{y}_2-\vec{y}'_2,t_2-t_0),\end{aligned} \qquad (39)$$
where $\Gamma_{1,2}$ is a “vertex function” which is natural in the context of the noisy-Burgers equation; it will be specified shortly. The expression can be represented diagrammatically as in Fig. 6. We see that the joint distribution has been conveniently broken up as a convolution of a number of one-body distribution functions (the $G$’s), with the branching process controlled by the vertex $\Gamma_{1,2}$. For a (statistically) translationally invariant system, it is convenient (after disorder averaging) to work in Fourier space, with
$$\hat{G}(\vec{q},\tau)=\int d^{d}\vec{r}\;G(\vec{r},\tau)\,e^{i\vec{q}\cdot\vec{r}}. \qquad (40)$$
The Fourier transform, $\hat{W}'$, of $W'$ in Eq. (38) becomes
$$\hat{W}'(q,t-t_0)=-T\int_0^{t}dt'\int_{t_0}^{t}dt_1\int_{t_0}^{t}dt_2\;\hat{G}(0,t-t')\,\hat{\Gamma}_{1,2}(\vec{q},t'-t_1;-\vec{q},t'-t_2)\,\hat{G}(\vec{q},t_1-t_0)\,\hat{G}(-\vec{q},t_2-t_0), \qquad (41)$$
where $\hat{\Gamma}_{1,2}$ is the Fourier transform of the vertex function $\Gamma_{1,2}$. From the scaling form (9) for $G$, we have
$$\hat{G}(\vec{q},\tau)\approx\hat{g}(q\tau^{\zeta}), \qquad (42)$$
where $\hat{g}$ is the Fourier transform of the scaling function $\tilde{g}_{\infty}$. Also, by normalization of the probability distribution, $\hat{G}(0,\tau)=\hat{g}(0)=1$. Therefore we have the simple expression
$$\hat{W}'(q,\tau)=-T\int_{-\infty}^{\tau}d\tau'\int_0^{\tau}d\tau_1\int_0^{\tau}d\tau_2\;\hat{\Gamma}_{1,2}(\vec{q},\tau'-\tau_1;-\vec{q},\tau'-\tau_2)\,\hat{g}(q\tau_1^{\zeta})\,\hat{g}(q\tau_2^{\zeta}). \qquad (43)$$
To proceed further, we need to know something about the vertex $\hat{\Gamma}_{1,2}$. We are especially interested in the small $q$ behavior since that is what will control the tail of the distribution $W$. Note that the normalization of $W$ requires that $\hat{W}(q=0,\tau)=1$, hence we must have $\hat{W}'(q=0,\tau)=0$ since $\hat{W}=1+\hat{W}'$. Also, we recall that the mean susceptibility is $\overline{\chi}=\tau/\kappa$ due to the statistical tilt symmetry (Appendix C), and $\overline{\Delta^{2}}=2Td\,\tau/\kappa$ from the fluctuation-susceptibility relation (24). Thus,
$$\lim_{\tau\to\infty}\lim_{q\to 0}\;-\vec{\nabla}^{2}_{\vec{q}}\,\hat{W}'(q,\tau)=\frac{1}{d}\overline{\Delta^{2}}=\frac{2T}{\kappa}\tau. \qquad (44)$$
Since $\hat{W}'(q=0,\tau)=0$ by symmetry, we must have from Eq. (44)
$$\hat{\Gamma}_{1,2}(\vec{q},\tau'-\tau_1;-\vec{q},\tau'-\tau_2)=\frac{1}{\kappa}\,q^{2}\,\gamma(\vec{q},\tau'-\tau_1;-\vec{q},\tau'-\tau_2), \qquad (45)$$
with
$$\lim_{q\to 0}\int_{-\infty}^{\tau}d\tau'\int_0^{\tau}d\tau_1\int_0^{\tau}d\tau_2\;\gamma(\vec{q},\tau'-\tau_1;-\vec{q},\tau'-\tau_2)=\tau. \qquad (46)$$
The simplest form of $\gamma$ satisfying Eq. (46) is
$$\hat{\Gamma}^{(0)}_{1,2}(\vec{q},\tau'-\tau_1;-\vec{q},\tau'-\tau_2)=\frac{1}{\kappa}\,q^{2}\,\delta(\tau'-\tau_1)\,\delta(\tau'-\tau_2). \qquad (47)$$
This is actually the form of the “bare” vertex, i.e., it is the exact vertex in the absence of the random potential, as the bare distribution in Eq. (19) is readily recovered by substituting Eq. (47) in Eq. (43).
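As a consistency check, the recovery of Eq. (19) can be written out in two lines (a short computation using only Eqs. (43), (47) and the pure-system form of the propagator with ζ = 1/2):

```latex
% Bare vertex + bare propagator recover the gaussian (19):
% with \zeta = 1/2 and \hat{g}^{(0)}(q\tau^{1/2}) = e^{-(T/2\kappa) q^{2} \tau},
\hat{W}'(q,\tau) = -\frac{T}{\kappa}\, q^{2} \int_{0}^{\tau} d\tau'\,
   e^{-\frac{T}{\kappa} q^{2} \tau'}
 = -\Bigl(1 - e^{-\frac{T}{\kappa} q^{2} \tau}\Bigr),
\qquad
\hat{W} = 1 + \hat{W}' = e^{-\frac{T}{\kappa} q^{2} \tau},
% whose inverse Fourier transform is exactly the gaussian W^{(0)} of Eq. (19),
% with variance 2T\tau/\kappa per component.
```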
In Appendix C, we use the statistical tilt symmetry to derive some Ward identities which ensure that the scaling behavior of the droplet distribution obtained from the full vertex will have the same form as that obtained by using the bare vertex $\hat{\Gamma}^{(0)}_{1,2}$. Using Eq. (47) then, Eq. (43) becomes
$$\hat{W}'(q,\tau)\approx-\frac{T}{\kappa}\,q^{2}\int_0^{\tau}d\tau'\left[\hat{g}(q(\tau')^{\zeta})\right]^{2}\approx-\frac{T}{\kappa}\,q^{2-\zeta^{-1}}\,\hat{w}(q^{1/\zeta}\tau), \qquad (48)$$
where $\hat{w}$ is a scaling function whose precise form depends on the actual forms of $\hat{g}$ and $\hat{\Gamma}_{1,2}$ (see Appendix C), but whose limits are simple, i.e., $\hat{w}(y)\approx y$ for $y\ll 1$ and $\hat{w}(y)\to\mathrm{const}$ for $y\gg 1$.

III.1.2 Results

The Fourier transform of $W$ itself, $\hat{W}=1+\hat{W}'$, has a power-law singularity $\propto q^{2-\zeta^{-1}}$ (an inverted cusp) for small $q$ in the limit $\tau\to\infty$, as long as $\zeta>1/2$. The singularity is only cut off by the finite $\tau$, with $\hat{W}'\approx-(T/\kappa)q^{2}\tau$ for $q\ll\tau^{-\zeta}$. Inverse Fourier transforming $\hat{W}$, we obtain the final result
$$W(\vec{\Delta},\tau)\approx\frac{1}{\Delta^{d+2-\zeta^{-1}}}\,\tilde{w}(\Delta/\tau^{\zeta})\quad\text{for}\quad\Delta\gg a_\xi(T), \qquad (49)$$
with the scaling function $\tilde{w}(y)$ tending to a constant for $y\ll 1$ and rapidly decreasing for $y\gg 1$. This result suggests the following scaling form for the full distribution for a polymer of length $t$,
$$W_t(\vec{\Delta},\tau)=\frac{1}{\Delta^{d+2-\zeta^{-1}}}\,\tilde{w}_{t/\tau}(\Delta/\tau^{\zeta}), \qquad (50)$$
with $\tilde{w}_{\infty}=\tilde{w}$. Assuming that $\tilde{w}_{t/\tau}$ has a weak dependence on $t/\tau$, as in $\tilde{g}_{t/\tau}$, the above result leads to a power law distribution of active droplets at the free ends,
$$\tilde{W}(\vec{\Delta},t)=\frac{1}{\Delta^{d+2-\zeta^{-1}}}\,\tilde{w}_{1}(\Delta/t^{\zeta}), \qquad (51)$$
which is cut off only by the finite length of the polymer. A similar result is obtained for the tail of the susceptibility distribution. From Eqs. (26) and (50), we have
$$D_t(\chi,\tau)\approx\frac{1}{\chi^{2-(2\zeta)^{-1}}}\,\tilde{d}_{t/\tau}(\chi/\tau^{2\zeta})\quad\text{for}\quad\chi\gg a_\xi^{2}/Td, \qquad (52)$$
with $\tilde{d}$ being another scaling function which is qualitatively similar to $\tilde{w}$. Note that Eqs. (49) through (52) should only hold in the scaling limit $\Delta\gg a_\xi(T)$, where the one-point function has the scaling form (42). To understand the form of the crossover of $W$ from small to large $\Delta$, it is useful to consider the following approximate form of the one-point function,
$$\hat{G}(\vec{q},\tau)=e^{-\frac{T}{2\kappa}\hat{\nu}(q)\,q^{2}\tau}, \qquad (53)$$
where $\hat{\nu}(q)\sim q^{\zeta^{-1}-2}$ for $qa_\xi\ll 1$ and $\hat{\nu}(q)\approx 1$ for $qa_\xi\gg 1$. This simple form of $\hat{G}$ extrapolates smoothly between the bare function for $qa_\xi\gg 1$ and the appropriate scaling form (42) for $qa_\xi\ll 1$. It is therefore a useful guide to the qualitative features of the crossover. (A better determination of the response function is given in Ref. [34].) Using the bare vertex Eq. (47) and the approximate form Eq. (53) for $\hat{G}$, the droplet distribution becomes
$$\hat{W}(\vec{q},\tau)=1-\frac{1}{\hat{\nu}(q)}\left\{1-e^{-\frac{T}{\kappa}\hat{\nu}(q)\,q^{2}\tau}\right\}, \qquad (54)$$
with the limiting forms (suppressing prefactors of order unity)
$$\hat{W}(\vec{q},\tau)=\begin{cases}1-\dfrac{T}{\kappa}\,q^{2}\tau & \text{for } q<\tau^{-\zeta},\\[4pt] 1-q^{2-\zeta^{-1}} & \text{for } a_\xi^{-1}>q>\tau^{-\zeta},\\[4pt] e^{-\frac{T}{\kappa}q^{2}\tau} & \text{for } q>a_\xi^{-1}.\end{cases} \qquad (55)$$
Fourier transforming leads to a droplet distribution sketched in Fig. 7 (solid line), with the asymptotic scaling behavior for $a_\xi\ll\Delta\ll\tau^{\zeta}$ given by Eq. (49), and a smooth behavior for $\Delta\lesssim a_\xi$. It should be emphasized that the precise forms of $\hat{\nu}$ and the vertex will change only the details of the crossover function connecting the scaling region to the regions of small and large $\Delta$, but not the qualitative features of the distribution. Assuming weak dependence of $\tilde{w}_{t/\tau}$ on $t/\tau$, we expect the distribution $\tilde{W}$ for the end point to have the same form (solid line of Fig. 7) with yet another crossover function. Finally, the same considerations lead to a similar form for the distribution of the susceptibility, $D_t(\chi,\tau)$.

III.2 Glassy Properties of the Pinned Phase

III.2.1 Droplet Excitations

We now interpret the results obtained in the previous sections in terms of the structure of ground states and excitations of the randomly-pinned directed polymer. We first compare the result for $W$ with the limit of zero disorder. In this case, the microscopic cutoff length diverges, i.e., $a_\xi(T)\to\infty$, and the distribution no longer has a power-law tail and instead takes on the simple gaussian form Eq. (19), sketched as the dashed line in Fig. 7.
Comparing the solid and the dashed lines, we clearly see that the biggest effect of the disorder is to shift the distribution to the small-$\Delta$ end. From Fig. 7, it is clear that most of the samples have $\Delta\lesssim a_\xi$. In fact, the likely end point separation for typical samples is given by the peak of the distribution, which occurs for $\Delta$ of order $a_\xi$. This suggests that most samples have a unique ground state (and a unique equilibrium state at low $T$), like the one sketched in Fig. 1(a). Only a small fraction of the samples, of the order
$$U_t(\tau)=\int_{\tau^{\zeta}}^{2\tau^{\zeta}}d^{d}\vec{\Delta}\;$$
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9737144708633423, "perplexity": 462.4314645674284}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500028.12/warc/CC-MAIN-20230202133541-20230202163541-00816.warc.gz"}
https://grindskills.com/writing-out-the-mathematical-equation-for-a-multilevel-mixed-effects-model/
Writing out the mathematical equation for a multilevel mixed effects model

The CV Question

I’m trying to give (a) detailed and concise mathematical representation(s) of a mixed effects model. I am using the lme4 package in R. What is the correct mathematical representation for my model?

The Data, Science Question, and R Code

My data set consists of species in different regions. I’m testing if a species’ prevalence changes in the time leading up to an extinction (extinctions aren’t necessarily permanent; it can recolonize), or following a colonization.

lmer(prevalence ~ time + time:type + (1 + time + type:time | reg) + (1 + time + type:time | reg:spp))

• Prevalence is the proportion of strata occupied by a species in a region-year
• Time is a continuous variable that indicates the time to either extinction or colonization; it is always positive
• Type is a categorical variable with two levels. These two levels are “-” and “+”. When type is -, it’s a colonization (default level). When type is +, it’s an extinction.
• Reg is a categorical variable with nine levels, indicating the region
• Spp is a categorical variable; the number of levels varies among regions, and varies between 48 levels and 144 levels.

In words: the response variable is prevalence (proportion of strata occupied). Fixed effects include 1) an intercept, 2) time from event, and 3) the interaction between time to event and the type of event (colonization or extinction). Each of these 3 fixed effects varies randomly among regions. Within a region, each of the effects varies randomly among species.

I’m trying to figure out how to write the mathematical equation for the model. I think I understand what’s going on in the R code (although, I’m sure I have some knowledge gaps, and hopefully writing out the formal mathematical expression will improve my understanding). I have searched through the web and through these forums quite a bit. I found tons of useful information, to be sure (and maybe I’ll link to some of these in an edit to this question). However, I couldn’t quite find that “Rosetta Stone” of R-code translated to math (I’m more comfortable with code) that would really help me confirm I’ve got these equations right. In fact, I know there are some gaps already, but we’ll get to that.

My Attempt

The basic form of a mixed effects model, in matrix notation, is (to my understanding):

$$y = X\beta + Z\gamma + \epsilon$$

• $X$ is the design matrix for the fixed effects, $\Delta t$ is the time after colonization (time), and $\Delta t_{+}$ is the time after extinction (time:type)
• $Z$ is the design matrix for the random effects (level 1?), $I()$ is the indicator function giving 1 if the sample belongs to the designated region and 0 otherwise, and $r$ is an index indicating one of the nine regions.
• $\beta$ and $\gamma$ contain parameters
• $\epsilon$ contains the errors; I’m not entirely sure how to explain $\Sigma$, though I realize one of these variance/covariance matrices will express covariances among slopes and intercepts, e.g.

Assuming things so far are ~correct, that means I’m good at the top level. However, explaining the species-specific variation on the parameters, which is nested within each region, stumped me even more. But I took a crack at something that maybe makes sense …

Each of the parameters in $\gamma$ is derived from a linear combination of species-specific predictors and parameters within a region. For each region $r$, there are 3 rows of $\gamma$, corresponding to the 3 predictor variables.
Each $\gamma$ can be individually expressed as

• $\gamma_{p,r} = U_{p,r} b_{p,r} + \eta_{p,r}$
• where $U_{p,r}$ is a design matrix specific to region $r$ and predictor $p$, $b_{p,r}$ is a 1 by S matrix of parameters for the region (richness in the region = $S$, e.g. 48 or 144), and $\eta_{p,r}$ is a matrix of error terms

Specifically, for a given region, each of the $\gamma_{p,r}$ would be: That would be repeated for each region. Then, $\eta \sim \mathcal{N}(0,\Sigma_{\eta})$, like $\epsilon$. Although, perhaps instead of $\Sigma$, there is another letter, like $G$, that is commonly used.

Edit: other Q/A’s that were somewhat helpful

If I understood the code correctly, why not simply write something like
$$y_{i} = \Big(\alpha + \nu_{j[i]}^{(\alpha)} + \eta_{k[i]}^{(\alpha)}\Big) + \Big(\beta + \nu_{j[i]}^{(\beta)} + \eta_{k[i]}^{(\beta)}\Big)T_{i} + \Big(\delta + \nu_{j[i]}^{(\delta)} + \eta_{k[i]}^{(\delta)}\Big)(T_{i} * Z_{i}) + \epsilon_i$$
with
$$\begin{aligned} \Big[\nu_{j}^{(\alpha)}, \nu_j^{(\beta)}, \nu_j^{(\delta)}\Big] &\sim \text{Multi-Normal}(\mathbf 0, \boldsymbol \Sigma_\nu) \\ \Big[\eta_{j}^{(\alpha)}, \eta_j^{(\beta)}, \eta_j^{(\delta)}\Big] &\sim \text{Multi-Normal}(\mathbf 0, \boldsymbol \Sigma_\eta)\\ \epsilon_i & \sim \text{Normal}(0, \sigma_\epsilon) \end{aligned}$$
or, if the first equation is too long, something like
$$y_{i} = \alpha_{j[i],k[i]} + \beta_{j[i],k[i]}T_{i} + \delta_{j[i],k[i]}(T_i * Z_i) + \epsilon_i$$
and
$$\begin{aligned} \alpha_{j[i],k[i]} &= \alpha + \nu_{j}^{(\alpha)} + \eta_{k}^{(\alpha)} \\ \beta_{j[i],k[i]}&=\beta + \nu_{j}^{(\beta)} + \eta_{k}^{(\beta)}\\ \delta_{j[i],k[i]}&=\delta + \nu_{j}^{(\delta)} + \eta_{k}^{(\delta)} \end{aligned}$$
with the same covariance structure as above? It shows the nested structure of the data as well as which coefficients vary across which levels.
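To see the nested structure in the two display equations above, here is a hedged simulation sketch in Python/NumPy. The fixed-effect values and covariance matrices are made-up illustrations, not estimates from the data described in the question; ν is the region-level triple and η the species-within-region triple, as in the answer's notation.

```python
import numpy as np

rng = np.random.default_rng(42)
n_reg, n_spp, n_obs = 9, 50, 20  # regions, species per region, obs per species

# fixed effects (hypothetical values): intercept, time slope, time:type interaction
alpha, beta, delta = 0.5, -0.02, 0.04

# one (intercept, slope, interaction) triple per region and per region:species,
# drawn from (here diagonal, illustrative) covariance matrices
Sigma_nu = np.diag([0.05, 0.001, 0.001])
Sigma_eta = np.diag([0.10, 0.002, 0.002])
nu = rng.multivariate_normal(np.zeros(3), Sigma_nu, size=n_reg)
eta = rng.multivariate_normal(np.zeros(3), Sigma_eta, size=(n_reg, n_spp))

ys = []
for j in range(n_reg):
    for k in range(n_spp):
        time = rng.uniform(0, 10, n_obs)
        z = rng.integers(0, 2, n_obs)   # 0 = colonization, 1 = extinction
        a, b, d = (alpha, beta, delta) + nu[j] + eta[j, k]  # coefficients vary by level
        y = a + b * time + d * time * z + rng.normal(0, 0.05, n_obs)
        ys.append(y)

print("simulated", n_reg * n_spp * n_obs, "observations")
```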
{"extraction_info": {"found_math": true, "script_math_tex": 22, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 4, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.967991828918457, "perplexity": 1811.6206209335196}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711114.3/warc/CC-MAIN-20221206192947-20221206222947-00471.warc.gz"}
http://math.stackexchange.com/questions/118137/convergence-of-a-sequence-written-as-infinite-products
# Convergence of a sequence written as infinite products

Let $$a_n=\prod_{j\in\mathbb{Z}}\frac{1+\exp(-2n e^{-|j|/n})}{1+\exp(-(1+e^{-1/n})n e^{-|j|/n})}$$ Then each term in the product goes to $1$ as $n\to\infty$. Does $a_n\to 1$?

- A limit exists. Notice that $$\begin{eqnarray}\frac{1+\exp(-2n e^{-|j|/n})}{1+\exp(-(1+e^{-1/n})n e^{-|j|/n})}<1&\iff& 1+\exp(-2n e^{-|j|/n})<1+\exp(-(1+e^{-1/n})n e^{-|j|/n})\\ &\iff& -2n e^{-|j|/n}<-(1+e^{-1/n})n e^{-|j|/n}\\ &\iff& 2>1+e^{-1/n}\\ &\iff& 1>e^{-1/n}\\ \end{eqnarray}$$ which is true for all $n>0$, thus $a_n<1$ for all $n>0$. But the sequence should be eventually monotonic. – Alex Becker Mar 9 '12 at 9:04
- How do you prove that the sequence $a_n$ is eventually monotonic? It is true that each term is eventually monotonic in $n$, but the place where it starts to be monotonic depends on $j$ and grows when $j$ grows, so I don't see how this implies monotonicity of $a_n$. – user26565 Mar 9 '12 at 13:58
- My "answer" below is indeed wrong. Maybe an idea: in fact we can take the product over $j\geq 1$. For a fixed $n$, we can write $j=q_nn+r_n$ where $0\leq r_n<n-1$. Then we can bound $\exp\left(-\frac{r_n}n\right)$ to get an upper bound for $a_n$. But I will do the computations carefully, since it may not work. – Davide Giraudo Mar 11 '12 at 11:19

For the sake of reference I post a solution (it does not converge to $1$), based on ideas from the previous answer by Davide Giraudo (especially noticing a telescoping sum was of great help). Let \begin{equation*} w_{j,n}=\frac{1+\exp(-(1+e^{-1/n})ne^{-j/n})}{1+\exp(-2ne^{-j/n})}>1 \end{equation*} First one notices that it suffices to prove that $\prod\limits_{j\geq 1}w_{j,n}$ converges or not to $1$. Then \begin{equation*} 1+\sum_{j=1}^{J}(w_{j,n}-1)\leq\prod_{j=1}^{J}w_{j,n}\leq \exp \left(\sum_{j=1}^{J}(w_{j,n}-1)\right) \end{equation*} Hence $\prod\limits_{j=1}^{\infty}w_{j,n}$ converges if and only if $\sum\limits_{j=1}^{\infty}(w_{j,n}-1)$ converges, but \begin{align*} w_{j,n}-1&=\frac{\exp(-(1+e^{-1/n})ne^{-j/n})-\exp(-2ne^{-j/n})}{1+\exp(-2ne^{-j/n})}\\ &=\frac{\exp(-ne^{-j/n})}{1+\exp(-2ne^{-j/n})}\left[\exp(-ne^{-(j+1)/n})-\exp(-ne^{-j/n})\right]\\ &\leq\left[\exp(-ne^{-(j+1)/n})-\exp(-ne^{-j/n})\right] \end{align*} so $$\sum_{j=1}^{+\infty}(w_{j,n}-1)\leq \lim_{j\to+\infty}\left[\exp(-ne^{-j/n})-\exp(-ne^{-1/n})\right]={1-\exp(-ne^{-1/n})}$$ converges. Therefore we may write $$1+\sum_{j=1}^{+\infty}(w_{j,n}-1)\leq\prod_{j=1}^{+\infty}w_{j,n}\leq \exp \left(\sum_{j=1}^{+\infty}(w_{j,n}-1)\right)$$ And thus $a_n=\prod\limits_{j=1}^{+\infty}w_{j,n}\to 1$ if and only if $b_n=\sum\limits_{j=1}^{+\infty}(w_{j,n}-1)\to 0$. Let now \begin{align*} \eta_{j,n}&=\exp(-ne^{-j/n})\left[\exp(-ne^{-(j+1)/n})-\exp(-ne^{-j/n})\right], \\ \theta_{j,n}&=\exp\left(-ne^{-(j+1)/n}\right)\left[\exp(-ne^{-(j+1)/n})-\exp(-ne^{-j/n})\right], \\ c_n&=\sum_{j=1}^{+\infty}\eta_{j,n}, \\ d_n&=\sum_{j=1}^{+\infty}\theta_{j,n}. \end{align*} Then we have $\frac12\eta_{j,n}\leq w_{j,n}-1\leq \eta_{j,n}$, thus $\frac{1}{2}c_n\leq b_n\leq c_n$, and \begin{align*} c_n+d_n&=\\&\sum_{j\geq 1} \left[\exp(-ne^{-(j+1)/n})+\exp(-ne^{-j/n})\right]\left[\exp(-ne^{-(j+1)/n})-\exp(-ne^{-j/n})\right]\\ &=\sum_{j\geq 1}\left[\exp(-2ne^{-(j+1)/n})-\exp(-2ne^{-j/n})\right]\\ &=\lim_{j\to+\infty}\left[\exp(-2ne^{-j/n})-\exp(-2ne^{-1/n})\right]\\ &=\left[1-\exp(-2ne^{-1/n})\right].
\end{align*} We also compute \begin{align*} \frac{\theta_{j,n}}{\eta_{j,n}}=\frac{\exp(-ne^{-(j+1)/n})}{\exp(-ne^{-j/n})}=\exp\left[e^{-j/n}n\left(1-e^{-1/n}\right)\right] \end{align*} And since by Lagrange's mean value theorem, for some $-1/n<c<0$, we have \begin{align*} 0\leq n\left[1-e^{-1/n}\right]= n\left[e^0-e^{-1/n}\right]\leq n e^c(0-(-1/n))=e^c<1 \end{align*} we obtain that for all $j\geq 1$ and $n\geq 1$ $$1\leq \frac{\theta_{j,n}}{\eta_{j,n}}\leq e$$ and thus $d_n\leq e\, c_n$. Finally from the inequalities above we have $$1-\exp(-2ne^{-1/n})=c_n+d_n\leq (1+e)c_n\leq 2(1+e)b_n,$$ hence $$b_n\geq \frac{1}{2(1+e)}\left[1-\exp(-2ne^{-1/n})\right]$$ and so $$\prod_{j=1}^{\infty}w_{j,n}\geq 1+\frac{1}{2(1+e)}\left[1-\exp(-2ne^{-1/n})\right],$$ hence does not converge to $1$.

- I have some problems with properly using the align environment, but I cannot figure out how to correct it. – user26565 Mar 12 '12 at 1:59
- The MathJax implementation here is somewhat idiosyncratic. To get line breaking to work in align environments, you need to either double the backslashes (\\\\ instead of \\) or wrap the whole \begin{align}...\end{align} in double dollar signs. See this meta question: Is Mathjax supposed to be 100% compatible with Latex? – Rahul Mar 12 '12 at 2:41
- Thank you Rahul! – user26565 Mar 12 '12 at 2:46
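A quick numerical sanity check of this conclusion (a hedged sketch: the truncation `jmax` and the sampled values of n are arbitrary choices; by symmetry in ±j, the product over j ∈ Z is the j = 0 term times the square of the product over j ≥ 1):

```python
import numpy as np

def a_n(n, jmax=200_000):
    j = np.arange(1, jmax + 1, dtype=float)
    num = 1.0 + np.exp(-2.0 * n * np.exp(-j / n))
    den = 1.0 + np.exp(-(1.0 + np.exp(-1.0 / n)) * n * np.exp(-j / n))
    # j = 0 term of the product over all of Z
    j0 = (1.0 + np.exp(-2.0 * n)) / (1.0 + np.exp(-(1.0 + np.exp(-1.0 / n)) * n))
    return j0 * np.prod(num / den) ** 2

# the answer's bound implies limsup a_n <= 1 / (1 + 1/(2*(1+e))) ≈ 0.881
for n in (10, 100, 1000):
    print(n, a_n(n))
```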
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9995537400245667, "perplexity": 523.3518001133416}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701160950.71/warc/CC-MAIN-20160205193920-00337-ip-10-236-182-209.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/relativity-relativistic-energy.153231/
# Relativity : relativistic energy

1. Jan 27, 2007

### Delzac

1. The problem statement, all variables and given/known data

Find the speed of a particle whose total energy is 3 times its rest energy.

2. Relevant equations

$$KE = \gamma mc^2 - mc^2$$

3. The attempt at a solution

i let total energy = 3mc^2 and then :

$$\gamma mc^2 = KE + mc^2$$
$$3mc^2 = KE + mc^2$$
$$v = \frac{\sqrt{3}}{2} c$$

Is this correct? or should i let $$\gamma mc^2 = 3mc^2$$ and work it out immediately? Any help will be appreciated.

2. Jan 27, 2007

### chanvincent

$$3mc^2 = KE + mc^2$$ is correct, but I dont see how this is connected to $$v = \frac{\sqrt{3}}{2} c$$, so I can't point out which part you did wrong... yes

3. Jan 27, 2007

### Meir Achuz

Use $1/\sqrt{1-v^2/c^2}=3$, and solve for $v$.

4. Jan 28, 2007

### Delzac

yeah, got it thanks, English problem. bah. :P
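For reference, the step Meir Achuz suggests works out as follows (note that the $\frac{\sqrt{3}}{2}c$ in the first post solves $\gamma = 2$, i.e., a total energy of $2mc^2$, not $3mc^2$):

```latex
\frac{1}{\sqrt{1-v^{2}/c^{2}}} = 3
\;\Longrightarrow\;
1-\frac{v^{2}}{c^{2}} = \frac{1}{9}
\;\Longrightarrow\;
v = \sqrt{\frac{8}{9}}\,c = \frac{2\sqrt{2}}{3}\,c \approx 0.943\,c .
```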
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9530020952224731, "perplexity": 3198.4130541835284}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886116921.7/warc/CC-MAIN-20170822221214-20170823001214-00578.warc.gz"}
http://math.stackexchange.com/questions/464828/show-r-frac-phin2-equiv-1-mod-n-for-a-primitive-root-r/464855
# Show $r^{\frac{\phi(n)}{2}} \equiv -1$ mod $n$ for a primitive root $r$.

I know that if $n$ has a primitive root, then $x^2 \equiv 1$ has $x \equiv \pm1$ as solutions. This can then be used to show that $r^{\frac{\phi(n)}{2}} \equiv -1$ because $r$ has order $\phi(n)$, and out of the two choices it cannot be $1$. But I later found out that the proof I had about $x^2 \equiv 1 \implies$ $x \equiv 1$ or $-1$ assumed exactly the fact that $r$ raised to half of its order is $-1$.

The result $r^{\frac{\phi(n)}{2}} \equiv -1$ seems extremely obvious, but I don't know where to start and I see it being assumed everywhere. One idea I have is to use Euler's theorem to show that $x \equiv \pm1$ for mod $p^t$ and mod $2, 4$. Then, use the Chinese remainder theorem to somehow reconstruct the solution modulo $n$. This basically exhausts all the cases in which $n$ has a primitive root. Is this the correct direction for solving it?

- You may know the general theorem that if $m$ is a positive integer that has a primitive root, and $a$ and $m$ are relatively prime, then the congruence $x^k\equiv a \pmod{m}$ has a solution iff $a^{\varphi(m)/\gcd(\varphi(m),k)}\equiv 1\pmod{m}$, and that if there is a solution there are $\gcd(k,\varphi(m))$ solutions. This is the case $a=1$, $k=2$. – André Nicolas Aug 11 '13 at 5:41
- Yes, you're absolutely right. Since we know $\pm1$ are solutions, they must also be all the solutions. – Alexander Chen Aug 11 '13 at 6:34

We want to prove that if $n\gt 2$ has a primitive root, then the only solutions of $x^2\equiv 1\pmod{n}$ are $x\equiv \pm 1\pmod{n}$. Certainly $x\equiv \pm 1\pmod{n}$ are solutions, and since $n\gt 2$ they are distinct. Let $g$ be a primitive root of $n$, and let $x=g^k$ be a solution of our congruence, where $k$ is one of $0$ to $\varphi(n)-1$. Then $g^{2k}\equiv 1 \pmod{n}$. It follows that $\varphi(n)$ divides $2k$. But $2k\lt 2\varphi(n)$. The only possibilities are $k=0$ and $2k=\varphi(n)$. The second possibility says that $k=\varphi(n)/2$.

I found a more intuitive way of looking at the problem. Given that $r$ is a primitive root mod $n$, we know $[r, r^2, r^3, \ldots, r^{\frac{\phi(n)}{2}}, \ldots, r^{\phi(n)}]$ is the reduced residue system. If $-1$ does not appear exactly at the center (which is position $\frac{\phi(n)}{2}$), we get a contradiction: the cyclic group generated by $r$ would have to repeat at some point earlier than the $\phi(n)$th term, so $r$ could not be a primitive root.
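A quick computational check of the claim (a sketch using SymPy; the listed moduli are an arbitrary sample of integers that possess primitive roots):

```python
from sympy.ntheory import primitive_root, totient

for n in [7, 9, 11, 22, 25, 54]:  # moduli of the forms 2, 4, p^t, 2p^t
    r = primitive_root(n)          # smallest primitive root of n
    phi = int(totient(n))
    # r**(phi/2) ≡ -1 (mod n), i.e., equals n - 1
    assert pow(r, phi // 2, n) == n - 1
    print(n, r, pow(r, phi // 2, n))
```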
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9848792552947998, "perplexity": 69.11515871639772}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500834883.60/warc/CC-MAIN-20140820021354-00316-ip-10-180-136-8.ec2.internal.warc.gz"}
https://openlib.tugraz.at/592d3afe5887a
• Author
  • Aistleitner, Christoph
  • Berkes, István
  • Seip, Kristian
• Title: GCD sums from Poisson integrals and systems of dilated functions
• Licence: CC BY
• Abstract: Upper bounds for GCD sums of the form $\sum_{k,{\ell}=1}^N\frac{(\gcd(n_k,n_{\ell}))^{2\alpha}}{(n_k n_{\ell})^\alpha}$ are proved, where $(n_k)_{1 \leq k \leq N}$ is any sequence of distinct positive integers and $0<\alpha \le 1$; the estimate for $\alpha=1/2$ solves in particular a problem of Dyer and Harman from 1986, and the estimates are optimal except possibly for $\alpha=1/2$. The method of proof is based on identifying the sum as a certain Poisson integral on a polydisc; as a byproduct, estimates for the largest eigenvalues of the associated GCD matrices are also found. The bounds for such GCD sums are used to establish a Carleson–Hunt-type inequality for systems of dilated functions of bounded variation or belonging to $\operatorname{Lip}_{1/2}$, a result that in turn settles two longstanding problems on the a.e. behavior of systems of dilated functions: the a.e. growth of sums of the form $\sum_{k=1}^N f(n_k x)$ and the a.e. convergence of $\sum_{k=1}^\infty c_k f(n_kx)$ when $f$ is 1-periodic and of bounded variation or in $\operatorname{Lip}_{1/2}$.
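For readers who want a feel for the object being bounded, here is a hedged sketch that simply evaluates the GCD sum numerically for a random choice of $(n_k)$ (the sample size and range are arbitrary; this illustrates the quantity, not the theorem):

```python
from math import gcd
import random

def gcd_sum(ns, alpha):
    """The double sum over k, l of gcd(n_k, n_l)^(2*alpha) / (n_k * n_l)^alpha."""
    return sum((gcd(a, b) ** (2 * alpha)) / ((a * b) ** alpha)
               for a in ns for b in ns)

ns = random.sample(range(1, 10**6), 200)  # 200 distinct positive integers
for alpha in (0.5, 0.75, 1.0):
    print(alpha, gcd_sum(ns, alpha))
```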
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.981035053730011, "perplexity": 505.8356969626474}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488269939.53/warc/CC-MAIN-20210621085922-20210621115922-00518.warc.gz"}
http://math.stackexchange.com/questions/138692/two-opposite-events-fill-whole-probability-event-space-a-process-selects-them
# Two opposite events fill whole probability event space, a process selects them [duplicate]

Possible Duplicate: Probability of two opposite events

Suppose there is a string of eight bits, e.g.:

00100110

Bits are randomly chosen from the string. The location of a bit (in the string) does not influence its selection probability.

Probability of choosing $0$: $p_0 = \frac{5}{8} = 0.625$
Probability of choosing $1$: $p_1 = \frac{3}{8} = 0.375$

Suppose there is an ongoing process of selecting $0$ and $1$. So at each moment, $0$ or $1$ is selected, and represents the current state C of the process. The probability of choosing the opposite state (to the current state C), and then again the opposite state – such a complex event is called a cycle – is given by:

$$p_\text{cycle} = p_0 \cdot p_1 \qquad (1)$$

Question: define $p_a = p_{cycle}$. Then, the opposite event is $p_b = 1 - p_{cycle}$. We have again two opposite events. How will $p_b$ look? I.e. what sequences of $0$ and $1$ will belong to events of kind A and events of kind B. I have a problem defining the B-set.

- Looks okay to me, assuming that the choices are independent. – Brian M. Scott Apr 30 '12 at 2:10
- If you're not happy with the answers you got to the earlier version of this question, don't go posting a new one - just edit the old one. – Gerry Myerson Apr 30 '12 at 2:41
- @BrianM.Scott, Gerry: I have modified the question. It is now clearly different. – Mooncer Apr 30 '12 at 4:35

## marked as duplicate by Dilip Sarwate, Gerry Myerson, William, Thomas, Noah Snyder Oct 4 '12 at 22:53

## 1 Answer

This is an answer to the updated question. Let $C$ be the current state, either $0$ or $1$. Let $\overline C$ be the opposite state, either $1$ or $0$, respectively. Let $E$ be the event not-cycle. Then $E$ occurs when the next choice is $C$, or when the next choice is $\overline C$ and the choice after that is also $\overline C$. If $C=0$, $E$ occurs if the next choice is $0$, or if the next two choices are both $1$; the probability of this is $p_0+p_1^2$. If $C=1$, $E$ occurs if the next choice is $1$, or if the next two choices are both $0$; the probability of this is $p_1+p_0^2$. Now the probability of being in state $0$ at any given time is $p_0$, and the probability of being in state $1$ is $p_1$. Thus, the probability of being in state $0$ and having $E$ occur is $p_0(p_0+p_1^2)$, and the probability of being in state $1$ and having $E$ occur is $p_1(p_1+p_0^2)$. Combining the two, we find that the probability of $E$ is \begin{align*}p_0(p_0+p_1^2)+p_1(p_1+p_0^2)&=p_0^2+p_0p_1^2+p_1^2+p_1p_0^2\\ &=p_0^2+p_0p_1(p_0+p_1)+p_1^2\\ &=p_0^2+p_0p_1+p_1^2\;, \end{align*} since $p_0+p_1=1$. And this agrees with your calculation of $p_0p_1$ as the probability of a cycle, since $$(p_0^2+p_0p_1+p_1^2)+p_0p_1=p_0^2+2p_0p_1+p_1^2=(p_0+p_1)^2=1\;:$$ the probabilities of cycle and not-cycle must add up to $1$.

- I will use the regex-like symbol "+" to denote "one or more". I think that $p_1p_0$ means (for $C=0$) "wait for 1 (i.e. $\overline C$), then wait for 0", and matches a sequence of randomly chosen bits $0^+1^+0$. For $\overline C$ it is $1^+0^+1$. Based on your answer, the not-cycle is: $0^+$, $0^+1^+$, $1^+$, $1^+0^+$? – Mooncer Apr 30 '12 at 16:11
- @Steffen: No, $p_1p_0$ is the probability that the next two choices are $1$ and $0$ in that order; no waiting is involved. – Brian M.
Scott Apr 30 '12 at 16:13
- If a process would select $0$ and $1$ with $p_0, p_1$, and we would observe its output string, then the event "1 observed" would mean that we wait – possibly observing $0$? And $1$ would be observed ("inside" the zeroes) with prob. $p_1$? – Mooncer Apr 30 '12 at 17:06
- @Steffen: All of the calculations above, both yours and mine, refer to observing the very next output or the next two outputs; no waiting is involved. – Brian M. Scott Apr 30 '12 at 17:09
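A Monte Carlo check of the two probabilities discussed above (a sketch; the trial count is an arbitrary choice):

```python
import random

p0, p1 = 5 / 8, 3 / 8
bit = lambda: 0 if random.random() < p0 else 1  # draw a bit with the string's frequencies

trials, cycles = 1_000_000, 0
for _ in range(trials):
    c = bit()                            # current state C
    first, second = bit(), bit()         # the next two independent choices
    if first != c and second != first:   # opposite of C, then opposite again: a cycle
        cycles += 1

print("empirical P(cycle):", cycles / trials)   # should be close to p0 * p1 = 0.234375
print("p0 * p1           :", p0 * p1)
print("empirical P(not-cycle):", 1 - cycles / trials)  # p0^2 + p0*p1 + p1^2
```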
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9532747864723206, "perplexity": 362.97543312280857}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394023122061/warc/CC-MAIN-20140305123842-00017-ip-10-183-142-35.ec2.internal.warc.gz"}
https://www.buecher.de/shop/englische-buecher/theory-application-and-implementation-of-monte-carlo-method-in-science-and-technology/gebundenes-buch/products_products/detail/prod_id/60131485/
Hardcover

Product description: The Monte Carlo method is a numerical technique to model the probability of all possible outcomes in a process that cannot easily be predicted due to the interference of random variables. It is a technique used to understand the impact of risk, uncertainty, and ambiguity in forecasting models. However, this technique is complicated by the amount of computer time required to achieve sufficient precision in the simulations and evaluate their accuracy. This book discusses the general principles of the Monte Carlo method with an emphasis on techniques to decrease simulation time and increase accuracy.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9789184331893921, "perplexity": 241.72510207887035}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500365.52/warc/CC-MAIN-20230206212647-20230207002647-00473.warc.gz"}
https://arizona.pure.elsevier.com/en/publications/on-the-growth-of-a-superlinear-preferential-attachment-scheme-2
# On the growth of a superlinear preferential attachment scheme

Research output: Contribution to journal › Article › peer-review

## Abstract

We consider an evolving preferential attachment random graph model where at discrete times a new node is attached to an old node, selected with probability proportional to a superlinear function of its degree. For such schemes, it is known that the graph evolution condenses, that is, a.s. in the limit graph there will be a single random node with infinite degree, while all others have finite degree. In this note, we establish a.s. law of large numbers type limits and fluctuation results, as n ↑ ∞, for the counts of the number of nodes with degree k ≥ 1 at time n ≥ 1. These limits rigorously verify and extend a physical picture of Krapivsky, Redner and Leyvraz (2000) on how the condensation arises with respect to the degree distribution.

MSC: 60G20, 05C20, 37H10

Original language: English (US). Journal: Unknown Journal. Published: Apr 18 2017

## Keywords

• Degree distribution • Fluctuations • Growth • Preferential attachment • Random graphs • Superlinear • General
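The condensation described above is easy to observe numerically. The following sketch (illustrative only; it is not code from the paper) grows a graph in which each new node attaches to an existing node with probability proportional to degree^p. In the Krapivsky-Redner-Leyvraz picture cited in the abstract, for p > 2 a single node ends up attached to nearly every other node, while almost all other nodes keep degree 1.

```python
import random

def superlinear_pa(n_steps, p=2.5, seed=0):
    """Superlinear preferential attachment: a new node attaches to old node i
    with probability proportional to degrees[i] ** p."""
    rng = random.Random(seed)
    degrees = [1, 1]                       # seed graph: one edge between nodes 0 and 1
    for _ in range(n_steps):
        weights = [d ** p for d in degrees]
        r = rng.uniform(0, sum(weights))
        acc = 0.0
        for i, w in enumerate(weights):
            acc += w
            if r <= acc:
                degrees[i] += 1            # the chosen old node gains the new edge
                break
        degrees.append(1)                  # the new node enters with degree 1
    return degrees

degs = superlinear_pa(5_000)
print("largest degree:", max(degs))                                  # one huge hub
print("fraction of degree-1 nodes:", sum(d == 1 for d in degs) / len(degs))
```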
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9463922381401062, "perplexity": 1719.4920465032426}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178349708.2/warc/CC-MAIN-20210224223004-20210225013004-00614.warc.gz"}
https://math.stackexchange.com/questions/2811249/derivative-of-the-nuclear-norm-left-xa-right-with-respect-to-x
# Derivative of the nuclear norm ${\left\| {XA} \right\|_*}$ with respect to $X$

The nuclear norm (also known as the trace norm) is defined as $${\left\| M \right\|_*} = \mbox{tr} \left( {\sqrt {{M^T}M} } \right) = \sum\limits_{i = 1}^{\min \left\{ {m,n} \right\}} {{\sigma _i}\left( M \right)}$$ where ${\sigma _i}\left( M \right)$ denotes the $i$-th singular value of $M$. My question is how to compute the derivative of ${\left\| {XA} \right\|_*}$ with respect to $X$, i.e., $$\frac{{\partial {{\left\| {XA} \right\|}_*}}}{{\partial X}}$$ In fact, I want to use it for a gradient descent optimization algorithm. Note that there is a similar question, according to which the sub-gradient of ${\left\| X \right\|_*}$ is $U{V^T}$, where $U\Sigma {V^T}$ is the SVD decomposition of $X$. I hope this is helpful. Thanks a lot for your help.

• Did you read Michael Grant's answer? – Rodrigo de Azevedo Jun 7 '18 at 11:41
• @RodrigodeAzevedo Thanks for your suggestion. I just read Michael Grant's answer right now. Although I haven't made it clear, actually, I want to use ${\left\| {XA} \right\|_*}$ as a loss function in a deep neural network (DNN). As is known, the optimization algorithm for DNNs can be easily implemented if we can compute the gradient. There is a recent paper which optimizes ${\left\| {X} \right\|_*}$ using its sub-gradient $UV^T$, and I think it may be appropriate to follow this work. – Jack Jun 7 '18 at 12:26

Let $$Y=XA$$ Write the norm in terms of this new variable, then find the differential and do a change of variables from $Y\rightarrow X$ to obtain the desired gradient:
$$\begin{aligned} \phi&=\|Y\|_* \\ d\phi &= (YY^T)^{-\tfrac{1}{2}}Y:dY \\ &= (YY^T)^{-\tfrac{1}{2}}Y:dX\,A \\ &= (YY^T)^{-\tfrac{1}{2}}YA^T:dX \\ \frac{\partial\phi}{\partial X} &= (YY^T)^{-\tfrac{1}{2}}YA^T \\ &= Y(Y^TY)^{-\tfrac{1}{2}}A^T \end{aligned}$$
There are two ways of writing the inverse of the square root, only one of which makes sense when $Y$ is rectangular (full column rank vs. full row rank).

• Dear greg, thank you so much for your answer. Although I have no ability to verify your answer, it seems to be correct. Besides, I have an additional problem: is it true that $Y{({Y^T}Y)^{ - {\textstyle{1 \over 2}}}}{A^T} = \tilde U{\tilde V^T}A^T$, where $Y = \tilde U\tilde \Sigma {\tilde V^T}$, as in the answer to another similar question? – Jack Jun 7 '18 at 13:52
• @Jack Yes, if you know the SVD of $Y$, then the solution can be simplified to $$\frac{\partial\phi}{\partial X} = UV^TA^T$$ – greg Jun 7 '18 at 16:41
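Greg's result is easy to check numerically. Here is a small NumPy sketch (added for illustration; the shapes are arbitrary) comparing $UV^TA^T$ with a central finite-difference gradient of $\|XA\|_*$. The comparison is valid where the norm is differentiable, i.e. where the singular values of $XA$ are positive and distinct.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 5))
A = rng.standard_normal((5, 3))

def nuclear_norm(M):
    return np.linalg.svd(M, compute_uv=False).sum()

# gradient from the accepted answer: with Y = XA = U S V^T (thin SVD),
# d||XA||_* / dX = U V^T A^T
U, _, Vt = np.linalg.svd(X @ A, full_matrices=False)
G = U @ Vt @ A.T

# central finite-difference check, entry by entry
eps, G_num = 1e-6, np.zeros_like(X)
for i in range(X.shape[0]):
    for j in range(X.shape[1]):
        E = np.zeros_like(X)
        E[i, j] = eps
        G_num[i, j] = (nuclear_norm((X + E) @ A) - nuclear_norm((X - E) @ A)) / (2 * eps)

print(np.max(np.abs(G - G_num)))   # ~1e-9 for a generic random instance
```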
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9979350566864014, "perplexity": 184.68731684016046}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670156.86/warc/CC-MAIN-20191119144618-20191119172618-00526.warc.gz"}
https://hsm.stackexchange.com/questions/6150/what-results-did-c-f-gauss-add-to-euler-s-dioptrics/6155
# What results did C. F. Gauss add to Euler’s dioptrics?

I’m a fan of C. F. Gauss, but I have to be objective: I don’t understand why the classical theory of lenses is called “Gaussian optics”. I think this because it seems that almost all the results of Gauss’s treatise of 1840 were already known: the lensmaker's formula had been known since the 17th century, C. Huygens clarified many aspects of geometrical optics, and L. Euler published an extensive study of the design of optical instruments in his Dioptrica. So what is special and groundbreaking about Gauss’s contributions?

On this occasion, I’ll be glad if anyone can give a concise summary of Gauss’s contributions to optics, his published as well as unpublished manuscripts.

According to his own claim he had possessed the results for forty or forty-five years, but had always hesitated to publish such elementary meditations. A work of Bessel on the determination of the focal distance of the Königsberg heliometer gave him the impetus necessary for publication. Bessel’s method mistakenly assumed that the usual lens formula $$\frac1g+\frac1b=\frac1f$$ is correct for lenses of finite thickness. As a consequence of this mistake, Bessel greatly underestimated the possible error of his measurement. The Dioptrische Untersuchungen [gives] formulas for a simple lens of nonvanishing thickness [$\ldots$] While Bessel estimated the error of his result at 1/75,000, Gauss showed that it amounted to 1/1,300.
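For context (this formula is standard optics, not part of the quoted answer): the thick-lens version of the lensmaker's equation makes the thickness effect Bessel neglected explicit,

$$\frac{1}{f} = (n-1)\left[\frac{1}{R_1} - \frac{1}{R_2} + \frac{(n-1)\,d}{n\,R_1 R_2}\right],$$

where $n$ is the refractive index, $R_1, R_2$ are the surface radii, and $d$ is the lens thickness; setting $d = 0$ recovers the thin-lens formula that Bessel's method implicitly assumed.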
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9062179923057556, "perplexity": 1452.3761974921003}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141188800.15/warc/CC-MAIN-20201126142720-20201126172720-00114.warc.gz"}
http://www.maa.org/programs/faculty-and-departments/course-communities/power-functions?device=mobile
# Power Functions

Two adjacent windows display graphs of (1) a function $$f$$ and a tangent line, and (2) the derivative $$f'$$. From a drop-down menu, $$f$$ is defined to be one of seven possible power functions, and sliders allow the user to vary the parameters and the point at which the derivative is evaluated. There are specific suggestions for exploration with the applet.

Identifier: http://calculusapplets.com/power.html
Rating:
Creator(s): Thomas S. Downey
Cataloger: Bruce Yoshiwara
Publisher: CalculusApplets.com
Rights: Thomas S. Downey, Creative Commons
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8592694401741028, "perplexity": 2285.6037255274573}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802772125.148/warc/CC-MAIN-20141217075252-00156-ip-10-231-17-201.ec2.internal.warc.gz"}
http://mathhelpforum.com/trigonometry/31494-how-do-you-choose-half-angle-formula-tan-use.html
# Math Help - How do you choose which half-angle formula (tan) to use?

1. ## How do you choose which half-angle formula (tan) to use?

Mathwords: Half Angle Identities. For tangent of u/2, it shows two formulas. How do you know which one to use? Or do they equal the same thing?

(1 - cos u)/sin u or sin u/(1 + cos u)

2. Originally Posted by algebra2
Mathwords: Half Angle Identities. For tangent of u/2, it shows two formulas. How do you know which one to use? Or do they equal the same thing?
(1 - cos u)/sin u or sin u/(1 + cos u)

They are all the same. (Note that all of them carry the +/- symbol on them. The page wasn't too clear about that.) Which you pick is determined by which is the most useful to you.

-Dan
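For completeness (added here, not part of the original thread): the two expressions agree wherever both are defined, since

$$\frac{1-\cos u}{\sin u}=\frac{(1-\cos u)(1+\cos u)}{\sin u\,(1+\cos u)}=\frac{\sin^2 u}{\sin u\,(1+\cos u)}=\frac{\sin u}{1+\cos u},$$

and neither of these two forms actually needs a $\pm$ sign; only the square-root form $\pm\sqrt{\tfrac{1-\cos u}{1+\cos u}}$ does, with the sign fixed by the quadrant of $u/2$.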
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8959841728210449, "perplexity": 1497.4238849834433}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657133078.21/warc/CC-MAIN-20140914011213-00163-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
https://brilliant.org/problems/is-vieta-useful/
# Is Vieta Useful?

Algebra Level 4

$\large x^3-3x+n=0$

Find the product of all values of $$n$$ for which all solutions of the equation above are integers.
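As a quick check (a brute-force sketch added here, not the site's official solution): by Vieta, integer roots $a,b,c$ of $x^3 - 3x + n$ must satisfy $a+b+c=0$, $ab+bc+ca=-3$, and $abc=-n$, so a small search over integer pairs suffices.

```python
# Vieta's relations for x^3 + 0*x^2 - 3x + n with roots a, b, c:
#   a + b + c = 0,   ab + bc + ca = -3,   abc = -n.
# Substituting c = -(a+b) turns the second relation into a^2 + a*b + b^2 = 3,
# which bounds the roots, so a small search window is enough.
ns = set()
for a in range(-10, 11):
    for b in range(-10, 11):
        c = -(a + b)                    # enforce a + b + c = 0
        if a * b + b * c + c * a == -3:
            ns.add(-a * b * c)          # n = -abc
print(sorted(ns))                       # [-2, 2] -> the requested product is -4
```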
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8010942339897156, "perplexity": 499.2720301836483}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284411.66/warc/CC-MAIN-20170116095124-00023-ip-10-171-10-70.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/electrodynamics-in-robertson-walker-spacetime.819919/
# Electrodynamics in Robertson Walker spacetime

• #1

## Main Question or Discussion Point

I've a (perhaps somewhat stupid) question. Is there a good source where one finds the plane-wave solutions (or what comes closest to them) of electrodynamics for the closed and open (non-flat) Friedmann-Lemaitre-Robertson-Walker metric? I've tried this myself for a while, because to my surprise I couldn't find it anywhere in the literature. The background is that I don't like the usual "derivation" of the redshift formula in terms of photons, because I don't think that photons are easy to describe for a general Robertson-Walker space-time. The best I could come up with, and which is what I try to explain my students in the next recitation session for the cosmology lecture of my boss, is the standard treatment of the redshift in terms of the leading-order eikonal solution for the free Maxwell equations in covariant Lorenz gauge, which leads to the same formula as the naive argument with the "photon"; this is clear, because the eikonal approach is the way to go from wave to ray optics and thus to a naive photon picture. Further, via an argument using the equivalence principle, it is also pretty straightforward to derive the relation between distance and apparent magnitude of far-distant objects, which is important for the distance-redshift measurement (the Hubble Law) and its relation to the history of the universe through the Friedmann equations.

What I'd like to understand for myself is whether there is a simple solution close to a plane wave in Minkowski space for the general Robertson-Walker spacetime. So far I couldn't find one (neither in the literature nor by myself). My status of understanding is as follows. You start from the action for the electromagnetic field,
$$\mathcal{L}=-\frac{1}{4} \int \mathrm{d}^4 q \sqrt{-g} F_{\mu \nu} F^{\mu \nu},$$
where
$$F_{\mu \nu}=\mathrm{D}_{\mu} A_{\nu} - \mathrm{D}_{\nu} A_{\mu}=\partial_{\mu} A_{\nu} - \partial_{\nu} A_{\mu}.$$
The first thing one observes is that this action is conformally invariant, i.e., invariant under the transformation
$$q^{\mu} \rightarrow q^{\mu}, \quad g_{\mu \nu} \rightarrow \lambda g_{\mu \nu}, \quad g^{\mu \nu} \rightarrow \frac{1}{\lambda} g^{\mu \nu},$$
where ##\lambda=\lambda(q)## is an arbitrary scalar field. Now using the FLRW metric with conformal time one has
$$\mathrm{d} s^2=a^2(\tau) [\mathrm{d} \tau^2 -\mathrm{d} \chi^2 -S_K^2(\chi) (\mathrm{d} \vartheta^2 + \sin^2 \vartheta \mathrm{d} \varphi^2)]$$
with
$$S_K=\begin{cases} \sin \chi & \text{for} \quad K=+1, \\ \chi & \text{for} \quad K=0,\\ \sinh \chi & \text{for} \quad K=-1. \end{cases}$$
Of course you can as well use any of the other forms for the spatial coordinates, which doesn't change much with my problem, as far as I can see at the moment. Now it's clear that the electrodynamics in these coordinates is independent of the scale parameter ##a(\tau)##, and for this purpose one can work with a fictitious spacetime with a static metric
$$\mathrm{d} \tilde{s}^2= \mathrm{d} \tau^2 -\mathrm{d} \chi^2 -S_K^2(\chi) (\mathrm{d} \vartheta^2 + \sin^2 \vartheta \mathrm{d} \varphi^2).$$
This implies that in these coordinates, for the case of a flat universe, ##K=0##, it's all very simple, because then this is simply the Minkowski metric with ##(\chi,\vartheta,\varphi)## standard spherical coordinates for the spatial part.
You can as well work with Cartesian coordinates, and immediately you have the plane-wave solutions
$$A_{\mu}(x)=a_{\mu} \exp[-\mathrm{i} \omega (\tau-z)], \quad a_{\mu}=(0,a_x,a_y,0).$$
Then, going to comoving coordinates, i.e., to the time coordinate for which the FLRW metric becomes
$$\mathrm{d} s^2=\mathrm{d} t^2 -a^2(t) [\mathrm{d} \chi^2 + S_K^2(\chi) \mathrm{d} \vec{\Omega}^2],$$
one has
$$\mathrm{d} \tau=\frac{\mathrm{d} t}{a(t)},$$
and thus from the phase of the flat-space plane-wave solution one immediately finds the standard formula for the redshift
$$\omega_e a(t_e)=\omega_o a(t_o),$$
where the subscripts stand for emission and observation, leading to the redshift
$$1+z=\frac{\omega_e}{\omega_o}=\frac{a(t_o)}{a(t_e)},$$
which one also gets from the eikonal approximation in the non-flat cases, ##K=\pm 1##. If one tries to find an exact solution of the free Maxwell equations in these cases, it becomes amazingly complicated, even taking into account the fact that of course you can also work with the fictitious static metric with the conformal time. I thought maybe one can use the alternative spatial coordinate ##\rho##, for which the FLRW metric reads
$$\mathrm{d} s^2 = a^2(\tau) [\mathrm{d} \tau^2 - \frac{1}{(1+K \rho^2/4)^2}(\mathrm{d} \rho^2 + \rho^2 \mathrm{d} \vec{\Omega}^2) ],$$
which you can obviously rewrite into a form which comes very close to Cartesian coordinates,
$$\mathrm{d} s^2=a^2(\tau) [\mathrm{d} \tau^2 - \frac{1}{(1+K (x^2+y^2+z^2)/4)^2} \mathrm{d} \vec{x}^2].$$
Of course, for ##K=0## this gives the Minkowski metric for the static space-time useful for the free Maxwell equations, but for ##K \in \{\pm 1 \}## I don't see a simple plane-wave-like exact solution of the Maxwell equations. So, if somebody knows a paper or book dealing with such questions, I'd be very interested.

• #3

Thanks for the quick reply. I know the second book, but this deals with exact solutions of Einstein's field equations of gravitation; what I look for is an exact solution of the Maxwell equations in a FLRW background spacetime. I'll have a look whether I can find something about this in these books.

• #5

Thanks a lot. I'll have a careful look at these papers.

• #6 George Jones (Staff Emeritus)

I don't have any answers, but I do have some questions.

> Thanks for the quick reply. I know the second book, but this deals with exact solutions of Einstein's field equations of gravitation; what I look for is an exact solution of the Maxwell equations in a FLRW background spacetime. I'll have a look whether I can find something about this in these books.

I don't quite understand the above.

> What I'd like to understand for myself is whether there is a simple solution close to a plane wave in Minkowski space

Is this actually a solution to Einstein's equation, or do we assume that the plane wave rides on top of Minkowski spacetime "for free", i.e., that we don't take account of the electromagnetic wave's energy-momentum tensor in Einstein's equation?

> for the general Robertson-Walker spacetime.
Similarly, are you searching for:

1) "plane waves" (for comoving observers) that don't contribute to the stress-energy tensor;
2) an electrovac solution to Einstein's equation that has the same symmetries as an FLRW spacetime (same Killing vectors), and that probably doesn't exist;
3) an electrovac solution to Einstein's equation that is a "slight" perturbation, due to the electromagnetic wave's energy-momentum tensor, and that "looks like" plane waves for comoving observers?

I think that you want 1), but I am not quite sure what this means. Do you want a plane wave in coordinates more tied to what comoving observers see, like Riemann or Fermi normal coordinates? The literature probably concentrates on 3); e.g., the paper that WannabeNewton referenced writes:

> The generic anisotropy of the electromagnetic energy-momentum tensor makes the Maxwell field incompatible with the high symmetry of the FRW spacetime. The implication is that the simplest models where one can study cosmological electromagnetic fields are the perturbed Friedmann universes.

• #7

I think I didn't make my problem clear enough. It's much simpler than what you suggest: I simply look for the solution of the Maxwell equations in a given (fixed) FLRW spacetime. I think I've found another paper by Googling which seems to provide exactly this, making use of the Debye potentials:

J. M. Cohen, L. S. Kegeles, Electromagnetic fields in curved spaces: A constructive procedure, PRD 10, 1070 (1974) http://dx.doi.org/10.1103/PhysRevD.10.1070

• #8 bcrowell (Staff Emeritus)

> The background is that I don't like the usual "derivation" of the redshift formula in terms of photons, because I don't think that photons are easy to describe for a general Robertson-Walker space-time.

Could you explain more what you think the issue is? It seems like a non-issue to me.

• #9

Well, quantum field theory in curved spacetime is a pretty delicate subject, and I think it's overkill, because the redshift formula ##\omega a=\text{const}## is easily derived through the eikonal approximation of the em. field in curved spacetime,
$$g^{\mu \nu} \partial_{\mu} \psi \partial_{\nu} \psi=0.$$
For the FLRW metric with the conformal time, for radial rays this reads
$$(\partial_{\tau} \psi)^2-(\partial_{\chi} \psi)^2=0.$$
For a ray moving from a distant source radially to the origin, where the observer is located, this means
$$\partial_{\tau} \psi=-\partial_{\chi} \psi.$$
Since in the eikonal equation the overall scale factor ##a(\tau)## of the metric can be divided out, the equation is independent of ##\tau##, and thus the canonical momentum ##\omega=\partial_{\tau} \psi=\omega(\chi)## depends at most on ##\chi##. This implies that
$$\psi=\omega(\chi) \tau+\psi_2(\chi).$$
The eikonal equation thus gives
$$\partial_{\chi} \psi=\omega'(\chi) \tau+\psi_2'(\chi)=-\partial_{\tau} \psi=-\omega(\chi).$$
Since the right-hand side is independent of ##\tau##, we find ##\omega'=0## and thus ##\omega=\text{const}##.
This gives finally
$$\psi=\omega(\tau-\chi)+\text{const}.$$
So in leading-order eikonal approximation you get
$$A_{\mu}(\tau,\chi)=a_{\mu} \exp[-\mathrm{i} \omega (\tau-\chi)].$$
On the other hand
$$\mathrm{d} \tau=\frac{1}{a(t)} \mathrm{d} t.$$
Thus in comoving coordinates
$$\tilde{\omega}=\frac{\partial \psi}{\partial t} = \frac{\partial \psi}{\partial \tau} \frac{\mathrm{d} \tau}{\mathrm{d} t}=\frac{\omega}{a(t)}$$
or
$$\tilde{\omega} a(t)=\text{const}.$$
Thus if light is emitted at ##\chi=\chi_e## at time ##t_e## with frequency ##\tilde{\omega}_e##, the observer at ##\chi=0## observes the light at a later time ##t_o## with another frequency ##\tilde{\omega}_o##. According to the above derivation we have
$$\tilde{\omega}_e a(t_e)=\tilde{\omega}_o a(t_o) \; \Rightarrow \; \tilde{\omega}_o=\tilde{\omega}_e \frac{a(t_e)}{a(t_o)}.$$
Since the universe is expanding, this makes ##\tilde{\omega}_o<\tilde{\omega}_e##, i.e., it implies a redshift of the spectral lines of light emitted from distant objects, the Hubble redshift. The above derivation shows that the same holds for the wave number ##k=2 \pi/\lambda=\omega##, and thus the wave number is also redshifted in comoving coordinates, i.e., a fundamental observer finds
$$\lambda_o=\lambda_e \frac{a(t_o)}{a(t_e)}=:(1+z) \lambda_e.$$
For a given ##\chi## the times ##t_o## and ##t_e## can be calculated from radial null geodesics, which describe the propagation of the surfaces of constant phase of the light wave. Since for the comoving coordinates the geodesics are simply given by ##\mathrm{d} s^2=0## (as can be derived from the geodesic equation, using the FLRW metric in the form with the time coordinate ##t##), i.e.,
$$\mathrm{d} t=-a(t) \mathrm{d} \chi \; \Rightarrow \; \chi_e=\int_{t_e}^{t_o} \mathrm{d} t \frac{1}{a(t)}.$$
Now all this uses the eikonal approximation for the electromagnetic waves. So I've asked myself in how far one can derive deviations from this behavior for the redshift in the case of ##K \neq 0##. For our universe this seems not to be very relevant, since according to the standard model of cosmology the universe (large-scale coarse-grained) seems to be described by a flat FLRW metric with ##K=0##, where the electromagnetic field obeys the flat-space Maxwell equations in conformal coordinates; thus plane waves ##A_{\mu} \propto \exp(-\mathrm{i} \omega (\tau-\chi))## are indeed exact solutions, and the above considerations are valid for the strict Maxwell equations.

• #10 bcrowell (Staff Emeritus)

Is this in reply to my #8? You don't need to use quantum mechanics. You can just describe a classical wave packet.

• #11 WannabeNewton

As far as the redshift goes I don't quite understand the issue either. The eikonal approximation is valid when ##K\neq 0## as well, so long as you consider modes whose wavelength is much smaller than the radius of curvature of the space-time, a condition which must be met in the ##K = 0## case as well. The redshift formula ##\omega a = \text{const.}## is valid in both the ##K = 0## and ##K \neq 0## cases, a fact which is extremely easy to derive if one uses Killing vectors. Cf. Wald pp. 103-104.
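To put a number on the formula ##1+z = a(t_o)/a(t_e)## and the comoving distance integral, here is an illustrative sketch (with an assumed matter-dominated scale factor; nothing in it is computed in the thread itself):

```python
import numpy as np

t_o = 1.0                                  # observation time (arbitrary units, assumed)
a = lambda t: (t / t_o) ** (2.0 / 3.0)     # assumed matter-dominated scale factor

t_e = 0.1 * t_o                            # assumed emission time
z = a(t_o) / a(t_e) - 1.0                  # 1 + z = a(t_o) / a(t_e)
print(f"z = {z:.3f}")                      # ~3.642

# comoving coordinate of the source: chi_e = integral_{t_e}^{t_o} dt / a(t)
ts = np.linspace(t_e, t_o, 20_001)
f = 1.0 / a(ts)
chi_e = float(np.sum((f[:-1] + f[1:]) / 2 * np.diff(ts)))   # trapezoid rule
print(f"chi_e = {chi_e:.4f}")              # analytic: 3*(1 - 0.1**(1/3)) ~ 1.6075
```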
• #12

Sure, in the eikonal approximation it's very straightforward, as shown in my previous posting. I just thought it would be as easy to construct an exact solution of the Maxwell equations in a FLRW spacetime which corresponds to a plane wave in flat spacetime. For ##K=0## it's simple, because there the electrodynamics is as in flat Minkowski space due to the conformal symmetry of the free Maxwell equations, and with the conformal time the FLRW metric is in this case conformally flat, as also shown above. For the non-flat cases it's not so easy. Yesterday night I read the above quoted article,

J. M. Cohen, L. S. Kegeles, Electromagnetic fields in curved spaces: A constructive procedure, PRD 10, 1070 (1974) http://dx.doi.org/10.1103/PhysRevD.10.1070

The trick is to work with the gauge-invariant fields and to use Debye potentials. For the FLRW metric they obey simple separable wave equations, with the conformal time entering in the form ##\partial_{\tau}^2 \chi##, and thus in all three cases of the curvature you have multipole expansions for the free electromagnetic field ##\propto \exp(-\mathrm{i} \omega \tau)##. This is exact, and thus the redshift formula is established. Of course, I've not fully understood the details yet, because unfortunately my versatility with the Cartan calculus is a bit rusty, but that seems to be the most elegant way to understand what's going on.

On Friday I figured out another way, based on the very nice treatment of electrodynamics in curved spacetime in Landau-Lifshitz vol. 2. There, everything is formulated in the 1+3-dimensional formalism, making the entire set of equations look like the Maxwell equations in three-dimensional form. There one also establishes quite easily that for the FLRW metric, using the conformal time as a coordinate, the electric and magnetic field components all obey wave equations with ##\partial_{\tau}^2 \vec{E}## plus purely spatial derivatives. From this it's also immediately clear that the usual redshift formula is exact. Perhaps one can work out the complete solution using the Debye potentials for ##\vec{B}## in a much more naive way than in the above cited paper, analogous to this technique in Minkowski space.

The derivation in Wald also uses the eikonal approximation. The trick with the Killing vectors is of course a very appealing approach.

• #13 ChrisVer

I wonder... can't you derive for a general metric (so for an FRW metric too) the Maxwell equations for the EM wave?

• #14 hunt_mat (Homework Helper)

Here may be a naive suggestion (since my GR is very rusty): you know the conditions used to derive the FRW metric, right? So why not just apply those conditions with the electromagnetic tensor as part of Einstein's equations?

• #15

> I wonder... can't you derive for a general metric (so for an FRW metric too) the Maxwell equations for the EM wave?

Sure. That's no big problem. You just use the generally covariant action for the free Maxwell field,
$$A=-\frac{1}{4} \int \mathrm{d}^4 q \sqrt{-g} F_{\mu \nu} F^{\mu \nu}, \quad F_{\mu \nu}=\nabla_{\mu} A_{\nu}-\nabla_{\nu} A_{\mu}.$$
The field equations are
$$\nabla_{\mu} F^{\mu \nu}=0, \quad \epsilon^{\mu \nu \rho \sigma} \nabla_{\nu} F_{\rho \sigma}=0,$$
where the latter (the homogeneous Maxwell equations) is identically fulfilled through the ansatz via the vector potential ##A_{\mu}##.
The trouble is that even in a case as symmetric as the FLRW metric, it's not trivial to find an ansatz for the potentials for which the covariant D'Alembertian applied to the vector field yields separate equations as in flat space. The trick is to introduce an appropriate generalization of the Debye potentials, as shown in the cited paper. I'm not yet very far with understanding all the details, but the key issue seems to be that there exists a generalized form of the angular-momentum operator (in flat 3-space ##\vec{r} \times \vec{\nabla}##), which should exist because the FLRW metric is isotropic and thus admits a rotation group. Since this is due to the symmetries, it must be related to the Killing vectors of the spacetime. So there must be a simpler way to see this for the FLRW (and perhaps also the Schwarzschild) metric, but in the paper they show that it even works for the Kerr metric, which is less symmetric (?).

• #16

As already pointed out by several posters, there is no problem with the local approximation in any case. Now you seem to be asking for a global plane-wave solution, and that is not possible for any of those spacetimes, including FRW with K=0 and Minkowski spacetime, unlike the case in Euclidean space (where it is not really useful physically anyway, since, being infinite spatially and for all t, such waves preclude propagation and are not found in nature; only superposition approximations to them are observed).

• #17

Is the following somehow wrong? Because there I indeed find a kind of plane-wave solution valid for all FLRW metrics! The covariant Maxwell equations look as follows:
$$F_{\mu \nu} = \nabla_{\mu} A_{\nu}-\nabla_{\nu} A_{\mu}=\partial_{\mu} A_{\nu} - \partial_{\nu} A_{\mu}$$
and (in Heaviside-Lorentz units)
$$\nabla_{\mu} F^{\mu \nu}=j^{\nu}.$$
This can be rewritten using the commutator of the covariant derivative,
$$\nabla_{\mu} \nabla_{\nu} A^{\mu}=\nabla_{\nu} \nabla_{\mu} A^{\mu} -R_{\nu \mu} A^{\mu},$$
using the sign conventions (from [Adler])
$$[\nabla_{\mu}, \nabla_{\nu}] A_{\rho} =-R_{\rho \sigma \mu \nu} A^{\sigma}, \quad R_{\sigma \nu}={R^{\rho}}_{\sigma \rho \nu}.$$
Thus we have in Lorenz gauge,
$$\nabla_{\mu} A^{\mu}=\frac{1}{\sqrt{-g}} \partial_{\mu} (\sqrt{-g} A^{\mu})=0,$$
the generalized wave equation
$$\nabla_{\mu} \nabla^{\mu} A^{\nu}+{R^{\nu}}_{\mu}A^{\mu}=j^{\nu}.$$
The FLRW metric in conformal coordinates reads
$$\mathrm{d} s^2=a^2(\tau) [\mathrm{d} \tau^2 - \mathrm{d} \chi^2 - S_K^2(\chi)(\mathrm{d} \vartheta^2 + \sin^2 \vartheta \mathrm{d} \varphi^2)].$$
You have
$$S_K(\chi)=\begin{cases} \sinh \chi &\text{for} \quad K=-1,\\ \chi & \text{for} \quad K=0,\\ \sin \chi & \text{for} \quad K=+1. \end{cases}$$
Of course you can as well use any of the other forms for the spatial coordinates, which doesn't change much with my problem, as far as I can see at the moment. For the case ##K=0## the square bracket is the Minkowski metric. This means that, because of the conformal invariance of the Maxwell equations, all solutions for ##A_{\mu}## can be taken from Minkowski space, among them the plane-wave solution
$$A_{\mu}=a_{\mu} \exp[-\mathrm{i} \tilde{\omega} (\tau-\chi)], \quad \text{with} \quad a_{\mu}=(0,0,0,A_0), \quad A_0=\text{const}, \quad \tilde{\omega}=\text{const}.$$
The Mathematica notebook attached to this posting shows explicitly that this is indeed a solution of the Maxwell equations in all FLRW metrics (I don't know how to upload a Mathematica notebook, so I uploaded a pdf printout).
Of course, in the usual fundamental coordinates you have the time coordinate ##t## instead of the "conformal time" ##\tau##; these are related by
$$\mathrm{d} \tau \, a(\tau)=\mathrm{d} t \; \Leftrightarrow \; \mathrm{d} \tau=\frac{\mathrm{d} t}{a(t)},$$
where ##a(t)## is the usual scale factor in fundamental coordinates in cosmology. This implies the Hubble redshift formula. In conformal coordinates (in the following written with a tilde over the vector/tensor symbols) you have
$$\tilde{k}_{\mu}=\tilde{\omega}(1,1,0,0) \; \Rightarrow \; k_{\mu} = \frac{\partial \tilde{q}^{\nu}}{\partial q^{\mu}} \tilde{k}_{\nu}=(\tilde{\omega}/a(t),\tilde{\omega},0,0).$$
For a fundamental observer (at rest wrt. the fundamental reference frame) you have the four-velocity ##u^{\mu}=(1,0,0,0)##, and thus he measures the frequency
$$\omega=\tilde{\omega}/a(t).$$
Thus if the light is emitted from a light source at rest wrt. the fundamental coordinates, from a radially symmetric surface at ##\chi=\chi_s##, you get
$$\omega_e=\frac{\tilde{\omega}}{a(t_e)},$$
and for an observer located at ##\chi_o##
$$\omega_o=\frac{\tilde{\omega}}{a(t_o)}.$$
So you get for the redshift ##z##
$$1+z=\frac{\omega_e}{\omega_o}=\frac{a(t_o)}{a(t_e)}.$$
The times of observation and emission are related to the coordinate distance ##\chi_o-\chi_e \simeq \chi_o## of source and observer by the fact that the surfaces of constant phase move along the light-like radial geodesics, given in conformal coordinates by ##\chi=\tau+\text{const}##. In fundamental coordinates you simply get this by solving
$$\mathrm{d} s^2=\mathrm{d} t^2-a^2(t) \mathrm{d} \chi^2 = 0 \; \Rightarrow \; \chi_o-\chi_e=\int_{t_e}^{t_o} \mathrm{d} t \frac{1}{a(t)}.$$

• #18

If the solution is valid in Minkowski spacetime and FLRW spacetimes are locally Minkowskian, it appears to follow that at least locally the solution is valid in FLRW (except obviously at the BB singularity, which is anyway generally considered to be outside the manifold) regardless of K, no? But my point was that the solution in Minkowski spacetime is valid only for spacelike hypersurface slices on the past light cone (for retarded solutions), being singular at the observer's present t=0 hypersurface (no absolute simultaneity), unlike the Euclidean plane waves, where the solution is valid from t=-infinity to t=+infinity (absolute simultaneity). It seems odd to call something a wave when it meets such mathematical obstacles to propagating from past to future. At least in the FLRW cosmologies the preferred slicing makes things easier wrt this issue.

• #19

Sure, it's of course not a plane but a kind of spherical wave. Of course this most simple exact solution is not what you want for describing the radiation from a distant "pointlike" source. This becomes clear from the fact that in Minkowski space this solution, written in terms of the usual normalized spherical coordinates, is
$$\vec{E}=\frac{E_0}{4 \pi r \sin \vartheta} \vec{e}_{\varphi} \cos[\omega(t-r)].$$
It's singular not only at the origin ##r=0## but along the entire ##z## axis, ##\vartheta=0,\pi##. The eikonal (far-field) form, by contrast, behaves like
$$\vec{E} \simeq \frac{E_0}{4 \pi r} \vec{e}_{\vartheta} \cos[\omega(t-r)].$$
Of course the fields are singular for ##r \rightarrow 0##, but there's of course the source, and the free-field solutions are not valid inside the source. The radiation comes from a surface at ##r>0##. The eikonal solution is the far-field approximation of an exact solution.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.959274411201477, "perplexity": 712.6910124076708}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655899931.31/warc/CC-MAIN-20200709100539-20200709130539-00212.warc.gz"}
http://www.ams.org/joursearch/servlet/PubSearch?f1=msc&onejrnl=tran&pubname=one&v1=20G15&startRec=1
AMS eContent Search Results. Matches for: msc=(20G15) AND publication=(tran). Sort order: Date. Results 1 to 30 of 39 found.

[1] Reed Leon Gordon-Sarney. Totaro's question for tori of low rank. Trans. Amer. Math. Soc.
[2] Sanghoon Baek. Chow groups of products of Severi-Brauer varieties and invariants of degree $3$. Trans. Amer. Math. Soc. 369 (2017) 1757-1771. MR 3581218.
[3] Alexander S. Merkurjev. Motivic decomposition of certain special linear groups. Trans. Amer. Math. Soc. 369 (2017) 555-574.
[4] Nicolas Perrin. Split subvarieties of group embeddings. Trans. Amer. Math. Soc. 367 (2015) 8421-8438.
[5] Lien Boelaert and Tom De Medts. A new construction of Moufang quadrangles of type $E_6, E_7$ and $E_8$. Trans. Amer. Math. Soc. 367 (2015) 3447-3480.
[6] H. Bermudez, S. Garibaldi and V. Larsen. Linear preservers and representations with a 1-dimensional ring of invariants. Trans. Amer. Math. Soc. 366 (2014) 4755-4780.
[7] Liang Yang. On the quantization of spherical nilpotent orbits. Trans. Amer. Math. Soc. 365 (2013) 6499-6515.
[8] Michael Bate, Benjamin Martin, Gerhard Röhrle and Rudolf Tange. Closed orbits and uniform $S$-instability in geometric invariant theory. Trans. Amer. Math. Soc. 365 (2013) 3643-3673.
[9] Sebastian Herpel. On the smoothness of centralizers in reductive groups. Trans. Amer. Math. Soc. 365 (2013) 3753-3774.
[10] Maneesh Thakur. Automorphisms of Albert algebras and a conjecture of Tits and Weiss. Trans. Amer. Math. Soc. 365 (2013) 3041-3068.
[11] Mauro Costantini. A classification of unipotent spherical conjugacy classes in bad characteristic. Trans. Amer. Math. Soc. 364 (2012) 1997-2019. MR 2869197.
[12] Michael Bate, Benjamin Martin, Gerhard Röhrle and Rudolf Tange. Complete reducibility and separability. Trans. Amer. Math. Soc. 362 (2010) 4283-4311. MR 2608407.
[13] Amit Kulshrestha and R. Parimala. $R$-equivalence in adjoint classical groups over fields of virtual cohomological dimension $2$. Trans. Amer. Math. Soc. 360 (2008) 1193-1221. MR 2357694.
[14] George J. McNinch and Donna M. Testerman. Completely reducible $\operatorname{SL}(2)$-homomorphisms. Trans. Amer. Math. Soc. 359 (2007) 4489-4510. MR 2309195.
[15] Xuhua He. The $G$-stable pieces of the wonderful compactification. Trans. Amer. Math. Soc. 359 (2007) 3005-3024. MR 2299444.
[16] R. Lawther. Elements of specified order in simple algebraic groups. Trans. Amer. Math. Soc. 357 (2005) 221-245. MR 2098093.
[17] Philippe Gille. Spécialisation de la $R$-équivalence pour les groupes réductifs. Trans. Amer. Math. Soc. 356 (2004) 4465-4474. MR 2067129.
[18] Jonathan Brundan. Double coset density in classical algebraic groups. Trans. Amer. Math. Soc. 352 (2000) 1405-1436. MR 1751310.
[19] A. G. Helminck and G. F. Helminck. A class of parabolic $k$-subgroups associated with symmetric $k$-varieties. Trans. Amer. Math. Soc. 350 (1998) 4669-4691.
[20] Erich W. Ellers and Nikolai Gordeev. On the conjectures of J. Thompson and O. Ore. Trans. Amer. Math. Soc. 350 (1998) 3657-3671. MR 1422600.
[21] Martin W. Liebeck, Jan Saxl and Gary M. Seitz. Factorizations of simple algebraic groups. Trans. Amer. Math. Soc. 348 (1996) 799-822. MR 1316858.
[22] E. Hrushovski, P. H. Kropholler, A. Lubotzky and A. Shalev. Powers in finitely generated groups. Trans. Amer. Math. Soc. 348 (1996) 291-304. MR 1316851.
[23] Grant Walker. Modular Schur functions. Trans. Amer. Math. Soc. 346 (1994) 569-604. MR 1273543.
[24] Christian Wenzel. Classification of all parabolic subgroup-schemes of a reductive linear algebraic group over an algebraically closed field. Trans. Amer. Math. Soc. 337 (1993) 211-218. MR 1096262.
[25] Paul Cherenack. Submersive and unipotent group quotients among schemes of a countable type over a field $k$. Trans. Amer. Math. Soc. 211 (1975) 101-112. MR 0376700.
[26] Amassa Fauntleroy. Rational points of commutator subgroups of solvable algebraic groups. Trans. Amer. Math. Soc. 194 (1974) 249-275. MR 0349860.
[27] Ronald E. Kutz. Cohen-Macaulay rings and ideal theory in rings of invariants of algebraic groups. Trans. Amer. Math. Soc. 194 (1974) 115-129. MR 0352082.
[28] Edward B. Van Vleck. Errata: "One-parameter projective groups and the classification of collineations" [Trans. Amer. Math. Soc. 13 (1912), no. 3, 353-386; MR 1500923]. Trans. Amer. Math. Soc. 13 (1912) 517. MR 1500491.
[29] Edward B. Van Vleck. One-parameter projective groups and the classification of collineations. Trans. Amer. Math. Soc. 13 (1912) 353-386. MR 1500923.
[30] Howard H. Mitchell. Determination of the ordinary and modular ternary linear groups. Trans. Amer. Math. Soc. 12 (1911) 207-242. MR 1500887.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9352225661277771, "perplexity": 1851.1208665468585}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812756.57/warc/CC-MAIN-20180219151705-20180219171705-00778.warc.gz"}
http://www.computer.org/csdl/trans/tc/1988/09/t1053-abs.html
Issue No. 09 - September 1988 (vol. 37), pp. 1053-1066

ABSTRACT: An error propagation model has been developed for multimodule computing systems in which the main parameters are the distribution functions of error propagation times. A digraph model is used to represent a multimodule computing system, and error propagation in the system is modeled by general distributions of error propagation times between all pairs of modules. Two algorithms are developed to …

INDEX TERMS: error propagation; multimodule computing system; error propagation model; digraph model; fault-tolerant microprocessor; directed graphs; errors; fault tolerant computing; multiprocessing systems.

CITATION: K.G. Shin, T.-H. Lin, "Modeling and Measurement of Error Propagation in a Multimodule Computing System", IEEE Transactions on Computers, vol. 37, no. 9, pp. 1053-1066, September 1988, doi:10.1109/12.2256
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8818944096565247, "perplexity": 4144.056291593852}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701999715.75/warc/CC-MAIN-20160205195319-00230-ip-10-236-182-209.ec2.internal.warc.gz"}
http://www.docsford.com/document/939688
Introduction to Heat Transfer

HEAT TRANSFER, UNIT 1

INTRODUCTION: Heat Transfer Modes

Figure 1: Conduction heat transfer

Heat transfer processes are classified into three types. The first is conduction, which is defined as transfer of heat occurring through intervening matter without bulk motion of the matter. Figure 1 shows the process pictorially. A solid (a block of metal, say) has one surface at a high temperature and one at a lower temperature. This type of heat conduction can occur, for example, through a turbine blade in a jet engine. The outside surface, which is exposed to gases from the combustor, is at a higher temperature than the inside surface, which has cooling air next to it. The level of the wall temperature is critical for a turbine blade.

The second heat transfer process is convection, or heat transfer due to a flowing fluid. The fluid can be a gas or a liquid; both have applications in aerospace technology. In convection heat transfer, the heat is moved through bulk transfer of a non-uniform temperature fluid.

The third process is radiation, or transmission of energy through space without the necessary presence of matter. Radiation is the only method for heat transfer in space. Radiation can be important even in situations in which there is an intervening medium; a familiar example is the heat transfer from a glowing piece of metal or from a fire.

Introduction to Conduction

We will start by examining conduction heat transfer. We must first determine how to relate the heat transfer to other properties (either mechanical, thermal, or geometrical). The answer to this is rooted in experiment, but it can be motivated by considering heat flow along a "bar" between two heat reservoirs at temperatures $T_1$ and $T_2$, as shown in Figure 2. It is plausible that the heat transfer rate, $\dot{Q}$, is a function of the temperature of the two reservoirs, the bar geometry and the bar properties. (Are there other factors that should be considered? If so, what?) This can be expressed as

$$\dot{Q} = f_1(T_1, T_2, \text{bar geometry}, \text{bar properties}) \qquad (1)$$

It also seems reasonable to postulate that $\dot{Q}$ should depend on the temperature difference $T_1 - T_2$. If $T_1 - T_2$ is zero, then the heat transfer should also be zero. The temperature dependence can therefore be expressed as

$$\dot{Q} = f_2(T_1, T_1 - T_2, \text{bar geometry}, \text{bar properties}) \qquad (2)$$

Figure 2: Heat transfer along a bar

An argument for the general form of $f_2$ can be made from physical considerations. One requirement, as said, is $\dot{Q} = 0$ if $T_1 = T_2$. Using a MacLaurin series expansion in $\Delta T = T_1 - T_2$, as follows,

$$f(\Delta T) = f(0) + \left.\frac{\partial f}{\partial(\Delta T)}\right|_{\Delta T = 0} \Delta T + \cdots \qquad (3)$$

If we use $f(0) = 0$, we find that (for small $\Delta T$),

$$\dot{Q} \approx \left.\frac{\partial f}{\partial(\Delta T)}\right|_{\Delta T = 0} \Delta T \qquad (4)$$

The derivative evaluated at $\Delta T = 0$ (thermal equilibrium) is a measurable property of the bar. In addition, we know that $\dot{Q} > 0$ if $T_1 > T_2$, and $\dot{Q} < 0$ if $T_2 > T_1$. It also seems reasonable that if we had two bars of the same area, we would have twice the heat transfer, so that we can postulate that $\dot{Q}$ is proportional to the area. Finally, although the argument is by no means rigorous, experience leads us to believe that as the bar length $L$ increases, $\dot{Q}$ should get smaller. All of these lead to the generalization (made by Fourier in 1807) that, for the bar, the derivative in Equation (4) has the form

$$\left.\frac{\partial f}{\partial(\Delta T)}\right|_{\Delta T = 0} = \frac{kA}{L} \qquad (5)$$

In Equation (5), $k$ is a proportionality factor that is a function of the material and the temperature, $A$ is the cross-sectional area and $L$ is the length of the bar. In the limit, for any temperature difference $\Delta T$ across a length $\Delta x$, as both $\Delta T, \Delta x \to 0$, we can say

$$\dot{Q} = kA\,\frac{T_1 - T_2}{L} = -kA\,\frac{\mathrm{d}T}{\mathrm{d}x} \qquad (6)$$

A more useful quantity to work with is the heat transfer per unit area, defined as

$$\dot{q} = \frac{\dot{Q}}{A} \qquad (7)$$

The quantity $\dot{q}$ is called the heat flux and its units are W/m².
The expression in (6) can be written in terms of heat flux as

$$\dot{q} = -k\,\frac{\mathrm{d}T}{\mathrm{d}x} \qquad (8)$$

Equation (8) is the one-dimensional form of Fourier's law of heat conduction. The proportionality constant $k$ is called the thermal conductivity. Its units are W/(m·K). Thermal conductivity is a well-tabulated property for a large number of materials. Some values for familiar materials are given in Table 1; others can be found in the references. The thermal conductivity is a function of temperature and the values shown in Table 1 are for room temperature.

Table 1: Thermal conductivity at room temperature for some metals and non-metals (W/(m·K))

Metals: Ag 420, Cu 390, Al 200, Fe 70, Steel 50
Non-metals: H₂O 0.6, Air 0.026, Engine oil 0.15, H₂ 0.18, Brick 0.4-0.5, Wood 0.2, Cork 0.04

Figure 3: One-dimensional heat conduction

For one-dimensional heat conduction (temperature depending on one variable only), we can devise a basic description of the process. The first law in control volume form (steady flow energy equation) with no shaft work and no mass flow reduces to the statement that $\sum \dot{Q} = 0$ for all surfaces (no heat transfer on top or bottom of Figure 3). From Equation (6), the heat transfer rate in at the left (at $x$) is

$$\dot{Q}(x) = -\left(kA\,\frac{\mathrm{d}T}{\mathrm{d}x}\right)_x \qquad (9)$$

The heat transfer rate on the right is

$$\dot{Q}(x + \mathrm{d}x) = -\left(kA\,\frac{\mathrm{d}T}{\mathrm{d}x}\right)_{x+\mathrm{d}x} \qquad (10)$$

Using the conditions on the overall heat flow and the expressions in (9) and (10),

$$\dot{Q}(x) = \dot{Q}(x + \mathrm{d}x) \qquad (11)$$

Taking the limit as $\mathrm{d}x$ approaches zero, we obtain

$$\frac{\mathrm{d}}{\mathrm{d}x}\left(kA\,\frac{\mathrm{d}T}{\mathrm{d}x}\right) = 0 \qquad (12)$$

or

$$kA\,\frac{\mathrm{d}T}{\mathrm{d}x} = \text{const} \qquad (13)$$

If $k$ is constant (i.e. if the properties of the bar are independent of temperature), this reduces to

$$\frac{\mathrm{d}}{\mathrm{d}x}\left(A\,\frac{\mathrm{d}T}{\mathrm{d}x}\right) = 0 \qquad (14)$$

or (using the chain rule)

$$\frac{\mathrm{d}^2T}{\mathrm{d}x^2} + \frac{1}{A}\frac{\mathrm{d}A}{\mathrm{d}x}\frac{\mathrm{d}T}{\mathrm{d}x} = 0 \qquad (15)$$

Equation (14) or (15) describes the temperature field for quasi-one-dimensional steady state (no time dependence) heat transfer. We now apply this to an example.

Example 1: Heat transfer through a plane slab

Figure 4: Temperature boundary conditions for a slab

For this configuration (Figure 4), the area is not a function of $x$, i.e. $A = \text{const}$. Equation (15) thus becomes

$$\frac{\mathrm{d}^2T}{\mathrm{d}x^2} = 0 \qquad (16)$$

Equation (16) can be integrated immediately to yield

$$\frac{\mathrm{d}T}{\mathrm{d}x} = a \qquad (17)$$

and

$$T = ax + b \qquad (18)$$

Equation (18) is an expression for the temperature field where $a$ and $b$ are constants of integration. For a second order equation, such as (16), we need two boundary conditions to determine $a$ and $b$. One such set of boundary conditions can be the specification of the temperatures at both sides of the slab as shown in Figure 4, say $T(0) = T_1$; $T(L) = T_2$. The first condition implies that $b = T_1$. The second condition implies that $T_2 = aL + T_1$, or $a = (T_2 - T_1)/L$. With these expressions for $a$ and $b$ the temperature distribution can be written as

$$T(x) = T_1 + \frac{T_2 - T_1}{L}\,x \qquad (19)$$

This linear variation in temperature is shown in Figure 5 for a situation in which $T_1 > T_2$.

Figure 5: Temperature distribution through a slab

The heat flux is also of interest. This is given by

$$\dot{q} = -k\,\frac{\mathrm{d}T}{\mathrm{d}x} = \frac{k}{L}(T_1 - T_2) \qquad (20)$$

Thermal Resistance Circuits

There is an electrical analogy with conduction heat transfer that can be exploited in problem solving. The analog of $\dot{Q}$ is current, and the analog of the temperature difference, $T_1 - T_2$, is voltage difference. From this perspective the slab is a pure resistance to heat transfer and we can define

$$\dot{Q} = \frac{T_1 - T_2}{R} \qquad (21)$$

where $R = L/(kA)$, the thermal resistance. The thermal resistance $R$ increases as $L$ increases, as $A$ decreases, and as $k$ decreases.

Figure 6: Heat transfer across a composite slab (series thermal resistance)

The concept of a thermal resistance circuit allows ready analysis of problems such as a composite slab (composite planar heat transfer surface). In the composite slab shown in Figure 6, the heat flux is constant with $x$. The resistances are in series and sum to $R = R_1 + R_2$.
Steady Quasi-One-Dimensional Heat Flow in Non-Planar Geometry
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8766220211982727, "perplexity": 657.9950950849437}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948619804.88/warc/CC-MAIN-20171218180731-20171218202731-00301.warc.gz"}
http://math.stackexchange.com/questions/91250/are-these-sufficient-conditions-for-point-on-an-abstract-curve-to-be-regular
# Are these sufficient conditions for a point on an abstract curve to be regular?

Let $p\in{X}$ where $X$ is a curve; here the definition of a curve is an integral, separated, 1-dimensional scheme of finite type over a field $k$ (not necessarily algebraically closed). Moreover, suppose $X$ is smooth at $p$, so that the sheaf of differentials $\Omega_{X/k}$ is free of rank 1 on some neighborhood of $p$. From this alone, can we conclude that $X$ is regular at $p$, i.e. that $\mathcal{O}_{X,p}$ is a regular local ring? I realize that the answer is yes if $k$ is perfect, or if $k(p)$, the function field at $p$, is just $k$, or even if $\mathcal{O}_{X,p}$ contains a subfield isomorphic to $k(p)$, but I can find absolutely nothing in the literature without these restrictions. Can anyone set me straight here?

Yes, if $X$ is a variety over an arbitrary field $k$ (i.e. a scheme of finite type over $k$, not necessarily of dimension $1$), then every smooth closed point of $X$ is regular. The converse is true if the residue field $\kappa (x)$ (which is finite over $k$) is separable over $k$. So the converse is true if $x$ is rational over $k$, or if $k$ is perfect (for example: finite, of characteristic zero, algebraically closed, ...). An excellent reference is Qing Liu's Algebraic Geometry and Arithmetic Curves, Ch. 4, Proposition 3.30.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9846123456954956, "perplexity": 102.83320748631971}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701160918.28/warc/CC-MAIN-20160205193920-00348-ip-10-236-182-209.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/calculating-work-done-by-an-electric-field.254461/
# Calculating work done by an electric field 1. Sep 7, 2008 ### crh 1. The problem statement, all variables and given/known data How much work does the electric field do in moving a proton from a point with a potential of +155V to a point where it is -75V. Express your answer in both joules and electron volts. 2. Relevant equations W = -qV 3. The attempt at a solution I know that q=(1.6E-19) but I am not for sure how to go about incorporating my two potentials. Can someone give me some help? I thank you in advance! 2. Sep 7, 2008 ### Hootenanny Staff Emeritus Perhaps it would help if the equation was written thus: $$W = -q\Delta V$$ In other words, the work done by the electric field on the charged particle is the negative product of the charge and the change in potential. 3. Sep 7, 2008 ### crh Ok I think I figured it out. Tell me if I am wrong. W = -qV, but V is potential E (PE), so therefore V = (PE2-PE1) so... W = -(1.60E-19C)(-80V) = 1.28E-17 J and = -(1e)(-80V) = 80eV 4. Sep 7, 2008 ### Hootenanny Staff Emeritus You're on the right lines but be careful when calculating the change in potential. 5. Sep 7, 2008 ### crh Are you meaning scientific notation? 6. Sep 7, 2008 ### Hootenanny Staff Emeritus Nope, something somewhat simpler than that. What is the difference between -75 and +155? 7. Sep 7, 2008 ### crh oh it needs to be -230V. 8. Sep 7, 2008 ### Hootenanny Staff Emeritus Much better
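Putting the thread's corrected numbers together, here is a quick arithmetic sketch (my own check, not part of the original thread):

```python
# W = -q * dV for a proton moving from +155 V to -75 V.

q = 1.602e-19          # proton charge, C
dV = (-75.0) - 155.0   # change in potential, V -> -230 V
W = -q * dV            # work done by the field, J

print(f"W = {W:.3e} J")     # about 3.68e-17 J
print(f"W = {-dV:.0f} eV")  # 230 eV, since the charge is exactly 1 e
```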
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9013448357582092, "perplexity": 2058.189865701717}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721268.95/warc/CC-MAIN-20161020183841-00414-ip-10-171-6-4.ec2.internal.warc.gz"}
http://en.wikipedia.org/wiki/Jarque%E2%80%93Bera_test
# Jarque–Bera test

In statistics, the Jarque–Bera test is a goodness-of-fit test of whether sample data have the skewness and kurtosis matching a normal distribution. The test is named after Carlos Jarque and Anil K. Bera.

The test statistic JB is defined as

$\mathit{JB} = \frac{n}{6} \left( S^2 + \frac14 (K-3)^2 \right)$

where n is the number of observations (or degrees of freedom in general); S is the sample skewness, and K is the sample kurtosis:

$S = \frac{ \hat{\mu}_3 }{ \hat{\sigma}^3 } = \frac{\frac1n \sum_{i=1}^n (x_i-\bar{x})^3} {\left(\frac1n \sum_{i=1}^n (x_i-\bar{x})^2 \right)^{3/2}} ,$

$K = \frac{ \hat{\mu}_4 }{ \hat{\sigma}^4 } = \frac{\frac1n \sum_{i=1}^n (x_i-\bar{x})^4} {\left(\frac1n \sum_{i=1}^n (x_i-\bar{x})^2 \right)^{2}} ,$

where $\hat{\mu}_3$ and $\hat{\mu}_4$ are the estimates of third and fourth central moments, respectively, $\bar{x}$ is the sample mean, and $\hat{\sigma}^2$ is the estimate of the second central moment, the variance.

If the data come from a normal distribution, the JB statistic asymptotically has a chi-squared distribution with two degrees of freedom, so the statistic can be used to test the hypothesis that the data are from a normal distribution. The null hypothesis is a joint hypothesis of the skewness being zero and the excess kurtosis being zero. Samples from a normal distribution have an expected skewness of 0 and an expected excess kurtosis of 0 (which is the same as a kurtosis of 3). As the definition of JB shows, any deviation from this increases the JB statistic.

For small samples the chi-squared approximation is overly sensitive, often rejecting the null hypothesis when it is in fact true. Furthermore, the distribution of p-values departs from a uniform distribution and becomes a right-skewed uni-modal distribution, especially for small p-values. This leads to a large Type I error rate. The table below shows some p-values approximated by a chi-squared distribution that differ from their true alpha levels for small samples.

Calculated p-value equivalents to true alpha levels at given sample sizes:

| True α level | n = 20 | n = 30 | n = 50 | n = 70 | n = 100 |
|---|---|---|---|---|---|
| 0.1 | 0.307 | 0.252 | 0.201 | 0.183 | 0.1560 |
| 0.05 | 0.1461 | 0.109 | 0.079 | 0.067 | 0.062 |
| 0.025 | 0.051 | 0.0303 | 0.020 | 0.016 | 0.0168 |
| 0.01 | 0.0064 | 0.0033 | 0.0015 | 0.0012 | 0.002 |

(These values have been approximated by using Monte Carlo simulation in Matlab.)

In MATLAB's implementation, the chi-squared approximation for the JB statistic's distribution is only used for large sample sizes (> 2000). For smaller samples, it uses a table derived from Monte Carlo simulations in order to interpolate p-values.[1]

## History

Considering normal sampling, and √β1 and β2 contours, Bowman & Shenton (1975) noticed that the statistic JB will be asymptotically χ2(2)-distributed; however they also noted that "large sample sizes would doubtless be required for the χ2 approximation to hold". Bowman and Shenton did not study the properties any further, preferring D'Agostino's K-squared test.

## Jarque–Bera test in regression analysis

According to Robert Hall, David Lilien, et al. (1995), when using this test along with multiple regression analysis the right estimate is:

$\mathit{JB} = \frac{n-k}{6} \left( S^2 + \frac14 (K-3)^2 \right)$

where n is the number of observations and k is the number of regressors when examining residuals to an equation.

## References

1. ^ "Analysis of the JB-Test in MATLAB". MathWorks. Retrieved May 24, 2009.

• Bowman, K.O.; Shenton, L.R. (1975). "Omnibus contours for departures from normality based on √b1 and b2".
Biometrika 62 (2): 243–250. doi:10.1093/biomet/62.2.243. JSTOR 2335355. • Jarque, Carlos M.; Bera, Anil K. (1980). "Efficient tests for normality, homoscedasticity and serial independence of regression residuals". Economics Letters 6 (3): 255–259. doi:10.1016/0165-1765(80)90024-5. • Jarque, Carlos M.; Bera, Anil K. (1981). "Efficient tests for normality, homoscedasticity and serial independence of regression residuals: Monte Carlo evidence". Economics Letters 7 (4): 313–318. doi:10.1016/0165-1765(81)90035-5. • Jarque, Carlos M.; Bera, Anil K. (1987). "A test for normality of observations and regression residuals". International Statistical Review 55 (2): 163–172. JSTOR 1403192. • Judge; et al. (1988). Introduction and the theory and practice of econometrics (3rd ed.). pp. 890–892. • Hall, Robert E.; Lilien, David M.; et al. (1995). EViews User Guide. p. 141. ## Implementations • ALGLIB includes implementation of the Jarque–Bera test in C++, C#, Delphi, Visual Basic, etc. • gretl includes an implementation of the Jarque–Bera test • R includes implementations of the Jarque–Bera test: jarque.bera.test in package tseries, for example, and jarque.test in package moments. • MATLAB includes implementation of the Jarque–Bera test, the function "jbtest". • Python statsmodels includes implementation of the Jarque–Bera test, "statsmodels.stats.stattools.py".
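As an illustration of the definitions above, here is a short Python sketch (my own example, not one of the listed implementations) that computes S, K, and JB directly and compares with scipy's built-in test:

```python
# Compute the JB statistic from its definition, then cross-check with scipy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=1000)          # simulated data; any sample works

n = x.size
d = x - x.mean()
s2 = np.mean(d**2)                 # second central moment (variance, 1/n form)
S = np.mean(d**3) / s2**1.5        # sample skewness
K = np.mean(d**4) / s2**2          # sample kurtosis (normal -> about 3)

JB = n / 6 * (S**2 + (K - 3)**2 / 4)
p = stats.chi2.sf(JB, df=2)        # asymptotic chi-squared(2) p-value
print(JB, p)
print(stats.jarque_bera(x))        # statistic and p-value should agree
```

Note that, per the table above, the chi-squared p-value is only trustworthy for fairly large n.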
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 8, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.90608811378479, "perplexity": 2285.3896920341303}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447554598.88/warc/CC-MAIN-20141224185914-00050-ip-10-231-17-201.ec2.internal.warc.gz"}
http://www.maa.org/publications/periodicals/convergence/extending-al-karajis-work-on-sums-of-odd-powers-of-integers-the-sums-of-the-seventh-and-ninth-powers?device=mobile
# Extending al-Karaji's Work on Sums of Odd Powers of Integers - The Sums of the Seventh and Ninth Powers

Author(s): Hakan Kursat Oral (Yildiz Technical University) and Hasan Unal (Yildiz Technical University)

Vidinli now applies the same procedure to the differences of fourth powers of $S_n$.

Figure 8. Difference of fourth powers (from Mebahis-i İlmiyye, 1867, courtesy of the authors).

Figure 8 shows only: $S_n^{\,4} - S_{n-1}^{\,\,4} = {\frac{1}{16}}n^4 \Big({(n+1)}^4 - {(n-1)}^4 \Big) = {\frac{1}{2}}(n^7 + n^5) .$

Providing a few more details for this calculation, we have the following differences:

\begin{align} S_n^{\,4} - S_{n-1}^{\,\,4} &= {\bigg[ {\frac{n(n+1)}{2}}\bigg]}^4 - {\bigg[ {\frac{(n-1)n}{2}}\bigg]}^4 \\ &= {\frac{n^4}{16}} \Big({(n+1)}^4 - {(n-1)}^4 \Big) \\ &= {\frac{n^4}{16}}(n^4 + 4n^3 + 6n^2 + 4n +1 - n^4 + 4n^3 - 6n^2 + 4n - 1) \\ &= {\frac{n^4}{16}}(8n^3 + 8n) \\ &= {\frac{1}{2}}(n^7 + n^5) , \\ S_{n-1}^{\,\,4} - S_{n-2}^{\,\,4} & = {\frac{1}{2}}\big((n-1)^7 + (n-1)^5\big) , \\ \dots\dots\dots & \dots\dots\dots\dots\dots , \\ S_2^{\,4} - S_{1}^{\,4} & = {\frac{1}{2}}(2^7 + 2^5) , \\ S_1^{\,4} - S_{0}^{\,4} & = {\frac{1}{2}}(1^7 + 1^5) .\end{align}

If we add these equations, we get: ${S_n^{\,4}} = {\frac{1}{2}}\Big(1^7 + 2^7 + 3^7 + \cdots + n^7\Big) + {\frac{1}{2}}\Big(1^5 + 2^5 + 3^5 + \cdots + n^5\Big) ,$ or

\begin{align} {{(1+2+3+\cdots +n)}^4} & = {\frac{1}{2}}\Big({1^5 + 2^5 + 3^5 + \cdots + n^5}\Big) \\ & + {\frac{1}{2}}\Big({1^7 + 2^7 + 3^7 + \cdots + n^7}\Big) ,\end{align}

as can be seen in Figure 9.

Figure 9. An equation involving the sum of the seventh powers (from Mebahis-i İlmiyye, 1867, courtesy of the authors).

We already know that ${1^5 + 2^5 + 3^5 + \cdots + n^5} = {\frac{4}{3}}\Bigg({{\bigg[ {\frac{n(n+1)}{2}}\bigg]}^3 - {\frac{1}{4}}{\bigg[ {\frac{n(n+1)}{2}}\bigg]}^2}\Bigg)$ and ${1 + 2 + 3 + \cdots + n} = {\frac{n(n+1)}{2}}.$

When we substitute these identities into the equation in (or just above) Figure 9, we can find a formula for the sum of the seventh powers: ${1^7 + 2^7 + 3^7 + \cdots + n^7} = {\frac{1}{8}}n^8 + {\frac{1}{2}}n^7 + {\frac{7}{12}}n^6 - {\frac{7}{24}}n^4 + {\frac{1}{12}}n^2 .$

Readers can complete the calculation (see Exercise 2), and can check the result by substituting an "$n$" of their choice.

Vidinli did not continue after the differences of fourth powers and the sum of the seventh powers. Just out of curiosity, we computed differences of fifth powers, with the following result.

\begin{align} S_n^{\,5} - S_{n-1}^{\,\,5} &= {\bigg[ {\frac{n(n+1)}{2}}\bigg]}^5 - {\bigg[ {\frac{(n-1)n}{2}}\bigg]}^5 \\ &= {\frac{n^5}{32}} \Big({(n+1)}^5 - {(n-1)}^5 \Big) \\ &= {\frac{n^5}{32}}(10n^4 +20n^2 + 2) \\ &= {\frac{5}{16}}n^9 +{\frac{5}{8}}n^7 + {\frac{1}{16}}n^5 .\end{align}

If we continue the process, we can find a formula for the sum of the ninth powers. Try it! (See Exercise 3.)
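For readers who want the "check the result by substituting an $n$" step done mechanically, here is a small Python sketch (my own verification, not part of the original article) that compares the closed form against a direct sum using exact rational arithmetic:

```python
# Numerical check of 1^7 + 2^7 + ... + n^7 against the formula derived above.
from fractions import Fraction as F

def sum7_formula(n):
    n = F(n)
    return F(1, 8)*n**8 + F(1, 2)*n**7 + F(7, 12)*n**6 - F(7, 24)*n**4 + F(1, 12)*n**2

for n in range(1, 20):
    assert sum7_formula(n) == sum(k**7 for k in range(1, n + 1))
print("formula verified for n = 1..19")
```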
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 3, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000100135803223, "perplexity": 2027.510174851095}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042989897.84/warc/CC-MAIN-20150728002309-00336-ip-10-236-191-2.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/17830-problem-integration.html
Thread: Problem with this Integration

1. Problem with this Integration

Hello there! I need help on solving the following equation.

∫ (π+t) cos (nt) dt + ∫ (π−t) cos (nt) dt

I don't know if it's clear, but it's like this: "Integral of [(pi + t) into cos (nt)] dt + integral of [(pi − t) into cos (nt)] dt." I don't know how to continue after I expand each! Any help is much appreciated.

Thanks,
atwinix

2. Hello, atwinix! Could you restate the problem?
. . Your writing raises many questions.

I need help on solving the following equation. . . . . Equation?
∫ (π+t) cos (nt) dt + ∫ (π−t) cos (nt) dt
I don't know if it's clear but it's like this: "Integral of [(pi + t) into cos (nt)] dt + integral of [(pi − t) into cos (nt)] dt." What does "into" mean?

If the problem is: . $\int(\pi + t)\cos(nt)\,dt + \int(\pi - t)\cos(nt)\,dt$
. . why not make one integral?
. . $\int\bigg[\pi\cos(nt) + t\cos(nt) + \pi\cos(nt) - t\cos(nt)\bigg]\,dt \;=\;2\pi\int\cos(nt)\,dt$

3. "Into" means "multiply by." The problem is exactly just as you stated, Soroban. But what do I do if the limits on each part are different? Say, −π < t < 0 for the first part and 0 < t < π for the second part! This integration has to do with a Fourier Series question, which you can check out below.

4. This has been dealt with in the other thread.
RonL
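For what it's worth, here is a sketch (my own check, not from the thread) that evaluates the two integrals with the split limits mentioned in post 3, using sympy and assuming $n$ is a nonzero integer as in the Fourier-series context:

```python
# Evaluate I(n) = int_{-pi}^{0} (pi+t)cos(nt) dt + int_{0}^{pi} (pi-t)cos(nt) dt
# for a few integer n; the result is 2*(1 - cos(pi*n))/n**2,
# i.e. 4/n**2 for odd n and 0 for even n.
import sympy as sp

t = sp.symbols('t')
for n in [1, 2, 3, 4]:
    I = (sp.integrate((sp.pi + t) * sp.cos(n * t), (t, -sp.pi, 0))
         + sp.integrate((sp.pi - t) * sp.cos(n * t), (t, 0, sp.pi)))
    print(n, sp.simplify(I))
```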
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8823109269142151, "perplexity": 2896.9510539142225}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948511435.4/warc/CC-MAIN-20171210235516-20171211015516-00321.warc.gz"}
http://mathhelpforum.com/advanced-math-topics/103875-continued-function.html
# Math Help - Continued function?

1. ## Continued function?

I want to solve $y=f(y+f(y+f(y+f(y+...))))$, where $f(x)$ is some bounded function increasing in $x$. First, what is the name of this kind of problem? I know of continued fractions, but here $f(x)$ isn't of the form $f(x)=1/(a+x)$. Second, how can I find $y$? (Numerically is fine.)

2. Originally Posted by CaptainBlack
If this has a solution for a given function f, then: $y=f(y+f(y))$

Hang on, if $y=f(y+f(y+f(y+f(y+\dots))))$, isn't $y=f(2y)$? Then all you need to find are the fixed points of the function $g(y)=f(2y)$. This can sometimes be realised by a simple iteration $y_{n+1}=g(y_n)$ for a suitable starting value $y_0$, but you shouldn't rule out the need for more advanced methods.

3. Originally Posted by halbard
Hang on, if $y=f(y+f(y+f(y+f(y+\dots))))$, isn't $y=f(2y)$? Then all you need to find are the fixed points of the function $g(y)=f(2y)$. This can sometimes be realised by a simple iteration $y_{n+1}=g(y_n)$ for a suitable starting value $y_0$, but you shouldn't rule out the need for more advanced methods.

Yes, now you mention it. That is what I started with, but for some reason changed it to what I had posted.

CB
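To illustrate the suggested iteration $y_{n+1}=g(y_n)=f(2y_n)$ numerically, here is a small Python sketch; the choice $f(x)=\tanh x$ is my own, picked only because it is a bounded increasing function like the one in the question:

```python
# Fixed-point iteration for y = f(2y) with f bounded and increasing.
import math

def f(x):
    return math.tanh(x)           # bounded, increasing

def solve(y0=0.1, tol=1e-12, max_iter=10_000):
    y = y0
    for _ in range(max_iter):
        y_next = f(2 * y)
        if abs(y_next - y) < tol:
            return y_next
        y = y_next
    raise RuntimeError("no convergence; try another y0 or a root-finder")

y = solve()
print(y, f(2 * y))                # the two values agree at a fixed point
```

The iteration converges here because $|g'(y)| < 1$ near the fixed point; as the post warns, that is not guaranteed in general, and a bracketing root-finder on $y - f(2y)$ is the more robust fallback.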
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8545341491699219, "perplexity": 362.63058309907336}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375096686.2/warc/CC-MAIN-20150627031816-00268-ip-10-179-60-89.ec2.internal.warc.gz"}
https://ccrma.stanford.edu/~bilbao/master/node42.html
Vector Wave Variables

It is straightforward to extend wave digital filtering principles to the vector case (this has been outlined in [131]; the same idea has appeared in the context of digital waveguide networks in [166,169]). For an $N$-component vector one-port element with voltage $\mathbf{v}$ and current $\mathbf{i}$, it is possible to define wave variables $\mathbf{a}$ and $\mathbf{b}$ by

$\mathbf{a} = \mathbf{v} + R\,\mathbf{i}, \qquad \mathbf{b} = \mathbf{v} - R\,\mathbf{i}$   (2.43)

for a symmetric positive definite matrix $R$; power-normalized quantities may be defined by

$\tilde{\mathbf{a}} = G^{-1}\mathbf{v} + G^{T}\mathbf{i}, \qquad \tilde{\mathbf{b}} = G^{-1}\mathbf{v} - G^{T}\mathbf{i}$   (2.44)

where $G$ is some right square root of $R$ (so that $R = GG^{T}$), and $G^{T}$ is its transpose. The power absorbed by the vector one-port will be

$P = \mathbf{v}^{T}\mathbf{i} = \frac{1}{4}\left(\tilde{\mathbf{a}}^{T}\tilde{\mathbf{a}} - \tilde{\mathbf{b}}^{T}\tilde{\mathbf{b}}\right)$   (2.45)

Kirchhoff's Laws, for a series or parallel connection of $M$ such $N$-component vector elements with voltages $\mathbf{v}_j$ and currents $\mathbf{i}_j$, can be written as

series: $\textstyle\sum_{j=1}^{M} \mathbf{v}_j = \mathbf{0}, \quad \mathbf{i}_1 = \cdots = \mathbf{i}_M$; parallel: $\mathbf{v}_1 = \cdots = \mathbf{v}_M, \quad \textstyle\sum_{j=1}^{M} \mathbf{i}_j = \mathbf{0}$

and the resulting scattering equations will be in terms of the wave variables $\mathbf{a}_j$, $\mathbf{b}_j$, defined as per (2.43), and the port resistance matrices $R_1, \ldots, R_M$. These are the defining equations of a vector adaptor; their schematics are essentially the same as those of Figure 2.12, except that they are drawn in bold--see Figure 2.14. As before, we use the same representation for power-normalized waves.
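To see these definitions in action, here is a small numpy sketch checking the power identity in (2.45). The test data and the Cholesky choice of the right square root $G$ are my own assumptions, not from the text:

```python
# Build power-normalized vector waves and verify P = v^T i = (a.a - b.b)/4.
import numpy as np

rng = np.random.default_rng(1)
N = 3
M = rng.normal(size=(N, N))
R = M @ M.T + N * np.eye(N)           # symmetric positive definite port matrix
G = np.linalg.cholesky(R)             # one right square root: R = G @ G.T

v = rng.normal(size=N)                # port voltages
i = rng.normal(size=N)                # port currents

a = np.linalg.solve(G, v) + G.T @ i   # normalized incident wave, per (2.44)
b = np.linalg.solve(G, v) - G.T @ i   # normalized reflected wave

P_direct = v @ i
P_waves = (a @ a - b @ b) / 4
print(np.isclose(P_direct, P_waves))  # True
```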
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9375227689743042, "perplexity": 2318.1593059462853}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929832.32/warc/CC-MAIN-20150521113209-00130-ip-10-180-206-219.ec2.internal.warc.gz"}
http://mathhelpforum.com/latex-help/91764-surds.html
1. surds

does anyone know how to insert a surd on this?

2. Originally Posted by johnsy123
does anyone know how to insert a surd on this?

On what? Do you mean 'how do you use latex to show radicals'? Look at this code: $$\sqrt{12}=2\sqrt{3}$$ This will yield: $\sqrt{12}=2\sqrt{3}$

3. On what? Nothing appears? I think he was asking about a sqrt question, so I said to him "nothing appears"; I mean, that is what johnsy123 asked for.

4. Hello, Amer! masters gave you the right information. Exactly what are you typing?

You can do other roots like this: $$\sqrt[3]{8} =2$$ . . which produces: . $\sqrt[3]{8}=2$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.993500828742981, "perplexity": 4307.424410769319}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718034.77/warc/CC-MAIN-20161020183838-00324-ip-10-171-6-4.ec2.internal.warc.gz"}
https://www.themathcitadel.com/category/spines/fundamentals/
## Topologies and Sigma-Algebras

Both topologies and $\sigma-$algebras are collections of subsets of a set $X$. What exactly is the difference between the two, and is there a relationship? We explore these notions by noting the definitions first. Let $X$ be any set.

### Topology

A topology $\tau$ is a collection of subsets of a set $X$ (also called a topology in $X$) that satisfies the following properties:

(1) $\emptyset \in \tau$, and $X \in \tau$

(2) for any finite collection of sets in $\tau$, $\{V_{i}\}_{i=1}^{n}$, $\cap_{i=1}^{n}V_{i} \in \tau$

(3) for any arbitrary collection of sets $\{V_{\alpha}: \alpha \in I\}$ in $\tau$ (countable or uncountable index set $I$), $\cup_{\alpha}V_{\alpha} \in \tau$

A topology is therefore a collection of subsets of a set $X$ that contains the empty set, the set $X$ itself, all possible finite intersections of the subsets in the topology, and all possible unions of subsets in the topology.

The simplest topology is called the trivial topology, where for a set $X$, $\tau = \{\emptyset, X\}$. Notice that (1) above is satisfied by design. All intersections we can make with the sets in $\tau$ are finite ones. (There's just one, $X \cap \emptyset = \emptyset$.) Thus, (2) is satisfied. Any union here gives $X$, which is in $\tau$, so this is a topology. This isn't a very interesting topology, so let's create another one.

Let's take $X = \{1,2,3,4\}$ as the set. Let's give a collection of subsets $\tau = \{\{1\},\{2\},\{1,2\},\{1,3\},\{2,4\},\{1,2,3\},\{1,2,4\},\emptyset, X\}.$ Notice that I didn't include every single possible subset of $X$. There are two singleton sets, three pairs, and two sets of triples missing. This example will illustrate that you can leave out subsets of a set and still have a topology. Notice that (1) is met. You can check all possible finite intersections of sets inside $\tau$, and notice that you either end up with $\emptyset$ or another of the sets in $\tau$. For example, $\{1,3\} \cap \{1,2,3\} = \{1,3\}$, $\{1,2,3\} \cap \{1,2,4\} = \{1,2\}$, $\{2\} \cap \{1\} = \emptyset$, etc. Lastly, we can only have finite unions here, since $\tau$ only has a finite number of things. You can check all possible unions, and notice that all of them result in a set already in $\tau$. For example, $\{1,3\} \cup \{2,4\} = X$, $\{1\} \cup \{2\} = \{1,2\}$, $\{1\} \cup \{1,3\} \cup \{2,4\} = X$, etc. Thus, $\tau$ is a topology on this set $X$.

We'll look at one final example that's a bit more abstract. Let's take a totally ordered set $X$ (like the real line $\mathbb{R}$). Then the order topology on $X$ is the collection of subsets that look like one of the following:

• $\{x : a < x\}$ for all $a$ in $X$
• $\{x : b > x \}$ for all $b \in X$
• $\{x : a < x < b\}$ for all $a,b \in X$
• any union of sets that look like the above

To put something concrete to this, let $X = \{1,2,3,4\}$, the same set as above. This is a totally ordered set, since we can write these numbers in increasing order.
Then

• The sets that have the structure $\{x : a < x\}$ for all $a \in X$ are
  • $\{x : 1 < x\} = \{2,3,4\}$
  • $\{x : 2 < x\} = \{3,4\}$
  • $\{x : 3 < x\} = \{4\}$
  • $\{x : 4 < x \} = \emptyset$
• The sets that have the structure $\{x : b > x\}$ for all $b \in X$ are
  • $\{x : 1 > x\} = \emptyset$ (which we already handled)
  • $\{x : 2 > x\} = \{1\}$
  • $\{x : 3 > x \} = \{1,2\}$
  • $\{x : 4 > x\} = \{1,2,3\}$
• The sets that have the structure $\{x : a < x < b\}$ for all $a,b \in X$ are
  • $\{x : 1 < x < 3\} = \{2\}$
  • $\{x : 1 < x < 4\} = \{2,3\}$
  • $\{x : 2 < x < 4\} = \{3\}$
  • The remaining combinations yield $\emptyset$
• The sets that are a union of the above sets (that aren't already listed) are
  • $X = \{x : 1 < x\} \cup \{x : 2 > x\}$
  • $\{1,2,4\} = \{x : 3 > x\} \cup \{x : 3< x \}$
  • $\{1,3,4\} = \{x : 2 < x\} \cup \{x : 2 > x\}$
  • $\{1,3\} = \{x : 2 > x\} \cup \{x : 2< x < 4\}$
  • $\{1,4\} = \{x : 2 > x\} \cup \{x : 3 < x\}$
  • $\{2,4\} = \{x : 1 < x < 3\} \cup \{x : 3 < x \}$

The astute reader will note that in this case, the order topology on $X = \{1,2,3,4\}$ ends up being the collection of all subsets of $X$, called the power set.

### Sigma-Algebra

Let $X$ be a set. Then we define a $\sigma$-algebra: a $\sigma$-algebra is a collection $\mathfrak{M}$ of subsets of a set $X$ such that the following properties hold:

(1) $X \in \mathfrak{M}$

(2) If $A \in \mathfrak{M}$, then $A^{c} \in \mathfrak{M}$, where $A^{c}$ is the complement taken relative to the set $X$.

(3) For a countable collection $\{A_{i}\}_{i=1}^{\infty}$ of sets that's in $\mathfrak{M}$, $\cup_{i=1}^{\infty}A_{i} \in \mathfrak{M}$.

Let's look at some explicit examples:

Again, take $X = \{1,2,3,4\}$. Let $$\mathfrak{M} = \{\emptyset, X, \{1,2\}, \{3,4\}\}.$$ We'll verify that this is a $\sigma-$algebra. First, $X \in \mathfrak{M}$. Then, for each set in $\mathfrak{M}$, the complement is also present. (Remember that $X^{c} = \emptyset$.) Finally, any countable union will yield $X$ or a set already in $\mathfrak{M}$, so we indeed have a $\sigma-$algebra.

Taking another example, we'll generate a $\sigma$-algebra over a set from a single subset. Keep $X = \{1,2,3,4\}$. Let's generate a $\sigma-$algebra from the set $\{2\}$. $$\mathfrak{M}(\{2\}) = \{X, \emptyset, \{2\},\{1,3,4\}\}$$ The singleton $\{2\}$ and its complement must be in $\mathfrak{M}(\{2\})$, and we also require $X$ and its complement $\emptyset$ to be present. Any countable union here results in a set already in the collection.

### What's the difference between a topology and a $\sigma-$algebra?

Looking carefully at the definitions for each of a topology and a $\sigma-$algebra, we notice some similarities:

1. Both are collections of subsets of a given set $X$.
2. Both require the entire set $X$ and the empty set $\emptyset$ to be inside the collection. (The topology explicitly requires it, and the $\sigma-$algebra requires it implicitly by requiring the presence of $X$, and the presence of all complements.)
3. Both will hold all possible finite intersections. The topology explicitly requires this, and the $\sigma-$algebra requires this implicitly by requiring countable unions to be present (which includes finite ones), and their complements. (The complement of a finite union is a finite intersection.)
4. Both require countable unions. Here, the $\sigma-$algebra requires this explicitly, and the topology requires it implicitly, since all arbitrary unions–countable and uncountable–must be in the topology.

That seems to be a lot of similarities. Let's look at the differences.
1. A $\sigma-$algebra requires only countable unions of elements of the collection be present. The topology puts a stricter requirement: all unions, even uncountable ones.

2. The $\sigma-$algebra requires that the complement of a set in the collection be present. The topology doesn't require anything about complements.

3. The topology only requires the presence of all finite intersections of sets in the collection, whereas the $\sigma-$algebra requires all countable intersections (by combining the complement axiom and the countable union axiom).

It is with these differences we'll exhibit examples of a topology that is not a $\sigma-$algebra, a $\sigma-$algebra that is not a topology, and a collection of subsets that is both a $\sigma-$algebra and a topology.

#### A topology that is not a $\sigma-$algebra

Let $X = \{1,2,3\}$, and $\tau = \{\emptyset, X, \{1,2\},\{2\}, \{2,3\}\}$. $\tau$ is a topology because

1. $\emptyset, X \in \tau$
2. Any finite intersection of elements in $\tau$ either yields the singleton $\{2\}$ or $\emptyset$.
3. Any union generates $X$, $\{1,2\}$, or $\{2,3\}$, all of which are already in $\tau$.

However, because $\{2\}^{c} = \{1,3\} \not\in \tau$, we have a set in $\tau$ whose complement is not present, so $\tau$ cannot also be a $\sigma-$algebra. We used (2) in the list of differences to construct this example.

#### A $\sigma-$algebra that is not a topology

This example is a little trickier to construct. We need a $\sigma$-algebra, but not a topology, so we need to find a difference between the $\sigma-$algebra and the topology where the topology requirement is more strict than the $\sigma-$algebra's version. We focus on difference (1) here.

Let $X = [0,1]$. Let $\mathfrak{M}$ be the collection of subsets of $X$ that are either themselves countable, or whose complements are countable. Some examples of things in $\mathfrak{M}$:

• all rational numbers between 0 and 1, represented as singleton sets (countable)
• the entire collection of rational numbers between 0 and 1, represented as a set itself (countable)
• $\left\{\frac{1}{2^{x}}, x \in \mathbb{N}\right\}$ (countable)
• $[0,1] \setminus \{1/2, 1/4, 1/8\}$ (not countable, but its complement is $\{1/2, 1/4, 1/8\}$, which is countable)
• $\emptyset$ (countable)
• $X = [0,1]$ (not countable, but its complement $\emptyset$ is countable)

$\mathfrak{M}$ is a $\sigma-$algebra because:

• $X \in \mathfrak{M}$
• All complements of sets in $\mathfrak{M}$ are present, since we've designed the collection to hold all countable sets and all sets with countable complements.
• Finally, all countable unions of countable sets are countable, so those are present. The countable union of uncountable sets with countable complements will have a countable complement, and thus all countable unions of elements of $\mathfrak{M}$ are also in $\mathfrak{M}$, so we have a $\sigma-$algebra.

In particular, every single point of $[0,1]$ is in $\mathfrak{M}$ as a singleton set. To be a topology, any arbitrary union of elements of $\mathfrak{M}$ must also be in $\mathfrak{M}$. Take every real number between 0 and 1/2, inclusive. Then the union of all these singleton points is the interval $[0,1/2]$. However, $[0,1/2]^{c} = (1/2,1]$, which is uncountable. Thus, we have an uncountable set with an uncountable complement, so $[0,1/2] \notin \mathfrak{M}$. Since it can be represented as the arbitrary union of sets in $\mathfrak{M}$, $\mathfrak{M}$ is not a topology.
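Since the earlier examples live on finite sets, the axioms can be checked mechanically. Here is a small Python sketch (my own illustration, not from the original article); on a finite set, countable unions reduce to finite ones, and closure under pairwise unions and intersections implies closure under all finite ones:

```python
# Check a finite collection of subsets against the topology and
# sigma-algebra axioms from the definitions above.
from itertools import combinations

def is_topology(X, tau):
    tau = {frozenset(s) for s in tau}
    if frozenset() not in tau or frozenset(X) not in tau:
        return False
    for A, B in combinations(tau, 2):
        if A & B not in tau or A | B not in tau:  # pairwise closure suffices
            return False
    return True

def is_sigma_algebra(X, M):
    X = frozenset(X)
    M = {frozenset(s) for s in M}
    if X not in M:
        return False
    for A in M:
        if X - A not in M:                        # closed under complement
            return False
    for A, B in combinations(M, 2):
        if A | B not in M:                        # finite unions suffice here
            return False
    return True

X = {1, 2, 3, 4}
tau = [set(), X, {1}, {2}, {1, 2}, {1, 3}, {2, 4}, {1, 2, 3}, {1, 2, 4}]
print(is_topology(X, tau), is_sigma_algebra(X, tau))  # True, False ({1}^c missing)

M = [set(), X, {1, 2}, {3, 4}]
print(is_topology(X, M), is_sigma_algebra(X, M))      # True, True
```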
#### A collection that is both a $\sigma-$algebra and a topology

Take any set $X$ that is countable, and let $2^{X}$ be the power set on $X$ (the collection of all subsets of $X$). Then all subsets of $X$ are countable. We have that $\emptyset$ and $X$ are present, since both are subsets of $X$. Since the finite intersection of some subcollection of subsets of $X$ is a subset of $X$, it is in the collection. The arbitrary union of subsets of $X$ is either a proper subset of $X$ or $X$ itself. Thus $2^{X}$ is a topology (called the discrete topology). The complement of a subset of $X$ is still a subset, and if all arbitrary unions are in $2^{X}$, then certainly countable unions are. Thus $2^{X}$ is also a $\sigma-$algebra. To be explicit, return to the above where $X = \{1,2,3,4\}$. Write out all possible subsets of $X$, including singletons, $\emptyset$, and $X$ itself, and notice that all axioms in both the $\sigma-$algebra and the topology definitions are satisfied.

## Beyond Cookbook Mathematics, Part 2

The previous article discussed the importance of definitions to mathematical thought. We looked at a definition (of an end-vertex in a graph), and picked it apart by finding multiple ways to look at it. We also directly used the definition in a practical manner to find "weak links" in a network. This time, we'll look at the structure of mathematical theorems and proofs, and how we can read the proofs to understand the theorems.

A mathematical theorem (informally) is a statement that takes the following structure: IF {we have this stuff}, THEN {we get to say this about other stuff}. Let's pick this apart a bit even at the abstract level, because even this simple structure is important when we consider using theorems:

## IF {we have this stuff}:

That IF is really important. You can't move to the THEN without the IF part being true. Let's say a particular theorem (say, the Central Limit theorem) has a set of requirements in the IF part. We have to have all of those parts satisfied before we can invoke the conclusion. I see data scientists just invoking "Central Limit Theorem" like a chant when they are analyzing a dataset. However, in many cases, their data does not fit the IF conditions given. The Central Limit Theorem requires, among other things, that the random variables be independent. No independence, no Central Limit Theorem. Do not move forward.

Many complaints I hear, especially in statistics, revolve around "you can use statistics to say anything you like." No. No you cannot. It only appears that way because practitioners are applying theorems blindly without ensuring the hypotheses (the IF bits) are all met. A theorem written and proven is not some magic silver bullet that allows for universal application and use.

## IF -> THEN:

When we write (or read) a proof, we assume the IF part is true, and use those conditions in the IF to logically deduce the conclusions in the THEN.

(A remark: I just described a particular method of proof—the direct proof. There are other methods by which we may prove a statement that are logically equivalent to a direct proof such as proof by contradiction, or proof by contrapositive. Since the idea here is to understand how all the parts of a theorem and proof work, we'll stick with direct proofs here. We're also discussing a particular type of logical statement—a one directional implication.
We can have bidirectional statements as well, but these are an extension of understanding this first type, so we'll start here.)

The first part discussing definitions is essential to understanding the IF part of our theorem and how this part connects to our "then" conclusion. Suppose we take the statement "If a real-valued function $f$ is differentiable, then $f$ is continuous." We can use the statement without understanding. As long as we understand the definitions reasonably well (as in, we know how to tell if a function is differentiable), then we say it's continuous and move on with our day. This isn't good enough anymore. Why does differentiability imply continuity? A good proof should illuminate this for us. My suggestion at this point is to find a calculus book that proves this statement and have it open for this next part. I'm going to outline a series of thought-steps to consider as you read a proof.

### (1) Write down definitions.

Write down the definition of differentiability, and that of continuity. Study both. Play with examples of differentiable and continuous functions. See if all the differentiable functions you play with are also continuous. Try to play with some weird ones.

Examples: $f(x) = x^{2}$. Definitely differentiable. Definitely continuous. (I do suggest actually using the definitions here to really show that $f$ fits these.) $f(x) = x+5$. Fits nicely. Try some nonpolynomials now. $f(x) = \sin(x)$. Still good. $f(x) = e^{x}$. Really nice. Love differentiating that guy.

As you're playing with these, try to move between formally showing these functions fit the definitions and developing a visual picture of what it means for these functions to fit the definitions. (My high school calculus teacher, Andy Kohler, gave a nice intuitive picture of continuity—you shouldn't have to pick up your pen to draw the function.)

This step helps you visualize your starting point and your destination. Every single time I develop a theorem, this is the process I go through. This isn't always short either. I've spent weeks on this part—exploring connections between my start and my target to develop an intuition. Often, this leads me to a way to prove the desired theorem. Occasionally I manage to create an example that renders my statement untrue. This isn't a trivial or unimportant part, so I implore you to temper any building frustration with the speed of this bit. However, since here we're focused on reading proofs rather than writing them, we'll likely assume we're working with a true statement.

### (2) Now take a look at the proof.

A good proof should take you gently by the hand and guide you through the author's reasoning in nice, comfortable steps of logic. The author shouldn't require you to fill in gaps or make huge leaps, and especially never require you to just take something on faith. The proof contains all the pieces we need for understanding. We just need to know how to read it. At some point in the proof, every single part of the IF should be invoked as a justification for a logical step. Find these parts. "Because $f$ is differentiable, [STATEMENT]" Study this part of the proof. Flag it. Do this for each time an IF condition is invoked. Make sure you can use the definition or condition invoked to make the leap the author does. Here I implore you to not just "pass over". Really take the time to convince yourself this is true. Go back to the definition. Return to the proof.
Again, this study isn't necessarily quick. Doing this for each line in the proof will help you see what's going on between IF and THEN.

### (3) Break things.

The last piece of advice I'll give is one that was drilled into me by my graduate analysis professor at UTA, Dr. Barbara Shipman. Breaking things (or trying to) in mathematics is the best way to really cement your understanding. Go back to all those points you flagged in the proof where the IF conditions were invoked. Now imagine you don't have that condition anymore. The proof should fail in some way. Either you end up with a false conclusion ("if I don't have X anymore, then I definitely cannot have Y"), or you just end up stuck ("if I don't have X anymore, I can't say anything about Y"). This failure point helps you understand why that condition was necessary in the IF. Doing this for each condition in the proof helps you see the interplay among all the conditions, showing you how and why all those parts were needed to get you to your conclusion.

None of this is trivial. Please don't take the informal descriptions of this process to mean that you can breeze through and develop this skill as quickly as you memorized derivatives of functions. This is a skill that is developed over years. The reason I recommend using material you already know to study proofs is that it removes the challenge of learning new material while studying the "why". There's a reason mathematicians take calculus classes before real analysis courses (which spend time deeply developing and proving many things from calculus). You're familiar with how the subject works and that it works, then you can understand why it works.

This happens in engineering too. We flew a plane before we understood airflow deeply, but due to our understanding of airflow, we were able to advance flight technology far beyond what I imagine early aviators and inventors even thought possible. There's a beautiful feedback loop between mathematics, formal proofs, and engineering. Mathematical proofs aren't just the dry formalisms we use and throw away—they're the keys to understanding why things work the way they do. Developing the skill to read and understand these arguments is not a waste of time for an engineer; it will help propel engineering forward.

## Beyond Cookbook Mathematics, Part 1

This post is due to the requests of several independent engineers and programmers. They expressed disappointment at their mathematics education and its failure to impart a deeper understanding of the formulas and algorithms they were taught to use. This also reflects my observations of teaching university mathematics over the years. I started as a TA (and frequent substitute lecturer) in 2008, and have taught all levels of calculus, basic statistics, and advanced undergraduate statistics thus far. I've certainly noticed even since 2008 a de-emphasis on proofs and "why" in favor of more examples, applications, and formulas in general. In fact, proofs were passed over in lectures in calculus courses meant for general engineers because "there wasn't enough time", or it was not considered something engineers needed to know.

This attitude is actually fairly recent. Many of the older calculus books in my library were written for engineers in an undergraduate program, and these books are quite proof-heavy. Some examples are F. Hildebrand's Advanced Calculus for Engineers (1949), Tom Apostol's Calculus (Volumes 1 and 2), and R.
Courant’s Differential and Integral Calculus (1938), to name a few. For a more modern text that still has some proof treatment, the 10th edition of Calculus: One and Several Variables, by Salas et al is a good resource. I used this one both as a student at Georgia Tech and an instructor.1. However, when I taught at University of Texas at Arlington as a grad student from 2013-2015 and then Foothill College in early 2018, the texts they chose to use was woefully inadequate to suit a college-level calculus course; proofs were nonexistent, and reasoning was thrown away in favor of contrived examples. The course was designed to steer engineers away from any proofs or thorough reasoning, showing the experience is somewhat widespread.2 I do not want to discuss the reasons for this departure; this isn’t an education site. What is important now is to discuss how to satisfy the desire of an engineer or other highly technical person using various mathematical topics/formulas to develop a deeper understanding of what he or she is doing. It wouldn’t be particularly helpful to simply suggest acquiring books and reading the proofs. The best thing an education can give is the ability to teach oneself through developing methods of logical thought and creativity. There are books on formal logic and mathematical proofs circulating, but they can be a bit daunting to start with, as they are quite abstract and discuss mathematical logic and proof theory in general.3What I’ll give here can be perhaps considered a friendly introduction as to how to begin using mathematical proofs to facilitate understanding of material. Most of you who are reading this have likely taken a calculus course or two, and probably some basic linear algebra/matrix theory (especially if you’re in computer science). You’ve been exposed to limits, differentiation, continuity, determinants, and linear maps. You know more math than you think. We’ll use this material to begin learning how to deconstruct mathematical statements and arguments in order to understand how the pieces all fit together. (It’s really not unlike diagramming sentences.) Pick a topic you know well. That way we’re not trying to introduce new material and learn how to read proofs at the same time. Dust off your old calculus book, or differential equations book. ## Definitions Definitions are the most important thing in mathematics, and perhaps the most ignored by those using it. “Continuous” has a meaning. “Differentiable” has a meaning. Spending time to really understand the definition of a mathematical term will provide an unshakable foundation. Example: Let’s take something visual: the degree of a vertex in a graph. You might see this definition of degree of a vertex: Def. 1: The degree of a vertex $v$ in a graph $G$ is the number of edges connected to $v$ This is fairly intuitive and straightforward. One way to go about understanding this definition is to find other equivalent ways to express it. For example, we know that if there’s an edge sticking out of a vertex $v$, there must be something on the other end of the edge. (Graphs don’t allow dangling edges.) Thus, we might reframe this definition as Def. 2: The degree of a vertex $v$ is the number of vertices adjacent (connected to) $v$ If we go one step further and collect all the vertices adjacent to $v$ into a set or bucket, and name that bucket the neighborhood of $v$, we can write one more equivalent definition of the degree of a vertex. Def. 
Def. 3: The degree of a vertex $v$ is the number of vertices in (or cardinality of) the neighborhood of $v$. (Formally, mathematicians would say that the number of elements in a set is the cardinality of the set.)

Notice what we've done here. We've taken one simple definition and expressed it three equivalent ways. Each one of these gives us a slightly different facet of what the degree of a vertex is. We can look at it from the perspective of edges or vertices. Let's take this definition and use it in another definition. An end-vertex or pendant vertex in a graph is a vertex of degree 1. New definition using our previously defined degree. But what does it really tell us here? Can we visualize this? If a vertex only has degree 1, then we know that only one edge sticks out of that vertex. Equivalently, we can also say that it has only one neighbor vertex adjacent to it. The size of its neighborhood is 1. We now can picture an end-vertex quite nicely.

## Using Definitions

Many applications rely on checking to see if a definition is satisfied.

• Is $f(x) = \sin(x)$ continuous?
• Do I have any end vertices in my network?

Here we are taking a specific example and looking to see if a definition is satisfied, typically because we know that (due to theorems) we get certain properties we either want (or maybe don't want) if the definition is satisfied. For example, network engineers like to have resilient networks. By "resilient", I mean that they'd like to be able to tolerate a link failure and still be able to send information anywhere on the network. Intuitively, it would be really bad if a particular link failure isolated a node/switch so that no information could reach it. Let's try to frame that in mathematical terms.

A network can be drawn as a graph, with circles representing nodes/switches/computers/whatever, and edges between nodes representing the physical links connecting them. We want to design a network so that a single link failure anywhere in the network will not isolate a node. We can mathematically represent a link failure by the removal of an edge. So we can take our graph representing our network, and start testing edges by deleting them to see if a node ends up isolated with no edges emanating from it. Or…we could return to a definition from earlier and think about this mathematically.

If a single link failure isolates a particular node, then that means only one edge sticks out from it. That means that node has degree 1, by our first definition of degree. Thus, we may now conclude that this node also fits the definition of an end-vertex. Moving back to the practical space, we conclude: a vertex can only be isolated via a single-link failure if it's an end-vertex. We can also write the statement the other way: if a vertex is an end-vertex, then the deletion of its incident edge isolates it.

Now, thanks to these definitions, and our understanding of them, we can find a way to spot all the end-vertices in a network. Since we have multiple ways of looking at this problem, we can find the way that suits us best. Using definition 1, we can count the number of links from each node. Any node that has only one link connected to it is an end-vertex, and the failure of that link will isolate that node. Using definition 2, we can count the number of adjacent neighbors, especially if we have a forwarding table stored for each node. If any node only has one other node in its table, it's an end-vertex, and the removal of the link connecting the two nodes would isolate our vertex.
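As a quick illustration of that counting procedure, here is a Python sketch (the edge list is a made-up example, not from the article) that applies Definition 1 directly:

```python
# Find end-vertices by counting each node's links (Definition 1):
# any node of degree 1 can be isolated by a single link failure.
from collections import Counter

edges = [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")]  # "d" hangs off "c"

degree = Counter()
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

end_vertices = [node for node, d in degree.items() if d == 1]
print(end_vertices)   # ['d'] -- losing edge ("c", "d") would isolate "d"
```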
I used a fairly visual, practical definition and example here. Other definitions in mathematics can get fairly involved; I’ve spent hours simply picking apart a definition to understand it. But the strategy doesn’t change. The first step in developing a deeper understanding of mathematics is to pay attention to definitions–not just what they say, but what they mean. The next article will discuss how we use definitions to write theorems and understand proofs.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 211, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8647355437278748, "perplexity": 366.8586764308976}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358774.44/warc/CC-MAIN-20211129134323-20211129164323-00637.warc.gz"}
http://sugarmommapastries.com/kevin-allison-ccvrclw/neutrinoless-double-beta-decay-isotopes-74bcae
Neutrinoless double beta decay

Double beta decay is a radioactive decay process in which a nucleus releases two beta rays. Ordinary beta decay is a very common type of nuclear decay: it takes place when a neutron within an unstable nucleus emits an electron and an antineutrino and turns into a proton. In a number of even-even nuclei, single beta decay is energetically forbidden while double beta decay is allowed. For many isotopes, such as 76Ge, single beta decay is energetically forbidden, but two-neutrino double beta decay (2νββ) is allowed:

(A, Z) → (A, Z+2) + 2e⁻ + 2ν̄ₑ.  (1)

This was suggested very early [1] and, following the idea of Majorana that neutrinos could be their own antiparticles [2], so was the possibility of neutrinoless double beta decay (0νββ), the double beta decay without neutrino emission. Fig. 1 shows the process and the experimentally required nuclear level scheme for the transition (N, Z) → (N−2, Z+2) + 2e⁻.

Neutrinoless double beta decay is a very slow, lepton-number-violating nuclear transition that occurs only if neutrinos have mass (and oscillation experiments tell us they do) and are their own antiparticles, i.e. Majorana particles. The oscillation results tell us nothing about the properties of neutrinos under charge conjugation, and 0νββ presents the best, and perhaps the only, way to detect Majorana neutrinos: its observation would imply that lepton number is violated by two units and would prove that neutrinos have a Majorana mass component. It would have fundamental implications for neutrino physics, theories beyond the Standard Model, and cosmology, and it will help, in the foreseeable future, in determining, or at least narrowing down, the absolute neutrino mass scale. Because the decay violates lepton number, the experimental programs are on an equal footing with proton-decay searches, and the motivation for the several large efforts in this field is obvious: this is discovery science. If we find it, we find new physics no matter what.

In the standard mechanism (Figure 2), the decay is mediated by the virtual exchange of a light Majorana neutrino (the light, massive Majorana neutrinos which oscillate), with all other mechanisms potentially leading to 0νββ giving negligible or no contribution. More generally, contributions to 0νββ decay can be categorised as either long-range or short-range interactions; in the first case, the corresponding diagram involves two vertices which are both point-like at the Fermi scale. If the inverted mass hierarchy is realized, the effective Majorana neutrino mass is at least about 10 meV.

The signature of 0νββ is a peak at the end-point of the two-neutrino double beta decay (2νββ) spectrum: two simultaneous electrons whose summed kinetic energies equal the decay's Q-value, Qββ (about 2.527 MeV for 130Te). The 2νββ mode, by contrast, is a Standard Model process in which the two emitted neutrinos carry away energy, so its spectrum is continuous, extending from 0 to Qββ and peaking below Qββ/2; it has been observed, but it is rare (T½ > 10^19 yr). Only a few isotopes can undergo double beta decay at all (Figure 4 tabulates them); candidate isotopes include 48Ca, 76Ge, 82Se, 96Zr, 100Mo, 116Cd, 128Te, 130Te, 136Xe, and 150Nd [3]. Argon is not on this list, which means an argon-based search would have to use a different isotope. Tellurium-130 has the highest natural abundance of any double-beta-decay isotope (signature: 130Te → 130Xe + 2e⁻), and it has recently been developed as a promising candidate for loading in liquid scintillator to explore the Majorana or Dirac nature of the neutrino; the use of the world's largest liquid-scintillator detectors, such as SNO+, in this search is a promising development that has come to the fore in the last few years [1, 2].

Experimental groups worldwide are working to develop detectors that may allow the observation of neutrinoless double-beta decay by measuring the event rate at the expected energy. Several radioactive isotopes produce decays in the signal region of interest; short- and long-lived isotopes can be produced by cosmogenic activation of the natural target material, and only those with a Q-value higher than the 0νββ Q-value contribute to the background in the region of interest, so isotopes with a higher 0νββ Q-value are favored in this respect. The NEMO-3 experiment took data from 2003 with seven double-beta isotopes and completed data acquisition in January 2011; first results of its search for neutrinoless double beta decay are reported in R. Arnold et al. (NEMO), Phys. Rev. Lett. 95 (2005) 182302, arXiv:hep-ex/0507083. First evidence for neutrinoless double beta decay, and hence first evidence for lepton number violation, has been claimed: the evidence for this decay mode is quoted as 97% (2.2σ) with the Bayesian method and 99.8% c.l. (3.1σ) with the method recommended by the Particle Data Group. The GERDA experiment, by demonstrating that it is possible to isolate germanium-based searches from environmental interference, improved upon the sensitivity of previous efforts by an order of magnitude; its study showed that if neutrinoless double-beta decay happens, it must do so with a half-life of at least 1.8 × 10^26 years. Researchers are likewise waiting for a double-beta decay to occur inside the Majorana detector.

Currently, experimental efforts are gearing up for the so-called "tonne-phase" searches, with a sensitivity reach of T½ ~ 10^28 years; tonne-scale experiments will improve current sensitivity by a factor of about 100, and many technologies are being used to reach this ground-breaking science. More than one isotopic experiment is needed for a discovery: observations in more than one isotope will only make things better and may help reveal the nature of the lepton-number-violating process(es), and the origins of neutrino mass and of the LNV process can then be extracted, with reliance on both nuclear and particle theory. Empirical half-life formulas have also been studied: the present empirical formula is applied to all the observed double beta decay isotopes, and the computed half-lives for neutrinoless double beta decay of various isotopes (column 7 of Table 1) are compared with the experimental values. A persistent caveat is the uncertainty in the nuclear matrix elements.
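To give a rough feel for what half-life limits like these mean in practice, the sketch below converts a half-life into an expected number of decays per year via rate = ln 2 · N_atoms / T½. The tonne of 130Te and the 10^26-year half-life are illustrative placeholders, not results from any experiment mentioned above:

```python
import math

N_A = 6.022e23          # atoms per mole
A_TE130 = 130.0         # molar mass of Te-130 in g/mol

def decays_per_year(mass_kg: float, half_life_yr: float, molar_mass: float) -> float:
    # Expected decay rate: ln2 * (number of atoms) / (half-life).
    n_atoms = mass_kg * 1e3 / molar_mass * N_A
    return math.log(2) * n_atoms / half_life_yr

# One tonne of Te-130 at a hypothetical half-life of 1e26 yr:
# a few tens of events per year.
print(decays_per_year(1000.0, 1e26, A_TE130))
```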
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9678303599357605, "perplexity": 1970.7912801519683}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991252.15/warc/CC-MAIN-20210512035557-20210512065557-00110.warc.gz"}
http://physicsgoeasy.blogspot.com/2008_07_02_archive.html
### Momentum and Center of Mass

Linear momentum is defined as p = mv, where v is the velocity of the particle.
- Momentum is a vector quantity.
- The momentum of a system of particles is the vector sum of the individual momenta: p_total = ∑ M_i v_i

Center of mass:
R_cm = ∑ M_i r_i / ∑ M_i
In a coordinate system:
x_cm = ∑ M_i x_i / ∑ M_i
y_cm = ∑ M_i y_i / ∑ M_i
z_cm = ∑ M_i z_i / ∑ M_i
Velocity of the CM = ∑ M_i v_i / ∑ M_i
Acceleration of the CM = ∑ F_ext / ∑ M_i

Law of conservation of momentum: if ∑ F_ext = 0, then ∑ M_i v_i = constant.

### Work and Energy

Work: the work done by a force is defined as the dot product of the force and displacement vectors.
- For a constant force, W = F · s, where F is the force vector and s is the displacement vector.
- For a variable force, dW = F · ds, so W = ∫ F · ds.
- Work is a scalar quantity.

Conservative and non-conservative forces:
- If the work done by a force around a closed path is zero, the force is called conservative.
- If the work done by a force around a closed path is not zero, the force is called non-conservative.
- Gravitational and electrical forces are conservative; friction is non-conservative.

Kinetic energy: the energy possessed by a body in motion, defined as K.E = (1/2)mv². The net work done by external forces equals the change in the kinetic energy of the system: W = K_f − K_i.

Potential energy: the energy possessed due to the configuration of the system; it is associated with conservative forces and defined by dU = −F · dr, so U_f − U_i = −∫ F · dr, where F is the conservative force and F = −(∂U/∂x)i − (∂U/∂y)j − (∂U/∂z)k. For the gravitational force near the Earth's surface, the change in potential energy is mgh, where h is the height between the two points.

Mechanical energy is defined as K.E + P.E.

Law of conservation of energy: in the absence of external forces, the internal forces being conservative, the total energy of the system remains constant: K.E₁ + P.E₁ = K.E₂ + P.E₂.
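The definitions above lend themselves to a quick numerical check. Here is an illustrative sketch with made-up masses, positions, and velocities (not part of the original notes):

```python
import numpy as np

# Two-particle system: center of mass, total momentum, and kinetic energy.
m = np.array([2.0, 3.0])                  # masses (kg)
r = np.array([[0.0, 0.0], [4.0, 2.0]])    # positions (m)
v = np.array([[1.0, 0.0], [-1.0, 0.5]])   # velocities (m/s)

r_cm = (m[:, None] * r).sum(axis=0) / m.sum()   # R_cm = sum(M_i r_i) / sum(M_i)
p_total = (m[:, None] * v).sum(axis=0)          # p_total = sum(M_i v_i)
ke = 0.5 * (m * (v**2).sum(axis=1)).sum()       # K.E = sum (1/2) M_i v_i^2

# With F_ext = 0, p_total stays constant no matter how the particles interact.
print(r_cm, p_total, ke)
```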
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9968939423561096, "perplexity": 2723.994653622522}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825900.44/warc/CC-MAIN-20171023111450-20171023131450-00179.warc.gz"}
https://de.zxc.wiki/wiki/Radioastronomie
Radio astronomy is the branch of astronomy in which astronomical objects are investigated by means of the radio waves they emit.

## Frequency range

The frequency range of radio astronomy is limited by the Earth's atmosphere. Below a frequency of 10 MHz it is impermeable to radio waves, because the ionosphere reflects radio waves of lower frequencies. Above 100 GHz, radio waves are absorbed by water and other molecules in the air, making it difficult to receive higher-frequency radio waves. The range from 10 MHz to 100 GHz most used for radio astronomy - corresponding to the wavelength range from 30 m to 3 mm - is known as the astronomical or radio window.

Because of the great distance of astronomical radio sources, the radio waves received from them on Earth have a very low intensity. Radio astronomy therefore needs large antennas to collect them. Depending on the wavelength, the antenna types include, e.g., Yagi, loop, helical, and parabolic antennas. The radio waves are processed by sensitive amplifiers and then stored and evaluated electronically. A given measured value indicates the intensity with which the radio waves arrive from the direction at which the radio telescope is pointed. A single "look" through a radio telescope therefore does not yet produce a radio image, but only a single radio image point.

The wavelength of radio waves is much larger than the wavelength of visible light, which is why the angular resolution of a single radio telescope is much worse than that of an optical telescope. As a result, a radio image composed of many such measurements is much more blurred than an image of the same object in visible light. There are, however, methods of obtaining high-resolution radio images of extended astronomical objects. For example, several radio telescopes can be connected together to form an interferometer, so that they act like a single radio telescope whose antenna diameter corresponds to the distance between the individual systems. Since the resolution depends on the distance between the antennas, one can even achieve sharper images than with optical telescopes.

Because radio waves are absorbed less than light by interstellar clouds of dust and gas, and since most galactic celestial bodies are only weak radio sources, radio waves can be used to explore regions, such as the center of the Milky Way or dwarf galaxies behind the galactic disk, that remain closed to optical or infrared observation. Some of the most important spectral lines in astronomy lie in the radio range, including the HI line (the 21 cm line, 1420.4058 MHz), which is emitted by neutral hydrogen atoms.

70 m radio telescope at Goldstone Observatory, California

The techniques of radio astronomy are also used in the search for extraterrestrial intelligence (SETI).

## The history of radio astronomy

In 1930, Karl Guthe Jansky was looking for the cause of interference on a newly opened transatlantic radio link at a wavelength of 15 meters (20 MHz). For this he built an antenna that was 30 m wide and 4 m high. It consisted of brass pipes and wood and rotated every 20 minutes on wheels from an old Model T Ford. In 1932 he determined that the signal reached its maximum every sidereal day instead of every solar day, and thus excluded the Sun, from which he detected no signal, as the source. He determined the direction to be at a right ascension of 18 h and a declination of 10°, with a high degree of uncertainty in the declination.
This was the first time radio waves from a source outside our solar system had been detected. He suspected the source to be the galactic center (which lies about 40° further south) or the solar apex, the point toward which the Sun moves on its orbit around the galactic center. The unit of flux density used in radio astronomy was introduced in his honor:

$1\,\mathrm{Jansky} = 1\,\mathrm{Jy} = 1 \cdot 10^{-26}\,\frac{\mathrm{W}}{\mathrm{m}^{2}\,\mathrm{Hz}}$

Grote Reber read Jansky's publications on cosmic radio waves and tried to detect them at higher frequencies. He built a parabolic antenna with a diameter of 9.5 m in his garden. At frequencies of 3.3 GHz and 910 MHz he could not detect any emission from the direction of the galactic center, an indication that the source could not be a thermal cavity radiator. He was able to detect the source in the galactic center at 160 MHz. He scanned the radio sky at 160 and 480 MHz and found the strongest source in the galactic center, but he also found other areas of high intensity that did not seem to coincide with bright optical sources; these were later identified with the supernova remnant Cassiopeia A, the Vela pulsar, and the active galaxy Cygnus A. He proposed bremsstrahlung, whose intensity decreases with frequency, as the formation mechanism of the cosmic radio waves.

## Sub-areas and research objects of radio astronomy

### Objects in the solar system

- Solar radio astronomy deals with the radio waves emitted by the Sun. These give, e.g., information about solar activity and radiation bursts on the Sun.
- The planets, especially the gas giants, and their moons emit radio waves.
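As a rough numerical companion to the jansky definition above and to the resolution argument in the previous section, here is a small sketch; the dish diameters and flux value are arbitrary examples, not measurements from the article:

```python
import math

JY_IN_W_PER_M2_HZ = 1e-26   # the jansky definition given above

def to_jansky(flux_w_per_m2_hz: float) -> float:
    return flux_w_per_m2_hz / JY_IN_W_PER_M2_HZ

def resolution_arcsec(wavelength_m: float, dish_diameter_m: float) -> float:
    # Diffraction limit theta ~ 1.22 * lambda / D, converted to arcseconds.
    RAD_TO_ARCSEC = 206265.0
    return 1.22 * wavelength_m / dish_diameter_m * RAD_TO_ARCSEC

print(to_jansky(3.5e-25), "Jy")                  # an example flux density
print(resolution_arcsec(0.21, 70.0), "arcsec")   # 21 cm line on a 70 m dish
print(resolution_arcsec(550e-9, 0.1), "arcsec")  # optical light, 10 cm telescope
```

The last two lines make the blurring argument quantitative: even a 70 m dish at 21 cm resolves only hundreds of arcseconds, while a small optical telescope reaches roughly one arcsecond.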
## Collision with other radio services

The 26 m radio telescope in Hartebeesthoek, South Africa, sits in a valley to protect it from interference from Krugersdorp, 25 km away.

Radio astronomy analyzes extremely weak signals; received signal strengths of only −260 dBm are not uncommon. As a result, other radio services can easily drown out or interfere with all signals of interest to radio astronomy, making evaluation impossible. In principle, radio astronomy is subject to regulation by the ITU Radio Regulations, from which, among other things, protection zones in the vicinity of radio astronomical facilities can be derived.

The radio astronomy service is declared by the ITU a "passive service", and it is allocated spectrum just like all other radio users. However, these allocated bands are relatively limited, are always coveted by other radio services, and must therefore be defended within the framework of regulatory processes. Radio astronomers also use spectral ranges that are reserved for active radio services but are seldom used, or used only with local restrictions. The economy's growing hunger for bands for active radio services such as data networks and telecommunications, however, restricts the use of radio astronomy more and more in the non-reserved bands. In the reserved bands, on the other hand, radio astronomers are confronted with increasing unwanted interference from faulty transceivers and poorly constructed transmitters. The total number of such disturbances is increasing worldwide. Since radio astronomy is interested in ever weaker signals from space, for example to be able to detect the presence of organic molecules, experts speak of a closing window into space. More and more frequency ranges can no longer be used, or the strategies for recognizing interference and removing it from the useful signal become ever more complex. Some particularly critical scientists see a telescope station on the far side of the Moon as the only possibility for the permanent exploration of distant parts of space.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8281185626983643, "perplexity": 1240.3733643065232}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038860318.63/warc/CC-MAIN-20210418194009-20210418224009-00158.warc.gz"}
https://kyushu-u.pure.elsevier.com/en/publications/scalar-and-fermion-on-shell-amplitudes-in-generalized-higgs-effec
# Scalar and fermion on-shell amplitudes in generalized Higgs effective field theory

Ryo Nagai, Masaharu Tanabashi, Koji Tsumura, Yoshiki Uchida

Research output: Contribution to journal › Article › peer-review

## Abstract

Beyond-the-standard-model (BSM) particles should be included in effective field theory in order to compute the scattering amplitudes involving these extra particles. We formulate an extension of Higgs effective field theory which contains an arbitrary number of scalar and fermion fields with arbitrary electric and chromoelectric charges. The BSM Higgs sector is described by using the nonlinear sigma model in a manner consistent with the spontaneous electroweak symmetry breaking. The chiral-order counting rule is arranged consistently with the loop expansion. The leading-order Lagrangian is organized in accord with the chiral-order counting rule. We use a geometrical language to describe the particle interactions. The parametrization redundancy in the effective Lagrangian is resolved by describing the on-shell scattering amplitudes only with the covariant quantities in the scalar/fermion field space. We introduce a useful coordinate (normal coordinate), which simplifies the computations of the on-shell amplitudes significantly. We show that the high-energy behaviors of the scattering amplitudes determine the "curvature tensors" in the scalar/fermion field space. The massive spinor-wave function formalism is shown to be useful in the computations of on-shell helicity amplitudes.

Original language: English
Article number: 015001
Journal: Physical Review D
Volume: 104
Issue: 1
DOI: https://doi.org/10.1103/PhysRevD.104.015001
Publication status: Published - Jul 1 2021

## All Science Journal Classification (ASJC) codes

- Physics and Astronomy (miscellaneous)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.944624662399292, "perplexity": 1871.2222277218314}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305423.58/warc/CC-MAIN-20220128074016-20220128104016-00231.warc.gz"}
https://math.libretexts.org/Courses/Monroe_Community_College/MTH_098_Elementary_Algebra/2%3A_Solving_Linear_Equations_and_Inequalities/2.3%3A_Solve_Equations_with_Variables_and_Constants_on_Both_Sides/2.3E%3A_Exercises
# 2.3E: Exercises

## Practice Makes Perfect

**Solve Equations with Constants on Both Sides**

In the following exercises, solve the equations with constants on both sides.

Exercise 1: $$9 x-3=60$$

Exercise 2: $$12 x-8=64$$
Answer: $$x=6$$

Exercise 3: $$14 w+5=117$$

Exercise 4: $$15 y+7=97$$
Answer: $$y=6$$

Exercise 5: $$2 a+8=-28$$

Exercise 6: $$3 m+9=-15$$
Answer: $$m=-8$$

Exercise 7: $$-62=8 n-6$$

Exercise 8: $$-77=9 b-5$$
Answer: $$b=-8$$

Exercise 9: $$35=-13 y+9$$

Exercise 10: $$60=-21 x-24$$
Answer: $$x=-4$$

Exercise 11: $$-12 p-9=9$$

Exercise 12: $$-14 q-2=16$$
Answer: $$q=-\frac{9}{7}$$

**Solve Equations with Variables on Both Sides**

In the following exercises, solve the equations with variables on both sides.

Exercise 13: $$19 z=18 z-7$$

Exercise 14: $$21 k=20 k-11$$
Answer: $$k=-11$$

Exercise 15: $$9 x+36=15 x$$

Exercise 16: $$8 x+27=11 x$$
Answer: $$x=9$$

Exercise 17: $$c=-3 c-20$$

Exercise 18: $$b=-4 b-15$$
Answer: $$b=-3$$

Exercise 19: $$9 q=44-2 q$$

Exercise 20: $$5 z=39-8 z$$
Answer: $$z=3$$

Exercise 21: $$6 y+\frac{1}{2}=5 y$$

Exercise 22: $$4 x+\frac{3}{4}=3 x$$
Answer: $$x=-\frac{3}{4}$$

Exercise 23: $$-18 a-8=-22 a$$

Exercise 24: $$-11 r-8=-7 r$$
Answer: $$r=-2$$

**Solve Equations with Variables and Constants on Both Sides**

In the following exercises, solve the equations with variables and constants on both sides.
Exercise 25: $$8 x-15=7 x+3$$

Exercise 26: $$6 x-17=5 x+2$$
Answer: $$x=19$$

Exercise 27: $$26+13 d=14 d+11$$

Exercise 28: $$21+18 f=19 f+14$$
Answer: $$f=7$$

Exercise 29: $$2 p-1=4 p-33$$

Exercise 30: $$12 q-5=9 q-20$$
Answer: $$q=-5$$

Exercise 31: $$4 a+5=-a-40$$

Exercise 32: $$8 c+7=-3 c-37$$
Answer: $$c=-4$$

Exercise 33: $$5 y-30=-5 y+30$$

Exercise 34: $$7 x-17=-8 x+13$$
Answer: $$x=2$$

Exercise 35: $$7 s+12=5+4 s$$

Exercise 36: $$9 p+14=6+4 p$$
Answer: $$p=-\frac{8}{5}$$

Exercise 37: $$2 z-6=23-z$$

Exercise 38: $$3 y-4=12-y$$
Answer: $$y=4$$

Exercise 39: $$\frac{5}{3} c-3=\frac{2}{3} c-16$$

Exercise 40: $$\frac{7}{4} m-7=\frac{3}{4} m-13$$
Answer: $$m=-6$$

Exercise 41: $$8-\frac{2}{5} q=\frac{3}{5} q+6$$

Exercise 42: $$11-\frac{1}{5} a=\frac{4}{5} a+4$$
Answer: $$a=7$$

Exercise 43: $$\frac{4}{3} n+9=\frac{1}{3} n-9$$

Exercise 44: $$\frac{5}{4} a+15=\frac{3}{4} a-5$$
Answer: $$a=-40$$

Exercise 45: $$\frac{1}{4} y+7=\frac{3}{4} y-3$$

Exercise 46: $$\frac{3}{5} p+2=\frac{4}{5} p-1$$
Answer: $$p=15$$

Exercise 47: $$14 n+8.25=9 n+19.60$$

Exercise 48: $$13 z+6.45=8 z+23.75$$
Answer: $$z=3.46$$

Exercise 49: $$2.4 w-100=0.8 w+28$$

Exercise 50: $$2.7 w-80=1.2 w+10$$
Answer: $$w=60$$

Exercise 51: $$5.6 r+13.1=3.5 r+57.2$$

Exercise 52: $$6.6 x-18.9=3.4 x+54.7$$
Answer: $$x=23$$

## Everyday Math

Exercise 53: Concert tickets. At a school concert the total value of tickets sold was \$1506. Student tickets sold for \$6 and adult tickets sold for \$9. The number of adult tickets sold was 5 less than 3 times the number of student tickets. Find the number of student tickets sold, s, by solving the equation $$6s+27s-45=1506$$.

Exercise 54: Making a fence. Jovani has 150 feet of fencing to make a rectangular garden in his backyard. He wants the length to be 15 feet more than the width. Find the width, w, by solving the equation $$150=2 w+30+2 w$$.
Answer: 30 feet

## Writing Exercises

Exercise 55: Solve the equation $$\frac{6}{5} y-8=\frac{1}{5} y+7$$ explaining all the steps of your solution as in the examples in this section.

Exercise 56: Solve the equation $$10 x+14=-2 x+38$$ explaining all the steps of your solution as in this section.
Answer: $$x=2$$ Justifications will vary.

Exercise 57: When solving an equation with variables on both sides, why is it usually better to choose the side with the larger coefficient of $$x$$ to be the "variable" side?

Exercise 58: Is $$x=-2$$ a solution to the equation $$5-2 x=-4 x+1$$? How do you know?
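If you want to check answers mechanically, a few of the exercises above can be verified with the sympy library. This is an optional spot-check, not part of the original worksheet:

```python
from sympy import Eq, Rational, solve, symbols

x, q = symbols("x q")
print(solve(Eq(6*x - 17, 5*x + 2), x))      # Exercise 26 -> [19]
print(solve(Eq(10*x + 14, -2*x + 38), x))   # Exercise 56 -> [2]
print(solve(Eq(8 - Rational(2, 5)*q, Rational(3, 5)*q + 6), q))  # Exercise 41
```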
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8584567904472351, "perplexity": 3145.607553464679}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178357935.29/warc/CC-MAIN-20210226175238-20210226205238-00160.warc.gz"}
https://tex.stackexchange.com/questions/384118/non-pdf-special-ignored-figures-created-using-special-command
# Non-PDF special ignored! Figures created using \special command

I have several figures stored as .tex files that were generated by the Japanese-language application WinTpic. From my understanding, WinTpic merely generates a large list of \special commands in a .tex file, which you can then import with \input to reproduce figures drawn in its drawing interface. For example, here is a truncated version of a WinTpic output:

\unitlength 0.1in
\begin{picture}( 45.4400, 13.1600)( 3.6000,-16.7600)
% CIRCLE 2 0 0 0
% 4 392 1192 418 1211 418 1205 418 1205
%
\special{pn 8}%
\special{sh 1.000}%
\special{ar 392 1192 32 32 0.0000000 6.2831853}%
\end{picture}%

If, say, the above code block were stored in figure1.tex, we would then use something like

\begin{figure}[htb]
\centering
\input{figure1}
\caption{A WinTpic example.}
\end{figure}

to generate the picture. However, pdfLaTeX gives a Non-PDF special ignored! warning for all of these commands. From my own research, I am aware of (although I do not fully understand) the driver-dependent issues that lead to pdfLaTeX ignoring these commands by design. However, I have far too many complicated figures stored in this manner to merely recreate them in TikZ. As such, I need a way to convert these files into a usable format or to force pdfLaTeX to run these commands.

• It could work directly in the document with auto-pst-pdf, but I would probably put the pictures in standalone documents and create PDFs with latex->dvips->ps2pdf. These PDFs you can then include with \includegraphics in your normal document. – Ulrike Fischer Jul 31 '17 at 11:50
• I've not tried it, but manpages.ubuntu.com/manpages/trusty/man1/tpic2pdftex.1.html (and the script does seem to be installed in TeX Live 2017) – David Carlisle Jul 31 '17 at 12:02
• Thank you for the suggestions. These solutions seem beyond the scope of my knowledge at the moment (shell-escape for auto-pst-pdf; extra hoops I've never done before for latex->dvips->ps2pdf, and it is my first time seeing AWK for tpic2pdftex). I am installing TeX Live 2017 now and will go from there, but I don't think I will be able to update whether I can get these to work soon. – Echan Jul 31 '17 at 12:25
• @UlrikeFischer I could not get auto-pst-pdf to work for the life of me, but latex->dvips->ps2pdf worked like a charm. Thanks! – Echan Aug 1 '17 at 16:34
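A minimal per-figure wrapper implementing the latex -> dvips -> ps2pdf route from the comments might look as follows; wrapper.tex is a hypothetical file name, and the preview package is used here only as one common way to crop the page to the picture:

% wrapper.tex -- build with: latex wrapper && dvips wrapper && ps2pdf wrapper.ps
% dvips understands the tpic \special commands that pdfLaTeX ignores.
\documentclass{article}
\usepackage[active,tightpage]{preview}% crop output to the picture's bounding box
\begin{document}
\begin{preview}
\input{figure1}% the WinTpic output from the question
\end{preview}
\end{document}

The resulting wrapper.pdf can then be pulled into the pdfLaTeX document with \includegraphics.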
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.842441201210022, "perplexity": 1749.9719232615942}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195527089.77/warc/CC-MAIN-20190721164644-20190721190644-00517.warc.gz"}
https://phys.libretexts.org/TextBooks_and_TextMaps/Supplemental_Modules_(Physics)/Astronomy_and_Cosmology/Cosmology/Michael_Richmond/3._Heliocentric_parallax
3. Heliocentric parallax

The simplest way to measure the distance to an object via parallax is to make simultaneous measurements from two locations on Earth. However, as we saw last time, this method only works for objects which are relatively nearby:

| Object | Maximum possible parallax (arcseconds) |
|---|---|
| Mars | 34 |
| Jupiter | 4 |
| Neptune | 0.6 |
| Proxima Centauri | 0.00007 |

In short, simultaneous two-location measurements will only work for bodies within our own solar system. We need to find some larger baseline to measure the parallax to other stars....

**A longer baseline: the orbit of the Earth around the Sun**

Astronomers need a VERY long baseline in order to produce a parallax angle which is large enough to detect via conventional imaging. Fortunately, there is a convenient baseline just waiting to be used -- if we are willing to discard the requirement that measurements be simultaneous (more on this later). This longer baseline is the radius of the Earth's orbit around the Sun.

Note the convention used only in this situation: the quoted "heliocentric parallax angle" π (pi) is always HALF the apparent angular shift.

So, if we measure a parallax half-angle π (pi) to a star, we can calculate its distance very simply:

$L = \frac{R}{\tan(\pi)}$

where

$R = 149.6 \times 10^{6}\ \mathrm{km} = 1\ \text{Astronomical Unit (AU)}$

**Small angles and peculiar units make life easy**

Now, for small angles -- and the angles are always small for stars -- we can use the small-angle approximations:

$\tan(\pi) \approx \sin(\pi) \approx \pi$

as long as we measure the angle in radians. Even if we don't use radians, it still remains true that cutting a small angle in half will also cut its tangent in half:

$\tan(\frac{\pi}{2}) \approx \frac{1}{2}\tan(\pi)$

and, in general, there is a linear relationship between the tiny angle π and its tangent, regardless of the units. Take a look for yourself:

| Angle (degrees) | Angle (radians) | tan(angle) |
|---|---|---|
| 1 | 0.017453 | 0.017455 |
| 0.5 | 0.008727 | 0.008727 |
| 0.1 | 0.0017453 | 0.0017453 |
| 0.01 | 0.00017453 | 0.00017453 |

Okay, fine. Why am I belaboring this point? Because astronomers have chosen a set of units for parallax calculations which look strange, but turn out to simplify the actual work. The relationships between these units depend on the fact that there is a simple linear relationship between a tiny parallax angle, in ANY units, and the tangent of that angle.

**Angles are measured in arcseconds (")**

Recall that 1 arcsecond = 1/3600 degree. Astronomers use arcseconds because parallax values for nearby stars are just a bit less than one arcsecond. You may also see angles quoted in mas, which stands for milli-arcseconds: 1 mas = 1/1000 arcsecond.

**Distances are measured in parsecs (pc)**

A parsec is defined as the distance at which a star will have a heliocentric parallax half-angle of 1 arcsecond.

Exercise 1
- How many meters are in one parsec?
- How many light years are in one parsec?

Given these units, and the linear relationship between a small angle and its tangent, we can calculate the distance to a star (in pc) very simply if we know its parallax half-angle in arcseconds:

$\mathrm{distance}\ (pc) = \frac{1}{\pi}$

Give it a try: the first star to have its parallax measured accurately was 61 Cygni. Way back in 1838, the German astronomer Friedrich Bessel announced that its parallax half-angle was 0.314 arcseconds.

Exercise 2
Based on Bessel's measurement, what is the distance to 61 Cygni?
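A quick numerical check of the two relations above, using Python purely as a calculator (the only inputs are numbers already given in the text):

```python
import math

AU_KM = 149.6e6                          # Earth-Sun distance R, in km
ARCSEC_RAD = math.radians(1 / 3600)      # one arcsecond, in radians

pc_km = AU_KM / math.tan(ARCSEC_RAD)     # distance for a 1" parallax half-angle
print(f"1 pc = {pc_km:.3e} km")          # about 3.086e13 km
print(f"61 Cygni: {1 / 0.314:.2f} pc")   # distance (pc) = 1 / parallax (arcsec)
```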
Some observational difficulties

Actually, 1838 isn't all that long ago. Astronomers had been looking through telescopes since the time of Galileo, in the early 1600s. Why did it take over 200 years for someone to measure the parallax to another star? There are several reasons:

• The parallax shifts are always small. Really small. Smaller than the apparent size of stars as seen from the Earth's surface. Starlight is refracted by air as it passes through the Earth's atmosphere. It encounters layers with a range of temperatures and pressures, and, what's worse, all these layers are constantly in motion. As a result of all this refraction, astronomers on the ground perceive stars to be little blurry spots. The typical size of a "seeing disk" is around 1 arcsecond. In order to measure the parallax of a star, we must determine its position -- and that of several reference stars in the same field -- to a very small fraction of this seeing disk. That's not an easy task.

• All stars in a field exhibit parallax. In practice, astronomers usually measure the shift of one star in an image relative to other stars in the same image; differential measurements can be made much more precisely than absolute ones. However, as the Earth moves from one side of the Sun to the other, we will see ALL the stars in the field shift, not only the star of interest. In other words, we'd like to see this: But we instead see something like this: The only hope is to pick out a set of reference stars which happen to be much farther away than the target star. Distant stars will shift by a much smaller angle -- perhaps small enough to be imperceptible. If we measure the position of the nearby target star relative to those distant ones, we might be able to detect its shift. In practice, astronomers perform a rather complicated series of computations:

• pick a set of reference stars
• estimate the distance to each reference star, using some method other than parallax
• measure the shift of the target relative to the references
• correct the position of each reference for its own parallactic shift
• re-calculate the shift of the target, relative to these corrected reference positions

How can we estimate the distance to the reference stars, if we haven't even determined the distance to the single target star yet? A good question. It requires a decades-long series of iterations:

1. make a simple assumption, like "all stars are the same intrinsic luminosity, so apparent brightness is simply related to distance."
2. use the apparent brightness of reference stars to estimate their distances
3. calculate the distance to the target star
4. do this for hundreds of nearby target stars
5. based on these measurements of distance to nearby stars, draw some general conclusions about the true luminosities of stars
6. use these conclusions to make improved rules for estimating the distances to reference stars
7. go to step 2

• Heliocentric parallax measurements are not simultaneous. We have to wait months for the Earth to move a significant distance in its orbit around the Sun. During that time, the Sun is moving through space at a decent clip: very roughly 12 km/s relative to nearby stars in the disk of the Milky Way. Other stars are moving, too, at similar speeds. Yes, most of the stars in the local neighborhood are orbiting around the center of the galaxy at a speed of roughly 200 km/s. If all stars moved with EXACTLY the same velocity, we would see no relative motion as we all circled the Milky Way together.
However, each star has some individual "peculiar velocity" relative to the average rotational velocity, and that is the velocity discussed here. These relative motions of the Sun and nearby stars produce proper motion: a gradual drift in the position of each star relative to all the others around it. These proper motions are in almost all cases so small that it takes thousands of years for the visual appearance of constellations to change:

Thanks to Anna Jangren at Wesleyan University

However, the proper motions can be significant at the level of precision required for parallax measurements. Consider these observations of Vega over a period of three years, made by the Hipparcos satellite.

Exercise 3
- What is the parallax of Vega?
- What is the distance to Vega?
- What is the proper motion of Vega, in arcsec/century?
- What is the tangential speed of Vega through space relative to the Sun?

The bottom line

The best large set of parallax measurements comes from the Hipparcos satellite, which measured the position and brightness of relatively bright (brighter than tenth magnitude or so) stars over the entire sky during the period 1989-1993. A large team of scientists turned its millions of raw measurements into a consistent catalog of distances and luminosities. The precision of the Hipparcos measurements of parallax is about 0.001 arcsecond. You might think that such precision would allow us to measure distances to stars as far away as 1000 pc. However, as we shall see, it turns out that the true range for accurate distances is quite a bit smaller.
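For the last part of Exercise 3, the standard conversion is v_t = 4.74 · μ · d, with μ in arcsec/yr and d in pc giving v_t in km/s. The sketch below applies it with rough Vega values (parallax ≈ 0.13", total proper motion ≈ 0.35"/yr) that are assumptions for illustration, not numbers from the text:

```python
# Tangential speed from proper motion: v_t [km/s] = 4.74 * mu [arcsec/yr] * d [pc].
def tangential_speed_kms(mu_arcsec_per_yr: float, distance_pc: float) -> float:
    return 4.74 * mu_arcsec_per_yr * distance_pc

d_vega = 1 / 0.13                          # pc, from an assumed parallax of 0.13"
print(tangential_speed_kms(0.35, d_vega))  # roughly 13 km/s
```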
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8917086124420166, "perplexity": 844.4676429743561}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589270.3/warc/CC-MAIN-20180716115452-20180716135452-00467.warc.gz"}
https://deepai.org/publication/saddlepoint-approximations-for-rayleigh-block-fading-channels
This paper presents saddlepoint approximations of state-of-the-art converse and achievability bounds for noncoherent, single-antenna, Rayleigh block-fading channels. These approximations can be calculated efficiently and are shown to be accurate for SNR values as small as 0 dB, blocklengths of 168 channel uses or more, and when the channel's coherence interval is not smaller than two. It is demonstrated that the derived approximations recover both the normal approximation and the reliability function of the channel.

## I Introduction

The study of the maximum coding rate achievable for a given blocklength and error probability has recently regained attention in the research community due to the increased interest in short-packet communication in wireless communications systems. Indeed, some of the new services in next-generation wireless-communication systems will require low latency and high reliability; see [1] and references therein. Under such constraints, capacity and outage capacity may no longer be accurate benchmarks, and more refined metrics on the maximum coding rate that take into account the short packet size required in low-latency applications are called for.

Several techniques can be used to characterize the finite-blocklength performance. One possibility is to fix a reliability constraint and study the maximum coding rate as a function of the blocklength in the limit as the blocklength tends to infinity. This approach, sometimes referred to as the normal approximation, was followed inter alia by Polyanskiy et al.
[2] who showed, for various channels with positive capacity $C$, that the maximum coding rate $R^*(n,\epsilon)$ at which data can be transmitted using an error-correcting code of a fixed length $n$ with a block-error probability not larger than $\epsilon$ can be tightly approximated by

$$R^*(n,\epsilon) = C - \sqrt{\frac{V}{n}}\, Q^{-1}(\epsilon) + O\!\left(\frac{\log n}{n}\right) \tag{1}$$

where $V$ denotes the channel dispersion, $Q^{-1}$ denotes the inverse Gaussian $Q$-function, and $O(\log n/n)$ comprises terms that decay no slower than $(\log n)/n$. The work by Polyanskiy et al. [2] has been generalized to several wireless communication channels; see, e.g., [3, 4, 5, 6, 7, 8, 9, 10]. Particularly relevant to the present paper is the recent work by Lancho et al. [9, 10], who derived a high-SNR normal approximation for noncoherent single-antenna Rayleigh block-fading channels, which is the channel model considered in this work. An alternative analysis of the short-packet performance follows from fixing the coding rate and studying the exponential decay of the error probability as the blocklength grows. The resulting error exponent is usually referred to as the reliability function [11, Ch. 5]. Error exponent results for this channel can be found in [12] and [13], where a random-coding error exponent achievability bound is derived for multiple-antenna fading channels and for single-antenna Rician block-fading channels, respectively. Both the exponential and sub-exponential behavior of the error probability can be characterized via the saddlepoint method [14, Ch. XVI]. This method has been applied in [15, 16, 17] to obtain approximations of the random coding union (RCU) bound [2, Th. 16], the RCU bound with parameter $s$ (RCUs) [18, Th. 1], and the meta-converse (MC) bound [2, Th. 31] for some memoryless channels. In this paper, we apply the saddlepoint method to derive approximations of the MC and the RCUs bounds for noncoherent single-antenna Rayleigh block-fading channels. While these approximations must be evaluated numerically, their computational complexity is independent of the number of diversity branches $L$. This is in stark contrast to the nonasymptotic MC and RCUs bounds, whose evaluation has a computational complexity that grows linearly in $L$. Numerical evidence suggests that the saddlepoint approximations, although developed under the assumption of large $L$, are accurate for blocklengths as small as 168 channel uses if the SNR is greater than or equal to 0 dB. Furthermore, the proposed approximations are shown to recover the normal approximation and the reliability function of the channel, thus providing a unifying tool for the two regimes, which are usually considered separately in the literature. In our analysis, the saddlepoint method is applied to the tail probabilities appearing in the nonasymptotic MC and RCUs bounds. These probabilities often depend on a set of parameters, such as the SNR. Existing saddlepoint expansions do not consider such dependencies. Hence, they can only characterize the behavior of the expansion error as a function of $n$, but not in terms of the remaining parameters. In contrast, we derive in Section II a saddlepoint expansion for random variables whose distribution depends on a parameter $\theta$, carefully analyze the error terms, and demonstrate that they are uniform in $\theta$. We then apply the expansion to the Rayleigh block-fading channel introduced in Section III. As shown in Sections IV–VII, this results in accurate performance approximations in which the error terms depend only on the blocklength and are uniform in the remaining parameters.
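As a quick illustration of Eq. (1), the following minimal Python sketch (ours, not part of the paper) evaluates the normal approximation for a hypothetical channel; the capacity `C` and dispersion `V` values below are placeholders, since for the Rayleigh block-fading channel they would have to be computed from the channel law.

```python
# Minimal sketch of the normal approximation, Eq. (1), with
# placeholder capacity C and dispersion V (not from the paper).
import numpy as np
from scipy.stats import norm

def normal_approximation(C, V, n, eps):
    """R*(n, eps) ~ C - sqrt(V/n) * Qinv(eps), ignoring O(log n / n)."""
    Qinv = norm.isf(eps)              # inverse Gaussian Q-function
    return C - np.sqrt(V / n) * Qinv

# Example: hypothetical channel with C = 2 bits/use, V = 1.5
for n in (168, 500, 2000):
    print(n, normal_approximation(2.0, 1.5, n, 1e-3))
```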
#### Notation

We denote scalar random variables by upper case letters such as $X$, and their realizations by lower case letters such as $x$. Likewise, we use boldface upper case letters such as $\mathbf{X}$ to denote random vectors, and we use boldface lower case letters such as $\mathbf{x}$ to denote their realizations. We use upper case letters with the standard font to denote distributions, and lower case letters with the standard font to denote probability density functions (pdf). We use $i$ to denote the purely imaginary unit-magnitude complex number $\sqrt{-1}$. The superscript $(\cdot)^H$ denotes Hermitian transposition. We use "$\overset{d}{=}$" to denote equality in distribution. We further use $\mathbb{R}$ to denote the set of real numbers, $\mathbb{C}$ to denote the set of complex numbers, $\mathbb{Z}$ to denote the set of integer numbers, $\mathbb{N}$ for the set of positive integer numbers, and $\mathbb{N}_0$ for the set of nonnegative integer numbers. We denote by $\log$ the natural logarithm, by $\cos$ the cosine function, by $\sin$ the sine function, by $Q(\cdot)$ the Gaussian $Q$-function, and by $\Gamma(\cdot)$ the Gamma function [19, Sec. 6.1.1]; we shall also use the regularized lower incomplete gamma function [19, Sec. 6.5], the digamma function [19, Sec. 6.3.2], and the Gauss hypergeometric function [20, Sec. 9.1]. The gamma distribution with parameters $\alpha$ and $\beta$ is denoted by $\mathrm{Gamma}(\alpha,\beta)$. We use $\lceil \cdot \rceil$ to denote the ceiling function, and we denote Euler's constant by $\gamma_{\mathrm{E}}$. We use the notation $o(1)$ to describe terms that vanish as $\rho \to \infty$ and are uniform in the rest of the parameters involved. For example, we say that a function $f(L,\rho)$ is $o(1)$ if it satisfies

$$\lim_{\rho\to\infty}\, \sup_{L \ge L_0} |f(L,\rho)| = 0 \tag{2}$$

for some $L_0$ independent of $\rho$. Similarly, we use the notation $\mathcal{O}(\log L/L)$ to describe terms that are of order $\log L / L$ and are uniform in the rest of the parameters. For example, we say that a function $g(L,\rho)$ is $\mathcal{O}(\log L/L)$ if it satisfies

$$\sup_{\rho \ge \rho_0} |g(L,\rho)| \le K\,\frac{\log L}{L}, \qquad L \ge L_0 \tag{3}$$

for some $K$, $L_0$, and $\rho_0$ independent of $L$ and $\rho$. Finally, we denote by $\liminf$ the limit inferior and by $\limsup$ the limit superior.

Let $\{X_k\}$ be a sequence of independent and identically distributed (i.i.d.), real-valued, zero-mean, random variables, whose distribution depends on $\theta \in \Theta$, where $\Theta$ denotes the set of possible values of $\theta$. The moment generating function (MGF) of $X_k$ is defined as

$$m_\theta(\zeta) \triangleq E\big[e^{\zeta X_k}\big] \tag{4}$$

the cumulant generating function (CGF) is defined as

$$\psi_\theta(\zeta) \triangleq \log m_\theta(\zeta) \tag{5}$$

and the characteristic function is defined as

$$\varphi_\theta(\zeta) \triangleq E\big[e^{i\zeta X_k}\big]. \tag{6}$$

We denote by $m_\theta^{(k)}$ and $\psi_\theta^{(k)}$ the $k$-th derivative of $m_\theta$ and $\psi_\theta$, respectively. For the first, second, and third derivatives we sometimes use the notation $m'_\theta$, $m''_\theta$, $m'''_\theta$, $\psi'_\theta$, $\psi''_\theta$, and $\psi'''_\theta$. A random variable is said to be lattice if it is supported on the points $b$, $b \pm h$, $b \pm 2h$, … for some $b$ and $h$. A random variable that is not lattice is said to be nonlattice. It can be shown that a random variable is nonlattice if, and only if, for every $\delta > 0$ we have that [14, Ch. XV.1, Lemma 4]

$$|\varphi_\theta(\zeta)| < 1, \qquad |\zeta| \ge \delta. \tag{7}$$

We shall say that a family of random variables (parametrized by $\theta$) is nonlattice if for every $\delta > 0$

$$\sup_{\theta\in\Theta,\, |\zeta| \ge \delta} |\varphi_\theta(\zeta)| < 1. \tag{8}$$

Similarly, we shall say that a family of distributions (parametrized by $\theta$) is nonlattice if the corresponding family of random variables is nonlattice.

###### Proposition 1

Let the family of i.i.d. random variables $\{X_k\}$ (parametrized by $\theta$) be nonlattice. Suppose that there exists a $\zeta_0 > 0$ such that

$$\sup_{\theta\in\Theta,\, |\zeta|<\zeta_0} \big|m_\theta^{(k)}(\zeta)\big| < \infty, \qquad k = 0,1,2,3,4 \tag{9}$$

and

$$\inf_{\theta\in\Theta,\, |\zeta|<\zeta_0} \psi''_\theta(\zeta) > 0. \tag{10}$$

Then, we have the following results:

Part 1): If for the nonnegative $\gamma$ there exists a $\tau \in [0,\zeta_0)$ such that $n\psi'_\theta(\tau) = \gamma$, then

$$P\bigg[\sum_{k=1}^{n} X_k \ge \gamma\bigg] = e^{n[\psi_\theta(\tau) - \tau\psi'_\theta(\tau)]}\bigg[f_\theta(\tau,\tau) + \frac{K_\theta(\tau,n)}{\sqrt{n}} + o\Big(\frac{1}{\sqrt{n}}\Big)\bigg] \tag{11}$$

where $o(1/\sqrt{n})$ comprises terms that vanish faster than $1/\sqrt{n}$ and are uniform in $\gamma$ and $\theta$. Here,

$$f_\theta(u,\tau) \triangleq e^{\frac{n u^2}{2}\psi''_\theta(\tau)}\, Q\big(u\sqrt{n\,\psi''_\theta(\tau)}\big) \tag{12a}$$

$$K_\theta(\tau,n) \triangleq \frac{\psi'''_\theta(\tau)}{6\,\psi''_\theta(\tau)^{3/2}} \bigg(-\frac{1}{\sqrt{2\pi}} + \frac{\tau^2\psi''_\theta(\tau)\, n}{\sqrt{2\pi}} - \tau^3\psi''_\theta(\tau)^{3/2}\, n^{3/2} f_\theta(\tau,\tau)\bigg). \tag{12b}$$
Part 2): Let $U$ be uniformly distributed on $(0,1)$, independent of $\{X_k\}$. If for the nonnegative $\gamma$ there exists a $\tau \in [0,\zeta_0)$ such that $n\psi'_\theta(\tau) = \gamma$, then

$$P\bigg[\sum_{k=1}^{n} X_k \ge \gamma + \log U\bigg] = e^{n[\psi_\theta(\tau)-\tau\psi'_\theta(\tau)]}\bigg[f_\theta(\tau,\tau) + f_\theta(1-\tau,\tau) + \frac{\tilde K_\theta(\tau,n)}{\sqrt{n}} + o\Big(\frac{1}{\sqrt n}\Big)\bigg] \tag{13}$$

where $\tilde K_\theta(\tau,n)$ is defined as

$$\tilde K_\theta(\tau,n) \triangleq K_\theta(\tau,n) + \frac{\psi'''_\theta(\tau)}{6\,\psi''_\theta(\tau)^{3/2}} \bigg(\frac{1}{\sqrt{2\pi}} - \frac{(1-\tau)^2\psi''_\theta(\tau)\, n}{\sqrt{2\pi}} + (1-\tau)^3\psi''_\theta(\tau)^{3/2}\, n^{3/2} f_\theta(1-\tau,\tau)\bigg) \tag{14}$$

and $o(1/\sqrt n)$ is uniform in $\gamma$ and $\theta$.

###### Corollary 2

Assume that there exists a $\zeta_0 > 0$ satisfying (9) and (10). If for the nonnegative $\gamma$ there exists a $\tau \in [\tau_0, \zeta_0)$ (for some arbitrary $\tau_0 > 0$ independent of $\gamma$ and $\theta$) such that $n\psi'_\theta(\tau) = \gamma$, then the saddlepoint expansion (13) can be upper-bounded as

$$P\bigg[\sum_{k=1}^{n} X_k \ge \gamma + \log U\bigg] \le e^{n[\psi_\theta(\tau)-\tau\psi'_\theta(\tau)]}\bigg[f_\theta(\tau,\tau) + f_\theta(1-\tau,\tau) + \frac{\hat K_\theta(\tau)}{\sqrt n} + o\Big(\frac{1}{\sqrt n}\Big)\bigg] \tag{15}$$

where $\hat K_\theta(\tau)$ is independent of $n$ and is defined as

$$\hat K_\theta(\tau) \triangleq \frac{1}{\sqrt{2\pi}}\,\frac{\psi'''_\theta(\tau)}{6\,\psi''_\theta(\tau)^{3/2}} \tag{16}$$

and the $o(1/\sqrt n)$ term is uniform in $\gamma$ and $\theta$.

###### Remark 1

Since $X_k$ is zero-mean by assumption, we have that $m_\theta(\zeta) \ge e^{\zeta E[X_k]} = 1$ by Jensen's inequality. Together with (9), this implies that

$$\sup_{\theta\in\Theta,\, |\zeta|<\zeta_0} \big|\psi_\theta^{(k)}(\zeta)\big| < \infty, \qquad k = 0,1,2,3,4. \tag{17}$$

###### Remark 2

When the nonnegative $\gamma$ grows sublinearly in $n$, then, for sufficiently large $n$, one can always find a $\tau \in [0,\zeta_0)$ such that $n\psi'_\theta(\tau) = \gamma$. Indeed, it follows by (9) and Remark 1 that $\psi_\theta$ is an analytic function on $|\zeta| < \zeta_0$ with power series

$$\psi_\theta(\tau) = \frac{1}{2}\psi''_\theta(0)\,\tau^2 + \frac{1}{6}\psi'''_\theta(0)\,\tau^3 + \ldots \tag{18}$$

Here, we have used that $\psi_\theta(0) = 0$ by definition and $\psi'_\theta(0) = 0$ because $X_k$ is zero mean. By assumption (10), the function $\psi_\theta$ is strictly convex. Together with $\psi'_\theta(0) = 0$, this implies that $\psi'_\theta(\tau)$ strictly increases for $\tau > 0$. Hence, the choice

$$\psi'_\theta(\tau) = \frac{\gamma}{n} \tag{19}$$

establishes a one-to-one mapping between $\tau$ and $\gamma$, and sublinear growth of $\gamma$ implies that $\tau \to 0$ as $n \to \infty$. Thus, for sufficiently large $n$, $\tau$ is inside the region of convergence $[0,\zeta_0)$.

###### Proof:

The proof follows closely the steps by Feller [14, Ch. XVI]. Since we consider a slightly more involved setting, where the distribution of $X_k$ depends on a parameter $\theta$, we reproduce all the steps here. Let $F_\theta$ denote the distribution of $X_k - \tilde\gamma$, where $\tilde\gamma \triangleq \gamma/n$. The CGF of $X_k - \tilde\gamma$ is given by

$$\tilde\psi_\theta(\zeta) \triangleq \psi_\theta(\zeta) - \zeta\tilde\gamma. \tag{20}$$

We consider a tilted random variable $V_k$ with distribution

$$\vartheta_{\theta,\tau}(x) = e^{-\tilde\psi_\theta(\tau)}\int_{-\infty}^{x} e^{\tau t}\, dF_\theta(t) = e^{-\psi_\theta(\tau)+\tau\tilde\gamma}\int_{-\infty}^{x} e^{\tau t}\, dF_\theta(t) \tag{21}$$

where the parameter $\tau$ lies in $[0,\zeta_0)$. Note that the exponential term on the right-hand side (RHS) of (21) is a normalizing factor that guarantees that $\vartheta_{\theta,\tau}$ is a distribution. Let $v_{\theta,\tau}$ denote the MGF of the tilted random variable $V_k$, which is given by

$$v_{\theta,\tau}(\zeta) = \int_{-\infty}^{\infty} e^{\zeta x}\, d\vartheta_{\theta,\tau}(x) = e^{-\psi_\theta(\tau)+\tau\tilde\gamma}\int_{-\infty}^{\infty} e^{(\zeta+\tau)x}\, dF_\theta(x) = e^{-\psi_\theta(\tau)}\, E\big[e^{(\zeta+\tau)X_k}\big]\, e^{-\zeta\tilde\gamma} = \frac{m_\theta(\zeta+\tau)}{m_\theta(\tau)}\, e^{-\zeta\tilde\gamma}. \tag{22}$$

Together with (22), this yields

$$E[V_k] = \frac{\partial v_{\theta,\tau}(\zeta)}{\partial\zeta}\bigg|_{\zeta=0} = e^{-\psi_\theta(\tau)}\, E\big[X_k e^{\tau X_k}\big] - \tilde\gamma = \psi'_\theta(\tau) - \tilde\gamma. \tag{23}$$

Note that, by (9), derivative and expected value can be swapped as long as $|\zeta+\tau| < \zeta_0$. This condition is, in turn, satisfied for sufficiently small $|\zeta|$ as long as $\tau < \zeta_0$. Following along similar lines, one can show that

$$\mathrm{Var}[V_k] = E[V_k^2] - E[V_k]^2 = v''_{\theta,\tau}(0) - v'_{\theta,\tau}(0)^2 = \psi''_\theta(\tau) \tag{24}$$

$$E\big[(V_k - E[V_k])^3\big] = \psi'''_\theta(\tau) \tag{25}$$

and

$$E\big[(V_k - E[V_k])^4\big] = \psi''''_\theta(\tau) + 3\,\psi''_\theta(\tau)^2. \tag{26}$$

Let now $\vartheta^{\star n}_{\theta,\tau}$ denote the distribution of $\sum_{k=1}^{n} V_k$ and $F^{\star n}_\theta$ denote the distribution of $\sum_{k=1}^{n}(X_k - \tilde\gamma)$. By (21) and (22), the distributions $\vartheta^{\star n}_{\theta,\tau}$ and $F^{\star n}_\theta$ again stand in the relationship (21) except that the term $\psi_\theta(\tau)$ is replaced by $n\psi_\theta(\tau)$ and $\tau\tilde\gamma$ is replaced by $\tau\gamma$. Since $\sum_{k}(X_k-\tilde\gamma) = \sum_k X_k - \gamma$, by inverting (21) we can establish the relationship

$$P\bigg[\sum_{k=1}^{n} X_k \ge \gamma\bigg] = e^{n\psi_\theta(\tau)-\tau\gamma}\int_{0}^{\infty} e^{-\tau y}\, d\vartheta^{\star n}_{\theta,\tau}(y). \tag{27}$$

Furthermore, by choosing $\tau$ such that $\psi'_\theta(\tau) = \tilde\gamma$, it follows from (23) that the distribution $\vartheta^{\star n}_{\theta,\tau}$ has zero mean.
We next substitute in (27) the distribution $\vartheta^{\star n}_{\theta,\tau}$ by the zero-mean normal distribution with variance $n\psi''_\theta(\tau)$, denoted by $N_{n\psi''_\theta(\tau)}$, and analyze the error incurred by this substitution. To this end, we define

$$A_\tau \triangleq e^{n\psi_\theta(\tau)-\tau\gamma}\int_0^\infty e^{-\tau y}\, dN_{n\psi''_\theta(\tau)}(y). \tag{28}$$

By fixing $\tau$ according to (19), (28) becomes

$$A_\tau = \frac{e^{n[\psi_\theta(\tau)-\tau\psi'_\theta(\tau)]}}{\sqrt{2\pi n\,\psi''_\theta(\tau)}}\int_0^\infty e^{-\tau y}\, e^{-\frac{y^2}{2n\psi''_\theta(\tau)}}\, dy = \frac{e^{n[\psi_\theta(\tau)-\tau\psi'_\theta(\tau)]}}{\sqrt{2\pi}}\int_0^\infty e^{-\tau t\sqrt{n\psi''_\theta(\tau)}}\, e^{-\frac{t^2}{2}}\, dt = \frac{e^{n[\psi_\theta(\tau)-\tau\psi'_\theta(\tau)+\frac{\tau^2}{2}\psi''_\theta(\tau)]}}{\sqrt{2\pi}}\int_{\tau\sqrt{n\psi''_\theta(\tau)}}^{\infty} e^{-\frac{x^2}{2}}\, dx = e^{n[\psi_\theta(\tau)-\tau\psi'_\theta(\tau)+\frac{\tau^2}{2}\psi''_\theta(\tau)]}\, Q\big(\tau\sqrt{n\,\psi''_\theta(\tau)}\big) \tag{29}$$

where the second equality follows by the change of variable $y = t\sqrt{n\psi''_\theta(\tau)}$, and the third equality follows by completing the square and the change of variable $x = t + \tau\sqrt{n\psi''_\theta(\tau)}$. We next show that the error incurred by substituting $N_{n\psi''_\theta(\tau)}$ for $\vartheta^{\star n}_{\theta,\tau}$ in (27) is small. To do so, we write

$$P\bigg[\sum_{k=1}^n X_k \ge n\psi'_\theta(\tau)\bigg] - A_\tau = e^{n[\psi_\theta(\tau)-\tau\psi'_\theta(\tau)]}\int_0^\infty e^{-\tau y}\big(d\vartheta^{\star n}_{\theta,\tau}(y) - dN_{n\psi''_\theta(\tau)}(y)\big)$$
$$= e^{n[\psi_\theta(\tau)-\tau\psi'_\theta(\tau)]}\Big[-\big(\vartheta^{\star n}_{\theta,\tau}(0) - N_{n\psi''_\theta(\tau)}(0)\big) + \tau\int_0^\infty\big(\vartheta^{\star n}_{\theta,\tau}(y) - N_{n\psi''_\theta(\tau)}(y)\big)\, e^{-\tau y}\, dy\Big] \tag{30}$$

where the last equality follows by integration by parts [14, Ch. V.6, Eq. (6.1)]. We next use [14, Sec. XVI.4, Th. 1] (stated as Lemma 3 below) to assess the error committed by replacing $\vartheta^{\star n}_{\theta,\tau}$ by $N_{n\psi''_\theta(\tau)}$. To state Lemma 3, we first introduce the following additional notation. Let $\{\tilde X_k\}$ be a sequence of i.i.d., real-valued, zero-mean, random variables with one-dimensional probability distribution $\tilde F_\theta$ that depends on an extra parameter $\theta$. We denote the $k$-th moment for any possible value of $\theta$ by

$$\mu_{k,\theta} = \int_{-\infty}^{\infty} x^k\, d\tilde F_\theta(x) \tag{31}$$

and we denote the second moment as $\sigma^2_\theta \triangleq \mu_{2,\theta}$. For the distribution of the normalized $n$-fold convolution of a sequence of i.i.d., zero-mean, unit-variance random variables, we write

$$\tilde F_{n,\theta}(x) = \tilde F^{\star n}_\theta\big(x\,\sigma_\theta\sqrt{n}\big). \tag{32}$$

Note that $\tilde F_{n,\theta}$ has zero mean and unit variance. As above, we denote by $N$ the zero-mean, unit-variance, normal distribution, and we denote by $n(\cdot)$ the zero-mean, unit-variance, normal probability density function.

###### Lemma 3

Assume that the family of distributions $\{\tilde F_\theta\}$ (parametrized by $\theta$) is nonlattice. Further assume that

$$\sup_{\theta\in\Theta} \mu_{4,\theta} < \infty \tag{33}$$

and

$$\inf_{\theta\in\Theta} \sigma_\theta > 0. \tag{34}$$

Then, for any $x$,

$$\tilde F_{n,\theta}(x) - N(x) = \frac{\mu_{3,\theta}}{6\,\sigma^3_\theta\sqrt{n}}\,(1-x^2)\, n(x) + o\Big(\frac{1}{\sqrt n}\Big) \tag{35}$$

where the $o(1/\sqrt n)$ term is uniform in $x$ and $\theta$.

###### Proof: See Appendix A.

We next use (35) from Lemma 3 to expand (30). To this end, we first note that, as shown in Appendix B, if a family of distributions is nonlattice, then so is the corresponding family of tilted distributions. Consequently, the family of distributions $\{\vartheta_{\theta,\tau}\}$ is nonlattice since the family $\{F_\theta\}$ (parametrized by $\theta$) is nonlattice by assumption. We next note that the variable $y$ in (30) corresponds to $x\,\sigma_\theta\sqrt n$ in (32). Hence, applying (35) to (30) with $\mu_{3,\theta} = \psi'''_\theta(\tau)$ and $\sigma^2_\theta = \psi''_\theta(\tau)$, we obtain

$$P\bigg[\sum_{k=1}^n X_k \ge n\psi'_\theta(\tau)\bigg] - A_\tau = e^{n[\psi_\theta(\tau)-\tau\psi'_\theta(\tau)]}\bigg[-\frac{1}{\sqrt{2\pi}}\frac{\psi'''_\theta(\tau)}{6\,\psi''_\theta(\tau)^{3/2}\sqrt n} + o\Big(\frac{1}{\sqrt n}\Big) + \tau\int_0^\infty\bigg(\frac{\psi'''_\theta(\tau)}{6\,\psi''_\theta(\tau)^{3/2}\sqrt n}\Big(1 - \frac{y^2}{n\psi''_\theta(\tau)}\Big)\, n\Big(\frac{y}{\sqrt{\psi''_\theta(\tau)\, n}}\Big) + o\Big(\frac{1}{\sqrt n}\Big)\bigg)\, e^{-\tau y}\, dy\bigg]$$
$$= e^{n[\psi_\theta(\tau)-\tau\psi'_\theta(\tau)]}\bigg[\frac{1}{\sqrt{2\pi}}\frac{\psi'''_\theta(\tau)}{6\,\psi''_\theta(\tau)^{3/2}\sqrt n}\bigg(-1 + \int_0^\infty \tau\sqrt{\psi''_\theta(\tau)\, n}\,(1-z^2)\, e^{-\tau\sqrt{\psi''_\theta(\tau)n}\, z - \frac{z^2}{2}}\, dz\bigg) + o\Big(\frac{1}{\sqrt n}\Big)\bigg]$$
$$= e^{n[\psi_\theta(\tau)-\tau\psi'_\theta(\tau)]}\bigg[\frac{\psi'''_\theta(\tau)}{6\,\psi''_\theta(\tau)^{3/2}\sqrt n}\bigg(-\frac{1}{\sqrt{2\pi}} + \frac{\tau^2\psi''_\theta(\tau)\, n}{\sqrt{2\pi}} - \tau^3\psi''_\theta(\tau)^{3/2}\, n^{3/2} f_\theta(\tau,\tau)\bigg) + o\Big(\frac{1}{\sqrt n}\Big)\bigg] = e^{n[\psi_\theta(\tau)-\tau\psi'_\theta(\tau)]}\bigg[\frac{K_\theta(\tau,n)}{\sqrt n} + o\Big(\frac{1}{\sqrt n}\Big)\bigg] \tag{36}$$

with $f_\theta(\tau,\tau)$ defined in (12a) and $K_\theta(\tau,n)$ defined in (12b). Here we used that $\psi''_\theta(\tau)$ and $\psi'''_\theta(\tau)$ coincide with the second and third central moments of the tilted random variable $V_k$, respectively; see (24) and (25). The second equality follows by the change of variable $z = y/\sqrt{n\psi''_\theta(\tau)}$. Finally, substituting $A_\tau$ in (29) into (36), and recalling that $\gamma = n\psi'_\theta(\tau)$, we obtain Part 1) of Proposition 1, namely

$$P\bigg[\sum_{k=1}^n X_k \ge n\psi'_\theta(\tau)\bigg] = e^{n[\psi_\theta(\tau)-\tau\psi'_\theta(\tau)]}\bigg[f_\theta(\tau,\tau) + \frac{K_\theta(\tau,n)}{\sqrt n} + o\Big(\frac{1}{\sqrt n}\Big)\bigg]. \tag{37}$$

###### Proof:

The proof of Part 2) follows along similar lines as the proof of Part 1). Hence, we focus on describing what is different. Specifically, the left-hand side (LHS) of (13) differs from the LHS of (11) by the additional term $\log U$. To account for this difference, we can follow the same steps as Scarlett et al. [15, Appendix E].
Since in our setting the distribution of $X_k$ depends on the parameter $\theta$, we repeat the main steps in the following:

$$P\bigg[\sum_{k=1}^n X_k \ge \gamma + \log U\bigg] = e^{n\psi_\theta(\tau)-\tau\gamma}\int_0^1\int_{\log u}^{\infty} e^{-\tau y}\, d\vartheta^{\star n}_{\theta,\tau}(y)\, du = e^{n\psi_\theta(\tau)-\tau\gamma}\int_{-\infty}^{\infty}\int_0^{\min\{1,\, e^y\}} e^{-\tau y}\, du\, d\vartheta^{\star n}_{\theta,\tau}(y) = e^{n\psi_\theta(\tau)-\tau\gamma}\bigg(\int_0^\infty e^{-\tau y}\, d\vartheta^{\star n}_{\theta,\tau}(y) + \int_{-\infty}^0 e^{(1-\tau)y}\, d\vartheta^{\star n}_{\theta,\tau}(y)\bigg) \tag{38}$$

where the second equality follows from Fubini's theorem [21, Ch. 2, Sec. 9.2]. We next proceed as in the proof of the previous part. The first term in (38) coincides with (27). We next focus on the second term, namely,

$$e^{n\psi_\theta(\tau)-\tau\gamma}\int_{-\infty}^0 e^{(1-\tau)y}\, d\vartheta^{\star n}_{\theta,\tau}(y). \tag{39}$$

We substitute in (39) the distribution $\vartheta^{\star n}_{\theta,\tau}$ by the zero-mean normal distribution with variance $n\psi''_\theta(\tau)$, denoted by $N_{n\psi''_\theta(\tau)}$, which yields

$$\tilde A_\tau \triangleq e^{n\psi_\theta(\tau)-\tau\gamma}\int_{-\infty}^0 e^{(1-\tau)y}\, dN_{n\psi''_\theta(\tau)}(y). \tag{40}$$

By fixing $\tau$ according to (19), (40) can be computed as

$$\tilde A_\tau = \frac{e^{n[\psi_\theta(\tau)-\tau\psi'_\theta(\tau)]}}{\sqrt{2\pi n\,\psi''_\theta(\tau)}}\int_{-\infty}^0 e^{(1-\tau)y}\, e^{-\frac{y^2}{2n\psi''_\theta(\tau)}}\, dy = \frac{e^{n[\psi_\theta(\tau)-\tau\psi'_\theta(\tau)]}}{\sqrt{2\pi}}\int_{-\infty}^0 e^{(1-\tau)t\sqrt{n\psi''_\theta(\tau)}}\, e^{-\frac{t^2}{2}}\, dt = \frac{e^{n[\psi_\theta(\tau)-\tau\psi'_\theta(\tau)+\frac{(1-\tau)^2}{2}\psi''_\theta(\tau)]}}{\sqrt{2\pi}}\int_{-\infty}^{-(1-\tau)\sqrt{n\psi''_\theta(\tau)}} e^{-\frac{x^2}{2}}\, dx = e^{n[\psi_\theta(\tau)-\tau\psi'_\theta(\tau)+\frac{(1-\tau)^2}{2}\psi''_\theta(\tau)]}\, Q\big((1-\tau)\sqrt{n\,\psi''_\theta(\tau)}\big) \tag{41}$$

where the second equality follows by the change of variable $y = t\sqrt{n\psi''_\theta(\tau)}$, the third equality follows by completing the square and the change of variable $x = t - (1-\tau)\sqrt{n\psi''_\theta(\tau)}$, and the last equality uses the symmetry of the Gaussian integrand. As we did in (30), we next evaluate the error incurred by substituting $N_{n\psi''_\theta(\tau)}$ for $\vartheta^{\star n}_{\theta,\tau}$ in (39). Indeed,

$$e^{n\psi_\theta(\tau)-\tau\gamma}\int_{-\infty}^0 e^{(1-\tau)y}\, d\vartheta^{\star n}_{\theta,\tau}(y) - \tilde A_\tau = e^{n[\psi_\theta(\tau)-\tau\psi'_\theta(\tau)]}\int_{-\infty}^0 e^{(1-\tau)y}\big(d\vartheta^{\star n}_{\theta,\tau}(y) - dN_{n\psi''_\theta(\tau)}(y)\big)$$
$$= e^{n[\psi_\theta(\tau)-\tau\psi'_\theta(\tau)]}\Big[\big(\vartheta^{\star n}_{\theta,\tau}(0) - N_{n\psi''_\theta(\tau)}(0)\big) - (1-\tau)\int_{-\infty}^0\big(\vartheta^{\star n}_{\theta,\tau}(y) - N_{n\psi''_\theta(\tau)}(y)\big)\, e^{(1-\tau)y}\, dy\Big]$$
$$= e^{n[\psi_\theta(\tau)-\tau\psi'_\theta(\tau)]}\bigg[\frac{\psi'''_\theta(\tau)}{6\,\psi''_\theta(\tau)^{3/2}\sqrt n}\bigg(\frac{1}{\sqrt{2\pi}} - \frac{(1-\tau)^2\psi''_\theta(\tau)\, n}{\sqrt{2\pi}} + (1-\tau)^3\psi''_\theta(\tau)^{3/2}\, n^{3/2} f_\theta(1-\tau,\tau)\bigg) + o\Big(\frac{1}{\sqrt n}\Big)\bigg] \tag{42}$$

where the last step follows by applying Lemma 3 exactly as in (36). Combining (38) with (36), (41), and (42), and noting that $\tilde A_\tau = e^{n[\psi_\theta(\tau)-\tau\psi'_\theta(\tau)]} f_\theta(1-\tau,\tau)$, we obtain (13) with $\tilde K_\theta(\tau,n)$ given by (14).
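To make the expansion of Proposition 1 concrete, here is a small self-contained numerical check (our construction, not from the paper) for the toy family $X_k = E_k - 1$ with $E_k \sim \mathrm{Exp}(1)$, whose CGF $\psi(\zeta) = -\zeta - \log(1-\zeta)$ is known in closed form and whose sum has an exact Gamma tail to compare against. The numerically stable evaluation of $f_\theta(u,\tau)$ uses `scipy.special.erfcx`.

```python
import numpy as np
from scipy.special import erfcx
from scipy.stats import gamma

# CGF of X_k = E_k - 1 with E_k ~ Exp(1), and its derivatives.
def psi(z):   return -z - np.log1p(-z)
def dpsi(z):  return -1.0 + 1.0 / (1.0 - z)
def d2psi(z): return 1.0 / (1.0 - z) ** 2
def d3psi(z): return 2.0 / (1.0 - z) ** 3

def f_term(u, tau, n):
    # f(u, tau) = exp(n u^2 psi''(tau)/2) Q(u sqrt(n psi''(tau))),
    # evaluated without overflow via erfcx(x) = exp(x^2) erfc(x).
    arg = u * np.sqrt(n * d2psi(tau))
    return 0.5 * erfcx(arg / np.sqrt(2.0))

def saddlepoint_tail(n, gam):
    # tau solves psi'(tau) = gam/n, cf. Eq. (19).
    tau = (gam / n) / (1.0 + gam / n)
    K = d3psi(tau) / (6.0 * d2psi(tau) ** 1.5) * (
        -1.0 / np.sqrt(2.0 * np.pi)
        + tau**2 * d2psi(tau) * n / np.sqrt(2.0 * np.pi)
        - tau**3 * d2psi(tau) ** 1.5 * n ** 1.5 * f_term(tau, tau, n))
    pref = np.exp(n * (psi(tau) - tau * dpsi(tau)))
    return pref * (f_term(tau, tau, n) + K / np.sqrt(n))   # Eq. (11)

n, gam = 100, 30.0
print("saddlepoint:", saddlepoint_tail(n, gam))
print("exact      :", gamma.sf(n + gam, n))   # sum(E_k) ~ Gamma(n, 1)
```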
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9870567321777344, "perplexity": 809.722980068345}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107894890.32/warc/CC-MAIN-20201027225224-20201028015224-00407.warc.gz"}
https://aakashsrv1.meritnation.com/ask-answer/question/please-solve-both-a-and-b-part/motion/16994729
# Please solve both part (a) and part (b)

Solution:

Initial velocity, $u = 12$ m/s
Rate at which the velocity decreases = acceleration, $a = -0.5$ m/s²

(a) Let the time at which the particle comes to rest be $t$. Then, using the kinematic relation for motion under uniform acceleration:
$v = u + at$
$0 = 12 - 0.5t$
$t = \frac{12}{0.5} = 24\ \text{s}$

(b) Let the distance covered before coming to rest be $x$. Then, using another kinematic relation for motion under uniform acceleration:
$v^2 = u^2 + 2ax$
$0 = (12)^2 - 2 \times 0.5 \times x$
$x = \frac{144}{1} = 144\ \text{m}$
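A one-line numerical check of both parts (not part of the original answer), using the same two kinematic relations:

```python
# Constant-acceleration check: v = u + a*t and v^2 = u^2 + 2*a*x.
u, a = 12.0, -0.5      # initial speed (m/s), acceleration (m/s^2)
t = -u / a             # stopping time from 0 = u + a*t
x = -u**2 / (2 * a)    # stopping distance from 0 = u^2 + 2*a*x
print(t, x)            # -> 24.0 s and 144.0 m
```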
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.979661762714386, "perplexity": 1633.0348247735542}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570651.49/warc/CC-MAIN-20220807150925-20220807180925-00457.warc.gz"}
https://proofwiki.org/wiki/Category:Inverse_Sine
Category:Inverse Sine This category contains results about Inverse Sine. Definitions specific to this category can be found in Definitions/Inverse Sine. Let $z \in \C$ be a complex number. The inverse sine of $z$ is the multifunction defined as: $\sin^{-1} \paren z := \set {w \in \C: \sin \paren w = z}$ where $\sin \paren w$ is the sine of $w$. Subcategories This category has the following 3 subcategories, out of 3 total. Pages in category "Inverse Sine" The following 5 pages are in this category, out of 5 total.
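As an illustration of the multifunction defined above (our example, not part of the ProofWiki page), the following snippet enumerates a few branches using the classical fact that all solutions of $\sin w = z$ are $w = (-1)^k \operatorname{Arcsin} z + k\pi$ for $k \in \Z$, where $\operatorname{Arcsin}$ is the principal branch:

```python
import cmath

def inverse_sine(z, kmax=2):
    """Branches w with sin(w) = z, namely w = (-1)^k asin(z) + k*pi."""
    w0 = cmath.asin(z)                      # principal value
    return [(-1) ** k * w0 + k * cmath.pi for k in range(-kmax, kmax + 1)]

z = 0.5 + 0.25j
for w in inverse_sine(z):
    print(w, cmath.sin(w))                  # sin(w) reproduces z each time
```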
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.918699324131012, "perplexity": 1684.7270943563974}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104240553.67/warc/CC-MAIN-20220703104037-20220703134037-00590.warc.gz"}
https://math.stackexchange.com/questions/1490441/non-abelian-groups-of-order-p2q2
# Non-abelian groups of order $p^2q^2$

Let $p<q$ be prime numbers and let $G$ be a group of order $p^2q^2$. I wish to determine, up to isomorphism, how many such groups $G$ there are.

What I know: The abelian case is very clear. Moreover, if we assume that $pq\neq 6$ then it can be shown that $$G=Q\rtimes P,$$ where $P,Q$ are the corresponding Sylow subgroups. For some $p,q$ the only groups $G$ are abelian, but let's focus on those $p,q$ for which $G$ can be non-abelian. I believe that if $Q$ is cyclic, then for any $P$ (cyclic or of rank $2$) there exists exactly one isomorphism class. However, in the case where $Q=C_q\times C_q$ I am not sure about the number of isomorphism classes. Any help will be appreciated.

• You are looking for subgroups of ${\rm GL}(2,q)$ isomorphic to $C_p$, $C_{p^2}$ or $C_p \times C_p$. The cases to consider are $p|q-1$, $p^2|q-1$, $p|q+1$ and $p^2|q+1$. Note that $C_p \times C_p \le {\rm GL}(2,q) \Leftrightarrow p|q-1$. – Derek Holt Oct 21 '15 at 9:56
• @DerekHolt Thank you, but what I wanted to know is: for two such isomorphic subgroups of ${\rm GL}(2,q)$ (let's say $C_p$), do the actions they induce on $Q$ yield isomorphic groups $G$ or not? – Ofir Schnabel Oct 21 '15 at 10:01
• $p=2$ may be a bit different, so you need to do that separately. When $p$ is odd and $p|q-1$, then ${\rm GL}(2,q)$ has $2 + (p-1)/2$ conjugacy classes of subgroups of order $p$, and they all give rise to separate nonabelian groups of order $pq^2$. I think in all cases there is a unique conjugacy class of subgroups of ${\rm GL}(2,q)$ of the order concerned. – Derek Holt Oct 21 '15 at 15:31
• Sorry, in the last comment, I meant in all *other* cases there is a unique conjugacy class of subgroups, i.e. subgroups $C_{p^2}$ when $p^2|q-1$ or $p^2|q+1$, subgroups of order $p$ when $p|q+1$, and subgroups isomorphic to $C_p^2$ when $p|q-1$. – Derek Holt Oct 21 '15 at 16:03
• @DerekHolt Thanks a lot, this is what I've been looking for; let's see if I got it. Let's take $p=3$ and $q=19$. Then $p^2|q-1$. So up to isomorphism the non-abelian groups of order $p^2q^2$ are as follows. When $Q$ is cyclic there are two such groups, one for $P\cong C_p^2$ and one for $P\cong C_{p^2}$. When $Q$ is of rank $2$ we get one group corresponding to $P$ being cyclic and acting without a kernel, and $3$ groups which correspond to a $C_p$ action. Similarly for $P$ being of rank $2$. – Ofir Schnabel Oct 22 '15 at 8:21

Let's study the case $p=3$, $q=19$. Let $P \in {\rm Syl}_{19}(G)$, $Q \in {\rm Syl}_3(G)$. (Sorry, I have managed to swap $P$ and $Q$!)

Case 1. $P,Q$ cyclic. $Q$ can induce an automorphism of order $3$ or $9$ of $P$, giving $2$ groups.

Case 2. $P$ cyclic, $Q$ non-cyclic. $Q$ must induce an automorphism of order $3$ of $P$, giving $1$ group.

Case 3. $P$ non-cyclic, $Q$ cyclic. Let $\omega$ be an element of order $9$ in ${\mathbb F}_{19}^*$; for example $\omega=4$.

a) If $Q$ induces an automorphism of order $3$ of $P$, then there are $3$ groups, in which the eigenvalues of the action of $P$ on $Q$ are respectively $(1, \omega^3)$, $(\omega^3,\omega^3)$, and $(\omega^3,\omega^6)$.

b) If $Q$ induces an automorphism of order $9$ of $P$, then there are $7$ groups, in which the eigenvalues of the action of $P$ on $Q$ are respectively $(1, \omega)$, $(\omega,\omega)$, $(\omega,\omega^2)$, $(\omega,\omega^3)$, $(\omega,\omega^4)$, $(\omega,\omega^6)$, $(\omega,\omega^8)$. (Note that $(\omega,\omega^5)$ would give a group isomorphic to $(\omega,\omega^2)$, and $(\omega,\omega^7)$ isomorphic to $(\omega,\omega^4)$.)

Case 4.
$Q$ and $P$ both non-cyclic.

a) If $Q$ induces an automorphism of order $3$ of $P$, then there are $3$ groups, just as in Case 3 a).

b) If $Q$ acts faithfully on $P$, then there is a unique group.

So we get $17$ nonabelian groups altogether which, together with the $4$ abelian groups, makes $21$ groups of this order. This agrees with the number given by GAP.

• Thanks a lot, just a comment and a question. When you write "in which the eigenvalues of the action of $P$ on $Q$ are respectively", you mean the action of $Q$ on $P$, I believe. And it seems that you assume that the $Q$-action (or the corresponding matrix in ${\rm Aut}(P)$) is always diagonalizable, in other words that the $Q$-action acts on the different copies of $C_p\times C_p$ without mixing them. Why can we assume that? – Ofir Schnabel Oct 23 '15 at 8:18
• Yes, sorry I kept $P$ and $Q$ confused! The diagonalizability of the action of the $3$-group on the $19$-group follows from Maschke's Theorem in Group Representation Theory. Think of it as a $2$-dimensional representation of $Q$ on the vector space of dimension $2$ over ${\mathbb F}_{19}$. – Derek Holt Oct 23 '15 at 14:29
• Thanks again, so what is the condition on $p,q$ for every action to be diagonalizable? Clearly this does not hold for all $p,q$; for example the action of $C_3\times C_3$ (with kernel isomorphic to $C_3$) on $C_2\times C_2$ by an order-$3$ permutation of the elements of order $2$, or the action of $C_7\times C_7$ on $C_{13}\times C_{13}$, are not diagonalizable (I think). So I believe the condition is that $q$ is a divisor of $p-1$; is that right? – Ofir Schnabel Oct 26 '15 at 8:36
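To double-check the eigenvalue bookkeeping in Cases 3 a) and 3 b), here is a small sketch of ours (not from the answer). Writing the diagonal action as $\operatorname{diag}(\omega^a,\omega^b)$, two exponent pairs give isomorphic semidirect products precisely when they agree up to swapping and up to simultaneous multiplication by a unit mod $9$ (a change of generator of the cyclic group). Counting these orbits reproduces the $3$ and $7$ classes above:

```python
from math import gcd
from itertools import product

def elt_order(a):            # multiplicative order of w^a, w of order 9
    return 9 // gcd(a, 9)

def key(a, b):               # unordered exponent pair mod 9
    return tuple(sorted((a % 9, b % 9)))

def orbit_count(action_order):
    pairs = {key(a, b) for a, b in product(range(9), repeat=2)
             if max(elt_order(a), elt_order(b)) == action_order}
    units = [u for u in range(1, 9) if gcd(u, 9) == 1]
    # canonical representative = minimum over the unit-scaling orbit
    return len({min(key(u * a, u * b) for u in units) for a, b in pairs})

print(orbit_count(3))   # -> 3, matching Case 3 a)
print(orbit_count(9))   # -> 7, matching Case 3 b)
```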
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9611872434616089, "perplexity": 137.26135334619502}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315174.57/warc/CC-MAIN-20190820003509-20190820025509-00143.warc.gz"}
https://www.arxiv-vanity.com/papers/0912.3344/
# Non-extensivity of the chemical potential of polymer melts

J.P. Wittmer¹, A. Johner¹, A. Cavallo¹², P. Beckrich¹, F. Crevel¹, J. Baschnagel¹
¹ Institut Charles Sadron, 23 rue du Loess, BP 84047, 67034 Strasbourg Cedex 2, France
² Dipartimento di Fisica, Università degli Studi di Salerno, I-84084 Fisciano, Italy

###### Abstract

Following Flory's ideality hypothesis the chemical potential of a test chain of length $n$ immersed into a dense solution of chemically identical polymers of length distribution $P(N)$ is extensive in $n$. We argue that an additional contribution $\delta\mu_c(n) \sim +1/(\rho\sqrt{n})$ arises ($\rho$ being the monomer density) for all $n \gg g$ if $n \ll \langle N \rangle$, which can be traced back to the overall incompressibility of the solution leading to a long-range repulsion between monomers. Focusing on Flory distributed melts we obtain $\delta\mu_c(n) \approx (c_\mu/\sqrt{n})(1-2\mu n)$ for $g \ll n \ll \langle N \rangle^2$; hence, $\delta\mu_c(n) \approx -c_\mu\sqrt{\mu}$ if $n$ is similar to the typical length of the bath, $\langle N \rangle \approx 1/\mu$. Similar results are obtained for monodisperse solutions. Our perturbation calculations are checked numerically by analyzing the annealed length distribution of linear equilibrium polymers generated by Monte Carlo simulation of the bond-fluctuation model. As predicted we find, e.g., the non-exponentiality parameter to decay as $K_p \sim 1/\sqrt{\langle N \rangle}$ for all moments $p$ of the distribution.

###### Keywords: Chemical potential – Polymer melts – Equilibrium polymers
###### pacs: 61.25.H- Macromolecular and polymer solutions; polymer melts and 82.35.-x Polymers: properties; reactions; polymerization and 05.10.Ln Monte Carlo methods

## 1 Introduction

One of the cornerstones of polymer physics is Flory's ideality hypothesis [DegennesBook; DoiEdwardsBook; SchaferBook], which states that polymer chains in the melt follow Gaussian statistics, i.e. they are random walks without long-range correlations. The official justification of this mean-field result is that density fluctuations are small beyond the screening length $\xi$, hence, negligible [DoiEdwardsBook]. The size $R(s)$ of a chain segment of arc-length $s$ of a test chain of length $n$ plugged into a melt of chemically identical $N$-polymers of (normalized) length distribution $P(N)$ and mean length $\langle N \rangle$ [foot_PN] scales, hence, as

$$R^2(s) = b^2 s \quad \text{for } g \ll s \le n \ll \langle N \rangle^2 \tag{1}$$

with $b$ denoting the effective bond length and $g$ the number of monomers spanning the screening length [DoiEdwardsBook]. See Fig. 1 for a sketch of some of the notations used in this paper. The extensivity of the chemical potential $\mu_c(n)$ of the test chain with respect to $n$,

$$\mu_c(n) = \mu n \quad \text{for } g \ll n \ll \langle N \rangle^2, \tag{2}$$

$\mu$ being the effective chemical potential per monomer, is yet another well-known consequence of Flory's hypothesis [DegennesBook]. The upper boundary indicated in Eq. (1) and Eq. (2) for later reference is due to the well-known swelling of extremely large test chains where the bath acts as a good solution [DegennesBook; SchaferBook; Sergei81].
Assuming Eq. (2) to hold for the $N$-chains of the bath, dense grand-canonical "equilibrium polymers" [CC90; WMC98b] are thus supposed to be "Flory size distributed",

$$P(N) = \mu\, e^{-\mu N}, \tag{3}$$

where, as elsewhere in this paper, temperature and Boltzmann's constant have been set to unity [foot_FloryHuggins]. Eq. (3) implies, of course, that $\langle N^p \rangle = p!/\mu^p$ for the $p$-th moment of the distribution. Strictly speaking, Eq. (3) applies only between a lower and an upper cutoff in $N$, but both limits become irrelevant for systems of large mean chain length, $\langle N \rangle = 1/\mu \gg 1$, where only exponentially few chains are not within these bounds.

Recently, Flory's hypothesis has been challenged by the discovery of long-range intrachain correlations in three-dimensional melts [Sergei81; WMBJOMMS04; WBJSOMB07; BJSOBW07; WBM07; papRouse; WCK09; papPrshort] and ultrathin films [Sergei81; ANS03; CMWJB05]. The physical mechanism of these correlations is related to the "correlation hole" of density $n/R(n)^d$ in $d$ dimensions, with $R(n)$ being the typical size of the test chain (Fig. 1(a)). Due to the overall incompressibility of the melt this is known to set an entropic penalty which has to be paid if two chains of length $n$ are joined together, or which is gained if a chain is broken into two parts [ANS03]. The same effective repulsion acts also between adjacent chain segments of length $s$, and this on all scales [WBJSOMB07; WBM07]. In three dimensions this leads to a weak swelling of the chain segments characterized, e.g., by

$$1 - \frac{R^2(s)}{b^2 s} = \frac{c_s}{\sqrt{s}} \quad \text{for } g \ll s \ll n \tag{4}$$

with $c_s$ denoting the "swelling coefficient" [WMBJOMMS04; WBCHCKM07; WBM07]. Similar corrections with respect to Flory's ideality hypothesis have been obtained for other intrachain conformational properties such as higher moments of the segmental size distribution [WBM07], orientational bond correlations [WMBJOMMS04; WBCHCKM07; papPrshort] or the single chain structure factor [WBJSOMB07; BJSOBW07].

In this paper we question the validity of Flory's hypothesis for a central thermodynamic property, the chemical potential of a test chain inserted into a three-dimensional melt. Our key claim is that the correlation hole potential leads to a deviation

$$\delta\mu_c(n) \equiv \mu_c - \mu n \approx u^*(n) \sim +\frac{1}{\rho\sqrt{n}} \quad \text{for } n \ll \langle N \rangle \tag{5}$$

that is non-extensive in chain length, and this irrespective of the distribution $P(N)$ of the bath. Covering a broader $n$-range we will show explicitly for a melt of quenched Flory-size distribution that

$$\delta\mu_c(n) \approx \frac{c_\mu}{\sqrt{n}}\,(1 - 2\mu n) \quad \text{for } g \ll n \ll \langle N \rangle^2 \tag{6}$$

where we have defined the amplitude $c_\mu$, made explicit in Eq. (20) below. This correction implies for the annealed length distribution of linear equilibrium polymers that (to leading order)

$$P(N) \approx \mu\, e^{-\mu N - \delta\mu_c(N)} \tag{7}$$
$$\approx \mu\, e^{-\mu N}\Big(1 - \frac{c_\mu}{\sqrt{N}}\,(1 - 2\mu N)\Big) \tag{8}$$

where both the lower ($N \gg g$) and the upper limit ($N \ll \langle N \rangle^2$) of validity become again irrelevant in the limit of large mean chain length. Eq. (8) will allow us to demonstrate Eq. (6) numerically from the observed non-exponentiality of the length distribution of equilibrium polymer melts obtained by means of Monte Carlo simulation of a standard lattice model [WMC98b; BFM]. The one-loop perturbation calculation leading to Eq. (6) is presented in Section 2, where we will also address the chemical potential of monodisperse melts. Section 3 outlines the numerical algorithm used for the simulation of equilibrium polymers. Our computational results are compared to theory in Section 4.

## 2 Perturbation calculation

### 2.1 General remarks

Following Edwards [DoiEdwardsBook] we take as a reference for the perturbation calculation a melt of Gaussian chains of effective bond length $b$. Averages performed over this unperturbed reference system are labeled by an index $0$.
The general task is to compute the ratio of the perturbed to the unperturbed single chain partition function,

$$1 - \frac{Q(n)}{Q_0(n)} = 1 - \langle e^{-u_n}\rangle_0 \approx \langle u_n\rangle_0 = \sum_{s=0}^{n}(n-s)\int d\mathbf{r}\; G(\mathbf{r},s)\,\tilde v(\mathbf{r}) \tag{9}$$

with the perturbation potential $u_n$ being the sum of the effective monomer interactions $\tilde v$ over all pairs of monomers of the test chain of length $n$, and $G(\mathbf{r},s)$ denoting the Gaussian propagator for a chain segment of length $s$ [DoiEdwardsBook]. The factor $(n-s)$ in Eq. (9) counts the number of equivalent monomer pairs separated by an arc-length $s$. The deviation $\delta\mu_c(n)$ from Flory's hypothesis is then given by the contribution to $\langle u_n\rangle_0$ which is non-linear in $n$. The calculation of Eq. (9) in $d$ dimensions is most readily performed in Fourier–Laplace space, with $\mathbf{q}$ being the wavevector conjugated to the monomer distance $\mathbf{r}$ and $t$ the Laplace variable conjugated to the chain length $n$. The Laplace transformed averaged perturbation potential reads

$$u_t \equiv \int_{0}^{\infty} dn\; \langle u_n\rangle_0\, e^{-nt} \tag{10}$$
$$= \int \frac{d^dq}{(2\pi)^d}\, \frac{1}{t^2}\, G(q,t)\,\tilde v(q) \tag{11}$$

where the factor $1/t^2$ accounts for the combinatorics and $G(q,t) = 1/(t+(aq)^2)$ represents the Fourier–Laplace transformed Gaussian propagator [DoiEdwardsBook; papPrshort], with $a$ being a convenient monomeric length.

### 2.2 Effective interaction potential

We have still to specify $\tilde v(q)$, the effective interaction between test chain monomers in reciprocal space. This interaction is partially screened by the background of the monomers of the bath. It has been shown by Edwards [DoiEdwardsBook; BJSOBW07] that within linear response this corresponds to

$$\frac{1}{\tilde v(q)} = \frac{1}{v} + F_0(q)\,\rho. \tag{12}$$

The bare excluded volume $v$ indicated in the first term of Eq. (12) characterizes the short-range repulsion between the monomers. Thermodynamic consistency requires [ANS05a; BJSOBW07; WCK09] that $v$ is proportional to the inverse of the measured compressibility of the solution,

$$v = \frac{1}{g\rho} \equiv \frac{1}{2\rho}\left(\frac{a}{\xi}\right)^2 \tag{13}$$

where we have defined the screening length $\xi$ following, e.g., eq 5.38 of Ref. [DoiEdwardsBook]. Interestingly, the number of monomers $g$ spanning the blob, i.e., the lower bound of validity of various statements made in the Introduction, can be determined experimentally or in a computer simulation from the low-wavevector limit of the total monomer structure factor and, due to this operational definition, is sometimes called "dimensionless compressibility" [WCK09; papPrshort]. $F_0(q)$ stands for the ideal chain intramolecular structure factor of the given distribution $P(N)$ of the bath. The effective interaction, Eq. (12), thus depends in general on the length distribution of the melt the test chain is inserted into. For Flory distributed melts the structure factor is, e.g., given by [BJSOBW07]

$$F_0(q) = \frac{2}{(aq)^2 + \mu} \tag{14}$$

while for monodisperse melts it reads $F_0(q) = N f_D\big((aq)^2 N\big)$, with $f_D$ being Debye's function [DoiEdwardsBook]. We remind the reader that within the Padé approximation for monodisperse chains Eq. (14) holds with $\mu$ replaced by $2/N$ [DoiEdwardsBook]. Below we will focus on incompressible solutions ($v \to \infty$), where the effective interaction is given by the inverse structure factor, $\tilde v(q) = 1/(\rho F_0(q))$; i.e. we will ignore local physics on scales smaller than the correlation length $\xi$ and assume that both the test chain and the chains of the bath are larger than $g$. The effective potential takes simple forms at low and high wavevectors corresponding, respectively, to distances much larger or much smaller than the typical size of the bath chains. In the low-wavevector limit the potential becomes $\tilde v(q\to 0) = 1/(\rho F_0(0))$ for general $P(N)$, i.e. $\mu/(2\rho)$ for Flory distributed and $1/(\rho N)$ for monodisperse melts. Long test chains are ruled by this limit, which acts as a weak repulsive pseudo-potential with an associated (bare) Fixman parameter growing as $\sqrt{n}$. As already recalled in the Introduction, test chains with $n \gg \langle N \rangle^2$ [foot_PN] must thus swell and obey excluded volume statistics [DegennesBook; SchaferBook]. The effective potential of incompressible melts becomes scale free for larger wavevectors corresponding to the self-similar random walks,

$$\tilde v(q) \approx \frac{(aq)^2}{2\rho} \quad \text{for } 1/q \ll b\,\langle N \rangle^{1/2}, \tag{15}$$

i.e. the interactions decrease as a power law with distance, and this irrespective of the length distribution $P(N)$. One expects that short test chains with $n \ll \langle N \rangle$ see an interaction potential of effectively infinite bath chains as described by Eq. (15). Please note that Eq. (15) lies at the heart of the power-law swelling of chain segments, Eq. (4), and related properties alluded to above [WMBJOMMS04; BJSOBW07; WBM07; papPrshort].
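A small numerical sketch (ours, with placeholder parameter values for $a$, $\rho$, $\mu$ and $g$) of the screened potential, Eqs. (12)–(14), illustrating the crossover to the scale-free form of Eq. (15) once $\mu \ll (aq)^2$:

```python
import numpy as np

a, rho, mu, g = 1.0, 1.0, 1e-3, 0.25   # placeholder values
v0 = 1.0 / (g * rho)                   # bare excluded volume, Eq. (13)

def F0(q):                             # Flory-melt structure factor, Eq. (14)
    return 2.0 / ((a * q) ** 2 + mu)

def v_eff(q):                          # screened interaction, Eq. (12)
    return 1.0 / (1.0 / v0 + F0(q) * rho)

for q in (0.03, 0.1, 0.3):             # compare with Eq. (15): (aq)^2 / (2 rho)
    print(q, v_eff(q), (a * q) ** 2 / (2.0 * rho))
```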
### 2.3 Ultraviolet divergency

Coming back to the computation of Eq. (11), one realizes that a naive perturbation calculation using the effective interaction given in Eq. (12) is formally diverging at high wavevectors in three dimensions (becoming regular only below $d=2$) due to the monomer self-interactions, which should be subtracted. Using Eq. (15) instead of Eq. (12) even makes things worse due to an additional divergency associated with the self-interactions of the blobs whose size was set to zero ($\xi \to 0$). However, since we are not interested in (possibly diverging) contributions linear in the length of the test chain or independent of it, we can freely subtract linear terms (i.e., terms $\propto 1/t^2$ in Laplace space) or constant terms (i.e., terms $\propto 1/t$) to regularize and to simplify $u_t$. Such a transformation leads to

$$u_t = \int\frac{d^dq}{(2\pi)^d}\,\frac{1}{t^2}\;\frac{2 F_0^{-1}(q) - (aq)^2 - t}{(aq)^2 + t}\;\frac{v}{2\,(v\rho + F_0^{-1}(q))} + \ldots \tag{16}$$

where "$\ldots$" stands for the linear and constant contributions we do not compute. Converging now for incompressible melts in three dimensions, the latter reformulation will prove useful below. (See Section 2.6 for the complete regularization of the ultraviolet divergency for incompressible three-dimensional melts.)

### 2.4 Flory-distributed melts

Applying Eq. (16) to incompressible Flory distributed melts leads to

$$u_t = \frac{1}{2\rho}\,\frac{\mu - t}{t^2}\int\frac{d^dq}{(2\pi)^d}\, G(q,t) + \ldots \tag{17}$$
$$= \frac{1}{2\rho}\left(\frac{\mu}{t^2} - \frac{1}{t}\right) G(\mathbf{r}=0, t) + \ldots \tag{18}$$

where we have read Eq. (17) as an inverse Fourier transform taken at $\mathbf{r}=0$. Remembering that a factor $1/t$ in $t$-space stands for an integral in $n$-space, the inverse Laplace transform of Eq. (18) can be expressed in terms of integrals of the return probability $G(\mathbf{r}=0, n) = (4\pi n)^{-d/2} a^{-d}$. We obtain, hence, in $n$-space

$$\delta\mu_c(n) = \frac{\Gamma(1-d/2)}{2\,(4\pi)^{d/2}\rho a^d}\left[-\frac{n^{1-d/2}}{\Gamma(2-d/2)} + \frac{\mu\, n^{2-d/2}}{\Gamma(3-d/2)}\right] \tag{19}$$

where $\delta\mu_c(n)$ stands for the non-extensive contribution to $\langle u_n\rangle_0$. Note that the first term in the brackets scales as the correlation hole in dimension $d$. Its marginal dimension is $d=2$. The second term characterizes the effective two-body interaction of the test chain with itself. As one expects [DegennesBook], its marginal dimension is $d=4$. Although Eq. (19) is formally obtained for $d<2$, it applies to higher dimensions by analytic continuation. In three dimensions Eq. (19) becomes

$$\delta\mu_c(n) = \frac{1}{(4\pi)^{3/2}}\,\frac{1}{\rho a^3}\,\big(n^{-1/2} - 2\mu\, n^{1/2}\big) \tag{20}$$

which demonstrates finally the non-extensive correction to the ideal polymer chain chemical potential announced in Eq. (6), with amplitude $c_\mu \equiv 1/\big((4\pi)^{3/2}\rho a^3\big)$, and sketched in Fig. 2 by the bold solid line. (A slightly different demonstration is given below in Section 2.6.) As anticipated above, the first term in Eq. (20) dominates for short test chains. It is independent of the polydispersity and scales as the correlation hole potential, Eq. (5). The second term dominates for large test chains with $n \gg \langle N \rangle$, becoming non-perturbative for $n \approx \langle N \rangle^2$. Please note that these extremely long chains are essentially absent in the Flory bath but may be introduced on purpose. Both contributions to $\delta\mu_c(n)$ decrease with increasing monomer density $\rho$. They correspond to an effective enhancement factor $e^{-\delta\mu_c(n)}$ of the partition function, quite similar to the factor $n^{\gamma-1}$ in standard excluded volume statistics, with $\gamma$ being the self-avoiding walk susceptibility exponent [DegennesBook]. Interestingly, while $\delta\mu_c(n)$ decreases with $n$ at fixed $\mu$, it increases (towards zero from below) as $-c_\mu\sqrt{\mu}$ with growing $\langle N \rangle$ for a test chain with $n = 1/\mu$, as shown by the thin solid line in Fig. 2. The chemical potential of typical chains of the bath thus approaches the Gaussian limit from below.
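For concreteness, Eq. (20) is trivial to evaluate; the following sketch (ours, with placeholder values for $\rho$, $a$ and $\mu$) shows the sign change of $\delta\mu_c(n)$ at $n = 1/(2\mu)$:

```python
import numpy as np

rho, a, mu = 1.0, 1.0, 1e-3                   # placeholder values
c_mu = 1.0 / ((4.0 * np.pi) ** 1.5 * rho * a ** 3)   # amplitude of Eq. (20)

def delta_mu(n):                              # Eq. (20)
    return c_mu * (n ** -0.5 - 2.0 * mu * n ** 0.5)

for n in (10, 100, 500, 1000):                # sign change at n = 1/(2*mu) = 500
    print(n, delta_mu(n))
```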
### 2.5 Equilibrium polymers

Flory distributed polymer melts are obtained naturally in systems of self-assembled linear equilibrium polymers where branching and the formation of closed rings are forbidden [CC90; WMC98b; foot_FloryHuggins]. Since the suggested correction to the ideal chain chemical potential is weak, the system must remain to leading order Flory distributed and Eq. (6) should thus hold [foot_annealed]. Using Eq. (7) one obtains directly the corrected length distribution for equilibrium polymers announced in Eq. (8). Note that Eq. (8) is properly normalized, i.e. the prefactor $\mu$ of the distribution remains exact if $\delta\mu_c(N)$ is given by Eq. (6). Since the distribution becomes broader, the first moment increases slightly at given $\mu$:

$$\langle N \rangle = \mu^{-1}\big(1 + c_\mu\sqrt{\mu\pi}\big). \tag{21}$$

More generally, one expects for the $p$-th moment

$$\langle N^p \rangle = \frac{p!}{\mu^p}\left(1 + c_\mu\sqrt{\mu}\;\frac{2\Gamma(p+3/2) - \Gamma(p+1/2)}{p!}\right) \tag{22}$$

with $\Gamma$ being the Gamma function [abramowitz]. The non-exponentiality parameter $K_p \equiv 1 - \langle N^p\rangle/(p!\,\langle N\rangle^p)$ should thus scale as

$$K_p = w_p\, c_\mu\sqrt{\mu} \tag{23}$$

with $w_p$ being a $p$-dependent geometrical factor. Eq. (23) will be tested numerically in Section 4.

### 2.6 Incompressible melts in three dimensions

It is instructive to recover Eq. (20) directly in three dimensions. For that purpose we may subtract $1/(aq)^2$ from the propagator in Eq. (16), which amounts to taking off a linear and a constant contribution. Taking the incompressible limit this yields

$$u_t = \int\frac{d^3q}{(2\pi)^3}\,\frac{1}{t^2}\;\frac{2/F_0(q) - (aq)^2 - t}{t + (aq)^2}\;\frac{-t}{(aq)^2}\;\frac{1}{2\rho} + \ldots \tag{24}$$

for a general structure factor $F_0(q)$. Assuming Eq. (14) for $F_0(q)$ we obtain by straightforward integration over momentum $u_t = \big(t^{-1/2} - \mu\, t^{-3/2}\big)/(8\pi\rho a^3)$. After taking the inverse Laplace transform this confirms Eq. (20). Interestingly, the inverse Laplace transform of the general formula, Eq. (24), can be performed, leading to

$$\delta\mu_c(n) = \frac{1}{2\rho}\int\frac{d^3q}{(2\pi)^3}\,\frac{1}{(aq)^2}\Big(e^{-n(aq)^2} - f_c(q,n)\Big) = \frac{c_\mu}{\sqrt n} - \frac{1}{2\rho}\int\frac{d^3q}{(2\pi)^3}\,\frac{f_c(q,n)}{(aq)^2} \tag{25}$$

where the kernel $f_c(q,n)$ collects the dependence on the structure factor of the bath. The first term in Eq. (25) corresponds to the infinite bath chain limit ($\langle N \rangle \to \infty$), which does not depend on the length distribution $P(N)$. The integral over $f_c$ stands for finite-$\langle N \rangle$ corrections for larger test chains.

### 2.7 Monodisperse melts

We turn now to incompressible monodisperse melts in three dimensions. As already mentioned above, the Debye function for monodisperse melts can be approximated by the structure factor of Flory distributed melts, Eq. (14), replacing $\mu$ by $2/N$. It follows thus from Eq. (6) that within the Padé approximation we expect

$$\delta\mu_c(n) \approx \frac{c_\mu}{\sqrt n}\,(1 - 4x) \tag{26}$$

for $g \ll n$, with $x \triangleq n/N$. This is indicated by the dash-dotted line in Fig. 2. If the test chain and the bath chains are of equal length, $n = N$, this leads to $\delta\mu_c(n) = -3\,c_\mu/\sqrt n$, i.e. $\mu_c$ approaches again its asymptotic limit from below (thin dash-dotted line). The calculation of the chemical potential deviations for the full Debye function can be performed taking advantage of Eq. (25). Specializing the formula to the monodisperse case this yields, after some simple transformations,

$$\delta\mu_c(n) = \frac{c_\mu}{\sqrt n}\,\big(1 - I_c(x)\big) \tag{27}$$

where the finite-$N$ correction is expressed by the integral

$$I_c(x) \equiv \sqrt{\frac{x}{\pi}}\int_0^\infty \underline{\left(\frac{2}{f_D(y)} - y\right)}\,\left(\frac{1 - e^{-xy}}{y}\right)\frac{dy}{\sqrt y}.$$

Eq. (27) can be evaluated numerically, as shown in Fig. 2, where the bold dashed line corresponds to a variation of $n$ at constant $N$ and the thin dashed line to a test chain of the same length as the chains of the bath, $n = N$.
Both lines are bounded by the predictions for Flory distributed melts and the Padé approximation of monodisperse chains. The evaluation of Eq. (27) deserves some comments. The underlined bracket under the integral defines a slowly varying function of $y$, decreasing from $2$ to $1$ as $y$ increases from $0$ to infinity. Without this slow factor the integral can be scaled: it is proportional to $x$ and evaluates to $4x$, in agreement with the correction term obtained for Flory distributed polymers, Eq. (6), with $\mu$ replaced by $2/N$. The integral is mostly built up by the region where $xy$ is of order unity. For large $x$ only small $y$ contribute to the integral; the underlined term in the integral can be replaced by $2$ and we obtain asymptotically $I_c(x) \approx 4x$, in agreement with the Padé approximation, Eq. (26). If the first subdominant contribution to the integral is also computed one gets the leading correction to this $4x$ behaviour for large $x$. For small test chains the integral provides only the first correction to the infinite-bath limit, i.e., it vanishes for $x \to 0$, as already noted. In short, we recover the known asymptotics for short and long test chains, but the crossover is very sluggish. The simple Padé approximation, Eq. (26), is notably off in the crossover region where $x$ is of order unity. Note finally that if the test chain is a chain of the bath ($x=1$) one evaluates numerically $I_c(1) \approx 3.19$. We obtain thus

$$\delta\mu_c(n) = -2.19\,\frac{c_\mu}{\sqrt n} \tag{28}$$

as indicated by the thin dashed line.

## 3 Algorithmic issues

The theoretical predictions derived above should hold in any sufficiently dense polymer solution assuming that the chains are not too short. Since the direct measurement of the chemical potential of monodisperse chains (discussed in Section 2.7) requires a delicate thermodynamic integration [FrenkelSmitBook; MP94; WCK09], we test the theoretical framework by computing numerically the length distribution in systems of annealed equilibrium polymers [foot_annealed]. The presented configuration ensembles have been obtained using the well-known "bond fluctuation model" (BFM) [BFM; Deutsch; WMC98b] — an efficient lattice Monte Carlo scheme where a coarse-grained monomer occupies 8 lattice sites on a simple cubic lattice (i.e., the volume fraction is $8\rho$ for a number density $\rho$ of monomers per lattice site) and bonds between monomers can vary in length and direction. All length scales are given in units of the lattice constant. Systems with an annealed size distribution are obtained by attributing a finite scission energy $E$ to each bond, which has to be paid whenever the bond between two monomers is broken. Standard Metropolis Monte Carlo is used to reversibly break and recombine the chains [WMC98b; HXCWR06]. Branching and formation of closed rings are forbidden. Only local hopping moves have been used, since the breaking and recombination of chains reduce the relaxation times dramatically compared to monodisperse systems [HXCWR06]. We only present data for one high density where half of the lattice sites are occupied ($\rho = 1/16$). It has been shown [WBM07; WCK09] that for this density the dimensionless compressibility $g$ is small, i.e. the system may be regarded as incompressible on all scales and the lower bound of validity of the theory is irrelevant, and that the swelling coefficient is known; hence the amplitude $c_\mu$, the only parameter of the theory tested here, is known. We use large periodic simulation boxes. The scission energy has been increased systematically, which increases the mean chain length $\langle N \rangle$ accordingly. The configurations used here have already been tested and analyzed in previous publications discussing the non-ideal behavior of configurational intrachain properties [WBCHCKM07; WBJSOMB07; BJSOBW07; papPrshort].
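As an aside to the numerical evaluation of Eq. (27) discussed in Section 2.7, a minimal quadrature sketch (ours, not from the paper) is given below; it should reproduce the limits quoted in the text, $I_c(x) \to 0$ for $x \to 0$, $I_c(x) \approx 4x$ for large $x$, and $I_c(1) \approx 3.19$:

```python
import numpy as np
from scipy.integrate import quad

def f_debye(y):                          # Debye function, computed stably
    return 2.0 * (np.expm1(-y) + y) / y ** 2   # = 2(exp(-y) - 1 + y)/y^2

def I_c(x):                              # the integral of Eq. (27)
    integrand = lambda y: (2.0 / f_debye(y) - y) * (1.0 - np.exp(-x * y)) / y ** 1.5
    return np.sqrt(x / np.pi) * quad(integrand, 0.0, np.inf, limit=200)[0]

for x in (0.01, 0.1, 1.0, 10.0):
    print(x, I_c(x), 4.0 * x)            # compare with the Pade limit 4x
```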
## 4 Computational results

The main panel of Figure 3 presents the normalized length distribution $P(N)$ for different scission energies $E$ as indicated. A nice data collapse is apparently obtained if $P(N)$ is plotted as a function of the reduced chain length $N/\langle N \rangle$ using the measured mean chain length $\langle N \rangle$. At first sight, there is no sign of deviation from the exponential decay indicated by the solid line. The mean chain length itself is given in panel (b) as a function of $E$ together with some higher moments of the distribution. As indicated by the dashed line, we find $\langle N \rangle \propto e^{E/2}$, as expected from standard linear aggregation theory [CC90; WMC98b; foot_FloryHuggins]. The data presented in the first two panels of Fig. 3 are thus fully consistent with older computational work [WMC98b; HXCWR06], which had led us to believe that Flory's ideality hypothesis holds rigorously. Closer inspection of the histograms reveals, however, deviations for small $N$. As can be seen from panel (c), the probability for short chains is reduced with respect to the Flory distribution indicated by the solid line. This depletion agrees, at least qualitatively, with the predicted positive deviation of the chemical potential, Eq. (5). The curvature of $\log P(N)$, i.e. the non-extensive deviation of the chemical potential from Flory's ideality hypothesis, is further analyzed in Figure 4. Motivated by Eq. (7), we present in panel (a) the functional

$$V[P(N)] \equiv -\log(P(N)) - \mu N + \log(\mu) \tag{29}$$

where the second term takes off the ideal contribution to the chemical potential. The last term is due to the normalization of $P(N)$ and eliminates a trivial vertical shift depending on the scission energy $E$. Consistently with Eq. (21), the chemical potential per monomer has been obtained from the measured mean chain length using

$$\mu \equiv \langle N \rangle^{-1}\big(1 + c_\mu\sqrt{\pi}/\sqrt{\langle N \rangle}\big). \tag{30}$$

Note that $\mu$ and $1/\langle N \rangle$ become numerically indistinguishable for large $\langle N \rangle$. If the Gaussian contribution to the chemical potential is properly subtracted, one expects to obtain directly the non-Gaussian deviation of the chemical potential, $V[P(N)] = \delta\mu_c(N)$. Due to Eq. (6) the functional should thus scale as

$$\frac{V[P(N)]}{c_\mu\sqrt{\mu}} \approx \frac{1-2x}{\sqrt x} \tag{31}$$

with $x \equiv \mu N$, as indicated by the bold line in the panel. This is well borne out by the data collapse obtained up to $x$ of order unity. Obviously, the statistics deteriorates for larger $x$ for all energies due to the exponential cut-off of $P(N)$. Unfortunately, the statistics of the length histograms decreases strongly with increasing scission energy and becomes too low for a meaningful comparison at the largest energies. It is essentially for this numerical reason that we use Eq. (30) rather than the simple large-$\langle N \rangle$ limit $\mu = 1/\langle N \rangle$, since this allows us to add the two histograms of the largest scission energies for which high-precision data are available. Otherwise these energies would deviate from Eq. (31) for large $x$ due to an insufficient subtraction of the leading Gaussian contribution to the chemical potential. Thus we have used to some extent in panel (a) the predicted behavior, Eq. (6), presenting, strictly speaking, a (highly non-trivial) self-consistency check of the theory. Since the subtraction of the large linear Gaussian contribution is in any case a delicate issue, we present in panel (b) of Fig. 4 a second functional,

$$W[P(N)] \equiv 2V[P(N)] - V[P(2N)] = \log\big[P(2N)\,\mu\,/\,P^2(N)\big], \tag{32}$$

where by construction this contribution is eliminated, following a suggestion made recently by Semenov and Johner [ANS03]. The normalization factor $\mu$ appearing in Eq. (32) eliminates again a weak vertical scission-energy dependence of the data. Obviously, $W[P(N)] = 0$ for perfectly Flory distributed chains. Following Eq. (7) one expects $W[P(N)] = 2\delta\mu_c(N) - \delta\mu_c(2N)$, and due to Eq. (6)

$$\frac{W[P(N)]}{c_\mu\sqrt{\mu}\,(2 - 1/\sqrt 2)} \approx \frac{1 - 0.906\,x}{\sqrt x} \tag{33}$$

with $x \equiv \mu N$.
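The moment predictions can also be checked directly by quadrature over the corrected distribution, Eq. (8). The sketch below is ours: the amplitude `c` is a placeholder (not the BFM value), and the definition $K_p \equiv 1 - \langle N^p\rangle/(p!\,\langle N\rangle^p)$ is one plausible reading of Eqs. (22)–(23). It verifies that the non-exponentiality parameter scales as $\sqrt{\mu}$:

```python
import numpy as np
from scipy.integrate import quad
from math import factorial

c = 0.1                                   # placeholder amplitude c_mu

def moment(mu, p):
    # p-th moment of Eq. (8), written in the scaled variable u = mu*N
    f = lambda u: u ** p * np.exp(-u) * (1.0 - c * np.sqrt(mu / u) * (1.0 - 2.0 * u))
    return quad(f, 0.0, np.inf)[0] / mu ** p

for mu in (1e-2, 1e-3, 1e-4):
    m1 = moment(mu, 1)
    K2 = 1.0 - moment(mu, 2) / (factorial(2) * m1 ** 2)
    print(mu, K2, K2 / np.sqrt(mu))       # K2/sqrt(mu) ~ constant, cf. Eq. (23)
```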
Eq. (33) is indicated by the bold line, which compares again rather well with the presented data. The functionals presented in Fig. 4 require histograms with very high accuracy. That $P(N)$ is only approximately Flory distributed can be more readily seen using the "non-exponentiality parameter" $K_p$, which measures how the moments deviate from the Flory distribution. Obviously, $K_p = 0$ for rigorously Gaussian chains. As stated in Eq. (23), we expect the non-exponentiality parameter to decay as $K_p \sim 1/\sqrt{\langle N \rangle}$, i.e. as the correlation hole potential of the typical melt chain. The main panel of Fig. 5 presents $K_p$ as a function of $\langle N \rangle$ using double-logarithmic axes. The predicted power-law decay is clearly demonstrated by the data. Note that the scaling of the vertical axis with the $p$-dependent geometrical factors $w_p$ allows us to bring all moments onto the same master curve. As can be seen from the inset of Fig. 5 this scaling is significant, since $w_p$ varies over nearly a decade between the smallest and the largest moment sampled. Deviations from the predicted scaling are visible, not surprisingly, for small $\langle N \rangle$.

## 5 Discussion

#### Summary.

Challenging Flory's ideality hypothesis, we have investigated in this study the scaling of the chemical potential $\mu_c(n)$ of polymer chains with respect to the length $n$ of a tagged test chain plugged into a solution of $N$-chains of a given length distribution $P(N)$, with $\langle N \rangle$ being the typical length of the chains of the bath. By means of one-loop perturbation calculations we have demonstrated, for $n \ll \langle N \rangle^2$, the existence of a non-extensive deviation $\delta\mu_c(n)$ with respect to the Gaussian reference. This correction becomes universal for small reduced test chain lengths, $x = n/\langle N \rangle \ll 1$, scaling as $+1/(\rho\sqrt n)$ irrespective of the length distribution, as suggested by the "correlation hole potential" (Fig. 1(a)). For larger $x$ the correction depends somewhat on $P(N)$, as explicitly discussed for Flory distributed [Eq. (6)] and monodisperse melts [Eq. (27)], but remains generally a monotonously decreasing function of $n$ scaling as

$$\delta\mu_c(n) \approx \frac{1}{\rho\sqrt n}\,\big(1 - I_c(x)\big) \quad\text{with}\quad I_c(x) \to 0 \ \text{for } x \ll 1 \tag{34}$$

changing sign at $x$ of order unity (Fig. 2). For the important limit of a test chain of the same length as the typical chain of the bath, $n = \langle N \rangle$, the deviation from Flory's hypothesis decreases in magnitude with increasing chain length. For Flory distributed or monodisperse chains $\delta\mu_c(\langle N \rangle) < 0$, and the asymptotic limit, $\mu_c(n) = \mu n$, is thus approached from below [Eq. (28)]. Note that our predictions are implicit to the theoretical framework put forward by Edwards [DoiEdwardsBook] or Schäfer [SchaferBook], but to the best of our knowledge they have not been stated explicitly before. We have confirmed the theory by analyzing in Section 4 the length distribution of essentially Flory distributed equilibrium polymers obtained for different scission energies by Monte Carlo simulation of the BFM at one melt density. Albeit the deviations from Flory's hypothesis are small (Fig. 3(a,b)), they can be demonstrated by analyzing the functionals $V$ and $W$ as shown in Fig. 4, or from the scaling of the non-exponentiality parameter, $K_p \sim 1/\sqrt{\langle N \rangle}$, for all moments sampled (Fig. 5). We emphasize that the data collapse on the theoretical predictions, Eqs. (31), (33), (23), has been achieved without any free adjustable parameter, since the coefficient $c_\mu$ is known.

#### Outlook.

Clearly, the presented study begs for a direct numerical verification of the suggested non-extensive chemical potential for a test chain inserted into a melt of monodisperse chains, Eq. (26). In principle, this should be feasible by thermodynamic integration using multihistogram methods as proposed in [MP94]. In particular, this may allow us to improve the numerical test of the theory for large $x$; due to the exponential cut-off [Eq. (8)] this regime has been difficult to explore using the equilibrium polymer length distribution (Fig. 4).
Another interesting testing bed for the proposed correlation hole effect are polymer melts confined in thin films of width $H$ [CMWJB05]. A logarithmically decreasing non-extensive chemical potential contribution has been predicted for these effectively two-dimensional systems [ANS03; Sergei81]. The non-exponentiality parameter of equilibrium polymers confined in thin films should thus decay rather slowly with chain length. This is in fact confirmed qualitatively by the numerical results presented in Fig. 6, obtained using again the BFM algorithm with finite scission energy described above. Note that the smallest film width allowing the overlap of monomers and the crossing of chains corresponds to an increase of the non-exponentiality parameter by nearly a decade for the largest chains we have sampled. The detailed scaling with $H$ is, however, far from obvious. Larger mean chain lengths and better statistics are warranted to probe the logarithmic behavior for asymptotically long chains predicted by Semenov and Johner [ANS03]. Note that, if confirmed, this prediction should influence the phase diagrams of polymer blends in reduced effective dimensions. Finally, we would like to point out that the presented perturbation calculation for dense polymer chains may also be of relevance to the chemical potential of dilute polymer chains at and around the $\theta$-point, which has received attention recently [Rubinstein08]. The reason for this connection is that (apart from different prefactors) the same effective interaction potential, Eq. (15), enters the perturbation calculation in the low-wavevector limit. A non-extensive correction in three dimensions is thus to be expected.

###### Acknowledgements.

We thank the Université de Strasbourg, the CNRS, and the ESF-STIPOMAT programme for financial support. We are indebted to S.P. Obukhov and A.N. Semenov for helpful discussions.

## References

• (1) P.G. de Gennes, Scaling Concepts in Polymer Physics (Cornell University Press, Ithaca, New York, 1979)
• (2) M. Doi, S.F. Edwards, The Theory of Polymer Dynamics (Clarendon Press, Oxford, 1986)
• (3) L. Schäfer, Excluded Volume Effects in Polymer Solutions (Springer-Verlag, New York, 1999)
• (4) We suppose throughout this paper that $P(N)$ is a realistic polymer length distribution which is not too broad. All moments exist and are of the same order. Obviously, all moments of monodisperse melts of length $N$ become $\langle N^p \rangle = N^p$.
• (5) E. Nikomarov, S. Obukhov, Sov. Phys. JETP 53, 328 (1981)
• (6) M. Cates, S. Candau, J. Phys.: Condens. Matter 2, 6869 (1990)
• (7) J.P. Wittmer, A. Milchev, M.E. Cates, J. Chem. Phys. 109, 834 (1998)
• (8) The chain length distribution is obtained by minimizing a Flory-Huggins free energy functional
$$f[\rho_N] = \sum_N \rho_N\big(\log(\rho_N) + \mu N + E + \delta\mu_c(N)\big)$$
with respect to the density $\rho_N$ of chains of length $N$. The first term on the right is the usual translational entropy. The second term entails a Lagrange multiplier $\mu$ which fixes the total monomer density $\rho$. All contributions to the chemical potential of the chain which are linear in $N$ can be absorbed within the Lagrange multiplier. The scission energy $E$ characterizes the enthalpic free energy cost for breaking a chain bond. The most crucial last term encodes the remaining non-linear contribution to the chemical potential which has to be paid for creating two new chain ends. A rigorously Flory distributed length distribution thus implies $\delta\mu_c(N) = 0$.
• (9) A.N. Semenov, A. Johner, Eur. Phys. J. E 12, 469 (2003)
• (10) J.P. Wittmer, H. Meyer, J. Baschnagel, A. Johner, S.P. Obukhov, L. Mattioni, M. Müller, A.N. Semenov, Phys. Rev. Lett. 93, 147801 (2004)
• (10) J.P. Wittmer, H. Meyer, J. Baschnagel, A. Johner, S.P. Obukhov, L. Mattioni, M. Müller, A.N. Semenov, Phys. Rev. Lett. 93, 147801 (2004)
• (11) J.P. Wittmer, P. Beckrich, A. Johner, A.N. Semenov, S.P. Obukhov, H. Meyer, J. Baschnagel, Europhys. Lett. 77, 56003 (2007)
• (12) P. Beckrich, A. Johner, A.N. Semenov, S.P. Obukhov, H.C. Benoît, J.P. Wittmer, Macromolecules 40, 3805 (2007)
• (13) J.P. Wittmer, P. Beckrich, H. Meyer, A. Cavallo, A. Johner, J. Baschnagel, Phys. Rev. E 76, 011803 (2007)
• (14) H. Meyer, J.P. Wittmer, T. Kreer, P. Beckrich, A. Johner, J. Farago, J. Baschnagel, Eur. Phys. J. E 26, 25 (2008)
• (15) J.P. Wittmer, A. Cavallo, T. Kreer, J. Baschnagel, A. Johner, J. Chem. Phys. 131, 064901 (2009)
• (16) J.P. Wittmer, A. Johner, S.P. Obukhov, H. Meyer, A. Cavallo, J. Baschnagel, Macromolecules (2009)
• (17) A. Cavallo, M. Müller, J.P. Wittmer, A. Johner, J. Phys.: Condens. Matter 17, S1697 (2005)
• (18) J.P. Wittmer, P. Beckrich, F. Crevel, C.C. Huang, A. Cavallo, T. Kreer, H. Meyer, Comp. Phys. Comm. 177, 146 (2007)
• (19) I. Carmesin, K. Kremer, Macromolecules 21, 2819 (1988)
• (20) A.N. Semenov, S.P. Obukhov, J. Phys.: Condens. Matter 17, 1747 (2005)
• (21) The chemical potential of a chain does depend on the length distribution of the melt, Eq. (25). For an infinite, macroscopically homogeneous system it is independent, however, of whether this distribution is annealed or quenched, i.e. of whether it is allowed to fluctuate or not. This follows from the well-known behavior of fluctuations of extensive parameters in macroscopic systems: the relative fluctuations vanish as $1/\sqrt{V}$ as the total volume $V \to \infty$. The latter limit is taken first in our calculations, i.e. we consider an infinite number of (annealed or quenched) chains. The large chain length limit is then taken afterwards to increase the range of the scale-free effective interaction potential, Eq. (15).
• (22) M. Abramowitz, I.A. Stegun, Handbook of Mathematical Functions (Dover, New York, 1964)
• (23) D. Frenkel, B. Smit, Understanding Molecular Simulation – From Algorithms to Applications (Academic Press, San Diego, 2002), 2nd edition
• (24) M. Müller, W. Paul, J. Chem. Phys. 100, 719 (1994)
• (25) H. Deutsch, K. Binder, J. Chem. Phys. 94, 2294 (1991)
• (26) C.C. Huang, H. Xu, F. Crevel, J. Wittmer, J.P. Ryckaert, Reaction kinetics of coarse-grained equilibrium polymers: a Brownian study, in Computer Simulations in Condensed Matter: from Materials to Chemical Biology (Springer, Lect. Notes Phys., International School of Solid State Physics, Berlin/Heidelberg, 2006), Vol. 704, pp. 379–418
• (27) D. Shirvanyants, S. Panyukov, Q. Liao, M. Rubinstein, Macromolecules 41, 1475 (2008)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9541034698486328, "perplexity": 1863.7199160404252}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439740929.65/warc/CC-MAIN-20200815154632-20200815184632-00314.warc.gz"}
https://www.physicsforums.com/threads/logic-gates-problem-need-to-check-if-my-answers-are-correct.204901/
Homework Help: Logic gates problem, need to check if my answers are correct

1. Dec 16, 2007, rock.freak667

[SOLVED] Logic gates problem... Need to check if my answers are correct.

1. The problem statement, all variables and given/known data
http://img132.imageshack.us/img132/8899/19367959nt9.jpg

2. Relevant equations

3. The attempt at a solution
Now I wasn't sure how many inputs there are, so I just used 2; but I see there are 4, and if there are 4 then there are 16 combinations, so I do not know if my answers are correct. But with 4 inputs the gate H is an XOR gate, I believe. For 2 inputs I do not think there is a single gate for H.

2. Dec 17, 2007, rl.bhat

Check the calculation for C=0, D=1.

3. Dec 17, 2007, Petkovsky

I think that you made a mistake calculating G.
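Since the circuit image above is broken, here is only a generic sketch (added for illustration) of the check being discussed: enumerating all 16 combinations of 4 inputs. The gate functions F, G, H below are hypothetical stand-ins, not the actual wiring of the problem.

```python
from itertools import product

# Hypothetical stand-in gates; replace with the real circuit's logic.
F = lambda a, b: a and b          # e.g. an AND stage
G = lambda c, d: not (c or d)     # e.g. a NOR stage
H = lambda f, g: f != g           # e.g. an XOR combining the two stages

print(" A B C D | F G H")
for a, b, c, d in product((0, 1), repeat=4):   # all 16 input combinations
    f, g = F(a, b), G(c, d)
    print(f" {a} {b} {c} {d} | {int(f)} {int(g)} {int(H(f, g))}")
```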
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8592647314071655, "perplexity": 752.0263272036728}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376825029.40/warc/CC-MAIN-20181213171808-20181213193308-00179.warc.gz"}
https://brilliant.org/problems/square-roots-14/
# Square roots

$\large \frac{1}{\sqrt{x}}+\frac{1}{\sqrt{y}}=\frac{1}{\sqrt{20}}$

Find the number of ordered pairs of positive integers $(x,y)$ satisfying the above equation.
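The page gives no solution, so here is an exact brute-force check added for illustration. It uses the observation that if $x \le y$ then $2/\sqrt{x} \ge 1/\sqrt{20}$, i.e. $x \le 80$, so only the smaller coordinate needs to be scanned; a floating-point guess for $y$ is then confirmed with exact rational arithmetic.

```python
from fractions import Fraction
from math import isqrt, sqrt

def is_solution(x: int, y: int) -> bool:
    # Squaring 1/sqrt(x) + 1/sqrt(y) = 1/sqrt(20) gives
    # 2/sqrt(x*y) = 1/20 - 1/x - 1/y, so x*y must be a perfect square
    # and the rational identity below must hold exactly.
    r = Fraction(1, 20) - Fraction(1, x) - Fraction(1, y)
    if r <= 0:
        return False
    s = isqrt(x * y)
    return s * s == x * y and Fraction(2, s) == r

pairs = []
for x in range(21, 81):                 # smaller coordinate is at most 80
    guess = 1 / sqrt(20) - 1 / sqrt(x)
    y = round(guess ** -2)              # float guess, verified exactly below
    if y >= x and is_solution(x, y):
        pairs.append((x, y))

print(pairs)                                      # [(45, 180), (80, 80)]
print(sum(1 if x == y else 2 for x, y in pairs))  # count of ordered pairs
```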
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8968071341514587, "perplexity": 297.90183036856587}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886739.5/warc/CC-MAIN-20180116204303-20180116224303-00137.warc.gz"}
http://tex.stackexchange.com/questions/44734/tikz-positioning-in-matrices
# Tikz positioning in matrices

Due to the help of some posts on StackExchange (mostly [1] and [2]) I think that I got the gist of categorizing matrix rows and columns with braces and separating them with lines in TikZ. However, I can't get the referencing of the positions in the draw command to work, strangely enough only within certain matrices. The examples in the links (4x4 matrices) compile flawlessly, but the bigger matrix below doesn't. (I only exchanged the smaller matrices in the examples with the bigger matrix.)

\begin{tikzpicture}[decoration=brace,every left delimiter/.style={xshift=3pt},
    every right delimiter/.style={xshift=-3pt},node distance=-1ex]
\matrix [matrix of math nodes,left delimiter=(,right delimiter=),
    row sep=0.5cm,column sep=0.5cm] (m)
{
x & x & \otimes & x & x & x & & & & & & & \\
  & & & \otimes & x & & x & x & & x & & & \\
x & & x & & \otimes & & & & & & & & \\
  & & & & & \otimes & x & & & & & x & \\
  & & & & & & \otimes & x & & & & x & \\
  & & & & & & x & \otimes & & x & & & \\
  & & & & & & & & \otimes & x & & & \\
  & & & & & & & & x & \otimes & x & & \\
  & & & & & & & & & & \otimes & x & \\
  & & & & & & & & & & & \otimes & \\
  & & & & & & & & & & x & & \\
  & & & & & & & & & & x & x & \\
  & & & & & & & & & & & x & \\
};
\draw[dashed] ($0.5*(m-1-2.north east)+0.5*(m-1-3.north west)$) --
    ($0.5*(m-4-2.south east)+0.5*(m-4-3.south west)$);
\draw[dashed] ($0.5*(m-2-1.south west)+0.5*(m-3-1.north west)$) --
    ($0.5*(m-2-4.south east)+0.5*(m-3-4.north east)$);
\node[above=10pt of m-1-1] (top-1) {a};
\node[above=10pt of m-1-2] (top-2) {b};
\node[above=10pt of m-1-3] (top-3) {c};
\node[above=10pt of m-1-4] (top-4) {d};
\node[left=12pt of m-1-1] (left-1) {$\alpha$};
\node[left=12pt of m-2-1] (left-2) {$\beta$};
\node[left=12pt of m-3-1] (left-3) {$\gamma$};
\node[left=12pt of m-4-1] (left-4) {$\delta$};
\node[rectangle,above delimiter=\{] (del-top-1) at ($0.5*(top-1.south)+0.5*(top-2.south)$)
    {\tikz{\path (top-1.south west) rectangle (top-2.north east);}};
\node[above=10pt] at (del-top-1.north) {$A$};
\node[rectangle,above delimiter=\{] (del-top-2) at ($0.5*(top-3.south)+0.5*(top-4.south)$)
    {\tikz{\path (top-3.south west) rectangle (top-4.north east);}};
\node[above=10pt] at (del-top-2.north) {$B$};
\node[rectangle,left delimiter=\{] (del-left-1) at ($0.5*(left-1.east)+0.5*(left-2.east)$)
    {\tikz{\path (left-1.north east) rectangle (left-2.south west);}};
\node[left=10pt] at (del-left-1.west) {$C$};
\node[rectangle,left delimiter=\{] (del-left-2) at ($0.5*(left-3.east)+0.5*(left-4.east)$)
    {\tikz{\path (left-3.north east) rectangle (left-4.south west);}};
\node[left=10pt] at (del-left-2.west) {$D$};
\end{tikzpicture}

Specifically, I get the error message "! Package pgf Error: No shape named m-4-2 is known.", which occurs in the first line where the dashed line is drawn. However, the shapes named m-1-3 and m-1-2 are known. I am out of my wits here, as I can't make a connection between the errors and the code that could be wrong.

Add the option nodes in empty cells to the matrix, to be able to refer to those nodes (TikZ only creates a named node for a cell with content, so empty cells like m-4-2 otherwise have no node):

\matrix [matrix of math nodes, nodes in empty cells,
    left delimiter=(,right delimiter=),
    row sep=0.5cm,column sep=0.5cm] (m)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9582780003547668, "perplexity": 708.7687439137278}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997858892.28/warc/CC-MAIN-20140722025738-00143-ip-10-33-131-23.ec2.internal.warc.gz"}
https://physics.stackexchange.com/questions/102158/question-about-the-vacuum-bundle-on-a-and-b-model
# Question about the vacuum bundle on A- and B-model

Let us consider the topological string A- and B-model (twisted SUSY non-linear sigma model on a CY 3-manifold $X$). They are realizations of $N=2$ SCFTs, and there are a ground-states vector bundle $\mathcal{H}$ and a vacuum line bundle $\mathcal{L}$ over the moduli space of the theory. In the A-model, $\mathcal{H}=H^{even}(X,\mathbb{C})$ and $\mathcal{L}=H^0(X,\mathbb{C})$. In the B-model, $\mathcal{H}=H^{3}(X,\mathbb{C})$ and $\mathcal{L}=H^{3,0}(X,\mathbb{C})$. The genus $g$ string amplitude is given as a section of $\mathcal{L}$ in either theory. Mirror symmetry is an identification of the A- and B-model ground-state geometry on distinct CY 3-manifolds.

My question is the following. In the A-model, it seems the splitting of the bundle
$$\mathcal{H}_A=H^{even}(X,\mathbb{C})=\oplus_{i=0}^3 H^{2i}(X,\mathbb{C})$$
does not vary over the moduli space of the theory (Kahler moduli space). On the other hand, in the B-model, the splitting
$$\mathcal{H}=H^{3}(X,\mathbb{C})=\oplus_{p+q=3}H^{p,q}(X,\mathbb{C})$$
varies over the moduli space (variation of Hodge structure). Moreover, $\mathcal{L}$ is the trivial line bundle in the A-model, while it is not in the B-model. Isn't this a contradiction?

Saying that a splitting varies over the moduli space is not completely well defined: you have to say how to identify the total spaces at different points of the moduli space, i.e. to specify a flat connection on the bundle of total spaces. In the B-model, if you take the Gauss-Manin connection as the flat connection, then the Hodge splitting varies over the moduli space (because the Gauss-Manin connection does not preserve the splitting in general). In the A-model, if you take the trivial connection as the flat connection, then the splitting does not vary over the moduli space (the trivial connection preserves the degree decomposition). But it is not the trivial connection which appears in mirror symmetry on the A-model side; it is a flat connection which is the trivial one corrected by contributions of holomorphic world-sheet instantons (i.e. Gromov-Witten invariants), and this connection does not preserve the degree decomposition in general.

About the vacuum line bundle: on the A-model sigma model side, $1 \in H^{0}$ gives a natural trivialization. But the sigma model description is generally only valid in some limit of the moduli space, some cusp which is topologically a punctured polydisk. In particular, any complex line bundle is trivial in restriction to this domain, and this is also the case for the vacuum line bundle of the B-model. Deep inside the moduli space, the topology can be complicated and the vacuum bundle of the B-model can be non-trivial, but it is also the case for the A-model, which no longer has a sigma model description and so no longer a "1" to trivialize $\mathcal{L}$.

(Remark: the genus $g$ string amplitude is a section of $\mathcal{L}^{2-2g}$ and not $\mathcal{L}$.)

• Thanks for the detailed answer. I now see the point: the VHS on the B-side corresponds to a non-trivial connection on the A-side. Also, the sigma model description is valid only around the special points. – Mathematician Mar 7 '14 at 23:10
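As background not stated in the thread (a schematic, textbook-level formula, not anything specific to this discussion): the instanton-corrected flat structure on the A-side can be written as the quantum connection
$$\nabla^{A}_{i} = \partial_{t_i} + \phi_i \star_q\,,$$
where the $\phi_i$ run over degree-two classes and $\star_q$ is the small quantum product, whose structure constants are generating functions of genus-0 Gromov-Witten invariants. Switching off all instanton contributions turns $\star_q$ back into the ordinary cup product and $\nabla^A$ into the trivial connection, which does preserve the degree decomposition.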
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9409745931625366, "perplexity": 401.7492531872769}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670921.20/warc/CC-MAIN-20191121153204-20191121181204-00224.warc.gz"}
http://www.ms.u-tokyo.ac.jp/seminar/past_e_77.html
Seminar information archive

Lectures 17:00-18:00, Room #122 (Graduate School of Math. Sci. Bldg.)
Sunder Sethuraman (University of Arizona)
A KPZ equation for zero-range interactions (ENGLISH)
[ Abstract ] We derive a type of KPZ equation, in terms of a martingale problem, as a scaling limit of fluctuation fields in weakly asymmetric zero-range processes. Joint work (in progress) with Milton Jara and Patricia Goncalves.

Lectures 11:00-15:30, Room #128 (Graduate School of Math. Sci. Bldg.)
S. Harase et al. (Tokyo Institute of Technology/JSPS)
Workshop for Quasi-Monte Carlo and Pseudo Random Number Generation (ENGLISH)

2012/06/12

Tuesday Seminar on Topology 16:30-18:00, Room #056 (Graduate School of Math. Sci. Bldg.)
Takefumi Nosaka (RIMS, Kyoto University, JSPS)
Topological interpretation of the quandle cocycle invariants of links (JAPANESE)
[ Abstract ] Carter et al. introduced many quandle cocycle invariants combinatorially constructed from link diagrams. For connected quandles of finite order, we give a topological meaning to the invariants, up to some torsion parts. Precisely, this invariant equals the sum of the "knot colouring polynomial" and a Z-equivariant part of the Dijkgraaf-Witten invariant. Moreover, our approach has applications to computing "good" torsion subgroups of the third quandle homologies and the second homotopy groups of rack spaces.

Lie Groups and Representation Theory 16:30-18:00, Room #126 (Graduate School of Math. Sci. Bldg.)
Toshihisa Kubo (the University of Tokyo)
Conformally invariant systems of differential operators of non-Heisenberg parabolic type (ENGLISH)
[ Abstract ] The wave operator in Minkowski space is a classical example of a conformally invariant differential operator. Recently, the notion of conformality of one operator has been generalized by Barchini-Kable-Zierau to systems of differential operators. Such systems yield homomorphisms between generalized Verma modules. In this talk we build such systems of second-order differential operators in the maximal non-Heisenberg parabolic setting. If time permits, we will discuss the corresponding homomorphisms between generalized Verma modules.

Lectures 09:50-17:10, Room #118 (Graduate School of Math. Sci. Bldg.)
Josef Dick et al. (Univ. New South Wales)
Workshop for Quasi-Monte Carlo and Pseudo Random Number Generation (ENGLISH)

2012/06/11

Kavli IPMU Komaba Seminar 16:30-18:00, Room #002 (Graduate School of Math. Sci. Bldg.)
Changzheng Li (Kavli IPMU)
Quantum cohomology of flag varieties (ENGLISH)
[ Abstract ] In this talk, I will give a brief introduction to the quantum cohomology of flag varieties first. Then I will introduce a Z^2-filtration on the quantum cohomology of complete flag varieties. In the end, we will study the quantum Pieri rules for complex/symplectic Grassmannians, as applications of the Z^2-filtration.

Seminar on Geometric Complex Analysis 10:30-12:00, Room #126 (Graduate School of Math. Sci. Bldg.)
Damian BROTBEK (University of Tokyo)
Differential forms on complete intersections (ENGLISH)
[ Abstract ] Brückmann and Rackwitz proved a vanishing result for particular types of differential forms on complete intersection varieties. We will be interested in the cases not covered by their result. In some cases, we will show how the space $H^0(X,S^{m_1}\Omega_X\otimes \cdots \otimes S^{m_k}\Omega_X)$ depends on the equations defining $X$, and in particular we will prove that the theorem of Brückmann and Rackwitz is optimal.
The proofs are based on simple, combinatorial cohomology computations.

2012/06/09

Harmonic Analysis Komaba Seminar 13:30-17:00, Room #128 (Graduate School of Math. Sci. Bldg.)
Yasuo Furuya (Tokai University), 13:30-15:00
Recent topics on the Cauchy integrals (the works of Muscalu and others) (JAPANESE)

Tsukasa Iwabuchi (Chuo University), 15:30-17:00
Ill-posedness for the nonlinear Schrödinger equations in one space dimension (JAPANESE)
[ Abstract ] In this talk, we consider the Cauchy problems for the nonlinear Schrödinger equations. In particular, we study the ill-posedness by showing that the continuous dependence on initial data does not hold. In the known results, Bejenaru-Tao (2006) considered the problem in the Sobolev spaces $H^s(\mathbb{R})$ and showed the ill-posedness when $s < -1$. In this talk, we study the ill-posedness in the Besov space for one space dimension and in the Sobolev spaces for two space dimensions.

2012/06/08

GCOE lecture series 14:00-15:30, Room #118 (Graduate School of Math. Sci. Bldg.)
Mihnea Popa (University of Illinois at Chicago)
Derived categories and cohomological invariants II (ENGLISH)
[ Abstract ] (Abstract for both Parts I and II) I will discuss results on the derived invariance of various cohomological quantities, like the Hodge numbers, a twisted version of Hochschild cohomology, the Picard variety, and cohomological support loci. I will include a small discussion of current work on orbifolds if time permits.

Kavli IPMU Komaba Seminar 16:30-18:00, Room #002 (Graduate School of Math. Sci. Bldg.)
Bong Lian (Brandeis University)
Period Integrals and Tautological Systems (ENGLISH)
[ Abstract ] We develop a global Poincaré residue formula to study period integrals of families of complex manifolds. For any compact complex manifold $X$ equipped with a linear system $V^*$ of generically smooth CY hypersurfaces, the formula expresses period integrals in terms of a canonical global meromorphic top form on $X$. Two important ingredients of this construction are the notion of a CY principal bundle, and a classification of such rank one bundles. We also generalize the construction to CY and general type complete intersections. When $X$ is an algebraic manifold having a sufficiently large automorphism group $G$ and $V^*$ is a linear representation of $G$, we construct a holonomic D-module that governs the period integrals. The construction is based in part on the theory of tautological systems we have developed earlier. The approach allows us to explicitly describe a Picard-Fuchs type system for complete intersection varieties of general type, as well as CY, in any Fano variety, and in a homogeneous space in particular. In addition, the approach provides a new perspective on old examples such as CY complete intersections in a toric variety or partial flag variety. The talk is based on recent joint work with R. Song and S.T. Yau.

2012/06/05

Tuesday Seminar on Topology 16:30-18:00, Room #056 (Graduate School of Math. Sci. Bldg.)
Yusuke Kuno (Tsuda College)
A generalization of Dehn twists (JAPANESE)
[ Abstract ] We introduce a generalization of Dehn twists for loops which are not necessarily simple loops on an oriented surface. Our generalization is an element of a certain enlargement of the mapping class group of the surface. A natural question is whether a generalized Dehn twist is in the mapping class group. We show some results related to this question. This talk is partially based on a joint work with Nariya Kawazumi (Univ. Tokyo).
GCOE lecture series 16:30-18:00, Room #126 (Graduate School of Math. Sci. Bldg.)
Yves Benoist (CNRS, Orsay)
Random walk on reductive groups II (ENGLISH)
[ Abstract ] The asymptotic behavior of the sum of real numbers chosen independently with the same probability law is controlled by many classical theorems: Law of Large Numbers, Central Limit Theorem, Law of the Iterated Logarithm, Local Limit Theorem, Large Deviation Principle, 0-1 Law, ... In these introductory talks I will recall these classical results and explain their analogs for products of matrices chosen independently with the same probability law, when the action of the support of the law is semisimple. We will see that the dynamics of the corresponding action on the flag variety is a crucial tool for studying these non-commutative random walks.

Lie Groups and Representation Theory 16:30-18:00, Room #126 (Graduate School of Math. Sci. Bldg.)
Yves Benoist (CNRS and Orsay)
Random walk on reductive groups (ENGLISH)
[ Abstract ] The asymptotic behavior of the sum of real numbers chosen independently with the same probability law is controlled by many classical theorems: Law of Large Numbers, Central Limit Theorem, Law of the Iterated Logarithm, Local Limit Theorem, Large Deviation Principle, 0-1 Law, ... In these introductory talks I will recall these classical results and explain their analogs for products of matrices chosen independently with the same probability law, when the action of the support of the law is semisimple. We will see that the dynamics of the corresponding action on the flag variety is a crucial tool for studying these non-commutative random walks.

2012/06/04

Algebraic Geometry Seminar 15:30-17:00, Room #122 (Graduate School of Math. Sci. Bldg.)
Kiwamu Watanabe (Saitama University)
Smooth P1-fibrations and Campana-Peternell conjecture (ENGLISH)
[ Abstract ] We give a complete classification of smooth P1-fibrations over projective manifolds of Picard number 1, each of which admits another smooth morphism of relative dimension one. Furthermore, we consider relations of the result with the Campana-Peternell conjecture on Fano manifolds with nef tangent bundle.

Seminar on Geometric Complex Analysis 10:30-12:00, Room #126 (Graduate School of Math. Sci. Bldg.)
Sachiko HAMANO (Fukushima University)
Log-plurisubharmonicity of metric deformations induced by Schiffer and harmonic spans (JAPANESE)

2012/06/01

GCOE lecture series 14:00-15:30, Room #118 (Graduate School of Math. Sci. Bldg.)
Mihnea Popa (University of Illinois at Chicago)
Derived categories and cohomological invariants I (ENGLISH)
[ Abstract ] (Abstract for both Parts I and II) I will discuss results on the derived invariance of various cohomological quantities, like the Hodge numbers, a twisted version of Hochschild cohomology, the Picard variety, and cohomological support loci. I will include a small discussion of current work on orbifolds if time permits.

2012/05/31

Seminar on Probability and Statistics 14:50-16:05, Room #006 (Graduate School of Math. Sci. Bldg.)
SEI, Tomonari (Department of Mathematics, Keio University)
Holonomic gradient methods for likelihood computation (JAPANESE)
[ Reference URL ] http://www.ms.u-tokyo.ac.jp/~kengok/statseminar/2012/05.html

2012/05/30

Number Theory Seminar 16:40-17:40, Room #056 (Graduate School of Math. Sci. Bldg.)
Valentina Di Proietto (University of Tokyo)
Kernel of the monodromy operator for semistable curves (ENGLISH)
[ Abstract ] For a semistable curve, we study the action of the monodromy operator on the first log-crystalline cohomology group. In particular we examine the relation between the kernel of the monodromy operator and the first rigid cohomology group, in the case of trivial coefficients, giving a new proof of a theorem of B. Chiarellotto, and in the case of certain unipotent F-isocrystals as coefficients. This is joint work in progress with B. Chiarellotto, R. Coleman and A. Iovita.

Lectures 14:50-16:20, Room #123 (Graduate School of Math. Sci. Bldg.)
Harald Niederreiter (RICAM, Austrian Academy of Sciences)
Low-discrepancy sequences and algebraic curves over finite fields (III) (ENGLISH)
[ Reference URL ] http://www.ms.u-tokyo.ac.jp/~matumoto/WORKSHOP/workshop2012.html

2012/05/29

Tuesday Seminar on Topology 16:30-18:00, Room #056 (Graduate School of Math. Sci. Bldg.)
Inasa Nakamura (Gakushuin University, JSPS)
Triple linking numbers and triple point numbers of torus-covering $T^2$-links (JAPANESE)
[ Abstract ] The triple linking number of an oriented surface link was defined as an analogue of the linking number of a classical link. A torus-covering $T^2$-link $\mathcal{S}_m(a,b)$ is a surface link in the form of an unbranched covering over the standard torus, determined from two commutative $m$-braids $a$ and $b$. In this talk, we consider $\mathcal{S}_m(a,b)$ when $a$, $b$ are pure $m$-braids ($m \geq 3$), which is a surface link with $m$ components. We present the triple linking number of $\mathcal{S}_m(a,b)$ by using the linking numbers of the closures of $a$ and $b$. This gives a lower bound on the triple point number. In some cases, we can determine the triple point numbers, each of which is a multiple of four.

GCOE lecture series 16:30-18:00, Room #126 (Graduate School of Math. Sci. Bldg.)
Yves Benoist (CNRS, Orsay)
Random walk on reductive groups (ENGLISH)
[ Abstract ] The asymptotic behavior of the sum of real numbers chosen independently with the same probability law is controlled by many classical theorems: Law of Large Numbers, Central Limit Theorem, Law of the Iterated Logarithm, Local Limit Theorem, Large Deviation Principle, 0-1 Law, ... In these introductory talks I will recall these classical results and explain their analogs for products of matrices chosen independently with the same probability law, when the action of the support of the law is semisimple. We will see that the dynamics of the corresponding action on the flag variety is a crucial tool for studying these non-commutative random walks.

Lectures 14:50-16:20, Room #123 (Graduate School of Math. Sci. Bldg.)
Harald Niederreiter (RICAM, Austrian Academy of Sciences)
Low-discrepancy sequences and algebraic curves over finite fields (II) (ENGLISH)
[ Reference URL ] http://www.ms.u-tokyo.ac.jp/~matumoto/WORKSHOP/workshop2012.html

2012/05/28

Seminar on Geometric Complex Analysis 10:30-12:00, Room #126 (Graduate School of Math. Sci. Bldg.)
Shinichi TAJIMA (University of Tsukuba)
Local cohomology and hypersurface isolated singularities II (JAPANESE)
[ Abstract ] We discuss: the Tjurina numbers of $\mu$-constant deformations; the structure and construction of logarithmic vector fields; and Kouchnirenko's formula for Newton non-degenerate hypersurfaces.

Algebraic Geometry Seminar 15:30-17:00, Room #122 (Graduate School of Math. Sci. Bldg.)
Mihnea Popa (University of Illinois at Chicago)
Generic vanishing and linearity via Hodge modules (ENGLISH)
[ Abstract ] I will explain joint work with Christian Schnell, in which we extend the fundamental results of generic vanishing theory (for instance for the canonical bundle of a smooth projective variety) to bundles of holomorphic forms and to rank one local systems, where parts of the theory have eluded previous efforts. To achieve this, we bring all of the old and new results under the same roof by enlarging the scope of generic vanishing theory to the study of filtered D-modules associated to mixed Hodge modules. Besides Saito's vanishing and direct image theorems for Hodge modules, an important input is the Laumon-Rothstein Fourier transform for bundles with integrable connection.

Lectures 14:50-16:20, Room #123 (Graduate School of Math. Sci. Bldg.)
Harald Niederreiter (RICAM, Austrian Academy of Sciences)
Low-discrepancy sequences and algebraic curves over finite fields (I) (ENGLISH)
[ Abstract ] This is the second of the four lectures. The first one is the Colloquium talk on May 25th, 16:30-17:30, at Room 002. Abstract from the Colloquium: Quasi-Monte Carlo (QMC) methods are deterministic analogs of statistical Monte Carlo methods in computational mathematics. QMC methods employ evenly distributed low-discrepancy sequences instead of the random samples used in Monte Carlo methods. For many types of computational problems, QMC methods are more efficient than Monte Carlo methods. After a general introduction to QMC methods, the talk focuses on the problem of constructing low-discrepancy sequences, which has fascinating links with subjects such as finite fields, error-correcting codes, and algebraic curves.
[ Reference URL ] http://www.ms.u-tokyo.ac.jp/~matumoto/WORKSHOP/workshop2012.html
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8384242653846741, "perplexity": 2755.1327797915064}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247479729.27/warc/CC-MAIN-20190216004609-20190216030609-00493.warc.gz"}
https://www.physicsforums.com/threads/vectors-and-the-menelaus-theorem.92745/
# Vectors and the Menelaus Theorem

1. Oct 6, 2005, ronblack2003

Given 3 non-zero vectors A, B and C in 3-dimensional space which are non-coplanar. It is easy to show that there exist real constants m, p and n such that (A+mB), (B+pC) and (C+nA) are coplanar, implying mnp = -1. It seems to me that there should be a natural way of using this result to easily prove the direct Theorem of Menelaus. Can anyone help?

2. Oct 7, 2005, Tzar

I have never heard of that theorem!!! What is it?

3. Oct 7, 2005, HallsofIvy

http://www.ies.co.jp/math/java/vector/menela/menela.html
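As a quick machine check of the claim in post 1 (a sketch added here, not from the thread; it assumes sympy is available): expressing A+mB, B+pC and C+nA in the basis (A, B, C), which is legitimate since A, B, C are non-coplanar, the three vectors are coplanar exactly when the coefficient matrix below is singular.

```python
import sympy as sp

m, p, n = sp.symbols('m p n')
# Rows are the coefficients of A+mB, B+pC, C+nA in the (A, B, C) basis.
M = sp.Matrix([[1, m, 0],
               [0, 1, p],
               [n, 0, 1]])
print(sp.expand(M.det()))   # m*n*p + 1
```

The determinant is $1 + mnp$, so setting it to zero reproduces the condition $mnp = -1$ from the opening post.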
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9537126421928406, "perplexity": 4825.308509032999}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128321458.47/warc/CC-MAIN-20170627152510-20170627172510-00254.warc.gz"}
https://map5.mi.parisdescartes.fr/seminairesMAP5/exposes/gerard-ben-arous-courant-institute-of-mathematical-sciences-nyu/?id=1996467836&ajaxCalendar=1&mo=3&yr=2023
# Effective dynamics and critical scaling for Stochastic Gradient Descent in high dimensions

Friday, May 13, 2022, 2:00 pm - 3:00 pm

Joint work with Reza Gheissari (UC Berkeley) and Aukosh Jagannath (Waterloo)

SGD is a workhorse for optimization, and thus for statistics and machine learning, and it is well understood in low dimensions. But understanding its behavior in very high dimensions is not yet a simple task. We study here the limiting effective dynamics of some summary statistics for SGD in high dimensions, and find interesting new regimes, i.e. regimes different from the one predicted by the usual wisdom, namely the population gradient flow. We find that a new corrector term is needed and that the phase portrait of these dynamics is quite complex and substantially different from what would be predicted using the classical low-dimensional approach, including for simple tasks.
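For reference (added here, not part of the abstract): the iteration in question is online SGD, $\theta_{k+1} = \theta_k - \eta \nabla_\theta \ell(\theta_k; x_k)$, with a fresh sample $x_k$ at every step. A toy least-squares sketch, where the dimension, step size, and model are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
d, eta = 100, 0.01                       # dimension and step size (arbitrary)
theta_star = rng.normal(size=d) / np.sqrt(d)
theta = np.zeros(d)

for _ in range(20_000):                  # online SGD: one fresh sample per step
    x = rng.normal(size=d)
    err = x @ theta - x @ theta_star     # residual of a noiseless linear model
    theta -= eta * err * x               # gradient of the loss 0.5 * err**2
print(float(np.linalg.norm(theta - theta_star)))   # should be near 0
```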
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8362216353416443, "perplexity": 795.1551624791811}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943746.73/warc/CC-MAIN-20230321193811-20230321223811-00644.warc.gz"}
https://fr.maplesoft.com/support/help/maple/view.aspx?path=DEBUG&L=F
DEBUG - Maple Programming Help

DEBUG
The Maple debugger breakpoint function

Calling Sequence
DEBUG(arg1, arg2, ...)

Parameters
argN - (optional) any expression

Description
• The DEBUG function has the effect of a breakpoint in Maple code. When the DEBUG function is executed, execution stops before the next statement is executed, and the debugger function is invoked. If the debugger function returns anything other than NULL, the returned value is evaluated, and the debugger function is reinvoked.
• The stopat function (and the stopat debugger command) set breakpoints by inserting a call to the DEBUG function before the statement at which the breakpoint is set. Such breakpoints can be removed using the unstopat function (or the unstopat debugger command).
• A breakpoint can be inserted explicitly into the source code of a Maple procedure by inserting a call to the DEBUG function. Such breakpoints cannot be removed using the unstopat function (or the unstopat debugger command).
• If the DEBUG function is passed any parameters, they are passed on to the debugger function for display when it is invoked. If no parameters are passed, the result of the previous computation is passed to the debugger function instead.
• The DEBUG function returns NULL so as not to affect %, %%, and %%%. Therefore, inserting it as the last statement of a procedure will hide the return value of the procedure. It is not possible to use stopat to set a breakpoint after the last statement in a procedure.
• The DEBUG command is thread-safe as of Maple 15.

> f := proc(x, y)
      local a;
      a := x^2;
      DEBUG();        # execution stops here, before "a := y^2"
      a := y^2;
      DEBUG(Hello);   # Hello is passed to the debugger for display
      a := (x+y)^2
  end proc:

> f(2, 3);
                                   25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8523607850074768, "perplexity": 2642.740628103779}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540530857.12/warc/CC-MAIN-20191211103140-20191211131140-00351.warc.gz"}
https://math.stackexchange.com/questions/2581231/a-differentiable-approximation-of-modulus/2581265
# A differentiable approximation of modulus?

I'm trying to find a differentiable approximation of the "fract" function, which returns the fractional portion of a real number:

$y = x - \lfloor x \rfloor$

I have something that works "ok", which I got by adapting a bandlimited saw wave:

$y = 0.5 - \frac{\sin(2\pi x) + \sin(4\pi x)/2 + \sin(6\pi x)/3 + \sin(8\pi x)/4 + \sin(10\pi x)/5}{\pi}$

I can add more harmonics to make the bandlimited saw wave closer to the actual "fract" function, but for my use case, all these trig function calls are getting pretty expensive. I was curious: are there other (better quality / lower computational complexity) ways to differentiably approximate this function?

• If you know Fourier series, you can probably run a low pass filter on it, for example. – mathreadler Dec 26 '17 at 22:22
• Do you need only differentiability, that is, a continuous first derivative, or do you need a smooth function? – lisyarus Dec 26 '17 at 22:22
• As $\frac{d}{dx}(x - \lfloor x\rfloor)$ is almost everywhere defined (and equals 1 where it is defined), for some applications it could be taken as exactly 1 everywhere. What is the application? – arseniiv Dec 26 '17 at 22:26
• I only need a continuous first derivative. I'm using this for part of a process I'm going to be doing gradient descent on. – Alan Wolfe Dec 26 '17 at 22:49
• @AlanWolfe What are you trying to minimize? I think you're going to have serious trouble using gradient descent on anything with the fractional part function (that is, the function - or any approximation thereof - is inherently badly natured for that kind of method) – Milo Brandt Dec 27 '17 at 3:16

A function like $$x-\frac{x^n}{1+x^n}$$ approximates $x-\lfloor x \rfloor$ well on the interval $[0, 2]$ for large $n$. If we take it on the interval $\left[\frac12, \frac32\right]$ and periodically repeat it, we get a nice almost differentiable approximation. $x^n$ can be efficiently calculated using the binary exponentiation algorithm (thus it would be handy if $n=2^k$).

NB: I said almost differentiable, since at the boundary points the derivatives on the two sides are $\approx 1 - \frac{n}{2^{n-1}}$ and $1-\frac{n}{(3/2)^{n+1}}$, so, if $n$ is suitably large, the derivative is $\approx 1$ from a practical perspective.

A live example on desmos.

Assume $x\to f(x)$ is the discontinuous function you want to make smoother/more regular. Then, for example, the local mean value integral $$F(x) = \frac{1}{2\Delta_x}\int_{x-\Delta_x}^{x+\Delta_x}f(\varphi)\,d\varphi \hspace{1cm} \text{(local averaging)}$$ will be differentiable for any $\Delta_x\in \mathbb{R}^+$ (why?). You can estimate this as a discrete sum (low pass filter) or you can calculate an explicit expression for it analytically as a continuous time convolution, since $f(x)$ is so nice in this example. A slightly smoother and more complicated one is obtained if we iterate it: $$F_2(x) = \frac{1}{2\Delta_x}\int_{x-\Delta_x}^{x+\Delta_x}F(\varphi)\,d\varphi \hspace{1cm} \text{(linear interpolation)}$$

Let $\{ x \}$ mean $x - \lfloor x \rfloor$. Given your goal of having the derivative be cheaply computable by computer, you should use a piecewise defined function: $$f(x) = \begin{cases} \{ x \} & \epsilon \leq \{ x \} \leq 1 - \epsilon \\ g(x) & 0 \leq \{ x \} < \epsilon \\ h(x) & 1 - \epsilon < \{ x \} < 1 \end{cases}$$ where $g$ and $h$ are any easily computed functions (e.g. quadratic polynomials) that satisfy the conditions listed below, and $\epsilon$ is a small positive number.
The point being that for most values of $x$, you have $f'(x) = 1$, so there is very little work in computing the derivative. The conditions for $f$ to be differentiable are:

• $g(0) = h(1)$
• $g(\epsilon) = \epsilon$
• $h(1-\epsilon) = 1 - \epsilon$
• $g'(0) = h'(1)$
• $g'(\epsilon) = 1$
• $h'(1 - \epsilon) = 1$

For symmetry, you probably want $h(x) = 1 - g(1-x)$. Then the conditions reduce to

• $g(0) = 1/2$
• $g(\epsilon) = \epsilon$
• $g'(\epsilon) = 1$

Here is a differentiable one. Just repeat $f$ every unit interval.

$f(x) = x + \dfrac{(\frac12-x)c(1+c)}{(x+c)(1-x+c)}$ for every $x \in [0,1]$, with parameter $c \to 0^+$.

The gradient also tends to $1$ at the midpoint as $c \to 0^+$. The main advantage is that no exponentiation is needed; just pure arithmetic. Example with $c=0.001$.

• Doesn't look infinitely differentiable at integer values. – Hurkyl Dec 27 '17 at 14:19
• @Hurkyl: You are right; it's only once differentiable. I've edited. Now it makes me wonder what is a computationally efficient function that when repeated is infinitely differentiable. – user21820 Dec 27 '17 at 14:57
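To make the first answer's construction concrete, here is a small NumPy sketch added for illustration (the choice $n = 64$ is an arbitrary assumption; any large power works, and powers of two are cheap via binary exponentiation as noted above):

```python
import numpy as np

def smooth_fract(x, n=64):
    # Map x into [1/2, 3/2) and apply g(t) = t - t**n / (1 + t**n).
    # Larger n sharpens the smoothed jump near the integers; at integer
    # x the value is exactly 1/2, the midpoint of the jump.
    t = np.mod(x - 0.5, 1.0) + 0.5
    return t - t**n / (1.0 + t**n)

xs = np.array([0.1, 0.25, 0.5, 0.9, 1.1, 2.75])
print(np.round(smooth_fract(xs), 4))  # close to x - floor(x) away from integers
```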
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9296962022781372, "perplexity": 498.6908478705977}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255251.1/warc/CC-MAIN-20190520001706-20190520023706-00144.warc.gz"}
http://www.ellipsix.net/blog/2013/01/the-coolest-thing-since-absolute-zero.html
ellipsix informatics

2013 Jan 07

## The coolest thing since absolute zero

I'm a sucker for good (or bad) physics puns. And the latest viral physics paper (arXiv preprint) offers endless opportunities for them. It's actually about a system with a negative temperature!

Negative temperature sounds pretty cool, but I have to admit, at first I didn't think this was that big of a deal to anyone except condensed matter physicists. Sure, it could pave the way for some neat technological applications, but that's far in the future. The idea of negative temperature itself is old news among physicists; in fact, this isn't even the first time negative temperatures have been produced in a lab.

But maybe you're not a physicist. Maybe you've never heard about negative temperature. Well, you're in luck, because in this post I'm going to explain what negative temperature means and why this experiment is actually such a hot topic. ⌐■_■

# On Temperature

To understand negative temperature, we have to go all the way back to the basics. What is temperature, anyway? Even if you're not entirely sure of the technical definition, you certainly know it by its feel. Temperature is what distinguishes a day you can walk around in a T-shirt from the day you have to bundle up in a coat. It makes the difference between refreshing lemonade and soothing tea. (If you drink your tea cold or your lemonade hot, I can't help you.) Temperature is the reason you don't put your hand in a fire. Basically, whatever temperature is, it has to allow you to tell things that feel hot apart from things that feel cold.

Now, if you think about some hot objects and some cold objects, like fire and ice, it might seem intuitive that hot things tend to have more energy than cold things. So you could define temperature as the average energy of an object. That definition actually works for the normal objects you interact with in your everyday life; it even works for a lot of less normal objects, like gases in extreme conditions. In fact, in the kinetic theory of gases, temperature can be defined as being proportional to the average kinetic energy of the particles of the gas:

$$\frac{1}{N}\sum_i K_i = c\,k_B T$$

Here $N$ is the number of particles and $K_i$ is their kinetic energy, so $\frac{1}{N}\sum_i K_i$ is the average kinetic energy of the particles. $c$ is a constant specific to the gas, $k_B$ is the Boltzmann constant, and $T$ is the temperature. Something analogous works for many solids and liquids, so in each case you can say that $T \propto \langle K \rangle$: the temperature is related to the average kinetic energy.

But wait! Let's go back and consider that definition of temperature a little more carefully, because that's not the only one we could have come up with. All the hot objects you can think of do have a lot of energy, yes, but they also share a different characteristic: they will transfer a lot of energy to you if you touch them. Similarly, all the cold objects you can think of, in addition to having relatively little average energy, will transfer energy from you if you touch them. It might be hard to understand the difference at first, because for every object you can probably think of, having a lot of energy goes hand in hand with transferring a lot of energy, and similarly for small amounts of energy.

So imagine a magic energy box which has a huge amount of kinetic energy, but for whatever reason, it always keeps that energy to itself. Even if you touch it, it won't let any of its enormous "stockpile" of energy flow to your hand. Would it still be hot? Would it burn you to touch it? As you probably guessed, no, it wouldn't!
Hopefully it makes sense that your sense of temperature is based on how much energy actually reaches your nerves, not how much happens to be sitting around next to them. That's an example of a more general rule which you can apply to all sorts of physical systems: temperature is related to how readily a system transfers energy. Whatever definition we're going to use for temperature, it has to turn out that energy tends to flow from an object with a higher temperature to an object with a lower temperature.

# On Entropy and Multiplicity

There are a lot of different ways you could define temperature that at least seem to satisfy the criterion we've come up with. Showing why most of them either are not general enough, or just don't work, would take a book, so I'm not going to do it here. I'll just explain the definition that does work by way of a little example.

Imagine a colony of eight energy beings that live in two boxes marked out on a grid, as in the picture above. These are not the powerful interdimensional energy beings of science fiction; they're just happy little energy packets who jump around randomly between their grid cells. Each way the energy beings can distribute themselves within some section of the grid — which could be the left box, or the right box, or both, etc. — is called a microstate of that section.

## Microstates and probability: the ergodic principle

Let's start by focusing on the left box. Suppose that at the beginning, there is one energy being in the left box. The picture shows one possible way this could happen, but you don't know whether that's the actual configuration or not; you only know that exactly one energy being is somewhere in that left box. If you wait a little while, eventually one of two things will happen: either the energy being will jump out of the left box, or another energy being will jump into it from the right box. Finding out which box is hotter basically amounts to finding out which of these two options is more likely. If it's the first, that means the left box tends to lose energy to its surroundings, making it hotter than the right box. On the other hand, if it's the second option, that means the left box is cooler than the right one.

Hopefully it will at least seem intuitive that which option is more likely has something to do with how many configurations — microstates — it could end up in. For example, if an energy being jumps into the left box, there are more configurations than if one jumps out of the left box, and that suggests that a jump into the left box is more likely. But I'll hold off on explaining exactly why that suggestion is correct until later. First we have to figure out which option has more possible final microstates.

Take the first option, where the lone energy being in the left box leaves it for the right box. It's easy to figure out how many microstates there are for the left box after this happens: one! With no energy beings, there's only one possible arrangement: every cell in the grid is empty. So the number of microstates of the left box in this case is 1.

Now let's go on to the second option, where an energy being jumps into the left box. This one's a little more complicated, because there are two energy beings that have to place themselves among the 16 grid cells. We can count the possible configurations with a little trick: imagine arranging the grid cells on a line, and then looking only at the edges between them.
Each way of distributing the energy beings among the 16 cells corresponds to one way of ordering the 15 edges and 2 energy beings. And that, in turn, corresponds to one way of choosing 2 out of the 17 (that's 15 + 2) positions to be occupied by energy beings, with the rest being occupied by edges. The number of ways to choose 2 out of 17 positions is

$$\binom{17}{2} = \frac{17 \cdot 16}{2} = 136$$

The 136 microstates make up a macrostate (note the one-letter difference), which is defined as the collection of all microstates with a given amount of energy. I'll denote this macrostate $(2)_L$, because it has two units of energy, and it's a macrostate of the box on the left (L). The number of microstates in the macrostate, in this case 136, is the multiplicity $\Omega$, or in this specific case, $\Omega[(2)_L] = 136$. In the same notation, the macrostate for the first option, with no energy beings in the left box, would be $(0)_L$, and its multiplicity would be $\Omega[(0)_L] = 1$.

If the left box were all there was to this setup, you could look at these two options, and say that there are 136 ways to get the second one and only one way to get the first, so clearly the second option is way more likely. But the left box isn't all there is! There's also a right box, and it can have its own configurations which affect the probabilities.

Consider the final macrostate of the first option, $(0)_L$. In this macrostate, there's only one microstate for the left box, but it comes as a package deal with a macrostate for the right box, $(8)_R$, which has a large number of microstates of its own. We can figure out exactly how many using the same trick as before, but I'll give it as a general formula this time: when you have $n$ grid cells and $q$ energy beings, the number of ways to arrange them is the number of ways to choose $q$ of the $n - 1 + q$ positions to be occupied by energy beings, or

$$\binom{n-1+q}{q}$$

In this case there are 36 cells (so 35 edges) and 8 energy beings, giving 145 million possible configurations:

$$\binom{43}{8} = 145{,}008{,}513$$

And what about the final state in the second option? With two energy beings on the left, there are six on the right. That means the left-box macrostate $(2)_L$ is paired with the right-box macrostate $(6)_R$, which has a multiplicity of

$$\binom{41}{6} = 4{,}496{,}388$$

Clearly, the right box has a lot more microstates in the first option, when it has all eight energy beings, than in the second option, when it has only six. Each microstate of the left box is weighted by the number of microstates in the right box that can occur with it: the one left-box microstate in $(0)_L$ carries a weight of 145 million, whereas the 136 left-box microstates in $(2)_L$ carry a weight of only 4.5 million each.

All this talk about weighted probabilities is starting to get a little complicated, but fortunately, there's a much easier way to think about it. We can put the (macro- or micro-) states of the left box and the states of the right box together, to get states of the combined boxes, and each of those is going to be equally likely. For example, when the left box has two energy beings and the right box has six energy beings, the left box has 136 microstates, and the right box has 4,496,388 microstates, for a combined total of $136 \times 4{,}496{,}388 = 611{,}508{,}768$ microstates for the system as a whole. These constitute the macrostate $(2,6)$ — note that that's a macrostate of both boxes, not just one. Similarly, there are $1 \times 145{,}008{,}513 = 145{,}008{,}513$ microstates where the right box has all eight energy beings, and they constitute the two-box macrostate $(0,8)$. Each microstate is just as likely as any other, so finding two energy beings in the left box is merely about four times as likely as finding none there, not 136 times as likely.
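These counts are easy to reproduce with the stars-and-bars formula just derived; here is a short Python check, added for convenience:

```python
from math import comb

def multiplicity(cells: int, beings: int) -> int:
    # Stars and bars: choose which of the (cells - 1) + beings ordered
    # slots (edges plus energy beings) are occupied by energy beings.
    return comb(cells - 1 + beings, beings)

print(multiplicity(16, 2))    # 136
print(multiplicity(36, 6))    # 4496388
print(multiplicity(36, 8))    # 145008513
# Odds ratio of the two options discussed above:
print(136 * multiplicity(36, 6) / multiplicity(36, 8))   # about 4.2
```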
The assumption that all these individual microstates are equally likely, for an isolated system like the two boxes, is called the ergodic principle, and it underlies basically all of modern statistical mechanics.

## The multiplicity distribution and the Second Law

Time for a recap: what have we learned so far? On average, the probability that a given macrostate will occur is proportional to its multiplicity. So in the example where the energy beings started in the macrostate $(1,7)$, with one in the left box and seven in the right box, they're about four times as likely to transition into the macrostate $(2,6)$ as into $(0,8)$, because $(2,6)$ has a higher multiplicity by a factor of about four.

The general rule to take away from that example should be clear: over time a system tends to work its way toward macrostates with higher multiplicities. This general rule is something you may have heard of before; it's called the Second Law of Thermodynamics, and it's usually stated like this:

The entropy of an isolated system tends to increase.

The entropy is just the logarithm of the multiplicity, times the Boltzmann constant:

$$S = k_B \ln \Omega$$

so as a system works its way toward higher multiplicities, it's also increasing its entropy. Let's think about this in the context of all the possible macrostates. First, here are all the relevant calculations:

| Macrostate (L, R, overall) | Energy (L, R, total) | Multiplicity (L) | Multiplicity (R) | Multiplicity (overall) | Entropy L, R, overall ($10^{-23}$ J/K) |
|---|---|---|---|---|---|
| $(0)_L$, $(8)_R$, $(0,8)$ | 0, 8, 8 | 1 | 145,008,513 | 145,008,513 | 0, 25.95, 25.95 |
| $(1)_L$, $(7)_R$, $(1,7)$ | 1, 7, 8 | 16 | 26,978,328 | 431,653,248 | 3.82, 23.62, 27.45 |
| $(2)_L$, $(6)_R$, $(2,6)$ | 2, 6, 8 | 136 | 4,496,388 | 611,508,768 | 6.78, 21.15, 27.93 |
| $(3)_L$, $(5)_R$, $(3,5)$ | 3, 5, 8 | 816 | 658,008 | 536,934,528 | 9.26, 18.50, 27.75 |
| $(4)_L$, $(4)_R$, $(4,4)$ | 4, 4, 8 | 3,876 | 82,251 | 318,804,876 | 11.41, 15.63, 27.03 |
| $(5)_L$, $(3)_R$, $(5,3)$ | 5, 3, 8 | 15,504 | 8,436 | 130,791,744 | 13.32, 12.48, 25.80 |
| $(6)_L$, $(2)_R$, $(6,2)$ | 6, 2, 8 | 54,264 | 666 | 36,139,824 | 15.05, 8.98, 24.03 |
| $(7)_L$, $(1)_R$, $(7,1)$ | 7, 1, 8 | 170,544 | 36 | 6,139,584 | 16.63, 4.95, 21.58 |
| $(8)_L$, $(0)_R$, $(8,0)$ | 8, 0, 8 | 490,314 | 1 | 490,314 | 18.09, 0, 18.09 |

That data goes into the following graph, which shows the multiplicity of each macrostate. On the horizontal axis is the amount of energy in the left box. Saying that the system of energy beings tends to transition toward higher multiplicity, or higher entropy, is equivalent to saying that, over time, it will work its way up the slope of the graph. In this case, that means:

• When multiplicity is increasing with the energy of the left box, $\partial\Omega_E/\partial E_L > 0$, the left box is cooler than the right box, because it will tend to take on more energy from the right box
• When multiplicity is decreasing with the energy of the left box, $\partial\Omega_E/\partial E_L < 0$, the left box is hotter than the right box, because it will tend to give off more energy to the right box
• When multiplicity is neither increasing nor decreasing with energy, $\partial\Omega_E/\partial E_L = 0$, the two boxes are at the same temperature

So the temperature difference is inversely related to $\partial\Omega_E/\partial E_L$! This will be the basis of the quantitative definition of temperature.

## Temperature

At this point, we know that temperature can be written as some (inverse) function of the slope $\partial\Omega_E/\partial E_O$. Here the subscript O stands for "object", S would stand for "surroundings", and E stands for "everything" (the object and surroundings). In the energy being example, the left box would be the object and the right box would fill the role of the surroundings. But there are a few problems with this.
First of all, we shouldn't have to calculate the multiplicity of everything to figure out whether an object is hotter than its surroundings. In the case of the energy beings in the boxes, it wouldn't be too hard, but what about real objects? Should we have to count the multiplicity of the entire universe to measure a temperature? I don't think so. We should be able to find some alternate definition of temperature that depends only on the properties of the object itself.

OK, fine, so what about making temperature a function of $\partial\Omega_O/\partial E_O$? Unfortunately, there's a problem with this, too, but it's a little more subtle. Remember that the second law of thermodynamics tells us that systems tend to shift toward higher-multiplicity macrostates until they wind up at the peak of the multiplicity graph. When this happens, the object should be at the same temperature as its surroundings, because they're not going to exchange energy anymore. If we defined an object's temperature as being related to $\partial\Omega_O/\partial E_O$, then that means

$$\frac{\partial \Omega_O}{\partial E_O} = \frac{\partial \Omega_S}{\partial E_S}$$

should hold at the multiplicity peak. But it doesn't. We can see this because, at a local maximum of the multiplicity graph, the slope of the graph is zero; that is,

$$\frac{\partial \Omega_E}{\partial E_O} = 0$$

The overall multiplicity is the product of the multiplicities for the object and surroundings, $\Omega_E = \Omega_O \Omega_S$. Plugging that in, we get

$$\frac{\partial \Omega_O}{\partial E_O}\,\Omega_S + \Omega_O\,\frac{\partial \Omega_S}{\partial E_O} = 0$$

And whenever the object's energy changes, the energy of the surroundings changes by the opposite amount, so that total energy is conserved. That means $\partial E_S = -\partial E_O$, so

$$\frac{\partial \Omega_O}{\partial E_O}\,\Omega_S = \Omega_O\,\frac{\partial \Omega_S}{\partial E_S}$$

This isn't the same condition as $\partial\Omega_O/\partial E_O = \partial\Omega_S/\partial E_S$; it's not even equivalent! So $\partial\Omega_O/\partial E_O = \partial\Omega_S/\partial E_S$ does not hold at the peak of the multiplicity graph.

Fortunately, the condition that does hold at the peak suggests a solution. If you divide that last equation through by $\Omega_O \Omega_S$ and move one term to the other side, you get

$$\frac{1}{\Omega_O}\frac{\partial \Omega_O}{\partial E_O} = \frac{1}{\Omega_S}\frac{\partial \Omega_S}{\partial E_S}, \qquad\text{i.e.}\qquad \frac{\partial \ln\Omega_O}{\partial E_O} = \frac{\partial \ln\Omega_S}{\partial E_S}$$

So if we define temperature as being inversely related to $\partial \ln\Omega/\partial E$, everything works! In fact, if we multiply this by the Boltzmann constant, it just becomes $\partial S/\partial E$. That's why entropy is defined the way it is: it goes right into the definition of temperature.

## Negative temperatures

The last thing to do is to figure out just what kind of inverse relationship exists between temperature and the slope $\partial S/\partial E$. In some sense, it doesn't really matter, because picking a different relationship just rescales the temperature differences between different objects; it doesn't change whether something has a higher or lower temperature. You could pick any inverse relationship and invent a way to do thermodynamics with it. In practice, the definition we actually use is

$$\frac{1}{T} = \frac{\partial S}{\partial E}$$

This definition has the advantage that it agrees with the results from kinetic theory, like the ideal gas law. In particular, using this definition, when a substance heats up, the change in its volume is proportional to the change in temperature. This means that you can construct a liquid or gas thermometer with a linear scale.

However, this definition has one interesting quirk. Look at the graph of entropy vs. energy. As long as you're on the left of the peak of the graph, entropy increases with energy, so the temperature, as the reciprocal of the slope, is positive. But to the right of the peak, the entropy decreases with energy, so the slope is negative, and the temperature is negative! This is what it means to have a negative temperature: that as you add energy, the entropy gets smaller and smaller. Whenever the graph of entropy vs. energy has a peak and then drops back down to zero, the temperature will be negative at higher energies.
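To see this definition in action on the energy-being example, here's a small numerical sketch (my own addition, not from the original post). The same peak-then-decline shape shows up in the total entropy of the two boxes as a function of the left box's energy, so the discrete slope, our stand-in for $\partial S/\partial E$, changes sign at the peak:

```python
from math import comb, log

K_B = 1.380649e-23  # Boltzmann constant in J/K

def omega(cells, q):
    # multiplicity of q energy units spread over `cells` grid cells
    return comb(cells - 1 + q, q)

# total entropy S(E_L) of both boxes, with the left box holding E_L of the 8 units
S = [K_B * log(omega(16, q) * omega(36, 8 - q)) for q in range(9)]

for q in range(8):
    slope = S[q + 1] - S[q]  # discrete dS/dE, per unit of energy
    sign = "positive T" if slope > 0 else "negative T"
    print(f"E_L: {q} -> {q + 1}   dS/dE = {slope:+.3e} J/K per unit   ({sign})")
```

The slope stays positive up to the entropy peak at two units in the left box and is negative beyond it, matching the sign flip described above. (The energy unit is left abstract here, since the post never assigns the energy beings a physical energy scale.)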
Because it's kind of strange to have the temperature switch from positive to negative as energy increases, physicists sometimes use an alternative definition,

$$\beta = \frac{1}{k_B T} = \frac{\partial \ln\Omega}{\partial E}$$

This doesn't actually measure temperature, because as you can tell, it's directly, not inversely, related to the slope of the entropy-energy graph. So it might be more accurate to call it "coldness," but for now it just has the unimaginative name of "thermodynamic beta". Anyway, $\beta$ decreases smoothly from infinity to negative infinity as you move through the entire range of energies allowed for the system. In that sense, it's kind of a more natural way to characterize how objects interact thermally. It shows that there's really nothing fundamentally strange about negative temperature; in a sense, it's just a historical accident that the common definition of temperature runs out of numbers (hits infinity) too soon, and using $\beta$ is how we can extend the scale to encompass all possible temperatures. Here's a graph showing how these two quantities behave for our energy beings in boxes:

# Creating Negative Temperature in an Optical Lattice

The paper itself was published a few days ago in Science. It describes how a group of seven German physicists constructed a system with a negative temperature by placing potassium atoms in an optical trap. Optical traps, or more precisely optical lattices, are a common device in low-temperature physics experiments. Basically, an optical lattice is a standing electromagnetic wave created by shining two laser beams through each other in opposite directions. You can do this in more than one dimension to get a 2D or 3D trap. The interference between the laser beams creates a periodic potential energy function, a series of hills and valleys that can trap low-temperature particles in a given location in space. By adjusting the phases, relative angles, and strengths of the laser beams, you can do all sorts of manipulations on the lattice, changing around the locations of the potential minima and trapping or releasing atoms during the course of the experiment.

For this particular experiment, the energy of a particle in the lattice can be calculated from this expression, a Bose-Hubbard Hamiltonian:

$$H = -J \sum_{\langle i,j \rangle} \hat{b}_i^\dagger \hat{b}_j + \frac{U}{2} \sum_i \hat{n}_i\bigl(\hat{n}_i - 1\bigr) + V \sum_i r_i^2\, \hat{n}_i$$

The first term represents the quantum mechanical version of the kinetic energy of a potassium atom moving from one site in the lattice to another. The second term represents the energy of the interaction between different atoms sitting in the same lattice site, which can be attractive or repulsive, and the third term represents the potential energy each of these atoms has by virtue of being trapped in the lattice, which can be negative or positive (the latter is like "anti-trapping": it repels atoms from a specific site).

In order to create a negative-temperature state, the most important thing the scientists needed to do was find a way to place an upper limit on each of these three types of energy. Remember, having a negative temperature requires that the number of available microstates decreases as the energy rises, and if you can set up a maximum energy where the system runs out of microstates, that's a surefire way to make the entropy decrease as it gets closer to that maximum. The optical lattice naturally places an upper bound on the kinetic energy, but for the other two terms, the researchers found that it's necessary to arrange for the interactions between atoms to be attractive (rather than repulsive) and use an anti-trapping potential in order to get that upper limit.
Here's what they saw: This figure from the paper shows snapshots of where the potassium atoms are clustering in the lattice in two runs of the experiment, one on top and one on the bottom, with time increasing from left to right. At the beginning of the experiment, the left column, you can see that the atoms are clustering in the valleys of the optical lattice. As time goes on, in the top run, the atoms stay roughly in their original positions. But in the bottom run, you can see that the points where there are a lot of atoms change. They've moved from the valleys of the potential to the peaks! That shows the upper bound on energy which is necessary for a negative temperature. To be clear, at this point, this experiment is still very much basic science. All the authors have shown is that it's possible to make a stable negative-temperature state of a few atoms, with the energy coming from motion instead of spin (which is how negative temperature states have been created in the past). But it's possible that this could be turned into some larger-scale technology. If so, negative temperature materials could be used to construct highly efficient heat engines. And as the authors point out in the paper, negative temperature implies negative pressure as well, which could be a way of explaining the cosmological mystery of dark energy. So I'll be quite interested to hear about how this idea develops.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8256785869598389, "perplexity": 331.60846320480175}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246659483.51/warc/CC-MAIN-20150417045739-00207-ip-10-235-10-82.ec2.internal.warc.gz"}
http://lpsa.swarthmore.edu/LaplaceXform/InvLaplace/InvLaplaceXformIntro.html
# The Inverse Laplace Transform, Introduction

The inverse Laplace Transform can be calculated in a few ways.  If the function whose inverse Laplace Transform you are trying to calculate is in the table, you are done.  Otherwise we will use partial fraction expansion (PFE); it is also called partial fraction decomposition.  If you have never used partial fraction expansions, you may wish to read a background article, but you can probably continue without it.

There is also a way to calculate the inverse Laplace Transform directly by integration (so-called "direct calculation").  The technique is described, but no examples are given.  It requires advanced calculus, and we will not use direct calculation.

Finally, examples are given that use MATLAB.  MATLAB has powerful techniques for partial fraction expansion.
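To give a concrete sense of the partial fraction step, here is a small sketch in Python (my own illustration, not part of the original page, which demonstrates the technique in MATLAB; the example transform F(s) is made up for demonstration) using SciPy's `residue` routine:

```python
from scipy.signal import residue

# F(s) = (s + 3) / (s^2 + 3s + 2) = (s + 3) / ((s + 1)(s + 2))
num = [1, 3]     # coefficients of s + 3
den = [1, 3, 2]  # coefficients of s^2 + 3s + 2

r, p, k = residue(num, den)  # residues, poles, direct (polynomial) term
for ri, pi in zip(r, p):
    print(f"{ri:+.0f} / (s - ({pi:.0f}))")
```

This prints the expansion F(s) = 2/(s + 1) - 1/(s + 2), so reading each term off the table gives f(t) = 2e^(-t) - e^(-2t) for t >= 0.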
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9543813467025757, "perplexity": 462.70440090858136}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202299.16/warc/CC-MAIN-20190320044358-20190320070358-00033.warc.gz"}
http://philpapers.org/browse/physics
# Physics

1. Mario Abundo (2009). First-Passage Problems for Asymmetric Diffusions and Skew-Diffusion Processes. In Institute of Physics Krzysztof Stefanski (ed.), Open Systems and Information Dynamics. World Scientific Publishing Company 16--04. For a, b > 0, we consider a temporally homogeneous, one-dimensional diffusion process X(t) defined over I = (-b, a), with infinitesimal parameters depending on the sign of X(t). We suppose that, when X(t) reaches the position 0, it is reflected rightward to δ with probability p > 0 and leftward to -δ with probability 1 - p, where δ > 0. Closed analytical expressions are found for the mean exit time from the interval (-b, a), and for the probability (...)
2. Stephen L. Adler & Jeeva Anandan (1996). Nonadiabatic Geometric Phase in Quaternionic Hilbert Space. Foundations of Physics 26 (12):1579-1589. We develop the theory of the nonadiabatic geometric phase, in both the Abelian and non-Abelian cases, in quaternionic Hilbert space.
3. Mavriche Adrian (2012). Dr. NON (non):10. The present document starts from the relative existence of the electromagnetic field, reaching through mental experiments its connection with the gravitational field, without the necessity to resort to other space-time dimensions or supplementary "exotic" particles. The final conclusion is that the "electro-gravitational" field and the electromagnetic field with an accelerated source are two different manifestations of the same single dynamic field, whilst demonstrating why there are light sources with shifts toward the red or blue of the light spectrum.
4. V. V. Afonin & V. Y. Petrov (2010). Is the Luttinger Liquid a New State of Matter? Foundations of Physics 40 (2):190-204. We are demonstrating that the Luttinger model with short range interaction can be treated as a type of Fermi liquid. In line with the main dogma of Landau's theory one can define a fermion excitation renormalized by interaction and show that in terms of these fermions any excited state of the system is described by free particles. The fermions are a mixture of renormalized right and left electrons. The electric charge and chirality of the Landau quasi-particle is discussed.
5. D. V. Ahluwalia (1998). Book Review: Quantum Field Theory, Second Edition, by Lewis H. Ryder. [REVIEW] Foundations of Physics 28 (3):527-529.
6. Muhammad Adeel Ajaib (2015). A Fundamental Form of the Schrodinger Equation. Foundations of Physics 45 (12):1586-1598. We propose a first order equation from which the Schrodinger equation can be derived. Matrices that obey certain properties are introduced for this purpose. We start by constructing the solutions of this equation in one dimension and solve the problem of electron scattering from a step potential. We show that the sum of the spin up and down, reflection and transmission coefficients, is equal to the quantum mechanical results for this problem. Furthermore, we present a three dimensional version of the (...)
7. E. K. Akhmedov & A. Y. Smirnov (2011). Neutrino Oscillations: Entanglement, Energy-Momentum Conservation and QFT. [REVIEW] Foundations of Physics 41 (8):1279-1306. We consider several subtle aspects of the theory of neutrino oscillations which have been under discussion recently. We show that the S-matrix formalism of quantum field theory can adequately describe neutrino oscillations if correct physics conditions are imposed. This includes space-time localization of the neutrino production and detection processes. Space-time diagrams are introduced, which characterize this localization and illustrate the coherence issues of neutrino oscillations. We discuss two approaches to calculations of the transition amplitudes, which allow different physics interpretations: (i) (...)
8. James Albertson (1959). Causality and Chance in Modern Physics. [REVIEW] Modern Schoolman 36 (2):134-135.
9. S. Albeverio (1984). Non-Standard Analysis; Polymer Models, Quantum Fields. In Heinrich Mitter & Ludwig Pittner (eds.), Stochastic Methods and Computer Techniques in Quantum Dynamics. Springer-Verlag 233--254. We give an elementary introduction to non-standard analysis and its applications to the theory of stochastic processes. This is based on a joint book with J. E. Fenstad, R. Høegh-Krohn and T. Lindstrøm. In particular we give a discussion of a hyperfinite theory of Dirichlet forms with applications to the study of the Hamiltonian for a quantum mechanical particle in the potential created by a polymer. We also discuss new results on the existence of attractive polymer measures in dimension d (...)
10. R. Aldrovandi, R. R. Cuzinatto & L. G. Medeiros (2006). Analytic Solutions for the Λ-FRW Model. Foundations of Physics 36 (11):1736-1752. The high precision attained by cosmological data in the last few years has increased the interest in exact solutions. Analytic expressions for solutions in the Standard Model are presented here for all combinations of Λ = 0, Λ ≠ 0, κ = 0, and κ ≠ 0, in the presence and absence of radiation and nonrelativistic matter. The most complete case (here called the ΛγCDM Model) has Λ ≠ 0, κ ≠ 0, and supposes the presence of radiation and dust. (...)
11. Erik M. Alfsen & Frederic W. Shultz (2003). Geometry of State Spaces of Operator Algebras. Monograph Collection (Matt - Pseudo). In this book we give a complete geometric description of state spaces of operator algebras, Jordan as well as associative. That is, we give axiomatic characterizations of those convex sets that are state spaces of C*-algebras and von Neumann algebras, together with such characterizations for the normed Jordan algebras called JB-algebras and JBW-algebras. These non-associative algebras generalize C*-algebras and von Neumann algebras respectively, and the characterization of their state spaces is not only of interest in itself, but is also (...)
12. A. D. Alhaidari (2014). Renormalization of the Strongly Attractive Inverse Square Potential: Taming the Singularity. Foundations of Physics 44 (10):1049-1058. Quantum anomalies in the inverse square potential are well known and widely investigated. Most prominent is the unbounded increase in oscillations of the particle's state as it approaches the origin when the attractive coupling parameter is greater than the critical value of 1/4. Due to this unphysical divergence in oscillations, we are proposing that the interaction gets screened at short distances making the coupling parameter acquire an effective (renormalized) value that falls within the weak range 0–1/4. This prevents the oscillations (...)
13. S. Twareque Ali, Claudio Carmeli, Teiko Heinosaari & Alessandro Toigo (2009). Commutative POVMs and Fuzzy Observables. Foundations of Physics 39 (6):593-612. In this paper we review some properties of fuzzy observables, mainly as realized by commutative positive operator valued measures. In this context we discuss two representation theorems for commutative positive operator valued measures in terms of projection valued measures and describe, in some detail, the general notion of fuzzification. We also make some related observations on joint measurements.
14. Robert Alicki (2009). Quantum Decay Cannot Be Completely Reversed: The 5% Rule. In Institute of Physics Krzysztof Stefanski (ed.), Open Systems and Information Dynamics. World Scientific Publishing Company 16--01.
15. Vidal Alonso, Salvatore De Vincenzo & Luigi Mondino (1999). Tensorial Relativistic Quantum Mechanics in (1+1) Dimensions and Boundary Conditions. Foundations of Physics 29 (2):231-250. The tensorial relativistic quantum mechanics in (1+1) dimensions is considered. Its kinematical and dynamical features are reviewed as well as the problem of finding the Dirac spinor for given finite multivectors. For stationary states, the dynamical tensorial equations, equivalent to the Dirac equation, are solved for a free particle, for a particle inside a box, and for a particle in a step potential.
16. T. B. Anders, R. Von Mellenthin, B. Pfeil & H. Salecker (1993). Unitarity Bounds for 4-Fermion Contact Interactions. Foundations of Physics 23 (3):399-410. In this paper we consider the effect of unitarity bounds $s_b \geq s \equiv (E_1 + E_2)^2_{\mathrm{cms}}$ for the recently proposed types of nonderivative 4-fermion contact interactions. To this purpose we decompose the helicity amplitudes at c.m.s. into partial waves. The bounds are defined to hold for all reaction channels due to the same type of contact interaction. We find $s_b = \tau\,4\pi/\kappa$. Here κ is the coupling constant. The factor τ depends on the type of coupling and on the different cases to identify the fermions. (...)
17. Constantin Antonopoulos (1994). The Semantics of Absolute Space. Apeiron 19:6-11.
21. D. Arsenović, N. Burić, D. M. Davidović & S. Prvanović (2014). Lagrangian Form of Schrödinger Equation. Foundations of Physics 44 (7):725-735. Lagrangian formulation of quantum mechanical Schrödinger equation is developed in general and illustrated in the eigenbasis of the Hamiltonian and in the coordinate representation. The Lagrangian formulation of physically plausible quantum system results in a well defined second order equation on a real vector space. The Klein–Gordon equation for a real field is shown to be the Lagrangian form of the corresponding Schrödinger equation.
22. R. Arshansky & L. P. Horwitz (1985). The Landau-Peierls Relation and a Causal Bound in Covariant Relativistic Quantum Theory. Foundations of Physics 15 (6):701-715. Thought experiments analogous to those discussed by Landau and Peierls are studied in the framework of a manifestly covariant relativistic quantum theory. It is shown that momentum and energy can be arbitrarily well defined, and that the drifts induced by measurement in the positions and times of occurrence of events remain within the (stable) spread of the wave packet in space-time. The structure of the Newton-Wigner position operator is studied in this framework, and it is shown that an analogous time (...)
23. Ray E. Artz (1981). Quantum Mechanics in Galilean Space-Time. Foundations of Physics 11 (11-12):839-862. The usual quantum mechanical treatment of a Schrödinger particle is translated into manifestly Galilean-invariant language, primarily through the use of Wigner-distribution methods. The hydrodynamical formulation of quantum mechanics is derived directly from the Wigner-distribution formulation, and the two formulations are compared. Wigner distributions are characterized directly, i.e., without reference to wave functions, and a heuristic interpretation of Wigner distributions and their evolution is developed.
25. A. K. T. Assis (1992). On the Absorption of Gravity. Apeiron 13:3-11.
27. This minicourse on quantum mechanics is intended for students who have already been rather well exposed to the subject at an elementary level. It is assumed that they have surmounted the first conceptual hurdles and also have struggled with the Schrödinger equation in one dimension.
28. R. Aurich & F. Steiner (2001). Orbit Sum Rules for the Quantum Wave Functions of the Strongly Chaotic Hadamard Billiard in Arbitrary Dimensions. Foundations of Physics 31 (4):569-592. Sum rules are derived for the quantum wave functions of the Hadamard billiard in arbitrary dimensions. This billiard is a strongly chaotic (Anosov) system which consists of a point particle moving freely on a D-dimensional compact manifold (orbifold) of constant negative curvature. The sum rules express a general (two-point) correlation function of the quantum mechanical wave functions in terms of a sum over the orbits of the corresponding classical system. By taking the trace of the orbit sum rule or pre-trace formula, (...)
30. K. Avinash & V. L. Rvachev (2000). Non-Archimedean Algebra: Applications to Cosmology and Gravitation. [REVIEW] Foundations of Physics 30 (1):139-152. Application of recently developed non-Archimedean algebra to a flat and finite universe of total mass $M_0$ and radius $R_0$ is described. In this universe, mass m of a body and distance R between two points are bounded from above, i.e., $0 \le m \le M_0$, $0 \le R \le R_0$. The universe is characterized by an event horizon at $R_0$ (there is nothing beyond it, not even space). The radial distance metric is compressed toward the horizon, which is shown to cause the phenomenon of (...)
31. In this article, we put forward a new strategy for teaching the concept of energy. In the first section, we discuss how the concept is currently treated in educational programmes at primary and secondary level (taking the case of France), the learning difficulties that arise as well as the main teaching strategies presented in science education literature. In the second section, we argue that due to the complexity of the concept of energy, rethinking how it is taught should involve teacher (...)
32. Ezzat G. Bakhoum (2009). On the Relativistic Principle of Time Dilation. Apeiron 16 (3):455.
33. John Purssell Ballad (2012). Decisive Test for the Ritz Hypothesis. Apeiron 19 (1):38.
34. William Band & James L. Park (1978). Generalized Two-Level Quantum Dynamics. II. Non-Hamiltonian State Evolution. Foundations of Physics 8 (1-2):45-58. A theorem is derived that enables a systematic enumeration of all the linear superoperators ℒ (associated with a two-level quantum system) that generate, via the law of motion ℒρ = $\dot \rho$, mappings ρ(0) → ρ(t) restricted to the domain of statistical operators. Such dynamical evolutions include the usual Hamiltonian motion as a special case, but they also encompass more general motions, which are noncyclic and feature a destination state ρ(t → ∞) that is in some cases independent of ρ(0).
35. Oded Bar-On (2005). Time-Asymmetric Relativity. Apeiron 12 (3):256.
36. A. S. Barabash (2010). Experimental Test of the Pauli Exclusion Principle. Foundations of Physics 40 (7):703-718. A short review is given of three experimental works on tests of the Pauli Exclusion Principle (PEP) in which the author has been involved during the last 10 years. In the first work a search for anomalous carbon atoms was done and a limit on the existence of such atoms was determined, $^{12}\tilde{\mathrm{C}}/^{12}\mathrm{C} < 2.5 \times 10^{-12}$. In the second work PEP was tested with the NEMO-2 detector and the limits on the violation of PEP for p-shell nucleons in 12C were obtained. (...)
37. M. Barone (2004). The Vacuum as Ether in the Last Century. Foundations of Physics 34 (12):1973-1982. In this paper we review the evolution of the concept of "vacuum" according to different theories formulated in the last century, like Quantum Mechanics, Quantum Electrodynamics, Quantum Chromodynamics in Particle Physics and Cosmology. In all these theories a metastable vacuum state is considered which transforms from one state to another according to the energy taken into consideration. It is a "fluid" made up by matter and radiation present in the whole Universe, which may be identified with a modern (...)
39. A. O. Barut (1995). Quantum Theory of Single Events Continued. Accelerating Wavelets and the Stern-Gerlach Experiment. Foundations of Physics 25 (2):377-381. Exact wavelet solutions of the wave equation for accelerating potentials are found and applied to single individual events in the Stern-Gerlach experiment.
40. A. O. Barut (1990). Quantum Theory of Single Events: Localized De Broglie Wavelets, Schrödinger Waves, and Classical Trajectories. [REVIEW] Foundations of Physics 20 (10):1233-1240. For an arbitrary potential V with classical trajectories x = g(t), we construct localized oscillating three-dimensional wave lumps ψ(x, t, g) representing a single quantum particle. The crest of the envelope of the ripple follows the classical orbit g(t), slightly modified due to the potential V, and ψ(x, t, g) satisfies the Schrödinger equation. The field energy, momentum, and angular momentum calculated as integrals over all space are equal to the particle energy, momentum, and angular momentum. The relation to coherent states and to Schrödinger waves is (...)
41. A. O. Barut, P. Budinich, J. Niederle & R. Raçzka (1994). Conformal Space-Times—The Arenas of Physics and Cosmology. Foundations of Physics 24 (11):1461-1494. The mathematical and physical aspects of the conformal symmetry of space-time and of physical laws are analyzed. In particular, the group classification of conformally flat space-times, the conformal compactifications of space-time, and the problem of imbedding of the flat space-time in global four-dimensional curved spaces with non-trivial topological and geometrical structure are discussed in detail. The wave equations on the compactified space-times are analyzed also, and the set of their elementary solutions constructed. Finally, the implications of global compactified space-times for (...)
44. Peter G. Bass (2003). Gravitation-A New Theory. Apeiron 10 (4):98-151.
47. James Baugh, David Ritz Finkelstein, Andrei Galiautdinov & Mohsen Shiri-Garakani (2003). Transquantum Dynamics. Foundations of Physics 33 (9):1267-1275. Segal proposed transquantum commutation relations with two transquantum constants ħ′, ħ″ besides Planck's quantum constant ħ and with a variable i. The Heisenberg quantum algebra is a contraction—in a more general sense than that of Inönü and Wigner—of the Segal transquantum algebra. The usual constant i arises as a vacuum order-parameter in the quantum limit ħ′,ħ″→0. One physical consequence is a discrete spectrum for canonical variables and space-time coordinates. Another is an interconversion of time and energy accompanying space-time meltdown (disorder), (...)
49. Yu A. Baurov (2002). The Neutrino: What Is It? Apeiron 9 (4):1-24.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9109390377998352, "perplexity": 1673.0370516937885}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542588.29/warc/CC-MAIN-20161202170902-00296-ip-10-31-129-80.ec2.internal.warc.gz"}
https://mathematicaldreams.wordpress.com/2010/03/07/work-021-own-inequalities/
# work 021 – Own Inequalities

1) For $a,b,c$ positive reals with sum 1, show that –

$\frac {a}{4b + 3bc + 4c} + \frac {b}{4c + 3ca + 4a} + \frac {c}{4a + 3ab + 4b} \ge \frac {1}{3}$

2) For $a,b,c$ positive reals with sum 3, and the sum of any 2 greater than 1, show that –

$\frac {a}{b + c - 1} + \frac {b}{c + a - 1} + \frac {c}{a + b - 1} \ge 3$

3) For $a,b,c$ positive reals with sum 3, show that –

$\frac {a}{2b + 3c - 1} + \frac {b}{2c + 3a - 1} + \frac {c}{2a + 3b - 1} \ge \frac {3}{4}$

4) For $x,y,z$ positive reals such that –

$xy + y \ge 1$

$yz + z \ge 1$

$zx + x \ge 1$

Prove that –

$\frac {x}{1 + xz - x} + \frac {y}{1 + yx - y} + \frac {z}{1 + zx - z} + \frac {(x + y - 1)(z + zx - 1)(x + xz - 1)}{xyz} \ge 4$

Proof: For the first three, use the Cauchy-Schwarz inequality in the Engel form.
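To illustrate what that hint means for problem 1 (this worked sketch is my own addition, not part of the original post): the Engel form of Cauchy-Schwarz, also known as Titu's lemma, states that for positive reals $b_i$,

$\frac{a_1^2}{b_1} + \cdots + \frac{a_n^2}{b_n} \ge \frac{(a_1 + \cdots + a_n)^2}{b_1 + \cdots + b_n}$

Writing each term of problem 1 as $\frac{a^2}{a(4b + 3bc + 4c)}$ and applying the lemma gives

$\sum \frac{a}{4b + 3bc + 4c} \ge \frac{(a+b+c)^2}{8(ab+bc+ca) + 9abc} = \frac{1}{8(ab+bc+ca) + 9abc}$

since $a + b + c = 1$. With $ab + bc + ca \le \frac{1}{3}$ and $abc \le \frac{1}{27}$ (both from AM-GM when $a + b + c = 1$), the denominator is at most $\frac{8}{3} + \frac{9}{27} = 3$, which yields the claimed bound, with equality at $a = b = c = \frac{1}{3}$.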
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 11, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8823961019515991, "perplexity": 1096.0043085057714}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267866984.71/warc/CC-MAIN-20180624160817-20180624180817-00310.warc.gz"}
http://talkstats.com/search/3368315/
# Search results 1. ### Understanding expectation operator notation Hi, I am trying to understand how to interpret a problem that uses the expectation operator. Please see the attached pdf for more information. Could someone explain in words how to read the definition of Yk that uses the summation operator? Why would the mean of Y equal 0? I think I am...
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9107286334037781, "perplexity": 413.53720430597474}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103850139.45/warc/CC-MAIN-20220630153307-20220630183307-00295.warc.gz"}
https://gateoverflow.in/275726/page-size-v-s-page-fault
I studied this in the book by William Stallings; it was written there that if we increase the page size, then page faults first increase, and then, when the page size becomes the size of the process, page faults decrease. Can someone explain with an example why this happens?

This seems to be incorrect to me. Can you check once again and tell me in which context it is written? I think if this kind of statement is written, then it must be in some context. Without context, in general, it looks absurd to me.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8399460911750793, "perplexity": 990.8798675420768}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202161.73/warc/CC-MAIN-20190319224005-20190320010005-00174.warc.gz"}
http://link.springer.com/article/10.1186%2F1029-242X-2012-187
Journal of Inequalities and Applications, 2012:187

# Second-order duality for a nondifferentiable minimax fractional programming under generalized α-univexity

## Authors

• S. Gupta, Department of Mathematics, Indian Institute of Technology
• D. Dangar, Department of Mathematics, Indian Institute of Technology
• Sumit Kumar, Indian Institute of Management

Open Access Research

DOI: 10.1186/1029-242X-2012-187

Gupta, S., Dangar, D. & Kumar, S. J Inequal Appl (2012) 2012: 187. doi:10.1186/1029-242X-2012-187

## Abstract

In this paper, we concentrate our study on deriving appropriate duality theorems for two types of second-order dual models of a nondifferentiable minimax fractional programming problem involving second-order α-univex functions. Examples to show the existence of α-univex functions have also been illustrated. Several known results, including many recent works, are obtained as special cases.

MSC: 49J35, 90C32, 49N15.

### Keywords

minimax programming, fractional programming, nondifferentiable programming, second-order duality, α-univexity

## 1 Introduction

After Schmitendorf [1], who derived necessary and sufficient optimality conditions for static minimax problems, much attention has been paid to optimality conditions and duality theorems for minimax fractional programming problems [2–17]. For the theory, algorithms, and applications of some minimax problems, the reader is referred to [18]. In this paper, we consider the following nondifferentiable minimax fractional programming problem:

(P) $\quad \min_{x \in \mathbb{R}^{n}} \ \sup_{y \in Y} \ \dfrac{f(x,y) + (x^{T}Bx)^{1/2}}{g(x,y) - (x^{T}Dx)^{1/2}} \quad \text{subject to} \quad h(x) \le 0,$

where Y is a compact subset of $\mathbb{R}^{l}$, $f$ and $g$ are twice continuously differentiable on $\mathbb{R}^{n} \times \mathbb{R}^{l}$ and $h$ is twice continuously differentiable on $\mathbb{R}^{n}$, B and D are positive semidefinite $n \times n$ matrices, and $g(x,y) - (x^{T}Dx)^{1/2} > 0$ for each $(x,y) \in S \times Y$, where $S = \{x \in \mathbb{R}^{n} : h(x) \le 0\}$.

Motivated by [7, 14, 15], Yang and Hou [17] formulated a dual model for the fractional minimax programming problem and proved duality theorems under generalized convex functions. Ahmad and Husain [5] extended this model to the nondifferentiable case and obtained duality relations involving generalized pseudoconvex functions. Jayswal [11] studied duality theorems for another two duals of (P) under α-univex functions. Recently, Ahmad et al. [4] derived the sufficient optimality condition for (P) and established duality relations for its dual problem under generalized invexity assumptions. The papers [2, 4–7, 11–15, 17] involved the study of first-order duality for minimax fractional programming problems.

The concept of second-order duality in nonlinear programming problems was first introduced by Mangasarian [19]. One significant practical advantage of the second-order dual over the first-order one is that it may provide tighter bounds for the value of the objective function, because more parameters are involved. Hanson [20] has shown another advantage of second-order duality by citing an example: if a feasible point of the primal is given and the first-order duality conditions do not apply (are infeasible), then we may use second-order duality to provide a lower bound for the value of the primal problem. Recently, several researchers [3, 8–10, 16] considered second-order duals for minimax fractional programming problems. Husain et al. [8] first formulated second-order dual models for a minimax fractional programming problem and established duality relations involving η-bonvex functions. This work was later generalized in [10] by introducing an additional vector r to the dual models, and in Sharma and Gulati [16] by proving the results under second-order generalized α-type I univex functions.
The work cited in [3, 8, 10, 16] involves differentiable minimax fractional programming problems. Recently, Hu et al.[9] proved appropriate duality theorems for a second-order dual model of (P) under η-pseudobonvexity/η-quasibonvexity assumptions. In this paper, we formulate two types of second-order dual models for (P) and then derive weak, strong, and strict converse duality theorems under generalized α-univexity assumptions. Further, examples have been illustrated to show the existence of second-order α-univex functions. Our study extends some of the known results of the literature [5, 6, 11, 12, 14]. ## 2 Notations and preliminaries For each and , we define Definition 2.1 Let () be a twice differentiable function. Then ζ is said to be second-order α-univex at , if there exist , , , and such that for all and , we have Example 2.1 Let be defined as , where . Also, let , , and . The function ζ is second-order α-univex at , since But every α-univex function need not be invex. To show this, consider the following example. Example 2.2 Let be defined as . Let , , and . Then we have Hence, the function Ω is second-order α-univex but not invex, since for , , and , we obtain Lemma 2.1 (Generalized Schwartz inequality) LetBbe a positive semidefinite matrix of ordern. Then, for all, The equality holds iffor some. Following Theorem 2.1 ([13], Theorem 3.1) will be required to prove the strong duality theorem. Theorem 2.1 (Necessary condition) Ifis an optimal solution of problem (P) satisfying, , and, are linearly independent, then there exist, , andsuch that (2.1) (2.2) (2.3) (2.4) (2.5) In the above theorem, both matrices B and D are positive semidefinite at . If either or is zero, then the functions involved in the objective of problem (P) are not differentiable. To derive necessary conditions under this situation, for , we define If in addition, we insert the condition , then the result of Theorem 2.1 still holds. For the sake of convenience, let (2.6) and where ## 3 Model I In this section, we consider the following second-order dual problem for (P): (DM1) where and denotes the set of all satisfying (3.1) (3.2) (3.3) If the set , we define the supremum of over equal to −∞. Remark 3.1 If , then using (3.3), the above dual model reduces to the problems studied in [6, 11, 12]. Further, if B and D are zero matrices of order n, then (DM1) becomes the dual model considered in [14]. Next, we establish duality relations between primal (P) and dual (DM1). Theorem 3.1 (Weak duality) Letxandare feasible solutions of (P) and (DM1), respectively. Assume that 1. (i) is second-orderα-univex atz, 2. (ii) and. Then Proof Assume on contrary to the result that (3.4) Since , , we have (3.5) From (3.4) and (3.5), for , we get This further from , , and , we obtain (3.6) Now, Therefore, (3.7) By hypothesis (i), we have This follows from (3.1) that which using hypothesis (ii) yields This further from (2.6), (3.2), and the feasibility of x implies This contradicts (3.7), hence the result. □ Theorem 3.2 (Strong duality) Letbe an optimal solution for (P) and let, be linearly independent. Then there existand, such thatis feasible solution of (DM1) and the two objectives have same values. If, in addition, the assumptions of Theorem  3.1 hold for all feasible solutionsof (DM1), thenis an optimal solution of (DM1). Proof Since is an optimal solution of (P) and , are linearly independent, then by Theorem 2.1, there exist and such that is feasible solution of (DM1) and the two objectives have same values. 
Optimality of for (DM1), thus follows from Theorem 3.1. □ Theorem 3.3 (Strict converse duality) Letbe an optimal solution to (P) andbe an optimal solution to (DM1). Assume that 1. (i) is strictly second-orderα-univex at, 2. (ii) , are linearly independent, 3. (iii) and. Then. Proof By the strict α-univexity of at , we get which in view of (3.1) and hypothesis (iii) give Using (2.6), (3.2), and feasibility of in above, we obtain (3.8) Now, we shall assume that and reach a contradiction. Since and are optimal solutions to (P) and (DM1), respectively, and , are linearly independent, by Theorem 3.2, we get (3.9) Since , , we have (3.10) By (3.9) and (3.10), we get for all and . From and , with , we obtain (3.11) From Lemma 2.1, (3.3), and (3.11), we have which contradicts (3.8), hence the result. □ ## 4 Model II In this section, we consider another dual problem to (P): (DM2) where denotes the set of all satisfying (4.1) (4.2) (4.3) If the set is empty, we define the supremum in (DM2) over equal to −∞. Remark 4.1 If , then using (4.3), the above dual model becomes the dual model considered in [5, 11, 12]. In addition, if B and D are zero matrices of order n, then (DM2) reduces to the problem studied in [14]. Now, we obtain the following appropriate duality theorems between (P) and (DM2). Theorem 4.1 (Weak duality) Letxandare feasible solutions of (P) and (DM2), respectively. Suppose that the following conditions are satisfied: 1. (i) is second-orderα-univex atz, 2. (ii) and. Then Proof Assume on contrary to the result that or Using , and (4.3) in above, we have (4.4) Now, Hence, (4.5) Now, by the second-order α-univexity of at z, we get which using (4.1) and hypothesis (ii) give This from (4.2) follows that which contradicts (4.5). This proves the theorem. □ By a similar way, we can prove the following theorems between (P) and (DM2). Theorem 4.2 (Strong duality) Letbe an optimal solution for (P) and let, be linearly independent. Then there existand, such thatis feasible solution of (DM2) and the two objectives have same values. If, in addition, the assumptions of weak duality hold for all feasible solutionsof (DM2), thenis an optimal solution of (DM2). Theorem 4.3 (Strict converse duality) Letandare optimal solutions of (P) and (DM2), respectively. Assume that 1. (i) is strictly second-orderα-univex atz, 2. (ii) are linearly independent, 3. (iii) and. Then. ## 5 Concluding remarks In the present work, we have formulated two types of second-order dual models for a nondifferentiable minimax fractional programming problems and proved appropriate duality relations involving second-order α-univex functions. Further, examples have been illustrated to show the existence of such type of functions. Now, the question arises whether or not the results can be further extended to a higher-order nondifferentiable minimax fractional programming problem. ## Acknowledgements The authors wish to thank anonymous reviewers for their constructive and valuable suggestions which have considerably improved the presentation of the paper. The second author is also thankful to the Ministry of Human Resource Development, New Delhi (India) for financial support.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9864985346794128, "perplexity": 1672.0779481194738}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982297699.43/warc/CC-MAIN-20160823195817-00066-ip-10-153-172-175.ec2.internal.warc.gz"}
https://stacks.math.columbia.edu/tag/06BB
## 37.61 Exact sequences of differentials and conormal sheaves

In this section we collect some results on exact sequences of conormal sheaves and sheaves of differentials. In some sense these are all realizations of the triangle of cotangent complexes associated to a pair of composable morphisms of schemes. Let $g : Z \to Y$ and $f : Y \to X$ be morphisms of schemes.

1. There is a canonical exact sequence $g^*\Omega _{Y/X} \to \Omega _{Z/X} \to \Omega _{Z/Y} \to 0,$ see Morphisms, Lemma 29.32.9. If $g : Z \to Y$ is smooth or more generally formally smooth, then this sequence is a short exact sequence, see Morphisms, Lemma 29.34.16 or see Lemma 37.11.11.
2. If $g$ is an immersion or more generally formally unramified, then there is a canonical exact sequence $\mathcal{C}_{Z/Y} \to g^*\Omega _{Y/X} \to \Omega _{Z/X} \to 0,$ see Morphisms, Lemma 29.32.15 or see Lemma 37.7.10. If $f \circ g : Z \to X$ is smooth or more generally formally smooth, then this sequence is a short exact sequence, see Morphisms, Lemma 29.34.17 or see Lemma 37.11.12.
3. If $g$ and $f \circ g$ are immersions or more generally formally unramified, then there is a canonical exact sequence $\mathcal{C}_{Z/X} \to \mathcal{C}_{Z/Y} \to g^*\Omega _{Y/X} \to 0,$ see Morphisms, Lemma 29.32.18 or see Lemma 37.7.11. If $f : Y \to X$ is smooth or more generally formally smooth, then this sequence is a short exact sequence, see Morphisms, Lemma 29.34.18 or see Lemma 37.11.13.
4. If $g$ and $f$ are immersions or more generally formally unramified, then there is a canonical exact sequence $g^*\mathcal{C}_{Y/X} \to \mathcal{C}_{Z/X} \to \mathcal{C}_{Z/Y} \to 0,$ see Morphisms, Lemma 29.31.5 or see Lemma 37.7.12. If $g : Z \to Y$ is a regular immersion[1] or more generally a local complete intersection morphism, then this sequence is a short exact sequence, see Divisors, Lemma 31.21.6 or see Lemma 37.60.23.

[1] It suffices for $g$ to be an $H_1$-regular immersion. Observe that an immersion which is a local complete intersection morphism is Koszul regular.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9938344955444336, "perplexity": 378.1455760800425}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499831.97/warc/CC-MAIN-20230130232547-20230131022547-00349.warc.gz"}
https://www.hackmath.net/en/math-problem/137
Rectangle Anton

Difference between the length and width of the rectangle is 8. The length is 3 times larger than the width. Calculate the dimensions of the rectangle.

Correct result:

length: 12
width: 4

Solution:

$a = \dfrac{8}{3-1} + 8 = 12$

$b = \dfrac{8}{3-1} = 4$

Next similar math problems:

• Here is Here is a data set (n=117) that has been sorted. 10.4 12.2 14.3 15.3 17.1 17.8 18 18.6 19.1 19.9 19.9 20.3 20.6 20.7 20.7 21.2 21.3 22 22.1 22.3 22.8 23 23 23.1 23.5 24.1 24.1 24.4 24.5 24.8 24.9 25.4 25.4 25.5 25.7 25.9 26 26.1 26.2 26.7 26.8 27.5 27.6 2
• Rectangle The length of the rectangle is 12 cm greater than 3 times its width. What dimensions and area does this rectangle have if its circumference is 104 cm?
• Plot The length of the rectangle is 8 smaller than three times the width. If we increase the width by 5% of the length and the length is reduced by 14% of the width, the circumference of the rectangle will be increased by 30 m. What are the dimensions of the recta
• Area of garden If the width of the rectangular garden is decreased by 2 meters and its length is increased by 5 meters, the area of the rectangle will be 0.2 ares larger. If the width and the length of the garden will increase by 3 meters, its original size will increas
• The hall The hall had a rectangular ground plan, one dimension 20 m longer than the other. After rebuilding, the length of the hall declined by 5 m and the width increased by 10 m. The floor area increased by 300 m2. What were the original dimensions of the hall?
• Property The length of the rectangle-shaped property is 8 meters less than three times the width. If we increase the width by 5% of the length and reduce the length by 14% of the width, it will increase the property perimeter by 13 meters. How much will the property cost
• Rectangle Find the dimensions of the rectangle whose perimeter is 108 cm and whose length is 25% larger than the width.
• Rectangle The perimeter of the rectangle is 48 cm. Calculate its dimensions if they are in the ratio 5:3 (width:height)
• The rectangle Determine the area of the rectangle where the length and width are in the ratio 5:2 and its length is 7.5 cm longer than its width. Determine also its length and its width.
• Rectangle The area of the rectangle is 3002. Its length is 41 larger than the width. What are the dimensions of the rectangle?
• Rectangle diagonals Given a rectangle with area 24 cm2 and circumference 20 cm. The length of one side is 2 cm larger than the length of the second side. Calculate the length of the diagonal. Length and width are expressed in natural numbers.
• Rectangular garden The perimeter of Peter's rectangular garden is 98 meters. The width of the garden is 60% shorter than its length. Find the dimensions of the rectangular garden in meters. Find the garden area in square meters.
• Rectangular plot The dimensions of a rectangular plot are (x+1)m and (2x-y)m. If the sum of x and y is 3m and the perimeter of the plot is 36m, find the area of the diagonal of the plot.
• Square vs rectangle A square and a rectangle have the same area contents. The length of the rectangle is 9 greater and the width 6 less than the side of the square. Calculate the side of the square.
• Diagonal 20 A diagonal pathway for the rectangular town plaza whose length is 20 m longer than the width. The pathway is 20 m shorter than twice the width. How long should the pathway be?
• Rectangles The perimeter of a rectangle is 90 m. Divide it into three rectangles: the shorter side is the same in all three rectangles, and their longer sides are three consecutive natural numbers. What are the dimensions of each rectangle?
• Rectangle - sides ratio Calculate the area of a rectangle whose sides are in the ratio 3:13 and whose perimeter is 673.
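As a quick way to double-check word problems like this one, here is a short sketch (my own addition) using Python's SymPy library to solve the same two equations symbolically:

```python
from sympy import symbols, solve

a, b = symbols("a b", positive=True)  # a = length, b = width

# difference of length and width is 8; length is 3 times the width
solution = solve([a - b - 8, a - 3 * b], [a, b])
print(solution)  # {a: 12, b: 4}
```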
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8346552848815918, "perplexity": 455.0268525675223}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107892062.70/warc/CC-MAIN-20201026204531-20201026234531-00070.warc.gz"}
https://mathhelpboards.com/threads/application-of-intermediate-value-theorem-for-five-point-formula-numerical-differentiation.8181/
# [SOLVED] Application of Intermediate Value Theorem for five-point formula (numerical differentiation) #### kalish ##### Member Oct 7, 2013 99 I have a specific, for-learning-sake-only question on how the author of this link: http://www.math.ucla.edu/~yanovsky/Teaching/Math151A/hw5/Hw5_solutions.pdf gets past the details of the Intermediate Value Theorem in the following paragraph. If someone could fill in the details for me, it would be greatly appreciated, because I'm having a hard time understanding. \begin{align} \left(\frac{3}{12h}f^{(5)}(\xi_1)+\frac{18}{12h}f^{(5)}(\xi_2)-32\frac{6}{12h}f^{(5)}(\xi_3)+243\frac{1}{12h}f^{(5)}(\xi_4)\right)\frac{h^5}{120} &= \left(\frac{3}{12}f^{(5)}(\xi_1)+\frac{18}{12}f^{(5)}(\xi_2)-32\frac{6}{12}f^{(5)}(\xi_3)+243\frac{1}{12}f^{(5)}(\xi_4)\right)\frac{h^4}{120} \\ &= 6f^{(5)}(\xi)\frac{h^4}{120} \\ &= \frac{h^4}{20}f^{(5)}(\xi) \end{align} "Note that the IVT was used above..." Shouldn't it be "Suppose $f^{(5)}$ is continuous on $[x_0-h,x_0+3h]$ with $x_0-h < \xi_1<x_0<\xi_2<x_0+h<\xi_3<x_0+2h<\xi_4<x_0+3h.$ Since $\frac{1}{4}[f^{(5)}(\xi_1)+f^{(5)}(\xi_2)+f^{(5)}(\xi_3)+f^{(5)}(\xi_4)]$ lies between the minimum and maximum of the four values $f^{(5)}(\xi_i)$, the Intermediate Value Theorem implies that a number $\xi$ exists between $\xi_1$ and $\xi_4$, and hence in $(x_0-h,x_0+3h)$, with $f^{(5)}(\xi)=\frac{1}{4}[f^{(5)}(\xi_1)+f^{(5)}(\xi_2)+f^{(5)}(\xi_3)+f^{(5)}(\xi_4)]$"? #### Chris L T521 ##### Well-known member Staff member Jan 26, 2012 995 Hi kalish, Snippet from MHB Rule #2 said: As a courtesy, if you post your problem on multiple websites, and you get a satisfactory response on a different website, indicate in your MHB thread that you got an answer elsewhere so that our helpers do not duplicate others' efforts. As a courtesy (not just to us, but also to those on other sites as well), you should inform our (and their) members when you've posted a question on multiple sites (for instance, I see that you've asked this same question on math.stackexchange) and then inform us if you find a solution elsewhere. This way, no one's efforts are duplicated and/or put to waste on solving a problem that has been potentially solved.
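To see the $O(h^4)$ behaviour concretely, here is a small numerical sketch. It uses the standard symmetric five-point stencil (error term $\frac{h^4}{30}f^{(5)}(\xi)$), not the asymmetric stencil from the linked homework, purely to illustrate how a fourth-order error scales:

```python
# Sketch: check fourth-order convergence of a five-point stencil.
# The symmetric midpoint formula is used only for illustration.
import math

def d1_five_point(f, x0, h):
    # f'(x0) ~ (f(x0-2h) - 8 f(x0-h) + 8 f(x0+h) - f(x0+2h)) / (12 h)
    return (f(x0 - 2*h) - 8*f(x0 - h) + 8*f(x0 + h) - f(x0 + 2*h)) / (12*h)

f, x0, exact = math.sin, 1.0, math.cos(1.0)
for h in (0.1, 0.05, 0.025):
    err = abs(d1_five_point(f, x0, h) - exact)
    print(f"h={h:6.3f}  error={err:.3e}")  # error shrinks ~16x per halving of h
```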
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9870805144309998, "perplexity": 1205.9216376239508}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487613453.9/warc/CC-MAIN-20210614201339-20210614231339-00552.warc.gz"}
https://www.physicsforums.com/threads/local-density-of-states-ldos-is.316124/
# Local density of states (LDOS) 1. May 24, 2009 ### saray1360 Hi, I would like to know what the local density of states (LDOS) is and how it differs from the projected density of states. Also, when we choose a smaller isolevel we get a denser local density of states; why is that? Regards, 2. May 26, 2009 ### sokrates Re: LDOS-PDOS What is PDOS? Projected density of states? This is the first time I am hearing about it. LDOS is simply the density of states at a given location in space. Normally, density of states calculations include all possible states, and the LDOS gives local information. All those STM images of surfaces showing almost individual atoms are based on that. STM measures the LDOS, so you get a different current flow depending on your position. For a better description see: Datta, 2005, Quantum Transport
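For reference, the usual textbook definitions (notation is mine) make sokrates' point explicit: the LDOS resolves the density of states in space, while a projected DOS resolves it over a chosen orbital:

```latex
% Standard definitions (notation is mine):
\rho(\mathbf{r},E) = \sum_n |\psi_n(\mathbf{r})|^2 \,\delta(E - E_n)
  \quad \text{(local DOS at position } \mathbf{r}\text{)}
\qquad
D(E) = \int \rho(\mathbf{r},E)\, d^3r
  \quad \text{(total DOS)}
\qquad
\rho_\phi(E) = \sum_n |\langle \phi \,|\, \psi_n \rangle|^2 \,\delta(E - E_n)
  \quad \text{(DOS projected onto orbital } \phi\text{)}
```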
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9132577776908875, "perplexity": 1785.4986621328505}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823997.21/warc/CC-MAIN-20171020082720-20171020102720-00177.warc.gz"}
https://tech.forums.softwareag.com/t/is-startup-scripts/191395
# IS startup scripts Hello Experts, Can someone please let me know the difference between starting the Integration Server from 1. Profiles/IS_Default/bin/Startup.sh 2. IS_home/instances/default/bin/server.sh 3. IS_home/instances/default/bin/startup.sh What is the difference between these scripts, and how does each affect the startup of the server? Also, I see the below files in the directory profiles/IS_default/bin: sagis97.pid sagis97.status sagis97.java.status What do these files indicate? Thanks, Arun Cholleti.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9848807454109192, "perplexity": 2017.947204494721}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662533972.17/warc/CC-MAIN-20220520160139-20220520190139-00746.warc.gz"}
http://physics.stackexchange.com/users/11881/user11881?tab=activity
# user11881 reputation 3 member for 1 year, 11 months seen Feb 7 at 19:53 profile views 44 # 24 Actions Jul2 awarded Curious Jan4 answered Lax-Pair for principal chiral model Jan4 comment Lax-Pair for principal chiral model Thanks for your great explanation. By the way, is there any advantage to including a minus sign in the exponent of the Wilson line? It seems like it would be easier to define it without the minus sign in which case the derivation would go as below. Jan4 asked Lax-Pair for principal chiral model Jan3 asked Large-N factorization of single-trace operators Dec19 comment What sets AdS radius of the Vasiliev dual to the O(N) vector model? That makes some sense. In the free O(N) model we effectively have $\lambda = 0$. Then the equation $R/l_s = 0$ can be satisfied for any value of $R$ since $l_s \to \infty$ in the tensionless limit as you say. So I think the answer is that all values of $R$ are equivalent for the O(N) model. Dec19 asked What sets AdS radius of the Vasiliev dual to the O(N) vector model? Dec15 comment Gauge fields and strings: Loop equations Thanks for your answer. I'm not quite sure I fully understand it yet. In the meantime, however, I think I have a more pedestrian explanation (see below). Dec15 answered Gauge fields and strings: Loop equations May23 revised Gauge fields and strings: Loop equations edited body May23 asked Gauge fields and strings: Loop equations May14 revised Setting of renormalization scale in field theory calculations added 40 characters in body May14 asked Setting of renormalization scale in field theory calculations Feb4 comment Trace of stress tensor vanishes ==> Weyl invariant The stress tensor is defined as the variation of $S$ wrt the metric, not wrt the metric and all matter fields. Feb4 revised Trace of stress tensor vanishes ==> Weyl invariant edited body Feb4 asked Trace of stress tensor vanishes ==> Weyl invariant Oct9 asked Definition of CFT Sep3 comment Eq. (5.3.20) Weinberg Volume 1, p. 209 I'm sorry but I thought that $\mathbf{J}^{(1)}$ denotes 3-vector of matrices which have the same components as $(\mathcal{J}_k)^i_j$ where $k$ denotes the 3-vector index. I don't see exactly what about (5.3.6) singles out the 3-direction Sep3 awarded Student Sep3 awarded Editor
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9453188180923462, "perplexity": 1340.5083738008475}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500804220.17/warc/CC-MAIN-20140820021324-00242-ip-10-180-136-8.ec2.internal.warc.gz"}
https://www.lessonplanet.com/teachers/magic-wand
# Magic Wand Students form a circle, leaving some space between them, as they have to jump and duck.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9897732734680176, "perplexity": 2076.099163474409}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818692236.58/warc/CC-MAIN-20170925164022-20170925184022-00409.warc.gz"}
https://www.sdss4.org/dr17/irspec/abundances/
# Using APOGEE Stellar Abundances This page attempts to address some common questions about APOGEE stellar abundances, which are determined with the APOGEE Stellar Parameters and Chemical Abundances Pipeline (ASPCAP). Additional details are given in Holtzman et al. (in prep.). The APOGEE survey extracts the chemical abundances of multiple elements for the entire stellar sample. In DR17, we present abundances for 20 species: C, C I, N, O, Na, Mg, Al, Si, S, K, Ca, Ti, Ti II, V, Cr, Mn, Fe, Co, Ni, and Ce. In DR17, 3 species were not attempted (Ge, Rb, Yb), and the measurements for 4 species were attempted but found to be unsuccessful (P, Cu, Nd, and 13C). The accuracy of an individual element varies with the element and stellar type; certain parts of the element-star parameter space are not feasible to explore with the APOGEE data, as is discussed below. If you are interested in learning more about APOGEE's abundances and their derivation, see the DR17 ASPCAP Description or Holtzman et al. (in prep.). ## Overview of APOGEE Stellar Abundances In DR17, we provide abundances for up to 20 species: C, C I, N, O, Na, Mg, Al, Si, S, K, Ca, Ti, Ti II, V, Cr, Mn, Fe, Co, Ni, and Ce. In DR17, 3 species were not attempted (Ge, Rb, Yb) and 4 species were attempted but judged to be unsuccessful (P, Cu, Nd, and 13C). We note that C is measured from molecules, C I is measured from neutral carbon lines, Ti is measured from neutral titanium lines, and Ti II is measured from singly ionized titanium lines. These abundances are each reported in the bracket notation, for instance: $[\mathrm{Fe/H}] = \log_{10}(N_{\mathrm{Fe}} / N_{\mathrm{H}}) - \log_{10}(N_{\mathrm{Fe}} / N_{\mathrm{H}})_{\mathrm{Sun}}$. APOGEE's solar scale should be close to the Grevesse et al. (2007) solar values. However, we find that abundances of some elements for stars of solar metallicity in the solar neighborhood come out with non-solar abundance ratios, which is not expected based on previous studies. In addition to the raw spectroscopic values, we provide calibrated values that have been adjusted by the addition of a zeropoint to yield a median [X/M]=0 for stars in the solar neighborhood. Users interested in the details of APOGEE's abundance scale should read the DR17 ASPCAP description. Abundances are provided in various formats: relative to hydrogen (H), relative to total metallicity (M), or relative to iron (Fe). Most users will want to use the metallicity reported by [Fe/H] and the individual elemental abundances relative to iron, e.g., [X/Fe]. As described in the DR17 ASPCAP Description, DR17 uses a new set of synthetic spectral libraries to determine parameters and abundances. This library includes non-local thermodynamic equilibrium (NLTE) treatment for four elements: Na, Mg, K, and Ca. ## Multiple columns with stellar abundances We provide multiple columns with APOGEE stellar abundances. The raw abundances as measured by FERRE are saved in the FELEM array. For a uniform presentation relative to hydrogen or metals, we populate the X_H_SPEC and X_M_SPEC arrays. Zeropoint calibrations that bring the median of solar metallicity stars in the solar neighborhood to [X/M]=0 are used to populate X_H and X_M. Finally, "named" tags giving abundances relative to iron (e.g., MG_FE_SPEC, MG_FE, etc.) are populated from the arrays, but only for objects that have not been identified as problematic. If you are unfamiliar with APOGEE data, we recommend using the "named tags": FE_H, and the individual elemental abundances measured relative to iron, e.g., C_FE or MG_FE.
These named tags are the most conservative in how they are populated. Stars with the most suspect abundances, or whose abundances are known to be wrong, do not have any data in these named tags. For more complete data, use the X_H and X_M arrays, but be aware that there may be significant issues with some of these. Consult the ASPCAPFLAG and ELEMFLAG bitmasks if you use these. ## Uncertainties Uncertainties on abundances are estimated from repeat observations of stars and can be found in named tags such as C_FE_ERR. In addition to the named uncertainty tags such as C_FE_ERR, we also provide the raw uncertainties from ASPCAP's abundance fitting procedure in the FELEM_ERR array, but these generally seem to be significantly underestimated. Since the abundances are determined in a separate fit after the parameters have been determined, covariances between abundances and parameters (or other abundances) are not provided. ### Quality Flags Each element has an associated bitmask, e.g., C_FE_FLAG, that contains descriptive information about potential issues with the elemental abundance for each star; this is also saved in ELEMFLAG, following the order of elements in FELEM. We note that some abundances appear to be unreliable and/or unmeasurable in some regions of parameter space. From visual inspection of trends among solar neighborhood stars, abundances become particularly challenging at our coolest effective temperatures. Based on the results, we have selected effective temperature cuts for some elements, and for stars outside of the acceptable range, we have set a TEFF_CUT bit in the abundance bitmask. Stars with this bit set still have the abundances populated in the abundance arrays, but they are not populated in the named tags. ## Quality of derived abundances While some quality cuts have been applied to the values in the named abundance tags, not all elements have the same quality data. For those who are unfamiliar with APOGEE elemental abundances, we provide a general guideline for the reliability of individual elements below. However, these descriptions are very general; the quality of a given element will vary by metallicity, temperature and $S/N$ (for instance, at the lowest metallicities only a few elements remain measurable), and we encourage users to explore the data and make their own judgements. ## Challenges with cooler stars Abundances of cool stars (Teff < 4000 K, and especially Teff < 3500 K) are particularly challenging, perhaps due to the significant presence of molecular absorption and challenges in interpolating between synthetic spectra in this regime. Shallow minima in the fitting space may also contribute. For giant stars, several elements appear to be measured in narrow sequences of abundances, sometimes multi-modal, in the coolest stars. For dwarf stars, abundances seem systematically low, by as much as several tenths of a dex for stars with Teff < 3500 K. In addition, many abundances seem to show an anomalously low dip in abundance between 4000 K < Teff < 5000 K that may be related to the presence of very strong lines in this range of effective temperature. ## Challenges with warmer stars At warmer effective temperatures, lines from many elements become weak or disappear. Hotter than 7000 K, it is difficult to determine any abundances, and no calibrated abundances are populated in DR17. ## Elements in Giants DR17 ASPCAP considered measurements for 27 chemical species. However, abundances for Ge, Rb, and Yb were not attempted in DR17 and have no measurements.
Abundances for P, Cu, Nd, and 13C were deemed unsuccessful in DR17 by the ASPCAP team, and these values are only present in the raw FERRE output. The remaining 20 species were evaluated for their overall quality as follows: • Most Reliable: species that are precisely measured, measured over a wide range of stellar parameters, and follow trends expected from the literature • Reliable: species that are less precisely measured, measured over a narrower range of stellar parameters, and follow trends consistent with the literature • Less Reliable: less precisely measured, measured over a narrow range of stellar parameters, and with apparent chemical trends consistent with the literature • Deviant: measured, but the results deviate from literature expectations We stress that these are general evaluations, and users are advised to independently evaluate chemical species of interest for their sample pursuant to their science goal. #### Evaluations of DR17 Chemical Abundances for Giants Most Reliable: C, N, O, Mg, Al, Si, Mn, Fe, Ni Reliable: C I, Na, K, Ca, Co, Ce Less Reliable: S, V, Cr Deviant in DR17: Ti, Ti II Unsuccessful in DR17: P, Cu, Nd Not attempted: Ge, Rb, Yb ## Elements in Dwarfs DR17 ASPCAP considered measurements for 27 chemical species, but APOGEE's abundances for dwarfs are typically not as precise as those for giants, and the overall quality of these abundances may be slightly lower. As for giants, abundances for Ge, Rb, and Yb were not attempted in DR17 and have no measurements. Abundances for P, Ti II, Co, Cu, Ce, Nd, and 13C were deemed unsuccessful for dwarfs in DR17 by the ASPCAP team, and these values are only in the raw FERRE output. The remaining 17 species were evaluated for their overall quality as follows: • Most Reliable: species that are precisely measured, measured over a wide range of stellar parameters, and follow trends expected from the literature • Reliable: species that are less precisely measured, measured over a narrower range of stellar parameters, and show vague chemical patterns • Less Reliable: less precisely measured, measured over a narrow range of stellar parameters, and show vague chemical patterns • Deviant: measured, but the results do not show coherent chemical patterns We stress that these are general evaluations, and users are advised to independently evaluate chemical species of interest for their sample pursuant to their science goal. #### Evaluations of DR17 Chemical Abundances for Dwarfs Most Reliable: C, Mg, Si, Fe, Ni Reliable: C I, O, Al, K, Ca, Mn Less Reliable: N, S Deviant in DR17: Na, Ti, V, Cr Unsuccessful in DR17: P, Ti II, Co, Cu, Ce, Nd Not attempted: Ge, Rb, Yb ## Systematics in abundances of stars across the HR Diagram Measuring stellar abundances is a challenging endeavor, and it is difficult to measure them consistently across the HR diagram. While APOGEE applies some basic quality cuts and zero-point calibrations to dwarf and giant abundances, the strength and measurability of spectral lines vary across the HR diagram, so some systematic trends and features may still be found in APOGEE's chemical abundances. Users should exercise caution when comparing the abundances of stars across the HR diagram. To minimize systematic trends, you may not want to compare samples that span a wide range of stellar parameters, or you may need to apply corrections. The only calibrations applied to the abundances are simplistic zero-point corrections, although we do adopt different zero-point corrections for giants (log g < 3.8) and dwarfs.
Particularly difficult parts of the HR diagram are cool stars (with temperatures below $\sim 3500$ K), due to frequent blending with strong molecular lines, and "hot" stars (with temperatures above $\sim 6000$ K), whose atomic and molecular lines are very weak. There are also several elements that have systematic temperature trends for dwarfs cooler than $\sim 4500$ K. ## What else should I watch out for? Users should be aware that the precision of abundance measurements decreases with decreasing signal-to-noise (S/N) ratios. APOGEE's target for precision chemical abundances is an S/N of 100; however, most elements can be well measured down to an S/N of 70. The precision as a function of S/N differs from element to element, depending on the strength of their lines. Be aware that outliers may not be astrophysical and should be checked to assess the quality of their abundances. Users interested in outliers should consult the ASPCAPFLAG bitmask and STARFLAG bitmask for those stars to see if they have any quality warnings, and may want to investigate the spectra of those stars to inspect the lines of their element of interest (see Using Spectra for details about investigating APOGEE spectra). See the Tutorials for a detailed walk-through of how to investigate a star that appears to be an outlier.
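As a minimal sketch of the advice above, here is how one might apply the recommended cuts with Astropy. The column names (FE_H, MG_FE, SNR, LOGG, ASPCAPFLAG) follow the tags named on this page, but the file name and the crude choice of rejecting any set ASPCAPFLAG bit are my assumptions; consult the bitmask documentation for the bits relevant to your science.

```python
# Sketch: select giants with populated [Fe/H] and [Mg/Fe] named tags.
# File name and flag handling are illustrative assumptions.
import numpy as np
from astropy.io import fits

with fits.open("allStar-dr17.fits") as hdul:  # hypothetical file name
    stars = hdul[1].data

mask = (
    np.isfinite(stars["FE_H"]) & np.isfinite(stars["MG_FE"])  # named tags populated
    & (stars["SNR"] > 70)          # S/N floor suggested in the text
    & (stars["LOGG"] < 3.8)        # giants, per the zero-point split
    & (stars["ASPCAPFLAG"] == 0)   # crude cut: no warning bits at all
)
print(mask.sum(), "stars pass the cuts")
```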
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.818045973777771, "perplexity": 3655.731195600676}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296944996.49/warc/CC-MAIN-20230323034459-20230323064459-00616.warc.gz"}
http://mymathforum.com/academic-guidance/345746-lemma-special-case-theorem.html
My Math Forum: A lemma which is a special case of a theorem (Academic Guidance) February 8th, 2019, 10:03 AM #1 Newbie Joined: Feb 2019 From: Israel Posts: 20 Thanks: 2 Math Focus: general topology A lemma which is a special case of a theorem It is often the case that a theorem is proved using a lemma which in turn is an easy consequence of the theorem (in other words, is a special case of the theorem). Which term could you suggest specifically for such a lemma? I am thinking about using my own coined word (like "specialia") to denote such lemmas in my book. Is it worth coining a new word? February 8th, 2019, 10:23 AM #2 Senior Member Joined: Aug 2012 Posts: 2,343 Thanks: 732 Quote: Originally Posted by porton It is often the case that a theorem is proved using a lemma which in turn is an easy consequence of the theorem (in other words, is a special case of the theorem). You're thinking of a corollary, an easy consequence of a theorem. A lemma is a minor theorem that helps you prove a major theorem. In written presentations, corollaries come after their corresponding theorem, and lemmas come before the theorem they're helping to prove. February 8th, 2019, 10:25 AM #3 Newbie Joined: Feb 2019 From: Israel Posts: 20 Thanks: 2 Math Focus: general topology Quote: Originally Posted by Maschke You're thinking of a corollary, an easy consequence of a theorem. A lemma is a minor theorem that helps you prove a major theorem. In written presentations, corollaries come after their corresponding theorem, and lemmas come before the theorem they're helping to prove. But I want such a corollary to be named a "theorem", not a "corollary". That emphasizes the importance of such a theorem, rather than calling it just a corollary. February 8th, 2019, 12:39 PM #4 Newbie Joined: Feb 2019 From: Israel Posts: 20 Thanks: 2 Math Focus: general topology Quote: Originally Posted by Maschke You're thinking of a corollary, an easy consequence of a theorem. A lemma is a minor theorem that helps you prove a major theorem. In written presentations, corollaries come after their corresponding theorem, and lemmas come before the theorem they're helping to prove. Also, a "corollary" is an easy consequence of the premise. In my case the theorem may not be an easy consequence of the lemma. What I am saying is that the reverse implication (from the theorem to the lemma, in the order reverse to the order in which they are proved in the text) is easy. February 8th, 2019, 02:32 PM #5 Senior Member Joined: Dec 2015 From: somewhere Posts: 551 Thanks: 83 Call them "tip" or "hint". February 8th, 2019, 02:33 PM #6 Senior Member Joined: Aug 2012 Posts: 2,343 Thanks: 732 Quote: Originally Posted by porton Also, a "corollary" is an easy consequence of the premise. In my case the theorem may not be an easy consequence of the lemma. What I am saying is that the reverse implication (from the theorem to the lemma, in the order reverse to the order in which they are proved in the text) is easy. That's right, a corollary is a relatively easy consequence of a theorem. But a theorem is generally not an easy consequence of a lemma. Rather, a lemma is kind of a building block. It's something you need to prove along the way to the main theorem; but it's self-contained enough that you can pull it out. Sometimes because you want to use it for something else; or because it simplifies the narrative flow of the main theorem.
It's sort of like the programming practice of pulling out a chunk of code into its own subroutine. You're right, a lemma need not make the main theorem easy. But really it's just terminology; these aren't meaningful distinctions, as the axiom of choice, Zorn's lemma, and the well-ordering theorem show. A lot of the naming is historical accident.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8382422924041748, "perplexity": 1427.190185960962}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195530385.82/warc/CC-MAIN-20190724041048-20190724063048-00394.warc.gz"}
http://math.stackexchange.com/questions/466113/the-spectrum-and-determinant-of-the-laplacian-on-s3
# The spectrum and determinant of the Laplacian on $S^3$ I came across the following statement in a paper: On $S^3$, the eigenvalues of the vector Laplacian on divergenceless vector fields are $(\ell + 1)^2$ with degeneracy $2\ell(\ell+2)$, with $\ell \in \mathbb{Z}$. Is it possible to prove the spectrum and degeneracy using the representation theory of $SO(4)$? Perhaps there is a general result for the n-sphere. The paper then proceeds to make the nonsensical statement (the RHS is divergent): $$\det \big(-\Delta + a\big) = \prod_{\ell=1}^\infty \big((\ell + 1)^2 + a \big)^{2\ell(\ell+2)}$$ How do we make sense of the determinant of the Laplacian on the space of divergenceless vector fields? - My question is how the spectra are calculated in the first place - using harmonic analysis - it was the first of several spectra in the paper. Then there is a separate question about regularizing the infinite product. Hardy published a book on divergent series. – john mangual Aug 13 '13 at 15:30 Yes, the repn theory of $SO(4)$ is useful to determine the spectrum. For example, the functions on the 3-sphere are functions that descend to $SO(4)/SO(3)$. From the regular repn of $L^2(SO(4))$, functions on $S^3$ decompose as the sum of $\pi^{SO(3)}\otimes \check{\pi}$ where $\pi$ runs over irreducibles. This gives multiplicities in terms of those dimensions. The eigenvalues are eigenvalues of Casimir. – paul garrett Aug 13 '13 at 16:13 @paulgarrett OK. Then I have to decompose (divergenceless) vector fields on $S^3$ - which I guess is not $L^2(S^3)\oplus L^2(S^3)$ - into eigenspaces. – john mangual Aug 13 '13 at 16:25
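Regarding the second question: the standard way to make sense of such a determinant is zeta-function regularization. A sketch, using the spectrum and degeneracies quoted above:

```latex
% Zeta-regularized determinant for the stated spectrum:
\zeta_{-\Delta + a}(s) = \sum_{\ell=1}^{\infty}
    \frac{2\ell(\ell+2)}{\big((\ell+1)^2 + a\big)^{s}},
\qquad
{\det}_{\zeta}\big(-\Delta + a\big)
    := \exp\!\big(-\zeta'_{-\Delta+a}(0)\big).
% The sum converges for Re(s) > 3/2 and continues meromorphically to s = 0,
% where it is regular, so the right-hand side is a finite number.
```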
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9189975261688232, "perplexity": 348.6141724564462}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645367702.92/warc/CC-MAIN-20150827031607-00197-ip-10-171-96-226.ec2.internal.warc.gz"}
https://brilliant.org/problems/it-would-be-easier-to-generalize-this-problem-2/
# It would be easier to generalize this problem - 2 Calculus Level 4 $\large \displaystyle \int_{0}^{\pi/2} \cos^9(x) \, dx$ If the integral above equals $\frac ab$ for coprime positive integers $a$ and $b$, find the value of $a-b$.
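One standard route (possibly not the intended one) is the Wallis formula for odd powers of cosine:

```latex
\int_{0}^{\pi/2} \cos^{9} x \, dx
  = \frac{8!!}{9!!}
  = \frac{8 \cdot 6 \cdot 4 \cdot 2}{9 \cdot 7 \cdot 5 \cdot 3 \cdot 1}
  = \frac{384}{945}
  = \frac{128}{315},
\qquad a - b = 128 - 315 = -187.
```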
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 5, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9583998322486877, "perplexity": 285.8581400054984}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526153.35/warc/CC-MAIN-20190719074137-20190719100137-00294.warc.gz"}
https://community.ptc.com/t5/PTC-Mathcad/SYMBOLIC-Differential-Equation-and-System-of-Differential/m-p/685994/highlight/true
[SOLVED] # SYMBOLIC Differential Equation and System of Differential Equation Hello, Is there a function that can SYMBOLICALLY solve a differential equation and a system of differential equations automatically in Mathcad? Or at least, how can I solve a differential equation or a system of differential equations SYMBOLICALLY (automatically) in Mathcad? But without manually applying the Laplace transform to each term, and without Odesolve (numeric/graphic); rather something automatic... if it exists... Thank you. ## Re: SYMBOLIC Differential Equation and System of Differential Equation Unfortunately Mathcad does not provide any means to solve an ODE symbolically. The best advice would have been to use Laplace, which could be automated in older Mathcad versions (MC11) but not in the current ones (neither in Mathcad 15 nor in Prime). For symbolic solutions you will have to resort to programs like Maple or Mathematica. ## Re: SYMBOLIC Differential Equation and System of Differential Equation So... the only solution is to apply the Laplace transform manually for solving differential equations and systems of differential equations symbolically in Mathcad? ## Re: SYMBOLIC Differential Equation and System of Differential Equation @AndrewClyde wrote: So... the only solution is to apply the Laplace transform manually for solving differential equations and systems of differential equations symbolically in Mathcad? Surely not the only solution, but Laplace seems to be the easiest way in many cases. But of course you could also solve the ODE(s) manually by integration and let Mathcad do the basic integration work. Anyway, I don't know of a way to fully automatically get the symbolic solution for an ODE or a system of ODEs with Mathcad. A long time ago I played around with the idea of writing functions to solve ODEs automatically in Mathcad using integration, but I only finished (in part) linear first-order ODEs and never took the time to extend the method at least to second-order ODEs (which I guess should be possible), let alone systems of ODEs. As far as I remember, it was particularly tricky to find a way to use arbitrary names for the independent variable so as not to be restricted to just x or t. In case you are interested and would like to work along, I attach the file. The comments are mostly in German, but I guess you'll get the idea of how to use the functions. BTW - as the functions require symbolic evaluation inside of a program, they will not work in Prime any more. ## Re: SYMBOLIC Differential Equation and System of Differential Equation I found in a book this way (a function) of implementing the Laplace transform for a differential equation, but I encountered some errors... and I don't know how to fix them at the moment.
Any ideas on how to resolve these issues are welcome. ## Re: SYMBOLIC Differential Equation and System of Differential Equation The math notation y^(i) for the i-th derivative is not understood by Mathcad. Mathcad (at least the version we use) does not understand a function "solve" or a function "laplace". In Mathcad 15, "solve" and "laplace" are keywords for symbolic evaluation, not stand-alone functions. What your book shows may work in Mathcad 11 (Luc could verify). You remember that I wrote in my first answer here that solving an ODE via Laplace could be automated in MC11 (which uses Maple for symbolics), but not in later versions like MC14 or MC15, which use a different symbolic engine (MuPAD). ## Re: SYMBOLIC Differential Equation and System of Differential Equation The very first error that your file shows is due to you. If you follow the notation of Professor Birkeland carefully, you'll see that he writes an equation (using =), not a definition (using :=). Such an equation can be, and often is, used to describe a mathematical concept in a notation that escapes Mathcad's error checking. The book was written for Mathcad 2000, a predecessor of Mathcad 15. With the change of symbolic processor going from Maple (until Mathcad 13) to MuPAD (as of Mathcad 14), many symbolic possibilities were broken, and a few were improved. I'll see if I can make Birkeland's functions work in Mathcad 11. Luc ## Re: SYMBOLIC Differential Equation and System of Differential Equation Given the right version of Mathcad (11), and some tweaking: But all this is unsupported. Luc ## Re: SYMBOLIC Differential Equation and System of Differential Equation Which means that you use Maple features not available in current Mathcad or Prime, and which were never supposed to work in Mathcad the way you use them 😉 ## Re: SYMBOLIC Differential Equation and System of Differential Equation Heureka! Persistence sometimes pays off! I guess the book you referred to is "Calculus and Algebra with Mathcad" by Byrge Birkeland. There was a version of the book for Mathcad 8 from 1999 and a later version for Mathcad 2000. Not sure if the MC8 version already included that chapter (4.3.2). Anyway, here is a modified version of the described method which also works in Mathcad 15 (it will not work in Prime). As in the original version, the independent variable must be called "t" - this is mandatory! ## Re: SYMBOLIC Differential Equation and System of Differential Equation Great! In Mathcad 11 the construct with k<=i fails. As soon as I put that in, the function L forgets about the initial conditions; they're all set to 0. Since its use is to limit k from going above i, this can also be accomplished by the upper limit of the inner summation. I also observed that the L function is not ORIGIN-aware. It appears that I can compact the DiffSolve function to a one-liner. The result is: Luc ## Re: SYMBOLIC Differential Equation and System of Differential Equation The one-liner is surely possible only in MC11, but not in MC15. The construct with *(k<=i) is necessary in MC15. I am not sure why, but the symbolics in MC15 seem not to like the "i" as the upper limit of the second sum. The error message when trying to symbolically evaluate L(..) is "assumption impossible (property::Null)" ???? Otherwise your implementation would surely be more elegant and preferable.
And yes, I agree that L should be written ORIGIN-aware 😉 ## Re: SYMBOLIC Differential Equation and System of Differential Equation BTW, here is the one-liner which works in MC15, but it's ugly and confusing-looking, and because of the various symbolic evaluations it "grows" and always shows the intermediate results of the last example it was used for: I would stay with the four-liner I posted before, using an ORIGIN-aware function L, of course. ## Re: SYMBOLIC Differential Equation and System of Differential Equation Hm Werner, did you check your L function with an ORIGIN other than 0? I think it should go wrong. The power of u, just before Ly, needs to start at 0, independently of the ORIGIN, so you should subtract ORIGIN from i there... Then, you say this does not work in Prime.... have you tried? (What limits Prime from doing this?) Luc ## Re: SYMBOLIC Differential Equation and System of Differential Equation You are absolutely correct - what I showed in the pic would not work OK with an ORIGIN other than 0. u has to be raised to the power of i-ORIGIN and not just to the power of i. No, I didn't give it a try in Prime because I know that PTC has limited the use of symbolic evaluation there. You probably know that solve blocks can't be evaluated symbolically anymore in Prime. What would make my sheet fail in Prime is the fact that Prime does not allow symbolic evaluations inside of a program, and my approach heavily relies on doing so (three times, because the first evaluation of L(..) must not necessarily be done). Symbolic evaluations inside of a program are a bit tricky in real Mathcad, too. We need to type the expression with the symbolic evaluation somewhere outside of the program and have to copy and paste that expression into the program we are about to write. Trying to add a symbolic eval directly when writing a program would result in a symbolic eval of the whole program, which would not do the desired job. ## Re: SYMBOLIC Differential Equation and System of Differential Equation I understand that the relationship for A: is deduced from the ODE: But how did you deduce/reach the relationship for C: ??? Case 1: Case 2: Why is the result different? ## Re: SYMBOLIC Differential Equation and System of Differential Equation C is the Laplace transform of the source term function f. I simply split Birkeland's second program line into two, because Mathcad 15 does not offer a laplace function, only symbolic evaluation using the laplace keyword. As shown in my last answer, it is possible to create a one-line function in Mathcad 15, too, but the multi-line version is much clearer, I guess. If you change the coefficient of the third derivative, you also have to change it in the "check-function" 😧 ## Re: SYMBOLIC Differential Equation and System of Differential Equation Ah, OK. Thank you very much for your response and time in resolving this question. ## Re: SYMBOLIC Differential Equation and System of Differential Equation @AndrewClyde wrote: Ah, OK. Thank you very much for your response and time in resolving this question. Thank you for pointing me to Prof. Birkeland's approach, which enabled me to create a version working for Mathcad 15.
## Re: SYMBOLIC Differential Equation and System of Differential Equation What I wanted to achieve is to put an ODE into a function and have that function give me the solution of that ODE as a result, without even making the transformation from the t domain to the s domain manually and then somehow taking the inverse Laplace of that result (if such a thing is possible :)) ## Re: SYMBOLIC Differential Equation and System of Differential Equation @AndrewClyde wrote: What I wanted to achieve is to put an ODE into a function and have that function give me the solution of that ODE as a result, without even making the transformation from the t domain to the s domain manually and then somehow taking the inverse Laplace of that result (if such a thing is possible :)) Obviously you did not study the file long enough, because it offers exactly what you are asking for! You put the ODE into the function DiffSolve (by providing a coefficient vector, a vector with the initial conditions and the forcing function), and when you evaluate DiffSolve symbolically you get the symbolic solution. No need to manually make any transformations, thanks to Prof. Birkeland. ## Re: SYMBOLIC Differential Equation and System of Differential Equation Yes, it is true... for about one day I looked in Professor Birkeland's book. If I have any more questions about ODEs, I will post them. Again, thank you. ## Re: SYMBOLIC Differential Equation and System of Differential Equation Is it possible to extend the function you described not just to a single differential equation but to a system of differential equations? ## Re: SYMBOLIC Differential Equation and System of Differential Equation @AndrewClyde wrote: Is it possible to extend the function you described not just to a single differential equation but to a system of differential equations? With some work and experimenting and putting some time into it, I guess it might be possible - not sure, though. ## Re: SYMBOLIC Differential Equation and System of Differential Equation Now the next challenge is to: 1. Lift the limitation that the independent variable must be 't'. 2. Allow the function to accept a more natural formulation of the differential equation. Anybody? ## Re: SYMBOLIC Differential Equation and System of Differential Equation #1 is tricky, but I guess it could be done. See the sheet about simple first-order ODEs I posted somewhere in this thread. #2 seems kind of a mission impossible to me, but I'd like to be taught better. ## Re: SYMBOLIC Differential Equation and System of Differential Equation You mean DGL10.xmcdz? That file is hardly readable for me. Seems like a lot of pictures are left out. Anyway, Number 1 is solved (for Mathcad 11), using a symbolic substitute function originally from Tom Gutman (in his file xx(1186) he named it Rep), which I adapted to be a little more versatile: With that, the symbol for the independent variable in f is changed to t_, and finally in the result t_ is changed back to the independent variable name supplied as an argument to DiffSolve: This allows me to: or or even: Luc ## Re: SYMBOLIC Differential Equation and System of Differential Equation Here is the DGL10 file in MC11 format ## Re: SYMBOLIC Differential Equation and System of Differential Equation DGL10 doesn't fully run in Mathcad 11. Hope there's no need for it to. I've tried my best to convert my LODEsolver to meet Mathcad 15's requirements.
So now it doesn't run in Mathcad 11, but... Does it run in Mathcad 15? If not: what is needed to make it run? Luc ## Re: SYMBOLIC Differential Equation and System of Differential Equation Unfortunately the sheet does not work in MC15. The first error after manual calculation of the sheet is in "subst", and when clicking that region, the symbolic evals of the transferred function are expanded, messing up the sheet. The error is "This function refers to itself inconsistently". A second error I spotted here
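For readers without Mathcad: the effect of DiffSolve, a symbolic solution of a linear ODE with initial conditions computed for you, is available in open CAS tools as well. A sketch in SymPy (the ODE below is my own illustrative example, not one taken from the attached worksheets):

```python
# Sketch: symbolic solution of a linear ODE with initial conditions,
# the job DiffSolve performs inside Mathcad. Example ODE is illustrative.
import sympy as sp

t = sp.symbols("t")
y = sp.Function("y")
ode = sp.Eq(y(t).diff(t, 2) + 3*y(t).diff(t) + 2*y(t), sp.sin(t))
sol = sp.dsolve(ode, y(t), ics={y(0): 0, y(t).diff(t).subs(t, 0): 1})
print(sol)  # closed-form y(t) satisfying the ODE and both conditions
```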
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8525329232215881, "perplexity": 1191.037427870912}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141195745.90/warc/CC-MAIN-20201128184858-20201128214858-00705.warc.gz"}
http://math.stackexchange.com/questions/78977/probability-question-optimal-strategy/80446
# Probability question: optimal strategy Two people seek to kill a duck at a location $Y$ meters from their origin. They walk from $x=0$ to $x=Y$ together. At any time, one of the two may pull out their gun and shoot at the duck; however, the probability that person A hits is $P_{A}(x)$ and the probability that person B hits is $P_{B}(x)$. It is also known that $P_A(0)=P_B(0)=0$ and $P_A(Y)=P_B(Y)=1$, and both functions are increasing. What is the optimal strategy for each player? - Do they know each other's probabilities? – mixedmath Nov 4 '11 at 16:12 @mixedmath, yes, I would assume so. – picakhu Nov 4 '11 at 16:14 Can they shoot more than once? – Graphth Nov 4 '11 at 16:20 @picakhu: I guess there should be a constraint on how far they have to be from the poor duck, or you should mention that each person wants to shoot it first (if this is so). – NoChance Nov 4 '11 at 16:36 As person A, I think my optimal strategy would be to shoot person B and then use her gun to shoot the duck from up close. – Greg Martin Nov 4 '11 at 19:20 I believe both should shoot at $P_A(x)+P_B(x)=1$. If either shoots earlier, the chance of winning is reduced. If either shoots later, the other could wait half as much later and have a better chance of winning. But what happens if they both hit or both miss? - @picakhu: if A delays by $\Delta t$, B can delay by $\Delta t/2$ and win $P_B(x^*+\Delta t/2)>P_B(x^*)$ – Ross Millikan Nov 4 '11 at 16:54 sorry, I just realized my error, your answer is correct - in my opinion. – picakhu Nov 4 '11 at 16:56 Ross, there is one issue -- if A does not fire at this time $x^*$, B doesn't know what A's $\Delta t$ is, so he can't fire at $x^* + \Delta t/2$. This is why I think both adopt a mixed strategy. – Craig Nov 4 '11 at 17:41 @Craig: I think you should fire at any moment past $x^*$ if both haven't already. The argument is symmetric. Whoever gets to fire past $x^*$ gets more than his share. So I don't think either should mix strategies. – Ross Millikan Nov 4 '11 at 17:49 @RossMillikan: That's an unstable equilibrium, as each is indifferent between firing at time $x^*$ and not firing, knowing that the other one will. – Craig Nov 4 '11 at 18:38 Suppose player $A$ takes a shot at distance $x$, before player $B$. He collects the prize with probability $P_1 = p_A(x)$, while player $B$ collects the prize with probability $P_2 = 1-p_A(x)$. If player $B$ shoots first, then $A$ wins with probability $Q_1 = 1-p_B(x)$ and $B$ wins with $Q_2 = p_B(x)$. The optimal strategy for $A$ is to shoot at the point minimizing $B$'s win, i.e. $x_A = \operatorname{argmin}_x \max(p_B(x), 1-p_A(x))$, while the optimal strategy of $B$ is to shoot at $x_B = \operatorname{argmin}_x \max(p_A(x), 1-p_B(x))$. Here is a visualization, assuming the duck is located at $Y=1$, and $p_A(x)$ and $p_B(x)$ are beta distribution cumulative distribution functions: - I think your method is correct, but Ross's is slightly easier. – picakhu Nov 4 '11 at 16:57 @picakhu Actually, my answer is equivalent to Ross's. Given that $p_A(x)$ and $p_B(x)$ are monotonic, it is easy to demonstrate that $x_A=x_B$, and at that point $p_A(x_A) + p_B(x_A) = p_A(x_B) + p_B(x_B) = 1$. – Sasha Nov 4 '11 at 17:01 Suppose the probability functions $P_A, P_B$ are continuous, and that "increasing" means "non-decreasing". Then there is a unique maximal closed interval in which $P_A(x) + P_B(x) = 1$. Each player's strategy is identical: shoot at any time in this interval. (Ross Millikan simul-posted this answer.)
It gets a bit more complicated if $P_A$ or $P_B$ is not continuous. This is a realistic scenario $-$ for instance, the brow of a hill might obscure the duck up to a certain point (which might be different for each player). Then there might be a point $x$ before which $P_A + P_B < 1 - a$, and after which $P_A + P_B > 1 + b$, for some strictly positive $a,b$. There are two cases: 1. $P_A$ is continuous at $x$, and $P_B$ is not. Then $A$ must shoot before $x$, but as shortly before $x$ as possible. Likewise if $A$ and $B$ are swapped. 2. Neither $P_A$ nor $P_B$ is continuous at $x$. Then neither player wants to shoot before $x$, and neither player wants to allow the other to shoot after $x$. The situation becomes tense, and mathematics has little to say; in fact, the game should perhaps be called "chicken" in this case, rather than "duck". - What happens to the question if there are multiple shots allowed for each player? And what if the number of shots is not equal, so player A may shoot more times than player B? – picakhu Nov 4 '11 at 17:05 Frankly, picakhu, I don't give a damn. – TonyK Nov 4 '11 at 22:06 I think that we should be explicit about the payoffs in the case they both hit simultaneously, and about the concept of optimality. I suppose that we search for the Nash equilibrium. Let's consider 3 versions: Fair-hunters version. The first who hits the duck gets +1. If both hit simultaneously they get +1/2 each. No Nash equilibrium in pure strategies. A simultaneous shot cannot be an equilibrium because both players want to deviate and shoot $\varepsilon$ earlier. Hunters-enemies. The first who hits the duck gets +1. If both hit simultaneously they get 0 because of a quarrel. No Nash equilibrium, as in the fair-hunters version. Brothers-hunters. The first who hits the duck gets +1. If both hit simultaneously they get +1 each (they are very surprised and happy). In this case any pair of strategies $(x,x)$ such that $P_A(x)+P_B(x)\geq 1$ is a Nash equilibrium. -
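Sasha's visualization suggests a quick numerical check. Assuming, as there, that $p_A$ and $p_B$ are beta CDFs on $[0,1]$ (the parameters below are my choice), the common firing point solves $p_A(x)+p_B(x)=1$:

```python
# Sketch: locate the equilibrium firing point p_A(x) + p_B(x) = 1,
# with beta-CDF hit probabilities as in Sasha's plot (parameters mine).
from scipy.stats import beta
from scipy.optimize import brentq

p_A = lambda x: beta.cdf(x, 2, 5)   # A's accuracy rises early
p_B = lambda x: beta.cdf(x, 5, 2)   # B's accuracy rises late

# The sum is 0 at x=0 and 2 at x=1, so a root of (sum - 1) is bracketed.
x_star = brentq(lambda x: p_A(x) + p_B(x) - 1.0, 0.0, 1.0)
print(x_star, p_A(x_star), p_B(x_star))  # the hit probabilities sum to 1 here
```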
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8441340923309326, "perplexity": 621.5212108783194}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824217.36/warc/CC-MAIN-20160723071024-00009-ip-10-185-27-174.ec2.internal.warc.gz"}
http://eprints.ucm.es/15779/
## Minimal genus of Klein surfaces admitting an automorphism of a given order Bujalance, E. and Etayo Gordejuela, J. Javier and Gamboa Mutuberria, José Manuel and Martens, Gerriet (1989) Minimal genus of Klein surfaces admitting an automorphism of a given order. Archiv der Mathematik, 52 (2). pp. 191-202. ISSN 0003-889X Let K be a compact Klein surface of algebraic genus $g \ge 2$ which is not a classical Riemann surface. The authors show that if K admits an automorphism of order $N > 2$, then it must have algebraic genus at least $(p_1-1)N/p_1$ if $N$ is prime or if its smallest prime factor, $p_1$, occurs with exponent 1 in $N$. Otherwise the genus is at least $(p_1-1)(N/p_1-1)$. This result extends to bordered Klein surfaces a result of E. Bujalance [Pac. J. Math. 109, 279-289 (1983)] and is the analog for Klein surfaces of a result of W. J. Harvey [Q. J. Math., Oxf. II. Ser. 17, 86-97 (1966)] and, ultimately, of A. Wiman [Kongl. Svenska Vetenskaps-Akad. Handl., Stockholm 21, No. 1 and No. 3 (1895)].
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8830270767211914, "perplexity": 1610.3953826215652}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500829393.78/warc/CC-MAIN-20140820021349-00006-ip-10-180-136-8.ec2.internal.warc.gz"}
http://tex.stackexchange.com/questions/73271/how-to-redefine-or-patch-the-newcommand-command/73279
# How to redefine or patch the '\newcommand' command?

## The Problem

There are several packages I would like to write which require me to redefine the \newcommand (\renewcommand, etc.) command so I can track or change the commands that authors subsequently define. But I have no idea how to go about it. My naive attempts so far have failed, probably because I am still very weak at the TeX level. I'm surprised that I couldn't find more than vague hints towards this concept. It feels like such an obvious thing to do.

## Motivation

A simple use case, and the first package I plan to write, is to ensure the following: that using \newcommand to define a command - say, \cmd - that has already been defined does not immediately generate an error. \cmd would simply be in a state of conflict. Subsequently trying to expand \cmd would then generate an error (since it's ambiguous which of the two definitions you want). The conflict could be resolved by subsequently redefining \cmd using \renewcommand, after which \cmd can once again safely be expanded. Example:

    \newcommand{\cmd}{FIRST}   %
    \cmd                       % outputs FIRST
    \newcommand{\cmd}{SECOND}  % no problem yet
    \cmd                       % error: expanding ambiguous command
    \renewcommand{\cmd}{THIRD} %
    \cmd                       % outputs THIRD

This could, for example, be used to mediate conflicts between packages (that use \newcommand). Of course, this concept is still quite weak (for example, what to do if two packages independently use \renewcommand on the same command?). But it is enough to serve as a use case for my question.

## Pseudo Code Solution

It feels like I have to do something like this (ignoring the optional argument for now):

    \let\old@newcommand\newcommand
    \MetaRenewCommand{\newcommand}{
      \ifdefined#1
        \old@newcommand{#1}{
          \PackageError{lazyfail}{Expanding ambiguous command \protect #1}
        }
      \else
        \old@newcommand{#1}{#3}
      \fi
    }

Of course, there are many things wrong with this code. There is no \MetaRenewCommand and I still have to handle the optional argument of \newcommand. So, how do I start? It feels like it must be possible, as \newcommand and friends are not primitives of LaTeX, but defined in terms of lower-level commands.

## Further Motivation

Here's another use case I have in mind for this. When including a package, I would like to ignore all commands it provides except for a small list which I specify:

    \usepackagefor[\Lightning]{marvosym}

Here I am loading the marvosym package, but only to use \Lightning. I choose this example because the marvosym \CheckedBox command conflicts with the one in the llncs class. I already routinely specify the commands I plan to use a package for with a comment, but it would be nice to actually enforce that.

- \providecommand provides your "no problem yet" requirement... but not generating an error upon subsequent usage. – Werner Sep 19 '12 at 21:35
- It might be useful to (perhaps separately) ask about the wider thing you're trying to do. You mention 'many packages', so it would be handy to see if there is some better way to achieve the bigger goal than fiddling around with LaTeX kernel commands. – Joseph Wright Sep 19 '12 at 21:38
- Werner: Assume that I am the package writer, and have no control over the code in the 'Motivation' section. – mhelvens Sep 19 '12 at 21:39
- Joseph: Well, the 'Motivation' section already describes a standalone use case, I think. I want to expand it, but that's only going to require more control over how the author's commands are defined. Also, this is a good opportunity for me to learn more about the guts of TeX. Would you humour me?
  :-) – mhelvens Sep 19 '12 at 21:42
- @mhelvens My point is that your use case should give an error: that's the point of \newcommand (if you just want to force the issue, you use \def in a package). I'm not sure I see what you are really up to: if two packages define \foo, then trouble will ensue, hence the existence of \newcommand to warn about this and allow some action to be taken. (That said, at a technical level this is doable, certainly as an exercise.) – Joseph Wright Sep 19 '12 at 21:45

Tackling the problem as posed is tricky due to the way \newcommand works. Heiko's approach is probably more elegant, but one possible method is to use xparse to deal with the syntax of \newcommand, and letltxmacro to deal with the way \newcommand is set up:

    \documentclass{article}
    \usepackage{letltxmacro,xparse}
    \makeatletter
    \LetLtxMacro{\saved@newcommand}{\newcommand}
    \DeclareDocumentCommand{\newcommand}{s+mo+o+m}{%
      \begingroup
        \edef\x{%
          \endgroup
          \ifdefined#2%
            \unexpanded{%
              \renewcommand{#2}%
                {%
                  \PackageError{lazyfail}
                    {Expanding ambiguous command \protect #2}%
                  \@ehc
                }%
            }
          \else
            \noexpand\saved@newcommand
            \IfBooleanT{#1}{*}%
            {\noexpand#2}%
            \IfNoValueF{#3}{[#3]}%
            \IfNoValueF{#4}{[\unexpanded{#4}]}%
            {\unexpanded{#5}}
          \fi
        }%
      \x
    }
    \makeatother
    \begin{document}
    \newcommand{\cmd}{FIRST} %
    \cmd % outputs FIRST
    \newcommand{\cmd}{SECOND} % no problem yet
    \cmd % error: expanding ambiguous command
    \renewcommand{\cmd}{THIRD} %
    \cmd % outputs THIRD
    \end{document}

The approach here is to grab all of the arguments to the redefined \newcommand in one go, then work out whether they are to be 'recycled' or not. Of course, you could set all of this up without xparse, but it would be a pain in the neck: lots of \@ifstar and \@ifnextchar (or slightly better \@testopt) stuff and several auxiliaries.

- Nice! I can almost fully understand this one. And thanks for introducing me to xparse! Why doesn't everyone use it all the time? :-) – mhelvens Sep 20 '12 at 9:52
- @mhelvens xparse is quite new (at least in the usable form it's in now). – Joseph Wright Sep 20 '12 at 10:19
- I have one question about your code: Why do you need \begingroup and \endgroup there? Thanks! – mhelvens Sep 20 '12 at 10:22
- @mhelvens The group here means that \x is not affected outside of the use here to expand material. This is a standard 'trick' to avoid breaking anyone else's code. – Joseph Wright Sep 20 '12 at 10:34
- Ah, I see. That's even better than using @, which I thought was the standard approach for that. Thanks! – mhelvens Sep 20 '12 at 11:16

Instead of \newcommand I would redefine \@ifdefinable. Then some other things like \newcounter or \newsavebox are caught as well:

    \documentclass{article}
    \makeatletter
    % Save old meaning of \@ifdefinable in \saved@ifdefinable
    \newcommand*{\saved@ifdefinable}{}
    \let\saved@ifdefinable\@ifdefinable
    % Redefine \@ifdefinable
    % #1: command token
    % #2: code that defines the command in #1
    \renewcommand{\@ifdefinable}[2]{%
      % Here the same test for checking #1 is used as in the
      % original definition.
      \edef\reserved@a{\expandafter\@gobble\string#1}%
      % \reserved@a contains the name without backslash
      \@ifundefined\reserved@a{%
        \saved@ifdefinable{#1}{#2}%
      }{%
        % Report the command with the name clash in the .log file
        \@latex@info{Ambiguous command: \string#1}%
        % Redefine the command to generate an error message.
        % \@ehd is the standard help text that starts with
        % "You're in trouble here."
        \def#1{\@latex@error{Expanding ambiguous command}\@ehd}%
      }%
    }
    \makeatother
    \begin{document}
    \newcommand*{\cmd}[1]{FIRST(#1)}
    \cmd{argument}
    \newcommand*{\cmd}[2]{SECOND(#1,#2)}
    \cmd{param1}{param2}
    \renewcommand*{\cmd}{THIRD}
    \cmd
    \end{document}

"Border cases":

• LaTeX complains if someone tries to define a command starting with end (such as \end...) or in the case of \relax. These cases are not covered by the above redefinition.
• Treatment of arguments is ambiguous. In the case of the ambiguous-command error, the arguments remain untouched in the input.
• Commands are only detected if they are defined via the LaTeX interface (\newcommand, \newcounter, \newsavebox, …). Definitions can also be made with TeX's primitive commands \def, \edef, \gdef, … or plain TeX commands (\newcount et al.). These commands do not differentiate between new and old commands.

Of course, \newcommand could be redefined to drop some definitions and keep \Lightning as the only command of package marvosym, but:

• Other macros, defined without \newcommand, might be available with undesired effects.
• The kept macro might rely on other macros that are defined by \newcommand and not kept.

- @mhelvens Further reading for plain TeX: TeX by Topic. LaTeX internals: source2e.pdf. – Heiko Oberdiek Sep 19 '12 at 22:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9294956922531128, "perplexity": 2350.606483691515}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644060103.8/warc/CC-MAIN-20150827025420-00106-ip-10-171-96-226.ec2.internal.warc.gz"}
https://mathspace.co/textbooks/syllabuses/Syllabus-1082/topics/Topic-21063/subtopics/Subtopic-273157/
iGCSE (2021 Edition)

# 4.07 Absolute value functions

Lesson

The modulus or absolute value of a number can be thought of as the distance a number is from $0$ on a number line. For example, the absolute value of both $-3$ and $3$ is $3$.

Number line

### Notation

Absolute value is represented mathematically by two vertical lines on either side of a value. For example, $\left|3\right|$ means "the modulus or absolute value of $3$" and equals $3$; $\left|-9\right|$ means "the modulus or absolute value of $-9$" and equals $9$; and $\left|-x\right|$ means "the modulus or absolute value of $-x$" and equals $\left|x\right|$.

Except for $0$, the absolute value of any real number is the positive value of that number. The absolute value of an expression is the positive value of the number returned by the expression.

### The graph of $f(x)=|x|$

Let's consider the basic modulus function $f(x)=|x|$. This looks similar to the linear function $f(x)=x$ but with absolute value signs. We know that $f(x)=x$ represents a straight line through the origin on the coordinate plane. Introducing the absolute value means that negative function values become positive, as shown in the table and on the diagram below. Notice that this occurs at the $x$-intercept of the graph of $f(x)=x$.

Table of values:

| $x$ | $-2$ | $-1$ | $0$ | $1$ | $2$ |
| --- | --- | --- | --- | --- | --- |
| $f(x)=\left|x\right|$ | $2$ | $1$ | $0$ | $1$ | $2$ |

Graph: $f(x)=\left|x\right|$

### The graph of functions of the form $f(x)=|ax+b|$

Let's consider a more complicated modulus function like $y=|5x-15|$. The $x$-intercept of the function $y=5x-15$ is the critical value below which it is negative. This will be the vertex of the absolute value function. To find the $x$-intercept we set $y=0$:

$5x-15=0$
$5x=15$
$x=3$

Hence, as this graph has a positive gradient, function values will be negative for $x<3$. For $x>3$, the graph of $y=|5x-15|$ will be identical to the graph of $y=5x-15$. For the other values, the negative $y$ values turn positive.

We can think of modulus functions as being defined by two functions: one for values of $x$ that would make the original function positive and another for values of $x$ that make the original function negative. So $y=|5x-15|$ can be redefined as $y=5x-15$ for $x\ge 3$ and $y=-(5x-15)$ for $x<3$.

In the diagram below, the lines corresponding to the two parts of the function definition are shown lightly, with the 'composite' absolute value function drawn more heavily.

### Piecewise definition of $f(x)=|ax+b|$

Other functions of the form $|ax+b|$ have a shape similar to the one illustrated above. The 'V' shape is typical, with the vertex located on the horizontal axis at the point $x$ that makes $ax+b=0$.

When the coefficient $a$ is large, the 'V' shape is narrow. This is because $a$ controls the gradients: one side has a gradient of $a$ while the other has a gradient of $-a$. Because of this, these absolute value functions will always be symmetrical. If a negative sign is placed in front of the absolute value symbol, the effect is to invert the 'V' shape of the graph: all the function values are negative when $y=-|ax+b|$. As you can see, our understanding of the features of straight lines is very helpful in sketching modulus functions.

Piecewise definition of $f(x)=|ax+b|$: a linear modulus function is really a composite of two functions.
It is defined by a rule like the following, where the coefficient $a$ can be assumed to be a positive real number: $f(x)=ax+b$ for $x\ge -\frac{b}{a}$, and $f(x)=-(ax+b)$ for $x<-\frac{b}{a}$. Particular functions are obtained by substituting appropriate values for the coefficients $a$ and $b$.

#### Practice questions

##### Question 1

Consider the function $f\left(x\right)=\left|x\right|$ that has been graphed. Notice that it opens upwards.

1. What is the gradient of the function for $x>0$?
2. What is the gradient of the function for $x<0$?
3. The graph below shows the graph of $y$ that results from reflecting $f\left(x\right)$ about the $x$-axis. State the equation of $y$.
4. Select all the correct statements.
   A. A downward absolute value function goes from decreasing to increasing.
   B. An upward absolute value function goes from decreasing to increasing.
   C. An upward absolute value function goes from increasing to decreasing.
   D. A downward absolute value function goes from increasing to decreasing.

##### Question 2

Consider the graph of function $f\left(x\right)$.

1. State the coordinates of the vertex.
2. State the equation of the line of symmetry.
3. What is the gradient of the function for $x>5$?
4. What is the gradient of the function for $x<5$?
5. Hence, which of the following statements is true?
   A. The graph of $f\left(x\right)$ is steeper than the graph of $y=\left|x\right|$.
   B. The graph of $f\left(x\right)$ is not as steep as the graph of $y=\left|x\right|$.
   C. The graph of $f\left(x\right)$ has the same steepness as the graph of $y=\left|x\right|$.

##### Question 3

Consider the function $y=\left|x+2\right|$.

1. Determine the coordinates of the $y$-intercept.
2. State the coordinates of the vertex.
3. Draw the graph of the function.

## Graphing absolute value quadratic functions

We can graph absolute value functions of the form $y=|ax^2+bx+c|$ by first graphing the quadratic function $y=ax^2+bx+c$ and then reflecting any parts of the parabola that are below the $x$-axis in the $x$-axis, so that the whole function lies above the $x$-axis. If the entire parabola is already above the $x$-axis, then nothing needs to be reflected.

#### Worked example

##### Example 1

Graph the function $y=|x^2-4x-12|$.

Think: First we should graph the function without the absolute value signs.

Do: To graph the function, we should find the $x$-intercepts (when $y=0$):

$x^2-4x-12=0$
$(x-6)(x+2)=0$
$x=-2,6$

We should also find the coordinates of the vertex:

$x=-\frac{b}{2a}=\frac{4}{2}=2$
$y=2^2-4\times 2-12=4-8-12=-16$

Therefore the coordinates of the vertex are $(2,-16)$.
So now we are ready to graph the parabola without the absolute value signs. We then reflect the part of the parabola that is below the $x$-axis in the $x$-axis, so that the absolute value function lies above the $x$-axis. Note that the vertex is reflected to the point $(2,16)$, and the $y$-intercept is reflected to $y=12$ from $y=-12$. This gives the final graph of the function $y=|x^2-4x-12|$ (see the sketch after the outcomes list for a numerical check).

### Outcomes

#### 0606C1.3

Understand the relationship between y = f(x) and y = |f(x)|, where f(x) may be linear, quadratic or trigonometric.

#### 0606C2.3

Know the conditions for f(x) = 0 to have two real roots, two equal roots, no real roots. Know the related conditions for a given line to intersect a given curve, be a tangent to a given curve, not intersect a given curve.

#### 0606C2.4B

Find the solution set for quadratic inequalities.
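As a quick sanity check of the worked example (a sketch of mine, not part of the original lesson), the following evaluates $y=|x^2-4x-12|$ at a few points and confirms the reflected vertex and intercepts:

```python
# Sketch: verify key features of y = |x^2 - 4x - 12| from the worked
# example: x-intercepts at -2 and 6, reflected vertex at (2, 16),
# reflected y-intercept at 12.

def f(x):
    return abs(x**2 - 4*x - 12)

assert f(-2) == 0 and f(6) == 0   # x-intercepts are unchanged
assert f(2) == 16                 # vertex (2, -16) reflects to (2, 16)
assert f(0) == 12                 # y-intercept -12 reflects to 12

for x in range(-3, 8):
    print(x, f(x))
```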
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8392926454544067, "perplexity": 1067.8503595008704}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304961.89/warc/CC-MAIN-20220126192506-20220126222506-00476.warc.gz"}
https://quantumcomputing.stackexchange.com/questions/6134/proving-the-inequality-mathrmtrau-le-mathrmtra-in-uhlmanns-theore?noredirect=1
# Proving the inequality $|\mathrm{tr}(AU)|\le \mathrm{tr}|A|$ in Uhlmann's theorem

In Nielsen and Chuang, in the Fidelity section (Lemma 9.5, page 410 in the 2002 edition), they prove the following:

$$|\mathrm{tr}(AU)| = |\mathrm{tr}(|A|VU)| = |\mathrm{tr}(|A|^{1/2}|A|^{1/2}VU)|$$
$$|\mathrm{tr}(AU)| \leq \sqrt{\mathrm{tr}|A| \, \mathrm{tr}(U^{\dagger}V^{\dagger}|A|VU)} = \mathrm{tr}|A|$$

First, is $|A|$ the positive matrix in the polar decomposition of $A$? And second, apparently, the second equation comes from the first due to the Cauchy–Schwarz inequality for the Hilbert–Schmidt inner product. How does the first expression lead to the second?

The idea is to use the CS inequality in the form $\newcommand{\tr}{\operatorname{Tr}}\lvert \sum_{ij}A_{ij}^* B_{ij}\rvert\le\sqrt{\sum_{ij} \left\lvert A_{ij}\right\rvert^2}\sqrt{\sum_{ij}\left\lvert B_{ij}\right\rvert^2}$, which in matrix formalism reads $\lvert\tr(A^\dagger B)\rvert\le\sqrt{\tr(A^\dagger A)}\sqrt{\tr(B^\dagger B)}$. Therefore,
$$\lvert\tr(AU)\rvert=\lvert\tr(\lvert A\rvert VU)\rvert =\lvert\tr(\lvert A\rvert^{1/2} \underbrace{\lvert A\rvert^{1/2} VU}_{D})\rvert \le\sqrt{\tr(\lvert A\rvert^{1/2}\lvert A\rvert^{1/2})}\sqrt{\tr(D^\dagger D)},$$
and putting in the explicit expression for $D$ you get the result. (And yes: $|A|$ is the positive matrix in the polar decomposition $A=|A|V$, with $V$ unitary.)
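A quick numerical sanity check of the lemma (my own sketch using numpy/scipy, not from Nielsen and Chuang):

```python
# Sketch: verify |tr(AU)| <= tr|A| on random matrices. |A| is obtained
# from the polar decomposition, and U is a random unitary.
import numpy as np
from scipy.linalg import polar

rng = np.random.default_rng(0)
n = 4
for _ in range(100):
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    # Random unitary from the QR decomposition of a Gaussian matrix.
    Q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    # polar(A) returns (W, P) with A = W P, W unitary, P positive;
    # tr(P) is the sum of the singular values of A, i.e. tr|A|.
    _, P = polar(A)
    assert abs(np.trace(A @ Q)) <= np.trace(P).real + 1e-9
print("inequality verified on 100 random draws")
```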
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 8, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9505175352096558, "perplexity": 321.1716223937785}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347413551.52/warc/CC-MAIN-20200531151414-20200531181414-00276.warc.gz"}
http://math.stackexchange.com/questions/40542/how-to-solve-a-polar-equation-when-r-is-r2-instead/40545
# How to solve a polar equation when $r$ is $r^2$ instead?

I have $r^2=-4\sin\theta$ and I'm asked to set $r=0$, then find $\theta$. If I just set $r^2=0$ then I'll get $\sin(2\theta)=0$. That doesn't seem right. Then I'm asked to set $\theta=0$ and then find $r$. If I use $r^2=-4\sin\theta$ and set $\theta=0$ then I will get "DNE". Not sure what to do instead then...

EDIT: Sorry everyone, I wrote the problem here wrong. It was supposed to be $r^2=-4\sin 2\theta$. That's where the $2\theta$ came from.

- Are you doing this all on a calculator? What do you know about the value of $\sin(0)$? What do you know about $\sin(2\pi)$? No calculator needed here. – amWhy May 21 '11 at 21:34
- Where'd $\sin(2\theta)$ come from? – Hans Parshall May 21 '11 at 21:37
- It is not clear what the question is. But looking at $r^2=-4\sin\theta$, the first thing I would notice is that since $r^2 \ge 0$, the curve makes sense only when $\sin\theta \le 0$, so, in the interval from $0$ to $2\pi$, only when $\pi \le \theta \le 2\pi$. – André Nicolas May 21 '11 at 21:44
- You've got three questions going on the same problem! That's not a good use of the available resources. – Gerry Myerson May 22 '11 at 6:17

## 2 Answers

For any real number, $r=0$ if and only if $r^2=0$, so "set[ting] $r=0$" is the same as setting $r^2$ to zero. Equivalently: if $r=0$, then $r^2=0$, so of course you get that $r^2=0$. However, I don't understand why you think you get $\sin(2\theta)=0$. If $r^2=0$, then $-4\sin(\theta)=0$. That means that $\sin(\theta)=0$; where did that $2$ come from?

If you set $\theta=0$ instead, then $\sin(\theta) = \sin(0)$. How much is $\sin(0)$? How much is that when multiplied by $-4$? And what is the (only) value of $r$ that will make $r^2 = -4\sin(0)$ true?

Again, I don't understand why you think you will get "Does not exist" if you plug in $\theta=0$. This is simply not the case. (Though, if you had $r^2 = -4\cos(\theta)$, and tried to find a real value of $r$ for the case $\theta=0$, then you would be unable to find one; are you sure you are computing $\sin(0)$ correctly?)

1) It is right. 2) You do not get DNE, you get 0... (Sqrt[-4*0] = 0). It comes out as a very nice figure-8 loop. You seem to have understood the question, but are lacking some basic algebra/trig that the problem was not designed to test.

- I think the problem was I didn't pay attention when I was doing this problem and confused myself. Thanks for the help. – Ryan May 22 '11 at 0:19
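For readers who want to see the figure-8 loop the second answer mentions, here is a small plotting sketch (my own addition); it keeps only the angles where $-4\sin 2\theta \ge 0$, so that $r$ is real:

```python
# Sketch: plot r^2 = -4 sin(2*theta), taking points only where the
# right-hand side is non-negative (r real). The result is a figure-8.
import numpy as np
import matplotlib.pyplot as plt

theta = np.linspace(0, 2 * np.pi, 2000)
r_sq = -4 * np.sin(2 * theta)
mask = r_sq >= 0
r = np.sqrt(r_sq[mask])
t = theta[mask]

# Plot both branches, r and -r, for each admissible theta.
for sign in (1, -1):
    plt.plot(sign * r * np.cos(t), sign * r * np.sin(t), "b.")
plt.gca().set_aspect("equal")
plt.title(r"$r^2 = -4\sin 2\theta$")
plt.show()
```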
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8740156292915344, "perplexity": 217.2822350401946}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701154682.35/warc/CC-MAIN-20160205193914-00238-ip-10-236-182-209.ec2.internal.warc.gz"}
https://dsp.stackexchange.com/questions/25216/how-do-i-compute-the-gradient-vector-of-pixels-in-an-image
How do I compute the gradient vector of pixels in an image?

I'm trying to find the curvature of the features in an image and I was advised to calculate the gradient vector of pixels. So if the matrix below holds the values from a grayscale image, how would I go about calculating the gradient vector for the pixel with the value '99'?

    21 20 22 24 18 11 23
    21 20 22 24 18 11 23
    21 20 22 24 18 11 23
    21 20 22 99 18 11 23
    21 20 22 24 18 11 23
    21 20 22 24 18 11 23
    21 20 22 24 18 11 23

Apologies for asking such an open-ended question; I've never done much maths and am not sure how to start tackling this.

Let X be your matrix. To find the curvature of features I advise you to look into the eigenvalues and eigenvectors of a Hessian matrix. A Hessian matrix is a square matrix of second-order derivatives of a scalar-valued function. In this case the scalar field is the image intensity, and the second-order derivatives are $g_{xx}$, $g_{xy}$, $g_{yy}$.
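As a concrete illustration (a sketch of mine, assuming unit pixel spacing), central finite differences give both the gradient and the Hessian entries mentioned in the answer:

```python
# Sketch: estimate the gradient and Hessian entries at the '99' pixel
# with central finite differences.
import numpy as np

X = np.array([[21, 20, 22, 24, 18, 11, 23]] * 7, dtype=float)
X[3, 3] = 99.0
i, j = 3, 3  # row, column of the '99' pixel

# First derivatives (gradient): central differences.
gx = (X[i, j + 1] - X[i, j - 1]) / 2.0   # along columns (x)
gy = (X[i + 1, j] - X[i - 1, j]) / 2.0   # along rows (y)

# Second derivatives (Hessian entries).
gxx = X[i, j + 1] - 2 * X[i, j] + X[i, j - 1]
gyy = X[i + 1, j] - 2 * X[i, j] + X[i - 1, j]
gxy = (X[i + 1, j + 1] - X[i + 1, j - 1]
       - X[i - 1, j + 1] + X[i - 1, j - 1]) / 4.0

H = np.array([[gxx, gxy], [gxy, gyy]])
eigvals, eigvecs = np.linalg.eigh(H)  # principal curvature directions
print("gradient:", (gx, gy))
print("Hessian eigenvalues:", eigvals)
```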
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8197572827339172, "perplexity": 51.03001832446201}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256184.17/warc/CC-MAIN-20190521002106-20190521024106-00540.warc.gz"}
https://www.stereophile.com/reference/1095cable/
# Reference

## The Essex Echo 1995: Electrical Signal Propagation & Cable Theory

Editor's Note: The matter of whether—and if so, how—speaker cables and interconnects can affect the sound of an audio system has vexed the audiophile community since Jean Hiraga, Robert Fulton, and others first made us aware of the subject in the mid-1970s. Most of the arguments since then have involved a great deal of heat but not much light. Back in August 1985, Professor Malcolm Omar Hawksford Ph.D (of the UK's University of Essex and a Fellow of the Audio Engineering Society) wrote an article for the British magazine Hi-Fi News & Record Review, of which I was then Editor, in which he examined AC signal transmission from first principles. Among his conclusions was the indication that there is an optimal conductor diameter for audio-signal transmission, something that I imagined might lead to something of a conciliation between the two sides in the debate. Or at least when a skeptic proclaimed that "The Laws of Physics" don't allow for cables to affect audio performance, it could be gently pointed out to him or her that "The Laws of Physics" predict exactly the opposite.

Well, I was wrong. Ten years later, as described in my recent "Wired!!!" essay (June 1995), the "cables make a difference/no they don't" flame war continues unabated (though many of the more sonically successful "audiophile" cables tend to use conductors of the predicted optimal size). I asked Malcolm Omar, therefore, to revise and rewrite his 1985 article for Stereophile. The essential math may look intimidating, but it's not as hard to grasp as it looks (you don't lose points for skipping it). The conclusions are both fascinating and essential reading for anyone who wishes to design audio cables. – John Atkinson

Audiophiles are excited. A special event has occurred that promises to undermine their very foundation and transcend "the event sociological"—a minority group now cites conductor and interconnect performance as a limiting factor within an audio system. The skeptical masses, however, remain content to congregate with their like-minded friends and make jokes in public about the vision of the converted. They are content to watch their distortion-factor meters confidently null at the termination of any old piece of wire (even rusty nails, it seems). Believing in Ohm's Law, they feel strong in their brotherhood—at least that's how it seemed back in 1985. But the revolution moves forward...

This article examines propagation in cables—especially within conductive material—from the fundamental principles of electromagnetic theory. The aim is to consider mechanisms that form a more rational basis for an objective understanding of claimed sonic anomalies in interconnects, especially as the rumors about single-strand, thin wires persist. Objective understanding relates to the choice of model used to visualize a phenomenon; thus we shall take a theoretic stance and commence with the work of Maxwell.

The equations of Maxwell (see sidebar) concisely describe the foundation and principles of electromagnetism; they are central to a proper mathematical modeling of all electromagnetic systems. These equations are presented here in standard differential form and succinctly encapsulate the principles of electromagnetics, although further background can be sought from a wide range of texts (see Sidebar 1).
Maxwell's equations support a wave equation that governs the propagation of both the electric and magnetic fields in space and time, where the wave equation describing a propagating electric field $\bar{E}$ in a general lossy medium of conductivity σ (sigma), permittivity ε (epsilon) and permeability µ (mu) can be succinctly derived as follows:

Applying the vector operator curl on the Faraday equation,

$\nabla\times(\nabla\times\bar{E}) = -\frac{\partial}{\partial t}(\nabla\times\bar{B})$

Substituting $\bar{B} = \mu\bar{H}$ and for curl $\bar{H}$ from Ampère's law,

$\nabla\times(\nabla\times\bar{E}) = -\mu\frac{\partial}{\partial t}\left(\bar{J} + \frac{\partial\bar{D}}{\partial t}\right)$

Substituting $\bar{J} = \sigma\bar{E}$, $\bar{D} = \varepsilon\bar{E}$ and using the vector identity

$\nabla\times(\nabla\times\bar{E}) = \nabla(\nabla\cdot\bar{E}) - \nabla^2\bar{E}$

the generalized wave equation in a conductive medium then follows as

$\nabla^2\bar{E} = \mu\sigma\frac{\partial\bar{E}}{\partial t} + \mu\varepsilon\frac{\partial^2\bar{E}}{\partial t^2}$

In this equation, $\nabla^2$ is the vector Laplace operator, and we have assumed from Gauss's theorem that $\nabla\cdot\bar{E} = \rho/\varepsilon = 0$ for a charge-free region. A similar equation can also be derived in terms of the $\bar{H}$ field, where, because of the symmetry of Maxwell's equations,

$\nabla^2\bar{H} = \mu\sigma\frac{\partial\bar{H}}{\partial t} + \mu\varepsilon\frac{\partial^2\bar{H}}{\partial t^2}$

In practice we shall consider only the $\bar{E}$ field, as the $\bar{H}$ field can be derived from Faraday's law by integrating over time the vector curl $\bar{E}$, which reveals that at every point in space $\bar{E}$ and $\bar{H}$ are mutually at right angles, and also lie in a plane at right angles to the direction of propagation (fig.1).

Fig.1 A propagating electromagnetic wave. The sinusoidally varying magnetic field is at right angles to the sinusoidally varying electric field, and both are at right angles to the direction of propagation.

Consider a steady-state, sinusoidal electric field $\bar{E}$ propagating within a medium of finite conductivity where, because of the conversion of electrical energy into heat within a conductor, the traveling wave must experience attenuation. This suggests that a steady-state wave of sinusoidal form should decay exponentially as a function of distance z,

$E = E_0\,e^{-\alpha z}\sin(\omega t - \beta z)$

α (alpha) is defined as the attenuation constant, while the phase of the wave as a function of distance is determined by the phase constant β (beta) = 2π/λ, where λ (in meters) is the wavelength of the propagating field and ω (omega) = 2πf, the angular frequency in radians/second. An exponential decay is a logical choice, as for each unit distance the wave propagates it is attenuated by the same fractional amount. The electric field E is aligned to propagate in a direction z, where the direction of E is at 90° (right angles) to z, as shown in fig.1, where several phases are illustrated. Consequently, at a fixed point of observation z, E varies sinusoidally, while for constant time t, E plotted against z is a sinewave with exponential decay.

To check the validity of this solution, the function for E must satisfy the wave equation. This validation also enables the constants α and β to be expressed as functions of σ, ε, µ, and ω. However, because this substitution, although straightforward, is somewhat tedious, I will show only the initial working and then state the conclusion: substitute the assumed solution into the wave equation where, if propagation is assumed to take the direction z,

$\nabla^2 E = \frac{\partial^2 E}{\partial z^2}$

It follows that the function for E is a solution to the wave equation, provided that

$\beta^2 - \alpha^2 = \mu\varepsilon\omega^2$
$\alpha\beta = \frac{\omega\mu\sigma}{2}$

Hence solving for α and β,

$\beta = \frac{\omega\mu\sigma}{2\alpha}$

where the constants α and β that govern the velocity and attenuation of the propagating field can be expressed in terms of the angular frequency ω and the parameters µ, ε, and σ, which are documented for most materials. (α and β are sometimes expressed as a complex number in terms of the propagation constant γ (gamma), where γ = α + jβ.)
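To put numbers on α and β (my own sketch, not part of the original article; the copper conductivity σ ≈ 5.8×10⁷ S/m and µ ≈ µ₀ are standard handbook values), one can solve the two conditions above simultaneously and read off the skin depth 1/α:

```python
# Sketch: attenuation constant alpha, phase constant beta, and skin
# depth (1/alpha) for copper at audio frequencies, from the exact
# solution of beta^2 - alpha^2 = mu*eps*w^2 and alpha*beta = w*mu*sigma/2.
import math

mu0 = 4e-7 * math.pi     # permeability of free space (H/m)
eps0 = 8.854e-12         # permittivity of free space (F/m)
sigma_cu = 5.8e7         # conductivity of copper (S/m), handbook value

def alpha_beta(f, sigma=sigma_cu, mu=mu0, eps=eps0):
    w = 2 * math.pi * f
    loss = sigma / (w * eps)  # loss tangent; enormous for a metal at audio f
    root = math.sqrt(1.0 + loss * loss)
    alpha = w * math.sqrt(mu * eps / 2.0) * math.sqrt(root - 1.0)
    beta = w * math.sqrt(mu * eps / 2.0) * math.sqrt(root + 1.0)
    return alpha, beta

for f in (50.0, 1e3, 20e3):
    a, b = alpha_beta(f)
    print(f"{f:8.0f} Hz: alpha = {a:10.1f} Np/m, skin depth = {1e3 / a:.3f} mm")
```

At 1 kHz this gives a skin depth of roughly 2 mm in copper, which is why conductor diameter matters in the argument that follows.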
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8419418334960938, "perplexity": 1271.5882149192694}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128323870.46/warc/CC-MAIN-20170629051817-20170629071817-00242.warc.gz"}
https://www.bartleby.com/solution-answer/chapter-10-problem-63gq-chemistry-and-chemical-reactivity-10th-edition/9781337399074/the-temperature-of-the-atmosphere-on-mars-can-be-as-high-as-27-c-at-the-equator-at-noon-and-the/343cd342-a2cc-11e8-9bb5-0ece094302b6
Chapter 10, Problem 63GQ

### Chemistry & Chemical Reactivity, 10th Edition
John C. Kotz + 3 others
ISBN: 9781337399074

Textbook Problem

# The temperature of the atmosphere on Mars can be as high as 27 °C at the equator at noon, and the atmospheric pressure is about 8 mm Hg. If a spacecraft could collect 10. m³ of this atmosphere, compress it to a small volume, and send it back to Earth, how many moles would the sample contain?

Interpretation Introduction

Interpretation: The number of moles of the sample collected from Mars at the given temperature, pressure, and volume should be determined.

Concept introduction:

Ideal gas equation: Any gas is described by using four terms, namely pressure, volume, temperature, and the amount of gas. Combining three laws (Boyle's law, Charles's law, and Avogadro's hypothesis) gives the ideal gas equation:

PV = nRT, where n = moles of gas, P = pressure, V = volume, T = temperature, and R = the gas constant.

Under some conditions gases do not behave like an ideal gas; that is, they deviate from ideal gas behavior. At lower temperatures and at high pressures a gas tends to deviate and behave like a real gas.

Boyle's law: At constant temperature, the pressure of a given mass of ideal gas is inversely proportional to its volume.

Charles's law: At constant pressure, the volume of an ideal gas is directly proportional to the absolute temperature.

Avogadro's hypothesis: Equal volumes of gases at the same temperature and pressure contain the same number of molecules.

Explanation

Given:

V = 10 m³ = 1.0 × 10⁴ L
T = 27 °C = 300 K
P = 8 mm Hg = 0.0105 atm
R = 0.0821 L·atm·K⁻¹·mol⁻¹

In order to calculate the moles of sample collected, the given data should be substituted into the ideal gas equation, n = PV/(RT).
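Carrying the arithmetic through to the answer (a quick sketch of mine; the textbook's worked solution is truncated above):

```python
# Sketch: finish the ideal-gas calculation n = PV/(RT) for the Mars
# atmosphere sample.
P = 8.0 / 760.0      # 8 mm Hg in atm (~0.0105 atm)
V = 10.0 * 1000.0    # 10 m^3 in litres
T = 27.0 + 273.15    # 27 C in kelvin
R = 0.0821           # L atm / (K mol)

n = P * V / (R * T)
print(f"n = {n:.2f} mol")   # roughly 4.3 mol
```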
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8251645565032959, "perplexity": 832.0853842963835}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986688674.52/warc/CC-MAIN-20191019013909-20191019041409-00014.warc.gz"}
http://mathhelpforum.com/trigonometry/77066-solve-trigonometric-equation-print.html
# Solve the trigonometric equation

• Mar 5th 2009, 07:28 AM
james_bond
Solve the trigonometric equation
Solve for $x$, $y$: $25^{\sin^2x+2\cos y +1}+25^{\cos^2y+2\sin x +1}=2$.

• Mar 5th 2009, 07:42 AM
HallsofIvy
One possibility is that both those powers of 25 equal 1, so that 1 + 1 = 2. In that case we must have $\sin^2x+ 2\cos y+ 1= 0$ and $\cos^2y+ 2\sin x+ 1= 0$. Adding those 2 equations, $\sin^2x+2\sin x+1+\cos^2y+2\cos y+1=0$, that is, $(\sin x+1)^2+(\cos y+1)^2=0$. A sum of two squares is zero only when both squares are zero, so $\sin x=-1$ and $\cos y=-1$, and from that you can find $x$ and $y$.

• Mar 5th 2009, 08:08 AM
red_dog
$\displaystyle 25^{\sin^2x+2\cos y+1}+25^{\cos^2y+2\sin x+1}\geq 2\sqrt{25^{(\sin x+1)^2+(\cos y+1)^2}}\geq 2$
The equality stands if $25^{\sin^2x+2\cos y+1}=25^{\cos^2y+2\sin x+1}=1\Rightarrow$
$\left\{\begin{array}{ll}\sin^2x+2\cos y+1=0\\\cos^2y+2\sin x+1=0\end{array}\right.$
$(\sin x+1)^2+(\cos y+1)^2=0\Rightarrow \sin x=\cos y=-1$
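A quick numerical check of the conclusion (my own sketch): taking $\sin x=\cos y=-1$, e.g. $x=-\pi/2$ and $y=\pi$, makes both exponents zero, so the left side equals exactly 2.

```python
# Sketch: verify that sin x = cos y = -1 makes both exponents zero,
# so 25^0 + 25^0 = 2.
import math

x = -math.pi / 2   # sin x = -1
y = math.pi        # cos y = -1

e1 = math.sin(x)**2 + 2 * math.cos(y) + 1
e2 = math.cos(y)**2 + 2 * math.sin(x) + 1
total = 25**e1 + 25**e2
print(e1, e2, total)   # ~0, ~0, ~2
```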
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 13, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9923409223556519, "perplexity": 2409.227201139878}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738662159.54/warc/CC-MAIN-20160924173742-00034-ip-10-143-35-109.ec2.internal.warc.gz"}
http://workinginuncertainty.co.uk/probtheory_axioms.shtml
# Probability axioms in Bayesian language

The gradual swing back from Frequentist probability thinking to Bayesian probability thinking has been quite slow for a number of reasons. One of these is that the language of the most famous, most copied introduction to basic probability theory is set in Frequentist language. The aim of this simple paper is to present the usual axiomatic basis of probability theory using simple language consistent with a Bayesian approach. A typical Frequentist presentation is shown side-by-side with a Bayesian version so that you can see the differences in language. Frequentist language is highlighted in red while the Bayesian alternative language I am suggesting is in green. (In case you don't know the Greek alphabet very well, note that σ is sigma and Ω is capital Omega.)

Frequentist version:
Probabilities are defined within probability spaces. A probability space, (Ω, F, P), has three components. 1) Ω, a set of outcomes (also known as elementary events), for an experiment; 2) F, a set of subsets of Ω, representing events; and 3) P, a function that gives a real number for each event in F. When outcomes are put into a set to make an event, the event occurs if and only if one of the outcomes in it occurs.

Bayesian version:
Probabilities are defined within probability spaces. A probability space, (Ω, F, P), has three components. 1) Ω, a set of possible answers to an unsettled question; 2) F, a set of subsets of Ω, representing disjunctions of answers; and 3) P, a function that gives a real number for each set of possible answers in F. A disjunction of answers is where the answers are combined with the logical 'or'. For example, given two answers a1 and a2, the logical expression a1 ∨ a2 is the disjunction of the two answers, pronounced 'a1 or a2'.

Frequentist version:
The outcomes in Ω must be chosen so that they are exhaustive and mutually exclusive. This means that the outcome that actually happens must be exactly one of the elements of Ω. The definition of the experiment is important, even though the experiment is not identified in the statement of the probability space.

Bayesian version:
The possible answers in Ω must be chosen so that they are exhaustive and mutually exclusive. This means that exactly one of the elements of Ω must be the true answer to the question. Each answer should be a proposition, which is a clear, internally consistent statement that can only be true or false, if its truth is known. The question is important, even though it is not identified in the statement of the probability space.

Frequentist and Bayesian versions:
For the probability space to support probability theory, F and P must meet a number of requirements. F must be a σ-algebra, which is a set of subsets having special properties so that F is sufficiently comprehensive, while P must be a probability measure, which is a function with other special properties.

Frequentist version:
Specifically, F must be such that: 1) Ω is itself an event in F, (Ω ∈ F); 2) if A is a subset of Ω and is in F, then the set of outcomes not in A but still in Ω is also in F, (∀ A ⊆ Ω, A ∈ F ⇔ (Ω \ A) ∈ F); and 3) if each of a countable set of subsets of Ω is in F, then so is the union of those subsets.
Bayesian version:
Specifically, F must be such that: 1) Ω is itself an element of F, (Ω ∈ F); 2) if A is a subset of Ω and is in F, then the set of possible answers not in A but still in Ω is also in F, (∀ A ⊆ Ω, A ∈ F ⇔ (Ω \ A) ∈ F); and 3) if each of a countable set of subsets of Ω is in F, then so is the union of those subsets.

Frequentist version:
The probability measure, P, must be such that: 1) P gives a real number between 0 and 1 inclusive for all events in F; 2) P(Ω) = 1; and 3) the union of any countable set of events in F, where no pair of events overlaps, has a probability given by P that is equal to the sum of the probabilities given by P to each of the events.

Bayesian version:
The probability measure, P, must be such that: 1) P gives a real number between 0 and 1 inclusive for all sets of possible answers in F; 2) P(Ω) = 1; and 3) the union of any countable set of elements of F, where no pair overlaps, has a probability given by P that is equal to the sum of the probabilities given by P to each of the elements.

Frequentist version:
The probability measure, P, gives probabilities for events rather than the more elementary outcomes because there are some sets of outcomes, Ω, that have so many elements, with the probability so evenly spread, that all outcomes have a probability that can be said to be zero (or infinitesimal). Considering probabilities for sets of outcomes only is a way to deal with this situation more easily. Unfortunately, existing explanations of how this works are very hard to understand.

Bayesian version:
The probability measure, P, gives probabilities for sets of possible answers rather than individual answers because there are some sets of possible answers, Ω, that have so many elements, with the probability so evenly spread, that all answers have a probability that can be said to be zero (or infinitesimal). Considering probabilities for sets of answers only is a way to deal with this situation more easily. Unfortunately, existing explanations of how this works are very hard to understand.

Most books covering this basic theory illustrate the ideas with some examples. Here are some in Frequentist and Bayesian language. Again, the changes are easy to make.

Frequentist version:
Example 1: Consider the experiment of flipping a fair coin once. A reasonable choice of outcomes would be Ω = {H, T}. The σ-algebra, F, would be defined with F = {{}, {H}, {T}, {H, T}}. The probability measure, P, would usually be defined by P = {{} → 0, {H} → 0.5, {T} → 0.5, {H, T} → 1}.

Bayesian version:
Example 1: Consider a fair coin that is to be flipped once and the question is, 'Which side will be on top on that flip?' A reasonable choice of possible answers would be Ω = {H, T}. The σ-algebra, F, would be defined with F = {{}, {H}, {T}, {H, T}}. The probability measure, P, would usually be defined by P = {{} → 0, {H} → 0.5, {T} → 0.5, {H, T} → 1}.

Frequentist version:
Example 2: Two men are in court and charged with robbing a bank. The experiment is the discovery of the truth of their guilt or innocence. A reasonable set of potential outcomes, using G and N for 'guilty' and 'not guilty' respectively, is:

Ω = {(N,N), (N,G), (G,N), (G,G)}

The σ-algebra, F, would be defined with F = { {}, {(N,N)}, {(N,G)}, {(G,N)}, {(G,G)}, {(N,N), (N,G)}, {(N,N), (G,N)}, {(N,N), (G,G)}, {(N,G), (G,N)}, {(N,G), (G,G)}, {(G,N), (G,G)}, {(N,N), (N,G), (G,N)}, {(N,N), (N,G), (G,G)}, {(N,N), (G,N), (G,G)}, {(N,G), (G,N), (G,G)}, {(N,N), (N,G), (G,N), (G,G)} }.

The probability measure, P, would be defined to give a probability for each of the possible sets of outcomes.
Bayesian version:
Example 2: Two men are in court and charged with robbing a bank. The question is: 'Which if any of them are guilty?' A reasonable set of possible answers, using G and N for 'guilty' and 'not guilty' respectively, is:

Ω = {(N,N), (N,G), (G,N), (G,G)}

The σ-algebra, F, would be defined with F = { {}, {(N,N)}, {(N,G)}, {(G,N)}, {(G,G)}, {(N,N), (N,G)}, {(N,N), (G,N)}, {(N,N), (G,G)}, {(N,G), (G,N)}, {(N,G), (G,G)}, {(G,N), (G,G)}, {(N,N), (N,G), (G,N)}, {(N,N), (N,G), (G,G)}, {(N,N), (G,N), (G,G)}, {(N,G), (G,N), (G,G)}, {(N,N), (N,G), (G,N), (G,G)} }.

The probability measure, P, would be defined to give a probability for each of the possible sets of answers.

You probably noticed that the second example feels much more naturally 'Bayesian' than the first, which is a typical Frequentist example. One final element of some introductions to probability theory is an attempt to explain what probabilities mean. This of course is where the Bayesian approach is fundamentally different to the Frequentist approach so I haven't added colour. Here are two alternative explanations.

Frequentist version:
Probabilities represent relative frequencies in the long run and are defined by the results of many similar experiments. The experiments do not have to be identical in every possible respect. They just have to meet defined conditions that specify a set of experiments as being within the same set for this purpose, sometimes called the reference class, and have the same possible outcomes and events. If many similar experiments are performed and the actual outcomes recorded, the proportion of experiments where an event occurs will tend to move towards the true probability of that event. In many applications of probability theory the task is to estimate that true probability.

Bayesian version:
Probabilities represent degrees of belief that an answer is the true answer to the question. Put another way, they represent degrees of belief in a proposition (i.e. a statement that is clear and can be only true or false, if its truth is known). A probability of 1 for a set of possible answers represents complete certainty that one of them is the true answer. A probability of 0 for a set of possible answers represents complete certainty that none of them is the true answer. However, probabilities are not just a matter of opinion. A good source of probabilities produces probabilities that agree with the relative frequency of true propositions for which the source has given probabilities. For example, over all the instances where the probabilities have been stated as 0.7, say, about 70% should turn out to be true. A good source of probabilities also produces probabilities that are responsive to circumstances and experience. In combination, these two properties mean that good probabilities are informative, and that the better a source of probabilities the more informative its probabilities are.

Having got this basic introduction of axioms out of the way, there is still much about the modern Bayesian approach that needs to be explained. In particular, the approach usually focuses on some kind of system or process that can be observed, generating data from those observations, and which the analyst wants to represent with a mathematical model. What is the question, and what is the set of answers that would be used in the probability space?
The question is a compound question (really two questions in one) that asks: 'Which model is best and what data could the process produce?' The answers will be every possible combination of model paired with a set of data that might be observed. The analyst will, in effect, use information about the probability of each combination of model and observed data to deduce how likely it is that each model is the best model, given the data actually observed. These probabilities will usually be represented by a distribution that says how likely it is that each model is the best of the set, and another, conditional, probability distribution that says how likely each set of data is given that each model is true.
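The fair-coin space of Example 1 is small enough to check the axioms exhaustively in code (a sketch of mine, not part of the original paper):

```python
# Sketch: the fair-coin probability space (Omega, F, P) of Example 1,
# with checks of the sigma-algebra and probability-measure axioms.
from itertools import chain, combinations

omega = frozenset({"H", "T"})
F = {frozenset(s) for s in chain.from_iterable(
    combinations(omega, r) for r in range(len(omega) + 1))}
P = {frozenset(): 0.0, frozenset({"H"}): 0.5,
     frozenset({"T"}): 0.5, omega: 1.0}

# Sigma-algebra axioms: Omega in F; closure under complement and union.
assert omega in F
assert all(omega - A in F for A in F)
assert all(A | B in F for A in F for B in F)

# Probability-measure axioms: range, P(Omega) = 1, additivity over
# disjoint sets.
assert all(0.0 <= P[A] <= 1.0 for A in F)
assert P[omega] == 1.0
assert all(P[A | B] == P[A] + P[B]
           for A in F for B in F if not (A & B))
print("all axioms check out for the coin space")
```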
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9333875179290771, "perplexity": 785.6919952120815}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247479885.8/warc/CC-MAIN-20190216045013-20190216071013-00252.warc.gz"}
https://www.physicsforums.com/threads/physics-help-circular-motion-and-gravity.696038/
# Physics help, circular motion and gravity

1. Jun 8, 2013

### katiegerster

anything is appreciated!!

1. The mass of a star is 1.830×10³¹ kg and it performs one rotation in 37.30 days. Find its new period (in days) if the diameter suddenly shrinks to 0.850 times its present size. Assume a uniform mass distribution before and after.

2. A pulley with mass Mp and a radius Rp is attached to the ceiling, in a gravity field of 9.81 m/s², and rotates with no friction about its pivot. Mass M2 is larger than mass m1. The quantities Tn and g are magnitudes. Choices: true, false, greater than, less than, or equal to.

- The C.M. of Mp+m1+M2 does not accelerate.
- T1 is ..... T2
- m1g + M2g + Mpg is ..... T3.
- T3 is ..... T1 + T2
- T2 is ..... M2g.
- The magnitude of the acceleration of M2 is ..... that of m1.

2. Jun 8, 2013

### voko

This (and the other) post of yours violate the rules of this forum. You won't get any help unless you stick with the rules.
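For problem 1, conservation of angular momentum is the standard route (a sketch of mine, not a reply from the thread): with a uniform mass distribution, $I \propto MR^2$, so $I_1\omega_1 = I_2\omega_2$ gives $T_2 = T_1 (R_2/R_1)^2$.

```python
# Sketch: new rotation period of the shrinking star from conservation
# of angular momentum, I1*w1 = I2*w2 with I proportional to M*R^2.
T1 = 37.30       # days
shrink = 0.850   # new diameter / old diameter

T2 = T1 * shrink**2
print(f"new period = {T2:.2f} days")   # about 26.95 days
```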
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9639878273010254, "perplexity": 3533.8362029954606}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812556.20/warc/CC-MAIN-20180219072328-20180219092328-00440.warc.gz"}
https://www.arxiv-vanity.com/papers/1105.0659/
# A Hubble Space Telescope Study of Lyman Limit Systems: Census and Evolution

(Based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc. under NASA contract No. NAS5-26555.)

Joseph Ribaudo, Nicolas Lehner, J. Christopher Howk
Department of Physics, University of Notre Dame, Notre Dame, IN 46556

###### Abstract

We present a survey for optically thick Lyman limit absorbers at z ≲ 2.6 using archival Hubble Space Telescope observations with the Faint Object Spectrograph and Space Telescope Imaging Spectrograph. We identify 206 Lyman limit systems (LLSs), increasing the number of catalogued LLSs at these redshifts by a factor of 10. We compile a statistical sample of LLSs drawn from 249 QSO sight lines that avoid known targeting biases. The incidence of such LLSs per unit redshift, ℓ(z), at these redshifts is well described by a single power law, ℓ(z) ∝ (1+z)^γ, with γ = 1.33 ± 0.61 at z ≲ 2.6, or with γ = 1.83 ± 0.21 over the redshift range 0.25 ≲ z ≲ 4.9. The incidence of LLSs per absorption distance, ℓ(X), decreases by a factor of 1.5 over the 0.6 Gyr from z ≈ 4.9 to 3.5; ℓ(X) evolves much more slowly at low redshifts, decreasing by a similar factor over the 8 Gyr from z ≈ 2.6 to 0.25. We show that the column density distribution function, f(N_HI), at low redshift is not well fitted by a single power law index over the column density ranges probed. While the low and high redshift distributions are consistent at the higher column densities, there is some evidence that f(N_HI) evolves with redshift at lower column densities, possibly due to the evolution of the UV background and galactic feedback. Assuming LLSs are associated with individual galaxies, we show that the physical cross section of the optically thick envelopes of galaxies decreased by a factor of 9 from z ≈ 5 to 2 and has remained relatively constant since that time. We argue that a significant fraction of the observed population of LLSs arises in the circumgalactic gas of sub-L* galaxies.

Keywords: Intergalactic Medium — Galaxies: Quasars: Absorption lines

Accepted for publication in ApJ

## 1. Introduction

The absorption features seen in the spectra of QSOs provide a unique opportunity to probe the intergalactic and galactic regions which intersect the lines of sight. In particular, HI absorption studies have allowed us to examine the distribution of gas associated with galaxies, the intergalactic medium (IGM), and the extended gaseous regions of galaxies which serve as an interface to the IGM, over the majority of cosmic time. Often these HI absorbers are placed in three general categories dependent on the HI column density, N_HI, of the absorber. The low column density Lyman-α forest absorbers are associated with the diffuse IGM (see review by Rauch, 1998). These systems probe low-density, highly ionized gas and are thought to trace the dark matter distribution throughout the IGM (Jena et al., 2005) as well as contain the bulk of the baryons at high redshift (Miralda-Escudé et al., 1996) and a significant amount of the baryons even today (e.g., Penton et al., 2004; Lehner et al., 2007; Danforth & Shull, 2008). At the other end, the high column density damped Lyman-α absorbers (DLAs, N_HI ≥ 2 × 10^20 cm^-2) appear associated with the main bodies of galaxies (see review by Wolfe (2005), although see Rauch et al. (2009)). These high-density, predominantly neutral systems serve as neutral gas reservoirs for high redshift star formation (Prochaska & Wolfe, 2009).
The intermediate column density systems mark the transition from the optically thin Lyman-α forest to the optically thick absorbers found in and around the extended regions of galaxies. Typically these absorbers are easy to identify in QSO spectra due to the characteristic attenuation of QSO flux by the Lyman limit at 912 Å in the rest frame. These intermediate column density systems are segmented into three additional categories. The low column density absorbers, with N_HI below the optically thick threshold of ≈1.6 × 10^17 cm^-2 (τ < 1), are known as partial Lyman limit systems (PLLSs); the intermediate column density absorbers (1.6 × 10^17 cm^-2 ≲ N_HI < 10^19 cm^-2) are known simply as Lyman limit systems (LLSs, Tytler, 1982); and the high column density absorbers (10^19 cm^-2 ≤ N_HI < 2 × 10^20 cm^-2) are known as super Lyman limit systems (SLLSs, a.k.a. sub-DLAs; O'Meara et al., 2007; Péroux et al., 2002; Kulkarni et al., 2007). These absorbers are the least well-studied and physically understood class of absorbers, especially at z ≲ 2, i.e., over the past ~10 Gyr of cosmic time. The reason is that at these redshifts the Lyman limit is shifted into the UV, so space-based UV observations are required to observe the Lyman break in spectra. To date, the majority of spectra used in LLS surveys have been taken from ground-based observations, providing an adequate statistical description of the high redshift (z ≳ 3) absorbers, most recently by Prochaska et al. (2010) and Songaila & Cowie (2010). Previous and recent surveys that partially probe the low redshift regime (Tytler, 1982; Sargent et al., 1989; Lanzetta, 1991; Storrie-Lombardi et al., 1994; Stengler-Larrea et al., 1995; Songaila & Cowie, 2010) have produced samples of tens of LLSs spanning a wide range of redshifts. These surveys studied the statistical nature of LLSs, with some conflicting conclusions as to the evolution of these absorbers over cosmic time.

A complete understanding of these optically thick absorbers is crucial, as these systems in part determine the strength and shape of the ionizing ultraviolet background (UVB; Shull et al., 1999; Haardt & Madau, 1996; Zuo & Phinney, 1993). Given the position of the LLS column densities between the Lyman-α forest systems and the DLAs, it is natural a priori to view LLSs as tracing the IGM/galaxy interface. Thus they may provide a potentially unique probe of material moving in and out of galaxies over time. It is for these reasons that the incidence of optically thick absorbers as a function of redshift and the frequency distribution of absorbers as a function of N_HI serve as critical parameters in modern cosmological simulations (Rauch, 1998; Kereš et al., 2005, 2009; Kereš & Hernquist, 2009; Nagamine et al., 2010). Observations have linked LLSs to the extended regions of galaxies, including their gaseous halos, winds, and the interactions of these with the IGM (e.g., Simcoe et al., 2006; Prochaska et al., 2004, 2006a; Lehner et al., 2009a; Stocke et al., 2010). Simulations have also shown a physical connection between LLSs and galaxies of a wide range of masses at z ≈ 2 to 4 (Gardner et al., 2001; Kohler & Gnedin, 2007). In addition, surveys of MgII and CIV absorbers have shown connections to extended galactic environments and indicate that the metal absorbers trace physical regions similar to those probed by LLSs (e.g., Chen et al., 2001, 2010; Churchill et al., 2000, 2005; Charlton & Churchill, 1998; Steidel & Sargent, 1992).
MgII absorbers have been studied extensively in optical surveys, where the absorbers are observed over a wide range of intermediate redshifts, and they led the way in connecting QSO absorption features with galactic environments (e.g., Tytler, 1987; Petitjean & Bergeron, 1990; Nestor et al., 2005). Due to the nature of the MgII absorption lines, which are strong and easily saturated, measurements of the MgII column density are often impossible. This limits the information available as to their origin, metallicity, and physical properties. LLSs provide a complementary approach to understanding the gas around galaxies and provide a reliable estimate of N_HI for optically thick absorbers (from the Lyman limit) and for the strongest absorbers (from the damping wings of the Lyman-α line). For example, measurements of N_HI allow an examination of the frequency distribution with column density, which provides additional insight into the evolution of the strength and shape of the UVB over cosmic time.

In this work we analyze the population of LLSs at low redshift using a new sample of spectra from archival Hubble Space Telescope (HST) observations with the Faint Object Spectrograph (FOS) and the Space Telescope Imaging Spectrograph (STIS). We present the most complete survey to date of LLSs at z ≲ 2.6. We catalogue 206 LLSs and examine the redshift path of a statistical sample of 249 QSO spectra to search for LLSs. We compare our results with previous surveys, including the recent high redshift survey of Prochaska et al. (2010), probing the evolution of LLSs over redshifts 0 ≲ z ≲ 5. We connect the observational quantities to physical properties assuming the "737" ΛCDM cosmology with H0 = 70 km s^-1 Mpc^-1, Ω_m = 0.3, and Ω_Λ = 0.7 (consistent with the WMAP results, Komatsu et al., 2009).

This paper is organized as follows. After a brief description of the properties of LLSs in § 2, we give an overview of the data and the process of assembling the survey sample in § 3. In § 4 we describe the process used to identify LLSs and characterize their properties, while the analysis of these properties, in particular l(z) and f(N_HI), is given in § 5. We conclude with a discussion of the connection between galaxies and LLSs in § 6 and a summary of our principal results in § 7.

## 2. Description of Lyman Limit Systems

The Lyman limit of neutral hydrogen is located at 912 Å in the rest frame of the absorber. For a background source with intrinsic flux F_QSO and observed flux F_OBS, the observed optical depth blueward of the limit is

τ(λ ≤ λ_LLS) = ln[ F_QSO / F_OBS(λ ≤ λ_LLS) ],    (1)

where λ_LLS is the assigned wavelength of the break in the LLS spectrum. The HI column density of the absorber can then be related to the optical depth using

N_HI = σ_HI^-1 τ_LLS,    (2)

where τ_LLS is the optical depth at the Lyman limit of the absorption system and σ_HI ≈ 6.3 × 10^-18 cm^2 is the approximate absorption cross section of a hydrogen atom at the Lyman limit (Spitzer, 1978).

It should be noted that while we refer to the absorption systems in this survey as LLSs, a more accurate description would be optically thick absorbers. Since we identify all systems above a minimum optical depth, we limit our sensitivity to accurately measure large HI column densities. Strong absorbers depress the absorbed flux so low that it cannot be measured. In these cases we have only lower limits for the HI column densities. As a result, some of the absorbers in the sample are likely DLAs or SLLSs, but the lack of coverage of the Lyman-α line prevents us from definitively categorizing these absorbers. Also, the frequency distribution of DLAs and SLLSs is much lower than for standard LLSs, suggesting the strong absorbers comprise a small portion of our sample (see § 5.4).
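For readers working with similar data, the conversion from a measured break depth to N_HI implied by Equations 1 and 2 is straightforward. The following minimal Python sketch (the function names are our own, for illustration only) makes the operation explicit:

```python
import numpy as np

SIGMA_HI = 6.30e-18  # approximate HI cross section at the Lyman limit [cm^2]

def tau_lls(f_qso, f_obs):
    """Optical depth at the Lyman limit (Equation 1) from the modeled
    continuum flux f_qso and the observed flux f_obs just below the break."""
    return np.log(f_qso / f_obs)

def n_hi(tau):
    """HI column density [cm^-2] implied by the limit optical depth (Equation 2)."""
    return tau / SIGMA_HI

# A tau = 2 break (the R2 criterion below) corresponds to log N_HI ~ 17.5:
print(np.log10(n_hi(tau_lls(1.0, np.exp(-2.0)))))  # -> 17.50
```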
Due to the different selection criteria in past LLS surveys, we have created two statistical samples of our LLSs. The first sample, R1, defines a LLS as an absorber with τ ≥ 1, i.e., N_HI ≳ 1.6 × 10^17 cm^-2. The majority of the surveys done through the 1990s were completed using this criterion, although these previous studies were not always rigorous about this restriction. The second sample, R2, defines a standard LLS as an absorber with τ ≥ 2, i.e., N_HI ≳ 3.2 × 10^17 cm^-2. This second definition is adopted for comparison with the recent high redshift survey by Prochaska et al. (2010). Although not directly included in our statistical analyses, we have identified many PLLSs with τ < 1, i.e., N_HI ≲ 1.6 × 10^17 cm^-2. These absorbers require a more refined assessment of their selection, and the present sample is incomplete. As a result, we warn against the use of such systems from our sample in statistical analyses. This incompleteness manifests itself in our analysis of the f(N_HI) distribution for LLSs (see § 5.4). Lastly, in dealing with QSO absorption lines, it is common to exclude absorbers located within an established distance of the background source to eliminate any potential influence the source may have on the number density and ionization state of the systems. We identify these proximate LLSs as absorbers within a small velocity window (a few thousand km s^-1) of the background QSO and exclude them from our statistical analyses.

## 3. The Data: FOS and STIS

In this work we make use of archival observations from both the STIS and FOS instruments on board the HST. The STIS sample incorporates data from a variety of projects which used the G140L and G230L gratings. These gratings are capable of a resolving power of R ≈ 1000 and wavelength coverages of approximately 1150–1730 Å for the G140L and 1570–3180 Å for the G230L. All the data were retrieved from the MAST archive and were processed with CALSTIS v2.22 prior to retrieval. Data for objects observed more than once were combined into a single spectrum weighted by the exposure time of the individual spectra. For objects observed with both the G140L and G230L gratings, these data were combined into a single spectrum. Table 1 summarizes the observations used in this work, giving the grating used for an observation, the total exposure time of the observation, and the proposal ID of the observation. Our final analysis of LLS statistics requires careful culling of the data to minimize biases, and some of these observations were not included in our final sample; we discuss the criteria used to exclude an observation in § 3.1.

The FOS data can be separated into two distinct portions. First, we use the Bechtold et al. (2002) reductions of observations taken with the G130H, G190H, and G270H gratings (the data are available through http://lithops.as.arizona.edu/jill/QuasarSpectra/). We refer to this subsample as FOS-H. Data taken with these gratings have a resolving power R ≈ 1300 and wavelength coverages of approximately 1150–1600 Å for the G130H, 1590–2310 Å for the G190H, and 2220–3280 Å for the G270H. We also make use of FOS data taken with the G160L grating, and we refer to this subsample as FOS-L. These data have a resolving power R ≈ 250 and a wavelength coverage of approximately 1150–2510 Å. We treated these data in a manner consistent with the STIS data, with multiple exposures combined to form a single spectrum. Table 2 lists the observations examined for this work, giving the total exposure time of the observation as well as the proposal ID. For a small number of objects observed with FOS, observations were taken with both the low resolution G160L grating and a combination of the high resolution gratings.
In these cases, it is possible to detect a shift in the wavelength array of the G160L spectra relative to the high resolution spectra. For objects where a shift was evident, the G160L spectra were shifted in wavelength space to align with the high resolution data. There were 20 objects with a detectable shift in the wavelength array, for which the mean shift was ≈4 Å. The FOS spectra all suffered from background subtraction uncertainties of ~30% (Keyes et al., 1995) due to the crude nature of the background determination and the lack of a scattered light correction in FOS. The error vectors produced by CALFOS do not account for this background uncertainty. For regions strongly absorbed by LLSs, the background uncertainties can dominate the error budget. To estimate this uncertainty, we calculated the background flux as the product of the inverse sensitivity function and the count rate for each grating. Taking 30% of this quantity allowed us to account for the error in the initial background subtraction.

### 3.1. Selection of a Statistical Sample of Absorbers

The initial sample of observations taken from the STIS and FOS archives contained ~700 QSOs (Tables 1, 2; Bechtold et al., 2002). However, not all of these QSOs can be used for LLS studies, because the data suffer from a variety of pitfalls (i.e., poor quality or lack of coverage of 912 Å and below in the QSO rest frame) or because the selection of the QSO for the STIS or FOS observation is biased in favor of or against the presence of a LLS. To construct a sample of QSO sightlines appropriate for studying LLS statistics, we used the following approach. We assigned the redshift of the QSO, z_em, determined through emission features of the spectrum, using the results from Bechtold et al. (2002) when available and the Veron-Cetty & Veron (2010) QSO database for the remaining objects. We removed from our sample all QSOs with no coverage below 912 Å in the rest frame of the QSO or where the quality of the observation was too poor to establish an estimate for the continuum flux. We also excluded apparent or known broad absorption line QSOs from our sample due to the difficulties in studying intervening absorbers in their spectra. Next we examined the Phase II proposals for each observation to determine whether any knowledge of the sight line characteristics prior to the execution of the proposal might represent a bias. For example, QSOs specifically targeted because International Ultraviolet Explorer (IUE) data indicated the QSO was UV-bright may bias our sample against LLSs. We identified all such potentially biased observations and removed them from our sample. There were also 2 gravitationally lensed QSOs for which we only included one of the pair in our sample, excluding the absorber associated with the lensing galaxy. QSOs targeted because of absorption features known from previous observations, such as MgII absorption, DLAs, and 21 cm absorption, represented the most common selection bias in the present sample. We did not include any LLSs in our statistical sample that were associated with previously identified systems toward these QSOs (these systems are listed in Table 4 with appropriate bias indicators). We did, however, include the redshift path covered by these QSOs and any LLSs that occurred at redshifts higher than the targeted absorber redshift.
There is a concern that including these observations, in particular the targeted strong MgII absorbers, may bias our sample against detecting strong HI absorbers along the included redshift path. For the majority of the targeted MgII observations, additional absorption systems along the line of sight were not accounted for when selecting the QSOs for observation (S. Rao, private communication, 2011). Because of this, we believe there is no significant bias in including the redshift path and non-targeted LLSs of these observations. In § 5 we examine a subset of these observations to show that their statistical properties are consistent with those of the entire statistical sample. The remaining 249 objects listed in Table 3 comprise our sample.

### 3.2. Survey Redshift Path

To quantify the absorption features found in our sample, we must determine the portion of each spectral observation that is amenable to a robust search. This quantity is referred to as the redshift path of the survey and results from translating the observed spectral wavelengths into redshifts. For our survey we calculated two redshift paths, corresponding to robust searches for LLSs defined as absorbers with τ ≥ 1 and τ ≥ 2. For these two cases, we require the local continuum flux to exceed four times the estimated error array (i.e., S/N ≥ 4) and to exceed two and a half times the estimated error array (i.e., S/N ≥ 2.5), respectively. Requiring the S/N of the observation to be above this threshold allowed us to empirically define an acceptable wavelength range (i.e., redshift path) over which we can reliably detect LLSs. We also require the survey path to end at the redshift corresponding to a small velocity offset blueward of z_em. The limits for our redshift path definitions were deduced through the analysis of real and simulated spectra; these limits correspond to our ability to detect τ = 1 or 2 at the 95% confidence level. The second redshift path requirement is an attempt to minimize the effect of the QSO and its environment on the analysis of intervening absorption systems. We note that for objects in our sample, we define the maximum search redshift as the lesser of the maximum redshift that satisfies the S/N requirement and the redshift of the proximity cutoff blueward of z_em.
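A simple significance argument, cruder than (but consistent with) the simulations used to set these limits, reproduces the adopted thresholds: a break of depth τ suppresses the flux by F(1 − e^−τ), and requiring that decrement to exceed 2σ (roughly 95% confidence) sets a minimum continuum S/N. A short sketch of this back-of-envelope check (our own illustration, not the survey code):

```python
import numpy as np

def min_snr(tau, n_sigma=2.0):
    """Minimum continuum S/N needed to detect a flux decrement of
    F*(1 - exp(-tau)) at the n_sigma level."""
    return n_sigma / (1.0 - np.exp(-tau))

print(min_snr(1.0))  # ~3.2, comparable to the adopted S/N >= 4 for tau >= 1
print(min_snr(2.0))  # ~2.3, comparable to the adopted S/N >= 2.5 for tau >= 2
```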
In their recent high redshift survey, Prochaska et al. (2010) note several biases that impacted their survey due to the presence of PLLSs. Unidentified PLLSs in their spectra had two main effects, neither of which particularly impacts our survey. In the first, unidentified PLLSs in their spectra could cause the local S/N to drop below their threshold criterion. These authors calculate the S/N for comparison against their selection criterion at the wavelength of the QSO Lyman limit and use all of the available path of the observation, unless they are able to identify a PLLS that depresses the S/N below the threshold at a lower redshift. Thus, not identifying a PLLS could cause them to overestimate the redshift path appropriate for a given QSO. However, we calculate a local S/N at each point in our spectra and are able to note which regions of a spectrum are unsuitable for use in our survey. Any effect that causes the S/N to fall below the threshold would shorten the redshift path, even if it were not identified as, e.g., a PLLS. In the second effect discussed by Prochaska et al. (2010), unidentified PLLSs at redshifts just above a higher optical depth absorber caused them to assign too high a redshift for the latter absorber.

This had the effect of reducing the total redshift path of their survey by a sizable amount, since their typical redshift path per QSO was small. They estimate this caused overestimates in l(z) by 30 to 50%. This has a negligible effect on our survey for several reasons. First, the large majority of the redshifts assigned in our survey come from measurements of Lyman series lines associated with the LLSs rather than from the break itself. These measurements should be unaffected by the aforementioned bias. Furthermore, the probability of having two overlapping systems, and the resulting impact on the path length calculation, is much smaller at the lower redshifts of our survey. Typically our redshift path per QSO is a factor of ~4 larger than the mean of the Prochaska survey, while the number density of absorbers is smaller by a factor of ~4. Even based on these considerations alone, the impact would be mitigated by more than a factor of 10. Furthermore, because the number density of absorbers per unit redshift is significantly lower, the probability of having two in close proximity is correspondingly lower as well. Together these diminish the impact of this bias to below a ~1% effect that only impacts the sight lines without measurements based on Lyman series lines, i.e., a small fraction of our sample of LLSs. Altogether, then, these biases play little role in our survey.

Table 3 summarizes the properties of the QSO sight lines that meet these selection criteria. For each object, we give the emission redshift, z_em, and the maximum and minimum redshifts, z_max and z_min, meeting our redshift path criteria for each optical depth regime, where z_min corresponds to the greater of the minimum redshift that satisfies the S/N requirement and the redshift of a Lyman limit 20 Å above the minimum wavelength coverage of the observation. We refer to the QSO sightlines in which we can reliably detect a LLS with τ ≥ 1 as the R1 sample and to those in which we can reliably detect a LLS with τ ≥ 2 as the R2 sample. Figure 1 shows the g(z) distributions, which represent the number of QSOs with spectral coverage at a given redshift, for the R1 and R2 samples. The total integrated redshift path,

Δz = ∫ g(z) dz,    (3)

is Δz = 96.1 for the τ ≥ 2 search and is somewhat smaller for the more demanding τ ≥ 1 search. Our survey probes a substantially larger redshift path than previous surveys at these redshifts (e.g., Jannuzi et al., 1998) and a redshift path very similar to that of the recent high redshift survey of Prochaska et al. (2010).
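For concreteness, the sensitivity function g(z) and the integrated path Δz of Equation 3 follow directly from the per-sightline limits (z_min, z_max) of Table 3. A minimal Python sketch (variable names are ours, for illustration):

```python
import numpy as np

def survey_path(z_mins, z_maxs, dz=0.001):
    """Build g(z), the number of sight lines searchable at each redshift,
    and the integrated redshift path Delta z = int g(z) dz (Equation 3)."""
    zgrid = np.arange(min(z_mins), max(z_maxs) + dz, dz)
    g = np.zeros_like(zgrid)
    for z1, z2 in zip(z_mins, z_maxs):
        g += (zgrid >= z1) & (zgrid < z2)   # each sight line contributes 1
    return zgrid, g, g.sum() * dz

# l(z) in a redshift bin is then simply the number of LLSs in the bin
# divided by the path contributed by that bin (Equation 6 below).
```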
## 4. Identifying and Characterizing Lyman Limit Systems

We select LLSs on the basis of their Lyman limit absorption (i.e., we do not include absorbers in our statistical sample based only on strong line absorption) for redshifts where the data satisfy our redshift path criteria. The entire list of 206 LLSs found while examining our unabridged sample is given in Table 4. The absorbers used in the statistical analysis are designated with R1 or R2. There are 61 LLSs in the R1 sample and 50 LLSs in the R2 sample. A sample of the spectra for the observations can be found in Figure 2, where each QSO spectrum is plotted with a vertical dashed line at the location of an established HI absorber. The red dashed lines indicate systems which were identified but not included in our statistical analysis. The complete sample of LLSs identified in this work is available in the online version of Figure 2.

In general, as seen in Figure 2, the break produced by a LLS is abrupt enough to be found in even low S/N and low resolution spectra. However, as we discussed above, the occasional presence of PLLSs can complicate the situation. In particular, assigning the continuum flux level redward of the Lyman break can become difficult. To minimize the potential error associated with this effect, we adopt a two-step process. First we use an automated search to identify potential Lyman limits. This automated search was checked by eye and found to highlight even partially optically thick absorbers quite well. These methods allowed us to identify many absorbers with τ < 1, but we stress the sample of PLLSs detected is not complete. Subsequently we use an interactive routine to fit the continuum flux, the optical depth of the system, and the characteristic continuum recovery blueward of a Lyman limit (see below). While our statistical sample contains only LLSs that satisfy our τ ≥ 1 or τ ≥ 2 criteria, we have attempted to identify every optically thick and partially optically thick absorber present in our spectra. This is important for accurate continuum fitting and provides a sample of PLLSs that we use in the analysis of the f(N_HI) distribution presented in § 5.4.

We adopted the composite QSO spectrum developed by Zheng et al. (1997) as a general model of the QSO continuum. We scaled the continuum to each QSO spectrum over a relatively absorption free wavelength range of the spectrum. We found the majority of QSO observations were fitted well by this composite. We then used a running chi-square tool to identify portions of the spectrum where the QSO spectrum deviated from the composite. For each pixel in the spectrum, a running goodness of fit parameter was calculated comparing the fitted continuum with the observed spectrum over 30 Å. (At low redshift the attenuation of the QSO flux blueward of 912 Å in the rest frame of the QSO due to intervening Lyman-α lines is quite small compared to high redshift. This allowed us to model the QSO flux with the composite spectrum quite well over all wavelengths, including regions which probed the lower redshift Lyman-α forest.) This largely excluded false identifications due to strong absorption lines present throughout the spectrum. Spectra not well fitted by this routine were individually examined for the possibility of a Lyman break, although the number of such spectra is very small. Once a spectrum was flagged as containing a possible LLS, the spectrum was examined more thoroughly to identify the redshift of the break and any Lyman series lines present. When possible, the Lyman series lines were used to determine the redshift of the absorber. However, if the Lyman limit was located near the maximum wavelength of the spectrum or the resolution of a spectrum was too low to identify individual absorption lines (i.e., the majority of FOS-L observations), we used the Lyman break to set the redshift. We define the redshift of a LLS determined from the break as

z_LLS = λ_LLS / 912 Å − 1,    (4)

where λ_LLS is the wavelength of the observed Lyman break. In Table 4, we list z_LLS from our analysis. The typical statistical error on the redshift determination is small when it is derived from the Lyman series lines in the FOS-H and STIS spectra. The statistical errors on z_LLS are larger when using the break, about 0.010 for the FOS-H and STIS spectra and 0.014 for the FOS-L spectra. As we used two different methods to determine z_LLS, possible systematics may be introduced. We tested this using LLSs for which the redshift could be determined from both the Lyman lines and the break.
We found a systematic shift of 0.007 in the redshifts determined from the break in the FOS-H and STIS spectra, but not in the FOS-L spectra (possibly because the resolution is far cruder). For the few redshifts of the LLS determined from the Lyman break in the FOS-H and STIS spectra, we systematically corrected z_LLS by the 0.007 shift. Once a redshift is assigned, we measure the optical depth at the Lyman limit for each absorber. We interactively refined the fit of the composite QSO continuum model to each spectrum. We determine the optical depth at the limit by comparing the continuum flux, F_QSO, with the observed (absorbed) flux, F_OBS, as in Equation 1. For many PLLSs and a few LLSs, the residual flux below the limit is sufficient to satisfy our selection criteria for further LLS searches at redshifts below that of the highest redshift system. We derive a continuum blueward of the highest redshift LLS in a spectrum by modeling the recovery of the flux due to the wavelength dependence of the optical depth. The continuum flux in the recovery region, F_REC, is

F_REC = F_QSO exp[ −τ_LLS (λ / 912 Å)^3 ], for λ < 912 Å    (5)

in the rest frame of the absorber. Once F_REC is defined, we repeat our LLS search for systems at redshifts below the initial system after renormalizing our best fit continuum according to Equation 5. Figure 3 shows the method of fitting the continuum onto a QSO spectrum, identifying a LLS, and modeling the recovery of the spectrum blueward of a Lyman limit. For absorption systems where the residual absorbed flux blueward of the break was significantly detected, we report optical depth measurements with accompanying errors. For systems where we could not significantly detect the residual absorbed flux, we treat the optical depth measurement as a lower limit and report that limit. The optical depth measurements can be found in Table 4, where N_HI is also reported.
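The continuum recovery of Equation 5 is simple to model given a break redshift and optical depth, since the absorption cross section scales as σ ∝ ν^−3 ∝ λ^3 below the limit. A short sketch of the operation (illustrative only; the observed-frame break wavelength follows from Equation 4):

```python
import numpy as np

def recovered_continuum(wave_obs, f_qso, z_lls, tau_lls):
    """Continuum blueward of a Lyman break (Equation 5): the optical depth
    falls as (lambda/912 A)^3 in the absorber rest frame, so the flux
    partially recovers toward shorter observed wavelengths."""
    lam_break = 912.0 * (1.0 + z_lls)          # observed-frame break position
    tau = np.where(wave_obs < lam_break,
                   tau_lls * (wave_obs / lam_break) ** 3, 0.0)
    return f_qso * np.exp(-tau)
```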
## 5. Statistical Analysis and Results

In this section we present the results of our survey. The first subsection examines the redshift density of LLSs and how the results from samples R1 and R2 compare to past studies of LLSs. The following subsections introduce a ΛCDM cosmology to connect the statistical treatment of our samples to physical structures throughout the universe, such as the mean separation of LLSs, the incidence of LLSs as a function of absorption distance, and the column density distribution function. In each subsection, we first generalize the analysis so as to make it applicable to both of our samples. Following the general treatment, the individual samples are explored and discussed when appropriate.

### 5.1. The Redshift Density of Intervening LLSs

The redshift density of LLSs, l(z) (a quantity often denoted in the past by a variety of other symbols), is a statistical quantity that is directly related to the QSO observations. The standard method for estimating l(z) is to simply calculate the ratio of the number of LLSs, N, detected in a redshift interval to the total survey path, Δz (defined in Equation 3), contained in that redshift interval:

l(z) = N / Δz.    (6)

Figure 4 presents the values of l(z) for both samples, R1 and R2. We first estimated l(z) in redshift intervals where the binning was arbitrarily selected to provide approximately the same number of LLSs in each interval. Table 5 lists the properties of these redshift intervals for the R1 and R2 samples. Following previous work (e.g., Tytler, 1982), we model the redshift evolution in l(z) as a power law of the form:

l(z) = l* [ (1 + z) / (1 + z*) ]^γ.    (7)

This functional form was originally chosen when the Einstein-de Sitter models were the preferred cosmologies. At the time, evolution in the LLS distribution was inferred if γ deviated from the non-evolving expectation (γ = 1 for q0 = 0 or γ = 0.5 for q0 = 0.5). We use this functional form for its historical significance and the usefulness it provides in comparing our results with previous surveys, but we note there is no physical or a priori reason to expect a particular functional form. However, the power law does a reasonable job of fitting the distribution. Using the maximum-likelihood method (e.g., Tytler, 1982; Sargent et al., 1989), a best-fit estimate for γ, and from that l*, can be determined for both samples. For the R1 sample (τ ≥ 1) we find γ = 1.19 ± 0.56 and l* = 0.85. For sample R2 (τ ≥ 2) we find γ = 1.33 ± 0.61 and l* = 0.59. For both samples we adopt a common fiducial redshift z*, which simply sets the normalization l* and can be chosen arbitrarily. These best-fit models are overplotted on the data in Figure 4. To check whether the difference between the observed distribution and the adopted power law expression is statistically significant, we test the null hypothesis, that the observed and predicted cumulative distributions of LLSs with redshift are distinct, using a Kolmogorov-Smirnov (KS) test. The KS test yields a minimum probability of P = 0.95 that we can reject the null hypothesis, using the entire redshift range encompassed by both the low and high redshift samples. Thus there is a strong probability that we can reject this null hypothesis.

As mentioned in § 3.1, we examined the potential biases associated with including redshift paths toward QSOs with targeted strong MgII absorbers, which constitute a significant fraction of our statistical sample. To empirically test for any bias, we separately analyzed the statistical properties of these observations and compared them with the statistical properties of the total sample. From the STIS observations of Rao et al. (PID 9382 & 8569), we composed a sample of 79 QSO observations. This sample contained 17 (16) LLSs with τ ≥ 1 (τ ≥ 2) over a τ ≥ 2 redshift path of Δz = 28.01, giving l(z) values well within 1σ of those for the full sample (Table 5). As a further check, we then separately analyzed the remaining 170 QSO observations to compare with the statistical properties of the total sample. This sample contained 44 (34) LLSs with τ ≥ 1 (τ ≥ 2) over a τ ≥ 2 redshift path of Δz = 68.13, giving l(z) values that again are well within 1σ of those for the full sample (Table 5). This suggests any biases associated with these observations have a negligible impact on our analysis and results.

Over the past 30 years, there have been a variety of LLS surveys (Tytler, 1982; Sargent et al., 1989; Lanzetta, 1991; Storrie-Lombardi et al., 1994; Stengler-Larrea et al., 1995; Jannuzi et al., 1998; Prochaska et al., 2010; Songaila & Cowie, 2010) and, as a result, a variety of estimates of l(z). Many of these previous surveys examined large redshift intervals but have largely been statistically dominated by high redshift LLSs. It is due to this inhomogeneity, combined with the lack of low redshift LLSs, that there is uncertainty as to the true statistical distribution of LLSs over the redshift range 0 ≲ z ≲ 5. Lanzetta (1991) was the first to argue for a potential break in the evolution of the redshift density of LLSs, finding that the low redshift LLSs showed a relatively constant l(z) while the high redshift LLSs showed a rapidly evolving l(z). However, both Storrie-Lombardi et al. (1994) and Stengler-Larrea et al. (1995) argued, based on samples spanning a wide range of redshifts, that l(z) for LLSs is best described as moderately evolving over the entire range and fit by a single power law in (1 + z).
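The maximum-likelihood fit for γ is the standard one for an inhomogeneous Poisson process (e.g., Tytler 1982): the normalization l* can be profiled out analytically, leaving a one-dimensional search in γ. A schematic implementation under these assumptions (not the original analysis code; g(z) is the sensitivity function of the earlier sketch):

```python
import numpy as np

def fit_powerlaw(z_lls, zgrid, g, z_star=0.0, gammas=np.linspace(-1, 5, 601)):
    """Profile-likelihood fit of l(z) = l_star ((1+z)/(1+z_star))**gamma
    (Equation 7) to LLS redshifts z_lls observed through sensitivity g(z)."""
    dz = zgrid[1] - zgrid[0]
    n = len(z_lls)
    s = np.log((1.0 + np.asarray(z_lls)) / (1.0 + z_star)).sum()
    lnL = np.empty_like(gammas)
    for i, gam in enumerate(gammas):
        integral = (g * ((1.0 + zgrid) / (1.0 + z_star)) ** gam).sum() * dz
        lnL[i] = n * np.log(n / integral) + gam * s   # l_star profiled out
    gam = gammas[np.argmax(lnL)]
    l_star = n / ((g * ((1.0 + zgrid) / (1.0 + z_star)) ** gam).sum() * dz)
    return gam, l_star
```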
Figure 5 presents l(z) values from the R1 sample, as well as the fits from these previous surveys (with the parameters listed in Table 6). These previous surveys used a τ ≥ 1 criterion for inclusion in the sample, but it is not clear this was applied in a uniform manner (Stengler-Larrea et al., 1995). Our results for l(z) at z ≲ 2.6 are consistent with the surveys of Storrie-Lombardi et al. (1994) and Stengler-Larrea et al. (1995), both of which fit l(z) over redshift ranges extending from low to high redshift. Recently, Prochaska et al. (2010) released a survey of high redshift LLSs (with τ ≥ 2) using the SDSS-DR7 that samples a redshift range which does not overlap our survey redshifts. They find that l(z) for high redshift LLSs can be described as rapidly evolving over the range they probe. Our model of l(z) is inconsistent with the Prochaska et al. (2010) survey when extrapolated to high redshift, just as the Prochaska results are inconsistent with ours if extrapolated to low redshift. It was a similar disagreement between the high and low redshift samples of the Lanzetta (1991) work that led to the argument for a break in the power law description of l(z) for LLSs.

To investigate the possibility and significance of a broken power law fit to the redshift density of LLSs, we combine the recent high redshift sample from Prochaska et al. (2010) with our low redshift sample to examine the statistical nature of LLSs from z ≈ 0 to 5. We refer to the combined R2 and Prochaska et al. (2010) samples as the RP10 sample. This combined sample of HST and SDSS observations contains 685 QSOs and 206 LLSs with τ ≥ 2. The total redshift path probed in RP10 is a factor of ~2 greater than that of the recent LLS survey from Songaila & Cowie (2010), which spanned redshifts up to z ≈ 6. In Figure 6, we present our estimate for l(z) over this expanded redshift range. We find that l(z) from the combined sample can be described by a single power law (Equation 7) with γ = 1.83 ± 0.21 and l* = 1.62. Table 7 lists the properties of the bins used for display purposes in Figure 6 and the l(z) values associated with each bin. To confirm the observations are well modeled by a single power law, a KS test was applied to the cumulative distribution functions of observed and predicted LLSs (see Figure 6). The KS test yields a high probability that the null hypothesis, that the observed and predicted distributions represent different distributions, can be discarded. Thus the RP10 sample supports the conclusions of Storrie-Lombardi et al. (1994), Stengler-Larrea et al. (1995), and more recently Songaila & Cowie (2010), that a single power law is sufficient to describe l(z) over z ≈ 0 to 5.

It should be noted that in the original analysis of the SDSS-DR7 sample, Prochaska et al. (2010) restricted the redshift path of their statistical analysis at the low redshift end. This was done because the inclusion of the lowest redshift path in their sample appeared to produce an artificially low result for l(z), which they argued was unlikely to be physical. We include this extra redshift path from their sample, under the reasoning that arbitrary binning of the data for display purposes can produce artificial departures from a trend, and this in no way affects the statistical analysis of the maximum likelihood method. In our redefined bins, the artificial drop at the low redshift end of the SDSS sample is no longer present. To ensure the extra redshift path is not solely responsible for our ability to fit the combined sample with a single power law, we conducted our analysis on the combined HST/SDSS sample for both situations (restricted and unrestricted low redshift SDSS path) and found both conditions produce single power law fits that are consistent and good descriptions of the data.
It is also interesting to note that if the low redshift path is unrestricted for the SDSS sample alone, the best fit curve for the high redshift sample has a somewhat shallower slope. This description of l(z) for the high redshift LLSs presents a less convincing argument for the need of a break in the power law, because the difference in power law indices between low and high redshift is less extreme. Our analysis indicates that it is not necessary to introduce a broken power law to model the statistical evolution of the redshift density of LLSs over z ≈ 0 to 5. However, we stress that a sample with coverage of the intermediate redshift region between the two samples will be needed to truly rule out a break (J. O'Meara et al., in prep.). We note that the redshift density is really an observational statistic, and the difference between a single or broken power law may not carry much significance over to the physical quantities with which it is related. In § 5.2, § 5.3, and § 5.4 we put these results into the context of a cosmology and discuss the implications for the evolution and nature of LLSs to z ≈ 5.

### 5.2. The Incidence of LLSs per Absorption Distance

The number of LLSs per absorption length, l(X) (Bahcall & Peebles, 1969), is defined via

l(X) dX = l(z) dz,    (8)

where

dX = [ H0 / H(z) ] (1 + z)^2 dz,    (9)

and

H(z) = H0 [ Ω_Λ + Ω_m (1 + z)^3 ]^{1/2}.    (10)

The quantity is defined such that l(X) is constant if the product of the comoving number density of structures giving rise to LLSs, n(z), and the average physical cross section of the structures, σ(z), is constant, i.e., l(X) ∝ n(z) σ(z). Figure 7 shows l(X) plotted as a function of fractional lookback time for the RP10 sample. We see that l(X) experiences a rapid decrease over the ~0.6 Gyr corresponding to a decrease in redshift from z ≈ 4.9 to 3.5. After this rapid drop, l(X) decreased slowly over the subsequent ~8 Gyr down to z ≈ 0.25 (see Table 7). The results in Table 7 show that l(X) fell by a factor of ~1.5 over 0.6 Gyr at high redshift and by another factor of ~1.5 over 8 Gyr at low redshift.

Figure 7 demonstrates why differing results are found regarding broken (or unbroken) power laws in the statistical treatment of the redshift density of LLSs. The dashed red line and dotted blue line in the plot are the best fit power laws for l(z) (transformed into l(X) using Equations 8 and 9) for our low redshift sample and the high redshift sample of Prochaska et al. (2010), respectively. The solid black line is the best fit power law to the RP10 sample (again transformed into l(X) using Equations 8 and 9). The nature of power laws makes it difficult to extrapolate a fit based on observations in only the low or high redshift regime (in the regime where the power law is derived, the fit is nearly linear, making it extremely difficult to match observations in a regime outside of where it was derived). It is only when the observations are combined that we are able to produce a consistent single power law. We have mentioned the need for a study of the intermediate redshift regime (J. O'Meara et al., in prep.), which will allow for a more definitive assessment of the absorber distribution. As previously stated, the behavior of l(X) is related to the comoving number density of LLSs as well as the physical size of the absorbers. The rapid decrease in l(X) over a short timescale at high redshift indicates either that the physical size of LLSs decreased substantially in this time or that the comoving number density of LLSs dropped significantly. A moderate decrease in both properties could also give rise to this behavior, but as we will show in § 6, when we associate LLSs with galaxies we find the physical size of LLSs must undergo significant evolution from z ≈ 5 to 2.
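The conversion between l(z) and l(X) under the adopted cosmology is a one-line application of Equations 8–10; a minimal sketch:

```python
import numpy as np

OMEGA_M, OMEGA_L = 0.3, 0.7   # the adopted "737" cosmology

def dX_dz(z):
    """Absorption-distance kernel dX/dz = (1+z)^2 H0/H(z) (Equations 9, 10)."""
    return (1.0 + z) ** 2 / np.sqrt(OMEGA_L + OMEGA_M * (1.0 + z) ** 3)

def l_X(l_z, z):
    """Incidence per absorption distance, l(X) = l(z) / (dX/dz) (Equation 8)."""
    return l_z / dX_dz(z)
```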
### 5.3. The Mean Proper Separation of LLSs

The number density of optically thick absorbers throughout the Universe determines the mean free path of hydrogen ionizing photons and, in turn, sets the shape and intensity of the UVB. We can calculate an upper limit to this mean free path using the l(X) of τ ≥ 2 absorbers, as calculated in § 5.2. It is only an upper limit because we have not included the lower optical depth absorbers that also contribute to the overall absorption of hydrogen ionizing photons. Using l(X), we can calculate the average proper distance, Δr_LLS, a photon travels before encountering a LLS (e.g., Prochaska et al., 2010) as

Δr_LLS = (c / H0) (1 + z)^{-3} / l(X).    (11)

With the RP10 sample we find that Δr_LLS grows substantially, in proper Mpc, from z = 5 to 0.3 (Table 7; Δr_LLS was also calculated for R1 and R2 in Table 5). In Figure 8, Δr_LLS is shown as a function of redshift (data points and red curve), along with the mean free path of hydrogen ionizing photons (black curve) estimated by Faucher-Giguère et al. (2009). (While the calculation of the mean free path by Faucher-Giguère et al. (2009) depends on an assumed HI distribution, their estimated mean free path is in agreement with the mean free path calculation of Prochaska et al. (2009).) The shaded region emphasizes the difference between the two curves, which can be associated with the contribution from the PLLSs and lower optical depth LLSs that were not included in this calculation. We note that the ratio between the distance a photon travels before encountering a LLS and the mean free path of a hydrogen ionizing photon increases with decreasing redshift. Consider the extreme redshifts of the plot: at z ≈ 5, Δr_LLS is a factor of ~1.5 larger than the predicted mean free path, while at z ≈ 0, Δr_LLS is a factor of ~3.5 larger. Assuming a mean free path consistent with Faucher-Giguère et al. (2009), this suggests that the lower column density hydrogen absorption systems have become increasingly more important for the absorption of Lyman continuum photons as the universe has evolved.
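Equation 11 is likewise simple to evaluate; the sketch below returns Δr_LLS in proper Mpc (the example z and l(X) inputs are illustrative placeholders, not measurements from Table 7):

```python
C_KMS = 2.998e5       # speed of light [km/s]
H0_KMS_MPC = 70.0     # Hubble constant [km/s/Mpc]

def delta_r_lls(z, l_of_X):
    """Mean proper distance [Mpc] traveled before striking a LLS (Equation 11)."""
    return (C_KMS / H0_KMS_MPC) / ((1.0 + z) ** 3 * l_of_X)

print(delta_r_lls(3.5, 0.5))  # illustrative inputs only
```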
### 5.4. The Differential Column Density Distribution Function

In this subsection, we combine our low redshift sample with previous works on the low redshift IGM to place constraints on the differential column density distribution over 10 orders of magnitude in N_HI. This distribution is defined such that f(N_HI) dN_HI dX is the number of absorption systems with column density between N_HI and N_HI + dN_HI and absorption path between X and X + dX (e.g., Tytler, 1987),

f(N_HI) dN_HI dX = [ m / (ΔN_HI ΣΔX) ] dN_HI dX,    (12)

where m is the observed number of absorption systems in a column density range ΔN_HI centered on N_HI and ΣΔX is the total absorption distance covered by the spectra. Integrated over N_HI, the distribution also gives the incidence of absorbers per absorption distance, l(X). Empirically, it has been shown that at low and high redshift, f(N_HI) may be fitted by a power law over various regimes (e.g., Tytler, 1987; Rao et al., 2006; Lehner et al., 2007; O'Meara et al., 2007):

f(N_HI) dN_HI dX = C_HI N_HI^{-β} dN_HI dX.    (13)

The slope, β, may vary with the considered N_HI or redshift intervals, and, as discussed below and elsewhere (e.g., Wolfe et al., 2005; Prochaska & Wolfe, 2009), the functional form can be more complicated than a single power law, especially when the entire observed N_HI range is considered. In Figure 9, we show the column density distribution at low redshift. The data and analyses for the different N_HI regimes come from various origins that we detail below. In the studies where another cosmology was chosen to calculate ΔX (Lehner et al., 2007; Williger et al., 2010), we have updated the cosmology to that used in the present study (see Equation 9).

At low redshift, the DLA sample was selected based on known strong MgII–FeII systems (Rao et al., 2006). Their sample consists principally of data similar to those presented in this work (but rejected from our sample of LLSs because they were specifically targeted), with the addition of IUE spectra. Owing to their selection criteria, the sample has selection biases (Rao et al., 2006; Prochaska & Wolfe, 2009), although Rao et al. (2006) argued that they are relatively well understood and dealt with (in the DLA regime). Rao et al. (2006) found that their DLA (N_HI ≥ 2 × 10^20 cm^-2) sample could be fitted with a single power law (represented by the solid red line in Figure 9). The dot-dashed cyan curve shows a fit assuming the power law index found for the DLAs at high redshift (Prochaska et al., 2010), which seems to provide a reasonable fit to the low redshift DLA measurements as well. The similar slope of f(N_HI) at both high and low redshift is consistent with a non-evolution of f(N_HI) for the DLAs, as argued by Prochaska & Wolfe (2009).

At the other end of the spectrum, the Lyman-α forest regime, we consider two complementary samples that probe the lowest redshifts (Lehner et al., 2007) and intermediate redshifts 0.5 < z < 1.9 (Janknecht et al., 2006). We also complement the lower redshift interval with the 3C 273 sightline analyzed by Williger et al. (2010). At the lowest redshifts, the data come from the high resolution STIS E140M echelle mode, while at higher redshift the data come from STIS E230M as well as VLT/UVES and Keck/HIRES data. The HI column densities (and Doppler parameters) were derived by fitting the Lyman-α line (and higher Lyman series lines if present) thanks to the high resolution of these spectra. This method works well for the stronger systems if several Lyman series lines are used (e.g., Lehner et al., 2006) or for weaker systems (depending on the b-value) if only the Lyman-α transition is used. For the lower redshift sample, several Lyman series lines were used when possible. For the higher redshift sample, Janknecht et al. (2006) also used different atomic transitions to constrain the Doppler parameter. We note that their sample includes a few PLLSs and LLSs, but the HI column densities of these systems often have errors in excess of 1 dex. We excluded those systems from our analysis. Using the maximum-likelihood method, we first fitted the two Lyman-α forest samples separately, finding no difference between the two redshift regimes. We therefore combined both samples and fitted them simultaneously. We find a well-determined β and C_HI over the interval above log N_HI ≈ 13.2 (shown by the blue line in Figure 9). Changing the upper bound, or modestly raising the lower bound, gives consistent results (within 1σ). However, lowering the lower bound decreases β significantly, and β drops even more if the lower bound decreases further. As indicated in Figure 9, there is a turnover in the distribution at the lowest column densities, which is likely due to the incompleteness of the sample there. (Janknecht et al. (2006) set the completeness limit of their sample below our adopted bound; setting the lower bound to 12.9 dex, we found a slope very similar to theirs and substantially smaller than our adopted value. Their completeness value was not justified, and based on our analysis a lower limit of 13.2 dex appears more appropriate. The signal-to-noise in 9 of the 11 sight lines, depending on the wavelength, is indeed not dissimilar from that of the lower redshift sample, where Lehner et al. (2007) showed that the completeness was near this limit based on an analysis of the column density distribution.)
While the slope derived for the Lyman-α forest is very similar to that predicted in recent cosmological simulations (Davé et al., 2010), the observations do not indicate an evolution of f(N_HI) in this redshift regime, as inferred in the simulations.

Finally, the column density distributions of the PLLSs, LLSs, and SLLSs have so far remained largely uncategorized at low redshift. For the SLLSs, τ_LLS is far too large for N_HI to be estimated from the Lyman break, but in this regime the amount of HI is large enough that the Lyman-α transition produces damping wings from which N_HI can be estimated. In our sample, Lyman-α is covered in just 7 sightlines hosting optically thick absorbers. In two of these cases, there is no detection of Lyman-α, but the data were obtained from the low resolution FOS observations. In the other five cases, Lyman-α is observed, but in four of them the equivalent width implies column densities at or below the lower bound of the SLLS regime. As the spectral resolution of the data is low and line contamination is likely, we relied on other recent works to constrain f(N_HI) in the SLLS regime. Specifically, we use the surveys of O'Meara et al. (2007) and Péroux et al. (2003), which include 16 SLLSs at redshifts overlapping the high redshift portion of our LLS sample and the Lyman-α forest samples. (Péroux et al. (2005) subsequently produced a second survey of SLLSs, but their redshift coverage mostly targeted higher redshifts, with a negligible redshift path at the redshifts of interest here.) We estimate the total absorption path probed for the SLLS searches by summing the paths of the Péroux et al. and O'Meara et al. samples. The bins for display of the data were chosen so there are 5 systems per bin (see Figure 9).

For the PLLSs and LLSs, we considered our sample of QSO sightlines, where we reject sight lines having LLSs with only limits on the optical depth (and hence on N_HI). The main effect of the removal of the limits is to increase slightly the normalization of the fit, by a small fraction of a dex. This is too small a difference to have any impact on our result and should not impact the power law slope. This reduces our sample to 50 systems. In Figure 9 we show the adopted bins for these absorbers. The first bin corresponds to optical depths in the PLLS regime, where our sample is incomplete; we treat this bin as a lower limit. We used the maximum-likelihood method to fit the data with a power law distribution in N_HI (Equation 13). Our first attempt was to fit the LLSs and SLLSs simultaneously, but no adequate fit was found with a single slope β. We therefore fitted the LLSs and SLLSs separately. For the SLLSs, we find a well-constrained β (where the upper and lower bounds of the fitted interval were allowed to vary to estimate the errors). For the LLSs, we derived a slope over an interval spanning only 0.5 dex (changing the upper and lower bounds led to an unstable fit; we consider this result as tentative). We note that if we integrate f(N_HI) over the LLS and SLLS intervals with the respective functional forms (where we assume that each is correct to the point where they intersect at 18.2 dex, see Figure 9), we recover an incidence l(X) that is not too dissimilar from the results presented in Table 5, providing some independent support to our results. It is evident that more data are needed in the PLLS and the LLS/SLLS regimes to better discern the true shape of f(N_HI) in these intervals. However, our analysis suggests that there must be an inflection point in f(N_HI) in the LLS regime, and, likely, a second inflection point in order to connect the PLLSs to the Lyman-α forest systems.
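The power-law fits quoted here follow from the standard maximum-likelihood estimator for a truncated power law: for f(N) ∝ N^-β on [N_min, N_max], one maximizes the likelihood of the observed columns with the normalization fixed by the bounds. A schematic version (our own sketch, not the survey code; the fitting bounds are illustrative):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_beta(N, Nmin, Nmax):
    """Maximum-likelihood slope beta for f(N) = C * N**(-beta) truncated
    to Nmin <= N <= Nmax (Equation 13), given the measured columns N."""
    N = np.asarray(N, dtype=float)

    def neg_lnL(beta):
        if abs(beta - 1.0) < 1e-8:            # limiting case of the integral
            norm = np.log(np.log(Nmax / Nmin))
        else:                                  # int N^-beta dN between bounds
            norm = np.log((Nmax ** (1 - beta) - Nmin ** (1 - beta)) / (1 - beta))
        return beta * np.log(N).sum() + N.size * norm

    return minimize_scalar(neg_lnL, bounds=(0.2, 3.5), method="bounded").x
```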
We note that the Lyman-α forest slope fits the HI systems well up to the highest forest column densities (see Figure 9), so the flattening should likely occur between the forest and PLLS regimes. The intermediate N_HI interval will likely remain largely unconstrained owing to the difficulty of measuring N_HI in this regime, which requires either fitting the Lyman series lines (e.g., Lehner et al., 2009a) or very high quality, high S/N data to discern the damping wings in the Lyman-α absorption. In Figure 9 we also show one of the models of the local universe by Corbelli & Bandiera (2002) (long-dashed orange curve; see their Figure 2, where we adjusted their model vertically to fit the DLA and SLLS distributions). In their models, they investigated whether the flattening of f(N_HI) between the LLSs and DLAs could be explained if the distribution of total hydrogen column, N_H = N(HI) + N(HII), follows a single power law, while f(N_HI) can deviate from a single power law owing to the change of the ionization fraction as a function of N_HI. While the low N_HI systems are not well matched (in part because they attempted to fit data based on equivalent width measurements), the higher column density regimes are quite remarkably well reproduced. Other models explored the self-shielding effect on the f(N_HI) of DLAs and LLSs using spherical isothermal gaseous halos (Murakami & Ikeuchi, 1990; Petitjean et al., 1992; Zheng & Miralda-Escudé, 2002), which yield a somewhat similar functional form. Hence photoionization of a population that follows a single power law in total hydrogen column could be the main cause of the complicated shape of the f(N_HI) distribution. In the higher redshift regime, Petitjean et al. (1993) also noted that a single β over the entire N_HI regime was not statistically adequate; in particular, their data hinted as well at two flattenings in the column density distribution function, one in the PLLS regime and the other in the LLS/SLLS regime, which they explained as transitions between the HI systems and metal absorbers and between the neutral and ionized systems, respectively.

The most recent study of f(N_HI), at z ≈ 3.7 by Prochaska et al. (2010), suggests an even more complicated distribution. We show in Figure 9 their distribution over the same range of HI column density. We emphasize that while the Lyman-α forest (up to its completeness limit), SLLS, DLA, and to a lesser extent LLS distributions are relatively well constrained, the PLLS and intermediate N_HI intervals are not (see their Figure 14 for the amplitude of the possible variations in each region). As already mentioned above, there appears to be no evolution in the DLA portion of f(N_HI) with redshift, and a slope steeper than that found by Rao et al. (2006) seems more appropriate for connecting the DLAs and SLLSs at low redshift. While a similar flattening is observed in the SLLS regime, f(N_HI) in the low redshift universe appears (tentatively) even flatter. A larger sample of SLLSs will be needed to confirm this, as other explanations (e.g., an evolving normalization at different mean redshifts or the presence of another inflection point) could account for the observed behavior. In the lower N_HI regime, f(N_HI) appears to evolve from the high to the low redshift universe. In the Lyman-α forest interval, where f(N_HI) is well constrained at both low and high redshift, the slope becomes steeper as z decreases and there is a drop in the number of systems with redshift. Without a steep decline in the UV background flux (stemming from a drop in the number of QSOs at low redshift), the number of systems would be predicted to be much lower at low redshift, suggesting that changes in the UV background may be the dominant reason for the evolution of the Lyman-α forest (e.g., Theuns et al., 2002b).
Numerical simulations of a cold dark matter universe with a photoionized background dominated by QSO light can, indeed, reproduce these properties to some extent (e.g., Theuns et al., 2002b; Davé et al., 2010), but the observed rate of evolution is smaller than predicted. Part of the discrepancy between the models and observations could be due to the models ignoring the galactic contribution to the UV background, or more generally to an uncertainty in the strength and shape of the UV background. Large-scale galactic outflows could be viewed as another uncertainty because they likely increase the HI absorbing cross section via the deposit of cool gas in the outermost edges of galactic halos (Davé et al., 2010). Cosmological simulations, however, suggest that galactic feedback has little impact on the f(N_HI) of the Lyman-α forest, as outflows only fill a small fraction of the volume, leaving the IGM filaments unscathed (Theuns et al., 2002a). Hence, the possible differences seen at low redshift may occur owing to the evolution of both the UV background and galactic feedback. Current and future efforts to provide better statistics for the PLLSs and LLSs at both low and high redshift should provide direct constraints on the UV background evolution and cosmological simulations.

## 6. LLSs and the Gaseous Halos of Galaxies

At very low redshift, the connection between LLSs, galaxies, and large-scale structures has been examined for a small number of individual systems discovered using HST and the Far Ultraviolet Spectroscopic Explorer (FUSE). These studies have found LLSs associated with individual galaxies at impact parameters of tens of kpc (Chen & Prochaska, 2000; Jenkins et al., 2003; Tripp et al., 2005; Cooksey et al., 2008; Lehner et al., 2009a). Some low redshift LLSs are metal enriched (e.g., Chen & Prochaska, 2000; Prochaska et al., 2006b; Lehner et al., 2009a), while some are relatively metal-poor (e.g., Prochaska & Burles, 1999; Cooksey et al., 2008; Zonak et al., 2004). The presence of metal-enriched material far from the central star forming regions of galaxies suggests some LLSs are sensitive to the nature of feedback in galaxies. The existence of extremely metal-poor systems suggests the gas probed by some LLS absorption originates outside of galaxies, perhaps tracing IGM matter falling onto a galaxy. An example of a LLS tracing very low metallicity gas falling onto a near-solar metallicity, ~0.3 L* galaxy will be described in J. Ribaudo et al. (in prep.). In addition to the observational evidence, numerical simulations also predict a physical association of LLSs with the gravitational potential of galaxies. These simulations show LLSs arising from infalling streams of intergalactic gas as well as outflowing gas ejected from galaxies due to stellar feedback (Gardner et al., 2001; Dekel & Birnboim, 2006; Kohler & Gnedin, 2007; Kereš et al., 2009; Kacprzak et al., 2010; Fumagalli et al., 2011; Stewart et al., 2011; but also see Mo & Miralda-Escudé, 1996; Maller et al., 2003). Based on these observational and theoretical studies, LLSs appear to be associated with circumgalactic environments. With this knowledge, we can calculate the characteristic sizes of such gaseous galactic envelopes using our survey of LLSs and knowledge of the galaxy population with which they are associated. We rewrite l(X) as:

l(X)_LLS ∝ n_GAL σ_GAL,    (14)

where n_GAL is the comoving number density of galaxies giving rise to LLS absorption and σ_GAL is the projected physical cross section of galaxies to columns with τ ≥ 2 (for comparison with the R2 and RP10 samples).
The comoving number density of galaxies at a given redshift is calculated from the integration of an observationally constrained galaxy luminosity function. We investigate the size of absorbers assuming only galaxies with L ≥ L_min give rise to LLS absorption. Thus, following Tytler (1987), we rewrite Equation 14 as:

l(X) = (c / H0) ∫_{L_min}^{∞} f_c π R^2(L) Φ(L) dL,    (15)

where f_c π R^2(L) is the cross section for absorption, with f_c a covering factor, and Φ(L) is the assumed form of the galaxy luminosity function (Schechter, 1976). The comoving number density of galaxies that contribute to the LLS population is determined by our choice of L_min, and we use all galaxies with L ≥ L_min in this estimation of the mean cross section. We note that several previous treatments of the gaseous halos around galaxies have allowed for a Holmberg-like scaling of the physical extent of the gas with luminosity, R(L) = R* (L / L*)^{β_L}, where R* is the projected radial extent of the absorbing gas associated with an L* galaxy (Tytler, 1987). Numerous galaxy-absorber studies have shown that if the radial extent of galaxies is allowed to scale with luminosity, R(L) serves as the effective cutoff for observed absorption out to that projected distance (e.g., Kacprzak et al., 2010; Chen et al., 2010; Kacprzak et al., 2008; Chen et al., 2001). However, we are considering the physical extent of absorbing gas averaged over all galaxy types and sizes, with our only selection criterion being L ≥ L_min, and over a very wide range in redshift. Over time, the galaxies giving rise to LLSs may be best described with an evolving β_L, but as we are generalizing our analysis to the size of the gaseous envelope around a "mean" galaxy, averaged over all morphologies, star formation properties, sizes, etc., we adopt a single, non-evolving β_L. Our results therefore describe the mean extent of circumgalactic gas about galaxies with L ≥ L_min. Equation 15 can be solved using the incomplete gamma function, giving the statistical absorption radius of a galaxy

R_s = f_c^{1/2} R = [ (c π Φ* / (H0 l(X))) Γ(2β_L + α + 1, L_min/L*) ]^{-1/2}.    (16)

The radius R_s is therefore the mean radial extent of gas about an average host galaxy scaled by f_c^{1/2}, while π R_s^2 is the projected area of such a galaxy over which τ ≥ 2, for a given choice of L_min. We plot R_s in Figure 10 as a function of the assumed L_min in the left panel and redshift in the right panel. In the left panel, the shaded regions correspond to different luminosity function parameters, which are appropriate for the redshift ranges given in the legend. The luminosity function parameters are observationally determined and restricted to the redshift range probed by each survey. The right panel shows the statistical absorption radius as a function of redshift for three snapshots of L_min. The width of the shaded regions is determined by the redshift range of the survey used to calculate the luminosity function. The height of each region spans the R_s values predicted for that range in redshift. The recent study of MgII absorbers and galaxies at low redshift by Chen et al. (2010) found high gas covering fractions for the strongest MgII absorbers out to large projected distances. The introduction of a non-unity covering factor will thus increase the inferred radii by 10–30% compared with f_c = 1.
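Evaluating Equation 16 requires the upper incomplete gamma function for a possibly negative first argument (Schechter faint-end slopes typically have α < −1), which standard libraries do not support directly but which follows from the usual recurrence. A sketch under our stated assumptions (the α, Φ*, l(X), and L_min/L* inputs below are placeholders, not the Figure 10 values):

```python
import numpy as np
from scipy.special import gamma, gammaincc

C_KMS, H0 = 2.998e5, 70.0   # [km/s], [km/s/Mpc]

def upper_gamma(a, x):
    """Upper incomplete gamma Gamma(a, x), extended to a <= 0 (a != 0) via
    the recurrence Gamma(a, x) = (Gamma(a+1, x) - x**a * exp(-x)) / a."""
    if a > 0:
        return gamma(a) * gammaincc(a, x)
    return (upper_gamma(a + 1.0, x) - x ** a * np.exp(-x)) / a

def R_s(l_of_X, phi_star, alpha, lmin_over_lstar, beta_L=0.0):
    """Statistical absorption radius [Mpc] of Equation 16 for a Schechter
    luminosity function (phi_star in Mpc^-3, faint-end slope alpha)."""
    a = 2.0 * beta_L + alpha + 1.0
    term = (C_KMS * np.pi * phi_star / (H0 * l_of_X)) * upper_gamma(a, lmin_over_lstar)
    return term ** -0.5

# Illustrative placeholders only:
print(1e3 * R_s(0.3, 5e-3, -1.3, 0.1))  # radius in kpc
```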
Any change in the physical cross section for the mean galaxy at each redshift does not imply individual galaxies are evolving on that timescale, as it is likely the case that the galaxies giving rise to LLSs at high redshift are not the same galaxies giving rise to LLSs at low redshift. With these limitations in mind, several inferences can be drawn from this approach. Galaxies with $L \geq L_\ast$ alone cannot account for the observed population of LLSs, because the implied $R_s$ would be inconsistent with previous galaxy-absorber observations, especially at high redshift, where the sizes implied for LLSs would be quite large compared with observations (Steidel et al., 2010). Extending the integration of Equation 15 to sub-$L_\ast$ galaxies produces $R_s$ values more consistent with the impact parameters found independently by other studies (e.g., Bouché et al., 2007; Kacprzak et al., 2010; Chen et al., 2010). It is not clear how small $L_{\min}$ should be before we can account for the entire population of LLSs, but Figure 10 highlights the importance and need for deep observations of QSO fields to confidently relate absorbers to specific galaxies. This conclusion is not surprising, as MgII studies and individual LLS observations show sub-$L_\ast$ galaxies contribute to the population of optically thick absorbers (e.g., Steidel et al., 2010; Kacprzak et al., 2010; Chen et al., 2010; Lehner et al., 2009a; Kacprzak et al., 2008). However, our analysis suggests the less luminous galaxy population may be the dominant source of LLSs; a quantitative sketch of this sensitivity to $L_{\min}$ follows below. A similar scenario has been suggested for MgII absorbers, where Caler et al. (2010) find evidence that a substantial fraction of the MgII absorber host galaxies are fainter than $L_\ast$. Figure 10 also highlights a significant evolution in the physical cross section of the mean absorbing galaxy as a function of redshift. The statistical radius $R_s$ decreases by a factor of $\sim$3 from $z \simeq 5$ to $z \simeq 2$, but remains relatively constant from $z \simeq 2$ to $z \simeq 0.3$. This is remarkable as it suggests the physical cross section of the gaseous envelope of a mean galaxy decreased significantly over a very short epoch, but for the majority of cosmic time the physical extent of gas about a mean galaxy has been fairly constant. This relatively constant absorption cross section at low $z$ was also noted by Nestor et al. (2005), who found evidence for little evolution in the physical size of MgII absorbers as a function of redshift over the range they probed. Changes in physical cross section can be brought on by evolution in the typical covering factor as well as the typical radial extent. However, changes in $f_c$ alone likely cannot be responsible for the large drop in the physical cross section of the mean galaxy given the typical values observed at low redshift. While a change in the typical radial extent, $R_\ast$, is a likely cause, other factors could influence our perception of the cross section for the mean galaxy at a given redshift. Evolution in the power law index, $\beta_L$, associated with changes in the relative fraction of high versus low luminosity galaxies giving rise to LLSs could alter the mean cross section calculated here. For example, if at high redshifts the majority of LLSs arise in the circumgalactic gas of relatively high mass, bright galaxies, but at low redshifts the majority of LLSs arise in the environments of low mass, relatively low luminosity galaxies, we would expect an evolution in the mean physical cross section of LLS absorption similar to what is shown in Figure 10. An evolution in $f_c$ with redshift would have an effect similar to an evolving $\beta_L$.
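To make the sub-$L_\ast$ argument concrete, the following sensitivity sketch holds $\ell(X)$ fixed and lowers the luminosity cutoff: admitting fainter hosts raises the comoving number density of candidate galaxies and shrinks the required $R_s$. All parameter values here are illustrative assumptions, not measurements from this survey.

```python
# Sketch of how R_s shrinks as fainter galaxies (lower L_min/L_*) are allowed
# to host LLSs at fixed l(X).  Parameter values are illustrative assumptions.
from mpmath import gammainc, pi, sqrt

l_X, phi_star, alpha = 0.3, 5e-3, -1.3  # hypothetical placeholders
c_over_H0 = 2.998e5 / 70.0              # Mpc, assuming H0 = 70 km/s/Mpc

for x_min in (1.0, 0.25, 0.05):         # L_min in units of L_*
    G = gammainc(alpha + 1.0, x_min)    # upper incomplete gamma, beta_L = 0
    R_s = 1e3 * float(sqrt(l_X / (c_over_H0 * pi * phi_star * G)))
    print(f"L_min = {x_min:4.2f} L*:  R_s ~ {R_s:4.0f} kpc")
```

With these placeholder inputs, $R_s$ drops by a factor of several (from roughly 150 to 35 kpc) as $L_{\min}$ falls from $L_\ast$ to $0.05\,L_\ast$, mirroring the qualitative trend in the left panel of Figure 10.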
As we alluded to above, there are two commonly invoked scenarios for producing circumgalactic gas at such large distances from the central regions of galaxies. In the first, galactic-scale outflows drive gas to large radial distances from the main body of a galaxy, providing for MgII and LLS absorption (Bouché et al., 2006). Evidence for this has been presented by Bouché et al. (2007), who found starburst galaxies within 50 kpc for a substantial fraction of a sample of strong MgII absorbers. Prochter et al. (2006) have also argued the importance of outflows to MgII selected systems based on the similarity in the evolution of the redshift incidence of strong MgII absorbers and the star formation rate density of the Universe. Combined with constraints on the size of the galaxies giving rise to the MgII absorption, this suggests such systems are produced through feedback processes in low mass galactic halos. In addition, other recent works have connected MgII selected absorbers to galactic outflows over a range of redshifts (Nestor et al., 2010; Ménard & Chelouche, 2009; Steidel et al., 2010). The second scenario assumes much of the circumgalactic material traced by LLSs is intergalactic gas being accreted onto the galaxies. To maintain the low apparent ionization conditions of LLSs (Lehner et al., 2009a; Cooksey et al., 2008), the gas should not be shock heated as it is accreted. Such low-ionization gas falls under the phenomenon of cold mode accretion (CMA; Kereš et al., 2005), predicted to be directed along the filamentary structure of the Universe, allowing galaxies to draw gas from large distances. CMA can account for observational properties of galaxies that are inconsistent with the traditional shock-heated accretion models, such as the color bimodality of galaxies and the decline of the cosmic star formation rate at low redshifts (Kereš et al., 2005; Dekel & Birnboim, 2006; Dekel et al., 2009a, b; Kereš et al., 2009). Support for CMA has been suggested in recent studies of MgII absorbers where no correlation between the MgII absorption strength and galaxy color was found, indicating the origin of the absorbers is not tied to the star formation history of the associated galaxy (Chen et al., 2010). Chen et al. (2010) conclude MgII absorbers (and, by extension, LLSs) are a generic feature of galaxy environments and that the gas probed by MgII absorption is likely intergalactic in origin. There is more direct observational evidence to support this origin for some LLSs. The nearly primordial LLS detected by J. Ribaudo et al. (in prep.) within 40 kpc of a near-solar metallicity galaxy is similar in ionization state and metallicity to the low-metallicity absorbers reported in Cooksey et al. (2008) and Zonak et al. (2004). While outflows and infall must play an important role in the composition and maintenance of circumgalactic environments, observations of a few systems suggest gas ejected to large distances during galaxy mergers and tidal interactions could also be responsible for some of the observed LLSs (e.g., Jenkins et al., 2003; Lehner et al., 2009a). Other studies have suggested the high velocity clouds (HVCs) seen about the Milky Way may be analogs for the higher redshift LLSs or MgII systems (Charlton et al., 2000; Richter et al., 2009; Stocke et al., 2010).
In the Milky Way and the nearby Magellanic Clouds, the HVCs probe outflows related to galactic fountains and winds (Keeney et al., 2006; Zech et al., 2008; Lehner & Howk, 2007; Lehner et al., 2009b), the infall of low-metallicity gas (e.g., Wakker, 2001; Wakker et al., 2008; Thom et al., 2008), and the tidal debris stripped from the Magellanic Clouds (and others) as they interact with each other and the Milky Way (e.g., Putman et al., 2003). Thus, these potential LLS analogs have a wide range of origins, although many of the Milky Way HVCs tend to reside at much smaller impact parameters than suggested for the LLSs (Lehner & Howk, 2010; Wakker et al., 2008; Thom et al., 2008). On the other hand, tidal remnants from galactic interactions and gas outflows from its satellites are found about 50–100 kpc from the Milky Way. These local analogs underscore the complexity of defining what kind of phenomena the LLSs trace and whether one dominates over the others. Discriminating between these scenarios using only the correlations of redshifts and equivalent widths of the MgII lines with other parameters has been difficult. The availability of HI column density information for a large number of LLSs offers a path to studying the metallicities of the LLSs/MgII systems at low redshifts. Further studies specifically targeting the galactic environments of LLSs, where the metallicities of the LLSs and the galaxies can be compared, will be critical to further characterize the nature of the absorbers and the role these systems play in the movement of gas into and out of the halos of galaxies. With metallicity playing a fundamental role in discriminating between these two scenarios (e.g., Fumagalli et al., 2011), absorbers will need to be selected based on HI absorption to provide a comprehensive picture of the nature and origin of circumgalactic gas.

## 7. Summary and Concluding Remarks

Using FOS and STIS HST archival observations, we have compiled the largest sample of QSOs to date with coverage of the Lyman limit over the redshift range $z \simeq 0.25$–2.6. We have used these observations to study the population of LLSs over these redshifts. In considering candidates for our R1 (R2) sample, we included only the data from objects where the spectral quality was judged to be sufficient to reliably detect a LLS with $\tau \geq 1$ ($\tau \geq 2$). The sample R1 (R2) contains 229 (249) QSOs over a large total redshift path and includes a total of 61 (50) LLSs. This marks a severalfold increase in the number of LLSs and the redshift path sampled over the most up-to-date works of Stengler-Larrea et al. (1995) and Jannuzi et al. (1998) in this redshift regime. In addition to our statistical sample, we have catalogued 206 low redshift LLSs from the FOS and STIS archives, which increases the sample of known LLSs at these redshifts by a factor of 10. The robustness of our samples allowed us to examine the evolution of LLSs over $z \simeq 0.25$–2.6 for the R1 and R2 samples and out to $z \simeq 5$ for the RP10 sample that combines our R2 sample with the high redshift sample of Prochaska et al. (2010). Our main results are as follows:

1. We find the redshift density $\ell(z)$ to be well fitted by the power law $\ell(z) \propto (1+z)^{\gamma}$ (Equation 7). We find for sample R1 (R2) $\gamma = 1.19 \pm 0.56$ ($\gamma = 1.33 \pm 0.61$). For the RP10 sample, $\ell(z)$ is well modeled by a single power law with $\gamma = 1.83 \pm 0.21$.

2. Assuming a standard $\Lambda$CDM cosmology with our RP10 sample, we find that $\ell(X)$, which is proportional to the product of the comoving number density of absorbers and the average physical cross section of an absorber, decreases by a factor of $\simeq$1.5 from $z \simeq 5$ to $z \simeq 3$.
The evolution of $\ell(X)$ at lower redshifts has slowed considerably, decreasing by a similar factor from $z \simeq 2.6$ to $z \simeq 0.25$. This indicates the environments which give rise to LLSs experienced dramatic changes in the first 2 Gyr after $z \simeq 5$, then more slowly evolved over the following 8 Gyr.

3. We calculate the average proper distance a photon travels before encountering a LLS and compare this result with the predicted mean free path of hydrogen ionizing photons. The evolution of the ratio of this distance to the mean free path from $z \simeq 5$ to 0 suggests these absorption systems have become increasingly important for the absorption of Lyman continuum photons as the Universe has evolved.

4. We model the column density distribution function, $f(N_{\rm HI})$, for the various column density regimes using the functional form $f(N_{\rm HI}) \propto N_{\rm HI}^{-\beta}$. We show that a single power law cannot fit the entire observed regime; instead, several slopes are needed. For the LLSs, we derive $\beta \simeq 1.9$. The shape of $f(N_{\rm HI})$ in the Lyman-$\alpha$ forest regime and in the SLLS regime ($\beta \simeq 0.8$) suggests the distribution has two inflection points. For the DLA regime, $\beta \simeq 1.8$ seems appropriate for connecting between the DLAs and SLLSs. Simple models assuming a single underlying power law, with absorbers photoionized by the UV background, reproduce the distribution remarkably well.

5. We observe little redshift evolution in $f(N_{\rm HI})$ for the SLLSs and DLAs from high ($z \simeq 3.7$) to low redshifts. However, there is evidence that $f(N_{\rm HI})$ evolves from high to low redshift at lower column densities, which coincides with the strong evolution seen in the UV background and star-formation rates of galaxies over similar redshifts.

6. Assuming LLSs arise in circumgalactic gas, we find the physical cross section of the mean galaxy to LLS absorption decreased by a factor of $\sim$9 from $z \simeq 5$ to $z \simeq 2$ and subsequently stayed relatively constant. We argue sub-$L_\ast$ galaxies must contribute significantly to the absorber population.

The authors wish to thank J.X. Prochaska, who kindly made their data available for comparison with this sample prior to publication. We would also like to thank the referee for useful and insightful comments, as well as J.X. Prochaska and J. O’Meara for their valuable comments. Support for this research was provided by NASA through grant HST-AR-11762.01-A from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. Further support comes from NASA grant NNX08AJ31G. This research has made use of the NASA Astrophysics Data System Abstract Service and the Centre de Données de Strasbourg (CDS).

## References

• Bahcall & Peebles (1969) Bahcall, J. N., & Peebles, P. J. E. 1969, ApJ, 156, L7 • Bechtold et al. (2002) Bechtold, J., Dobrzycki, A., Wilden, B., Morita, M., Scott, J., Dobrzycka, D., Tran, K.-V., & Aldcroft, T. L. 2002, ApJS, 140, 143 • Bouché et al. (2007) Bouché, N., Murphy, M. T., Péroux, C., Davies, R., Eisenhauer, F., Förster Schreiber, N. M., & Tacconi, L. 2007, ApJ, 669, L5 • Bouché et al. (2006) Bouché, N., Murphy, M. T., Péroux, C., Csabai, I., & Wild, V. 2006, MNRAS, 371, 495 • Caler et al. (2010) Caler, M. A., Sheth, R. K., & Jain, B. 2010, MNRAS, 406, 1269 • Charlton & Churchill (1998) Charlton, J. C., & Churchill, C. W. 1998, ApJ, 499, 181 • Charlton et al. (2000) Charlton, J. C., Churchill, C. W., & Rigby, J. R. 2000, ApJ, 544, 702 • Chen et al. (2010) Chen, H.-W., Helsby, J. E., Gauthier, J.-R., Shectman, S. A., Thompson, I. B., & Tinker, J. L. 2010, ApJ, 714, 1521 • Chen et al. (2001) Chen, H.-W., Lanzetta, K. M., & Webb, J. K.
2001, ApJ, 556, 158 • Chen & Prochaska (2000) Chen, H.-W., & Prochaska, J. X. 2000, ApJ, 543, L9 • Churchill et al. (2000) Churchill, C. W., Mellon, R. R., Charlton, J. C., Jannuzi, B. T., Kirhakos, S., Steidel, C. C., & Schneider, D. P. 2000, ApJS, 130, 91 • Churchill et al. (2005) Churchill, C., Steidel, C., & Kacprzak, G. 2005, Extra-Planar Gas, 331, 387 • Cooksey et al. (2008) Cooksey, K. L., Prochaska, J. X., Chen, H.-W., Mulchaey, J. S., & Weiner, B. J. 2008, ApJ, 676, 262 • Corbelli & Bandiera (2002) Corbelli, E., & Bandiera, R. 2002, ApJ, 567, 712 • Danforth & Shull (2008) Danforth, C. W., & Shull, J. M. 2008, ApJ, 679, 194 • Davé et al. (2010) Davé, R., Oppenheimer, B. D., Katz, N., Kollmeier, J. A., & Weinberg, D. H. 2010, MNRAS, 408, 2051 • Dekel et al. (2009a) Dekel, A., Sari, R., & Ceverino, D. 2009a, ApJ, 703, 785 • Dekel et al. (2009b) Dekel, A., et al. 2009b, Nature, 457, 451 • Dekel & Birnboim (2006) Dekel, A., & Birnboim, Y. 2006, MNRAS, 368, 2 • Faber et al. (2007) Faber, S. M., et al. 2007, ApJ, 665, 265 • Faucher-Giguère et al. (2009) Faucher-Giguère, C.-A., Lidz, A., Zaldarriaga, M., & Hernquist, L. 2009, ApJ, 703, 1416 • Fumagalli et al. (2011) Fumagalli, M., Prochaska, J. X., Kasen, D., Dekel, A., Ceverino, D., & Primack, J. R. 2011, arXiv:1103.2130 • Gardner et al. (2001) Gardner, J. P., Katz, N., Hernquist, L., & Weinberg, D. H. 2001, ApJ, 559, 131 • Haardt & Madau (1996) Haardt, F., & Madau, P. 1996, ApJ, 461, 20 • Janknecht et al. (2006) Janknecht, E., Reimers, D., Lopez, S., & Tytler, D. 2006, A&A, 458, 427 • Jannuzi et al. (1998) Jannuzi, B. T., et al. 1998, ApJS, 118, 1 • Jena et al. (2005) Jena, T., et al. 2005, MNRAS, 361, 70 • Jenkins et al. (2003) Jenkins, E. B., Bowen, D. V., Tripp, T. M., Sembach, K. R., Leighly, K. M., Halpern, J. P., & Lauroesch, J. T. 2003, AJ, 125, 2824 • Kacprzak et al. (2010) Kacprzak, G. G., Churchill, C. W., Ceverino, D., Steidel, C. C., Klypin, A., & Murphy, M. T. 2010, ApJ, 711, 533 • Kacprzak et al. (2008) Kacprzak, G. G., Churchill, C. W., Steidel, C. C., & Murphy, M. T. 2008, AJ, 135, 922 • Keeney et al. (2006) Keeney, B. A., Danforth, C. W., Stocke, J. T., Penton, S. V., Shull, J. M., & Sembach, K. R. 2006, ApJ, 646, 951 • Kereš & Hernquist (2009) Kereš, D., & Hernquist, L. 2009, ApJ, 700, L1 • Kereš et al. (2005) Kereš, D., Katz, N., Weinberg, D. H., & Davé, R. 2005, MNRAS, 363, 2 • Kereš et al. (2009) Kereš, D., Katz, N., Fardal, M., Davé, R., & Weinberg, D. H. 2009, MNRAS, 395, 160 • Keyes et al. (1995) Keyes, C. D., Koratkar, A. P., Dahlem, M., Hayes, J., Christensen, J., & Martin, S. 1995, FOS Instrument Handbook, v6.0 (Baltimore: STScI) • Kohler & Gnedin (2007) Kohler, K., & Gnedin, N. Y. 2007, ApJ, 655, 685 • Komatsu et al. (2009) Komatsu, E., et al. 2009, ApJS, 180, 330 • Kulkarni et al. (2007) Kulkarni, V. P., Khare, P., Péroux, C., York, D. G., Lauroesch, J. T., & Meiring, J. D. 2007, ApJ, 661, 88 • Lanzetta (1991) Lanzetta, K. M. 1991, ApJ, 375, 1 • Lehner & Howk (2010) Lehner, N., & Howk, J. C. 2010, ApJ, 709, L138 • Lehner et al. (2009a) Lehner, N., Prochaska, J. X., Kobulnicky, H. A., Cooksey, K. L., Howk, J. C., Williger, G. M., & Cales, S. L. 2009a, ApJ, 694, 734 • Lehner et al. (2009b) Lehner, N., Staveley-Smith, L., & Howk, J. C. 2009b, ApJ, 702, 940 • Lehner et al. (2007) Lehner, N., Savage, B. D., Richter, P., Sembach, K. R., Tripp, T. M., & Wakker, B. P. 2007, ApJ, 658, 680 • Lehner & Howk (2007) Lehner, N., & Howk, J. C. 2007, MNRAS, 377, 687 • Lehner et al. (2006) Lehner, N., Savage, B. D., Wakker, B. P., Sembach, K. R., & Tripp, T. M.
2006, ApJS, 164, 1 • Maller et al. (2003) Maller, A. H., Prochaska, J. X., Somerville, R. S., & Primack, J. R. 2003, MNRAS, 343, 268 • Ménard & Chelouche (2009) Ménard, B., & Chelouche, D. 2009, MNRAS, 393, 808 • Miralda-Escudé et al. (1996) Miralda-Escudé, J., Cen, R., Ostriker, J. P., & Rauch, M. 1996, ApJ, 471, 582 • Mo & Miralda-Escudé (1996) Mo, H. J., & Miralda-Escude, J. 1996, ApJ, 469, 589 • Murakami & Ikeuchi (1990) Murakami, I., & Ikeuchi, S. 1990, PASJ, 42, L11 • Nagamine et al. (2010) Nagamine, K., Choi, J.-H., & Yajima, H. 2010, arXiv:1006.5345 • Nestor et al. (2010) Nestor, D. B., Johnson, B. D., Wild, V., Ménard, B., Turnshek, D. A., Rao, S., & Pettini, M. 2010, arXiv:1003.0693 • Nestor et al. (2005) Nestor, D. B., Turnshek, D. A., & Rao, S. M. 2005, ApJ, 628, 637 • J. O’Meara et al. (in prep.) O’Meara, J. M., in prep. • O’Meara et al. (2007) O’Meara, J. M., Prochaska, J. X., Burles, S., Prochter, G., Bernstein, R. A., & Burgess, K. M. 2007, ApJ, 656, 666 • Penton et al. (2004) Penton, S. V., Stocke, J. T., & Shull, J. M. 2004, ApJS, 152, 29 • Péroux et al. (2005) Péroux, C., Dessauges-Zavadsky, M., D’Odorico, S., Sun Kim, T., & McMahon, R. G. 2005, MNRAS, 363, 479 • Péroux et al. (2003) Péroux, C., Dessauges-Zavadsky, M., D’Odorico, S., Kim, T.-S., & McMahon, R. G. 2003, MNRAS, 345, 480 • Péroux et al. (2002) Péroux, C., Dessauges-Zavadsky, M., Kim, T., McMahon, R. G., & D’Odorico, S. 2002, Ap&SS, 281, 543 • Petitjean et al. (1993) Petitjean, P., Webb, J. K., Rauch, M., Carswell, R. F., & Lanzetta, K. 1993, MNRAS, 262, 499 • Petitjean et al. (1992) Petitjean, P., Bergeron, J., & Puget, J. L. 1992, A&A, 265, 375 • Petitjean & Bergeron (1990) Petitjean, P., & Bergeron, J. 1990, A&A, 231, 309 • Prochaska et al. (2010) Prochaska, J. X., O’Meara, J. M., & Worseck, G. 2010, ApJ, 718, 392 • Prochaska et al. (2009) Prochaska, J. X., Worseck, G., & O’Meara, J. M. 2009, ApJ, 705, L113 • Prochaska et al. (2006a) Prochaska, J. X., Weiner, B. J., Chen, H.-W., & Mulchaey, J. S. 2006a, ApJ, 643, 680 • Prochaska et al. (2006b) Prochaska, J. X., O’Meara, J. M., Herbert-Fort, S., Burles, S., Prochter, G. E., & Bernstein, R. A. 2006b, ApJ, 648, L97 • Prochaska & Wolfe (2009) Prochaska, J. X., & Wolfe, A. M. 2009, ApJ, 696, 1543 • Prochaska et al. (2004) Prochaska, J. X., Chen, H.-W., Howk, J. C., Weiner, B. J., & Mulchaey, J. 2004, ApJ, 617, 718 • Prochaska & Burles (1999) Prochaska, J. X., & Burles, S. M. 1999, AJ, 117, 1957 • Prochter et al. (2006) Prochter, G. E., Prochaska, J. X., & Burles, S. M. 2006, ApJ, 639, 766 • Putman et al. (2003) Putman, M. E., Staveley-Smith, L., Freeman, K. C., Gibson, B. K., & Barnes, D. G. 2003, ApJ, 586, 170 • Rao et al. (2006) Rao, S. M., Turnshek, D. A., & Nestor, D. B. 2006, ApJ, 636, 610 • Rauch (1998) Rauch, M. 1998, ARA&A, 36, 267 • Reddy & Steidel (2009) Reddy, N. A., & Steidel, C. C. 2009, ApJ, 692, 778 • J. Ribaudo et al. (in prep.) Ribaudo, J., in prep. • Richter et al. (2009) Richter, P., Charlton, J. C., Fangano, A. P. M., Bekhti, N. B., & Masiero, J. R. 2009, ApJ, 695, 1631 • Sargent et al. (1989) Sargent, W. L. W., Steidel, C. C., & Boksenberg, A. 1989, ApJS, 69, 703 • Schechter (1976) Schechter, P. 1976, ApJ, 203, 297 • Shull et al. (1999) Shull, J. M., Roberts, D., Giroux, M. L., Penton, S. V., & Fardal, M. A. 1999, AJ, 118, 1450 • Simcoe et al. (2006) Simcoe, R. A., Sargent, W. L. W., Rauch, M., & Becker, G. 2006, ApJ, 637, 648 • Songaila & Cowie (2010) Songaila, A., & Cowie, L. L. 2010, arXiv:1007.3262 • Spitzer (1978) Spitzer, L. 
1978, Physical Processes in the Interstellar Medium (New York: Wiley-Interscience) • Steidel et al. (2010) Steidel, C. C., Erb, D. K., Shapley, A. E., Pettini, M., Reddy, N., Bogosavljević, M., Rudie, G. C., & Rakic, O. 2010, ApJ, 717, 289 • Steidel & Sargent (1992) Steidel, C. C., & Sargent, W. L. W. 1992, ApJS, 80, 1 • Stengler-Larrea et al. (1995) Stengler-Larrea, E. A., et al. 1995, ApJ, 444, 64 • Stocke et al. (2010) Stocke, J. T., Keeney, B. A., & Danforth, C. W. 2010, PASA, 27, 256 • Storrie-Lombardi et al. (1994) Storrie-Lombardi, L. J., McMahon, R. G., Irwin, M. J., & Hazard, C. 1994, ApJ, 427, L13 • Theuns et al. (2002a) Theuns, T., Viel, M., Kay, S., Schaye, J., Carswell, R. F., & Tzanavaris, P. 2002a, ApJ, 578, L5 • Theuns et al. (2002b) Theuns, T., Zaroubi, S., Kim, T.-S., Tzanavaris, P., & Carswell, R. F. 2002b, MNRAS, 332, 367 • Thom et al. (2008) Thom, C., Peek, J. E. G., Putman, M. E., Heiles, C., Peek, K. M. G., & Wilhelm, R. 2008, ApJ, 684, 364 • Tripp et al. (2005) Tripp, T. M., Jenkins, E. B., Bowen, D. V., Prochaska, J. X., Aracil, B., & Ganguly, R. 2005, ApJ, 619, 714 • Tytler (1982) Tytler, D. 1982, Nature, 298, 427 • Tytler (1987) Tytler, D. 1987, ApJ, 321, 49 • van der Burg et al. (2010) van der Burg, R. F. J., Hildebrandt, H., & Erben, T. 2010, arXiv:1009.0758 • Veron-Cetty & Veron (2010) Veron-Cetty, M. P., & Veron, P. 2010, VizieR Online Data Catalog, 7258, 0 • Wakker et al. (2008) Wakker, B. P., York, D. G., Wilhelm, R., Barentine, J. C., Richter, P., Beers, T. C., Ivezić, Ž., & Howk, J. C. 2008, ApJ, 672, 298 • Wakker (2001) Wakker, B. P. 2001, ApJS, 136, 463 • Williger et al. (2010) Williger, G. M., et al. 2010, MNRAS, 405, 1736 • Wolfe et al. (2005) Wolfe, A. M., Gawiser, E., & Prochaska, J. X. 2005, ARA&A, 43, 861 • Zech et al. (2008) Zech, W. F., Lehner, N., Howk, J. C., Dixon, W. V. D., & Brown, T. M. 2008, ApJ, 679, 460 • Zheng et al. (1997) Zheng, W., Kriss, G. A., Telfer, R. C., Grimes, J. P., & Davidsen, A. F. 1997, ApJ, 475, 469 • Zheng & Miralda-Escudé (2002) Zheng, Z., & Miralda-Escudé, J. 2002, ApJ, 578, 33 • Zonak et al. (2004) Zonak, S. G., Charlton, J. C., Ding, J., & Churchill, C. W. 2004, ApJ, 606, 196 • Zuo & Phinney (1993) Zuo, L., & Phinney, E. S. 1993, ApJ, 418, 28
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9488974213600159, "perplexity": 2060.1193580102135}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296944452.97/warc/CC-MAIN-20230322211955-20230323001955-00211.warc.gz"}
http://math.stackexchange.com/questions/64190/what-is-%e2%89%a1-operator-equal-to-in-math/64206
# what is ≡ operator equal to in math? [duplicate]

Possible Duplicate: When should I use $=$ and $\equiv$?

I heard about this in our calculus class years ago. I was actually not in that class when the processor explained this. 95% of engineering students do not know about this operator. Trying to recall, what did it mean? And is it standard in Math classes? I think it means approximately equal to. I am not 100% sure about the syntax. Edit: Originally I asked about the === operator -

## marked as duplicate by Zev Chonoles Sep 13 '11 at 16:52

Maybe it means "is defined to be equal" like "≡", except they didn't have "≡" on a typewriter. –  xpda Sep 13 '11 at 14:23 "I was actually not in that class when the processor explained this." That's a curious typo. :-) –  Srivatsan Sep 13 '11 at 14:39 It is often used in the form $f(x) \equiv g(x)$ to say $f(x) = g(x)$ for all $x$, as opposed to $f(x) = g(x)$ for some specific $x$. –  t.b. Sep 13 '11 at 14:40 When used in the sense of @Theo's comment, I think it is common to read it like: "$f(x)$ is identically equal to $g(x)$". –  Srivatsan Sep 13 '11 at 14:42 I read it as "identically equal to" ... and I think of the symbol as "=" with an emphatic underscore. –  Blue Sep 13 '11 at 16:37 There's the obvious meaning: congruence modulo an integer, i.e. "$a \equiv b \pmod c$" (read: "$a$ is congruent to $b$ modulo $c$"), which means $c \mid (b-a)$ ("$c$ divides $b-a$"). As someone else has mentioned, it's also occasionally used to indicate equality of functions by writing $f(x) \equiv g(x)$, but why not just write $f=g$ then?! - I've seen $\equiv$ used for definitions... –  Guess who it is. Sep 13 '11 at 14:50 I think the "identically equal" usage is to really draw the reader's attention to whether you mean a function has a root ($f(x) = 0$) or whether it is identically zero ($f \equiv 0$). –  Austin Mohr Sep 13 '11 at 14:59 As for why you'd want a separate symbol for modular congruence, instead of just using "$=$", it's convenient when writing something like "$-1234 \equiv 9999 - 1234 = 8765 \pmod{9999}$" or "$(a+b)(a-b) = a^2-b^2 \equiv a^2 \pmod{b^2}$". Here, $\equiv$ marks the places where we add or subtract multiples of the modulus, while ordinary $=$ signs denote equivalences that hold also in ordinary arithmetic. –  Ilmari Karonen Sep 13 '11 at 15:01 @Theo: I know you're being facetious, but it's a good question nonetheless. First of all there's the principle that we should reserve "$=$" for actual equality (mostly, anyway...). Then there's the idea that "$a \equiv b \pmod c$" is a single "symbol" that makes a statement about the three integers, $a$, $b$ and $c$. And finally: it's what Gauss used in Disquisitiones –  kahen Sep 13 '11 at 15:04 This is correct, but from what I remember it was not used in this context. Could it mean something else, something simpler? Or maybe our professor could be wrong :( –  TomCat Sep 13 '11 at 15:10 Since your professor was referring to engineering students, it's likely they were referring to the identity symbol, which is used in an expression to mean the left and right hand sides are true for all values. So $\cos^2\theta +\sin^2\theta \equiv 1$ since it's true for all $\theta$, whereas $\cos\theta = 1$ is true only for some. - All algebraic and trigonometric identities can be written using the $\equiv$ symbol, e.g. $(a+b)^2\equiv a^2+2ab+b^2$. –  Américo Tavares Sep 13 '11 at 15:40 A special case of that: a function identically equal to a value (zero here) can be written as $f(x)\equiv 0$.
–  eudoxos Sep 13 '11 at 16:24 The "≡" operator is often used to mean "is defined to be equal." - It's used for various things in various contexts. The one about "defined to be equal" is often rendered as ":=". I haven't seen "$\equiv$" used for that. It's certainly used for congruence with respect to a modulus; e.g. $44\equiv 62 \pmod 6$, etc. It's used for identities like $(x+1)^2 = x^2+2x+1$ when one wants to say that it is true for all values of $x$. However, the variety of different uses that this symbol temporarily has in more advanced work has probably never been tabulated. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8171582221984863, "perplexity": 734.6211509598133}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375635604.22/warc/CC-MAIN-20150627032715-00208-ip-10-179-60-89.ec2.internal.warc.gz"}