url (stringlengths: 15 to 1.13k)
text (stringlengths: 100 to 1.04M)
metadata (stringlengths: 1.06k to 1.1k)
https://dml.cz/handle/10338.dmlcz/134326
# Article

Full entry | PDF (1.4 MB)

Keywords: numerical analysis; convection-diffusion problems; boundary layers; uniform convergence

Summary: Singularly perturbed problems of convection-diffusion type cannot be solved numerically in a completely satisfactory manner by standard numerical methods. This indicates the need for robust or $\epsilon$-uniform methods. In this paper we derive new conditions for such schemes, with special emphasis on parabolic layers.

References:
[AS64] Abramowitz, M., Stegun, I. A.: Handbook of Mathematical Functions. National Bureau of Standards, 1964.
[Ec73] Eckhaus, W.: Matched Asymptotic Expansions and Singular Perturbations. North-Holland, Amsterdam, 1973. MR 0670800 | Zbl 0255.34002
[Em73] Emel'janov, K. V.: A difference scheme for a three-dimensional elliptic equation with a small parameter multiplying the highest derivative. Boundary Value Problems for Equations of Mathematical Physics, USSR Academy of Sciences, Ural Scientific Centre, 1973, pp. 30–42. (Russian)
[Gu93] Guo, W.: Uniformly convergent finite element methods for singularly perturbed parabolic problems. Ph.D. Dissertation, National University of Ireland, 1993.
[HK90] Han, H., Kellogg, R. B.: Differentiability properties of solutions of the equation $-\epsilon^2\Delta u + ru = f(x,y)$ in a square. SIAM J. Math. Anal. 21 (1990), 394–408. DOI 10.1137/0521022 | MR 1038899
[La61] Lax, P. D.: On the stability of difference approximations to solutions of hyperbolic equations with variable coefficients. Comm. Pure Appl. Math. 14 (1961), 497–520. DOI 10.1002/cpa.3160140324 | MR 0145686 | Zbl 0102.11701
[Le76] Lelikova, E. F.: On the asymptotic solution of an elliptic equation of the second order with a small parameter affecting the highest derivative. Differential Equations 12 (1976), 1852–1865. (Russian) MR 0445100
[Ro85] Roos, H.-G.: Necessary convergence conditions for upwind schemes in the two-dimensional case. Int. J. Numer. Meth. Eng. 21 (1985), 1459–1469. DOI 10.1002/nme.1620210808 | MR 0799066 | Zbl 0578.65098
[SK87] Shih, S. D., Kellogg, R. B.: Asymptotic analysis of a singular perturbation problem. SIAM J. Math. Anal. 18 (1987), 1467–1511. DOI 10.1137/0518107 | MR 0902346
[Sh89] Shishkin, G. I.: Approximation of the solutions of singularly perturbed boundary-value problems with a parabolic boundary layer. U.S.S.R. Comput. Maths. Math. Physics 29 (1989), 1–10. DOI 10.1016/0041-5553(89)90109-2 | MR 1011021 | Zbl 0709.65073
[Si90] Shishkin, G. I.: Grid approximation of singularly perturbed boundary value problems with convective terms. Sov. J. Numer. Anal. Math. Modelling 5 (1990), 173–187. DOI 10.1515/rnam.1990.5.2.173 | MR 1122367 | Zbl 0816.65051
[Si92] Shishkin, G. I.: Methods of constructing grid approximations for singularly perturbed boundary value problems. Sov. J. Numer. Anal. Math. Modelling 7 (1992), 537–562. MR 1202653 | Zbl 0816.65072
[ST92] Stynes, M., Tobiska, L.: Necessary $L_2$-uniform conditions for difference schemes for two-dimensional convection-diffusion problems. Computers Math. Applic. 29 (1995), 45–53. DOI 10.1016/0898-1221(94)00237-F | MR 1321058
[Ys83] Yserentant, H.: Die maximale Konsistenzordnung von Differenzapproximationen nichtnegativer Art. Numer. Math. 42 (1983), 119–123. DOI 10.1007/BF01400922 | MR 0716478
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8892708420753479, "perplexity": 2856.05987287539}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663016949.77/warc/CC-MAIN-20220528154416-20220528184416-00633.warc.gz"}
https://jeremykun.com/2014/11/10/the-complexity-of-communication/?replytocom=40038
# The Complexity of Communication

One of the most interesting questions posed in the last thirty years of computer science is to ask how much "information" must be communicated between two parties in order for them to jointly compute something. One can imagine these two parties living on distant planets, so that the cost of communicating any amount of information is very expensive, but each person has an integral component of the answer that the other does not. Since this question was originally posed by Andrew Yao in 1979, it has led to a flurry of applications in many areas of mathematics and computer science. In particular it has become a standard tool for proving lower bounds in many settings such as circuit design and streaming algorithms. And if there's anything theory folks love more than a problem that can be solved by an efficient algorithm, it's a proof that a problem cannot be solved by any efficient algorithm (that's what I mean by "lower bound").

Despite its huge applicability, the basic results in this area are elementary. In this post we'll cover those basics, but once you get past these basic ideas and their natural extensions you quickly approach the state of the art and open research problems. Attempts to tackle these problems in recent years have used sophisticated techniques in Fourier analysis, Ramsey theory, and geometry. This makes it a very fun and exciting field.

As a quick side note before we start, the question we're asking is different from the one of determining the information content of a specific message. That is the domain of information theory, a question that was posed (and answered) decades earlier. Here we're trying to determine the complexity of a problem, in the sense that harder problems force the two parties to communicate more information about their inputs.

## The Basic Two-Player Model

The most basic protocol is simple enough to describe over a dinner table. Alice and Bob each have one piece of information $x,y$, respectively, say they each have a number. And together they want to compute some operation that depends on both their inputs, for example whether $x > y$. But in the beginning Alice has access only to her number $x$, and knows nothing about $y$. So Alice sends Bob a few bits. Depending on the message Bob computes something and replies, and this repeats until they have computed an answer. The question is: what is the minimum number of bits they need to exchange in order for both of them to be able to compute the right answer?

There are a few things to clarify here: we're assuming that Alice and Bob have agreed on a protocol for sending information before they ever saw their individual numbers. So imagine ten years earlier Alice and Bob were on the same planet, and they agreed on the rules they'd follow for sending/replying information once they got their numbers. In other words, we're making a worst-case assumption on Alice and Bob's inputs, and as usual it will be measured as a function of $n$, the lengths of their inputs. Then we take a minimum (asymptotically) over all possible protocols they could follow, and this value is the "communication complexity" of the problem. Computing the exact communication complexity of a given problem is no simple task, since there's always the nagging question of whether there's some cleverer protocol than the one you came up with. So most of the results are bounds on the communication complexity of a problem. Indeed, we can give our first simple bound for the "$x$ greater than $y$" problem we posed above.
Say the strings $x,y$ both have $n$ bits. What Alice does is send her entire string $x$ to Bob, and Bob then computes the answer and sends the answer bit back to Alice. This requires $n + 1$ bits of communication, which proves that the communication complexity of "$x > y$" is bounded from above by $n+1$. A much harder question is: can we do any better?

To make any progress on upper or lower bounds we need to be a bit more formal about the communication model. Basically, the useful analysis happens when the players alternate sending single bits, and this is only off by small constant factors from a more general model. The analysis is also asymptotic: we only distinguish between things like linear complexity $O(n)$ versus sublinear options like $\log(n)$ or $\sqrt{n}$ or even constant complexity $O(1)$. Indeed, the protocol we described for $x > y$ is the stupidest possible protocol for the problem, and it's actually valid for any problem. For this problem it happens to be optimal, but we're just trying to emphasize that nontrivial bounds are all sub-linear in the size of the inputs.

On to the formal model.

Definition: A player is a computationally unbounded Turing machine. And we really mean unbounded. Our players have no time or space constraints, and if they want they can solve undecidable problems like the halting problem or computing Kolmogorov complexity. This is to emphasize that the critical resource is the amount of communication between players. Moreover, it gives us a hint that lower bounds in this model won't come from computational intractability, but instead will be "information-theoretic."

Definition: Let $\Sigma^*$ be the set of all binary strings. A communication protocol is a pair of functions $A,B: \Sigma^* \times \Sigma^* \to \{ 0,1 \}$. The input to these functions $A(x, h)$ should be thought of as follows: $x$ is the player's secret input and $h$ is the communication history so far. The output is the single bit that they will send in that round (which round it is can be determined from the length of $h$, since only one bit is sent in each round). The protocol then runs by having Alice send $b_1 = A(x, \{ \})$ to Bob, then Bob replies with $b_2 = B(y, b_1)$, Alice continues with $b_3 = A(x, b_1b_2)$, and so on. We implicitly understand that the description of a communication protocol includes a termination condition, but we'll omit this from the notation. The length of the protocol is the number of rounds.

Definition: A communication protocol $A,B$ is said to be valid for a boolean function $f(x,y)$ if for all strings $x, y$, the protocol for $A, B$ terminates on some round $t$ with $b_t = 1$ if and only if $f(x,y) = 1$.

So to define the communication complexity, we let the function $L_{A,B}(n)$ be the maximum length of the protocol $A, B$ when run on strings of length $n$ (the worst case for a given input size). Then the communication complexity of a function $f$ is the minimum of $L_{A,B}$ over all valid protocols $A, B$. In symbols,

$\displaystyle CC_f(n) = \min_{A,B \textup{ is valid for } f} L_{A,B}(n)$

We will often abuse the notation by writing the communication complexity of a function as $CC(f)$, understanding that it's measured asymptotically as a function of $n$.
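To make the model concrete, here is a minimal sketch in Python (my own illustrative code, not from the original post) of the alternating single-bit model above, running the naive protocol for "$x > y$": Alice transmits her string bit by bit, Bob sends filler bits until he has all of it, and then Bob sends the answer.

```python
def run_protocol(A, B, x, y):
    """Simulate the alternating single-bit model: A and B map (own input, history)
    to a bit, or to None to signal termination.  Returns (history, bits exchanged)."""
    history = ""
    while True:
        sender, secret = (A, x) if len(history) % 2 == 0 else (B, y)
        bit = sender(secret, history)
        if bit is None:
            return history, len(history)
        history += str(bit)


def naive_greater_than(n):
    """The 'send everything' protocol for f(x, y) = [x > y] on n-bit strings.
    It costs 2n bits here because of the forced alternation (Bob's filler bits);
    in the non-alternating description above it costs n + 1 bits, so the two
    models differ only by a constant factor, as claimed."""
    def A(x, h):
        sent = len(h) // 2                         # bits of x Alice has sent so far
        return int(x[sent]) if sent < n else None  # done once all of x is out
    def B(y, h):
        alices_bits = h[0::2]                      # Alice speaks on even rounds
        if len(alices_bits) < n:
            return 0                               # filler until x is complete
        return 1 if int(alices_bits, 2) > int(y, 2) else 0  # the answer bit
    return A, B


if __name__ == "__main__":
    A, B = naive_greater_than(4)
    history, cost = run_protocol(A, B, "1010", "0111")
    print(history, cost, "answer:", history[-1])   # 1010 > 0111, so the answer is 1
```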
## Matrices and Lower Bounds

Let's prove a lower bound: to compute the equality function you need to send a linear number of bits in the worst case. In doing this we'll develop a general algebraic tool. So let's write out the function $f$ as a binary matrix $M(f)$ in the following way. Write all $2^n$ inputs of length $n$ in some fixed order along the rows and columns of the matrix, and let entry $i,j$ be the value of $f(i,j)$. For example, the 6-bit function $f$ which computes whether the majority of the two players' bits are ones gives an $8 \times 8$ zero-one matrix (the figure is omitted here).

The key insight to remember is that if the matrix of a function has a nice structure, then one needs very little communication to compute it. Let's see why.

Say in the first round the row player sends a bit $b$. This splits the matrix into two submatrices $A_0, A_1$ by picking the rows of $A_0$ to be those inputs for which the row player sends $b=0$, and likewise for $A_1$ with $b=1$. If you're willing to rearrange the rows of the matrix so that $A_0$ and $A_1$ stack on top of each other, then this splits the matrix into two rectangles. Now we can switch to the column player and see which bit he sends in reply to each of the possible choices for $b$ (say he sends back $b'$). This separately splits each of $A_0, A_1$ into two subrectangles corresponding to which inputs for the column player make him send the specific value of $b'$. Continuing in this fashion we recurse until we find a submatrix consisting entirely of ones or entirely of zeros, and then we can say with certainty what the value of the function $f$ is.

It's difficult to visualize because every time we subdivide we move around the rows and columns within the submatrix corresponding to the inputs for each player. As a visual aid, one can draw a possible subdivision of an 8×8 matrix with the values in the rectangles denoting which communicated bits got you there, though it looks a bit strange if the inputs aren't moved around at all (the figure is omitted here).

If we do this for $t$ steps we get $2^t$ subrectangles. A crucial fact is that any valid communication protocol for a function has to give a subdivision of the matrix where all the rectangles are constant, or else there would be two pairs of inputs $(x,y), (x', y')$ which are labeled identically by the communication protocol but which have different values under $f$. So naturally one expects the communication complexity of $f$ to require as many steps as there are in the best decomposition, that is, the decomposition with the fewest levels of subdivision. Indeed, we'll prove this and introduce some notation to make the discourse less clumsy.

Definition: For an $m \times n$ matrix $M$, a rectangle is a submatrix $A \times B$ where $A \subset \{ 1, \dots, m \}, B \subset \{ 1, \dots, n \}$. A rectangle is called monochromatic if all entries in the corresponding submatrix $\left.M\right|_{A \times B}$ are the same. A monochromatic tiling of $M$ is a partition of $M$ into disjoint monochromatic rectangles. Define $\chi(f)$ to be the minimum number of rectangles in any monochromatic tiling of $M(f)$.

As we said, if there are $t$ steps in a valid communication protocol for $f$, then there are $2^t$ rectangles in the corresponding monochromatic tiling of $M(f)$. Here is an easy consequence of this.

Proposition: If $f$ has communication complexity $CC(f)$, then there is a monochromatic tiling of $M(f)$ with at most $2^{CC(f)}$ rectangles. In particular, $\log(\chi(f)) \leq CC(f)$.

Proof. Pick any protocol that achieves the communication complexity of $f$, and apply the process we described above to subdivide $M(f)$. This takes exactly $CC(f)$ steps and produces no more than $2^{CC(f)}$ rectangles. $\square$
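The figures from the post don't survive in this text copy, but the matrix is easy to write out yourself. Here is a minimal sketch (Python with numpy; encoding the inputs as integers is my own choice, not the post's) for the 6-bit majority example from the beginning of this section:

```python
import numpy as np

def function_matrix(f, n):
    """Write out M(f): rows and columns are the 2^n inputs of length n, encoded
    here as the integers 0..2^n - 1 in lexicographic order of their bit strings;
    entry (i, j) is f(i, j)."""
    size = 2 ** n
    return np.array([[f(i, j, n) for j in range(size)] for i in range(size)], dtype=int)

def majority(i, j, n):
    """1 iff more than half of the 2n combined input bits are ones."""
    return int(bin(i).count("1") + bin(j).count("1") > n)

print(function_matrix(majority, 3))   # the 8x8 matrix from the 6-bit example above
```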
This already gives us a bunch of theorems. Take the EQ function, for example. Its matrix is the identity matrix, and it's not hard to see that every monochromatic tiling requires at least $2^n$ rectangles, one for each entry of the diagonal. I.e., $CC(EQ) \geq n$. But we already know that one player can just send all his bits, so actually $CC(EQ) = \Theta(n)$.

Now it's not always so easy to compute $\chi(f)$. The impressive thing to do is to use efficiently computable information about $M(f)$ to give bounds on $\chi(f)$ and hence on $CC(f)$. So can we come up with a better lower bound that depends on something we can compute? The answer is yes.

Theorem: For every function $f$, $\chi(f) \geq \textup{rank }M(f)$.

Proof. This just takes some basic linear algebra. One way to think of the rank of a matrix $A$ is as the smallest number of rank 1 matrices needed to write $A$ as their sum. The theorem is true no matter which field you use to compute the rank, although in this proof and in the rest of this post we'll use the real numbers. If you give me a monochromatic tiling by rectangles, I can view each rectangle as a matrix whose rank is at most one: if its entries are all zeros then the rank is zero, and if its entries are all ones then (putting zeros everywhere outside the rectangle) it is by itself a rank 1 matrix. Adding up these rectangle matrices recovers $A$, so the number of rectangles in any monochromatic tiling is an upper bound on the rank of $A$. In particular the minimum number of rectangles, $\chi(f)$, is an upper bound on the rank of $A$. $\square$

Now bounding something like $CC(EQ)$ from below is even easier, because the rank of $M(EQ) = I_{2^n}$ is just $2^n$.
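As a sanity check, the rank bound is easy to play with numerically. A small sketch (Python with numpy, mine rather than the post's; inputs are encoded as integers as in the previous snippet):

```python
import numpy as np

def function_matrix(f, n):
    """M(f) with the 2^n inputs encoded as the integers 0..2^n - 1."""
    size = 2 ** n
    return np.array([[f(i, j, n) for j in range(size)] for i in range(size)], dtype=float)

EQ  = lambda i, j, n: int(i == j)                                     # equality
GT  = lambda i, j, n: int(i > j)                                      # x > y
MAJ = lambda i, j, n: int(bin(i).count("1") + bin(j).count("1") > n)  # majority of 2n bits

n = 4
for name, f in [("EQ", EQ), ("GT", GT), ("MAJ", MAJ)]:
    r = np.linalg.matrix_rank(function_matrix(f, n))
    # log2(rank M(f)) <= log2(chi(f)) <= CC(f), so this prints a lower bound on CC.
    print(name, "rank:", int(r), "lower bound on CC:", round(float(np.log2(r)), 2))
```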
## Upper Bounds

There are other techniques to show lower bounds that are stronger than the rank and tiling method (because they imply the rank and tiling method). See this survey for a ton of details. But I want to discuss upper bounds a bit, because the central open conjecture in communication complexity is an upper bound.

The Log-Rank Conjecture: There is a universal constant $c$ such that for all $f$, the communication complexity $CC(f) = O((\log \textup{rank }M(f))^c)$.

All known examples satisfy the conjecture, but unfortunately the farthest progress toward the conjecture is still exponentially worse than the conjecture's statement. In 1997 the record was due to Andrei Kotlov, who proved that $CC(f) \leq \log(4/3) \textup{rank }M(f)$. It was not until 2013 that any (unconditional) improvements were made to this, when Shachar Lovett proved that $CC(f) = O(\sqrt{\textup{rank }M(f)} \cdot \log \textup{rank }M(f))$. The interested reader can check out this survey of Shachar Lovett from earlier this year (2014) for detailed proofs of these theorems and a discussion of the methods. I will just discuss one idea from this area that ties in nicely with our discussion: finding an efficient communication protocol for a low-rank function $f$ reduces to finding a large monochromatic rectangle in $M(f)$.

Theorem [Nisan-Wigderson 94]: Let $c(r)$ be a function. Suppose that for any function $f: X \times Y \to \{ 0,1 \}$, we can find a monochromatic rectangle of size $R \geq 2^{-c(r)} \cdot | X \times Y |$, where $r = \textup{rank }M(f)$. Then any such $f$ is computable by a deterministic protocol with communication complexity

$\displaystyle O \left ( \log^2(r) + \sum_{i=0}^{\log r} c(r/2^i) \right ).$

Just to be concrete, this says that if $c(r)$ is polylogarithmic, then finding these big rectangles implies a protocol also with polylogarithmic complexity. Since the complexity of the protocol is a function of $r$ alone, the log-rank conjecture follows as a consequence. The best known results use the theorem for larger $c(r) = r^b$ for some $b < 1$, which gives communication complexity also $O(r^b)$.

The proof of the theorem is detailed, but mostly what you'd expect. You take your function, split it up into the big monochromatic rectangle and the other three parts. Then you argue that when you recurse to one of the other three parts, either the rank is cut in half, or the size of the matrix is much smaller. In either case, you can apply the theorem once again. Then you bound the number of leaves in the resulting protocol tree by looking at each level $i$ where the rank has dropped to $r/2^i$. For the full details, see page 4 of Lovett's survey.

## Multiple Players and More

In the future we'll cover some applications of communication complexity, many of which are related to computing in restricted models such as parallel computation and streaming computation. For example, in parallel computing you often have processors which get arbitrary chunks of data as input and need to jointly compute something. Lower bounds on the communication complexity can help you prove they require a certain amount of communication in order to do that. But in these models there are many players. And the type of communication matters: it can be point-to-point or broadcast, or something more exotic like MapReduce. So before we can get to these applications we need to define and study the appropriate generalizations of communication complexity to multiple interacting parties.

Until then!

## 10 thoughts on "The Complexity of Communication"

1. Dear Jeremy, you may be interested to know that the notion of communication complexity was developed independently by Mount & Reiter in the context of Mechanism Design. See:

   • That is very interesting. Thanks for the reference.

2. Petter: Doesn't the smallest monochromatic tiling of the unit matrix have more than 2^n entries? E.g., the 2×2 unit matrix has the monochromatic rectangles {1}×{1}, {2}×{2}, {1}×{2} and {2}×{1}.

   • Before the explanation, just FYI your example is not a good counterexample because 2^2 = 2*2. But of course, there are $(2^n)^2$ entries in $M(f)$, but even if every entry was its own rectangle, this would only give $2^{O(n)}$ many rectangles, and $\log(2^{O(n)}) = O(n)$ (likewise for lower bounds, so actually it's a Theta). So asymptotically they're the same.

3. Tyson Williams: I am somewhat picky about notation. When defining the communication complexity of f at length n, the min should be over "A,B valid for f". Otherwise, I get confused because I see f on the left but not on the right and wonder why the right does not depend on f (which is not the case, it does depend on f).

   • Good point.

4. The statement of Theorem [Nisan-Wigderson 94] doesn't make sense. I think there should be some restriction placed on the function c. As stated, the hypothesis only depends on c at r = rank M(f), but the conclusion depends on c at many other values. We might as well pick c to be a delta function so that it satisfies the condition in the hypothesis and then adds almost nothing to the communication complexity in the conclusion.

   • I think what's not clear is that in the statement of the hypothesis r and f are not fixed. The proof works by induction, and the smaller submatrices will have smaller rank.

5. Tyson Williams: Ah, yes. Now it makes sense. Thanks.

6. Tyson Williams: Good post.
My favorite application of communication complexity is in proving lower bounds for data structures. Also, two typos: (1) "All known examples satisfy the theorem" -> "All known examples satisfy the conjecture" and (2) end Theorem [Nisan-Wigderson 94] with a period.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 132, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8895314335823059, "perplexity": 359.2401335054372}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400202007.15/warc/CC-MAIN-20200921175057-20200921205057-00782.warc.gz"}
http://www.adras.com/BSOD-in-tcpip6-sys-under-Windows-XP-SP3.t77341-1-1.html
From: Martin Katz on 9 Oct 2008 18:57

I already tried renaming tcpip6.sys. That broke both Outlook and Norton AV (strangely, the OS didn't complain). I have checked that the file is the correct version, etc. Ideally, I want to use IPv6/Teredo for some other things (but I can give them up). I have the Eset firewall locked tightly against IPv6-ICMP (as they call it). I had forgotten to add a separate rule to deny ICMPv6 addressed to localhost (::1). The last crash didn't leave me a dump file. If it crashes again, I will definitely use the dump to try and figure out what is happening. Now, on to memory tests!

Martin
-- Ph.D. in Computer Science. Senior R&D software engineer

"nass" wrote:
> What about renaming tcpip6.sys to tcpip6.sys.old in this path:
> C:\Windows\System32\Drivers\tcpip6.sys.old
> and see if that will eliminate the issue; or, as I said, the minidumps will
> help to pinpoint what is initiating tcpip6.sys and causing this error.
>
> "Martin Katz" wrote:
> > Thank you for the suggestions. This is a new installation of Windows XP SP3
> > (slipstreamed) in a newly formatted partition. The drivers are all up to
> > date. I have already disabled (external) TCPIP6 in the registry. Apparently,
> > this does not disable tunnelling ICMPv6 (even though IPv6 tunnelling is
> > disabled).
> >
> > With Norton firewall, I blocked ICMPv6. Unfortunately, Norton AV kept
> > deleting inappropriate files, so I switched firewall programs and the problem
> > returned.
> >
> > I will have to look into how to tell Outlook to use IPv4. I have already
> > scanned for malware with four different tools. I will do thorough memory
> > testing (I haven't done that for a while).
> >
> > The only other thing I can think of is that I have Visual Studio installed,
> > and that might replace part of the TCP/IP stack.
> >
> > Martin
> >
> > "nass" wrote:
> > > Before going into in-depth troubleshooting, try the easy way first!
> > > Update the motherboard drivers, especially the NIC, to the latest stable driver
> > > and run a thorough scan for malware and viruses.
> > > Test your RAM for faulty or bad bits in memory and see if that will
> > > eliminate those options from the list.
> > > Read the minidumps; they can shed some light on the cause. My hunch goes for
> > > this: ntkrpamp.exe, which means a bad image.
> > > Disable TCPIP6 in the registry:
> > > HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters\DisabledComponents = DWORD 0xFF
> > >
> > > Or uninstall the protocol by running this command:
> > > ipv6 uninstall
> > > Or this:
> > > netsh interface ipv6 uninstall
> > > Then set Outlook to use TCP/IPv4.
> > >
> > > How to disable certain Internet Protocol version 6 (IPv6) components
> > > http://support.microsoft.com/kb/929852/en-us
> > > Information about IPv6
> > > http://www.microsoft.com/technet/network/ipv6/ipv6faq.mspx
> > >
> > > HTH,
> > > nass
> > > ---
> > > http://www.nasstec.co.uk

From: Allan on 10 Oct 2008 23:58

"Martin Katz" wrote in message news:2B21537F-E8A3-4AB8-AE9A-3DA7D612F4B8(a)microsoft.com...
> Thank you for the suggestions. This is a new installation of Windows XP SP3
> (slipstreamed) in a newly formatted partition. The drivers are all up to
> date. I have already disabled (external) TCPIP6 in the registry. Apparently,
> this does not disable tunnelling ICMPv6 (even though IPv6 tunnelling is
> disabled).
>
> With Norton firewall, I blocked ICMPv6.
> Unfortunately, Norton AV kept deleting inappropriate files, so I switched
> firewall programs and the problem returned.

So install Norton Firewall without the AV if that will help solve the problem.

> I will have to look into how to tell Outlook to use IPv4. I have already
> scanned for malware with four different tools. I will do thorough memory
> testing (I haven't done that for a while).
>
> The only other thing I can think of is that I have Visual Studio installed,
> and that might replace part of the TCP/IP stack.

Visual Studio should have nothing to do with the IPv6 stack.

--
Allan
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8618833422660828, "perplexity": 3138.9503733394768}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039745800.94/warc/CC-MAIN-20181119150816-20181119172816-00514.warc.gz"}
http://statesindex.org/oven-beef-duwqwh/61dffc-double-exponential-distribution
## double exponential distribution -\log(2p) & \mbox{for $p > 0.5$} \end{array} \). Plots for the cumulative distribution function, pdf and hazard function, tables with values of skewness and kurtosis are provided. expressed in terms of the standard The following is the plot of the double exponential percent point $\begingroup$ The only additional generality assumed in Shao is that the distribution could be from a multiparameter exponential family. expressed in terms of the standard Laplace distribution, or bilateral exponential distribution, consisting of two exponential distributions glued together on each side of a threshold; Gumbel distribution, the cumulative distribution function of which is an iterated exponential function (the exponential of an exponential function). The equation for the standard double Excel Exponential Distribution Plot. Double Exponential Distribution. (2) It is implemented in the … This paper proposes the distribution function and density function of double parameter exponential distribution and discusses some important distribution properties of order statistics. Would it make sense to use what rstanarm has done also for the double exponential distribution definition in stan::math? Accepted Answer: Tom Lane. In fact, the variance for each $\lambda$ is The larger $\lambda$ is, the smaller the variance is. The double exponential distribution is f(x | \theta)=\frac{1}{2} e^{-|x-\theta|}, \quad-\infty< x<\infty For an i.i.d. By using our services, you agree to our use of cookies. In probability theory and statistics, the Laplace distribution is a continuous probability distribution named after Pierre-Simon Laplace. is the Double exponential distribution. The general formula for the probability density function of the double exponential distribution is $$f(x) = \frac{e^{-\left| \frac{x-\mu}{\beta} \right| }} {2\beta}$$ where μ is the location parameter and β is the scale parameter. The equation for the standard double exponential distribution is Since the general form of probability functions can be expressed in terms of the standard distribution , all subsequent formulas in this section are given for the standard form of the function. The following is the plot of the double exponential inverse survival function. The following graph shows how the distribution changes for different values of the rate parameter lambda: double parameter exponential type distribution X 1, X 2,..., X n are not mutually independent and do not follow the same distribution, but that the X i, X j meet the dependency of TP 2 to establish RTI ( X i | X j), LTD (X i | X j ) and RSCI. Links The expectation value of the exponential distribution. given for the standard form of the function. The double exponential probability distribution forms a convenient base for the generalisation that leads to the exponential generalised gamma distribution. Other names are the Gumbel distribution, the Fisher-Tippett Type 1 distribution or simply the extreme value distribution. This is single exponential function. Vote. density function. Fitting a double exponential cumulative distribution function. We prove that random variables following the double parameter exponential type distribution X1, X2,..., Xn are not mutually independent and do not follow the same distribution, but that the Xi, Xj meet the dependency of TP2 to establish RTI ( Xi | Xj ), LTD (Xi | Xj ) and RSCI. In defining the skew-normal distribution, [1] introduced a method of modifying symmetric distributions to obtain their skewed counterparts. 
For example, in my code, I tried to simulate two exponential with the values of 20 and 500 (units) and the contribution of both of them should equal to 1 (0.4+0.6). Mathematica » The #1 tool for creating Demonstrations and anything technical. The case where μ = 0 and β = 1 is called the standard double exponential distribution. From testing product reliability to radioactive decay, there are several uses of the exponential distribution. Density, distribution function, quantile function and random generation for the double exponential distribution, allowing non-zero location, mu, and non-unit scale, sigma, or non-unit rate, tau \frac{e^{-x}} {2} & \mbox{for $x \ge 0$} \end{array} \). β is the scale parameter. $$\tilde{X}$$ is the sample median. SEE: Extreme Value Distribution, Laplace Distribution. ddoublex; rdoublex; Examples set.seed(123456) ddoublex(1:5,lambda=5) rdoublex(5,mu=10,lambda=5) Documentation reproduced from package … Truncated distributions can be used to simplify the asymptotic theory of robust estimators of location and regression. $$f(x) = \frac{e^{-\left| \frac{x-\mu}{\beta} \right| }} {2\beta}$$, where μ is the location parameter and This section contains functions for working with exponential distribution. It is the constant counterpart of the geometric distribution, which is rather discrete. In statistics, the double exponential distribution may refer to . scale parameter. Excel Exponential Distribution, In this post, you will see the steps to generate random numbers from the exponential distribution in Excel. Double Exponential Probability Density. Laplace distribution, or bilateral exponential distribution, consisting of two exponential distributions glued together on each side of a threshold. 0 ⋮ Vote. Frete GRÁTIS em milhares de produtos com o Amazon Prime. Laplace double exponential distribution when α =1.5, β=2, θ= 1, =1.5 and c=1 Table 2 represents largest value of MSE for in all cases. Tradução de 'double exponential distribution' e muitas outras traduções em português no dicionário de inglês-português. may refer to: A double exponential function Double exponential time, a task with time complexity roughly proportional to such a function Double exponential distribution, which may refer to: Laplace distribution, a bilateral exponential… This statistics video tutorial explains how to solve continuous probability exponential distribution problems. Random number generator exponential distribution Excel. The figure below is the exponential distribution for $\lambda = 0.5$ (blue), $\lambda = 1.0$ (red), and $\lambda = 2.0$ (green). The moment I arrived, the driver closed … The following is the plot of the double exponential inverse survival $$H(x) = \begin{array}{ll} -log{(1 - \frac{e^{x}} {2})} & The following is the plot of the double exponential hazard function. It is also called negative exponential distribution.It is a continuous probability distribution used to represent the time we need to wait before a given event happens. Wiley, New York. where This is also a single exponential distribution. where is the In order to prove the statement in your title, you have to show that the double exponential is not in the exponential family for all possible (finite) choices of the dimension of the parameter space. Sections 4.1, 4.2, 4.3, and 4.4 will be useful when the underlying distribution is exponential, double exponential, normal, or Cauchy (see Chapter 3). 
Probability density function of Laplace distribution is given as: Formula Description Usage Arguments Details Value Author(s) References See Also Examples. Using exponential distribution, we can answer the questions below. Keywords: Order statistics; Double parameter exponential distribution; TP 2; RTI; LTD; RSCI 1. Huber, P. J. and Ronchetti, E. (2009) Robust Statistics (2nd ed.). -\log(2(1 - p)) & \mbox{for p > 0.5} \end{array}$$. ddoublex gives out a vector of density values.rdoublex gives out a vector of random numbers generated by the double exponential distribution. Preference relations with respect to utility are devised to satisfy the assumptions of asymmetry and negative transitivity. Type III (Weibull Distribution): for and 1 for . Wolfram|Alpha » Explore anything with the first computational knowledge engine. Stack Exchange network consists of 176 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share … The double-exponential distribution can be defined as a compound exponential-normal distribution. In order to prove the statement in your title, you have to show that the double exponential is not in the exponential family for all possible (finite) choices of the dimension of the parameter space. function. distribution, all subsequent formulas in this section are 1. real double_exponential_lpdf(reals y | reals mu, reals sigma) The log of the double exponential density of y given location mu and scale sigma. = 0 and Wolfram Demonstrations Project » Explore thousands of free applications across science, mathematics, … Exemplos de uso para "exponential" em português. The function in the Stan Math library does what it is supposed to do, but when used in conjunction with NUTS, it can lead to poor MCMC estimates when the leapfrogger leaps over the discontinuity at zero. Preference relations with respect to utility are devised to satisfy the assumptions of asymmetry and negative transitivity. \mbox{for $x < 0$} \\ 1 & \mbox{for $x \ge 0$} \end{array} \). The exponential distribution is a probability distribution which represents the time between events in a Poisson process. Parameters lambda Average rate of occurrence (λ).This represents the number of times the random events are observed by interval, on average. distribution, all subsequent formulas in this section are We prove that random variables following the double parameter exponential type distribution X 1, X 2 By "double-exponential" I wanted to mean that my actual data have a mixture of two-exponential distributions. density function. Double Exponential Distribution. The case where = 0 and = 1 is called the standard double exponential distribution. For example, in my code, I tried to simulate two exponential with the values of 20 and 500 (units) and the contribution of both of them should equal to 1 (0.4+0.6). location parameter and The exponential distribution is one of the most popular continuous distribution methods, as it helps to find out the amount of time passed in between events. The "double exponential" functional form is usually associated with the Laplace distribution: $$f(t):= a\exp\left(-\frac{|t-c|}b\right)$$ where $a$ measures the height, $b$ the 'slope', and $c$ the location of … The bus comes in every 15 minutes on average. real double_exponential_cdf(reals y, reals mu, reals sigma) The double exponential cumulative distribution function of … Aliases. 
The case Since the general form of probability functions can be $$Z(P) = \begin{array}{ll} \log(2(1-p)) & \mbox{for p \le 0.5} \\ distribuição exponencial dupla {f.} Exemplos de uso. Compute answers using Wolfram's breakthrough technology & knowledgebase, relied on by millions of students & professionals. Section 4.2.1 - The Double Exponential Distribution. It is also sometimes called the double exponential distribution, because it can be thought of as two exponential distributions (with an additional location parameter) spliced together back-to-back, although the term is also sometimes used to refer to the Gumbel distribution. The following is the plot of the double exponential hazard function. Probability density function. Type II (Frechet Distribution): for and 0 for . We see that the smaller the \lambda is, the more spread the distribution is. This is single exponential function. Range: λ ≥ 0. double x. Density, distribution function, quantile function and random generation for the double exponential distribution, allowing non-zero location, mu, and non-unit scale, sigma, or non-unit rate, tau Compre online A Locally Most Powerful Rank Test for the Location Parameter of a Double Exponential Distribution, de Laska, Eugene na Amazon. SEE: Extreme Value Distribution, Laplace Distribution. In nimble: MCMC, Particle Filtering, and Programmable Hierarchical Modeling. The Double Exponential (Laplace) Distribution. The following is the plot of the double exponential probability The convergence. distribution. The Laplace distribution, also called the double exponential distribution, is the distribution of differences between two independent variates with identical exponential distributions (Abramowitz and Stegun 1972, p. 930). Since the general form of probability functions can be Probability Distribution Functions. Did I answer your query? Check 'double exponential distribution' translations into French. In nimble: MCMC, Particle Filtering, and Programmable Hierarchical Modeling. Follow 28 views (last 30 days) Grant on 21 Mar 2012. We see that the smaller the \lambda is, the more spread the distribution is. The equation for the standard double exponential distribution is Since the general form of probability functions can be expressed in terms of the standard distribution, all subsequent formulas in this section are given for the standard form of the function. Cookies help us deliver our services. Type III (Weibull Distribution): for and 1 for . The exponential distribution is a continuous probability distribution with PDF: It is often used to model the time between independent events that happen at a constant average rate. Density, distribution function, quantile function and random generationfor the double exponential distribution,allowing non-zero location, mu,and non-unit scale, sigma, or non-unit rate, tau. Sections 4.5 and 4.6 exam- Laplace distribution represents the distribution of differences between two independent variables having identical exponential distributions. \( F(x) = \begin{array}{ll} \frac{e^{x}} {2} & \mbox{for x < 0} \\ The location at which to compute the cumulative distribution function. Encontre diversos livros escritos por Laska, Eugene com ótimos preços. This paper proposes the distribution function and density function of double parameter exponential distribution and discusses some important distribution properties of order statistics. The following is the plot of the double exponential cumulative function. 
Description Usage Arguments Details Value Author(s) References See Also Examples. \begin{eqnarray*} f\left(x;c\right) & = & \left\{ \begin{array}{ccc} \frac{c}{2}x^{c-1} & & 0 < x < 1 \\ \frac{c}{2}x^{-c-1} & & x \geq 1 \end{array} \right. In statistics, the double exponential distribution may refer to. The following is the plot of the double exponential percent point The case where = 0 and = 1 is called the standard double exponential distribution. The equation for Exponential distribution Random number distribution that produces floating-point values according to an exponential distribution , which is described by the following probability density function : This distribution produces random numbers where each value represents the interval between two random events that are independent but statistically defined by a constant average rate of occurrence (its … expressed in terms of the standard They allow to calculate density, probability, quantiles and to generate pseudo-random numbers distributed according to the law of exponential distribution. Some characteristics of the new distribution are obtained. In this case, a double exponential (Gumbel) distribution is commonly utilized. distribution function. The case where (Assume that the time that elapses from one bus to the next has exponential distribution, which means the total number of buses to arrive during an hour has Poisson distribution.) The convergence. I don't think so. Wolfram Demonstrations Project » Explore thousands of free applications across science, mathematics, … References. \( h(x) = \begin{array}{ll} \frac{e^{x}} {2 - e^{x}} & Note that the double exponential distribution is parameterized in terms of the scale, in contrast to the exponential distribution (see section exponential distribution ), which is parameterized in terms of inverse scale. Laplace double exponential distribution when α =1.5, β=2, θ= 1, =1.5 and c=1 Table 2 represents largest value of MSE for in all cases. Note that the double exponential distribution is also commonly It had probability density function and cumulative distribution functions given by P(x) = 1/(2b)e^(-|x-mu|/b) (1) D(x) = 1/2[1+sgn(x-mu)(1-e^(-|x-mu|/b))]. Wolfram Web Resources. The following is the plot of the double exponential probability The following is the plot of the double exponential survival function. By "double-exponential" I wanted to mean that my actual data have a mixture of two-exponential distributions. BTW, here is an R implementation of the fit to the Gumbel distribution, which is sometimes known as the double exponential. Consider a sequence of N amplitudes, all subjected to the same probability distribution, namely the exponential distribution. expressed in terms of the standard In this lesson, we will investigate the probability distribution of the waiting time, \(X$$, until the first event of an approximate Poisson process occurs. The figure below is the exponential distribution for $\lambda = 0.5$ (blue), $\lambda = 1.0$ (red), and $\lambda = 2.0$ (green). $$S(x) = \begin{array}{ll} 1 - \frac{e^{x}} {2} & \mbox{for x < 0} \\ The difference between two independent identically distributedexponential random variables is governed by … The following is the plot of the double exponential cumulative In this case, a double exponential (Gumbel) distribution is commonly utilized. 
The modeling framework using the Gumbel distribution is popular due to its convenient property of closedness under maximization Glaisher (1872) later showed that for a Laplacian (double exponential) distribution, the least absolute value estimator gives the most probably true value. For large values of N the function (4.2.4) may be written according to the definition of e, as This is the cumulative probability function of the double-exponential distribution. Density, distribution function, quantile function and random generation for the double exponential distribution, allowing non-zero location, mu, and non-unit scale, sigma, or non-unit rate, tau \( \hat{\beta} = \frac{\sum_{i=1}^{N}|X_{i} - \tilde{X}|} {N}$$. bab.la não é responsável por esse conteúdo. We will learn that the probability distribution of $$X$$ is the exponential distribution with mean $$\theta=\dfrac{1}{\lambda}$$. Return double. The case where = 0 and = 1 is called the standard double exponential distribution. Stack Exchange network consists of 176 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share … The equation for the standard double exponential distribution is The equation for the standard double exponential distribution is Since the general form of probability functions can be expressed in terms of the standard distribution , all subsequent formulas in this section are given for the standard form of the function. \begin{eqnarray*} h\left[X\right] & = & \log\left(2e\right)\\ & \approx & 1.6931471805599453094.\end{eqnarray*} Usage. Wolfram Web Resources. function. Look through examples of double exponential distribution translation in sentences, listen to pronunciation and learn grammar. double exponential distribution translation in English-Portuguese dictionary. 0. The following is the plot of the double exponential probability density function. Extreme of the Exponential Distribution. Hello, I have an empirical probability distribution function (PDF) that fits best to a double exponential, i.e. Constructs an exponential_distribution object, adopting the distribution parameters specified either by lambda or by object parm. In this paper, the authors present moment properties of the distribution obtained by adding skewness to the double exponential distribution, i.e. Let’s get some intuition on why the parent distributions converge to these three types. The following is the plot of the double exponential survival function. where μ = 0 and β = 1 is called the standard the Skewed Double Exponential(SDE) distribution ([6]). exponential distribution is. double exponential distribution. Let’s get some intuition on why the parent distributions converge to these three types. Laplace (double exponential) cumulative distribution function with mean equal to mean and standard deviation equal to sd. referred to as the Laplace distribution. And I just missed the bus! Type II (Frechet Distribution): for and 0 for . double Details The PDF function for the exponential distribution returns the probability density function of an exponential distribution, with the scale parameter λ. \mbox{for $x < 0$} \\ x + \log{(2)} & \mbox{for $x \ge 0$} \end{array} \). The following is the plot of the double exponential cumulative hazard distribution function. sample of size n=2 m+1, show that the ml… This paper introduces a new distribution based on the exponential distribution, known as Size-biased Double Weighted Exponential Distribution (SDWED). 
$$G(P) = \begin{array}{ll} \log(2p) & \mbox{for p \le 0.5} \\ In the early 1800s, regression analysis work focused on the conditions under which least squares regression and … Just as we did in our work with deriving the exponential distribution, our strategy here is going to be to first find the cumulative distribution function \(F(w)$$ and then differentiate it to get the probability density function $$f(w)$$. function. This is also a single exponential distribution. The rate (λ) parameter of the distribution. The driver was unkind. Description. = 1 is called the Continuous Univariate Exponential distribution. given for the standard form of the function. standard double exponential distribution. Exponential. This is the functional form used in James Phillips' answer, and perhaps what you intended to code up. function. volume_up. The following is the plot of the double exponential cumulative hazard double exponential distribution. distribution. The equation for the standard double exponential distribution is The mathematical foundation is much more in-depth. Double exponential distribution. 15.7.3 Stan Functions. It is also called double exponential distribution. The general formula for the probability density function of the double exponential distribution is where is the location parameter and is the scale parameter. Mathematica » The #1 tool for creating Demonstrations and anything technical. To simplify the matter, we may note that the double exponential distribution treated in Section 4.2.1 may also formally be introduced as follows: Description. Wolfram|Alpha » Explore anything with the first computational knowledge engine. the standard double exponential distribution is. $\begingroup$ The only additional generality assumed in Shao is that the distribution could be from a multiparameter exponential family. 1 - \frac{e^{-x}} {2} & \mbox{for $x \ge 0$} \end{array} \). Introduction Exponential Distribution Applications. Essas frases provêm de fontes externas e podem ser imprecisas. Is rather discrete encontre diversos livros escritos por Laska, Eugene com ótimos preços the double... The asymptotic theory of Robust estimators of location and regression on each side of a threshold my! ) is the scale parameter data have a mixture of two-exponential distributions questions below distributions glued together each! Para exponential '' em português no dicionário de inglês-português double-exponential '' I wanted to double exponential distribution that my data. Also Examples time between events in a Poisson process my actual data have mixture. That the distribution function and density function of double parameter exponential distribution random numbers from exponential..., consisting of two exponential distributions glued together on each side of a exponential. Video tutorial explains how to solve continuous double exponential distribution exponential distribution, Eugene na Amazon Rank Test for the location of! Radioactive decay, there are several double exponential distribution of the double exponential ) cumulative distribution function density. Defined as a compound exponential-normal distribution law of exponential distribution represents the distribution.! 1 tool for creating Demonstrations and anything technical ) distribution is commonly.... Of skewness and kurtosis are provided proposes the distribution a mixture of two-exponential distributions percent! Get some intuition on why the parent distributions converge to these three types amplitudes, all to! 
# Double exponential distribution

The term "double exponential distribution" can refer to two different distributions: the Laplace distribution, obtained by gluing two exponential distributions back to back on either side of a location parameter, or the Gumbel (Fisher-Tippett type I) extreme value distribution, which arises alongside the Fréchet (type II) and Weibull (type III) limits of maxima. In the Laplace sense, it is the distribution of the difference of two independent, identically distributed exponential random variables, and it can also be represented as a compound exponential-normal distribution. With location $\mu$ and rate $\lambda$, its density is $f(x)=\tfrac{\lambda}{2}e^{-\lambda|x-\mu|}$, so the larger $\lambda$ is, the smaller the variance $2/\lambda^2$ and the less spread out the distribution. The distribution appears throughout statistics: the maximum likelihood estimator of its location parameter is the sample median, a locally most powerful rank test for the location parameter has been studied (Laska), it plays a central role in robust estimation of location and regression (Huber, P. J. and Ronchetti, E. (2009), Robust Statistics, 2nd ed.), and skewed variants such as the skewed double exponential and the size-biased double weighted exponential distribution are obtained by a general method of adding skewness to symmetric distributions. Functions for the density, distribution function, quantiles and random number generation of the double exponential distribution are provided, for example, in the nimble R package (MCMC, Particle Filtering, and Programmable Hierarchical Modeling).
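The Laplace-form facts above can be checked numerically. A minimal Python sketch (added here, not part of the source page; the parameter values and function names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_pdf(x, mu=0.0, lam=1.0):
    """Density of the double exponential (Laplace) distribution:
    f(x) = (lam / 2) * exp(-lam * |x - mu|)."""
    return 0.5 * lam * np.exp(-lam * np.abs(x - mu))

def laplace_sample(n, mu=0.0, lam=1.0):
    """The difference of two independent Exp(lam) variables is Laplace
    with rate lam; shifting by mu gives the general location form."""
    return mu + rng.exponential(1.0 / lam, n) - rng.exponential(1.0 / lam, n)

x = laplace_sample(100_000, mu=0.0, lam=2.0)
print(x.mean(), x.var())   # mean near 0, variance near 2 / lam**2 = 0.5
```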
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9538935422897339, "perplexity": 1097.781284647121}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056900.32/warc/CC-MAIN-20210919190128-20210919220128-00210.warc.gz"}
http://mathhelpforum.com/calculus/2346-aroc-2-a-print.html
# Aroc - 2 • March 26th 2006, 01:31 PM batman123 Aroc - 2 Find the average rate of change for f(x)=2x^2-3x+5 from x=-1 to x=3 • March 26th 2006, 07:56 PM earboth Quote: Originally Posted by batman123 Find the average rate of change for f(x)=2x^2-3x+5 from x=-1 to x=3 Hello, The average rate of change is defined as: $\frac{\Delta f(x)}{\Delta x}$. ( $\Delta$ means difference) Now plug in the values you know and you'll get: $\frac{f(3)-f(-1)}{3-(-1)}=\frac{14-10}{4}=1$. Greetings EB
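As a quick numerical check of this computation (an illustrative snippet added here, not part of the original thread):

```python
def f(x):
    return 2 * x**2 - 3 * x + 5

a, b = -1, 3
print((f(b) - f(a)) / (b - a))   # (14 - 10) / 4 = 1.0
```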
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9301155805587769, "perplexity": 3697.2843332442217}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464050960463.61/warc/CC-MAIN-20160524004920-00013-ip-10-185-217-139.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/3342281/if-no-composite-divisor-of-a-positive-integer-n-divides-any-other-composite-di
# If no composite divisor of a positive integer $n$ divides any other composite divisor of $n$ can $n$ have more than three such divisors? I wrote a program that finds the sequence of such integers and was surprised to find that is https://oeis.org/A217856 (Numbers with three prime factors, not necessarily distinct, except cubes of primes.) I imagined that as $$n$$ grew larger the number of such composite divisors would slowly grow, but this appears not to be the case. I have checked up to n=500000. For example, 12 is the first number in the sequence because the divisors of 12 are 2,3,4 and 6. Now ignore the prime divisors and examine the composites that remain, 4 and 6. 4 does not divide 6, so 12 is in the sequence. Why can't a number with more prime divisors have more such composite divisors? • If a number has three prime factors, it has at most eight total divisors at all. – Elliot G Sep 2 '19 at 18:29 • Maybe I'm misunderstanding the question. Is $n$ any number such that there is no prime factor which appears three times? – Elliot G Sep 2 '19 at 18:30 • @Elliot G Does the example I added help? In the case of 12, 2*2*3 are the three primes. – jnthn Sep 2 '19 at 18:39 • So are you only considering number of the form $p_1p_2p_3$ or $p_1^2p_2$ for distinct primes $p_1,p_2,p_3$? – Elliot G Sep 2 '19 at 18:41 • If for (not necessarily distinct) primes $p_1,\ldots,p_4$ the product $p_1p_2p_3p_4$ divides $n$ then $p_1p_2$ and $p_1p_2p_3$ are composite divisors of $n$ and $p_1p_2 \mid p_1p_2p_3$. – WimC Sep 2 '19 at 18:43 Suppose $$n$$ has at least four prime factors. Then there are primes $$p,q,r,s$$ (not necessarily distinct) such that $$pqrs$$ is a divisor of $$n$$. Then $$pq$$ is a proper composite divisor of $$n$$, and so is $$pqr$$. Moreover, $$pq$$ divides $$pqr$$. That is, if $$n$$ has at least four prime factors, we can always find two composite proper divisors of $$n$$ such that one is a multiple of the other. In other words, if we cannot find two such divisors, then $$n$$ can have at most three prime factors. On the other hand, if $$n$$ has fewer than three prime factors, then $$n$$ has no proper composite divisors. So the condition you've given implies that $$n$$ has exactly three prime factors. If $$n=pqr$$ where $$p,q,r$$ are prime, and $$p \ne q$$, then the numbers $$pr$$ and $$qr$$ are distinct composite proper divisors and not multiples of each other. But the only other possibility is that $$n=p^3$$, in which case $$n$$ has only the single proper composite divisor $$p^2$$. So, if $$n$$ satisfies the following conditions: • $$n$$ has at least two distinct composite proper divisors • No two composite proper divisors of $$n$$ are multiples of each other. then $$n$$ is the product of exactly three prime factors, and is not the cube of a prime. Conversely, any product of three prime factors which is not the cube of a prime has these properties. If I understand you correctly, then it should easily be possible to construct such a number. But it's quite possible such a number is bigger than 500.000. A relatively small example is given by $$n = d_0 \cdot d_1 \cdot d_2 \cdot d_3$$ where \begin{align} d_0 &= 3 \cdot 5 = 15 \\ d_1 &= 5 \cdot 7 = 35 \\ d_2 &= 7 \cdot 13 = 91 \\ d_3 &= 13 \cdot 17 = 221 \\ \end{align} In other words, $$n = 10.558.275$$. You can easily extend the construction to an arbitrary number of divisors. 
• Divisors of number 10558275: 1, 3, 5, 7, 13, 15, 17, 21, 25, 35, 39, 49, 51, 65, 75, 85, 91, 105, 119, 147, 169, 175, 195, 221, 245, 255, 273, 325, 357, 425, 455, 507, 525, 595, 637, 663, 735, 833, 845, 975, 1105, 1183, 1225, 1275,..., and 15 divides 75, among others. – jnthn Sep 2 '19 at 18:33
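For reference, here is a small Python sketch of the kind of search program the question describes (added here for illustration; it uses sympy's divisor and primality helpers and encodes the two conditions from the accepted answer):

```python
from sympy import divisors, isprime

def composite_proper_divisors(n):
    # proper divisors of n that are neither 1, n, nor prime
    return [d for d in divisors(n) if 1 < d < n and not isprime(d)]

def in_sequence(n):
    comps = composite_proper_divisors(n)
    if len(comps) < 2:                      # need at least two such divisors
        return False
    # no composite proper divisor may divide another
    return all(a % b != 0 and b % a != 0
               for i, a in enumerate(comps) for b in comps[:i])

print([n for n in range(2, 200) if in_sequence(n)])
# 12, 18, 20, 28, 30, 42, 44, 45, 50, ...  (the terms of OEIS A217856)
```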
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 31, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9907574653625488, "perplexity": 99.12228603996682}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146485.15/warc/CC-MAIN-20200226181001-20200226211001-00472.warc.gz"}
https://papers.nips.cc/paper/4737-nonparametric-bayesian-inverse-reinforcement-learning-for-multiple-reward-functions
# NIPS Proceedings ## Nonparametric Bayesian Inverse Reinforcement Learning for Multiple Reward Functions ### Abstract We present a nonparametric Bayesian approach to inverse reinforcement learning (IRL) for multiple reward functions. Most previous IRL algorithms assume that the behaviour data is obtained from an agent who is optimizing a single reward function, but this assumption is hard to be met in practice. Our approach is based on integrating the Dirichlet process mixture model into Bayesian IRL. We provide an efficient Metropolis-Hastings sampling algorithm utilizing the gradient of the posterior to estimate the underlying reward functions, and demonstrate that our approach outperforms the previous ones via experiments on a number of problem domains.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8609907031059265, "perplexity": 666.9381835882554}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647576.75/warc/CC-MAIN-20180321043531-20180321063531-00674.warc.gz"}
http://tex.stackexchange.com/questions/26234/chapter-title-in-header-too-long/26236
# Chapter title in header too long

How can I format the header on every page displaying the title of the current chapter? In one of my chapters the title is too long so it is running out of the page. Here's a minimal example of my document:

\documentclass[bibliography=totoc,version=first,listof=totoc,BCOR5mm,DIV12,index=totoc,numbers=noenddot]{scrbook}
\usepackage{bibgerm}
\usepackage[english,german]{babel}
\usepackage[utf8]{inputenc}
\usepackage{a4wide}
\usepackage{wrapfig}
\usepackage{caption}
\begin{document}
\frontmatter
\mainmatter
\chapter{Very long title}
\backmatter
\end{document}

Here's a screenshot of my problem: - Another alternative might be: \chapter[medium-length title for TOC, if wanted]{full title name} \chaptermark{short title for running headers} - You could provide a shorter chapter title via the optional argument to \chapter: \chapter[<short title>]{<long title>} Note that this will also influence the entry in the table of contents. - Thank you for your answer! I read this somewhere before... I think I have to leave the title as it is for the TOC :-/ is it a good idea to try a double line header? –  strauberry Aug 21 '11 at 19:14 @Stefan thank you for that hint, I used chaptermark now –  strauberry Aug 21 '11 at 19:54 The solution that seems to work in every situation (even with math) is: \chapter[\texorpdfstring{TOC title $inline math A$}{TOC in pdf bookmarks}]{\chaptermark{header} Chapter title $inline math A$}
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9404816031455994, "perplexity": 3025.2188172905667}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042981753.21/warc/CC-MAIN-20150728002301-00325-ip-10-236-191-2.ec2.internal.warc.gz"}
https://philarchive.org/rec/HEDHBI
# Hindsight bias is not a bias Analysis 79 (1):43-52 (2019) # Abstract Humans typically display hindsight bias. They are more confident that the evidence available beforehand made some outcome probable when they know the outcome occurred than when they don't. There is broad consensus that hindsight bias is irrational, but this consensus is wrong. Hindsight bias is generally rationally permissible and sometimes rationally required. The fact that a given outcome occurred provides both evidence about what the total evidence available ex ante was, and also evidence about what that evidence supports. Even if you in fact evaluate the ex ante evidence correctly, you should not be certain of this. Then, learning the outcome provides evidence that if you erred, you are more likely to have erred low rather than high in estimating the degree to which the ex ante evidence supported the hypothesis that that outcome would occur. # Author's Profile Brian Hedden Australian National University
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8581075668334961, "perplexity": 2176.4978747548244}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711151.22/warc/CC-MAIN-20221207085208-20221207115208-00409.warc.gz"}
http://mathhelpforum.com/pre-calculus/64163-helppp-parabola-equation-print.html
# HELPPP Parabola equation. • Dec 9th 2008, 12:46 PM emetty90 HELPPP Parabola equation. Find the equation of the parabola which passes through the point (-1,4) and has vertex (-3,3) and axis x=-3. • Dec 9th 2008, 01:15 PM masters Quote: Originally Posted by emetty90 Find the equation of the parabola which passes through the point (-1,4) and has vertex (-3,3) and axis x=-3. Hello emetty90, Start out with this: $y=a(x-h)^2+k$, where $(h, k)$ is the vertex. $y=a(x+3)^2+3$ Now, substitute your point (-1, 4) in for x and y to solve for a. Then put it all back together again in the general form $y=ax^2+bx+c$ or vertex form $y=a(x-h)^2+k$.
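For reference, carrying out the suggested substitution (a worked completion added here, not part of the original thread):
$$4 = a(-1+3)^2 + 3 \quad\Rightarrow\quad 4a = 1 \quad\Rightarrow\quad a = \tfrac{1}{4},$$
so the parabola is $y = \tfrac{1}{4}(x+3)^2 + 3 = \tfrac{1}{4}x^2 + \tfrac{3}{2}x + \tfrac{21}{4}$.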
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8517006635665894, "perplexity": 2586.4513096144137}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887600.12/warc/CC-MAIN-20180118190921-20180118210921-00075.warc.gz"}
http://mathoverflow.net/questions/92917/random-geometries
Random geometries Let $M$ be a smooth $n$-dimensional manifold, and let $FM = GL(M)$ indicate its tangent frame bundle. Let $G$ be a fixed linear subgroup of $GL(n)$, and consider the space $\mathcal S$ of all $G$-structures on $M$. Each element $Q \in \mathcal S$ is a $G$-subbundle of the frame bundle $FM$. For example, if $G = O(n)$, then $\mathcal S$ gives a parametrization of Riemannian metrics on $M$. Question 1) Does the space $\mathcal S$ of $G$-structures have a nice structure in its own right? e.g., can one express it in terms of bundles, quotients, etc? Next, for each $G$-structure $Q \in \mathcal S$, let $\mathcal C_Q$ be a space of principal connections on $FM$ which are compatible with the structure on $Q$. For example, in the Riemannian case $G = O(n)$, we might focus on torsion-free metric connections, in which case $\mathcal C_Q$ consists of a single point, the Levi-Civita connection for $Q$. We could also focus on more general metric connections, in which case $\mathcal C_Q$ would be non-trivial. In contexts I like to work in, a geometry consists of some sort of structure like a metric, represented here by the $G$-structure $Q$, and some notion of transport, represented by a choice of a connection $\Gamma \in \mathcal C_Q$. Is there a more standard name for a geometry consisting of a $G$-structure and a connection? Now, the space of connections is affine, so we may consider the bundle $\pi : \Omega \to \mathcal S$. A point in the bundle $\Omega$ consists of a $G$-structure and a compatible connection, so we may call $\Omega$ the space of geometries on $M$. Again, I would be happy to use a more standard name for this space. The space $\Omega$ is a fiber bundle, and need not globally decompose into a product $\mathcal S \times \mathcal C$ as is customary in probability theory. Nonetheless, it locally looks a product which should be sufficient for most applications. Question 2) As in question 1, does the space of geometries $\Omega$ have a nice structure in its own right? Finally, let's get to probability. Let $\mathcal F$ be the Borel $\sigma$-algebra of $\Omega$. A probability measure $\mathbb P$ over the space $(\Omega, \mathcal F)$ is called the law for a random geometry, or simply a random geometry for short. My intuition is that a measure $\mathbb P$ is a deterministic object which represents a fuzzy geometry: the picture is that the fuzzy geometry is somehow a superposition of many deterministic realizations of geometries. This intuition can be made more precise. Let $f : \Omega \to \mathcal A$ be some observable of a geometry (measurable function), where $\mathcal A$ is some nice algebra (I'm thinking of the real numbers). Then we may want to take the expectation of $f$, which is simply the integral $\int_\Omega f(\omega) \mathbb P(\mathrm d \omega)$. We don't evaluate $f$ for on any fixed geometry, but instead assign each $\omega$ an infinitesimal weight $\mathbb P(\mathrm d\omega)$ and add up the weighted contributions of $f(\omega)$. Question 3) Does the space of random geometries $\mathcal P(\Omega)$ have nice geometric structure? Of course, this space is quite large and there are many such probability measures. We must probably impose additional constraints to do some actual probability theory. Suppose $M$ is a homogeneous space with symmetry group $\Sigma$ (i.e., $\Sigma$ is a Lie group with a transitive, faithful action on $M$). Of course, when $M$ is equipped with a geometry $\omega \in \Omega$, the space $(M, \omega)$ will not be invariant under the action. 
This is not a problem, though, since we could naturally impose a symmetry constraint on the law of a random geometry, rather than the geometry itself. The group $\Sigma$ naturally acts on $\Omega$ in the obvious way: instead of pushing forward a point through a symmetry transformation, we pull back a geometry. This means that $\Sigma$ has a natural action on $\Omega$. We now impose that the law $\mathbb P$ is invariant under the symmetries of the space $M$. That is, we now impose the condition that $\mathbb P = \mathbb P \circ \varphi^{-1}$ for all transformations $\varphi \in \Sigma$. Let $\mathcal P^\Sigma(M)$ denote the probability laws on $\Omega$ which are invariant under $\Sigma$, and call this the space of symmetric random geometries. Note that a specific realization of a symmetric random geometry need not be symmetric; rather, the law is smeared smoothly over the space $M$. Question 4) Does the space of symmetric random geometries $\mathcal P^\Sigma(M)$ have nice structure? Are there any simple, natural examples of any symmetric random geometries $\mathbb P \in \mathcal P^\Sigma(M)$? Thanks for bearing with such a long question. Hopefully, it is accessible to both geometers and probabilists. I thank Ben Bakker, Jarek Korbicz, and Kate Poirier for teaching me some real geometry over the past few weeks, and my apologies to them in advance for the errors which are probably lurking around this post. - Regarding "Is there a more standard name for a geometry consisting of a G-structure and a connection?": Perhaps the concept you want is a "Cartan geometry of type $(G, H)$." (Here $H\subseteq G$ are arbitrary Lie groups.) Standard references are Sharpe (www.ams.org/mathscinet-getitem?mr=1453120) or Čap-Slovák (www.ams.org/mathscinet-getitem?mr=2532439). The definition of a Cartan geometry appears on p71 of Čap-Slovák, which is visible in the AMS preview pdf at www.ams.org/bookstore-getitem/item=surv-154. –  macbeth Apr 2 '12 at 19:32 Actually, a $G$-structure equipped with a connection is not the same as a Cartan geometry. But there are choices of $G$ for which it is. The space of $G$-structures is just the space of sections of $FM/G$. If $G$ is a reductive group, it is a space of tensors, but in general it isn't. –  Ben McKay Apr 8 '12 at 20:37 First of all, a canonical reference for special geometric structures is the book "Compact manifolds with special holonomy" by Dominic Joyce. A1: As you observed, specifying an $O(n)$-structure is the same thing as picking a Riemannian metric, in other words a section of the bundle of positive symmetric 2-tensors. For other $G$'s you can also usually describe the space of $G$-structures as the space of sections of some bundle of tensors or forms (this way of thinking about them will also be quite useful, when you want to specify some probability measure) Here are a few more examples: $G=Sp(n,\mathbb{R})$, nondegenerate 2-form $\omega$, almost symplectic structure $G=GL(n,\mathbb{C})$, endomorphism $J$ of $TM$ with $J^2=-1$, almost complex structure $G=G_2$, positive 3-form $\varphi$ (on a 7-manifold), almost $G_2$ structure You might wish to focus on torsion-free $G$-structures. This amounts to imposing some integrability condition, in the above examples they are $d\omega=0,N_J=0$ and $d\varphi=0=d\star\varphi$ respectively. Also, in many cases the moduli-space of torsion-free $G$-structures is actually finite-dimensional, which might make life easier if you want to do analysis/probability. 
A2: When $G\subseteq O(n)$ then the Levi-Civita connection is the unique one to focus on. In general, I doubt that there is much extra geometric structure on the space of connections in addition to being an affine space. A3: Maybe you can indeed try to put some nice extra structure on the space of probability measures $\mathcal{P}(\Omega)$. One idea would be to consider some Wasserstein-distance where the cost-function fits well to the geometric problem, e.g. for $G_2$ how much does it cost to transport $\varphi_1$ to $\varphi_2$.... Hope that helps at least somewhat... (the parts of your question that I did not address also sound very interesting!) -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9499078989028931, "perplexity": 170.70885114262936}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510267745.6/warc/CC-MAIN-20140728011747-00429-ip-10-146-231-18.ec2.internal.warc.gz"}
https://cambridgequantum.com/thermoelectric-coefficients-of-n-doped-silicon-from-first-principles-via-the-solution-of-the-boltzmann-transport-equation/
## ABSTRACT We present a first-principles computational approach to calculate thermoelectric transport coefficients via the exact solution of the linearized Boltzmann transport equation, also including the effect of nonequilibrium phonon populations induced by a temperature gradient. We use density functional theory and density functional perturbation theory for an accurate description of the electronic and vibrational properties of a system, including electron-phonon interactions; carriers’ scattering rates are computed using standard perturbation theory. We exploit Wannier interpolation (both for electronic bands and electron-phonon matrix elements) for an efficient sampling of the Brillouin zone, and the solution of the Boltzmann equation is achieved via a fast and stable conjugate gradient scheme. We discuss the application of this approach to n-doped silicon. In particular, we discuss a number of thermoelectric properties such as the thermal and electrical conductivities of electrons, the Lorenz number and the Seebeck coefficient, including the phonon drag effect, in a range of temperatures and carrier concentrations. This approach gives results in good agreement with experimental data and provides a detailed characterization of the nature and the relative importance of the individual scattering mechanisms. Moreover, the access to the exact solution of the Boltzmann equation for a realistic system provides a direct way to assess the accuracy of different flavors of relaxation time approximation, as well as of models that are popular in the thermoelectric community to estimate transport coefficients. Mattia Fiorentini and Nicola Bonini
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8435214161872864, "perplexity": 342.12676728720203}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046150129.50/warc/CC-MAIN-20210724032221-20210724062221-00475.warc.gz"}
https://eprints.soton.ac.uk/142507/
University of Southampton Institutional Repository

# Iterative hard thresholding for compressed sensing

Blumensath, T. and Davies, M.E. (2009) Iterative hard thresholding for compressed sensing. Applied and Computational Harmonic Analysis, 27 (3), 265-274. Record type: Article

## Abstract

Compressed sensing is a technique to sample compressible signals below the Nyquist rate, whilst still allowing near optimal reconstruction of the signal. In this paper we present a theoretical analysis of the iterative hard thresholding algorithm when applied to the compressed sensing recovery problem. We show that the algorithm has the following properties (made more precise in the main text of the paper)

• It gives near-optimal error guarantees.
• It is robust to observation noise.
• It succeeds with a minimum number of observations.
• It can be used with any sampling operator for which the operator and its adjoint can be computed.
• The memory requirement is linear in the problem size.
• Its computational complexity per iteration is of the same order as the application of the measurement operator or its adjoint.
• It requires a fixed number of iterations depending only on the logarithm of a form of signal to noise ratio of the signal.
• Its performance guarantees are uniform in that they only depend on properties of the sampling operator and signal sparsity

Published date: November 2009 Keywords: algorithms, compressed sensing, sparse inverse problem, signal Organisations: Other, Signal Processing & Control Grp ## Identifiers Local EPrints ID: 142507 URI: http://eprints.soton.ac.uk/id/eprint/142507 ISSN: 1063-5203 PURE UUID: 57de7036-0fdf-4d46-b37d-9356fe5a5b36 ## Catalogue record Date deposited: 31 Mar 2010 15:52 ## Contributors Author: T. Blumensath Author: M.E. Davies
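The algorithm analysed in the paper is simple to state: iterate a gradient step on the least-squares objective, then hard-threshold to the s largest-magnitude entries. A minimal Python sketch (an illustration added here, not the authors' code; the scaling of A and the toy problem sizes are arbitrary choices):

```python
import numpy as np

def iht(y, A, s, iters=300):
    """Iterative hard thresholding: x <- H_s(x + A^T (y - A x)), where H_s
    keeps the s largest-magnitude entries and zeroes the rest."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + A.T @ (y - A @ x)
        small = np.argsort(np.abs(x))[:-s]    # indices of all but the s largest entries
        x[small] = 0.0
    return x

# Toy demo: recover a 5-sparse vector from 80 noiseless measurements.
rng = np.random.default_rng(0)
n, m, s = 200, 80, 5
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, 2) * 1.01              # scale so the spectral norm is below 1
x_true = np.zeros(n)
support = rng.choice(n, s, replace=False)
x_true[support] = rng.choice([-1.0, 1.0], s) * (1 + rng.random(s))
x_hat = iht(A @ x_true, A, s)
print(np.linalg.norm(x_hat - x_true))         # should be small
```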
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8956312537193298, "perplexity": 1411.262196304354}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657149205.56/warc/CC-MAIN-20200714051924-20200714081924-00539.warc.gz"}
http://conceptmap.cfapps.io/wikipage?lang=en&name=Gibbs_measure
# Gibbs measure In mathematics, the Gibbs measure, named after Josiah Willard Gibbs, is a probability measure frequently seen in many problems of probability theory and statistical mechanics. It is a generalization of the canonical ensemble to infinite systems. The canonical ensemble gives the probability of the system X being in state x (equivalently, of the random variable X having value x) as ${\displaystyle P(X=x)={\frac {1}{Z(\beta )}}\exp(-\beta E(x)).}$ Here, E(x) is a function from the space of states to the real numbers; in physics applications, E(x) is interpreted as the energy of the configuration x. The parameter β is a free parameter; in physics, it is the inverse temperature. The normalizing constant Z(β) is the partition function. However, in infinite systems, the total energy is no longer a finite number and cannot be used in the traditional construction of the probability distribution of a canonical ensemble. Traditional approaches in statistical physics studied the limit of intensive properties as the size of a finite system approaches infinity (the thermodynamic limit). When the energy function can be written as a sum of terms that each involve only variables from a finite subsystem, the notion of a Gibbs measure provides an alternative approach. Gibbs measures were proposed by probability theorists such as Dobrushin, Lanford, and Ruelle and provided a framework to directly study infinite systems, instead of taking the limit of finite systems. A measure is a Gibbs measure if the conditional probabilities it induces on each finite subsystem satisfy a consistency condition: if all degrees of freedom outside the finite subsystem are frozen, the canonical ensemble for the subsystem subject to these boundary conditions matches the probabilities in the Gibbs measure conditional on the frozen degrees of freedom. The Hammersley–Clifford theorem implies that any probability measure that satisfies a Markov property is a Gibbs measure for an appropriate choice of (locally defined) energy function. Therefore, the Gibbs measure applies to widespread problems outside of physics, such as Hopfield networks, Markov networks, Markov logic networks, and bounded rational potential games in game theory and economics. A Gibbs measure in a system with local (finite-range) interactions maximizes the entropy density for a given expected energy density; or, equivalently, it minimizes the free energy density. The Gibbs measure of an infinite system is not necessarily unique, in contrast to the canonical ensemble of a finite system, which is unique. The existence of more than one Gibbs measure is associated with statistical phenomena such as symmetry breaking and phase coexistence. ## Statistical physics The set of Gibbs measures on a system is always convex,[1] so there is either a unique Gibbs measure (in which case the system is said to be "ergodic"), or there are infinitely many (and the system is called "nonergodic"). In the nonergodic case, the Gibbs measures can be expressed as the set of convex combinations of a much smaller number of special Gibbs measures known as "pure states" (not to be confused with the related but distinct notion of pure states in quantum mechanics). In physical applications, the Hamiltonian (the energy function) usually has some sense of locality, and the pure states have the cluster decomposition property that "far-separated subsystems" are independent. In practice, physically realistic systems are found in one of these pure states. 
If the Hamiltonian possesses a symmetry, then a unique (i.e. ergodic) Gibbs measure will necessarily be invariant under the symmetry. But in the case of multiple (i.e. nonergodic) Gibbs measures, the pure states are typically not invariant under the Hamiltonian's symmetry. For example, in the infinite ferromagnetic Ising model below the critical temperature, there are two pure states, the "mostly-up" and "mostly-down" states, which are interchanged under the model's $\mathbb{Z}_2$ symmetry.

## Markov property

An example of the Markov property can be seen in the Gibbs measure of the Ising model. The probability for a given spin $\sigma_k$ to be in state $s$ could, in principle, depend on the states of all other spins in the system. Thus, we may write the probability as $P(\sigma_k = s \mid \sigma_j,\, j \neq k)$. However, in an Ising model with only finite-range interactions (for example, nearest-neighbor interactions), we actually have $P(\sigma_k = s \mid \sigma_j,\, j \neq k) = P(\sigma_k = s \mid \sigma_j,\, j \in N_k)$, where $N_k$ is a neighborhood of the site $k$. That is, the probability at site $k$ depends only on the spins in a finite neighborhood. This last equation is in the form of a local Markov property. Measures with this property are sometimes called Markov random fields. More strongly, the converse is also true: any positive probability distribution (nonzero density everywhere) having the Markov property can be represented as a Gibbs measure for an appropriate energy function.[2] This is the Hammersley–Clifford theorem.

## Formal definition on lattices

What follows is a formal definition for the special case of a random field on a lattice. The idea of a Gibbs measure is, however, much more general than this. The definition of a Gibbs random field on a lattice requires some terminology:

• The lattice: a countable set $\mathbb{L}$.
• The single-spin space: a probability space $(S, \mathcal{S}, \lambda)$.
• The configuration space: $(\Omega, \mathcal{F})$, where $\Omega = S^{\mathbb{L}}$ and $\mathcal{F} = \mathcal{S}^{\mathbb{L}}$.
• Given a configuration $\omega \in \Omega$ and a subset $\Lambda \subset \mathbb{L}$, the restriction of $\omega$ to $\Lambda$ is $\omega_\Lambda = (\omega(t))_{t \in \Lambda}$. If $\Lambda_1 \cap \Lambda_2 = \emptyset$ and $\Lambda_1 \cup \Lambda_2 = \mathbb{L}$, then the configuration $\omega_{\Lambda_1}\omega_{\Lambda_2}$ is the configuration whose restrictions to $\Lambda_1$ and $\Lambda_2$ are $\omega_{\Lambda_1}$ and $\omega_{\Lambda_2}$, respectively.
• The set $\mathcal{L}$ of all finite subsets of $\mathbb{L}$.
• For each subset $\Lambda \subset \mathbb{L}$, $\mathcal{F}_\Lambda$ is the σ-algebra generated by the family of functions $(\sigma(t))_{t \in \Lambda}$, where $\sigma(t)(\omega) = \omega(t)$. The union of these σ-algebras as $\Lambda$ varies over $\mathcal{L}$ is the algebra of cylinder sets on the lattice.
• The potential: a family $\Phi = (\Phi_A)_{A \in \mathcal{L}}$ of functions $\Phi_A : \Omega \to \mathbf{R}$ such that
  1. for each $A \in \mathcal{L}$, $\Phi_A$ is $\mathcal{F}_A$-measurable, meaning it depends only on the restriction $\omega_A$ (and does so measurably);
  2. for all $\Lambda \in \mathcal{L}$ and $\omega \in \Omega$, the following series exists: $H_\Lambda^\Phi(\omega) = \sum_{A \in \mathcal{L},\, A \cap \Lambda \neq \emptyset} \Phi_A(\omega)$.

We interpret $\Phi_A$ as the contribution to the total energy (the Hamiltonian) associated to the interaction among all the points of the finite set $A$. Then $H_\Lambda^\Phi(\omega)$ is the contribution to the total energy of all the finite sets $A$ that meet $\Lambda$. Note that the total energy is typically infinite, but when we "localize" to each $\Lambda$ it can be finite.

• The Hamiltonian in $\Lambda \in \mathcal{L}$ with boundary conditions $\bar\omega$, for the potential $\Phi$, is defined by $H_\Lambda^\Phi(\omega \mid \bar\omega) = H_\Lambda^\Phi\left(\omega_\Lambda \bar\omega_{\Lambda^c}\right)$, where $\Lambda^c = \mathbb{L} \setminus \Lambda$.
• The partition function in $\Lambda \in \mathcal{L}$ with boundary conditions $\bar\omega$ and inverse temperature $\beta > 0$ (for the potential $\Phi$ and $\lambda$) is defined by $Z_\Lambda^\Phi(\bar\omega) = \int \lambda^\Lambda(\mathrm{d}\omega)\, \exp(-\beta H_\Lambda^\Phi(\omega \mid \bar\omega))$, where $\lambda^\Lambda(\mathrm{d}\omega) = \prod_{t \in \Lambda} \lambda(\mathrm{d}\omega(t))$ is the product measure on $\Lambda$.

A potential $\Phi$ is $\lambda$-admissible if $Z_\Lambda^\Phi(\bar\omega)$ is finite for all $\Lambda \in \mathcal{L}$, $\bar\omega \in \Omega$ and $\beta > 0$. A probability measure $\mu$ on $(\Omega, \mathcal{F})$ is a Gibbs measure for a $\lambda$-admissible potential $\Phi$ if it satisfies the Dobrushin–Lanford–Ruelle (DLR) equation

$$\int \mu(\mathrm{d}\bar\omega)\, Z_\Lambda^\Phi(\bar\omega)^{-1} \int \lambda^\Lambda(\mathrm{d}\omega)\, \exp(-\beta H_\Lambda^\Phi(\omega \mid \bar\omega))\, 1_A(\omega_\Lambda \bar\omega_{\Lambda^c}) = \mu(A),$$

for all $A \in \mathcal{F}$ and $\Lambda \in \mathcal{L}$.

### An example

To help understand the above definitions, here are the corresponding quantities in the important example of the Ising model with nearest-neighbor interactions (coupling constant $J$) and a magnetic field ($h$), on $\mathbf{Z}^d$:

• The lattice is simply $\mathbb{L} = \mathbf{Z}^d$.
• The single-spin space is $S = \{-1, 1\}$.
• The potential is given by
$$\Phi_A(\omega) = \begin{cases} -J\,\omega(t_1)\omega(t_2) & \text{if } A = \{t_1, t_2\} \text{ with } \|t_2 - t_1\|_1 = 1 \\ -h\,\omega(t) & \text{if } A = \{t\} \\ 0 & \text{otherwise} \end{cases}$$
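As a small numerical illustration of the finite-volume objects defined above (a sketch added here, not part of the article; the chain length, coupling, field and inverse temperature are arbitrary illustrative values, and free boundary conditions are used):

```python
import itertools
import numpy as np

# Finite-volume Gibbs weights for a short 1-D Ising chain: the energy sums the
# nearest-neighbour and single-site terms of the potential given above.
L, J, h, beta = 4, 1.0, 0.5, 1.0

def energy(sigma):
    # H = -J * sum_i s_i s_{i+1} - h * sum_i s_i   (free boundary conditions)
    return (-J * sum(sigma[i] * sigma[i + 1] for i in range(L - 1))
            - h * sum(sigma))

configs = list(itertools.product([-1, 1], repeat=L))
weights = np.array([np.exp(-beta * energy(s)) for s in configs])
Z = weights.sum()          # partition function of the finite volume
probs = weights / Z        # canonical (finite-volume Gibbs) probabilities
print(Z, probs.max())      # the most likely configurations are all spins aligned with h
```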
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 49, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9877938628196716, "perplexity": 471.72934048626587}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578605510.53/warc/CC-MAIN-20190423134850-20190423160850-00291.warc.gz"}
http://advancedintegrals.com/2017/01/nonlinear-euler-sums-using-nielsen-formula/
# Nonlinear Euler sums using Nielsen formula According to Nielsen we have the following: If $$f(x)= \sum_{n= 0}^\infty a_n x^n$$ Then we have the following $$\tag{1}\int^1_0 f(xt)\, \mathrm{Li}_2(t)\, dt=\frac{\pi^2}{6x}\int^x_0 f(t)\, dt -\frac{1}{x}\sum_{n=1}^\infty \frac{a_{n-1} H_{n}}{n^2}x^n$$ Now let $a_n = H_n$; then we have the following $$f(x)=\sum_{n=1}^\infty H_n x^n=-\frac{\log(1-x)}{1-x}$$ $$-\int^1_0 \frac{\log(1-xt)}{1-xt} \mathrm{Li}_2(t)\, dt=-\frac{\pi^2}{6x}\int^x_0 \frac{\log(1-t)}{1-t}\, dt-\sum_{n=1}^\infty \frac{H_{n-1} H_{n}}{n^2}x^{n-1}$$ Hence, gathering the integrals and letting $x\to 1$, we have $$\sum_{n=1}^\infty \frac{H_{n-1} H_{n}}{n^2}=\int^1_0\frac{\log(1-x)\left(\mathrm{Li}_2(x)-\zeta(2)\right)}{1-x}\, dx$$ Integrating by parts we have $$\sum_{n=1}^\infty \frac{H_{n-1} H_{n}}{n^2}=-\frac{1}{2}\int^1_0\frac{\log^3(1-x)}{x}\, dx$$ Hence, since $H_{n-1}H_n = H_n^2 - H_n/n$, we have $$\sum_{n=1}^\infty\frac{ H^2_{n}}{n^2}=\sum_{n=1}^\infty \frac{ H_{n}}{n^3}-\frac{1}{2}\int^1_0\frac{\log^3(1-x)}{x}\, dx=\frac{17 \pi^4}{360}$$ This entry was posted in Euler sum, Polylogarithm. Bookmark the permalink.
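A quick numerical sanity check of the final identity (an illustrative Python snippet added here; the truncation point is arbitrary and the series converges slowly, with error of order $(\log N)^2/N$):

```python
import math

H, total = 0.0, 0.0
for n in range(1, 1_000_001):
    H += 1.0 / n                 # running harmonic number H_n
    total += H * H / n**2

print(total)                     # partial sum, approaches the closed form from below
print(17 * math.pi**4 / 360)     # 4.59987...
```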
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9976832270622253, "perplexity": 1008.982168513291}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247512461.73/warc/CC-MAIN-20190222013546-20190222035546-00608.warc.gz"}
http://mathhelpforum.com/advanced-statistics/193530-conditional-probability-integer-valued-random-variables-solution-checking-print.html
# Conditional probability of integer-valued random variables: (solution checking) • December 5th 2011, 06:25 PM I-Think Conditional probability of integer-valued random variables: (solution checking) Suppose $X$ and $Y$ are both integer-valued random variables. Let $p(i|j)=P[X=i|Y=j], q(j|i)=P[Y=j|X=i]$ Show that $P(X=i,Y=j) = \frac{p(i|j)}{\sum_{i}\frac{p(i|j)}{q(j|i)}}$ Solution $p(i|j)=P[X=i|Y=j]=\frac{P[X=i,Y=j]}{P[Y=j]}$ $P[X=i,Y=j]=P[Y=j]p(i|j)$ Consider $\frac{p(i|j)}{q(j|i)}=\frac{P[X=i]}{P[Y=j]}$ So $\sum_{i}\frac{p(i|j)}{q(j|i)}=\sum_{i}\frac{P[X=i]}{P[Y=j]}=\frac{1}{P[Y=j]}\sum_{i}P[X=i]=\frac{1}{P[Y=j]}$ as $\sum_{i}P[X=i]=1$ So $P[Y=j]=\frac{1}{\sum_{i}\frac{p(i|j)}{q(j|i)}}$ So our result is proven $P(X=i,Y=j) = \frac{p(i|j)}{\sum_{i}\frac{p(i|j)}{q(j|i)}}$ • December 6th 2011, 11:47 AM Moo Re: Conditional probability of integer-valued random variables: (solution checking) Hello, I corrected some LaTeX errors and a typo in the way you defined q (it's the conditional probability, not the joint probability as you initially wrote). And your solution is perfect :p
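As an extra numerical check of the identity (a short sketch added here, not part of the thread; the joint pmf is a random example):

```python
import numpy as np

rng = np.random.default_rng(1)
P = rng.random((4, 5)); P /= P.sum()          # joint pmf P[X=i, Y=j]
p = P / P.sum(axis=0, keepdims=True)          # p(i|j) = P[X=i | Y=j]
q = P / P.sum(axis=1, keepdims=True)          # q(j|i) = P[Y=j | X=i]
recovered = p / (p / q).sum(axis=0, keepdims=True)
print(np.allclose(recovered, P))              # True
```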
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9013792276382446, "perplexity": 2698.5587323861914}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246655589.82/warc/CC-MAIN-20150417045735-00191-ip-10-235-10-82.ec2.internal.warc.gz"}
https://www.maplesoft.com/support/help/maplesim/view.aspx?path=Units%2FAddBaseUnit
Units/AddBaseUnit - add a base unit and associated base dimension

Calling Sequence

AddBaseUnit(unit, 'context'=unit_context, 'dimension'=dimension_name, opts)

Parameters

unit - symbol; unit name
unit_context - symbol; unit context. For information on unit contexts, see Details.
dimension_name - symbol; dimension name
opts - (optional) equation(s) of the form option=value where option is one of 'abbreviation', 'abbreviations', 'check', 'default', 'plural', 'prefix', 'spelling', 'spellings', 'symbol', or 'symbols'; specify options for the unit

Description

• The AddBaseUnit(unit, 'context'=unit_context, 'dimension'=dimension_name, opts) calling sequence adds a base unit in conjunction with adding a base dimension to the Units package for the current session.
• No new unit name or unit symbol can evaluate to any of the symbols in the following list: true, false, undefined, infinity, energy, default, symbolic, base, check, context, dimension, name, none.
• The 'context'=unit_context equation specifies the context of the unit. In this way, two units with the same name but different values can be distinguished.
• The 'dimension'=dimension_name equation specifies the name of the dimension added. It is the object returned by procedures such as convert/dimensions.
• The opts argument can contain one or more of the following equations that describe unit and dimension options.

'abbreviation' = symbol
This option sets the default abbreviation of the unit for display.

'abbreviations' = symbol or set(symbol)
This option sets the list of abbreviations (other than the default which is set by the 'abbreviation' option) for the unit. An abbreviation is similar to a symbol, except that it encompasses both the unit name and the context, whereas a unit symbol is valid for any context. For example, the technical atmosphere (atmosphere[technical]) has the abbreviation at, whereas the unit name atmosphere has the unit symbols atm and atmos. Thus, atm[technical] refers to technical atmospheres, but at[standard] does not refer to standard atmospheres.

'check' = truefalse
This option determines whether the added unit name, symbol, and abbreviation are compared with existing unit names, symbols, abbreviations, and spellings. The default value of 'check' is true. An error is returned and the unit is not added if there is a conflict. For example, if a user attempts to add a new unit with an abbreviation Ys or a unit with the symbol Ys and the context SI, it conflicts with the symbol for the yottasecond. Unless the 'check'=false option is included, the AddBaseUnit routine returns an error and does not add the unit. However, a unit with the symbol Ys and a context different from SI can be added without conflict. In this case, Ys refers to the new unit and Ys[SI] is required to refer to the yottasecond. A new unit with the name Ys does not conflict with the yottasecond. However, to refer to the new unit, the user must include its context, for example, Ys[new], because Ys refers to the yottasecond.

'default' = truefalse
For a unit with a context set as the default, the use of the unit name or an associated unit symbol without a context or modifier refers to its context. The default value of 'default' is false.
If no unit context is set as the default, the setting of this option to false is ignored.

'plural' = symbol
This option sets the default unit plural spelling for display. If this option is not given, the argument unit is used as the default plural spelling.

'prefix' = prefix_style
This option specifies what type of prefixes the given unit takes. This option can be set to false (explicitly indicating that the unit does not take prefixes), SI, IEC, SI_positive, SI_negative, or a set of symbols that is a subset of either SI prefixes or IEC prefixes. The values SI_positive and SI_negative specify units that take only prefixes that are positive powers of ten or negative powers of ten, respectively. For example, it is common to refer to milliliters and centiliters but not kiloliters (cubic meters). Similarly, it is common to refer to kilotonnes and megatonnes but not millitonnes (kilograms).

'spelling' = symbol
This option sets the default unit spelling for display. If this option is not given, the argument unit is used as the default spelling.

'spellings' = symbol or set(symbol)
To accommodate regionalized spellings of units, for example, meter versus metre, a facility has been included that allows the Units package to accept various spellings of units. Any symbol given to the 'spelling' option is treated as unit. For example, by default, the accepted alternate spellings of the meter are: metre, meters, and metres.

'symbol' = symbol
This option sets the default unit symbol for display. If this option is not given, the default symbol is chosen from the option 'symbols' (if any).

'symbols' = symbol or set(symbol)
A unit symbol can be used in place of a unit name. For units that take SI or IEC prefixes, any associated symbol takes the associated symbol prefix. For example, milliliter is equivalent to mL and Kibibyte is equivalent to KiB.

Examples

> with(Units):
> AddBaseUnit('individual', 'context'='human', 'dimension'='animal', 'spellings'='individuals')
> AddUnit('company', 'context'='human', 'conversion'=2*'individual', 'spellings'='companies')
> AddUnit('crowd', 'context'='human', 'conversion'=3*'individual', 'spellings'='crowds')
> convert(9, 'units', 'individuals', 'crowds')

3          (1)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 23, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9599648714065552, "perplexity": 1877.7715445980675}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988961.17/warc/CC-MAIN-20210509062621-20210509092621-00499.warc.gz"}
http://mathhelpforum.com/advanced-statistics/132376-poisson-process.html
Customers arrive at an ATM at the times of a Poisson process with rate of 10 per hour. Suppose that the amount of money withdrawn on each transaction has a mean of $30 and a standard deviation of $20. Find the mean and standard deviation of the total withdrawals in 8 hours. 2. Let $N=(N_t:t \geq 0)$ be your Poisson process and $X_1,X_2,...$ be i.i.d. withdrawal amounts (mean 30 and standard deviation 20). You are looking for $\sum_{i=1}^{N_8} X_i$. A nice hint is that $\mathbb{E}[\mathbb{E}[X|Y]]=\mathbb{E}[X]$.
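For reference (a worked completion added here, not part of the original thread), conditioning on $N_8$ gives the standard compound-Poisson formulas:
$$\mathbb{E}\Big[\sum_{i=1}^{N_8} X_i\Big] = \mathbb{E}[N_8]\,\mathbb{E}[X] = 80 \cdot 30 = 2400,$$
$$\operatorname{Var}\Big(\sum_{i=1}^{N_8} X_i\Big) = \mathbb{E}[N_8]\,\mathbb{E}[X^2] = 80\,(20^2 + 30^2) = 104000,$$
so the standard deviation is $\sqrt{104000} \approx 322.5$ dollars.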
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8954370021820068, "perplexity": 287.72450403272296}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281069.89/warc/CC-MAIN-20170116095121-00561-ip-10-171-10-70.ec2.internal.warc.gz"}
https://www.lessonplanet.com/teachers/fact-families-2nd
# Fact Families In this fact families worksheet, 2nd graders add and subtract 16 math problems associated with the fact families of 13, 14, 16 and 17.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9654102921485901, "perplexity": 3989.336960489111}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187822145.14/warc/CC-MAIN-20171017163022-20171017183022-00253.warc.gz"}
http://math.stackexchange.com/questions/128670/how-to-find-all-possible-values-that-can-be-formed-by-two-integers-a-and-b-using
# How to find all possible values that can be formed by two integers A and B using only +/- operator any number of times. Given two number A and B, how to find all the numbers in a large range say 'k' which can be formed by adding or subtracting A and B together any number of times. Example: Suppose the numbers are 3 (A) and 8 (B). We wish to find all numbers up to 4 (k) which can be formed. In this example all numbers can be formed. 1 = 3+3+3-8 2 = 8-3-3 3 = 3 4 = 8+8-3-3-3-3 I think all the numbers which are divisible by the GCD of A and B can be formed. But I am unable to prove it. I tried it on many inputs and found it working, still couldn't figure out a proof. I don't know whether I am right or wrong. Help me! - You are correct that all numbers divisible by the GCD can be formed. Look up "extended Euclidean algorithm". –  Rotwang Apr 6 '12 at 11:50 @Rotwang Thank You sir! :) –  Login Test Apr 6 '12 at 12:02 (+1) for a well-posed question. –  The Chaz 2.0 Apr 6 '12 at 14:19 Congratulations on discovering an important problem, and forming the right conjecture! Instead of talking about addition and subtraction, equivalently we can ask the following question. We are given two integers $a$ and $b$, not both $0$. Which positive integers can be expressed in the form $ax+by$, where $x$ and $y$ are integers (not necessarily positive)? Call a positive integer which is so expressible good. It is clear that there are some positive integers expressible in the form $ax+by$, so there are good positive integers. Let $d$ be the smallest good positive integer. We will show that $d$ is the greatest common divisor of $a$ and $b$. The main step in doing this is to show that $d$ divides $a$ and $d$ divides $b$. We show that $d$ divides $a$. The proof that $d$ divides $b$ is essentially the same. Since $d$ is good, it follows that $d=au+bv$ for some integers $u$ and $v$. We try to divide $a$ by $d$. We get $$a=qd+r,$$ where $q$ is the quotient, and where the "remainder" $r$ satisfies $0\le r<d$. But $d=au+bv$, and therefore $$a=q(au+bv)+r$$ or equivalently $$r=a(1-qu) +b(-v).$$ So $r$ is expressible in the form $ax+by$, with $x=1-qu$ and $y=-v$. But since by hypothesis $d$ was the smallest good positive integer, and $r<d$, the number $r$ cannot be positive. We are forced to conclude that $r=0$. So $a=qd$, meaning that $d$ divides $a$, and we are finished with this part of the argument. Moreover, $d$ is the largest positive integer that divides both $a$ and $b$. For if $z$ divides $a$ and $b$, then since $d=au+bv$ we have that $z$ divides $d$, and therefore $z \le d$. Now that we know that the greatest common divisor $d$ of $a$ and $b$ is expressible in the form $ax+by$, it is easy to see that any positive multiple $kd$ of $d$ is also expressible in the form $ax+by$. Remark: There is a substantial theoretical bonus here. We have proved that for any positive integers $a$ and $b$, there exists a positive integer $d$ such that $d$ divides $a$ and $b$, and such that any $z$ that divides $a$ and $b$ must divide $d$. This is certainly true of the smallish integers that we are familiar with, but it is not obvious that the result holds for all positive integers. Now we know that it does. Remark: There are other approaches to a proof that are in a practical sense more informative. Let $a$ and $b$ be very large integers whose greatest common divisor is $1$, such as $2^{50}$ and $3^{37}$. By the above argument, there are integers $x$ and $y$ such that $ax+by=1$. 
There are practical situations when we want to find such numbers $x$ and $y$ explicitly. That can be done very efficiently by using the Extended Euclidean Algorithm. The ideas that lead to this algorithm give an alternate proof of the result your post asks about. The ideas that led to the proof can be substantially generalized. For example, suppose that $A(t)$ and $B(t)$ are polynomials (say with real coefficients) such that there is no polynomial of degree $\geq 1$ (that is, no non-constant polynomial) that divides both $A(t)$ and $B(t)$. Then there are polynomials $X(t)$ and $Y(t)$ such that $A(t)X(t)+B(t)Y(t)$ is identically equal to $1$. The proof of this important fact is in basic structure very similar to the proof we gave above. - This answer is useful... and I sure wish I had seen such an accessible discussion years ago! –  The Chaz 2.0 Apr 6 '12 at 14:20 Thank You sir! Very-very clear and methodical... I must say its Awesome!! –  Login Test Apr 6 '12 at 15:25 You may enjoy going through a book on Elementary Number Theory. (Elementary does not mean easy, it is in this case a technical term.) –  André Nicolas Apr 6 '12 at 15:29 @AndréNicolas Thank you again. Nice contents! I'll go through it. :) –  Login Test Apr 7 '12 at 7:34
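The Extended Euclidean Algorithm mentioned above fits in a few lines; the code below (my own illustration, using the question's numbers $A=3$, $B=8$, $k=4$) returns the gcd together with a Bézout pair $x, y$ and then lists which values in the range are representable.

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, x1, y1 = extended_gcd(b, a % b)
    return (g, y1, x1 - (a // b) * y1)

a, b, k = 3, 8, 4
g, x, y = extended_gcd(a, b)
print(g, x, y, a * x + b * y)        # gcd and one Bezout pair: 1, 3, -1, 1

# Every multiple of gcd(a, b) in 1..k is representable; nothing else is.
representable = [n for n in range(1, k + 1) if n % g == 0]
print(representable)                  # [1, 2, 3, 4] for a=3, b=8 (gcd 1)
```

Because $\gcd(3,8)=1$, every integer in the range is a multiple of the gcd, which matches the worked example in the question.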
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9262807369232178, "perplexity": 112.28277618753376}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507447020.15/warc/CC-MAIN-20141017005727-00196-ip-10-16-133-185.ec2.internal.warc.gz"}
https://quant.stackexchange.com/questions/14004/prove-that-the-binomial-algorithm-implies-the-arbitrage-free-price-at-t-0-of-a-t
# Prove that the binomial algorithm implies the arbitrage free price at t=0 of a T-claim In Tomas Bjork's Arbitrage Theory in Continuous Time (or here), there are these propositions. How does the first formula follow from the algorithm? I get that $\Pi(0;X) = V_0(0)$, but I don't really get what $E^{Q}[X]$ means...is that equal to $q_uu+q_dd$? Anyway using the algorithm I got $V_0(k) = \frac{1}{(1+R)^T} \sum_{l=0}^{T} V_T(k+l)q_u^{T-l}q_d^{l}$...is $\sum_{l=0}^{T} V_T(l)q_u^{T-l}q_d^{l}$ supposed to be $=q_uu+q_dd$? I believe this is the way that Björk proposes, however I believe "my" way below is more elegant. The trick in Björk's case is to realize that in each "iteration" we get an expected value: $$V_0(0) = \frac{1}{1+R}\left(q_u V_{1}(1) + q_d V_1(0) \right) \\ = \frac{1}{(1+R)^{2}} \left( q_u^2 V_2(2) + 2q_uq_d V_2(1) + q_d^2 V_2(0) \right) \\ = \frac{1}{(1+R)^{2}}E^Q[V_2].$$ Continuing in the same fashion you will arrive at $$V_0(0) = \frac{1}{(1+R)^{T}}E^Q[V_T],$$ however to make this formal you should make some kind of induction argument. My Method: My method does not use Proposition 2.24 but instead the fact that we already know the single period Binomial Model and the Law of total expectation. We already know that Proposition 2.25 holds true if $T=1$ since this reduces to the single-period Binomial model. So assume that $T \geq 2$ and assume that Proposition 2.25 holds true for $T-1$ periods. We then know from the induction assumption that $$\Pi(1; X) = \frac{1}{(1+R)^{T-1}}E^Q[\Phi(S_T)|Z_1].$$ But $$\Pi(0; X) = \frac{1}{(1+R)} E^Q[\Pi(1; X)],$$ and hence $$\Pi(0; X) = \frac{1}{(1+R)} E^Q[\Pi(1; X)] = \frac{1}{(1+R)^{T}} E^Q[E^Q[\Phi(S_T)|Z_1]] \\ = \frac{1}{(1+R)^{T}} E^Q[\Phi(S_T)].$$ This concludes the proof of Proposition 2.25. What does $E^Q[X]$ mean: If $Z_1,...,Z_T$ are independent random variables $$E[f(Z_1,...,Z_T)] = \sum_{z_1,...,z_T=\text{u or d}} f(z_1,...,z_T)P(Z_1=z_1)\cdots P(Z_T=z_T) \\ = \sum_{z_1,...,z_T=\text{u or d}} f(z_1,...,z_T)p_{z_1} \cdots p_{z_T}.$$ This sum means that we sum over all possible outcomes/paths. Since we often use a martingale/risk neutral probability measure it is convenient to introduce the notation $E^Q$ to denote the expectation under the probability measure $Q$. $$E^Q[f(Z_1,...,Z_T)] = \sum_{z_1,...,z_T=\text{u or d}} f(z_1,...,z_T)Q(Z_1=z_1)\cdots Q(Z_T=z_T) \\ = \sum_{z_1,...,z_T=\text{u or d}} f(z_1,...,z_T)q_{z_1} \cdots q_{z_T}.$$ In your case $X=\Phi(S_T)$ which is a function of $Z_1,...Z_T$. Note also that $S_T = su^Yd^{T-Y}$ where $Y$ is the number of up-moves. The sum above over all outcomes can also be written as in Björk, using binomial coefficients: $$E^Q[\Phi(S_T)] = \sum_{z_1,...,z_T=\text{u or d}} \Phi(S_T)\,q_{z_1} \cdots q_{z_T} = \sum_{j=0}^T {T \choose j}q_u^j q_d^{T-j}\Phi(su^jd^{T-j}).$$ I hope it all checks out, I'm used to a different notation when working with the Binomial Model!
It was discussed in one of our stat classes but not in this class so I guess we must prove it some other way. Thanks! – BCLC Jul 12 '14 at 15:19 • Is $E[V_i]$ the expected value of the replicating portfolio at time i under martingale measure? – BCLC Jul 12 '14 at 15:38 • Usually you write $E^Q[V_i]$ to denote the expectation under the martingale measure, if it is not clear from the context. – DoubleTrouble Jul 12 '14 at 21:01
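As a numerical sanity check of the equality the proof establishes, here is a short sketch (my own; the parameter values and the call payoff are invented for illustration) comparing backward induction through the tree with the single discounted risk-neutral expectation $\Pi(0;X)=(1+R)^{-T}E^Q[\Phi(S_T)]$, using the usual one-period risk-neutral probability $q_u=\frac{1+R-d}{u-d}$.

```python
from math import comb

# Toy T-period binomial model: S_T = s * u^j * d^(T-j) with j up-moves.
# Backward induction and the direct risk-neutral expectation should agree.
s, u, d, R, T = 100.0, 1.2, 0.9, 0.05, 4
K = 100.0
phi = lambda x: max(x - K, 0.0)            # example claim: European call
q_u = (1 + R - d) / (u - d)                # risk-neutral up-probability
q_d = 1 - q_u

# Backward induction: V_T(j) = phi(S_T(j)), then discount one step at a time.
V = [phi(s * u**j * d**(T - j)) for j in range(T + 1)]
for _ in range(T):
    V = [(q_u * V[j + 1] + q_d * V[j]) / (1 + R) for j in range(len(V) - 1)]
backward_price = V[0]

# Direct expectation written with binomial coefficients.
direct_price = sum(comb(T, j) * q_u**j * q_d**(T - j) * phi(s * u**j * d**(T - j))
                   for j in range(T + 1)) / (1 + R)**T

print(backward_price, direct_price)        # identical up to rounding
```

The two printed values agree up to floating-point rounding, which is exactly the content of Proposition 2.25 in this toy setting.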
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9781798124313354, "perplexity": 329.18493939581185}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987822458.91/warc/CC-MAIN-20191022155241-20191022182741-00090.warc.gz"}
http://tex.stackexchange.com/questions/33584/piecewise-defined-function-in-tikz-or-tkz-fct?answertab=active
# Piecewise defined function in tikz or tkz-fct How to define and plot a piecewise defined function in tikz or tkz-fct? For example consider the function f(x) = 1 for x < 0, x for 0 <= x < 1, cos(x) for x >= 1 Edit: After Torbjørn T.'s comment I tried the following. I should mention that I included three different versions of plotting the function because I want that all three versions should work. \documentclass{article} \usepackage{amsmath} \usepackage{pgfplots} \usepackage{tkz-fct} \pgfmathdeclarefunction{p}{3}{% \pgfmathparse{(and(#1>#2, #1<#3))}% } \pgfmathdeclarefunction{f}{1}{% \pgfmathparse{p(#1,-100,0)*1 + p(#1,0,1)*#1 + p(#1,1,100)*cos(#1r)} } \begin{document} \begin{tikzpicture} \tkzInit[xmin=-1,xmax=5,ymax=4] % \tkzGrid % \tkzAxeXY % \tkzFct{f(x)} % \draw plot function{f(x)};% \begin{scope}[xshift=4cm] \begin{axis} \end{axis} \end{scope} \end{tikzpicture} \end{document} Which gives the following output: - Does this help? tex.stackexchange.com/questions/19510/… –  Torbjørn T. Nov 3 '11 at 18:50 Thanks, I tried something with it, see edit above, but it doesn't work... –  student Nov 3 '11 at 19:38 I can't test here (windows), but I presume you would want your r inside of the cos. Also this does not cover the cases where x=0 and x=1, although that should not cause too much problems. –  Roelof Spijker Nov 3 '11 at 20:28 @wh1t3: Thanks, I put the r inside of the cos now –  student Nov 3 '11 at 21:40 There are a few problems with the code: 1. PGF uses degrees for trig functions 2. Seems that the r at the end of the function f was intended to convert from radians to degree, but I changed it so that it is more obvious. 3. Need to determine what happens at the end points of the piecewise domain, so note the slight tweaks for that. 4. Not sure what the tkz portion of the code has to do with the problem of defining a piecewise function, so have commented that out. 5. Be careful using single letter function names as documented in: Why do 2 identical function definitions with different names produce two different plots? So, with slight modifications to your code I can produce the following. Note that there still is a problem around x=1 as pgf does not know what to do. 
\documentclass[border=2pt]{standalone} \usepackage{amsmath} \usepackage{pgfplots} \usepackage{tkz-fct} \pgfmathdeclarefunction{p}{3}{% \pgfmathparse{(and(#1>#2, #1<#3))}% } \pgfmathdeclarefunction{f}{1}{% \pgfmathparse{p(#1,-100,-0.001)*1 + p(#1,0,1)*#1 + p(#1,1.01,100)*cos(deg(#1))}% } \begin{document} \begin{tikzpicture} % \tkzInit[xmin=-1,xmax=5,ymax=4] % % \tkzGrid % % \tkzAxeXY % % \tkzFct{f(x)} % % \draw plot function{f(x)};% \begin{scope}[xshift=6cm] \begin{axis} \end{axis} \end{scope} \end{tikzpicture} \end{document} What I would recommend is that you draw the three separate portions individually and avoid the problem areas via a fixed value of \Tolerance: \documentclass[border=2pt]{standalone} \usepackage{amsmath} \usepackage{pgfplots} \usepackage{tkz-fct} \pgfmathdeclarefunction{p}{3}{% \pgfmathparse{(and(#1>#2, #1<#3))}% } \newcommand{\Tolerance}{0.0001}% \pgfmathdeclarefunction{f}{1}{% \pgfmathparse{% p(#1,-\maxdimen,-\Tolerance)*1.0 +% p(#1,0,1-\Tolerance)*#1 +% p(#1,1,\maxdimen)*cos(deg(#1))}% } \begin{document} \begin{tikzpicture} % \tkzInit[xmin=-1,xmax=5,ymax=4] % % \tkzGrid % % \tkzAxeXY % % \tkzFct{f(x)} % % \draw plot function{f(x)};% \begin{scope}[xshift=6cm] \begin{axis} \end{axis} \end{scope} \end{tikzpicture} \end{document} To clarify, the \addplot calls above are really just: \addplot[ultra thick, blue, domain=-2.0000:-0.0001, samples=100]{f(x)}; \addplot[ultra thick, green,domain= 0.0001: 0.9999, samples=100]{f(x)}; \addplot[ultra thick, red, domain= 1.0001: 4.0000, samples=100]{f(x)}; - The tkz portion is because I want also be able to plot those functions using tkz-fct. –  student Nov 3 '11 at 21:33 @Peter Grill Thanks, your second picture looks fine, however the source code looks pretty complicated. I want a solution where I have a simple syntax to plot much more complicated piecewise defined functions, too. –  student Nov 3 '11 at 21:45 I too am looking for that since I posted the question you referred to in your posting. But I think the second one looks more complicated as I defined \Tolerance so it was easy to adjust. –  Peter Grill Nov 3 '11 at 21:48 I slightly modified @Herbert's code to make the graph of f (visually) "a graph of a function". It still has few quirks. \documentclass{article} \pagestyle{empty} \begin{document} \begin{pspicture}(-1,-1)(7,1) \psplot[algebraic,linecolor=red,plotpoints=1000, linewidth=1pt]{-1}{7}{IfTE(x<0,1,IfTE(x<1,x,cos(x)))} \psline[linecolor=white,linewidth=1pt](0,1)(0,0) \psline[linecolor=white,linewidth=2pt](1,.96)(1,0.54) \end{pspicture} \end{document} - I would use the ifthenelse structure (available in tikz and in gnuplot). \documentclass{standalone} \usepackage{tikz} \usepackage{pgfplots} \begin{document} % plain tikz + ifthenelse: \begin{tikzpicture}[scale=3] \draw[blue,thick] plot[samples=200,domain=-2:4] (\x,{ifthenelse(\x < 0,1,ifthenelse(and(\x >= 0,\x < 1),\x, cos(deg(\x))))}); \draw[red,thick,semitransparent] plot[samples=200,domain=-2:4] (\x,{cos(deg(\x))}); \end{tikzpicture} % pgfplots + gnuplot: \begin{tikzpicture} \begin{axis} \addplot+[samples=150,domain=-2:4] function {x < 0 ? 1 : ((x >=0) && (x<1)) ? x : cos(x)}; \end{axis} \end{tikzpicture} % plain tikz with '?' operator: \begin{tikzpicture}[scale=3] \draw plot[samples=200,domain=-2:4] (\x,{\x < 0 ? 1 : (((\x >=0) && (\x<1)) ? \x : cos(deg(\x)))}); \end{tikzpicture} % pgfplots without gnuplot: (requires developer version of pgf or pgfplots): \begin{tikzpicture} \begin{axis} \addplot[blue,samples=150,domain=-2:4] {x < 0 ? 1 : (((x >=0) && (x<1)) ? 
x : cos(deg(x)))}; \end{axis} \end{tikzpicture} \end{document} - This does not appear to be using the cos function for x>1. –  Peter Grill Nov 3 '11 at 20:39 @PeterGrill It does (see my updated answer), but I did not believe it either! –  cjorssen Nov 3 '11 at 20:53 The OP used domain=-2:4 so probably best to stick to that domain so that the results are easy to compare. –  Peter Grill Nov 3 '11 at 21:20 @cjorssen I like your answer. I took the freedom to add variants using the ? operator as well. And by the way: I found that \usetikzlibrary{fpu} (which is used by pgfplots), did not properly support it. I have just comitted support for ==, !=, <=, >=, ? to pgf CVS. The last example will only compile with this commit. –  Christian Feuersänger Nov 3 '11 at 21:24 @ChristianFeuersänger Thanks! I was investigating to understand why it didn't work for me. I changed the domain to stick to the OP one. –  cjorssen Nov 3 '11 at 21:37 Run with xelatex %f(x) = 1 for x < 0, x for 0 <= x < 1, cos(x) for x >= 1 \documentclass{article}
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8941310048103333, "perplexity": 3432.504245337332}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375095711.51/warc/CC-MAIN-20150627031815-00220-ip-10-179-60-89.ec2.internal.warc.gz"}
https://en.m.wikipedia.org/wiki/Noether_normalization_lemma
# Noether normalization lemma In mathematics, the Noether normalization lemma is a result of commutative algebra, introduced by Emmy Noether in 1926.[1] It states that for any field k, and any finitely generated commutative k-algebra A, there exists a non-negative integer d and algebraically independent elements y1, y2, ..., yd in A such that A is a finitely generated module over the polynomial ring S = k [y1, y2, ..., yd]. The integer d above is uniquely determined; it is the Krull dimension of the ring A. When A is an integral domain, d is also the transcendence degree of the field of fractions of A over k. The theorem has a geometric interpretation. Suppose A is integral. Let S be the coordinate ring of the d-dimensional affine space ${\displaystyle \mathbb {A} _{k}^{d}}$, and let A be the coordinate ring of some other d-dimensional affine variety X. Then the inclusion map S → A induces a surjective finite morphism of affine varieties ${\displaystyle X\to \mathbb {A} _{k}^{d}}$. The conclusion is that any affine variety is a branched covering of affine space. When k is infinite, such a branched covering map can be constructed by taking a general projection from an affine space containing X to a d-dimensional subspace. More generally, in the language of schemes, the theorem can equivalently be stated as follows: every affine k-scheme (of finite type) X is finite over an affine n-dimensional space. The theorem can be refined to include a chain of ideals of R (equivalently, closed subsets of X) that are finite over the affine coordinate subspaces of the appropriate dimensions.[2] The form of the Noether normalization lemma stated above can be used as an important step in proving Hilbert's Nullstellensatz. This gives it further geometric importance, at least formally, as the Nullstellensatz underlies the development of much of classical algebraic geometry. The theorem is also an important tool in establishing the notions of Krull dimension for k-algebras. ## Proof The following proof is due to Nagata and is taken from Mumford's red book. A proof in the geometric flavor is also given in the page 127 of the red book and this mathoverflow thread. The ring A in the lemma is generated as a k-algebra by elements, say, ${\displaystyle y_{1},...,y_{m}}$ . We shall induct on m. If ${\displaystyle m=0}$ , then the assertion is trivial. Assume now ${\displaystyle m>0}$ . It is enough to show that there is a subring S of A that is generated by ${\displaystyle m-1}$  elements, such that A is finite over S. Indeed, by the inductive hypothesis, we can find algebraically independent elements ${\displaystyle x_{1},...,x_{d}}$  of S such that S is finite over ${\displaystyle k[x_{1},...,x_{d}]}$ . Since otherwise there would be nothing to prove, we can also assume that there is a nonzero polynomial f in m variables over k such that ${\displaystyle f(y_{1},\ldots ,y_{m})=0}$ . Given an integer r which is determined later, set ${\displaystyle z_{i}=y_{i}-y_{1}^{r^{i-1}},\quad 2\leq i\leq m.}$ ${\displaystyle f(y_{1},z_{2}+y_{1}^{r},z_{3}+y_{1}^{r^{2}},\ldots ,z_{m}+y_{1}^{r^{m-1}})=0}$ . 
Now, if ${\displaystyle ay_{1}^{\alpha _{1}}\prod _{2}^{m}(z_{i}+y_{1}^{r^{i-1}})^{\alpha _{i}}}$  is a monomial appearing in the left-hand side of the above equation, with coefficient ${\displaystyle a\in k}$ , the highest term in ${\displaystyle y_{1}}$  after expanding the product looks like ${\displaystyle ay_{1}^{\alpha _{1}+r\alpha _{2}+\cdots +\alpha _{m}r^{m-1}}.}$ Whenever the above exponent agrees with the highest ${\displaystyle y_{1}}$  exponent produced by some other monomial, it is possible that the highest term in ${\displaystyle y_{1}}$  of ${\displaystyle f(y_{1},z_{2}+y_{1}^{r},z_{3}+y_{1}^{r^{2}},...,z_{m}+y_{1}^{r^{m-1}})}$  will not be of the above form, because it may be affected by cancellation. However, if r is larger than any exponent appearing in f, then each ${\displaystyle \alpha _{1}+r\alpha _{2}+\cdots +\alpha _{m}r^{m-1}}$  encodes a unique base r number, so this does not occur. Thus ${\displaystyle y_{1}}$  is integral over ${\displaystyle S=k[z_{2},...,z_{m}]}$ . Since ${\displaystyle y_{i}=z_{i}+y_{1}^{r^{i-1}}}$  are also integral over that ring, A is integral over S. It follows A is finite over S, and since S is generated by m-1 elements, by the inductive hypothesis we are done. If A is an integral domain, then d is the transcendence degree of its field of fractions. Indeed, A and ${\displaystyle S=k[y_{1},...,y_{d}]}$  have the same transcendence degree (i.e., the degree of the field of fractions) since the field of fractions of A is algebraic over that of S (as A is integral over S) and S has transcendence degree d. Thus, it remains to show the Krull dimension of the polynomial ring S is d. (This is also a consequence of dimension theory.) We induct on d, with the case ${\displaystyle d=0}$  being trivial. Since ${\displaystyle 0\subsetneq (y_{1})\subsetneq (y_{1},y_{2})\subsetneq \cdots \subsetneq (y_{1},\dots ,y_{d})}$  is a chain of prime ideals, the dimension is at least d. To get the reverse estimate, let ${\displaystyle 0\subsetneq {\mathfrak {p}}_{1}\subsetneq \cdots \subsetneq {\mathfrak {p}}_{m}}$  be a chain of prime ideals. Let ${\displaystyle 0\neq u\in {\mathfrak {p}}_{1}}$ . We apply the noether normalization and get ${\displaystyle T=k[u,z_{2},\dots ,z_{d}]}$  (in the normalization process, we're free to choose the first variable) such that S is integral over T. By the inductive hypothesis, ${\displaystyle T/(u)}$  has dimension d - 1. By incomparability, ${\displaystyle {\mathfrak {p}}_{i}\cap T}$  is a chain of length ${\displaystyle m}$  and then, in ${\displaystyle T/({\mathfrak {p}}_{1}\cap T)}$ , it becomes a chain of length ${\displaystyle m-1}$ . Since ${\displaystyle \operatorname {dim} T/({\mathfrak {p}}_{1}\cap T)\leq \operatorname {dim} T/(u)}$ , we have ${\displaystyle m-1\leq d-1}$ . Hence, ${\displaystyle \dim S\leq d}$ . ## Refinement The following refinement appears in Eisenbud's book, which builds on Nagata's idea:[2] Theorem — Let A be a finitely generated algebra over a field k, and ${\displaystyle I_{1}\subset \dots \subset I_{m}}$  be a chain of ideals such that ${\displaystyle \operatorname {dim} (A/I_{i})=d_{i}>d_{i+1}.}$  Then there exists algebraically independent elements y1, ..., yd in A such that 1. A is a finitely generated module over the polynomial subring S = k[y1, ..., yd]. 2. ${\displaystyle I_{i}\cap S=(y_{d_{i}+1},\dots ,y_{d})}$ . 3. If the ${\displaystyle I_{i}}$ 's are homogeneous, then yi's may be taken to be homogeneous. 
Moreover, if k is an infinite field, then any sufficiently general choice of yI's has Property 1 above ("sufficiently general" is made precise in the proof). Geometrically speaking, the last part of the theorem says that for ${\displaystyle X=\operatorname {Spec} A\subset \mathbf {A} ^{m}}$  any general linear projection ${\displaystyle \mathbf {A} ^{m}\to \mathbf {A} ^{d}}$  induces a finite morphism ${\displaystyle X\to \mathbf {A} ^{d}}$  (cf. the lede); besides Eisenbud, see also [1]. Corollary — Let A be an integral domain that is a finitely generated algebra over a field. If ${\displaystyle {\mathfrak {p}}}$  is a prime ideal of A, then ${\displaystyle \dim A=\operatorname {height} {\mathfrak {p}}+\dim A/{\mathfrak {p}}}$ . In particular, the Krull dimension of the localization of A at any maximal ideal is dim A. Corollary — Let ${\displaystyle A\subset B}$  be integral domains that are finitely generated algebras over a field. Then ${\displaystyle \dim B=\dim A+\operatorname {tr.deg} _{Q(A)}Q(B)}$ (the special case of Nagata's altitude formula). ## Illustrative application: generic freeness The proof of generic freeness (the statement later) illustrates a typical yet nontrivial application of the normalization lemma. The generic freeness says: let ${\displaystyle A,B}$  be rings such that ${\displaystyle A}$  is a Noetherian integral domain and suppose there is a ring homomorphism ${\displaystyle A\to B}$  that exhibits ${\displaystyle B}$  as a finitely generated algebra over ${\displaystyle A}$ . Then there is some ${\displaystyle 0\neq g\in A}$  such that ${\displaystyle B[g^{-1}]}$  is a free ${\displaystyle A[g^{-1}]}$ -module. Let ${\displaystyle F}$  be the fraction field of ${\displaystyle A}$ . We argue by induction on the Krull dimension of ${\displaystyle F\otimes _{A}B}$ . The basic case is when the Krull dimension is ${\displaystyle -\infty }$ ; i.e., ${\displaystyle F\otimes _{A}B=0}$ . This is to say there is some ${\displaystyle 0\neq g\in A}$  such that ${\displaystyle gB=0}$  and so ${\displaystyle B[g^{-1}]}$  is free as an ${\displaystyle A[g^{-1}]}$ -module. For the inductive step, note ${\displaystyle F\otimes _{A}B}$  is a finitely generated ${\displaystyle F}$ -algebra. Hence, by the Noether normalization lemma, ${\displaystyle F\otimes _{A}B}$  contains algebraically independent elements ${\displaystyle x_{1},\dots ,x_{d}}$  such that ${\displaystyle F\otimes _{A}B}$  is finite over the polynomial ring ${\displaystyle F[x_{1},\dots ,x_{d}]}$ . Multiplying each ${\displaystyle x_{i}}$  by elements of ${\displaystyle A}$ , we can assume ${\displaystyle x_{i}}$  are in ${\displaystyle B}$ . We now consider: ${\displaystyle A':=A[x_{1},\dots ,x_{d}]\to B.}$ It need not be the case that ${\displaystyle B}$  is finite over ${\displaystyle A'}$ . But that will be the case after inverting a single element, as follows. If ${\displaystyle b}$  is an element of ${\displaystyle B}$ , then, as an element of ${\displaystyle F\otimes _{A}B}$ , it is integral over ${\displaystyle F[x_{1},\dots ,x_{d}]}$ ; i.e., ${\displaystyle b^{n}+a_{1}b^{n-1}+\dots +a_{n}=0}$  for some ${\displaystyle a_{i}}$  in ${\displaystyle F[x_{1},\dots ,x_{d}]}$ . Thus, some ${\displaystyle 0\neq g\in A}$  kills all the denominators of the coefficients of ${\displaystyle a_{i}}$  and so ${\displaystyle b}$  is integral over ${\displaystyle A'[g^{-1}]}$ . 
Choosing some finitely many generators of ${\displaystyle B}$  as an ${\displaystyle A'}$ -algebra and applying this observation to each generator, we find some ${\displaystyle 0\neq g\in A}$  such that ${\displaystyle B[g^{-1}]}$  is integral (thus finite) over ${\displaystyle A'[g^{-1}]}$ . Replace ${\displaystyle B,A}$  by ${\displaystyle B[g^{-1}],A[g^{-1}]}$  and then we can assume ${\displaystyle B}$  is finite over ${\displaystyle A':=A[x_{1},\dots ,x_{d}]}$ . To finish, consider a finite filtration ${\displaystyle B=B_{0}\supset B_{1}\supset B_{2}\supset \cdots \supset B_{r}}$  by ${\displaystyle A'}$ -submodules such that ${\displaystyle B_{i}/B_{i+1}\simeq A'/{\mathfrak {p}}_{i}}$  for prime ideals ${\displaystyle {\mathfrak {p}}_{i}}$  (such a filtration exists by the theory of associated primes). For each i, if ${\displaystyle {\mathfrak {p}}_{i}\neq 0}$ , by inductive hypothesis, we can choose some ${\displaystyle g_{i}\neq 0}$  in ${\displaystyle A}$  such that ${\displaystyle A'/{\mathfrak {p}}_{i}[g_{i}^{-1}]}$  is free as an ${\displaystyle A[g_{i}^{-1}]}$ -module, while ${\displaystyle A'}$  is a polynomial ring and thus free. Hence, with ${\displaystyle g=g_{0}\cdots g_{r}}$ , ${\displaystyle B[g^{-1}]}$  is a free module over ${\displaystyle A[g^{-1}]}$ . ${\displaystyle \square }$ ## Notes 1. ^ Noether 1926 2. ^ a b Eisenbud 1995, Theorem 13.3 ## References • Eisenbud, David (1995), Commutative algebra. With a view toward algebraic geometry, Graduate Texts in Mathematics, vol. 150, Berlin, New York: Springer-Verlag, ISBN 3-540-94268-8, MR 1322960, Zbl 0819.13001 • "Noether theorem", Encyclopedia of Mathematics, EMS Press, 2001 [1994]. NB the lemma is in the updating comments. • Noether, Emmy (1926), "Der Endlichkeitsatz der Invarianten endlicher linearer Gruppen der Charakteristik p", Nachrichten von der Gesellschaft der Wissenschaften zu Göttingen: 28–35, archived from the original on March 8, 2013
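A standard worked example, added here purely for illustration (it is not part of the article above), shows the normalizing substitution in action. Take the coordinate ring of a hyperbola, $$A=k[x,y]/(xy-1).$$ The obvious subring $k[x]$ does not normalize $A$: since $A\cong k[x,x^{-1}]$, it is not a finitely generated $k[x]$-module. But the tilted variable $z:=y-x$ (a substitution of the same shape as $z_{i}=y_{i}-y_{1}^{r^{i-1}}$ in the proof) works: in $A$ one has $x(z+x)=1$, that is, $$x^{2}+zx-1=0,$$ so $x$ is integral over $S=k[z]$, hence so is $y=z+x$, and $A$ is a finitely generated module over the polynomial ring $S$ with $d=1=\dim A$, as the lemma predicts.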
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 111, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9924277067184448, "perplexity": 517.7389263814943}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711394.73/warc/CC-MAIN-20221209080025-20221209110025-00277.warc.gz"}
http://cvgmt.sns.it/paper/3663/
# Some Sphere Theorems in Linear Potential Theory created by mascellani on 17 Nov 2017 modified on 18 Nov 2017 preprint Inserted: 17 nov 2017 Last Updated: 18 nov 2017 Year: 2017 ArXiv: 1705.09940 Abstract: In this paper we analyze the capacitary potential due to a charged body in order to deduce sharp analytic and geometric inequalities, whose equality cases are saturated by domains with spherical symmetry. In particular, for a regular bounded domain $\Omega \subset \mathbb{R}^n$, $n\geq 3$, we prove that if the mean curvature $H$ of the boundary obeys the condition $- \bigg[ \frac{1}{\text{Cap}(\Omega)} \bigg]^{\frac{1}{n-2}} \leq \frac{H}{n-1} \leq \bigg[ \frac{1}{\text{Cap}(\Omega)} \bigg]^{\frac{1}{n-2}}$, then $\Omega$ is a round ball.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.819795548915863, "perplexity": 1363.8117080671145}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592001.81/warc/CC-MAIN-20180720232914-20180721012914-00546.warc.gz"}
https://www.physicsforums.com/threads/dihedral-group-of-order-8.7953/
# Dihedral Group of Order 8 1. Oct 29, 2003 ### wubie Hello, I am having trouble understanding groups in my group theory class. I am not confident on how to approach the following question: I know that y^4 = u. So then, g = xy^4 = xu = x. Then g^2 = x^2 = u which is what I am trying to prove. Now if i = 1 then, g = xy. Then g^2 = xy xy = x yx y = x xy^-1 y. Then xx y^-1y = x^2 y^-1y = u y^-1y since x^2 = 2. Then u y^-1y = u u = u since y^-1y = u. First question: Is the work I have completed so far correct? Second question: Do I need to prove this on a case-by-case basis? That is, I would think that I would have to prove this for i = 1,2,3,4. Since I have already completed 1 and 4, I would have to do cases in which i = 2,3. Correct? This may seem elementary, but like I stated above, my confidence in answering such questions is not great. And my understanding of the material is very weak. Any comments, input, help is appreciated. Thank you. 2. Oct 29, 2003 ### Hurkyl Staff Emeritus Yes, you do have to prove it for i = 1..4 (actually, you could do it for i = 0..3). The reason is because you can use y^4 = u to reduce the general case to one of these 4 selected cases. Your work looks correct, except for the typo that you wrote x^2 = 2 instead of x^2 = u. 3. Oct 29, 2003 ### wubie Thanks Hurkyl. I still have some questions regarding this dihedral group. Part of the question states: Now, why would I just assume that i = 1 to 4? Why not -4 <= i <= 4 since i can be any integer? Also isn't one of the properties of a group that: If so, where are the inverses of the elements y, y^2, y^3, xy, xy^2, xy^3 in the group D8? 4. Oct 29, 2003 ### Hurkyl Staff Emeritus The same reason you don't need to worry about i > 4. Because you know y^4 = u, we know that: y^-1 = y^-1 * u = y^-1 * y^4 = y^3 In general, if m = n mod 4, we can use induction to prove that y^m = y^n. There are only 64 different ways to multiply 2 elements in D8. Exhaust! More pragmatically, you can use the fact I mentioned above, coupled with the fact that (xy)^-1 = y^-1 x^-1 to compute inverses. 5. Oct 29, 2003 ### wubie Thanks a lot Hurkyl. That was very helpful to me. I really appreciate it. Cheers.
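A tiny brute-force check of the claim in the thread — that g = x y^i always squares to the identity in D8 with the presentation x^2 = u, y^4 = u, yx = xy^-1 — can be written in a few lines. The encoding of elements as pairs (a, b) meaning x^a y^b is my own choice for this sketch, not something from the thread.

```python
# Elements of D8 written as x^a * y^b with a in {0,1}, b in {0,1,2,3},
# subject to x^2 = u, y^4 = u and the relation y*x = x*y^(-1).
def mult(p, q):
    a1, b1 = p
    a2, b2 = q
    if a2 == 0:                       # x^a1 y^b1 * y^b2
        return (a1, (b1 + b2) % 4)
    # x^a1 y^b1 * x y^b2 : push x left using y^b1 * x = x * y^(-b1)
    return ((a1 + 1) % 2, (b2 - b1) % 4)

u = (0, 0)                            # identity
for i in range(4):
    g = (1, i)                        # g = x * y^i
    assert mult(g, g) == u            # (x y^i)^2 = u, the claim in the thread

# inverses of y^b and x*y^b, as discussed above
print([(0, (-b) % 4) for b in range(4)])   # (y^b)^(-1) = y^(4-b)
print([(1, b) for b in range(4)])          # each x y^b is its own inverse
```

The last two lines also answer the inverse question above: y^b has inverse y^(4-b), while each element of the form x y^b is its own inverse.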
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8754839301109314, "perplexity": 901.3912635788919}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824819.92/warc/CC-MAIN-20171021133807-20171021153807-00108.warc.gz"}
https://www.physicsforums.com/threads/wavefunctions-of-fermions-and-bosons.157392/
# Wavefunctions of fermions and bosons • #1 ## Homework Statement Consider two noninteracting particles p and q each with mass m in a cubical box of size a. Assume the energy of the particles is $$E = \frac{3 \hbar^2 \pi^2}{2ma^2} + \frac{6\hbar^2 \pi^2}{2ma^2}$$ Using the eigenfunctions $$\psi_{n_{x},n_{y},n_{z}} (x_{p},y_{p},z_{p})$$ and $$\psi_{n_{x},n_{y},n_{z}} (x_{q},y_{q},z_{q})$$ write down the two particle wave functions which could describe the system when the particles are a) distinguishable, spinless bosons b) identical, spinless bosons c) identical spin-half fermions in a symmetric spin state d) identical spin half fermions in an antisymmetric spin state ## Homework Equations For a cube the wavefunction is given by $$\psi_{n_{x},n_{y},n_{z}} = N \sin\left(\frac{n_{x}\pi x}{a}\right)\sin\left(\frac{n_{y}\pi y}{a}\right)\sin\left(\frac{n_{z}\pi z}{a}\right)$$ $$E = \frac{\hbar^2 \pi^2}{2ma^2} (n_{x}^2 + n_{y}^2 +n_{z}^2)$$ ## The Attempt at a Solution For the fermions the wavefunction must be antisymmetric under exchange c) $$\Phi^{(A)}(p,q) = \psi^{(A)} (r_{p},r_{q}) \chi^{(S)}_{S,M_{s}}(p,q)$$ where chi is the spin state. Since the energy is 3 E0 for the first particle, the possible values nx,ny,nz are n=(1,1,1), and for the second particle n'=(1,1,2). We could select $$\Psi^{(A)} (x_{p},x_{q},t) = \frac{1}{\sqrt{2}} \left(\psi_{n}(x_{p})\psi_{n'}(x_{q}) - \psi_{n}(x_{q})\psi_{n'}(x_{p})\right) \exp\left[-\frac{i(E_{n} + E_{n'})t}{\hbar}\right]$$ A means it is antisymmetric. The spin state chi could be $$\chi^{(S)}_{1,1}(p,q) = \chi_{+}(p) \chi_{+}(q)$$ S means it is symmetric. For d it is similar but switched around. For a) and b) I have doubts though. For a) the bosons must be distinguishable so we could have a WF like this $$\Psi_{1} (r_{p},r_{q},t) = \psi_{n}(r_{p})\psi_{n'}(r_{q}) \exp\left[-\frac{i(E_{n} + E_{n'})t}{\hbar}\right]$$ for one of the particles. Under exchange this would be symmetric. b) if the bosons are identical then we simply have to construct a wavefunction that is symmetric like we did in part c for the fermions. thanks for any and all help! • #2 bump • #3 by the way $$\chi= \chi_{S,M_{S}}$$ and $$\chi_{+}$$ when Ms = +1/2 and $$\chi_{-}$$ when Ms = -1/2 • #4 cepheid Staff Emeritus Gold Member Hi stunner5000pt, I'm just learning this stuff myself, but I'll try to help. Your answer to part c looks correct. The spatial part of the wf was chosen to be antisymmetric, since the spin state is given to be symmetric, so that the *overall* wf is antisymmetric. for a) and b) I have doubts though. For a) the bosons must be distinguishable so we could have a WF like this $$\Psi_{1} (r_{p},r_{q},t) = \psi_{n}(r_{p})\psi_{n'}(r_{q}) \exp\left[-\frac{i(E_{n} + E_{n'})t}{\hbar}\right]$$ for one of the particles. Under exchange this would be symmetric. No it wouldn't be! But that's okay! Because the particles are meant to be distinguishable. So I think your answer is correct...an acceptable wavefunction for a two-particle system is the product of the individual one-particle wavefunctions, *if* you know that one is in the state n, and the other in the state n', because you are able to tell the difference between them. b) if the bosons are identical then we simply have to construct a wavefunction that is symmetric like we did in part c for the fermions. Yes, you do have to construct a wavefunction that is symmetric. But no, it's not like part c, because in part c, you constructed a wf that was antisymmetric.
Not only that, but your bosons are spinless, so you'd only have a spatial part to your wavefunction, and it would have to be symmetric on its own. Hope this helps
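For completeness, the symmetric spatial combination asked for in part (b) would look like the following (this is just the standard symmetrized product written out in the same notation as above, not something quoted from the thread): $$\Psi^{(S)} (r_{p},r_{q},t) = \frac{1}{\sqrt{2}} \left(\psi_{n}(r_{p})\psi_{n'}(r_{q}) + \psi_{n}(r_{q})\psi_{n'}(r_{p})\right) \exp\left[-\frac{i(E_{n} + E_{n'})t}{\hbar}\right]$$ It is unchanged under the exchange $p \leftrightarrow q$, and since the bosons here are spinless there is no spin factor to attach.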
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8964454531669617, "perplexity": 1639.8030821597583}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038074941.13/warc/CC-MAIN-20210413183055-20210413213055-00601.warc.gz"}
http://tex.stackexchange.com/questions/26460/trapping-latex-error-warning
# Trapping LaTeX error/warning Is there a way to trap an error/warning in LaTeX during compile time? I'm thinking of something similar to the VB script On Error Goto <blah>, but this time for LaTeX in something like \OnErrorExecute{<command>} and \OnWarningExecute{<command>}. Sadly enough there are no standard error/warning-codes provided (AFAIK), since packages report warnings/error via the commands \PackageError{<message>} \PackageWarning{<message>} By codes I mean, as an example, • Warning 1: Underfull \hbox...; • Warning 2: Overfull \hbox...; • ... • Error 1: No \begin{document}; • Error 2: Perhaps a missing \item...; • Error 3: File ended while scanning..., etc. since one would ideally want to condition on the type of error/warning that is produced. Understandably a new error/warning reporting mechanism would be required, since package authors are allowed to issue warnings/errors as they please. So, there warnings/errors could be made package-specific with some number prefix (say amsmath.warning.1 for warning 1 using the amsmath package). If not in this version (probably), what about LaTeX3 (hopefully)? - Try e.g. using \si{\gram\kilo} in a document loading the siunitx package to see what LaTeX3 error messages look like: in particular, they have the module name and a name for the message. For documentation on it, you can look at the part of source3.pdf about the l3msg module. There are some possibilities to redirect some messages, and change their behaviour, but I find it unpractical, and suggestions on what is needed are welcome :). –  Bruno Le Floch Aug 24 '11 at 23:24 @Bruno: This is great news, but it will take me some time to get used to LaTeX3 syntax/usage. My motivation stems from Fit text into given box by adjusting the fontsize. Conditioning on an Overfull \hbox... warning might lead to making a better choice of how to fit the text into the given box dimensions. Does 'unpractical' refer to a personal preference? –  Werner Aug 24 '11 at 23:39 Overfull \hbox (and some others) is a TeX built-in warning, there is, imo, no way to detect it from inside. The rules how badness is calculated are all described in the TeXbook and others, so it should be possible to calculate the badness of the line manually and compare it to \tolerance. (I'm no expert at this.) –  Andrey Vihrov Aug 25 '11 at 7:29 You could catch LaTeX warnings and errors by redefining the mentioned macros, but I doubt it is possible to handle them correctly in any general case. It is not possible to catch TeX build-in messages like Underfull ... and Overfull ... warnings or syntax errors. (La)TeX is simply not made with this in mind. –  Martin Scharrer Aug 25 '11 at 7:51 Your Warning 1 and 2, and Error 3 come from TeX and cannot be redirected. On the other hand, things like Error 1 and 2 are controlled by LaTeX macros, and in principle could be redirected as Martin says by redefining macros, and in LaTeX3 we can hope to control them much better. Specifically on Under- or Overfull boxes, you should set the \hbadness and \vbadness to a high value, typeset, then consult the \badness. –  Bruno Le Floch Aug 25 '11 at 8:48 There are various different types of errors, warnings and messages that come from a LaTeX run. At the TeX level, you can get a warning like Underfull \hbox... or and error such as File ended while scanning.... These cannot be altered at the LaTeX end.* At the LaTeX level, most messages are generated using \PackageError and similar macros. 
You can redefine these, but an easier way would be to use the silence package. It provides a pre-built set of macros to do this redefinition in a selective way, thus allowing 'filtering out' of unwanted messages. Turing to LaTeX3, the approach taken in the code there is to separate definition of messages (of all types) from their use. This means that each message has a 'name', which can be used to alter the behaviour of the message when it is given. Thus we might have \msg_new:nnnn { module } { my-message } { Some~text } { Some~more~text } to define a message, with \msg_error:nn { module } { my-message } when it is used. With no filtering, this will raise an error. However, we could alter the behaviour with \msg_redirect_class:nn { error } { warning } to turn all errors into warnings, or with \msg_redirect_module:nnn { module } { error } { warning } to alter just those messages for module, or even \msg_redirect_name:nnn { module } { my-message } { warning } to target just one message. As Bruno notes, the filtering behaviour may not currently be ideal, but I think that the separation idea is worth having. There is still a need to write a 'user level' interface for filtering in this way. (Note. Redirection can be applied before modules are loaded: useful to get rid of load-time messages. The mechanism used keeps the message text definition and and redirection separate.) [*] Altering how the engine behaves is possible with LuaTeX. I'm not sure if there are appropriate hooks at the moment for the messages mentioned, but I'd imagine that this is possible. I'm assuming in the rest of my answer that we are talking about a cross-engine solution. - I'm looking forward to read more about "Turing to LaTeX3" in your blog. ;-) –  lockstep Sep 5 '11 at 17:20 Thanks for the very nice overview. I find it clearer than l3msg.dtx. Perhaps this should be reused somewhere in the LaTeX3 doc? Also, can messages be redirected before the module is loaded? –  Bruno Le Floch Sep 5 '11 at 17:55 @Bruno: Remember that l3msg is supposed to be a reference manual, so it's rather formalised. I'll see if I can improve any of it. On the load-order question, see the edit. –  Joseph Wright Sep 5 '11 at 18:57 @Joseph: I realized that this post doesn't fit very well in source3, but perhaps for the missing "learn to program in LaTeX3" document? I hadn't checked whether redirections were possible before loading, thanks for the info. On the fact that the filtering behavior is not ideal I'm mostly quoting you from a github issue. –  Bruno Le Floch Sep 6 '11 at 2:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8695560693740845, "perplexity": 2990.4343314766147}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802772134.89/warc/CC-MAIN-20141217075252-00078-ip-10-231-17-201.ec2.internal.warc.gz"}
http://mathhelpforum.com/statistics/212950-roll-pair-six-sided-dice-5-times-getting-four-4-s.html
# Math Help - Roll a Pair of six-sided dice 5 times and getting four 4's? 1. ## Roll a Pair of six-sided dice 5 times and getting four 4's? What is the probability of rolling a pair of six-sided dice five times and getting four 4's? @Plato this was the other questions I am really struggling with. This is what I came up with, but I'm not sure if it is correct. p(x=4) = C(5,4) × (1/36)^4 × (35/36)^5-4 which equals 2.894180045 ×10^-0.6 ....That looks wrong to me..? 2. ## Re: Roll a Pair of six-sided dice 5 times and getting four 4's? P(4) = 7/36 5choose 4 ((7/36)^4)(29/36) = .0057 so about one in 17543 attempts 3. ## Re: Roll a Pair of six-sided dice 5 times and getting four 4's? Originally Posted by tdotodot What is the probability of rolling a pair of six-sided dice five times and getting four 4's? When you say 'getting four 4's' do you mean a sum of four or do you mean four spots on one die? 4. ## Re: Roll a Pair of six-sided dice 5 times and getting four 4's? Originally Posted by Plato When you say 'getting four 4's' do you mean a sum of four or do you mean four spots on one die? ^That's how to question was worded. I think it means sum of four. (eg: 2 dots on one die and 2 dots on the other, or 3 dots and 1dot) So the Question is asking the probability of getting the sum of four, 4 times. 5. ## Re: Roll a Pair of six-sided dice 5 times and getting four 4's? Originally Posted by tdotodot ^That's how to question was worded. I think it means sum of four. (eg: 2 dots on one die and 2 dots on the other, or 3 dots and 1dot) So the Question is asking the probability of getting the sum of four, 4 times. That would be the way I would expect to read it. If you roll a pair of dice there are three ways to get a sum of four. So in five rolls getting a sum of four four times: $\binom{5}{4}\left(\frac{3}{36}\right)^4\left(\frac {33}{36}\right)$. 6. ## Re: Roll a Pair of six-sided dice 5 times and getting four 4's? Originally Posted by Plato That would be the way I would expect to read it. If you roll a pair of dice there are three ways to get a sum of four. So in five rolls getting a sum of four four times: $\binom{5}{4}\left(\frac{3}{36}\right)^4\left(\frac {33}{36}\right)$. Could I rewrite that as: p(x=4) = C(5,4) × (3/36)^4 × (33/36) p(x=4) = 2.210326646×10^-04 My calculator is outputting a weird number? OR is it NOT C(5,4) And just (5/4), which gives me 5.525816615×10^-05 ___Compared to what my original post shows, I was only wrong with the probability of getting a 4. I had 1/36 and 35/36 WhereAs you have 3/36 and 33/36. Once again -your a life saver man! 7. ## Re: Roll a Pair of six-sided dice 5 times and getting four 4's? Originally Posted by tdotodot Could I rewrite that as: p(x=4) = C(5,4) × (3/36)^4 × (33/36) p(x=4) = 2.210326646×10^-04 My calculator is outputting a weird number? $\binom{5}{4}\left(\frac{3}{36}\right)^4\left(\frac {33}{36}\right)=0.000221032664609$ 8. ## Re: Roll a Pair of six-sided dice 5 times and getting four 4's? Thanks, that actually looks right, unlike the number I keep getting on my calculator?!?!?! *Don't Get WHY* I tried using an online calculator, input everything exactly the same and got 0.000221032664609.... Guess there's something wrong with my calculator! JUST VERY GLAD I finished those two questions! 0.000221032664609. ONLINE CALCULATOR 2.210326646×10^-04 MY CALCULATOR ^It seems as though it is just multiplying it a few thousand times? Why?
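As a numerical cross-check of the formula in post 5 (my own sketch, using only the thread's numbers: P(sum = 4) = 3/36 per roll of the pair, five rolls, exactly four successes):

```python
from math import comb
import random

# P(sum of two dice == 4) = 3/36; probability of exactly four such sums in five rolls.
p = 3 / 36
exact = comb(5, 4) * p**4 * (1 - p)
print(exact)                      # 0.000221032664609...

# quick Monte Carlo sanity check
random.seed(1)
trials = 2_000_000
hits = 0
for _ in range(trials):
    fours = sum(1 for _ in range(5)
                if random.randint(1, 6) + random.randint(1, 6) == 4)
    hits += (fours == 4)
print(hits / trials)              # should be near the exact value
```

This reproduces 0.000221032664609, i.e. 2.210326646 × 10^-4 — the two calculators in the thread were showing the same number in different notations.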
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8302844762802124, "perplexity": 1140.231495521609}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988308.23/warc/CC-MAIN-20150728002308-00091-ip-10-236-191-2.ec2.internal.warc.gz"}
https://www.lessonplanet.com/teachers/the-environment-and-the-community
# The Environment and the Community Students observe their environment and come up with ways to protect it. In this environment lesson plan, students discuss pollution and ways to prevent it. Then they create a poster with drawings that show a goal to reduce pollution and steps to reach that goal.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8775549530982971, "perplexity": 1766.0226522798685}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865863.76/warc/CC-MAIN-20180523235059-20180524015059-00456.warc.gz"}
https://www.physicsforums.com/media/einstein-field-equations-for-beginners-youtube.588/
# Einstein Field Equations - for beginners! - YouTube

Einstein's Field Equations for General Relativity - including the Metric Tensor, Christoffel symbols, Ricci Curvature Tensor, Curvature Scalar, Stress Energy ...

jedishrfu, Mar 30, 2017
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8248430490493774, "perplexity": 3513.088625580049}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825141.95/warc/CC-MAIN-20171022041437-20171022061437-00359.warc.gz"}
http://math.stackexchange.com/questions/528096/characterization-of-weakly-convergent-to-zero-sequences-on-lp-for-1-le-p
# characterization of weakly convergent to zero sequences on $l^p$ for $1\le p < \infty$

Let $1\le p< \infty$. Show that a sequence $t_k = ({t_{kj}})_{j=1}^{\infty}\in l^p$ converges weakly to 0 iff $||t_k||_p$ is bounded and $\lim_k t_{kj}=0$ for every $j$.

I proved that if $t_k$ converges weakly to 0 then these two conditions hold. I want to prove the converse.

Let's assume that $1<p<\infty$. If I assume that $(t_k)$ is weakly Cauchy I can prove that it is weakly convergent to 0, but I don't know how to prove that it is weakly Cauchy. Under that assumption I used the reflexivity of the space $l^p$ ($l^1$ is not reflexive).

If $p=1$, since $l^1$ is not reflexive, my arguments are not valid here. I don't really know if the result is also true here. Please help me!

-

Apply that theorem to the case $p\in(1,+\infty)$ with $S=\{f_j\in (\ell_p)^*:j\in\mathbb{N}\}$, where $$f_j:\ell_p\to\mathbb{K}: t\mapsto t_j$$ For $p=1$ see this answer, where it was proved that weak convergence is equivalent to strong convergence. It remains to note that every strongly convergent sequence is bounded and pointwise convergent.

The span of the projections is the space of finitely supported vectors, usually denoted by $c_{00}$. It is dense in $\ell_p$ by the following argument. For a given $\varepsilon$ and $t\in\ell_p$ there exists $N\in\mathbb{N}$ such that $\sum_{j=N+1}^\infty|t_j|^p<\varepsilon^p$. Then consider $y\in c_{00}$ such that $y_j=t_j$ for $j=1,\ldots,N$ and $y_j=0$ otherwise. Then $\Vert t- y\Vert_p<\varepsilon$. – no identity Oct 17 '13 at 6:19

But note that for the case $p=1$ the dual of $l^1$ is $l^{\infty}$, and here $c_{00}$ is not dense in $l^{\infty}$, so the span of the projections is not dense in $(l^1)'$ – Shanks Oct 17 '13 at 7:26
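An illustration I am adding (not from the original question or answers): for $p>1$ the standard unit vectors $e_k$ have constant norm, tend to 0 coordinatewise, and pair to zero against any fixed functional coming from $\ell_q$; pairing the same vectors with the constant sequence $(1,1,1,\dots)\in\ell_\infty$ always gives 1, which is exactly the obstruction that makes the $p=1$ case delicate and forces the separate argument quoted above. A numerical sketch:

```python
import numpy as np

# Assumed setup (my own, for illustration): truncate all sequences to N coordinates.
N = 10_000
p = 2.0                                  # an exponent with 1 < p < infinity
y = 1.0 / np.arange(1, N + 1)            # a fixed functional, y = (1/j), lying in l^q
ones = np.ones(N)                        # the constant sequence in l^infinity

for k in (10, 100, 1000, 5000):
    e_k = np.zeros(N)
    e_k[k - 1] = 1.0                               # k-th standard unit vector
    norm_p = np.sum(np.abs(e_k) ** p) ** (1 / p)   # always 1, so the norms are bounded
    print(k, norm_p, float(y @ e_k), float(ones @ e_k))
    # <y, e_k> = 1/k -> 0, consistent with weak convergence for p > 1,
    # while <(1,1,...), e_k> = 1 for every k, the obstruction relevant to p = 1.
```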
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.995907723903656, "perplexity": 72.47976838957216}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931006637.79/warc/CC-MAIN-20141125155646-00022-ip-10-235-23-156.ec2.internal.warc.gz"}
https://events.math.unipd.it/hamschool2018/node/11
# Contributed Talks

Speaker: Patrick Bernard (ENS Paris), Talk 45 min.
Title: Lyapunov Functions of closed Cone Fields: from Conley Theory to Time Functions.
Abstract: We propose a theory "a la Conley" for cone fields using a notion of relaxed orbits based on cone enlargements, in the spirit of space-time geometry. We work in the setting of closed (or equivalently semi-continuous) cone fields with singularities. This setting contains (for questions which are parametrization independent, such as the existence of Lyapunov functions) the case of continuous vector-fields on manifolds, of differential inclusions, of Lorentzian metrics, and of continuous cone fields. We generalize to this setting the equivalence between stable causality and the existence of temporal functions. We also generalize the equivalence between global hyperbolicity and the existence of steep temporal functions.

Speaker: Misha Bialy (University of Tel Aviv), Talk 45 min.
Title: Around Birkhoff's conjecture for convex and other billiards.
Abstract: Birkhoff's conjecture states that the only integrable billiards in the plane are ellipses. I am going to discuss some results and questions motivated by this conjecture.

Speaker: Hector Sanchez Morgado (Nat. Aut. University of Mexico), Talk 45 min.
Title: Time-periodic Evans approach to weak KAM theory.
Abstract: We study the time-periodic version of Evans approach to weak KAM theory. Evans minimization problem is equivalent to a first order mean field game system. For the mechanical Hamiltonian we prove the existence of smooth solutions. We introduce the corresponding effective Lagrangian and Hamiltonian and prove that they are smooth. We also consider the limiting behavior of the effective Lagrangian and Hamiltonian, Mather measures and minimizers.

Speaker: Rafael Ruggiero (PUC-Rio), Talk 45 min.
Title: On the Birkhoff problem for Lagrangian minimizing tori
Abstract: We show that a smooth Lagrangian minimizing torus that is invariant by the geodesic flow of a Riemannian metric in the torus is a graph of the canonical projection provided that every point in the Lagrangian torus is nonwandering. The graph problem for Lagrangian minimizing two-dimensional tori was completely solved by Bialy and Polterovich in the 1980's, and since then very little progress has been made in higher dimensions. This contrasts with the great development of the graph problem for Lagrangian invariant tori homologous to the zero section since the works of Viterbo and Polterovich in the early 1990's. The result is part of a joint research project with Mario Jorge Carneiro.

Speaker: Jean-Baptiste Caillau (Univ. Côte d’Azur & CNRS/Inria), Talk 45 min.
Title: Smooth and broken Hamiltonian curves in optimal control
Abstract: In optimal control, minimizing trajectories are projections on the ambient manifold of Hamiltonian curves on the cotangent bundle (“extremal curves”). These curves may be smooth, and we report on results on the geodesic flow of almost-Riemannian metrics on the 2-sphere. Such metrics have singularities that can be suitably dealt with in a Hamiltonian framework. We show in particular that their caustics are given in terms of a billiard in the Poincare disk. In general though, the relevant Hamiltonian is only C^0 and minimizing trajectories are projections of broken curves (Lipschitz but not C^1). An important case for the control of mechanical systems is the case of two competing Hamiltonians.
Under suitable assumptions, neighbouring extremals are all broken and there is still a good notion of caustic. A more subtle situation arises for time minimization as neighbouring Hamiltonian curves of a broken extremal may be smooth or broken. In a well-chosen blow-up, the singularity of the extremal can be interpreted as a heteroclinic connection between two hyperbolic equilibria, resulting in a logarithmic singularity of the Hamiltonian flow. Speaker: Daniel Rosen (University of Tel Aviv), Talk 30 min. Title: Duality of Caustics in Minkowski Billiards Abstract: We study convex caustics in Minkowski billiards. We show that for the Euclidean billiard dynamics in a planar smooth centrally symmetric and strictly convex body K, for every convex caustic which K possesses, the "dual" billiard dynamics in which the table is the Euclidean unit disk and the geometry that governs the motion is induced by the body K, possesses a dual convex caustic. Such a pair of caustics is dual in a strong sense, and in particular they have the same perimeter, Lazutkin parameter (both measured with respect to the corresponding geometries), and rotation number. We show moreover that for general Minkowski billiards this phenomenon fails, and one can construct a smooth caustic in a Minkowski billiard table which possesses no dual convex caustic. Speaker: Ivan Beschastnyi (SISSA), Talk 30 min. Title: Monotone curves and Morse-type theorems Abstract: Monotone curves are special curves in the Lagrangian Grassmanian that arise very naturally in the context of optimal control problems. For example, they can be used to construct canonical lifts of discontinuous curves to the universal cover of the Lagrangian Grassmanian. In a recent work with A. A. Agrachev we have used them to prove a general theorem in optimal control that generalizes the Morse theorem in the classical calculus of variations. This theorem states that the Morse index of the Hessian of the functional can be efficiently computed using the Maslov index of a curve in the Lagrangian Grassmanian called Jacobi curve. Speaker: Gleb Smirnov (SISSA), Talk 30 min. Title: Elliptic diffeomorphisms of symplectic 4-manifolds Abstract: We show that symplectically embedded (−1)-tori give rise to certain elements in the symplectic mapping class group of 4-manifolds. An example is given where such elements are proved to be of infinite order. Speaker: Ramón Vera (Nat. Aut. University of Mexico), Talk 30 min. Title: Poisson Structures in near-symplectic manifolds Abstract: In this work we connect Poisson and near-symplectic geometry by showing that there are two almost regular Poisson structures induced by a near-symplectic 2n-manifold. The first structure is of maximal rank 2n and vanishes on a codimension-2 subspace. The second one is log-f symplectic of maximal rank 2n−2. We then compute the Poisson cohomology of the former structure in dimension 4, showing that it is finite and depends on the modular class. We also determine the cohomology of a different Poisson structure on smooth 4-manifolds, the one associated to broken Lefschetz fibrations. This completes the cohomology of the possible degeneracies of singular Poisson structures in dimension 4. Speaker: Jaume Alonso i Fernández (University of Antwerp), Talk 30 min. Title: Symplectic classification of semi-toric integrable systems: recent advances and examples Abstract: Semi-toric systems are a special class of autonomous completely integrable Hamiltonian systems defined on a 4-dimensional symplectic manifold. 
They have two first integrals with commuting flows: one that induces a circular action and another one that does not. Furthermore, only non-degenerate and non-hyperbolic singularities are allowed. These systems appear often in theoretical physics and their richness of possible dynamical behaviours is much greater than, say, toric systems, since new types of non-degenerate singularities can arise, such as focus-focus points. Semi-toric systems have been classified a few years ago by Pelayo and Vu Ngoc in terms of five symplectic invariants from which the whole system can be reconstructed. However, their explicit calculation is often not straight-forward and until now some invariants had not been yet calculated even for the most basic cases. In this talk we will present the classification and illustrate it with our last results, namely the explicit calculation of all symplectic invariants for two physically-inspired examples: the coupled spin-oscillator and the coupled angular momenta. This is a joint work with H. Dullin and S. Hohloch. Speaker: Murat Saglam (Ruhr-Universitaet Bochum), Talk 30 min. Title: Contact forms with arbitrarily large systolic ratio in any dimension Abstract: Since the geodesic flow of a Riemannian/Finsler manifold may be seen as a Reeb flow on the unit (co)tangent bundle of the underlying manifold, one recovers the systolic ratio as the ratio  of a suitable power of the minimal period of the Reeb flow and the contact volume. With this motivation, one may study the behavior of the systolic ratio as the contact form changes within the class of contact forms that defines a given contact manifold. Recently, Abbondandolo et al. showed that on any contact 3-manifold there exists a contact form with arbitrarily large systolic ratio. Following their work,  we show that the statement holds in any dimension.  Given any contact manifold (M,\xi), using a supported open book decomposition, we first construct a contact form such that on an arbitrarily large portion of M, the Reeb flow is of Boothby-Wang type, while on the complement of this portion, the minimal period of Reeb orbits is bounded below. Second, using certain hamiltonian ball maps, we construct contact mapping tori, for which the minimal period  is bounded below while the contact volume is arbitrarily small. Finally, we replace a collection of trivial mapping tori, which exists due to the Boothby-Wang fibration and covers the most volume of M, with the 'plugs' that are constructed via hamiltonian ball maps. It turns out that the minimal period and the contact volume of a plug are determined by the action and the Calabi invariant of the underlying hamiltonian ball map.  In order to show that the resulting contact form supports \xi, we utilize Giroux's work on supported open books in contact manifolds of higher dimensions.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8674175143241882, "perplexity": 785.0170335903491}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590711.36/warc/CC-MAIN-20180719070814-20180719090814-00306.warc.gz"}
https://www.khanacademy.org/test-prep/fr-twelveth-grade-math/analyse-fonctions-sinus-et-cosinus/representation-graphique-des-fonctions-trigonometriques/v/matching-a-trigonometric-function-to-its-graph
# Finding a sinusoidal function from its graph ## Video transcript We're asked to determine a function of the form y equals a sine of bx or y is equal to a cosine bx represented by the graph below. So we need to figure out maybe what the a's are, what the b's are, and whether this is a sine or a cosine function. So let's see what clues there are. So the first thing that I notice is that whatever this function is, when x is equal to 0, it does not equal 0. It is equal to negative 2. So based on that, based on what we know about sine and cosine functions, do you think that this is going to be of the form y is equal to a sine of bx or of the form y is equal to a cosine of bx? Without even knowing what the a's and b's are, do you think this is going to be a sine function or cosine function? Well, let's think about what sine of 0 is. If you take sine of 0, we already know that that is equal to 0. What is cosine of 0? Cosine of 0 is equal to 1. So it would be very hard-- and especially in this form, it would be impossible-- if sine of 0 is 0, to multiply 0 by something to get to negative 2. So it can't be a sine function of this form. You might say, well, the cosine of 0 is 1. But here, it's negative 2. But at least if you have a 1, you can then multiply it by something to get to a negative 2. So what we now know is that we are at least of this form. But now we have to figure out what the a's and b's are going to be equal to. We know that this function is y is equal to a cosine of bx. So the next question I ask you is what is a going to be. Well, let's think about it. We already saw if we just had cosine of bx, when x is equal to 0, cosine of b times 0 would just be cosine of 0. And it would get us to 1. But we're not 1. We're at negative 2. It looks like [? it ?] took a cosine function, and at least, when x is equal to 0, we multiplied it by negative 2. So this should be negative 2. So now we have a little bit more filled in of what we actually have. We know that it's y is equal to negative 2 cosine of bx. And this gels with what we see right over here, the amplitude here. You see that the difference between the maximum value and the minimum value, or the minimum and the maximum is 4, 1/2 of that is 2. Or another way you think about it, we're varying 2 from this center point. And over here, if you think about the amplitude-- the amplitude is the absolute value of this number right over here. The amplitude is equal to the absolute value of this negative 2, which is indeed equal 2. So it's consistent so far. Now let's think about what b here is. And maybe we can use our knowledge of what the period of a periodic function is to think about what b might be. Well, let's look over here. What is the period of this periodic function? Well, let's draw one period. So if we use this as our starting point-- or one cycle, I should say. Let's draw one cycle. If you view that as our starting point, at 2 pi over 3, we have completed that cycle. And then we could start the next cycle. We repeat the pattern over again. Then you start the next cycle. So based on that, what is the period? Well, it's the length that you need to go in x to complete one cycle. So that length right over there, you see, is 2 pi over 3. So the period is 2 pi over 3. And given that the period here is 2 pi over 3, can you figure out what b is going to be? Well, the period of this is going to be equal to 2 pi over the absolute value of b. And you can solve this multiple ways. You can multiply both sides by 3 and the absolute value of b. 
And you would be left with the absolute value of b is equal to 3, which means that b could be equal to positive or negative 3. And so you might say, well, Sal, what do I use? Does b equal positive 3 or negative 3? And so the next question I'll ask you is, for a cosine function, do you get different values if you were to make this a cosine of 3x or a cosine of negative 3x? Do you get different values? Well, if you play with the unit circle a little bit. So I'm going to draw a little rough unit circle right over here. Remember, cosine is the x-coordinate where we intersect the unit circle. And if we go in the positive angle direction, if we go in that direction, our x-coordinate-- it starts at 1, and then it gets a little bit shorter. If we go in the negative direction, it starts at 1, and then it gets a little bit shorter. And so you can experiment this with a good bit. But you'll see that cosine of 3x-- and this is only the case for cosine, not for sine-- cosine of 3x is equal to cosine of negative 3x. So you can actually pick either positive 3 or negative 3. But for simplicity in this case right over here, I'll just go with the positive 3. So this could be the graph of-- and now we get our drum roll-- y is equal to negative 2 times the cosine of-- I said I wouldn't do the negative-- the cosine of positive 3x. And we are done.
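As a quick check of the final answer (my addition, not part of the transcript): y = -2 cos(3x) should pass through (0, -2), swing between -2 and 2, and repeat every 2π/3.

```python
import numpy as np

f = lambda x: -2 * np.cos(3 * x)

x = np.linspace(0, 2 * np.pi, 100_001)
print(f(0.0))                                    # -2.0, matching the y-intercept on the graph
print(f(x).min(), f(x).max())                    # about -2 and 2, so the amplitude is 2
print(np.allclose(f(x), f(x + 2 * np.pi / 3)))   # True, so the period is 2*pi/3
```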
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9643517136573792, "perplexity": 202.12588975346864}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583667907.49/warc/CC-MAIN-20190119115530-20190119141530-00009.warc.gz"}
https://en.m.wikibooks.org/wiki/Linear_Algebra/Introduction_to_Matrices_and_Determinants
# Linear Algebra/Introduction to Matrices and Determinants The determinant is a function which associates to a square matrix an element of the field on which it is defined (commonly the real or complex numbers). ## Matrices Organization of a matrix Informally an m×n matrix (plural matrices) is a rectangular table of entries from a field (that is to say that each entry is an element of a field). Here m is the number of rows and n the number of the columns in the table. Those unfamiliar with the concept of a field, can for now assume that by a field of characteristic 0 (which we will denote by F) we are referring to a particular subset of the set of complex numbers. An m×n matrix (read as m by n matrix), is usually written as: ${\displaystyle A=\left({\begin{matrix}a_{11}&a_{12}&\cdots &a_{1n}\\a_{21}&a_{22}&\cdots &a_{2n}\\\vdots &\vdots &\ddots &\vdots \\a_{m1}&a_{m2}&\cdots &a_{mn}\end{matrix}}\right)}$ The ${\displaystyle i^{th}}$  row is an element of ${\displaystyle F^{n}}$ , showing the n components ${\displaystyle {\begin{pmatrix}a_{i1}&a_{i2}&\cdots a_{in}\end{pmatrix}}}$ . Similary the ${\displaystyle j^{th}}$  column is an element of ${\displaystyle F^{m}}$  showing the m components ${\displaystyle {\begin{pmatrix}a_{1j}\\a_{2j}\\\vdots \\a_{mj}\end{pmatrix}}}$ . Here m and n are called the dimensions of the matrix. The dimensions of a matrix are always given with the number of rows first, then the number of columns. It is also said that an m by n matrix has an order of m×n. Formally, an m×n matrix M is a function ${\displaystyle M:A\rightarrow F}$  where A = {1,2...m} × {1,2...n} and F is the field under consideration. It is almost always better to visualize a matrix as a rectangular table (or array) then as a function. A matrix having only one row is called a row matrix (or a row vector) and a matrix having only one column is called a column matrix (or a column vector). Two matrices of the same order whose corresponding entries are equal are considered equal. The (i,j)-entry of the matrix (often written as ${\displaystyle A_{ij}}$  or ${\displaystyle A_{i,j}}$ ) is the element at the intersection of the ${\displaystyle i^{th}}$  row (from the top) and the ${\displaystyle j^{th}}$  column (from the left). For example, ${\displaystyle {\begin{pmatrix}3&4&8\\2&7&11\\1&1&1\end{pmatrix}}}$ is a 3×3 matrix (said 3 by 3). The 2nd row is ${\displaystyle {\begin{pmatrix}2&7&11\end{pmatrix}}}$  and the 3rd column is ${\displaystyle {\begin{pmatrix}8\\11\\1\end{pmatrix}}}$ . The (2,3) entry is the entry at intersection of the 2nd row and the 3rd column, that is 11. Some special kinds of matrices are: • A square matrix is a matrix which has the same number of rows and columns. A diagonal matrix is a matrix with non zero entries only on the main diagonal (ie at ${\displaystyle A_{i,i}}$  positions). • The unit matrix or identity matrix In, is the matrix with elements on the diagonal set to 1 and all other elements set to 0. Mathematically, we may say that for the identity matrix ${\displaystyle I_{i,j}}$  (which is usually written as ${\displaystyle \delta _{i,j}}$  and called Kronecker's delta) is given by: ${\displaystyle \delta _{i,j}={\begin{cases}1,&i=j\\0,&i\neq j\end{cases}}}$ For example, if n = 3: ${\displaystyle I_{3}={\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}}.}$ • The transpose of an m-by-n matrix A is the n-by-m matrix AT formed by turning rows into columns and columns into rows, i.e. ${\displaystyle A_{i,j}=A_{j,i}^{T}\forall i,j}$ . 
An example is ${\displaystyle {\begin{bmatrix}1&2\\3&4\\5&6\end{bmatrix}}^{\mathrm {T} }\!\!\;\!=\,{\begin{bmatrix}1&3&5\\2&4&6\end{bmatrix}}\;}$

• A square matrix whose transpose is equal to itself is called a symmetric matrix; that is, A is symmetric if ${\displaystyle A^{\mathrm {T} }=A.\,}$  An example is ${\displaystyle {\begin{bmatrix}1&2&3\\2&4&-5\\3&-5&6\end{bmatrix}}}$

• A square matrix whose transpose is equal to its negative is called a skew-symmetric matrix; that is, A is skew-symmetric if ${\displaystyle A^{\mathrm {T} }=-A.\,}$  An example is ${\displaystyle {\begin{bmatrix}0&-3&4\\3&0&-5\\-4&5&0\end{bmatrix}}}$

Properties of these matrices are developed in the exercises.

## Determinants

To define a determinant of order n, suppose there are ${\displaystyle n^{2}}$ elements ${\displaystyle s_{ij}}$ of a field, where i and j are less than or equal to n. Define the following function (this function is important in the definition): ${\displaystyle S(a_{1},a_{2},a_{3},\ldots ,a_{n})}$ is the number of reversals, meaning the number of pairs of positions with ${\displaystyle n_{1}>n_{2}}$ but ${\displaystyle a_{n_{1}}<a_{n_{2}}}$, over all possible pairs. Suppose you have a permutation ${\displaystyle (a_{1},a_{2},a_{3},\ldots ,a_{n})}$ of the numbers from 1 to n. Then define a term of the determinant to be equal to ${\displaystyle (-1)^{S(a_{1},a_{2},a_{3},\ldots ,a_{n})}s_{1a_{1}}s_{2a_{2}}s_{3a_{3}}\cdots s_{na_{n}}}$. The sum of all possible terms (i.e. over all possible permutations) is called the determinant.

## Theorem

Definition: The transpose of a matrix A, denoted ${\displaystyle A^{T}}$, is the matrix resulting when the columns and rows are interchanged, i.e. the matrix ${\displaystyle s_{ji}}$ when A is the matrix ${\displaystyle s_{ij}}$.

A matrix and its transpose have the same determinant: ${\displaystyle \det(A^{\top })=\det(A).\,}$

### Proof

All terms are the same, and the signs of the terms are also unchanged since all reversals remain reversals. Thus, the sum is the same.

## Theorem

Interchanging two rows (or columns) changes the sign of the determinant: ${\displaystyle \det {\begin{bmatrix}\cdots \\{\mbox{row A}}\\\cdots \\{\mbox{row B}}\\\cdots \end{bmatrix}}=-\det {\begin{bmatrix}\cdots \\{\mbox{row B}}\\\cdots \\{\mbox{row A}}\\\cdots \end{bmatrix}}}$ .

### Proof

To show this, suppose two adjacent rows (or columns) are interchanged. Then any reversals in a term would not be affected except for the reversal of the elements of that term within that row (or column), in which case it adds or subtracts a reversal, thus changing the signs of all terms, and thus the sign of the determinant. Now, if two rows, the ath row and the (a+n)th row, are interchanged, then interchange successively the ath row and the (a+1)th row, then the (a+1)th row and the (a+2)th row, and continue in this fashion until one reaches the (a+n-1)th row. Then go backwards until one goes back to the ath row. This has the same effect as switching the ath and the (a+n)th rows, and takes n-1 switches going forwards and n-2 switches going backwards, so their sum is an odd number; the determinant is therefore multiplied by -1 an odd number of times, and the total effect is to multiply it by -1.

### Corollary

A determinant with two rows (or columns) that are the same has the value 0.

Proof: This determinant would be the additive inverse of itself, since interchanging the two equal rows (or columns) does not change the determinant but still changes its sign. The only number for which this is possible is 0.

## Theorem

The determinant is linear in the rows and columns of the matrix.
${\displaystyle \det {\begin{bmatrix}\ddots &\vdots &\ldots \\\lambda a_{1}+\mu b_{1}&\cdots &\lambda a_{n}+\mu b_{n}\\\cdots &\vdots &\ddots \end{bmatrix}}=\lambda \det {\begin{bmatrix}\ddots &\vdots &\cdots \\a_{1}&\cdots &a_{n}\\\cdots &\vdots &\ddots \end{bmatrix}}+\mu \det {\begin{bmatrix}\ddots &\vdots &\cdots \\b_{1}&\cdots &b_{n}\\\cdots &\vdots &\ddots \end{bmatrix}}}$ ### Proof The terms are of the form a1...${\displaystyle \lambda a+\mu b}$ ...an. Using the distributive law of fields, this comes out to be a1...${\displaystyle \lambda a}$ ...an + a1...${\displaystyle \mu b}$ ...an, an thus its sum of such terms is the sum of the two determinants: ${\displaystyle \lambda \det {\begin{bmatrix}\ddots &\vdots &\cdots \\a_{1}&\cdots &a_{n}\\\cdots &\vdots &\ddots \end{bmatrix}}+\mu \det {\begin{bmatrix}\ddots &\vdots &\cdots \\b_{1}&\cdots &b_{n}\\\cdots &\vdots &\ddots \end{bmatrix}}}$ ### Corollary Adding a row (or column) times a number to another row (or column) does not affect the value of a determinant. #### Proof Suppose you have a determinant A with the kth column added by another column times a number: ${\displaystyle {\begin{bmatrix}a_{11}&a_{12}&a_{13}&\ldots &a_{1k}+\mu a_{1b}&\ldots &a_{1n}\\a_{21}&a_{22}&a_{23}&\ldots &a_{2k}+\mu a_{2b}&\ldots &a_{2n}\\a_{31}&a_{32}&a_{33}&\ldots &a_{3k}+\mu a_{3b}&\ldots &a_{3n}\\\vdots &\vdots &\vdots &\vdots &\vdots &\vdots &\vdots \\a_{n1}&a_{n2}&a_{n3}&\ldots &a_{nk}+\mu a_{nb}&\ldots &a_{nn}\end{bmatrix}}}$ where akb are elements of another column. By the linear property, this is equal to ${\displaystyle {\begin{bmatrix}a_{11}&a_{12}&a_{13}&\ldots &a_{1k}&\ldots &a_{1n}\\a_{21}&a_{22}&a_{23}&\ldots &a_{2k}&\ldots &a_{2n}\\a_{31}&a_{32}&a_{33}&\ldots &a_{3k}&\ldots &a_{3n}\\\vdots &\vdots &\vdots &\vdots &\vdots &\vdots &\vdots \\a_{n1}&a_{n2}&a_{n3}&\ldots &a_{nk}&\ldots &a_{nn}\end{bmatrix}}+{\begin{bmatrix}a_{11}&a_{12}&a_{13}&\ldots &\mu a_{1b}&\ldots &a_{1n}\\a_{21}&a_{22}&a_{23}&\ldots &\mu a_{2b}&\ldots &a_{2n}\\a_{31}&a_{32}&a_{33}&\ldots &\mu a_{3b}&\ldots &a_{3n}\\\vdots &\vdots &\vdots &\vdots &\vdots &\vdots &\vdots \\a_{n1}&a_{n2}&a_{n3}&\ldots &\mu a_{nb}&\ldots &a_{nn}\end{bmatrix}}}$ The second number is equal to 0 because it has two columns that are the same. Thus, it is equal to ${\displaystyle {\begin{bmatrix}a_{11}&a_{12}&a_{13}&\ldots &a_{1k}&\ldots &a_{1n}\\a_{21}&a_{22}&a_{23}&\ldots &a_{2k}&\ldots &a_{2n}\\a_{31}&a_{32}&a_{33}&\ldots &a_{3k}&\ldots &a_{3n}\\\vdots &\vdots &\vdots &\vdots &\vdots &\vdots &\vdots \\a_{n1}&a_{n2}&a_{n3}&\ldots &a_{nk}&\ldots &a_{nn}\end{bmatrix}}}$ which is the same as the matrix A. • It is easy to see that ${\displaystyle \det(rI_{n})=r^{n}\,}$  and thus ${\displaystyle \det(rA)=\det(rI_{n}\cdot A)=r^{n}\det(A)\,}$  for all ${\displaystyle n}$ -by-${\displaystyle n}$  matrices ${\displaystyle A}$  and all scalars ${\displaystyle r}$ . • A matrix over a commutative ring R is invertible if and only if its determinant is a unit in R. In particular, if A is a matrix over a field such as the real or complex numbers, then A is invertible if and only if det(A) is not zero. In this case we have ${\displaystyle \det(A^{-1})=\det(A)^{-1}.\,}$ Expressed differently: the vectors v1,...,vn in Rn form a basis if and only if det(v1,...,vn) is non-zero. The determinants of a complex matrix and of its conjugate transpose are conjugate: ${\displaystyle \det(A^{*})=\det(A)^{*}.\,}$
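A small numerical spot-check of the statements above (my addition; it uses floating-point determinants from NumPy rather than the permutation definition):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 6, size=(4, 4)).astype(float)
n = A.shape[0]
r = 3.0

print(np.isclose(np.linalg.det(A.T), np.linalg.det(A)))           # det(A^T) = det(A)

B = A.copy()
B[[0, 2]] = B[[2, 0]]                                             # swap rows 0 and 2
print(np.isclose(np.linalg.det(B), -np.linalg.det(A)))            # sign flips

C = A.copy()
C[1] += 2.5 * A[3]                                                # add a multiple of another row
print(np.isclose(np.linalg.det(C), np.linalg.det(A)))             # determinant unchanged

print(np.isclose(np.linalg.det(r * A), r**n * np.linalg.det(A)))  # det(rA) = r^n det(A)
```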
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 44, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9418220520019531, "perplexity": 332.02341493922336}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401598891.71/warc/CC-MAIN-20200928073028-20200928103028-00277.warc.gz"}
https://www.physicsforums.com/threads/stupid-riddle-that-is-driving-me-mad.96545/
# Stupid Riddle that is driving me mad

1. Oct 24, 2005

### Tom McCurdy

You drive a car at a speed of 40 km/hr to a place and then 60 km/hr back... what is the average speed of the car? I feel really stupid for asking this but it's really making me mad...

2. Oct 24, 2005

### Tom McCurdy

hahaha as soon as I posted it it was so obvious...
Set the distance equal to one:
t1=1/40
t2=1/60
$$\frac{\Delta d}{\Delta t}=\text{avg vel}$$
$$\frac{2}{\frac{1}{40}+\frac{1}{60}}$$

3. Oct 24, 2005

### Tom McCurdy

Which produces 48 km/hr as the answer

4. Oct 25, 2005

### TD

You are correct. Often people make the obvious mistake of taking the arithmetic mean, being (40+60)/2 = 50, but in this case you need the harmonic mean

5. Oct 28, 2005

### ivybond

The harmonic mean could be calculated slightly more easily:
$$\frac{v_1 v_2}{\frac{v_1 + v_2}{2}}$$

Last edited: Oct 28, 2005
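For completeness (my addition, not part of the thread), the same computation in two lines of Python:

```python
v1, v2 = 40.0, 60.0                      # speeds over two equal distances, in km/h
print(2 * v1 * v2 / (v1 + v2))           # harmonic mean: 48.0, not the arithmetic mean 50
```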
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8835517764091492, "perplexity": 3404.7451469824573}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825812.89/warc/CC-MAIN-20171023073607-20171023093607-00159.warc.gz"}
https://www.physicsforums.com/threads/is-the-inverse-image-bounded.229377/
# Homework Help: Is the inverse image bounded 1. Apr 16, 2008 ### bertram 1. The problem statement, all variables and given/known data Let f be a continuous mapping from metric spaces X to Y. $$K \subset Y$$is compact. Is $$f^{-1}$$(K) bounded? 2. Relevant equations Theorem 4.8 Corollary (Rudin) A mapping f of a metric space X into Y is continuous iff $$f^{-1}$$(C) is closed in X for every closed set C in Y. 3. The attempt at a solution So my idea was to show that $$f^{-1}$$(K) was continuous, but i can't really figure that out immediately. I just tried next to describe K and $$f^{-1}$$(K) as best I could.... We know that K is closed and compact (compact subsets of metric spaces are closed). This will imply that $$f^{-1}$$(K) is closed (Thm 4.8 corollary). So I have that K is closed and compact and that $$f^{-1}$$(K) is closed. I just don't know how to make the ends meet. Maybe I'm doing this wrong, or just missing something obvious. Thanks in advance for any help. 2. Apr 16, 2008 ### Dick Think about why it might not be true before you start trying to prove it. Suppose f:R->R and f(x)=sin(x)?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9305854439735413, "perplexity": 698.0615672024883}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583511314.51/warc/CC-MAIN-20181017220358-20181018001858-00262.warc.gz"}
https://athul.github.io/notes/posts/lcs-1.html
# Module 1

Created: Sep 11, 2020 10:54 AM

• Formulas can be implemented as Binary Trees 👉 Inorder → LEFT → Root → Right; Preorder → ROOT → Left → Right
• Inorder Traversal can be used to reduce ambiguity.
• The formula for the above tree is $$( p \rightarrow q ) \iff (\neg p \rightarrow \neg q)$$
• First check left of root ⇒ $$\rightarrow$$
  1. On right of $$\to$$ is $$p \therefore p$$
  2. Then to root of $$p$$, ie $$\to$$, on right of it is $$q$$. This becomes $$p \to q$$
  3. Coming back to root gives us $$\iff$$
  4. Going to the right of the tree gives us $$\to$$ first.
     1. Then on this tree we move left (recursive)
     2. we get $$\neg$$ first and p which gives us $$\neg p$$
     3. We go to root which gives us $$\to$$
     4. On going we get $$\neg$$ and $$q$$
     5. Thus we get from this tree $$\neg p \to \neg q$$
  5. This gives us $$(p \to q) \iff (\neg p \to \neg q)$$

12/09/2020

• Polish Notation
• Congruence or $$\equiv$$
• Atoms - Indivisible Units in a statement Eg: $$p,q$$
• Grammar
  • $$a \to a\ op\ b$$ ; $$op = +, -, *, /$$
  • $$fml$$ is any propositional formula

$fml \iff fml \\ fml \space op\space fml \iff fml\\ \\ ... \\ ... \\ p \to q \iff \neg p \to \neg q$

Sep 16, 2020

## Interpretations

• $$p \to q \iff \neg p \to \neg q$$

### Definition

• $$A \in \digamma$$

## Boolean Operators

Inclusive OR: $$\lor$$
eXclusive OR: $$\oplus$$

Sep 17, 2020

Def 1: Let S = $${A_1,...}$$ be a set of formulas and let $$\mathscr{P}_S =\cup_i \mathscr{P}_{A_i}$$ that is, $$\mathscr{P}_S$$ is the set of all the atoms that appear in the formulas of $$S$$. An interpretation for $$S$$ is a function $$\mathscr{I}_S:\mathscr{P}_S\mapsto\{T,F\}$$. For any $$A_i\in S,v_{\mathscr{I}_S}(A_i)$$

### Logical Equivalence

Def 1: Let $$A_1, A_2 \in \mathscr{F}$$. If $$v_\mathscr{I}(A_1) = v_\mathscr{I}(A_2)$$ for all interpretations $$\mathscr{I}$$, then $$A_1$$ is logically equivalent to $$A_2$$, denoted by $$A_1 \equiv A_2$$

### Substitution

[[Graph Theory Module 1]]
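A minimal sketch I am adding (class and function names are my own, not from the notes) of the first idea above: store a formula as a binary tree and recover it with an inorder traversal.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    value: str                      # an operator ("->", "<->", "~") or an atom ("p", "q")
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def inorder(node: Optional[Node]) -> str:
    """Left -> Root -> Right, parenthesising each binary operator node."""
    if node is None:
        return ""
    if node.left is None and node.right is None:
        return node.value
    if node.left is None:                           # unary operator such as negation
        return f"{node.value}{inorder(node.right)}"
    return f"({inorder(node.left)} {node.value} {inorder(node.right)})"

# (p -> q) <-> (~p -> ~q), built by hand
tree = Node("<->",
            Node("->", Node("p"), Node("q")),
            Node("->", Node("~", right=Node("p")), Node("~", right=Node("q"))))
print(inorder(tree))    # ((p -> q) <-> (~p -> ~q))
```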
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9287370443344116, "perplexity": 2071.7703308694104}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949093.14/warc/CC-MAIN-20230330004340-20230330034340-00051.warc.gz"}
https://cs.stackexchange.com/questions/77957/there-are-parsimonious-reductions-between-all-np-complete-problems-well-known-c
# There are parsimonious reductions between all NP-Complete problems: Well-known conjecture?

The reduction from SAT to Clique shows a way to construct a graph with cliques from a Boolean formula. A closer look at that reduction even yields a simple algorithm which finds a large clique in the graph from a given satisfying assignment to the variables. That mapping is bijective: each satisfying assignment is mapped to a different large clique, and the number of large cliques is the same as the number of satisfying assignments.

A reduction which preserves the number of solutions is called a parsimonious reduction (technical definition below). It is a stronger notion of reduction than the usual Karp reductions, because not all reductions have that property. NP-Completeness is usually defined in terms of Karp reductions, but since parsimonious reductions are clearly more beautiful, it is natural to conjecture:

Conjecture: All NP-Complete problems can be reduced to one another via parsimonious reductions.

But I can't find this in the literature.

Question: Does the literature contain the conjecture that all NP-Complete languages can be reduced to each other via parsimonious reductions?

This is different from the Berman-Hartmanis conjecture that all NP-Complete problems can be reduced to each other via polynomial-time invertible bijections. That conjecture is about instances of the language, whereas I talk about certificates to the NP machine. Of course the most beautiful thing would be a parsimonious bijection! References to that are greatly appreciated.

Definition: If $L,K\subseteq\{0,1\}^\ast$ are languages and $f\colon\{0,1\}^\ast\to\{0,1\}^\ast$ is a reduction from $L$ to $K$, then we say that $f$ is parsimonious if there are polynomial-time nondeterministic Turing Machines $N, M$ accepting $L$ and $K$, respectively, such that for all $x\in\{0,1\}^\ast$, $\#N(x)=\#M(f(x))$, where $\#N(x)$ denotes the number of certificates accepted by a non-deterministic Turing Machine on input $x$.

(The motivation for the question is that, for my thesis, I found that the reduction in the quantum Cook-Levin theorem is parsimonious. Beautiful! I want to conjecture that all quantum NP-complete problems have parsimonious reductions, so I am looking for the classical reference.)

• Exercise 2.13 in Sanjeev Arora's book (Computational Complexity) asks to "Prove that the reduction from every NP-language L to SAT presented in the proof of Lemma 2.11 can be made parsimonious." – fade2black Jul 14 '17 at 21:18
• @fade2black That is a result they use in the book a lot later on. The conjecture is that there is a parsimonious reduction from SAT to other languages, too, which is not obvious. But good find! This is an essential fact to remember when proving almost all other completeness properties of other classes. – Lieuwe Vinkhuijzen Jul 14 '17 at 21:22
• I am pretty sure this is an open problem, though it is not discussed much (at least not in this form). The connection to counting problems appears more often. Perhaps you will find the discussions in these posts interesting cstheory.stackexchange.com/questions/16119/… cs.stackexchange.com/questions/3295/… – Ariel Jul 14 '17 at 22:19
• Well, the example is not good, because the two given problems are one-by-one reducible to each other (that's why max clique is in SNP). – rus9384 Jul 14 '17 at 22:43
• @fade2black, the existence of a parsimonious reduction from any problem in NP to a problem from SNP does not imply the opposite statement.
– rus9384 Jul 14 '17 at 22:47

Yes, it is a well-known conjecture. Oded Goldreich states the fact that "all known reductions among natural $NP$-complete problems are either parsimonious or can be easily modified to be so" (Computational Complexity: A Conceptual Perspective, by Oded Goldreich).

1. This is an easy consequence of Berman-Hartmanis.

Assuming that Berman-Hartmanis holds, for every pair of $NP$-complete problems $L, K$, we show that it is possible to parsimoniously reduce $L$ to $K$. In fact, every poly-time invertible bijective reduction will then be parsimonious. Indeed, fix some NTM $N$ that decides $L$ and a poly-time invertible bijective reduction $f$ from $L$ to $K$ (which always exists by Berman-Hartmanis), and consider the following NTM $M$ that decides $K$: on an input $y\in\Sigma^*$, compute the preimage $x=f^{-1}(y)$ (which is poly-time computable since $f$ is poly-time invertible by assumption), then run $N(x)$. Clearly, $M$ is a non-deterministic polynomial-time Turing machine accepting $K$. And for every $x\in L$, denoting $y=f(x)$, we have that every certificate $c$ for $x$ according to $N$ is also a certificate for $y$ according to $M$, and vice versa. So $\#N(x)=\#M(y)$. For every $x\not\in L$, $\#N(x)=\#M(y)=0$.

2. If $NP=UP$ (which is considered unlikely), including the subcase of $NP=P$.

Every reduction is then parsimonious, without the requirement of being poly-time invertible (or even being bijective) at all.

• What if, for example, $L$ is SAT and $K$ is NAE-3SAT? How can SAT be reduced parsimoniously to NAE-3SAT? In particular, for any instance of SAT with an odd number of solutions, there is no instance of NAE-3SAT with the same number of solutions, because every instance of NAE-3SAT has an even number of solutions (for every assignment that NAE-satisfies a given formula, so does the complement of that assignment)? What am I missing? – Neal Young Jun 4 at 14:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8514397740364075, "perplexity": 456.86285602487806}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655897027.14/warc/CC-MAIN-20200708124912-20200708154912-00305.warc.gz"}
http://www.chegg.com/homework-help/questions-and-answers/construction-summing-amplifier-non-inverting-1-construct-circuit-exactly-shown-figure-2-br-q2057861
Construction of the Summing Amplifier (non-inverting) 1. Construct the circuit exactly as shown in Figure 2 on a breadboard. 2. Measure voltages at pins 4 and 11with respect to ground and record the values in Table 8 on the worksheet. 3. Adjust the potentiometers R11 and R13 to obtain VA = VB = +1 VDC. Record the DMM measurements for these voltages in Table 9 on the worksheet. 4. Calculate the output voltage VOUT using the following formula: Record both the calculated value and DMM measurement of VOUT in Table 10 on the worksheet. Also determine the difference between the measured value and calculated value. 5. Adjust the potentiometers R11 and R13 to obtain VA = VB = -1 VDC. Record the DMM measurements for these voltages in Table 11 on the worksheet. 6. Calculate the output voltage VOUT using the formula in step IV.B.4. Record both the calculated value and DMM measurement of VOUT in Table 12 on the worksheet. Also determine the difference between the measured value and calculated value. 7. Adjust the potentiometers R11 and R13 to obtain VA = -1 VDC and VB = +2 VDC. Record the DMM measurements for these voltages in Table 13 on the worksheet. 8. Calculate the output voltage VOUT using the formula in step IV.B.4. Record both the calculated value and DMM measurement of VOUT in Table 14 on the worksheet. Also determine the difference between the measured value and calculated value.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9258930683135986, "perplexity": 2138.1416546636724}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500833461.95/warc/CC-MAIN-20140820021353-00123-ip-10-180-136-8.ec2.internal.warc.gz"}
https://web2.0calc.com/questions/solve-for_1
+0 # Solve for : 0 48 2 Solve for $$r$$: $$\frac{r-45}{2} = \frac{3-2r}{5}.$$ Jan 2, 2020 #1 +16 +2 Hi, when you multiply both sides by 10, you get $$5(r-45)=2(3-2r)$$ because $$10 \div 2 = 5 \text{ and } 10 \div 5 = 2$$ . When you expand the equation you should get $$5r-225=6-4r$$. Moving the variables to one side and constants to another you get $$9r=231$$. Dividing both sides by 9 gets us $$r= 231\div 9$$ which equals to $$25 \frac{2}{3}$$. -MPS101 #2 +106519 +1 Thanks, MPS    !!!!! BTW.....welcome.....!!! CPhill  Jan 2, 2020 edited by CPhill  Jan 2, 2020
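The same equation checked with SymPy (my addition, not part of the thread), confirming r = 77/3, i.e. 25 2/3:

```python
from sympy import Eq, solve, symbols

r = symbols("r")
solution = solve(Eq((r - 45) / 2, (3 - 2 * r) / 5), r)
print(solution, float(solution[0]))      # [77/3] 25.666..., i.e. 25 2/3
```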
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8163212537765503, "perplexity": 2405.143854148784}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250595282.35/warc/CC-MAIN-20200119205448-20200119233448-00052.warc.gz"}
https://tohoku.pure.elsevier.com/en/publications/finite-temperature-effective-boundary-theory-of-the-quantized-the
# Finite-temperature effective boundary theory of the quantized thermal Hall effect Research output: Contribution to journalArticlepeer-review 10 Citations (Scopus) ## Abstract A finite-temperature effective free energy of the boundary of a quantized thermal Hall system is derived microscopically from the bulk two-dimensional Dirac fermion coupled with a gravitational field. In two spatial dimensions, the thermal Hall conductivity of fully gapped insulators and superconductors is quantized and given by the bulk Chern number, in analogy to the quantized electric Hall conductivity in quantum Hall systems. From the perspective of effective action functionals, two distinct types of the field theory have been proposed to describe the quantized thermal Hall effect. One of these, known as the gravitational Chern-Simons action, is a kind of topological field theory, and the other is a phenomenological theory relevant to the Strěda formula. In order to solve this problem, we derive microscopically an effective theory that accounts for the quantized thermal Hall effect. In this paper, the two-dimensional Dirac fermion under a static background gravitational field is considered in equilibrium at a finite temperature, from which an effective boundary free energy functional of the gravitational field is derived. This boundary theory is shown to explain the quantized thermal Hall conductivity and thermal Hall current in the bulk by assuming the Lorentz symmetry. The bulk effective theory is consistently determined via the boundary effective theory. Original language English 023038 New Journal of Physics 18 2 https://doi.org/10.1088/1367-2630/18/2/023038 Published - 2016 Feb 10 ## Keywords • effective field theory • gravitational response • thermal Hall effect • topological insulators ## ASJC Scopus subject areas • Physics and Astronomy(all) ## Fingerprint Dive into the research topics of 'Finite-temperature effective boundary theory of the quantized thermal Hall effect'. Together they form a unique fingerprint.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8785136342048645, "perplexity": 971.6836543040412}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300624.10/warc/CC-MAIN-20220117212242-20220118002242-00598.warc.gz"}
https://www.physicsforums.com/threads/static-friction.381491/
# Static friction

1. Feb 25, 2010

### Sebkl23

How can we find the horizontal force required to move an object with the coefficient of static friction and the mass?? I just need a formula please, thank you

2. Feb 25, 2010

### Raekwon

Fr = μN
Fr - resistive force of friction
μ - coefficient of friction
N - normal force (gravity/applied pressure)

3. Feb 25, 2010

### Eric McClean

a) The friction force only acts in the opposite direction to a horizontal force on an object. The only forces acting on the block described are gravity and the normal force from the table, both of which are in the vertical direction. Therefore, there is no friction force acting on the block.

b) If a horizontal force F acted on the block, the force due to static friction would then begin to act upon the block. This would happen until the force F became greater than the opposing friction force, which would result in the block's motion. Problem b would be solved by finding the maximum force due to static friction. This is equal to the coefficient of static friction multiplied by the normal force: μs*Fn.

c) The static friction force equals the force F acting upon the block until F rises greater than the value of the maximum static friction force. So, if half the force needed to overcome the static friction force were applied, 1/2 SF (SF equaling the maximum static friction force), then the static friction force would be an equal and opposite force 1/2 SF as well. This would be written as 1/2*m*g*μs, where m is the mass of the block, g is the acceleration due to gravity: 9.81 m/s2, and μs is the coefficient of static friction. (The normal force is equal to the mass of the block multiplied by the acceleration due to gravity.)

d) The sum of the horizontal forces (vertical and horizontal forces are calculated separately) equals the mass times the horizontal acceleration of an object, as stated in Newton's second law. The two forces acting upon the block are a force F and the kinetic friction force (the kinetic friction force is the friction force acting upon moving bodies, whereas the static friction force acts upon stationary bodies). The equation is written F-Kf=m*a, where Kf is the force due to kinetic friction. (Note: the reason the kinetic friction force is subtracted from the horizontal force acting on the block, instead of vice versa, is because the movement of the block is in the direction of the force F.) Solving for the acceleration of the block, the mass m is moved to the other side of the equation like this: (F-Kf)/m=a, or (F-m*g*μk)/m=a, where μk is the coefficient of kinetic friction. (The kinetic friction force is equal to the coefficient of kinetic friction multiplied by the normal force.)

Hope this is what you're looking for.

4. Feb 26, 2010

### Sebkl23

Ok so ... let's say I have a 6 kg object, I know the coefficient of static friction is 0.42, and I want to know what force is required to move the object ... so I start by finding the normal force of the object by multiplying the mass and the acceleration due to gravity, which would be:
Fn = 6 kg × 9.81 m/s2 = 58.86
Fn = 58.86 N
and now I can find the needed force?
μs = F/Fn
0.42 = F/58.86
F = 0.42×58.86
F = 24.72 N
so a 24.72 N force is needed to move the object?
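The last post's arithmetic as a tiny script (my addition), so it can be re-run with other masses or coefficients:

```python
g = 9.81                     # acceleration due to gravity, m/s^2
m = 6.0                      # mass of the object, kg
mu_s = 0.42                  # coefficient of static friction

F_n = m * g                  # normal force on a horizontal surface: 58.86 N
F_required = mu_s * F_n      # force needed to just overcome static friction
print(F_n, F_required)       # 58.86 and about 24.72 (newtons)
```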
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9662628173828125, "perplexity": 353.79756326200396}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864387.54/warc/CC-MAIN-20180622084714-20180622104714-00010.warc.gz"}
http://mathhelpforum.com/pre-calculus/176306-complex-numbers-polar-form-print.html
Complex numbers and polar form

• March 30th 2011, 04:25 AM
cottontails
Complex numbers and polar form
Let z = 3 - 3i. Express the following complex numbers in polar form.
(i) z (ii) z^4 (iii) 1/z
With parts (ii) and (iii) of the question, would z be different from being 3 - 3i? So, for (ii), it would be (3 - 3i)^4 as being z and from that, you would have to change that result into polar form. - Is that right?
However, when I was working part (i), I became a bit puzzled when I realised that theta was equal to -pi/4 or 7pi/4. So, I'm unsure as to which one to use in the overall answer.
|z| = |3 - 3i| = √(3^2 + (-3)^2) = √(9 + 9) = √18 = 3√2
theta = tan^-1 (-3/3) = tan^-1 (-1) = -pi/4 or 7pi/4
z = 3√2 cis (-pi/4) = 3√2 (cos pi/4 - i sin pi/4)
(But then again, couldn't the answer also be: 3√2 (cos 7pi/4 + i sin 7pi/4)?) I'm not sure if my working is right with that either.
I tried with part (ii), getting the answer (3√2)^4 cis (-pi/4 • 4) = 324 cis (-pi) = 324 (cos pi - i sin pi) ... although, as with part (i), theta was -pi or 7pi so I'm unsure which theta you end up using (if that makes any sense).
As for part (iii), I'm completely clueless about how to answer that question. Although, is it right to answer the questions with |z| = 3√2? (As seen above, that is how I did the working out for part (ii).)
If anyone could help me with my working out and with answering all/either of the three parts of the question, it would be extremely appreciated. Thanks!

• March 30th 2011, 04:47 AM
mr fantastic

Quote:

Originally Posted by cottontails
Let z = 3 - 3i. Express the following complex numbers in polar form. (i) z (ii) z^4 (iii) 1/z With parts (ii) and (iii) of the question, would z be different from being 3 - 3i? So, for (ii), it would be (3 - 3i)^4 as being z and from that, you would have to change that result into polar form. - Is that right? However, when I was working part (i), I became a bit puzzled when I realised that theta was equal to -pi/4 or 7pi/4. So, I'm unsure as to which one to use in the overall answer. |z| = |3 - 3i| = √(3^2 + (-3)^2) = √(9 + 9) = √18 = 3√2 theta = tan^-1 (-3/3) = tan^-1 (-1) = -pi/4 or 7pi/4 z = 3√2 cis (-pi/4) = 3√2 (cos pi/4 - i sin pi/4) (But then again, couldn't the answer also be: 3√2 (cos 7pi/4 + i sin 7pi/4)?) I'm not sure if my working is right with that either. I tried with part (ii), getting the answer (3√2)^4 cis (-pi/4 • 4) = 324 cis (-pi) = 324 (cos pi - i sin pi) ... although, as with part (i), theta was -pi or 7pi so I'm unsure which theta you end up using (if that makes any sense). As for part (iii), I'm completely clueless about how to answer that question. Although, is it right to answer the questions with |z| = 3√2? (As seen above, that is how I did the working out for part (ii).) If anyone could help me with my working out and with answering all/either of the three parts of the question, it would be extremely appreciated. Thanks!

For part (i), both answers are correct. However, if the principal argument is to be used, only one of them is correct. You need to go back and check what definition of principal argument is being used so that you can choose between them.

For part (iii), use de Moivre's Theorem with n = -1.

• March 30th 2011, 05:20 AM
cottontails
I plotted z = 3 - 3i on the complex plane and it is within the fourth quadrant. Is it still correct to go by "ASTC", so that thereby the negative answer (-pi/4) would be the right value of theta to use?

• March 30th 2011, 06:05 AM
HallsofIvy

Quote:

Originally Posted by cottontails
Let z = 3 - 3i.
Express the following complex numbers in polar form. (i) z (ii) z^4 (iii) 1/z With parts (ii) and (iii) of the question, would z be different from being 3 - 3i? So, for (ii), it would be (3 - 3i)^4 as being z and from that, you would have to change that result into polar form. - Is that right?

That is a very strange way of expressing yourself. In all 3 problems z is the number you are given, 3 - 3i. In (ii) the number you want to put in polar form is not "z" but $z^4$. However, once you have done (i) and know the polar form of z, say $z= re^{i\theta}$, then $z^4= r^4e^{4i\theta}$ and $1/z= (1/r)e^{-i\theta}$.

Quote:

However, when I was working part (i), I became a bit puzzled when I realised that theta was equal to -pi/4 or 7pi/4. So, I'm unsure as to which one to use in the overall answer.

In the complex plane, those are exactly the same. Which you should use depends upon what "convention" your class is using. If you are writing all arguments between 0 and $2\pi$, use $7\pi/4$; if, instead, you are using the convention of expressing arguments between $-\pi$ and $\pi$, use $-\pi/4$.

Quote:

|z| = |3 - 3i| = √(3^2 + (-3)^2) = √(9 + 9) = √18 = 3√2 theta = tan^-1 (-3/3) = tan^-1 (-1) = -pi/4 or 7pi/4 z = 3√2 cis (-pi/4) = 3√2 (cos pi/4 - i sin pi/4) (But then again, couldn't the answer also be: 3√2 (cos 7pi/4 + i sin 7pi/4)?)

Yes, either of those is mathematically correct. Which you should use depends, as I said, on what convention your class is using.

Quote:

I'm not sure if my working is right with that either. I tried with part (ii), getting the answer (3√2)^4 cis (-pi/4 • 4) = 324 cis (-pi) = 324 (cos pi - i sin pi) ... although, as with part (i), theta was -pi or 7pi so I'm unsure which theta you end up using (if that makes any sense). As for part (iii), I'm completely clueless about how to answer that question. Although, is it right to answer the questions with |z| = 3√2? (As seen above, that is how I did the working out for part (ii).) If anyone could help me with my working out and with answering all/either of the three parts of the question, it would be extremely appreciated. Thanks!

• March 30th 2011, 06:11 AM
Plato
Here are some general tricks to help on (iii). For all nonzero complex numbers $\dfrac{1}{z} = \dfrac{{\overline z }}{{\left| z \right|^2 }}$. And the conjugate of $\text{cis}(\theta)$ is just $\text{cis}(-\theta)$. Thus if $z=r(\text{cis}(\theta))$ then $\dfrac{1}{z}=\dfrac{\text{cis}(-\theta)}{r}$.

• March 30th 2011, 06:12 AM
HallsofIvy

Quote:

Originally Posted by cottontails
I plotted z = 3 - 3i on the complex plane and it is within the fourth quadrant. Is it still correct to go by "ASTC", so that thereby the negative answer (-pi/4) would be the right value of theta to use?

"ASTC"? You mean in which quadrant the trig functions are positive? Since you are "going the other way", finding the argument (angle) from the trig function, that is not relevant. As both mr fantastic and I have said, $-\pi/4$ and $7\pi/4$ are both correct, and which of them you use depends upon the convention your class is using. If you don't remember your teacher explaining that, ask your teacher. (Strictly speaking, since you can always add multiples of $2\pi$ without changing the angle, such things as $8\pi- \pi/4= \frac{31}{4}\pi$ would also be possible, but the two commonly used conventions are, as I said before, the argument between 0 and $2\pi$ or the argument between $-\pi$ and $\pi$.)
• March 30th 2011, 06:32 AM
cottontails
With my textbook (and using part (i) as the example for this case), they would plot 3 - 3i on the complex plane. Hence, arg(3-3i) would be an angle in the fourth quadrant. Apparently, by looking at it plotted on the complex plane, you are able to "easily distinguish between right and wrong answers". So, going by that: if you were to plot it on the complex plane, would you then go by the 'size' of the angle it makes and let whichever of the two is the closest match be theta? However, even if that sort of thinking is correct, I would assume it would again be a struggle knowing which value of theta to choose for parts (ii) and (iii), as I'd imagine them being difficult to plot on the complex plane.

• April 5th 2011, 08:29 PM
cottontails
I asked my friend in my maths tutorial about it and she also said the same thing about the principal argument (which I had completely forgotten about when I attempted the question). So from there, I was able to figure out what the right angles were. Thanks everyone for your help though!
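As a quick numerical cross-check of the answers discussed in this thread (my own addition, not part of the original posts), here is a small Python sketch using the standard cmath module; cmath.polar returns the modulus and the principal argument in [-π, π].

```python
import cmath, math

z = 3 - 3j

r, theta = cmath.polar(z)          # modulus and principal argument
print(r, math.sqrt(18))            # both are 4.2426... (= 3*sqrt(2))
print(theta, -math.pi / 4)         # principal argument is -pi/4

# Parts (ii) and (iii): moduli raise to the power, arguments scale
r4, theta4 = cmath.polar(z**4)     # r4 = (3*sqrt(2))**4 = 324, theta4 = pi (same angle as -pi)
rinv, thetainv = cmath.polar(1/z)  # rinv = 1/(3*sqrt(2)), thetainv = +pi/4
print(r4, theta4, rinv, thetainv)
```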
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 21, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9225366711616516, "perplexity": 922.5735002284118}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657135549.24/warc/CC-MAIN-20140914011215-00079-ip-10-234-18-248.ec2.internal.warc.gz"}
https://www.hepdata.net/search/?q=observables%3AASYM&page=1&cmenergies=8000.0&size=50
Showing 17 of 17 results #### Forward–backward asymmetry of Drell–Yan lepton pairs in pp collisions at $\sqrt{s} = 8$ $\,\mathrm{TeV}$ The collaboration Khachatryan, Vardan ; Sirunyan, Albert M ; Tumasyan, Armen ; et al. Eur.Phys.J. C76 (2016) 325, 2016. Inspire Record 1415949 A measurement of the forward–backward asymmetry ${A}_{\mathrm{FB}}$ of oppositely charged lepton pairs ( $\mu \mu$ and $\mathrm{e}\mathrm{e}$ ) produced via $\mathrm{Z}/\gamma ^*$ boson exchange in pp collisions at $\sqrt{s} = 8$ $\,\mathrm{TeV}$ is presented. The data sample corresponds to an integrated luminosity of 19.7 $\,\mathrm{fb}^{-1}$ collected with the CMS detector at the LHC. The measurement of ${A}_{\mathrm{FB}}$ is performed for dilepton masses between 40 $\,\text {GeV}$ and 2 $\,\mathrm{TeV}$ and for dilepton rapidity up to 5. The ${A}_{\mathrm{FB}}$ measurements as a function of dilepton mass and rapidity are compared with the standard model predictions. 0 data tables match query #### Measurement of the charge asymmetry in top quark pair production in pp collisions at $\sqrt(s) =$ 8 TeV using a template method The collaboration Khachatryan, Vardan ; Sirunyan, Albert M ; Tumasyan, Armen ; et al. Phys.Rev. D93 (2016) 034014, 2016. Inspire Record 1388178 0 data tables match query #### Measurement of the charge asymmetry in top-quark pair production in the lepton-plus-jets final state in pp collision data at $\sqrt{s}=8\,\mathrm TeV{}$ with the ATLAS detector The collaboration Aad, Georges ; Abbott, Brad ; Abdallah, Jalal ; et al. Eur.Phys.J. C76 (2016) 87, 2016. Inspire Record 1392455 This paper reports inclusive and differential measurements of the $t\bar{t}$ charge asymmetry $A_{\text {C}}$ in $20.3~{\mathrm{fb}^{-1}}$ of $\sqrt{s} = 8~\mathrm TeV{}$ $pp$ collisions recorded by the ATLAS experiment at the Large Hadron Collider at CERN. Three differential measurements are performed as a function of the invariant mass, transverse momentum and longitudinal boost of the $t\bar{t}$ system. The $t\bar{t}$ pairs are selected in the single-lepton channels (e or $\mu$ ) with at least four jets, and a likelihood fit is used to reconstruct the $t\bar{t}$ event kinematics. A Bayesian unfolding procedure is performed to infer the asymmetry at parton level from the observed data distribution. The inclusive $t\bar{t}$ charge asymmetry is measured to be $A_{\text {C}}{} = 0.009 \pm 0.005$ (stat. $+$ syst.). The inclusive and differential measurements are compatible with the values predicted by the Standard Model. 0 data tables match query #### Inclusive and differential measurements of the $t\overline{t}$ charge asymmetry in pp collisions at $\sqrt{s} =$ 8 TeV The collaboration Khachatryan, Vardan ; Sirunyan, Albert M ; Tumasyan, Armen ; et al. Phys.Lett. B757 (2016) 154-179, 2016. Inspire Record 1382590 0 data tables match query #### Measurement of the forward-backward asymmetry in $Z/\gamma^{\ast} \rightarrow \mu^{+}\mu^{-}$ decays and determination of the effective weak mixing angle The collaboration Aaij, Roel ; Adeva, Bernardo ; Adinolfi, Marco ; et al. JHEP 1511 (2015) 190, 2015. Inspire Record 1394859 The forward-backward charge asymmetry for the process $q\overline{q}\to Z/{\gamma}^{\ast}\to {\mu}^{+}{\mu}^{-}$ is measured as a function of the invariant mass of the dimuon system. Measurements are performed using proton proton collision data collected with the LHCb detector at $\sqrt{s}=7$ and 8 TeV, corresponding to integrated luminosities of 1 fb$^{−1}$ and 2 fb$^{−1}$ respectively. 
Within the Standard Model the results constrain the effective electroweak mixing angle to be ${ \sin}^2{\theta}_{\mathrm{W}}^{\mathrm{eff}}=0.23142\pm 0.00073\pm 0.00052\pm 0.00056,$ where the first uncertainty is statistical, the second systematic and the third theoretical. This result is in agreement with the current world average, and is one of the most precise determinations at hadron colliders to date. 0 data tables match query #### Measurement of the charge asymmetry in highly boosted top-quark pair production in $\sqrt{s} =$ 8 TeV $pp$ collision data collected by the ATLAS experiment The collaboration Aad, Georges ; Abbott, Brad ; Abdallah, Jalal ; et al. Phys.Lett. B756 (2016) 52-71, 2016. Inspire Record 1410588 In the pp→tt¯ process the angular distributions of top and anti-top quarks are expected to present a subtle difference, which could be enhanced by processes not included in the Standard Model. This Letter presents a measurement of the charge asymmetry in events where the top-quark pair is produced with a large invariant mass. The analysis is performed on 20.3 fb −1 of pp collision data at s=8TeV collected by the ATLAS experiment at the LHC, using reconstruction techniques specifically designed for the decay topology of highly boosted top quarks. The charge asymmetry in a fiducial region with large invariant mass of the top-quark pair ( mtt¯>0.75 TeV ) and an absolute rapidity difference of the top and anti-top quark candidates within −2<|yt|−|yt¯|<2 is measured to be 4.2±3.2% , in agreement with the Standard Model prediction at next-to-leading order. A differential measurement in three tt¯ mass bins is also presented. 0 data tables match query #### Study of the production of $\Lambda_b^0$ and $\overline{B}^0$ hadrons in $pp$ collisions and first measurement of the $\Lambda_b^0\rightarrow J/\psi pK^-$ branching fraction The collaboration Aaij, R. ; Adeva, Bernardo ; Adinolfi, Marco ; et al. Chin.Phys. C40 (2016) 011001, 2016. Inspire Record 1391317 The product of the differential production cross-section and the branching fraction of the decay is measured as a function of the beauty hadron transverse momentum, p(T), and rapidity, y. The kinematic region of the measurements is p(T) < 20 GeV/c and 2.0 < y < 4.5. The measurements use a data sample corresponding to an integrated luminosity of 3fb(−)(1) collected by the LHCb detector in pp collisions at centre-of-mass energies in 2011 and in 2012. Based on previous LHCb results of the fragmentation fraction ratio the branching fraction of the decay is measured to bewhere the first uncertainty is statistical, the second is systematic, the third is due to the uncertainty on the branching fraction of the decay B̅(0) → J/ψK̅*(892)(0), and the fourth is due to the knowledge of . The sum of the asymmetries in the production and decay between and is also measured as a function of p(T) and y. The previously published branching fraction of , relative to that of , is updated. The branching fractions of are determined. 0 data tables match query #### Measurement of the differential cross section and charge asymmetry for inclusive $\mathrm {p}\mathrm {p}\rightarrow \mathrm {W}^{\pm }+X$ production at ${\sqrt{s}} = 8$ TeV The collaboration Khachatryan, Vardan ; Sirunyan, Albert M ; Tumasyan, Armen ; et al. Eur.Phys.J. C76 (2016) 469, 2016. 
Inspire Record 1426517 0 data tables match query #### Measurement of forward $W$ and $Z$ boson production in association with jets in proton-proton collisions at $\sqrt{s}=8$ TeV The collaboration Aaij, Roel ; Abellán Beteta, Carlos ; Adeva, Bernardo ; et al. JHEP 1605 (2016) 131, 2016. Inspire Record 1454404 The production of W and Z bosons in association with jets is studied in the forward region of proton-proton collisions collected at a centre-of-mass energy of 8 TeV by the LHCb experiment, corresponding to an integrated luminosity of 1.98 ± 0.02 fb$^{−1}$. The W boson is identified using its decay to a muon and a neutrino, while the Z boson is identified through its decay to a muon pair. Total cross-sections are measured and combined into charge ratios, asymmetries, and ratios of W +jet and Z+jet production cross-sections. Differential measurements are also performed as a function of both boson and jet kinematic variables. All results are in agreement with Standard Model predictions. 0 data tables match query #### Measurement of $D_s^{\pm}$ production asymmetry in $pp$ collisions at $\sqrt{s} =7$ and 8 TeV The collaboration Aaij, Roel ; Adeva, Bernardo ; Adinolfi, Marco ; et al. JHEP 1808 (2018) 008, 2018. Inspire Record 1674916 The inclusive D$_{s}^{±}$ production asymmetry is measured in pp collisions collected by the LHCb experiment at centre-of-mass energies of $\sqrt{s}=7$ and 8 TeV. Promptly produced D$_{s}^{±}$ mesons are used, which decay as D$_{s}^{±}$  → ϕπ$^{±}$, with ϕ → K$^{+}$K$^{−}$. The measurement is performed in bins of transverse momentum, p$_{T}$, and rapidity, y, covering the range 2.5 < p$_{T}$ < 25.0 GeV/c and 2.0 < y < 4.5. No kinematic dependence is observed. Evidence of nonzero D$_{s}^{±}$ production asymmetry is found with a significance of 3.3 standard deviations. 0 data tables match query #### Measurement of top quark polarisation in t-channel single top quark production The collaboration Khachatryan, Vardan ; Sirunyan, Albert M ; Tumasyan, Armen ; et al. JHEP 1604 (2016) 073, 2016. Inspire Record 1403169 0 data tables match query #### Angular analysis of the decay $B^0 \to K^{*0} \mu^+ \mu^-$ from pp collisions at $\sqrt s = 8$ TeV The collaboration Khachatryan, Vardan ; Sirunyan, Albert M ; Tumasyan, Armen ; et al. Phys.Lett. B753 (2016) 424-448, 2016. Inspire Record 1385600 0 data tables match query #### Search for contact interactions and large extra dimensions in the dilepton channel using proton-proton collisions at $\sqrt{s}$ = 8 TeV with the ATLAS detector The collaboration Aad, Georges ; Abbott, Brad ; Abdallah, Jalal ; et al. Eur.Phys.J. C74 (2014) 3134, 2014. Inspire Record 1305430 0 data tables match query #### Differential branching fraction and angular moments analysis of the decay $B^0 \to K^+ \pi^- \mu^+ \mu^-$ in the $K^*_{0,2}(1430)^0$ region The collaboration Aaij, Roel ; Adeva, Bernardo ; Adinolfi, Marco ; et al. JHEP 1612 (2016) 065, 2016. Inspire Record 1486676 0 data tables match query #### Study of $W$ boson production in association with beauty and charm The collaboration Aaij, Roel ; Adeva, Bernardo ; Adinolfi, Marco ; et al. Phys.Rev. D92 (2015) 052001, 2015. Inspire Record 1370436 The associated production of a W boson with a jet originating from either a light parton or heavy-flavor quark is studied in the forward region using proton-proton collisions. The analysis uses data corresponding to integrated luminosities of 1.0 and 2.0  fb-1 collected with the LHCb detector at center-of-mass energies of 7 and 8 TeV, respectively. 
The W bosons are reconstructed using the W→μν decay and muons with a transverse momentum, pT, larger than 20 GeV in the pseudorapidity range 2.0<η<4.5. The partons are reconstructed as jets with pT>20  GeV and 2.2<η<4.2. The sum of the muon and jet momenta must satisfy pT>20  GeV. The fraction of W+jet events that originate from beauty and charm quarks is measured, along with the charge asymmetries of the W+b and W+c production cross sections. The ratio of the W+jet to Z+jet production cross sections is also measured using the Z→μμ decay. All results are in agreement with Standard Model predictions. 0 data tables match query #### Angular analysis of the $B^{0} \to K^{*0} \mu^{+} \mu^{-}$ decay using 3 fb$^{-1}$ of integrated luminosity The collaboration Aaij, Roel ; Abellán Beteta, Carlos ; Adeva, Bernardo ; et al. JHEP 1602 (2016) 104, 2016. Inspire Record 1409497 An angular analysis of the B$^{0}$ → K$^{*0}$(→ K$^{+}$ π$^{−}$)μ$^{+}$ μ$^{−}$ decay is presented. The dataset corresponds to an integrated luminosity of 3.0 fb$^{−1}$ of pp collision data collected at the LHCb experiment. The complete angular information from the decay is used to determine CP-averaged observables and CP asymmetries, taking account of possible contamination from decays with the K$^{+}$ π$^{−}$ system in an S-wave configuration. The angular observables and their correlations are reported in bins of q$^{2}$, the invariant mass squared of the dimuon system. The observables are determined both from an unbinned maximum likelihood fit and by using the principal moments of the angular distribution. In addition, by fitting for q$^{2}$-dependent decay amplitudes in the region 1.1 < q$^{2}$ < 6.0 GeV$^{2}$/c$^{4}$, the zero-crossing points of several angular observables are computed. A global fit is performed to the complete set of CP-averaged observables obtained from the maximum likelihood fit. This fit indicates differences with predictions based on the Standard Model at the level of 3.4 standard deviations. These differences could be explained by contributions from physics beyond the Standard Model, or by an unexpectedly large hadronic effect that is not accounted for in the Standard Model predictions. 0 data tables match query #### Measurement of forward W and Z boson production in $pp$ collisions at $\sqrt{s}=8$ TeV The collaboration Aaij, Roel ; Abellán Beteta, Carlos ; Adeva, Bernardo ; et al. JHEP 1601 (2016) 155, 2016. Inspire Record 1406555 Measurements are presented of electroweak boson production using data from pp collisions at a centre-of-mass energy of $\sqrt{s}=8$ TeV. The analysis is based on an integrated luminosity of 2.0 fb$^{−1}$ recorded with the LHCb detector. The bosons are identified in the W → μν and Z → μ$^{+}$ μ$^{−}$ decay channels. The cross-sections are measured for muons in the pseudorapidity range 2.0 < η < 4.5, with transverse momenta p$_{T}$ > 20 GeV/c and, in the case of the Z boson, a dimuon mass within $60 < {M}_{\mu }{{{}_{{}^{+}}}_{\mu}}_{{}^{-}}<120$ GeV/c$^{2}$. 
The results are $\sigma_{W^+ \to \mu^+ \nu} = 1093.6 \pm 2.1 \pm 7.2 \pm 10.9 \pm 12.7\ \mathrm{pb}$, $\sigma_{W^- \to \mu^- \overline{\nu}} = 818.4 \pm 1.9 \pm 5.0 \pm 7.0 \pm 9.5\ \mathrm{pb}$, $\sigma_{Z \to \mu^+ \mu^-} = 95.0 \pm 0.3 \pm 0.7 \pm 1.1 \pm 1.1\ \mathrm{pb}$, where the first uncertainties are statistical, the second are systematic, the third are due to the knowledge of the LHC beam energy and the fourth are due to the luminosity determination. The evolution of the W and Z boson cross-sections with centre-of-mass energy is studied using previously reported measurements with 1.0 fb$^{-1}$ of data at 7 TeV. Differential distributions are also presented. Results are in good agreement with theoretical predictions at next-to-next-to-leading order in perturbative quantum chromodynamics. 0 data tables match query
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9873583316802979, "perplexity": 3250.7743269128355}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512411.13/warc/CC-MAIN-20181019145850-20181019171350-00308.warc.gz"}
http://www.cs.nyu.edu/pipermail/fom/2007-November/012272.html
# [FOM] Algebra and Ramsey Type Theorems

A. Mani a_mani_sc_gs at yahoo.co.in
Tue Nov 13 11:52:46 EST 2007

On Tuesday 13 Nov 2007 5:47:15 am Dana Scott wrote:
> You wrote:
> > For example the theorem that "Every finite semigroup
> > has at least one idempotent" is essentially a Ramsey type
> > theorem (it can be proved as well by a simple contra-
> > diction argument).
>
> I didn't see the Ramsey argument. Here is a pretty easy
> number-theoretic proof. Is this what you had in mind?
>
> Let x be any element.  Since the total number of elements
> is finite, there are positive integers n and m where:
>        x^(2^(n + m)) = x^(2^m).
> Let y = x^(2^m).  We see y^(2^n) = y.  If n = 1, we are
> done.  If n > 1, then multiply both sides of the last
> equation by y^(2^n - 2).  We conclude:
>        y^(2^n) y^(2^n - 2) = y^(2^n - 1),
> but the LHS = y^(2^n + 2^n - 2) = (y^(2^n - 1))^2.
> So we have found an idempotent.  Q.E.D.

It is to be seen in a slightly roundabout way.

[n] = {1, 2, ..., n}; a k-subset = a subset with k elements.

An r-colouring of a set F is simply a map \eta : F \to [r], so that it is a partition of F into r parts.

Let #(S) = r.

By the original Ramsey theorem we have an N = n(2, r, 3) such that for any r-colouring of the 2-element subsets of [N], there is a 3-element subset all of whose 2-element subsets have the same colour.

Consider any sequence {x1, x2, ..., xN} of elements of S and the r-colouring \eta of the 2-element subsets of [N] defined via: if 1 <= i < j <= N, then

\eta({xi, xj}) = xi xi+1 ... xj-1 \in S (product of elements in the semigroup).

Clearly we have a 3-element subset {i, j, k} of [N] with i < j < k s.t.

\eta({xi, xj}) = \eta({xj, xk}) = \eta({xi, xk}) = a (say),

but the product of the first two is the third, so a^2 = a.

----------------------------------

Actually Furstenberg, H. and Katznelson, Y. in particular proved some results in compact semigroups that make use of Ramsey theory, but in algebra proper few results are seen as being Ramsey-type theorems.

Best

A. Mani

--
A. Mani
Member, Cal. Math. Soc
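Scott's power-trick argument is easy to test computationally. Below is a minimal Python sketch (my own illustration, not from the post) that finds an idempotent among the powers of an element in a finite semigroup given by a multiplication rule; the example is Z_6 under multiplication mod 6, and all names are my own.

```python
def find_idempotent_power(x, mul):
    """Return an idempotent of the form x^k in a finite semigroup.

    mul is the associative binary operation.  Since the semigroup is
    finite, the powers of x eventually repeat, and by Scott's argument
    above some power of x is idempotent."""
    seen = {}
    power, k = x, 1
    while power not in seen:
        seen[power] = k
        power, k = mul(power, x), k + 1
    # every power of x equals one of the recorded ones; one of them is idempotent
    for e in seen:
        if mul(e, e) == e:
            return e
    raise AssertionError("unreachable for a finite semigroup")

mul_mod6 = lambda a, b: (a * b) % 6
print(find_idempotent_power(2, mul_mod6))  # 4, since 4*4 = 16 = 4 (mod 6)
```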
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9572941064834595, "perplexity": 4833.569226309593}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049277286.54/warc/CC-MAIN-20160524002117-00035-ip-10-185-217-139.ec2.internal.warc.gz"}
https://proofwiki.org/wiki/Sum_of_Integrals_on_Adjacent_Intervals_for_Integrable_Functions
# Sum of Integrals on Adjacent Intervals for Integrable Functions ## Theorem Let $f$ be a real function which is Riemann integrable on any closed interval $\mathbb I$. Let $a, b, c \in \mathbb I$. Then: $\displaystyle \int_a^c f \left({t}\right) \rd t + \int_c^b f \left({t}\right) \rd t = \int_a^b f \left({t}\right) \rd t$ ### Corollary Let $a_0, a_1, \ldots, a_n$ be real numbers, where $n \in \N$ and $n \ge 2$. Then: $\displaystyle \int_{a_0}^{a_n} f \left({t}\right) \rd t = \sum_{i \mathop = 0}^{n - 1} \int_{a_i}^{a_{i + 1} } f \left({t}\right) \rd t$ ## Proof Without loss of generality, assume $a < b$. First let $a < c < b$. Let $P_1$ and $P_2$ be any subdivisions of $\left[{a \,.\,.\, c}\right]$ and $\left[{c \,.\,.\, b}\right]$ respectively. Then $P = P_1 \cup P_2$ is a subdivision of $\left[{a \,.\,.\, b}\right]$. From the definitions of upper sum and lower sum: $L \left({P_1}\right) + L \left({P_2}\right) = L \left({P}\right)$ $U \left({P_1}\right) + U \left({P_2}\right) = U \left({P}\right)$ We consider the lower sum. The same conclusion can be obtained by investigating the upper sum. By definition of definite integral: $\displaystyle L \left({P}\right) \le \int_a^b f \left({t}\right) \rd t$ Thus, given the subdivisions $P_1$ and $P_2$, we have: $\displaystyle L \left({P_1}\right) + L \left({P_2}\right) \le \int_a^b f \left({t}\right) \rd t$ and so: $\displaystyle L \left({P_1}\right) \le \int_a^b f \left({t}\right) \rd t - L \left({P_2}\right)$ So, for any subdivision $P_2$ of $\left[{c \,.\,.\, b}\right]$, $\displaystyle \int_a^b f \left({t}\right) \rd t - L \left({P_2}\right)$ is an upper bound of $L \left({P_1}\right)$. Thus: $\displaystyle \sup_{P_1} \left({L \left({P_1}\right)}\right) \le \int_a^b f \left({t}\right) \rd t - L \left({P_2}\right)$ where $\sup_{P_1} \left({L \left({P_1}\right)}\right)$ ranges over all subdivisions of $P_1$. Thus by definition of definite integral: $\displaystyle \int_a^c f \left({t}\right) \rd t \le \int_a^b f \left({t}\right) \rd t - L \left({P_2}\right)$ and so: $\displaystyle L \left({P_2}\right) \le \int_a^b f \left({t}\right) \rd t - \int_a^c f \left({t}\right) \rd t$ Similarly, we find that: $\displaystyle \int_c^b f \left({t}\right) \rd t \le \int_a^b f \left({t}\right) \rd t - \int_a^c f \left({t}\right) \rd t$ Therefore: $\displaystyle \int_a^b f \left({t}\right) \rd t \ge \int_a^c f \left({t}\right) \rd t + \int_c^b f \left({t}\right) \rd t$ Having established the above, we now need to put it into context. Let $P$ be any subdivision of $\left[{a \,.\,.\, b}\right]$, which may or may not include the point $c$. Let $Q = P \cup \left\{ {c}\right\}$ be the subdivision of $\left[{a \,.\,.\, b}\right]$ obtained from $P$ by including with it, if necessary, the point $c$. It is easy to show that $L \left({P}\right) \le L \left({Q}\right)$. Let $Q_1$ be the subdivision of $\left[{a \,.\,.\, b}\right]$ which includes only the points of $Q$ that lie in $\left[{a \,.\,.\, c}\right]$. Let $Q_2$ be the subdivision of $\left[{a \,.\,.\, b}\right]$ which includes only the points of $Q$ that lie in $\left[{c \,.\,.\, b}\right]$. 
From the definition of lower sum:

$L \left({Q}\right) = L \left({Q_1}\right) + L \left({Q_2}\right)$

We have:

$\displaystyle L \left({P}\right) \le L \left({Q}\right) = L \left({Q_1}\right) + L \left({Q_2}\right) \le \int_a^c f \left({t}\right) \rd t + \int_c^b f \left({t}\right) \rd t$

So $\displaystyle \int_a^c f \left({t}\right) \rd t + \int_c^b f \left({t}\right) \rd t$ is an upper bound for $L \left({P}\right)$, where $P$ is any subdivision of $\left[{a \,.\,.\, b}\right]$.

Thus:

$\displaystyle \sup_P \left({L \left({P}\right)}\right) \le \int_a^c f \left({t}\right) \rd t + \int_c^b f \left({t}\right) \rd t$

Thus, by definition:

$\displaystyle \int_a^b f \left({t}\right) \rd t \le \int_a^c f \left({t}\right) \rd t + \int_c^b f \left({t}\right) \rd t$

Combining this with the result:

$\displaystyle \int_a^b f \left({t}\right) \rd t \ge \int_a^c f \left({t}\right) \rd t + \int_c^b f \left({t}\right) \rd t$

the result follows.

$\Box$

Now suppose $a < b < c$. Then from the definition of definite integral:

$\displaystyle \int_c^b f \left({x}\right) \rd x := -\int_b^c f \left({x}\right) \rd x$

and it follows from the main result above that:

$\displaystyle \int_a^b f \left({t}\right) \rd t = \int_a^c f \left({t}\right) \rd t - \int_b^c f \left({t}\right) \rd t = \int_a^c f \left({t}\right) \rd t + \int_c^b f \left({t}\right) \rd t$

The case of $c < a < b$ is proved similarly.

Finally, suppose $a = c < b$. Then, by Integral on Zero Interval:

$\displaystyle \int_a^b f \left({t}\right) \rd t = 0 + \int_c^b f \left({t}\right) \rd t = \int_a^c f \left({t}\right) \rd t + \int_c^b f \left({t}\right) \rd t$

since $a = c$.

The case of $a < c = b$ is proved similarly.

Hence the result, from Proof by Cases.

$\blacksquare$
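As an informal numerical sanity check of this additivity result (my own addition, not part of the ProofWiki article), the following Python sketch compares Riemann-style sums of the integral over [a, b] with the sum of the integrals over [a, c] and [c, b] for a sample continuous function; the function, interval, and grid size are arbitrary choices.

```python
import math

def riemann(f, a, b, n=100000):
    """Left Riemann sum approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + i * h) for i in range(n)) * h

f = lambda t: math.exp(-t * t)       # any continuous sample function
a, c, b = -1.0, 0.3, 2.0

whole = riemann(f, a, b)
split = riemann(f, a, c) + riemann(f, c, b)
print(whole, split)                  # the two agree to several decimal places
print(abs(whole - split) < 1e-4)     # True, up to discretisation error
```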
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9939477443695068, "perplexity": 151.25082103009103}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027318952.90/warc/CC-MAIN-20190823172507-20190823194507-00301.warc.gz"}
https://soar.wichita.edu/handle/10057/119/browse?type=subject&value=Algebraic+Ricci+solitons
Now showing items 1-1 of 1 • #### Nilsolitons of H-type in the Lorentzian setting  (Houston Journal of Mathematics, 2015) It is known that all left-invariant pseudo-Riemannian metrics on the three-dimensional Heisenberg group H-3 are algebraic Ricci solitons. We consider generalizations of Riemannian H-type, namely pseudoH-type and pH-type. ...
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9036011099815369, "perplexity": 4859.856337716945}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300658.84/warc/CC-MAIN-20220118002226-20220118032226-00399.warc.gz"}
http://mymathforum.com/calculus/27807-tangent-line-equation.html
# Tangent line equation

May 25th, 2012, 03:31 PM   #1
Member, Joined: Jan 2012

Hey, I'm having trouble with a quotient rule problem that asks you to find the equation of the tangent line. The function is

$\frac{\sqrt{x}\,(3-2x^2)}{x}$

The question then states that when x = 9 the corresponding y value will be ______ and the slope of the tangent line is f'(9) = _______. Therefore the equation of the tangent line is _______? In the form ax + b.

For the first blank I just inserted x = 9 into the original equation and solved: f(9) = -53. For the second blank I got the slope equalling -55/9 by plugging x = 9 into the derivative of f(x). I calculated the derivative using the product rule:

$\sqrt{x}\cdot\frac{x(-2)-(3-2x^2)(1)}{x^2}$

And ended up with a derivative of:

$\frac{-(2x^2+3)\sqrt{x}}{x^2}$

For the tangent line equation I got

$y=\frac{-55}{9}x-2$

by plugging the coordinates (9, -53) into y + 53 = (-55/9)(x - 9).

Some part here is incorrect, I am not sure which; any help would be much appreciated.

May 25th, 2012, 03:39 PM   #2
Senior Member, Joined: May 2011

Re: Tangent line equation

If you plug x = 9 into the derivative $f'(x)=\frac{-3(2x^{2}+1)}{2x^{3/2}}$, you should get $f'(9)=\frac{-163}{18}$. Also, $f(9)=-53$, as you correctly have. So, you now have x = 9, y = -53, m = -163/18. All set to find the line equation.

It may be a little easier to find the derivative by expanding f(x) into $3x^{-\frac{1}{2}}-2x^{\frac{3}{2}}$. Then, term-by-term, we get $f'(x)=-3\sqrt{x}-\frac{3}{2}x^{-\frac{3}{2}}$. Or you can write it as I have above. Just another equivalent form.

Is that a picture of you with a TI? Good to see you're into math. Count yourself as a member of a small minority of the overall population.

May 25th, 2012, 03:59 PM   #3
Global Moderator, Joined: Oct 2008, From: London, Ontario, Canada - The Forest City
Math Focus: Elementary mathematics and beyond

Re: Tangent line equation

Though galactus mentioned it (while I was posting), here is another approach:

$f(x)\,=\,\frac{\sqrt{x}(3\,-\,2x^2)}{x}\,=\,\frac{3\sqrt{x}}{x}\,-\,\frac{\sqrt{x}\,2x^2}{x}\,=\,3x^{-1/2}\,-\,2x^{3/2}$

Now it is a little easier to differentiate.

$f'(x)\,=\,-\frac32 x^{-3/2}\,-\,3x^{1/2},\quad f'(9)\,=\,-\frac{163}{18}$

May 25th, 2012, 10:32 PM   #4
Member, Joined: Jan 2012

Re: Tangent line equation

Ya, thanks guys. I haven't dealt with product/quotient rules in a while, so the different approaches are very helpful as I get back into things. And ya galactus, I've always enjoyed math, although the program I'm in is calling for less and less of it as I near the end, unfortunately.
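For completeness (the thread stops before writing out the final tangent line), here is a small SymPy sketch, my own addition rather than anything posted in the thread, that carries the corrected slope through to the line equation:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
f = sp.sqrt(x) * (3 - 2*x**2) / x

slope = sp.diff(f, x).subs(x, 9)        # -163/18, matching the replies above
y0 = f.subs(x, 9)                       # -53
tangent = sp.expand(y0 + slope * (x - 9))
print(slope, y0, tangent)               # slope -163/18, value -53, tangent y = -163*x/18 + 57/2
```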
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 11, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8363234996795654, "perplexity": 1599.0196232323208}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514576122.89/warc/CC-MAIN-20190923064347-20190923090347-00292.warc.gz"}
https://www.physicsforums.com/threads/can-you-take-the-determinant-of-a-mxn-matrix-where-m-n.638101/
# Homework Help: Can you take the determinant of an mxn matrix where m>n

1. Sep 23, 2012

### charlies1902

Number of rows > number of columns. Just out of curiosity, I've never seen this done before. I don't even know if it would be possible. Same with an mxn matrix where n > m+1. I don't think you would be able to find the determinant of this either.

2. Sep 23, 2012

### jbunniii

No, the operation is not defined.

3. Sep 23, 2012

### gabbagabbahey

If you look up the definition of matrix determinant in your textbook, you'll see the answer fairly quickly!
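To see jbunniii's point in practice, here is a tiny NumPy check (my own illustration, not from the thread): numpy.linalg.det accepts only square matrices and raises an error otherwise.

```python
import numpy as np

A = np.ones((3, 2))          # 3 rows, 2 columns: more rows than columns
try:
    np.linalg.det(A)
except np.linalg.LinAlgError as err:
    print("not defined:", err)   # the determinant requires a square matrix
```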
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9583618640899658, "perplexity": 1235.5084019532287}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156780.2/warc/CC-MAIN-20180921033529-20180921053929-00323.warc.gz"}
https://arxiv.org/abs/1709.05289
math.FA

# Title: Optimal approximation of piecewise smooth functions using deep ReLU neural networks

Abstract: We study the necessary and sufficient complexity of ReLU neural networks, in terms of depth and number of weights, required for approximating classifier functions in an $L^2$-sense. As a model, we consider the set $\mathcal{E}^\beta (\mathbb{R}^d)$ of possibly discontinuous piecewise $C^\beta$ functions $f : [-1/2, 1/2]^d \to \mathbb{R}$, where the different 'smooth regions' of $f$ are separated by $C^\beta$ hypersurfaces. For given dimension $d \geq 2$, regularity $\beta > 0$, and accuracy $\varepsilon > 0$, we construct ReLU neural networks that approximate functions from $\mathcal{E}^\beta(\mathbb{R}^d)$ up to an $L^2$ error of $\varepsilon$. The constructed networks have a fixed number of layers, depending only on $d$ and $\beta$, and they have $O(\varepsilon^{-2(d-1)/\beta})$ many nonzero weights, which we prove to be optimal. In addition to the optimality in terms of the number of weights, we show that in order to achieve this optimal approximation rate, one needs ReLU networks of a certain minimal depth. Precisely, for piecewise $C^\beta(\mathbb{R}^d)$ functions, this minimal depth is given, up to a multiplicative constant, by $\beta/d$. Up to a log factor, our constructed networks match this bound. This partly explains the benefits of depth for ReLU networks by showing that deep networks are necessary to achieve efficient approximation of (piecewise) smooth functions. Finally, we analyze approximation in high-dimensional spaces where the function $f$ to be approximated can be factorized into a smooth dimension reducing feature map $\tau$ and a classifier function $g$, defined on a low-dimensional feature space, as $f = g \circ \tau$. We show that in this case the approximation rate depends only on the dimension of the feature space and not the input dimension.

Comments: Added new results on approximation without the curse of dimension for factorizable classifier functions
Subjects: Functional Analysis (math.FA); Learning (cs.LG); Machine Learning (stat.ML)
MSC classes: 41A25, 41A10, 82C32, 41A46, 68T05, 94A34
Cite as: arXiv:1709.05289 [math.FA] (or arXiv:1709.05289v3 [math.FA] for this version)

## Submission history

From: Philipp Petersen
[v1] Fri, 15 Sep 2017 16:14:39 GMT (87kb)
[v2] Thu, 21 Sep 2017 14:42:27 GMT (88kb)
[v3] Fri, 5 Jan 2018 14:35:54 GMT (96kb)
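The objects in the abstract can be made concrete with a toy example of my own (it is not taken from the paper): a ReLU network with one hidden layer that represents the piecewise smooth function f(x) = |x| exactly, which illustrates what "depth" and "number of nonzero weights" refer to.

```python
import numpy as np

def relu(v):
    return np.maximum(v, 0.0)

def abs_via_relu(x):
    """|x| = ReLU(x) + ReLU(-x): one hidden layer with two units and
    four nonzero weights in total (two per layer), no biases."""
    W1 = np.array([[1.0], [-1.0]])   # hidden-layer weights
    W2 = np.array([[1.0, 1.0]])      # output-layer weights
    return (W2 @ relu(W1 @ np.atleast_2d(x)))[0]

xs = np.linspace(-2, 2, 5)
print(abs_via_relu(xs))              # [2. 1. 0. 1. 2.]
print(np.abs(xs))                    # identical
```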
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8692754507064819, "perplexity": 1047.7730108580413}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945942.19/warc/CC-MAIN-20180423110009-20180423130009-00385.warc.gz"}
http://mathhelpforum.com/differential-geometry/144440-prove-function-differentiable.html
# Thread: Prove function is differentiable

1. ## Prove function is differentiable

Let $f$ be continuous on the reals and let $F(x) = \int_{x-1}^{x+1}f(t)dt$. Show that $F$ is differentiable on the reals and compute $F'$.

So this should seem straightforward, but I'm having trouble actually formalizing it simply because the lower and upper limits of the integral are both variables. I thought of saying that $F(x) = \int_{0}^{x+1}f(t)dt - \int_{0}^{x-1}f(t)dt$ but I'm not sure how to continue from there via utilizing the FTC. Any help would be appreciated.

2. Originally Posted by Pinkk
Let $f$ be continuous on the reals and let $F(x) = \int_{x-1}^{x+1}f(t)dt$. Show that $F$ is differentiable on the reals and compute $F'$. So this should seem straightforward but I'm having trouble actually formalizing it simply because the lower and upper limits of the integral are both variables. I thought of saying that $F(x) = \int_{0}^{x+1}f(t)dt - \int_{0}^{x-1}f(t)dt$ but I'm not sure how to continue from there via utilizing the FTC. Any help would be appreciated.

The FTC may be overkill, but clearly both of the things you broke the integral into are differentiable by the FTC, and therefore so is their difference.

3. I guess I'm just hung up on notation because the upper limits of both integrals can very well be less than zero and the theorem applies to a function of the form $F(x)=\int_{a}^{x}f(t)dt$ for $x\in [a,b]$. Would a proof go like this, then?

Observe that $F(x)=\int_{0}^{x+1}f(t)dt - \int_{0}^{x-1}f(t)dt$. Whether $x+1,x-1$ are positive or negative, since $f$ is continuous on the reals, each corresponding integral is differentiable and so $F$ is differentiable, and since the derivative of a sum is the sum of the derivatives, $F'(x) = f(x+1) - f(x-1)$. Q.E.D.

And this does seem like overkill but we just learned the FTC so I want to make sure I don't assume too much. What would be the easier method of proof?

4. Originally Posted by Pinkk
I guess I'm just hung up on notation because the upper limits of both integrals can very well be less than zero and the theorem applies to a function of the form $F(x)=\int_{a}^{x}f(t)dt$ for $x\in [a,b]$.

$a,b$ can be negative......

Originally Posted by Pinkk
Observe that $F(x)=\int_{0}^{x+1}f(t)dt - \int_{0}^{x-1}f(t)dt$. Whether $x+1,x-1$ are positive or negative, since $f$ is continuous on the reals, each corresponding integral is differentiable and so $F$ is differentiable, and since the derivative of a sum is the sum of the derivatives, $F'(x) = f(x+1) - f(x-1)$. Q.E.D.

Correctomundo.

Originally Posted by Pinkk
And this does seem like overkill but we just learned the FTC so I want to make sure I don't assume too much. What would be the easier method of proof?

Not easier. There is just a consensus that one should work as hard as one can in math. If you can do something without invoking an advanced theorem, one should. That said, it's doable but much more work (and would be in effect mimicking the actual proof of the FTC), and so just stick with the above.

5. Yeah, I was trying to show that for any $a\in \mathbb{R},\lim_{x\to a} \frac{F(x)-F(a)}{x-a}=f(a+1) - f(a-1)$ but I got stuck.
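A quick symbolic check of the conclusion (my own addition, using SymPy, not anything from the thread): for a sample continuous f, differentiating F(x) = ∫_{x-1}^{x+1} f(t) dt indeed gives f(x+1) - f(x-1).

```python
import sympy as sp

t, x = sp.symbols('t x', real=True)
f = sp.cos(t) * sp.exp(t)                  # any continuous sample function

F = sp.integrate(f, (t, x - 1, x + 1))     # F(x) = integral of f over [x-1, x+1]
lhs = sp.simplify(sp.diff(F, x))
rhs = sp.simplify(f.subs(t, x + 1) - f.subs(t, x - 1))
print(sp.simplify(lhs - rhs) == 0)         # True
```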
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 26, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9950293302536011, "perplexity": 142.675925786198}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172156.69/warc/CC-MAIN-20170219104612-00257-ip-10-171-10-108.ec2.internal.warc.gz"}
https://iwaponline.com/jh/article/18/2/371/33/ANFIS-and-ANN-models-for-the-estimation-of-wind
In this study, the adaptive network-based fuzzy inference system (ANFIS) and artificial neural network (ANN) were employed to estimate the wind- and wave-induced coastal current velocities. The collected data at the Joeutsu-Ogata coast of the Japan Sea were used to develop the models. In the models, significant wave height, wave period, wind direction, water depth, incident wave angle, and wind speed were considered as the input variables; and longshore and cross-shore current velocities as the output variables. The comparison of the models showed that the ANN model outperforms the ANFIS model. In addition, evaluation of the models versus the multiple linear regression and multiple nonlinear regression with power functions models indicated their acceptable accuracy. A sensitivity test proved the stronger effects of wind speed and wind direction on longshore current velocities. In addition, this test showed great effects of significant wave height on cross-shore current velocities. It was concluded that the angle of incident wave, water depth, and significant wave period had weaker influences on the velocity of coastal currents.

## NOMENCLATURE

• Ai, Bi, Ci, Di, Ei fuzzy sets
• standard deviations of the Gaussian membership functions
• mean values of the Gaussian membership functions
• E mean square error
• significant wave height
• hd water depth
• number of training data
• number of testing data
• oi, pi, qi, ri, si, ti the linear consequent parameters
• firing strength of rule i
• degree of membership
• observed value
• P estimated value
• Vlongshore velocity of longshore current
• Vcross−shore velocity of cross-shore current
• nonlinear antecedent parameter
• learning rate
• number of training epochs
• Po potential value
• significant wave period
• W wind speed
• wind direction
• incident wave angle
• distance between the candidate clusters
• quash factor
• acceptance ratio
• rejection ratio
• value of data point
• normalized inputs
• original inputs
• minimum value of input data
• maximum value of input data

## INTRODUCTION

Estimation of wave- and wind-induced coastal current velocities is one of the most important issues in the design of coastal and offshore structures. Coastal currents are usually divided into longshore and cross-shore currents. Field measurements by Yamashita et al. (1998) for the Joeutsu-Ogata coast of the Japan Sea showed that the longshore currents are mostly produced by winds. They also showed that the cross-shore currents, i.e., reverse seaward-flowing rip currents, are mainly generated by waves. Longshore currents are considerably dominant in the extended area of coasts, especially in offshore zones. By contrast, cross-shore currents are significantly generated by breaking waves in nearshore zones (Yasuda et al. 1996; Yamashita et al. 1998). In other words, strong offshore-going currents emerge with high waves inside surf zones. Outside surf zones, the generation of coastal currents is a function of wind conditions (Kato & Yamashita 2003). Thus far, numerous empirical formulas and numerical models have been presented to estimate the coastal current velocities. Empirical formulas are usually based on wave characteristics, water depth, and incident wave angle, whereas wind characteristics such as wind speed and wind direction are neglected in these formulas (Horikawa 1978). In spite of their accuracy, numerical models are not economical for the basic design stage.
On the one hand, the execution time significantly increases considering the interaction between wind and wave. On the other hand, the models need many input data such as friction coefficient, high-resolution bathymetry data of study areas, etc. (Kato & Yamashita 2000, 2003). Data-driven approaches such as the adaptive network-based fuzzy inference system (ANFIS) and the artificial neural network (ANN) can be used to deal with the drawbacks of empirical formulas and numerical models. The behavior of a complex phenomenon can readily be investigated by these approaches. In these models, a black box containing some reasoning relations finds interrelationships between the inputs and output variables representing the physics of the phenomenon. To utilize them properly, they should be trained by a series of training and validation data sets. As well, their efficiency is evaluated by testing data set not used in the training process. ANN-based models have been used to predict many complex nonlinear systems in coastal engineering fields. For instance, Ruchi et al. (2005) employed an ANN model to estimate the significant wave height at coastal areas from deep water wave heights. The model involved a common feedforward network trained by the backpropagation algorithms (FFBP). The obtained results showed a higher accuracy of the FFBP network than the RBF and ANFIS models. Fuzzy inference system (FIS)-based models have been widely used in water engineering for the following: modeling of rainfall–runoff (Sen & Altunkaynak 2004), capturing scour uncertainty around bridge piers (Johnson & Ayyub 1996), predicting scour depth at abutments of armored beds (Muzzammil 2010; Muzzammil & Alam 2011), optimizing water allocation system (Kindler 1992), controlling reservoir operation (Shrestha et al. 1996), modeling of water seepage in an unsaturated zone (Bardossy & Disse 1993), analyzing regional drought (Pongracz et al. 1999), modeling of time series (Altunkaynak et al. 2004a, 2004b), modeling of equilibrium scour at the downstream of a vertical gate or around pipelines (Uyumaz et al. 2006; Zanganeh et al. 2011), estimating pile group scour (Bateni & Jeng 2007, Bateni et al. 2007), predicting stream flow (Ozger 2009), finding scour location at the downstream of a spillway (Azmathullah et al. 2009), and estimating critical velocity for slurry transport in pipelines (Azamathulla & Ahmad 2013). Most of the studies indicate the FIS's superiority to regression approaches. In coastal engineering, Kazeminezhad et al. (2005) employed an ANFIS model to predict wave parameters in the fetch-limited condition. Their results showed superiority of the ANFIS to the so-called Coastal Engineering Manual method at Lake Ontario. Ozger & Sen (2007) applied fuzzy logic to find relationships among the wind speed and previous and current wave characteristics in the Pacific Ocean. Zanganeh et al. (2009) developed a genetic algorithm–adaptive network-based fuzzy inference system model (GA-ANFIS) to predict wave parameters at Lake Michigan for the duration-limited condition. Later, Mahjoobi et al. (2008) employed the ANN and ANFIS models for wave hindcasting at Lake Ontario. Bakhtyar et al. (2008a, 2008b) applied the ANFIS for prediction of wave run-up and longshore sediment transport in swash zones. Recently, Shiri et al. (2011) used the ANFIS model to predict the sea level fluctuations at Hillarys Boat Harbour in Perth, Western Australia. 
Despite the apparent effects of winds and waves on coastal currents velocities, few studies have been conducted on this issue so far. The aim of the present study is to apply the ANFIS and ANN models to estimate the wind- and wave-induced current velocities. Moreover, the effects of numerous wind and wave variables on the generation of coastal currents are investigated. These models are evaluated using the field observation data of the Joeutsu-Ogata coast of the Japan Sea which is a recognized place for the interaction between wind and wave. Finally, the accuracy of the models is compared with other data-driven approaches, such as the multiple linear regression (MLR) and multiple nonlinear regression with power function (MNLRP) models. The present paper is set out in seven main sections. Following this section, is a section outlining the study area and its hydrodynamic characteristics. The ANFIS, ANN, MLR and MNLRP models are introduced next. Discussion about the prerequisites to develop the data-driven models follows, and then a detailed discussion on the developed models. This is followed by a section evaluating the developed data-driven models to estimate the current velocities, then finally, concluding remarks are presented. ## BACKGROUND OF THE CASE STUDY ### The study area The collected data sets of the Japan Sea are utilized to develop the ANFIS and ANN models to estimate coastal current velocities. The Joeutsu-Ogata coast is very famous for its annual severe erosion resulting from coastal currents. The study area of 30 km2 is located between Naoetsu harbors and fishery dock of Kakizaki. Field studies to measure coastal currents, waves and winds were conducted by the Research Centre for Disaster Environment, Disaster Prevention Research Institute at Kyoto University. Figure 1 schematically shows the basic plan of the field study along with recording data stations during 1998–1999. It is clear from the figure that field measurements have been conducted at 13 stations. Stations No. 2, 4, 5, 6, 7, 8, 12, and 13 are placed at the nearshore region whereas stations No. 1, 3, 9, 10, and 11 are located at the offshore zone. At each station about 1,000 data points were measured. Figure 1 Field observation plan at the Joeutsu-Ogata coast in the Japan Sea (Kato & Yamashita 2003). Figure 1 Field observation plan at the Joeutsu-Ogata coast in the Japan Sea (Kato & Yamashita 2003). The instruments used in the field study include: (1) high frequency acoustic Doppler current profiler (ADCP, 1,200 Hz) and electro magnetic current meters installed at the sea bottom to measure the current profiles; (2) Wave Hunter was also installed at the same place to measure the incident wave properties; and (3) three-component ultrasonic anemometer installed at the top as TOP to measure local wind characteristics. ### Hydrodynamic characteristics of the study area According to Yamashita et al. (1998), the hydrodynamic pattern of the study area during a winter monsoon contains a wide range of high-speed winds (exceeding 10 m/s) and high waves. These circumstances lead to strong coastal currents in the longshore and cross-shore directions of the coast. Figure 2 illustrates the characteristics of observed coastal currents, waves and local winds field during the winter storm from January to the end of February. As shown in the figure during the storm events (circled periods), strong coastal currents are produced by the stormy winds and high wind-driven waves. 
Inside the nearshore region (5–8 m depth), strong offshore-going currents have a high correlation with wave conditions. However, longshore coastal currents are mostly affected by stormy winds outside the nearshore region (15–20 m depth). Figure 2 Wave and current characteristics observed at the Joeutsu-Ogata coast in the Japan Sea at a winter monsoon. Figure 2 Wave and current characteristics observed at the Joeutsu-Ogata coast in the Japan Sea at a winter monsoon. Tidal coastal currents also occur at the Joeutsu-Ogata coast. These currents take place in conjunction with the rise and fall of the tide. The tidal current velocities are generally modest and thought to be less important than those caused by winds and waves in this area. Figure 2 shows the wave and current characteristics observed at the coast during a winter monsoon (Kato & Yamashita 2003). The sketched circles in the figure indicate the period of time when winds and waves are dominant. As seen, the resultant coastal currents are mostly imposed by storms, and other events, such as tides and general circulation, are negligible. ## STRUCTURE OF EMPLOYED MODELS ### ANFIS structure FIS simulates an ill-defined event generating some linguistic fuzzy IF-THEN rules (Jang 1993). This is the main advantage of the FIS compared to classical learning systems, e.g., ANN models. Several types of FISs proposed in the literature are different in the defuzzification of fuzzy IF-THEN rules consequent part. In this paper, the FIS model introduced by Takagi & Sugeno (1989) (TS) is used to estimate coastal current velocities. In this kind of FIS, there is no systematic way to tune fuzzy IF-THEN rule parameters including the antecedent and consequent parameters. An efficient way to achieve this purpose is to employ an ANN model and then the combined model is termed as an ANFIS model. Consequently, the ANFIS is functionally a TS FIS whose parameters are tuned by a training algorithm. The common structure of the ANFIS model is depicted in Figure 3. In this figure, the square nodes are fixed nodes, whereas the circular nodes represent adaptive nodes changing during the so-called training process. The ANFIS model uses hybrid learning algorithm to tune fuzzy IF-THEN rule parameters. As shown in Figure 3, in this model the antecedent parameters of fuzzy IF-THEN rules are tuned using a steepest descent (SD) algorithm in the backward path. Note that the antecedent parameters are associated with membership functions of input variables. Linear parameters in the consequent part of fuzzy IF-THEN rules are set using a least squares error (LSE) method in the forward path. The antecedent parameters of fuzzy IF-THEN rules are optimized using the SD method by evaluating the derivation of mean square error (MSE) as follows: 1 where is the antecedent parameter, is the learning rate, and E is the MSE defined as follows: 2 where is the th network output at a given output node, is the th target output, is the number of training data. The learning rate during the process is updated as follows: 3 where is the number of training epochs. Figure 3 The ANFIS structure (Jang 1993). Figure 3 The ANFIS structure (Jang 1993). #### Subtractive clustering method Referring to experts is one of the most common methods to extract fuzzy IF-THEN rules in the ANFIS model. This may not be applicable when phenomena have not been experienced yet. A typical solution is benefiting from the clusters obtained by the clustering techniques, such as subtractive clustering method (Chiu 1994). 
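Before moving to the clustering step, the steepest-descent update of Equations (1)-(3) can be sketched in a few lines. The toy data, the single trainable parameter, and the learning rate in the following Python snippet are illustrative assumptions and are not values from the paper; the snippet only shows how one Gaussian membership-function parameter would be adjusted to reduce the mean square error.

```python
import numpy as np

# Toy data: one input x, one target output (illustrative only).
x = np.linspace(0.0, 1.0, 50)
target = np.exp(-((x - 0.4) ** 2) / (2 * 0.15 ** 2))

# Model: a single Gaussian membership function with fixed mean and a
# trainable standard deviation sigma (one "antecedent parameter").
mean = 0.4
sigma = 0.30          # initial guess
eta = 0.05            # learning rate (assumed)

def predict(sig):
    return np.exp(-((x - mean) ** 2) / (2 * sig ** 2))

for epoch in range(200):
    out = predict(sigma)
    err = out - target
    E = np.mean(err ** 2)                      # Equation (2): mean square error
    # dE/dsigma by the chain rule for the Gaussian membership function
    dout_dsigma = out * ((x - mean) ** 2) / sigma ** 3
    grad = np.mean(2 * err * dout_dsigma)
    sigma -= eta * grad                        # Equation (1): steepest-descent step

print(f"trained sigma = {sigma:.3f}, final MSE = {E:.2e}")
```

In the full ANFIS the consequent parameters are fitted by least squares in the forward pass; only the antecedent (membership-function) parameters are updated by the gradient step illustrated here.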
In the subtractive clustering method, the clusters representing the data points are selected based on data points potential values. Potential values for a given data point with D (i= 1,2,…. D) dimensions are calculated as follows: 4 where is the potential value of the kth data point, is the clustering radius of the ith dimension of a data point, is the value of the ith dimension of the kth data point, is the value of the ith dimension of the jth data point, and K is the number of data points. In the method, the point with the highest potential value is directly selected as the first cluster center and other clusters are chosen after reducing each data point potential value. This process continues until meeting zero potential value for each data point. The reduced potential value for each data point () is calculated by the following equation: 5 where is the potential value of the first chosen cluster center, is the quash factor, and is the value of ith dimension of the first cluster. New cluster centers for each step are chosen on the basis of the following two criteria: 1. A data point with a relative potential value greater than the acceptance threshold () () is directly accepted as a cluster center. 2. The acceptance level of a data point with relative potential values between the rejection ratio () and acceptance ratio () () depends on fulfilling the following criterion: 6 where is the nearest distance between the candidate cluster center and all previously chosen cluster centers. To extract the fuzzy IF-THEN rules from n clusters, the Gaussian membership function (represented by the mean and standard deviation) is considered. Then, the ith dimension of the mth (m=1, …, n) cluster center is chosen as the mean value of the mth membership function of the ith dimension. The deviation parameters () are estimated as follows: 7 where is the radius associated with the ith dimension of data points, and is the ith dimension of data points. It should be noted that the fuzzy IF-THEN rules in FIS and ANFIS models are extracted in order to have their lowest similarities. The number of rules and linguistic variables for each input variable is equal to the number of clusters. To meet minimum similarities in construction of fuzzy IF-THEN rules, only linguistic variables at the same levels are chosen (MATLAB GENFIS 2 command). For example, ‘A1’ as the first linguistic variable of input variable A makes a rule with the first linguistic variable (‘B1’) of input variable B. ### ANN model The ANN is a standard method to evaluate the accuracy of the ANFIS model. Accordingly, the ANFIS is compared with a FFBP (multi-layered perceptron) ANN. As shown in Figure 4, the FFBP network employed in this study is a three-layer network including an input layer, a hidden layer, and an output layer. In this network, the first term ‘feedforward’ describes how this neural network processes and recalls patterns while neurons are connected forward. Each layer of the neural network is connected only to the next layer (for example, from the input to the hidden layer). In addition, the term ‘backpropagation’ describes how this type of neural network is trained. Backpropagation is a form of supervised training in which the weights of various layers are adjusted using the output estimated by the model. The backpropagation and feedforward algorithms are often used together as the FFBP network. Figure 4 The architecture of backpropagation neural network. Figure 4 The architecture of backpropagation neural network. 
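A rough Python sketch of the subtractive-clustering selection described above (Equations (4)-(6)) is given below. The kernel constants follow Chiu (1994); the radius, quash factor, and acceptance/rejection ratios are illustrative assumptions, and the stopping logic is simplified compared with a full implementation such as MATLAB's GENFIS 2 routine.

```python
import numpy as np

def subtractive_clustering(X, radius=0.5, quash=1.25,
                           accept_ratio=0.5, reject_ratio=0.15):
    """Return cluster centres chosen by Chiu-style subtractive clustering.

    X is assumed to be normalized to [0, 1] in every dimension.
    """
    alpha = 4.0 / radius ** 2                 # constant in Equation (4)
    beta = 4.0 / (quash * radius) ** 2        # constant in Equation (5)

    # Equation (4): potential of every data point
    sq_dist = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    P = np.exp(-alpha * sq_dist).sum(axis=1)

    centres = []
    P1 = P.max()                              # potential of the first chosen centre
    while True:
        k = int(P.argmax())
        ratio = P[k] / P1
        if ratio < reject_ratio:              # too weak a candidate: stop
            break
        if ratio < accept_ratio:
            # grey zone: accept only if far enough from existing centres
            d_min = min(np.linalg.norm(X[k] - c) for c in centres)
            if d_min / radius + ratio < 1.0:  # simplified form of Equation (6)
                P[k] = 0.0                    # discard this candidate, try the next
                continue
        centres.append(X[k].copy())
        # Equation (5): subtract the influence of the newly chosen centre
        P -= P[k] * np.exp(-beta * ((X - X[k]) ** 2).sum(axis=1))
        P = np.clip(P, 0.0, None)

    return np.array(centres)

# Illustrative use on synthetic, normalized data with two obvious clusters
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.2, 0.05, (40, 2)), rng.normal(0.8, 0.05, (40, 2))])
print(subtractive_clustering(np.clip(X, 0, 1)))
```

Each returned centre then supplies the mean of one Gaussian membership function per input dimension, so the number of fuzzy IF-THEN rules equals the number of centres, as described above.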
In the FFBP used to estimate coastal current velocities, the mathematical equation for each layer can be written as follows: 8 where is the output of neuron o, is weight vector, is the input vector for neuron i ( = , …, ), is the bias for neuron o, and f is the network transfer function. In this study, the tangent sigmoid is selected as the network transfer function to scale input and output variables between 0 and 1. This function is expressed as follows (Haykin 2009): 9 ### MLR, MNLRP models In order to verify ANFIS and ANN models, these models are compared with the MLR and MNLRP approaches. The following subsections outline these two competent models. #### MLR model In this approach, a linear relationship is fitted to input and output variables by using the training data set as follows: 10 where Y is the output variable, , , …., are constant parameters for the linear relation, and , …, are input variables. #### MNLRP model Unlike traditional MLR, which is restricted to linear models, the MNLRP is able to estimate an event by fitting a nonlinear relationship to input and output variables. The form of the nonlinear relation can be as follows: 11 where , , …, are constant parameters for the nonlinear relation. The MNLRP relation can be linearized by taking a log from Equation (11) which gives: 12 Then, the linear regression is used to tune the constant parameters. Negative current velocities in both cross-shore and longshore directions make the log function undefined. To deal with the problem, data points are scaled between 0.05 and 0.95 by the following expression: 13 where and are normalized and original variables, respectively, and are the minimum and maximum of a variable, respectively. ## MODEL DEVELOPMENT PREREQUISITES ### Selection of input variables Based on the hydrodynamic characteristics of the study area and to evaluate the effects of different wind climate on coastal currents, the wind climate is differentiated: the stormy condition is one with wind speeds greater than 10 m/s; otherwise it is the windy condition. It should be noted that the condition in which there is no division between the wind climates is termed ‘general condition’. Appropriate selection of input variables is an important task in developing any data-driven model. According to the literature, the input variables involved in coastal current velocities' estimation are listed as below (Horikawa 1978): (1) for the velocity of longshore direction (Vlongshore): (2) for the velocity of cross-shore direction (Vcross−shore) where and are significant wave height and significant wave period, respectively; is water depth, W is wind speed, is wind direction with the north, and is the incident wave front angle with the Ogata coastline. Application of any data-driven approach to predict an event is related to its data sets. In this paper, 9,040 data points collected at the Ogata coast were chosen to identify the relationship between different input variables and the longshore and cross-shore current velocities. Of them, 5,000 data points were chosen randomly as the training data, 700 data points were used as the validation data points, and the remaining 3,340 data points were used as the testing data at the general condition. Out of 9,040 data points, 2,610 data points were related to the stormy condition while, the remaining 6,430 data points were for the windy condition. 
1,500 of the 2,610 stormy-condition data points were selected randomly as the training data, 200 data points were chosen as the validation data, and the remaining 910 data points were used as the testing data. For the windy condition, 3,500 of the 6,430 data points were selected randomly as the training data and 500 data points were chosen as the validation data; the remaining 2,430 data points formed the testing data set. Table 1 outlines the statistical characteristics of the data sets used to develop the estimator models. In this table, the maximum, minimum, average, and range of the training, validation, and testing data are reported for each input variable.

Table 1 Statistical characteristics of data sets used for developing models to estimate coastal current velocities

Training data (numbers = 5,000)
Avg. 7.112 1.53 12.4 10.80 −0.321 0.191 0.011 0.0207
Min. 0.40 0.17 4.98 0.24 −1.55 −0.58 −0.142
Max. 11.2 6.06 30.3 17.2 1.56 1.46 0.609 0.797
Range 10.8 5.89 25.32 16.96 3.11 1.46 1.189 0.939

Validation data (numbers = 700)
Avg. 6.88 1.832 7.92 6.73 0.220 0.262 0.0421 0.0732
Min. 3.8 0.27 6.20 0.285 −1.497 0.0 −0.288 −0.079
Max. 10.5 4.18 10.00 16.15 1.560 1.239 0.460 0.532
Range 6.70 3.91 3.80 15.87 3.061 1.239 0.748 0.611

Testing data (numbers = 3,340)
Avg. 7.16 1.721 8.57 8.431 −0.375 0.272 0.0922 0.0841
Min. 2.90 0.18 4.98 0.24 −1.560 0.00 −0.35 −0.142
Max. 11.1 6.06 15.80 16.23 1.559 1.01 0.6 0.797
Range 8.20 5.88 10.82 15.99 3.119 1.01 0.95 0.939

In addition to their effects on coastal current velocities, another consideration in the selection of the input variables is their independence. This was investigated here using a correlation matrix (see Table 2). As shown in the table, the correlations among the input variables are low enough to treat them as independent.

Table 2 Correlation matrix for estimating velocities of coastal currents

0.481 0.0204 0.065 0.0908 0.1707 0.0007 0.033 0.111 0.0029 0.008 0.1197 0.1453 0.0038 0.0043 0.1080

In order to apply the ANFIS and ANN models, the data should be normalized so that the input and output variables lie between 0 and 1. Accordingly, all the variables were normalized as follows:

x_norm = (x − x_min)/(x_max − x_min)    (14)

where x_norm and x are the normalized and original variables, respectively, while x_min and x_max are the minimum and maximum values of the data points, respectively.

### Criteria for evaluation of the models

In this study, the bias, root mean square error (RMSE), and correlation coefficient (R) are used to evaluate the performance of the estimator models.
The bias evaluates whether a model overestimates or underestimates a desired variable by the following equation: 15 where is the th observed value, is the th estimated value, and Ntest is the number of testing data points. The RMSE indicates how estimated data points are scattered around the line y = x. This criterion is estimated by the following equation: 16 where is the th observed value, is the th estimated value, and Ntest is the number of testing data points. The correlation coefficient between the observed and estimated values is another criterion used to evaluate the performance of the models. This criterion is calculated by the following equation: 17 where is the output mean, is the th observed value, is the th estimated value, and Ntest is the number of testing data points. A criterion like the correlation coefficient is not valuable unless it is properly interpreted. As a rule of thumb, correlation coefficients less than 0.35 are generally considered as weak correlation. Also, correlation coefficients between 0.36 and 0.67 show modest or moderate correlation; whereas the R values between 0.68 and 0.9 are high correlation. If the correlation coefficient reaches up to 0.9 or more, that means a very high correlation (Weber & Lamb 1970; Kuma 1984). However, the higher values of correlation coefficient do not merely guarantee the performance of the estimator models. ## MODELS' DEVELOPMENT In this section, the ANFIS, ANN, MLR, and MNLRP models are developed to estimate coastal current velocities. For our experiments, we used the chosen training, validation, and testing data sets in the section ‘Selection of input variables’. As mentioned before, these three subsets have been selected randomly to have models with acceptable generalization capability. ### Development of ANFIS models In this sub-section, ANFIS models were developed to estimate the coastal current velocities. In order to develop the models, fuzzy IF-THEN rules are needed. The following expressions outline a sample of Sugeno-type fuzzy IF-THEN rules used to estimate the velocity of longshore currents at the general condition. Rule 1: IF is A1 & is B1 & is C1W is D1 & is E1 & is F1 THEN 18 Rule 2: IF is A2 & is B2 & is C2W is D2 & is E2 & is F2 THEN 19 Rule i: IF is Ai & is Bi & is CiW is Di & is Ei & is Fi THEN 20 where Ai, Bi, Ci, Di, Ei, and Fi are fuzzy sets related to significant wave period, significant wave height, water depth, wind speed, wind direction, and incident wave angle, respectively. , , , , , and are consequent parameters of the fuzzy rules. The ANFIS models were developed to estimate longshore and cross-shore velocities for the general, stormy, and windy conditions. The training processes for the general condition are shown in Figure 5(a) and 5(b) for both velocities. Small error noises along with the decreasing trend of RMSEs in the figures ensure a fair selection of input variables. Figures 6 and 7 also show initial and improved membership functions by the ANFIS model for each input variable. As shown in the figures, all membership functions have changed during the process. This confirms the effectiveness of every selected variable on the phenomenon. However, a sensitivity analysis can clarify this effectiveness quantitatively. Figure 5 The RMSE estimated by the ANFIS model versus epoch numbers in the training process at the general condition: (a) in estimating the velocity of longshore currents; (b) in estimating the velocity of cross-shore currents. 
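The three evaluation criteria of Equations (15)-(17) translate directly into code. The short sketch below assumes NumPy arrays of observed and estimated velocities; the sign convention chosen for the bias (estimated minus observed) is an assumption, since the printed equation is not reproduced above.

```python
import numpy as np

def bias(observed, estimated):
    """Mean signed error; positive values indicate overestimation (assumed sign)."""
    return np.mean(estimated - observed)

def rmse(observed, estimated):
    """Root mean square error, Equation (16)."""
    return np.sqrt(np.mean((observed - estimated) ** 2))

def corr(observed, estimated):
    """Pearson correlation coefficient R between observed and estimated values."""
    o = observed - observed.mean()
    p = estimated - estimated.mean()
    return np.sum(o * p) / np.sqrt(np.sum(o ** 2) * np.sum(p ** 2))

# Illustrative check on a few synthetic values
obs = np.array([0.10, 0.25, -0.05, 0.40, 0.15])
est = np.array([0.12, 0.22, -0.01, 0.35, 0.18])
print(bias(obs, est), rmse(obs, est), corr(obs, est))
```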
Figure 5 The RMSE estimated by the ANFIS model versus epoch numbers in the training process at the general condition: (a) in estimating the velocity of longshore currents; (b) in estimating the velocity of cross-shore currents. Figure 6 Initial and improved membership functions by the ANFIS model for estimating the velocity of longshore currents at the general condition. Figure 6 Initial and improved membership functions by the ANFIS model for estimating the velocity of longshore currents at the general condition. Figure 7 Initial and improved membership functions by the ANFIS model for estimating the velocity of cross-shore currents at the general condition. Figure 7 Initial and improved membership functions by the ANFIS model for estimating the velocity of cross-shore currents at the general condition. The RMSE associated with the validation and training data are reported in Tables 35 for all conditions. Clustering parameters and epochs in which validation and training errors are minimized simultaneously are also reported in the tables. As is apparent from the tables, all ANFIS models compared to initial FIS models performed well enough. For instance, the RMSE obtained by the FIS model for estimating the velocity of longshore currents at the general condition is 0.0932 while it is equal to 0.0895 for the ANFIS model. This shows the efficiency of the training process to tune fuzzy antecedent and consequent parameters. In this model, the appropriate number of fuzzy IF-THEN rules is 4 in accordance with the following clustering parameters: Table 3 The RMSE of training and validation data sets estimated by the FIS and ANFIS models to estimate coastal current velocities at the general condition Model type FIS ANFIS Longshore direction Training error (m/s) 0.0932 0.0895 Validation error (m/s) 0.0951 0.0893 Number of rules Desirable epoch 125 Clustering parameters  = [0.56, 0.6, 0.3, 0.6, 0.6, 0.6, 0.6, 2] Cross-shore direction Training error (m/s) 0.0676 0.0575 Validation error (m/s) 0.0563 0.0545 Number of rules Desirable epoch 104 Clustering parameters = [0.56, 0.56, 0.3, 0.6, 0.6, 0.6, 0. 6, 2] Model type FIS ANFIS Longshore direction Training error (m/s) 0.0932 0.0895 Validation error (m/s) 0.0951 0.0893 Number of rules Desirable epoch 125 Clustering parameters  = [0.56, 0.6, 0.3, 0.6, 0.6, 0.6, 0.6, 2] Cross-shore direction Training error (m/s) 0.0676 0.0575 Validation error (m/s) 0.0563 0.0545 Number of rules Desirable epoch 104 Clustering parameters = [0.56, 0.56, 0.3, 0.6, 0.6, 0.6, 0. 6, 2] Table 4 The RMSE of training and validation data sets estimated by the FIS and ANFIS models to estimate coastal current velocities at the stormy condition Model type FIS ANFIS Longshore direction Training error (m/s) 0.1034 0.0990 Validation error (m/s) 0.0970 0.0935 Number of rules Desirable epoch 19 Clustering parameters = [0.4, 0.6, 0.5, 0.5, 0.5, 0.6, 0.6, 2] Cross-shore direction Training error (m/s) 0.1015 0.0763 Validation error (m/s) 0.0769 0.0653 Number of rules Desirable epoch 32 Clustering parameters = [0.56, 0.56, 0.56, 0.6, 0.6, 0.6, 0. 6, 2] Model type FIS ANFIS Longshore direction Training error (m/s) 0.1034 0.0990 Validation error (m/s) 0.0970 0.0935 Number of rules Desirable epoch 19 Clustering parameters = [0.4, 0.6, 0.5, 0.5, 0.5, 0.6, 0.6, 2] Cross-shore direction Training error (m/s) 0.1015 0.0763 Validation error (m/s) 0.0769 0.0653 Number of rules Desirable epoch 32 Clustering parameters = [0.56, 0.56, 0.56, 0.6, 0.6, 0.6, 0. 
6, 2] Table 5 The RMSE of training and validation data sets estimated by the FIS and ANFIS models to estimate coastal current velocities at the windy condition Model type FIS ANFIS Longshore direction Training error (m/s) 0.1071 0.1012 Validation error (m/s) 0.1028 0.0989 Number of rules Desirable epoch 19 Clustering parameters = [0.3, 0.5, 0.36, 0.56, 0.5, 0.5, 0.56, 2] Cross-shore direction Training error (m/s) 0.0588 0.0497 Validation error (m/s) 0.0534 0.0500 Number of rules Desirable epoch 29 Clustering parameters = [0.3, 0.3, 0.5, 0.56, 0.36, 0.56, 0.56, 2] Model type FIS ANFIS Longshore direction Training error (m/s) 0.1071 0.1012 Validation error (m/s) 0.1028 0.0989 Number of rules Desirable epoch 19 Clustering parameters = [0.3, 0.5, 0.36, 0.56, 0.5, 0.5, 0.56, 2] Cross-shore direction Training error (m/s) 0.0588 0.0497 Validation error (m/s) 0.0534 0.0500 Number of rules Desirable epoch 29 Clustering parameters = [0.3, 0.3, 0.5, 0.56, 0.36, 0.56, 0.56, 2] = [0.56, 0.6, 0.3, 0.6, 0.6, 0.6, 0.6, 2] According to the radii and quash factor, the ANFIS model has simultaneous minimum of the training and validation errors at epoch 125. As reported in Table 3, the error of developed ANFIS models to estimate the velocity of cross-shore currents is lower than that of the models developed for the velocity of longshore currents. The desirable epoch number in which the training and validation errors are simultaneously minimized is 104. At the stormy and windy conditions, the same situations were experienced. The error of ANFIS models to estimate the velocity of cross-shore currents is lower than that of the models developed for the velocity of longshore currents. As reported in Table 4, at the stormy condition the desired epoch number is 19 whereas the number for cross-shore currents estimator model is 32. In the windy condition, as seen from Table 5, the desired epoch number for the longshore estimator model is 19 versus 29 for the cross-shore currents estimator model. As mentioned above, a sensitivity analysis against effective variables can reveal the physical behavior of the phenomenon more apparently. To achieve this, in this section a sensitivity test is provided to determine the relative influence of each input variable on coastal current velocities. In the process, the influence of each variable on the models' RMSE is investigated by eliminating the variable from the selected input variables. These results are shown in Figure 8 for the general condition. It can be concluded from Figure 8(a) that the wind direction () and wind speed (W) had stronger effects on the velocity of longshore currents. Furthermore, incident wave angle (), waves characteristics (,), and water depth () have lower influences on the velocity of longshore coastal currents. In addition, as shown in Figure 8(b), for the velocity of cross-shore currents, significant wave height () had stronger effects on the event whereas the significant wave period (), wind speed (W), water depth (), wind direction (), and incident wave angle () had lower effects. These results indicate that although coastal currents in both longshore and cross-shore directions were affected by waves, the effect of local wind speed might not be ignored. A hint to prove the ability of the developed ANFIS models for capturing the complexity of the phenomenon is their higher sensitivity to the wind direction than to the wave. This finding is in accordance with Yamashita et al. (1998). 
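The sensitivity test described above, in which one input variable is removed at a time and the change in RMSE is examined, can be organized as a small loop. In the sketch below the estimator (a scikit-learn multilayer perceptron), the synthetic data, and the variable labels are stand-ins for the models and field data of the paper; they only illustrate the procedure.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def sensitivity_by_elimination(X_train, y_train, X_test, y_test, names):
    """Return RMSE with all inputs and with each input removed in turn."""
    def fit_and_score(cols):
        model = MLPRegressor(hidden_layer_sizes=(10,), activation='tanh',
                             solver='lbfgs', max_iter=2000, random_state=0)
        model.fit(X_train[:, cols], y_train)
        return rmse(y_test, model.predict(X_test[:, cols]))

    all_cols = list(range(X_train.shape[1]))
    results = {'all inputs': fit_and_score(all_cols)}
    for i, name in enumerate(names):
        kept = [c for c in all_cols if c != i]
        results[f'without {name}'] = fit_and_score(kept)
    return results

# Illustrative run on synthetic data with six inputs, as in the paper's variable list
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (300, 6))
y = 0.8 * X[:, 3] + 0.5 * X[:, 4] + 0.1 * rng.normal(size=300)   # synthetic target
names = ['Hs', 'Ts', 'hd', 'W', 'wind dir', 'wave angle']        # assumed labels
print(sensitivity_by_elimination(X[:200], y[:200], X[200:], y[200:], names))
```

Inputs whose removal raises the RMSE the most are interpreted as the most influential, which is how the wind speed and direction were identified above for the longshore currents.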
Figure 8 The RMSE related to removing each input variable from ANFIS models input variables at the general condition. Figure 8 The RMSE related to removing each input variable from ANFIS models input variables at the general condition. ### Development of the ANN models To develop the FFBP ANN estimator models, first the training and validation data sets used in the ANFIS models were gathered. Then, the validation data associated with the ANN models were selected randomly as the ratios chosen in the ANFIS models. Since the Levenberg–Marquardt training algorithm produces reasonable results for the majority of ANN applications, in this study this training algorithm is used to update weights and bias values. Following the selection of the ANN models' prerequisites, six ANN models were developed to estimate coastal current velocities for the three conditions of interest. The main factor in the FFBP is the number of hidden neurons (NHN) and is reported in Table 6. To tune this parameter, several numbers of hidden neurons were examined. Note that the RMSE of both training and validation data sets are reported in the table along with desired epochs. Table 6 Characteristics of ANN models to estimate coastal current velocities for different conditions Condition Desired epoch NHN Validation error (m/s) Training error (m/s) General 11 200 0.0534 0.0501 General 23 200 0.0823 0.0797 Stormy 200 0.0631 0.041 Stormy 12 200 0.1031 0.117 Windy 200 0.0489 0.0434 Windy 10 200 0.0545 0.0672 Condition Desired epoch NHN Validation error (m/s) Training error (m/s) General 11 200 0.0534 0.0501 General 23 200 0.0823 0.0797 Stormy 200 0.0631 0.041 Stormy 12 200 0.1031 0.117 Windy 200 0.0489 0.0434 Windy 10 200 0.0545 0.0672 ### Development of the multiple regression models To identify linear relationships between input and output variables for estimating coastal current velocities, the LSE method was used. The obtained linear relationships for the three conditions of interest are reported as follows. At the general condition: 21 22 At the stormy condition: 23 24 At the windy condition: 25 26 In addition, the MNLRP relations obtained by the LSE method to estimate coastal current velocities for the three conditions are outlined as the following expressions. At the general condition: 27 28 At the stormy condition: 29 30 At the windy condition: 31 32 ## EVALUATION OF MODELS The comparison between observed coastal current velocities with estimated ones by the ANFIS and ANN models are shown in Figures 914. As shown in the figures, at the general condition, the results obtained by the ANFIS models are similar to the observed data points (R= 0.866 and R = 0.751 for the cross-shore and longshore velocities, respectively). As well, the estimated velocities by the ANN model are identical to the observed ones (R = 0.962 and R= 0.796 for the cross-shore and longshore velocities, respectively). The merits of the developed models can be captured from Figures 9 to 14. As depicted in the figures, the models are able to estimate the velocity of cross-shore currents in both coastward and seaward. This may show that the models can predict the so-called undertow and rip currents. However, assigning of these features in the numerical models is computationally an expensive process. Figure 9 Comparison between the observed and estimated velocities of cross-shore currents by the ANFIS and ANN models at the general condition. 
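The MLR and MNLRP fits of Equations (21)-(32) are obtained by ordinary least squares, the latter after the log-linearization of Equation (12). A compact sketch of both fits on synthetic, pre-scaled data is given below; the printed coefficients are illustrative and are not the coefficients of the paper's equations.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(0.05, 0.95, (500, 6))            # inputs already scaled to [0.05, 0.95]
y = 0.05 + 0.6 * X[:, 3] + 0.3 * X[:, 0] + 0.02 * rng.normal(size=500)
y = np.clip(y, 0.05, 0.95)                       # keep the target positive for the log fit

# --- MLR: y = a0 + a1*x1 + ... + a6*x6 (Equation (10)) ---
A = np.column_stack([np.ones(len(X)), X])
coeffs_mlr, *_ = np.linalg.lstsq(A, y, rcond=None)
print('MLR coefficients:', np.round(coeffs_mlr, 3))

# --- MNLRP: y = a0 * x1^a1 * ... * x6^a6, fitted after taking logs (Equation (12)) ---
A_log = np.column_stack([np.ones(len(X)), np.log(X)])
coeffs_log, *_ = np.linalg.lstsq(A_log, np.log(y), rcond=None)
a0 = np.exp(coeffs_log[0])
print('MNLRP coefficients:', round(float(a0), 3), np.round(coeffs_log[1:], 3))

# Back-transform to predict with the power-function model
y_hat = a0 * np.prod(X ** coeffs_log[1:], axis=1)
print('MNLRP RMSE:', round(float(np.sqrt(np.mean((y - y_hat) ** 2))), 4))
```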
Figure 10 Comparison between the observed and estimated velocities of longshore currents by the ANFIS and ANN models at the general condition.

Figure 11 Comparison between the observed and estimated velocities of cross-shore currents by the ANFIS and ANN models at the stormy condition.

Figure 12 Comparison between the observed and estimated velocities of longshore currents by the ANFIS and ANN models at the stormy condition.

Figure 13 Comparison between the observed and estimated velocities of cross-shore currents by the ANFIS and ANN models at the windy condition.

Figure 14 Comparison between the observed and estimated velocities of longshore currents by the ANFIS and ANN models at the windy condition.

At the stormy condition, the ANFIS model performs well, estimating the velocity of cross-shore currents with a correlation coefficient of 0.933 and the velocity of longshore currents with a correlation coefficient of 0.784. The corresponding values for the ANN model are 0.9137 and 0.737, respectively. These findings indicate that both the ANFIS and ANN models estimate the coastal current velocities with high accuracy. At the windy condition, the ANFIS model estimates the velocity of cross-shore currents with a correlation coefficient of 0.794 and the velocity of longshore currents with a correlation coefficient of 0.571; the corresponding values for the ANN model are 0.817 and 0.5709. These results show that the ANFIS and ANN models can still estimate coastal current velocities acceptably, although both give lower correlations at the windy condition. This suggests that at the windy condition the coastal currents are also affected by other currents, such as tidal ones. Nevertheless, the correlation coefficients obtained for the stormy condition show that coastal currents are dominated by storms, with other contributions such as tides being insignificant. The performances of the ANFIS and ANN models are compared in Table 7; it can be seen that these data-driven models outperform the MLR and MNLRP models.
Table 7 Correlation coefficients obtained with each estimator method

Condition  Parameter      ANN     ANFIS   MNLRP   MLR
General    Vlongshore     0.796   0.751   0.569   0.531
General    Vcross−shore   0.962   0.866   0.583   0.546
Stormy     Vlongshore     0.737   0.784   0.512   0.503
Stormy     Vcross−shore   0.9137  0.933   0.643   0.326
Windy      Vlongshore     0.5709  0.571   0.455   0.343
Windy      Vcross−shore   0.817   0.794   0.578   0.541

Since a high correlation coefficient does not necessarily guarantee the efficiency of a model, Table 8 presents the statistical indexes (the bias and RMSE criteria) of the coastal current velocities estimated by the ANFIS and ANN models. As seen from the table, the ANFIS estimations were slightly biased. From the RMSE values it can be concluded that the ANN models are more accurate than the ANFIS models; in other words, the ANN models estimate both longshore and cross-shore current velocities with acceptable accuracy. The errors of both the ANN and ANFIS models at the windy condition were higher than those at the stormy condition.

Table 8 Error of the ANFIS and ANN models in approximating coastal current velocities at all three conditions

Condition   ANN RMSE (m/s)   ANN bias (m/s)   ANFIS RMSE (m/s)   ANFIS bias (m/s)
General     0.451            −0.0137          0.242              −0.0124
General     0.0743           −0.0123          0.0604             −0.0162
Stormy      0.568            −0.0192          0.434              −0.0188
Stormy      0.1196           −0.0168          0.1232             −0.0212
Windy       0.9508           −0.0193          0.353              −0.0137
Windy       0.1278           0.0026           0.145              0.0043

As mentioned before, the most important contribution of the ANFIS and ANN models in estimating coastal currents is their ability to deal with numerous input and output variables. These models are able to learn and build black-box reasoning to estimate coastal current velocities even when the physical behavior of the event is not well understood. To reach a sound conclusion, the sensitivity analysis is repeated here on the testing data sets using the ANN models. To achieve this, the sensitivity of the ANN models' RMSE to the inputs was explored by eliminating the input variables one at a time. The ANN models are used for this new sensitivity analysis because of their higher accuracy compared to the ANFIS models. As reported in Table 9, in this sensitivity analysis, as in the previous one, the wind direction and wind speed exert more influence on coastal current velocities in the longshore direction, and the analysis again confirms the effectiveness of the wave height on cross-shore current velocities.

Table 9 Variation of the RMSE against removing each input variable from the input list of the ANN models for the general condition in testing data (m/s)

No. 0.242 0.264 0.268 0.253 0.251 0.251 0.246
0.0604 0.0654 0.0579 0.0704 0.0727 0.0691 0.0637

## SUMMARY AND CONCLUSIONS

The ANFIS and ANN models are data-driven techniques allowing a relatively simple process of building regression (numerical prediction) models, whereas employing the conventional (physically based) numerical modeling methods could be quite complicated and time-consuming.
In this study, the ANFIS and ANN models were developed to estimate coastal current velocities at the Joeutsu-Ogata coast of the Japan Sea. Final evaluations of the developed models confirm outperformance of the models compared to the MLR and MNLRP models. In addition, it was concluded that the ANN models were more accurate than the ANFIS models. In addition, the sensitivity analysis showed the wind speed and wind direction having stronger effects on coastal current velocities at the longshore direction. However, water depth, wave characteristics, and incident wave angle had relatively lower effects on these currents. At the cross-shore direction, wave height had more influences on the current velocities compared to the wind speed, wind direction, and water depth. ## ACKNOWLEDGEMENTS This study was partially supported by the Deputy of Research at Golestan University (GU) and the first author sincerely appreciates their continuous support during the study. Also, the authors thank very much Dr Mahmood Hajiani, Dr Hossein Karimian and Dr Mohsen Lashkarbolok for their constructive comments on the manuscript. ## REFERENCES REFERENCES Altunkaynak A. Ozger M. Çakmakci M. 2004a . Ecological Modelling 189 ( 3–4 ), 436 446 . Altunkaynak A. Ozger M. Çakmakci M. 2004b . Water Recourse Management 19 ( 5 ), 641 654 . Azamathulla H. Md. Z. 2013 . ASCE Journal of Pipeline Systems Engineering and Practice 4 ( 2 ), 131 137 . Azmathullah H. Md. Ghani A. A. Zakaria N. A. 2009 . Water Management. Proceeding of ICE 162 ( WM6 ), 399 407 . Bakhtyar R. Yeganeh-Bakhtiary A. Ghaheri A. 2008a . Applied Ocean Research 30 , 17 27 . Bakhtyar R. Ghaheri A. Yeganeh-Bakhtiary A. Baldock T. E. 2008b . Applied Ocean Research 30 , 273 286 . Bardossy A. Disse A. 1993 Fuzzy rule based models for infiltration . Water Resources Research 29 ( 2 ), 373 382 . Bateni S. M. Jeng D. S. 2007 . Ocean Engineering 34 , 1344 1354 . Bateni S. M. Borghei S. M. Jeng D. S. 2007 . Engineering Application of Artificial Intelligence 20 , 401 414 . Chiu S. L. 1994 Fuzzy model identification based on cluster estimation . Intelligent Fuzzy Systems 2 , 234 244 . Haykin S. S. 2009 Neural Networks and Learning Machines , Vol. 3 . Pearson Education , , USA . Horikawa K. 1978 Coastal Engineering, An Introduction to Ocean Engineering . University of Tokyo Press , Tokyo , Japan . Jang J. S. R. 1993 . IEEE Transactions on Systems, Man and Cybernetics 23 ( 3 ), 665 685 . Johnson P. A. Ayyub B. M. 1996 . Journal of Hydraulic Engineering 122 ( 2 ), 66 72 . Kato S. Yamashita T. 2000 Three dimensional model for wind, wave-induced coastal currents and its verification by ADCP observations in the nearshore zone . In: Proceedings of the 27th International Conference on Coastal Engineering , ASCE , 16–21 July , Sydney , Australia , pp. 777 790 . Kato S. Yamashita T. 2003 Coastal current system and its simulation model. Ann. Disas. Prev. Res. Inst., Kyoto University 46B, 1–9 . M. H. A. Mousavi S. J. 2005 . Ocean Engineering 32 , 1709 1725 . Kindler J. 1992 . Journal of Water Resource Planning and Management 118 ( 3 ), 308 323 . Kuma J. W. 1984 Basic Statistics for the Health Sciences . Mayfield Publishing Co. , Palo Alto, CA , USA , pp. 158 169 . Mahjoobi J. A. M. H. 2008 . Applied Ocean Research 30 ( 1 ), 28 36 . Muzzammil M. 2010 . Journal of Hydrology 12 ( 4 ), 474 485 . Muzzammil M. Alam J. 2011 . Journal of Hydrology 13 ( 4 ), 699 713 . Ozger M. 2009 . Hydrological Sciences Journal 54 ( 2 ), 261 273 . Ozger M. Sen Z. 2007 . Ocean Engineering 34 , 460 469 . 
Pongracz R. Bogardi I. Duckstein L. 1999 . Journal of Hydrology 224 ( 3–4 ), 100 114 . Ruchi K. Deo M. C. Kumar R. Agarwal V. K. 2005 . Hydraulic Research, Indian Society for Hydraulics 11 ( 3 ), 152 162 . Sen Z. Altunkaynak A. 2004 Fuzzy awakening in rainfall-runoff modeling . Nordic Hydrology 35 ( 1 ), 31 43 . Shiri J. Makarynskyy O. Kisi O. Dierickx W. Fard A. 2011 . Journal of Waterway, Port, Coastal and Ocean Engineering 137 ( 6 ), 344 354 . Shrestha B. P. Duckstein L. Stakhiv E. Z. 1996 . Journal of Water Resource Planning and Management 122 ( 4 ), 262 269 . Takagi T. Sugeno M. 1989 Fuzzy identification of system and its applications to modeling and control . IEEE Transactions on Systems, Man and Cybernetics 15 , 116 132 . Uyumaz A. Altunkaynak A. Ozger M. 2006 . Journal of Hydraulic Engineering 132 ( 10 ), 1069 1075 . Weber J. C. Lamb D. R. 1970 Statistics and Research in Physical Education . CV Mosby , St Louis, MO , USA , pp. 59 64 , 222 . Yamashita T. Yoshioka H. Kato S. Lu M. Shimoda T. 1998 ADCP observation of nearshore current structure in the surf zone . In: Proceedings of the 26th International Conference on Coastal Engineering , ASCE pp. 787 800 . Yasuda T. Iwata H. Kato S. 1996 . Proc. Coastal Engineering, JSCE 43 , 366 370 (in Japanese). Zanganeh M. Mousavi S. J. A. 2009 . Engineering Application of Artificial Intelligence 22 , 1194 1202 . Zanganeh M. Yeganeh-Bakhtiary A. Bakhtyar R. 2011 . Journal of Hydroinformatics 13 , 558 573 .
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8582096695899963, "perplexity": 1493.4909109568814}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203378.92/warc/CC-MAIN-20190324063449-20190324085449-00445.warc.gz"}
$$\require{cancel}$$ # 8.3: Lenz's Law Learning Objectives By the end of this section, you will be able to: • Use Lenz’s law to determine the direction of induced emf whenever a magnetic flux changes • Use Faraday’s law with Lenz’s law to determine the induced emf in a coil and in a solenoid The direction in which the induced emf drives current around a wire loop can be found through the negative sign. However, it is usually easier to determine this direction with Lenz’s law, named in honor of its discoverer, Heinrich Lenz (1804–1865). (Faraday also discovered this law, independently of Lenz.) We state Lenz’s law as follows: Lenz's Law The direction of the induced emf drives current around a wire loop to always oppose the change in magnetic flux that causes the emf. Lenz’s law can also be considered in terms of conservation of energy. If pushing a magnet into a coil causes current, the energy in that current must have come from somewhere. If the induced current causes a magnetic field opposing the increase in field of the magnet we pushed in, then the situation is clear. We pushed a magnet against a field and did work on the system, and that showed up as current. If it were not the case that the induced field opposes the change in the flux, the magnet would be pulled in produce a current without anything having done work. Electric potential energy would have been created, violating the conservation of energy. To determine an induced emf $$\epsilon$$, you first calculate the magnetic flux $$\Phi_m$$ and then obtain $$d\Phi_m/dt$$. The magnitude of $$\epsilon$$ is given by $\epsilon = \left|\dfrac{d\Phi_m}{dt}\right|.$ Finally, you can apply Lenz’s law to determine the sense of $$\epsilon$$. This will be developed through examples that illustrate the following problem-solving strategy. Problem-Solving Strategy: Lenz’s Law To use Lenz’s law to determine the directions of induced magnetic fields, currents, and emfs: • Make a sketch of the situation for use in visualizing and recording directions. • Determine the direction of the applied magnetic field $$\vec{B}$$. • Determine whether its magnetic flux is increasing or decreasing. • Now determine the direction of the induced magnetic field $$\vec{B}$$. The induced magnetic field tries to reinforce a magnetic flux that is decreasing or opposes a magnetic flux that is increasing. Therefore, the induced magnetic field adds or subtracts to the applied magnetic field, depending on the change in magnetic flux. • Use right-hand rule 2 (RHR-2; see Magnetic Forces and Fields) to determine the direction of the induced current I that is responsible for the induced magnetic field $$\vec{B}$$. • The direction (or polarity) of the induced emf can now drive a conventional current in this direction. Let’s apply Lenz’s law to the system of Figure $$\PageIndex{1a}$$. We designate the “front” of the closed conducting loop as the region containing the approaching bar magnet, and the “back” of the loop as the other region. As the north pole of the magnet moves toward the loop, the flux through the loop due to the field of the magnet increases because the strength of field lines directed from the front to the back of the loop is increasing. A current is therefore induced in the loop. By Lenz’s law, the direction of the induced current must be such that its own magnetic field is directed in a way to oppose the changing flux caused by the field of the approaching magnet. 
Hence, the induced current circulates so that its magnetic field lines through the loop are directed from the back to the front of the loop. By RHR-2, place your thumb pointing against the magnetic field lines, which is toward the bar magnet. Your fingers wrap in a counterclockwise direction as viewed from the bar magnet. Alternatively, we can determine the direction of the induced current by treating the current loop as an electromagnet that opposes the approach of the north pole of the bar magnet. This occurs when the induced current flows as shown, for then the face of the loop nearer the approaching magnet is also a north pole.

Part (b) of the figure shows the south pole of a magnet moving toward a conducting loop. In this case, the flux through the loop due to the field of the magnet increases because the number of field lines directed from the back to the front of the loop is increasing. To oppose this change, a current is induced in the loop whose field lines through the loop are directed from the front to the back. Equivalently, we can say that the current flows in a direction so that the face of the loop nearer the approaching magnet is a south pole, which then repels the approaching south pole of the magnet. By RHR-2, your thumb points away from the bar magnet. Your fingers wrap in a clockwise fashion, which is the direction of the induced current.

Another example illustrating the use of Lenz's law is shown in Figure $$\PageIndex{2}$$. When the switch is opened, the decrease in current through the solenoid causes a decrease in magnetic flux through its coils, which induces an emf in the solenoid. This emf must oppose the change (the termination of the current) causing it. Consequently, the induced emf has the polarity shown and drives a current in the direction of the original current. This may generate an arc across the terminals of the switch as it is opened.

Exercise $$\PageIndex{1A}$$

Find the direction of the induced current in the wire loop shown below as the magnet enters, passes through, and leaves the loop.

Solution

To the observer shown, the current flows clockwise as the magnet approaches, decreases to zero when the magnet is centered in the plane of the coil, and then flows counterclockwise as the magnet leaves the coil.

Exercise $$\PageIndex{1B}$$

Verify the directions of the induced currents in Figure 13.2.2.

Example $$\PageIndex{1A}$$: A Circular Coil in a Changing Magnetic Field

A magnetic field $$\vec{B}$$ is directed outward perpendicular to the plane of a circular coil of radius $$r = 0.50 \, m$$ (Figure $$\PageIndex{3}$$). The field is cylindrically symmetrical with respect to the center of the coil, and its magnitude decays exponentially according to $$B = (1.5 \, T)e^{-(5.0 \, s^{-1})t}$$, where B is in teslas and t is in seconds. (a) Calculate the emf induced in the coil at the times $$t_1 = 0$$, $$t_2 = 5.0 \times 10^{-2} \, s$$, and $$t_3 = 1.0 \, s$$. (b) Determine the current in the coil at these three times if its resistance is $$10 \, \Omega$$.

Strategy

Since the magnetic field is perpendicular to the plane of the coil and constant over each spot in the coil, the dot product of the magnetic field $$\vec{B}$$ and the unit vector $$\hat{n}$$ normal to the area turns into a multiplication. The magnetic field can be pulled out of the integration, leaving the flux as the product of the magnetic field times the area. We need to take the time derivative of the exponential function to calculate the emf using Faraday's law. Then we use Ohm's law to calculate the current.

Solution

1.
Since $$\vec{B}$$ is perpendicular to the plane of the coil, the magnetic flux is given by $\Phi_m = B\pi r^2 = (1.5 \, T)e^{-(5.0 \, s^{-1})t}\, \pi (0.50 \, m)^2 = 1.2\, e^{-(5.0 \, s^{-1})t} \, Wb.$ From Faraday's law, the magnitude of the induced emf is $\epsilon = \left|\frac{d\Phi_m}{dt}\right| = \left|\frac{d}{dt}\left(1.2\, e^{-(5.0 \, s^{-1})t} \, Wb\right)\right| = 6.0\, e^{-(5.0 \, s^{-1})t} \, V.$ Since $$\vec{B}$$ is directed out of the page and is decreasing, the induced current must flow counterclockwise when viewed from above so that the magnetic field it produces through the coil also points out of the page. For all three times, the sense of $$\epsilon$$ is counterclockwise; its magnitudes are $\epsilon(t_1) = 6.0 \, V; \quad \epsilon(t_2) = 4.7 \, V; \quad \epsilon(t_3) = 0.040 \, V.$ 2. From Ohm's law, the respective currents are $I(t_1) = \frac{\epsilon(t_1)}{R} = \frac{6.0 \, V}{10 \, \Omega} = 0.60 \, A;$ $I(t_2) = \frac{4.7 \, V}{10 \, \Omega} = 0.47 \, A;$ and $I(t_3) = \frac{0.040 \, V}{10 \, \Omega} = 4.0 \times 10^{-3} \, A.$

Significance: An emf voltage is created by a changing magnetic flux over time. If we know how the magnetic field varies with time over a constant area, we can take its time derivative to calculate the induced emf.

Example $$\PageIndex{1B}$$: Changing Magnetic Field Inside a Solenoid

The current through the windings of a solenoid with $$n = 2000$$ turns per meter is changing at a rate $$dI/dt = 3.0 \, A/s$$. (See Sources of Magnetic Fields for a discussion of solenoids.) The solenoid is 50 cm long and has a cross-sectional diameter of 3.0 cm. A small coil consisting of $$N = 20$$ closely wound turns wrapped in a circle of diameter 1.0 cm is placed in the middle of the solenoid such that the plane of the coil is perpendicular to the central axis of the solenoid. Assuming that the infinite-solenoid approximation is valid at the location of the small coil, determine the magnitude of the emf induced in the coil.

Strategy: The magnetic field in the middle of the solenoid is a uniform value of $$\mu_0 nI$$. This field produces the maximum magnetic flux through the coil because it is directed along the length of the solenoid. Therefore, the magnetic flux through the coil is the product of the solenoid's magnetic field and the area of the coil. Faraday's law involves a time derivative of the magnetic flux. The only quantity varying in time is the current; the rest can be pulled out of the time derivative. Lastly, we include the number of turns in the coil to determine the induced emf in the coil.

Solution: Since the field of the solenoid is given by $$B = \mu_0 nI$$, the flux through each turn of the small coil is $\Phi_m = \mu_0 n I\left(\frac{\pi d^2}{4}\right),$ where d is the diameter of the coil. Now from Faraday's law, the magnitude of the emf induced in the coil is $\epsilon = \left|N\frac{d\Phi_m}{dt}\right| = \left|N\mu_0 n \frac{\pi d^2}{4}\frac{dI}{dt}\right| = 20\,(4\pi \times 10^{-7} \, T \cdot m/A)(2000 \, m^{-1})\frac{\pi (0.010 \, m)^2}{4}(3.0 \, A/s) = 1.2 \times 10^{-5} \, V.$

Significance: When the current is turned on in a vertical solenoid, as shown in Figure $$\PageIndex{4}$$, the ring has an induced emf from the solenoid's changing magnetic flux that opposes the change. The result is that the ring is fired vertically into the air.

Note: A demonstration of the jumping ring from MIT.
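The arithmetic in both examples is easy to verify; the short Python script below simply re-evaluates the expressions derived above (the flux amplitude is kept unrounded, so the printed emfs differ from the rounded text values by a percent or two).

```python
import numpy as np

# Example 1A: circular coil of radius 0.50 m, B = 1.5 T * exp(-5.0 t), R = 10 ohm
r, B0, k, R = 0.50, 1.5, 5.0, 10.0
emf = lambda t: B0 * k * np.pi * r**2 * np.exp(-k * t)   # |dPhi/dt| ~ 6.0 e^{-5t} V

for t in (0.0, 5.0e-2, 1.0):
    print(f"t = {t:5.2f} s:  emf = {emf(t):.3f} V,  I = {emf(t) / R * 1e3:.2f} mA")

# Example 1B: emf induced in a 20-turn coil at the centre of a long solenoid
mu0 = 4e-7 * np.pi          # permeability of free space, T*m/A
N, n, d, dIdt = 20, 2000, 0.010, 3.0
emf_coil = N * mu0 * n * (np.pi * d**2 / 4) * dIdt
print(f"solenoid example: emf = {emf_coil:.2e} V")        # about 1.2e-5 V
```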
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9355409145355225, "perplexity": 234.82050537212638}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585171.16/warc/CC-MAIN-20211017082600-20211017112600-00560.warc.gz"}
# How to calculate the velocity of fluid at the outlet when density and the pressure drop are known? I have a U- like pipe. Its inlet has atmospheric pressure $p_o=10^{5} \, Pa$. Vacuum is applied to the other end with a pressure gradient $\nabla p_v=-30 \cdot 10^{3} \, kPa/s$. The total time of the simulation is $t=0.25$ s. I assume that the highest velocity would be reached at the end of the simulation. After 0.25 s the vacuum pressure is $p_v=-7.5 \cdot 10^{3} \, Pa$. I am interested in knowing the maximum speed of air ($\rho=0.146$ $kg/m^{3}$) at the outlet (assuming this is where it is highest). One idea that comes to mind is to use the Bernoulli equation: $$p_0 + \frac{1}{2} \rho v_{0}^{2} +\rho g h_0 = p_v + \frac{1}{2} \rho v_{v}^{2} +\rho g h_v$$ The $h$ value in both cases is the same, hence the equation simplifies. Assuming the speed of air at the inlet $v_0=0$ m/s, further simplifying the equation. $$p_0=p_v+\frac{1}{2} \rho v_{v}^{2}$$ $$v_v=\sqrt{\frac{2}{\rho}(p_0-p_v)} \approx 1213.5 \, m/s$$ However that seems unrealistically high. My rough computational simulation gives a value of $v_v=64 \, m/s$. How can I calculate $v_v$? Should I not be taking into account properties of the geometry? I am not really sure I can consider $v_0=0$, because surely the speed will be almost as high at the inlet as at the outlet due to the long term effects of the pressure gradient. The purpose of this is to give me a rough value of the maximum speed of air in such a scenario, so I could use $v_v$ value in a computational simulation. - Your title is very misleading. If you really knew the velocity at the inlet $v_{in}$, you could just use mass conservation and the areas of the inlet $A_{in}$ and of the outlet $A_{out}$. Then you would have $v_{out} \, A_{out} = v_{in}\, A_{in}$, or $v_{out} = v_{in}\, A_{in} / A_{out}$ (plus corrections for density). But you claim that $v_{in}$ (which you call $v_0$) is zero, so the velocity at the outlet would necessarily be zero -- unless you were creating mass inside your pipe. Instead, you know the velocity of the ambient air near the inlet. –  Mike Jun 20 '13 at 16:31 @Mike Except for compressible gasses :) –  Bernhard Jun 20 '13 at 17:45 @Bernhard Well, I'll still file that one under "plus corrections for density" (by which I meant density changes from inlet to outlet)... –  Mike Jun 20 '13 at 18:11 True, overlooked that in your comment. Putting one end at vacuum makes this more than just a correction, I think. –  Bernhard Jun 20 '13 at 18:15 I don't think continuity is of much use here, since areas are identical, and an unsteady boundary condition is present. –  A.L. Verminburger Jun 21 '13 at 6:58
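For a quick numerical check, the snippet below evaluates the rearranged Bernoulli expression with the numbers stated in the question; the pressure rate is taken as Pa/s (consistent with the quoted value of −7.5·10³ Pa after 0.25 s), and the low density of 0.146 kg/m³ is used as given.

```python
from math import sqrt

p0 = 1.0e5        # inlet (atmospheric) pressure, Pa
dpdt = -30.0e3    # rate of change of the vacuum-side pressure, Pa/s (as interpreted above)
t = 0.25          # duration of the simulation, s
rho = 0.146       # kg/m^3, value given in the question

pv = dpdt * t                      # -7.5e3 Pa after 0.25 s
v = sqrt(2.0 * (p0 - pv) / rho)    # Bernoulli with v_inlet assumed zero
print(f"p_v = {pv:.0f} Pa, v_outlet = {v:.1f} m/s")   # about 1213.5 m/s
```

A value this far above the speed of sound is a sign that the incompressible Bernoulli assumption has broken down, which is consistent with the much lower velocity seen in the computational simulation.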
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 3, "x-ck12": 0, "texerror": 0, "math_score": 0.9623039960861206, "perplexity": 303.61164549644525}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928817.87/warc/CC-MAIN-20150521113208-00189-ip-10-180-206-219.ec2.internal.warc.gz"}
https://www.educator.com/chemistry/physical-chemistry/hovasapian/example-problems-ii.php
Raffi Hovasapian Example Problems II Slide Duration: Section 1: Classical Thermodynamics Preliminaries The Ideal Gas Law 46m 5s Intro 0:00 Course Overview 0:16 Thermodynamics & Classical Thermodynamics 0:17 Structure of the Course 1:30 The Ideal Gas Law 3:06 Ideal Gas Law: PV=nRT 3:07 Units of Pressure 4:51 Manipulating Units 5:52 Atmosphere : atm 8:15 Millimeter of Mercury: mm Hg 8:48 SI Unit of Volume 9:32 SI Unit of Temperature 10:32 Value of R (Gas Constant): Pv = nRT 10:51 Extensive and Intensive Variables (Properties) 15:23 Intensive Property 15:52 Extensive Property 16:30 Example: Extensive and Intensive Variables 18:20 Ideal Gas Law 19:24 Ideal Gas Law with Intensive Variables 19:25 Graphing Equations 23:51 Hold T Constant & Graph P vs. V 23:52 Hold P Constant & Graph V vs. T 31:08 Hold V Constant & Graph P vs. T 34:38 Isochores or Isometrics 37:08 More on the V vs. T Graph 39:46 More on the P vs. V Graph 42:06 Ideal Gas Law at Low Pressure & High Temperature 44:26 Ideal Gas Law at High Pressure & Low Temperature 45:16 Math Lesson 1: Partial Differentiation 46m 2s Intro 0:00 Math Lesson 1: Partial Differentiation 0:38 Overview 0:39 Example I 3:00 Example II 6:33 Example III 9:52 Example IV 17:26 Differential & Derivative 21:44 What Does It Mean? 21:45 Total Differential (or Total Derivative) 30:16 Net Change in Pressure (P) 33:58 General Equation for Total Differential 38:12 Example 5: Total Differential 39:28 Section 2: Energy Energy & the First Law I 1h 6m 45s Intro 0:00 Properties of Thermodynamic State 1:38 Big Picture: 3 Properties of Thermodynamic State 1:39 Enthalpy & Free Energy 3:30 Associated Law 4:40 Energy & the First Law of Thermodynamics 7:13 System & Its Surrounding Separated by a Boundary 7:14 In Other Cases the Boundary is Less Clear 10:47 State of a System 12:37 State of a System 12:38 Change in State 14:00 Path for a Change in State 14:57 Example: State of a System 15:46 Open, Close, and Isolated System 18:26 Open System 18:27 Closed System 19:02 Isolated System 19:22 Important Questions 20:38 Important Questions 20:39 Work & Heat 22:50 Definition of Work 23:33 Properties of Work 25:34 Definition of Heat 32:16 Properties of Heat 34:49 Experiment #1 42:23 Experiment #2 47:00 More on Work & Heat 54:50 More on Work & Heat 54:51 Conventions for Heat & Work 1:00:50 Convention for Heat 1:02:40 Convention for Work 1:04:24 Schematic Representation 1:05:00 Energy & the First Law II 1h 6m 33s Intro 0:00 The First Law of Thermodynamics 0:53 The First Law of Thermodynamics 0:54 Example 1: What is the Change in Energy of the System & Surroundings? 8:53 Energy and The First Law II, cont. 11:55 The Energy of a System Changes in Two Ways 11:56 Systems Possess Energy, Not Heat or Work 12:45 Scenario 1 16:00 Scenario 2 16:46 State Property, Path Properties, and Path Functions 18:10 Pressure-Volume Work 22:36 When a System Changes 22:37 Gas Expands 24:06 Gas is Compressed 25:13 Pressure Volume Diagram: Analyzing Expansion 27:17 What if We do the Same Expansion in Two Stages? 35:22 Multistage Expansion 43:58 General Expression for the Pressure-Volume Work 46:59 Upper Limit of Isothermal Expansion 50:00 Expression for the Work Done in an Isothermal Expansion 52:45 Example 2: Find an Expression for the Maximum Work Done by an Ideal Gas upon Isothermal Expansion 56:18 Example 3: Calculate the External Pressure and Work Done 58:50 Energy & the First Law III 1h 2m 17s Intro 0:00 Compression 0:20 Compression Overview 0:34 Single-stage compression vs. 
2-stage Compression 2:16 Multi-stage Compression 8:40 Example I: Compression 14:47 Example 1: Single-stage Compression 14:47 Example 1: 2-stage Compression 20:07 Example 1: Absolute Minimum 26:37 More on Compression 32:55 Isothermal Expansion & Compression 32:56 External & Internal Pressure of the System 35:18 Reversible & Irreversible Processes 37:32 Process 1: Overview 38:57 Process 2: Overview 39:36 Process 1: Analysis 40:42 Process 2: Analysis 45:29 Reversible Process 50:03 Isothermal Expansion and Compression 54:31 Example II: Reversible Isothermal Compression of a Van der Waals Gas 58:10 Example 2: Reversible Isothermal Compression of a Van der Waals Gas 58:11 Changes in Energy & State: Constant Volume 1h 4m 39s Intro 0:00 Recall 0:37 State Function & Path Function 0:38 First Law 2:11 Exact & Inexact Differential 2:12 Where Does (∆U = Q - W) or dU = dQ - dU Come from? 8:54 Cyclic Integrals of Path and State Functions 8:55 Our Empirical Experience of the First Law 12:31 ∆U = Q - W 18:42 Relations between Changes in Properties and Energy 22:24 Relations between Changes in Properties and Energy 22:25 Rate of Change of Energy per Unit Change in Temperature 29:54 Rate of Change of Energy per Unit Change in Volume at Constant Temperature 32:39 Total Differential Equation 34:38 Constant Volume 41:08 If Volume Remains Constant, then dV = 0 41:09 Constant Volume Heat Capacity 45:22 Constant Volume Integrated 48:14 Increase & Decrease in Energy of the System 54:19 Example 1: ∆U and Qv 57:43 Important Equations 1:02:06 Joule's Experiment 16m 50s Intro 0:00 Joule's Experiment 0:09 Joule's Experiment 1:20 Interpretation of the Result 4:42 The Gas Expands Against No External Pressure 4:43 Temperature of the Surrounding Does Not Change 6:20 System & Surrounding 7:04 Joule's Law 10:44 More on Joule's Experiment 11:08 Later Experiment 12:38 Dealing with the 2nd Law & Its Mathematical Consequences 13:52 Changes in Energy & State: Constant Pressure 43m 40s Intro 0:00 Changes in Energy & State: Constant Pressure 0:20 Integrating with Constant Pressure 0:35 Defining the New State Function 6:24 Heat & Enthalpy of the System at Constant Pressure 8:54 Finding ∆U 12:10 dH 15:28 Constant Pressure Heat Capacity 18:08 Important Equations 25:44 Important Equations 25:45 Important Equations at Constant Pressure 27:32 Example I: Change in Enthalpy (∆H) 28:53 Example II: Change in Internal Energy (∆U) 34:19 The Relationship Between Cp & Cv 32m 23s Intro 0:00 The Relationship Between Cp & Cv 0:21 For a Constant Volume Process No Work is Done 0:22 For a Constant Pressure Process ∆V ≠ 0, so Work is Done 1:16 The Relationship Between Cp & Cv: For an Ideal Gas 3:26 The Relationship Between Cp & Cv: In Terms of Molar heat Capacities 5:44 Heat Capacity Can Have an Infinite # of Values 7:14 The Relationship Between Cp & Cv 11:20 When Cp is Greater than Cv 17:13 2nd Term 18:10 1st Term 19:20 Constant P Process: 3 Parts 22:36 Part 1 23:45 Part 2 24:10 Part 3 24:46 Define : γ = (Cp/Cv) 28:06 For Gases 28:36 For Liquids 29:04 For an Ideal Gas 30:46 The Joule Thompson Experiment 39m 15s Intro 0:00 General Equations 0:13 Recall 0:14 How Does Enthalpy of a System Change Upon a Unit Change in Pressure? 
2:58 For Liquids & Solids 12:11 For Ideal Gases 14:08 For Real Gases 16:58 The Joule Thompson Experiment 18:37 The Joule Thompson Experiment Setup 18:38 The Flow in 2 Stages 22:54 Work Equation for the Joule Thompson Experiment 24:14 Insulated Pipe 26:33 Joule-Thompson Coefficient 29:50 Changing Temperature & Pressure in Such a Way that Enthalpy Remains Constant 31:44 Joule Thompson Inversion Temperature 36:26 Positive & Negative Joule-Thompson Coefficient 36:27 Joule Thompson Inversion Temperature 37:22 Inversion Temperature of Hydrogen Gas 37:59 35m 52s Intro 0:00 0:10 0:18 Work & Energy in an Adiabatic Process 3:44 Pressure-Volume Work 7:43 Adiabatic Changes for an Ideal Gas 9:23 Adiabatic Changes for an Ideal Gas 9:24 Equation for a Fixed Change in Volume 11:20 Maximum & Minimum Values of Temperature 14:20 18:08 18:09 21:54 22:34 Fundamental Relationship Equation for an Ideal Gas Under Adiabatic Expansion 25:00 More on the Equation 28:20 Important Equations 32:16 32:17 Reversible Adiabatic Change of State Equation 33:02 Section 3: Energy Example Problems 1st Law Example Problems I 42m 40s Intro 0:00 Fundamental Equations 0:56 Work 2:40 Energy (1st Law) 3:10 Definition of Enthalpy 3:44 Heat capacity Definitions 4:06 The Mathematics 6:35 Fundamental Concepts 8:13 Isothermal 8:20 8:54 Isobaric 9:25 Isometric 9:48 Ideal Gases 10:14 Example I 12:08 Example I: Conventions 12:44 Example I: Part A 15:30 Example I: Part B 18:24 Example I: Part C 19:53 Example II: What is the Heat Capacity of the System? 21:49 Example III: Find Q, W, ∆U & ∆H for this Change of State 24:15 Example IV: Find Q, W, ∆U & ∆H 31:37 Example V: Find Q, W, ∆U & ∆H 38:20 1st Law Example Problems II 1h 23s Intro 0:00 Example I 0:11 Example I: Finding ∆U 1:49 Example I: Finding W 6:22 Example I: Finding Q 11:23 Example I: Finding ∆H 16:09 Example I: Summary 17:07 Example II 21:16 Example II: Finding W 22:42 Example II: Finding ∆H 27:48 Example II: Finding Q 30:58 Example II: Finding ∆U 31:30 Example III 33:33 Example III: Finding ∆U, Q & W 33:34 Example III: Finding ∆H 38:07 Example IV 41:50 Example IV: Finding ∆U 41:51 Example IV: Finding ∆H 45:42 Example V 49:31 Example V: Finding W 49:32 Example V: Finding ∆U 55:26 Example V: Finding Q 56:26 Example V: Finding ∆H 56:55 1st Law Example Problems III 44m 34s Intro 0:00 Example I 0:15 Example I: Finding the Final Temperature 3:40 Example I: Finding Q 8:04 Example I: Finding ∆U 8:25 Example I: Finding W 9:08 Example I: Finding ∆H 9:51 Example II 11:27 Example II: Finding the Final Temperature 11:28 Example II: Finding ∆U 21:25 Example II: Finding W & Q 22:14 Example II: Finding ∆H 23:03 Example III 24:38 Example III: Finding the Final Temperature 24:39 Example III: Finding W, ∆U, and Q 27:43 Example III: Finding ∆H 28:04 Example IV 29:23 Example IV: Finding ∆U, W, and Q 25:36 Example IV: Finding ∆H 31:33 Example V 32:24 Example V: Finding the Final Temperature 33:32 Example V: Finding ∆U 39:31 Example V: Finding W 40:17 Example V: First Way of Finding ∆H 41:10 Example V: Second Way of Finding ∆H 42:10 Thermochemistry Example Problems 59m 7s Intro 0:00 Example I: Find ∆H° for the Following Reaction 0:42 Example II: Calculate the ∆U° for the Reaction in Example I 5:33 Example III: Calculate the Heat of Formation of NH₃ at 298 K 14:23 Example IV 32:15 Part A: Calculate the Heat of Vaporization of Water at 25°C 33:49 Part B: Calculate the Work Done in Vaporizing 2 Mols of Water at 25°C Under a Constant Pressure of 1 atm 35:26 Part C: Find ∆U for the Vaporization of Water at 25°C 
41:00 Part D: Find the Enthalpy of Vaporization of Water at 100°C 43:12 Example V 49:24 Part A: Constant Temperature & Increasing Pressure 50:25 Part B: Increasing temperature & Constant Pressure 56:20 Section 4: Entropy Entropy 49m 16s Intro 0:00 Entropy, Part 1 0:16 Coefficient of Thermal Expansion (Isobaric) 0:38 Coefficient of Compressibility (Isothermal) 1:25 Relative Increase & Relative Decrease 2:16 More on α 4:40 More on κ 8:38 Entropy, Part 2 11:04 Definition of Entropy 12:54 Differential Change in Entropy & the Reversible Path 20:08 State Property of the System 28:26 Entropy Changes Under Isothermal Conditions 35:00 Recall: Heating Curve 41:05 Some Phase Changes Take Place Under Constant Pressure 44:07 Example I: Finding ∆S for a Phase Change 46:05 Math Lesson II 33m 59s Intro 0:00 Math Lesson II 0:46 Let F(x,y) = x²y³ 0:47 Total Differential 3:34 Total Differential Expression 6:06 Example 1 9:24 More on Math Expression 13:26 Exact Total Differential Expression 13:27 Exact Differentials 19:50 Inexact Differentials 20:20 The Cyclic Rule 21:06 The Cyclic Rule 21:07 Example 2 27:58 Entropy As a Function of Temperature & Volume 54m 37s Intro 0:00 Entropy As a Function of Temperature & Volume 0:14 Fundamental Equation of Thermodynamics 1:16 Things to Notice 9:10 Entropy As a Function of Temperature & Volume 14:47 Temperature-dependence of Entropy 24:00 Example I 26:19 Entropy As a Function of Temperature & Volume, Cont. 31:55 Volume-dependence of Entropy at Constant Temperature 31:56 Differentiate with Respect to Temperature, Holding Volume Constant 36:16 Recall the Cyclic Rule 45:15 Summary & Recap 46:47 Fundamental Equation of Thermodynamics 46:48 For Entropy as a Function of Temperature & Volume 47:18 The Volume-dependence of Entropy for Liquids & Solids 52:52 Entropy as a Function of Temperature & Pressure 31m 18s Intro 0:00 Entropy as a Function of Temperature & Pressure 0:17 Entropy as a Function of Temperature & Pressure 0:18 Rewrite the Total Differential 5:54 Temperature-dependence 7:08 Pressure-dependence 9:04 Differentiate with Respect to Pressure & Holding Temperature Constant 9:54 Differentiate with Respect to Temperature & Holding Pressure Constant 11:28 Pressure-Dependence of Entropy for Liquids & Solids 18:45 Pressure-Dependence of Entropy for Liquids & Solids 18:46 Example I: ∆S of Transformation 26:20 Summary of Entropy So Far 23m 6s Intro 0:00 Summary of Entropy So Far 0:43 Defining dS 1:04 Fundamental Equation of Thermodynamics 3:51 Temperature & Volume 6:04 Temperature & Pressure 9:10 Two Important Equations for How Entropy Behaves 13:38 State of a System & Heat Capacity 15:34 Temperature-dependence of Entropy 19:49 Entropy Changes for an Ideal Gas 25m 42s Intro 0:00 Entropy Changes for an Ideal Gas 1:10 General Equation 1:22 The Fundamental Theorem of Thermodynamics 2:37 Recall the Basic Total Differential Expression for S = S (T,V) 5:36 For a Finite Change in State 7:58 If Cv is Constant Over the Particular Temperature Range 9:05 Change in Entropy of an Ideal Gas as a Function of Temperature & Pressure 11:35 Change in Entropy of an Ideal Gas as a Function of Temperature & Pressure 11:36 Recall the Basic Total Differential expression for S = S (T, P) 15:13 For a Finite Change 18:06 Example 1: Calculate the ∆S of Transformation 22:02 Section 5: Entropy Example Problems Entropy Example Problems I 43m 39s Intro 0:00 Entropy Example Problems I 0:24 Fundamental Equation of Thermodynamics 1:10 Entropy as a Function of Temperature & Volume 2:04 Entropy as a Function of 
Temperature & Pressure 2:59 Entropy For Phase Changes 4:47 Entropy For an Ideal Gas 6:14 Third Law Entropies 8:25 Statement of the Third Law 9:17 Entropy of the Liquid State of a Substance Above Its Melting Point 10:23 Entropy For the Gas Above Its Boiling Temperature 13:02 Entropy Changes in Chemical Reactions 15:26 Entropy Change at a Temperature Other than 25°C 16:32 Example I 19:31 Part A: Calculate ∆S for the Transformation Under Constant Volume 20:34 Part B: Calculate ∆S for the Transformation Under Constant Pressure 25:04 Example II: Calculate ∆S fir the Transformation Under Isobaric Conditions 27:53 Example III 30:14 Part A: Calculate ∆S if 1 Mol of Aluminum is taken from 25°C to 255°C 31:14 Part B: If S°₂₉₈ = 28.4 J/mol-K, Calculate S° for Aluminum at 498 K 33:23 Example IV: Calculate Entropy Change of Vaporization for CCl₄ 34:19 Example V 35:41 Part A: Calculate ∆S of Transformation 37:36 Part B: Calculate ∆S of Transformation 39:10 Entropy Example Problems II 56m 44s Intro 0:00 Example I 0:09 Example I: Calculate ∆U 1:28 Example I: Calculate Q 3:29 Example I: Calculate Cp 4:54 Example I: Calculate ∆S 6:14 Example II 7:13 Example II: Calculate W 8:14 Example II: Calculate ∆U 8:56 Example II: Calculate Q 10:18 Example II: Calculate ∆H 11:00 Example II: Calculate ∆S 12:36 Example III 18:47 Example III: Calculate ∆H 19:38 Example III: Calculate Q 21:14 Example III: Calculate ∆U 21:44 Example III: Calculate W 23:59 Example III: Calculate ∆S 24:55 Example IV 27:57 Example IV: Diagram 29:32 Example IV: Calculate W 32:27 Example IV: Calculate ∆U 36:36 Example IV: Calculate Q 38:32 Example IV: Calculate ∆H 39:00 Example IV: Calculate ∆S 40:27 Example IV: Summary 43:41 Example V 48:25 Example V: Diagram 49:05 Example V: Calculate W 50:58 Example V: Calculate ∆U 53:29 Example V: Calculate Q 53:44 Example V: Calculate ∆H 54:34 Example V: Calculate ∆S 55:01 Entropy Example Problems III 57m 6s Intro 0:00 Example I: Isothermal Expansion 0:09 Example I: Calculate W 1:19 Example I: Calculate ∆U 1:48 Example I: Calculate Q 2:06 Example I: Calculate ∆H 2:26 Example I: Calculate ∆S 3:02 Example II: Adiabatic and Reversible Expansion 6:10 Example II: Calculate Q 6:48 Example II: Basic Equation for the Reversible Adiabatic Expansion of an Ideal Gas 8:12 Example II: Finding Volume 12:40 Example II: Finding Temperature 17:58 Example II: Calculate ∆U 19:53 Example II: Calculate W 20:59 Example II: Calculate ∆H 21:42 Example II: Calculate ∆S 23:42 Example III: Calculate the Entropy of Water Vapor 25:20 Example IV: Calculate the Molar ∆S for the Transformation 34:32 Example V 44:19 Part A: Calculate the Standard Entropy of Liquid Lead at 525°C 46:17 Part B: Calculate ∆H for the Transformation of Solid Lead from 25°C to Liquid Lead at 525°C 52:23 Section 6: Entropy and Probability Entropy & Probability I 54m 35s Intro 0:00 Entropy & Probability 0:11 Structural Model 3:05 Recall the Fundamental Equation of Thermodynamics 9:11 Two Independent Ways of Affecting the Entropy of a System 10:05 Boltzmann Definition 12:10 Omega 16:24 Definition of Omega 16:25 Energy Distribution 19:43 The Energy Distribution 19:44 In How Many Ways can N Particles be Distributed According to the Energy Distribution 23:05 Example I: In How Many Ways can the Following Distribution be Achieved 32:51 Example II: In How Many Ways can the Following Distribution be Achieved 33:51 Example III: In How Many Ways can the Following Distribution be Achieved 34:45 Example IV: In How Many Ways can the Following Distribution be Achieved 38:50 
Entropy & Probability, cont. 40:57 More on Distribution 40:58 Example I Summary 41:43 Example II Summary 42:12 Distribution that Maximizes Omega 42:26 If Omega is Large, then S is Large 44:22 Two Constraints for a System to Achieve the Highest Entropy Possible 47:07 What Happened When the Energy of a System is Increased? 49:00 Entropy & Probability II 35m 5s Intro 0:00 Volume Distribution 0:08 Distributing 2 Balls in 3 Spaces 1:43 Distributing 2 Balls in 4 Spaces 3:44 Distributing 3 Balls in 10 Spaces 5:30 Number of Ways to Distribute P Particles over N Spaces 6:05 When N is Much Larger than the Number of Particles P 7:56 Energy Distribution 25:04 Volume Distribution 25:58 Entropy, Total Entropy, & Total Omega Equations 27:34 Entropy, Total Entropy, & Total Omega Equations 27:35 Section 7: Spontaneity, Equilibrium, and the Fundamental Equations Spontaneity & Equilibrium I 28m 42s Intro 0:00 Reversible & Irreversible 0:24 Reversible vs. Irreversible 0:58 Defining Equation for Equilibrium 2:11 Defining Equation for Irreversibility (Spontaneity) 3:11 TdS ≥ dQ 5:15 Transformation in an Isolated System 11:22 Transformation in an Isolated System 11:29 Transformation at Constant Temperature 14:50 Transformation at Constant Temperature 14:51 Helmholtz Free Energy 17:26 Define: A = U - TS 17:27 Spontaneous Isothermal Process & Helmholtz Energy 20:20 Pressure-volume Work 22:02 Spontaneity & Equilibrium II 34m 38s Intro 0:00 Transformation under Constant Temperature & Pressure 0:08 Transformation under Constant Temperature & Pressure 0:36 Define: G = U + PV - TS 3:32 Gibbs Energy 5:14 What Does This Say? 6:44 Spontaneous Process & a Decrease in G 14:12 Computing ∆G 18:54 Summary of Conditions 21:32 Constraint & Condition for Spontaneity 21:36 Constraint & Condition for Equilibrium 24:54 A Few Words About the Word Spontaneous 26:24 Spontaneous Does Not Mean Fast 26:25 Putting Hydrogen & Oxygen Together in a Flask 26:59 Spontaneous Vs. 
Not Spontaneous 28:14 Thermodynamically Favorable 29:03 Example: Making a Process Thermodynamically Favorable 29:34 Driving Forces for Spontaneity 31:35 Equation: ∆G = ∆H - T∆S 31:36 Always Spontaneous Process 32:39 Never Spontaneous Process 33:06 A Process That is Endothermic Can Still be Spontaneous 34:00 The Fundamental Equations of Thermodynamics 30m 50s Intro 0:00 The Fundamental Equations of Thermodynamics 0:44 Mechanical Properties of a System 0:45 Fundamental Properties of a System 1:16 Composite Properties of a System 1:44 General Condition of Equilibrium 3:16 Composite Functions & Their Differentiations 6:11 dH = TdS + VdP 7:53 dA = -SdT - PdV 9:26 dG = -SdT + VdP 10:22 Summary of Equations 12:10 Equation #1 14:33 Equation #2 15:15 Equation #3 15:58 Equation #4 16:42 Maxwell's Relations 20:20 Maxwell's Relations 20:21 Isothermal Volume-Dependence of Entropy & Isothermal Pressure-Dependence of Entropy 26:21 The General Thermodynamic Equations of State 34m 6s Intro 0:00 The General Thermodynamic Equations of State 0:10 Equations of State for Liquids & Solids 0:52 More General Condition for Equilibrium 4:02 General Conditions: Equation that Relates P to Functions of T & V 6:20 The Second Fundamental Equation of Thermodynamics 11:10 Equation 1 17:34 Equation 2 21:58 Recall the General Expression for Cp - Cv 28:11 For the Joule-Thomson Coefficient 30:44 Joule-Thomson Inversion Temperature 32:12 Properties of the Helmholtz & Gibbs Energies 39m 18s Intro 0:00 Properties of the Helmholtz & Gibbs Energies 0:10 Equating the Differential Coefficients 1:34 An Increase in T; a Decrease in A 3:25 An Increase in V; a Decrease in A 6:04 We Do the Same Thing for G 8:33 Increase in T; Decrease in G 10:50 Increase in P; Decrease in G 11:36 Gibbs Energy of a Pure Substance at a Constant Temperature from 1 atm to any Other Pressure. 
14:12 If the Substance is a Liquid or a Solid, then Volume can be Treated as a Constant 18:57 For an Ideal Gas 22:18 Special Note 24:56 Temperature Dependence of Gibbs Energy 27:02 Temperature Dependence of Gibbs Energy #1 27:52 Temperature Dependence of Gibbs Energy #2 29:01 Temperature Dependence of Gibbs Energy #3 29:50 Temperature Dependence of Gibbs Energy #4 34:50 The Entropy of the Universe & the Surroundings 19m 40s Intro 0:00 Entropy of the Universe & the Surroundings 0:08 Equation: ∆G = ∆H - T∆S 0:20 Conditions of Constant Temperature & Pressure 1:14 Reversible Process 3:14 Spontaneous Process & the Entropy of the Universe 5:20 Tips for Remembering Everything 12:40 Verify Using Known Spontaneous Process 14:51 Section 8: Free Energy Example Problems Free Energy Example Problems I 54m 16s Intro 0:00 Example I 0:11 Example I: Deriving a Function for Entropy (S) 2:06 Example I: Deriving a Function for V 5:55 Example I: Deriving a Function for H 8:06 Example I: Deriving a Function for U 12:06 Example II 15:18 Example III 21:52 Example IV 26:12 Example IV: Part A 26:55 Example IV: Part B 28:30 Example IV: Part C 30:25 Example V 33:45 Example VI 40:46 Example VII 43:43 Example VII: Part A 44:46 Example VII: Part B 50:52 Example VII: Part C 51:56 Free Energy Example Problems II 31m 17s Intro 0:00 Example I 0:09 Example II 5:18 Example III 8:22 Example IV 12:32 Example V 17:14 Example VI 20:34 Example VI: Part A 21:04 Example VI: Part B 23:56 Example VI: Part C 27:56 Free Energy Example Problems III 45m Intro 0:00 Example I 0:10 Example II 15:03 Example III 21:47 Example IV 28:37 Example IV: Part A 29:33 Example IV: Part B 36:09 Example IV: Part C 40:34 Three Miscellaneous Example Problems 58m 5s Intro 0:00 Example I 0:41 Part A: Calculating ∆H 3:55 Part B: Calculating ∆S 15:13 Example II 24:39 Part A: Final Temperature of the System 26:25 Part B: Calculating ∆S 36:57 Example III 46:49 Section 9: Equation Review for Thermodynamics Looking Back Over Everything: All the Equations in One Place 25m 20s Intro 0:00 Work, Heat, and Energy 0:18 Definition of Work, Energy, Enthalpy, and Heat Capacities 0:23 Heat Capacities for an Ideal Gas 3:40 Path Property & State Property 3:56 Energy Differential 5:04 Enthalpy Differential 5:40 Joule's Law & Joule-Thomson Coefficient 6:23 Coefficient of Thermal Expansion & Coefficient of Compressibility 7:01 Enthalpy of a Substance at Any Other Temperature 7:29 Enthalpy of a Reaction at Any Other Temperature 8:01 Entropy 8:53 Definition of Entropy 8:54 Clausius Inequality 9:11 Entropy Changes in Isothermal Systems 9:44 The Fundamental Equation of Thermodynamics 10:12 Expressing Entropy Changes in Terms of Properties of the System 10:42 Entropy Changes in the Ideal Gas 11:22 Third Law Entropies 11:38 Entropy Changes in Chemical Reactions 14:02 Statistical Definition of Entropy 14:34 Omega for the Spatial & Energy Distribution 14:47 Spontaneity and Equilibrium 15:43 Helmholtz Energy & Gibbs Energy 15:44 Condition for Spontaneity & Equilibrium 16:24 Condition for Spontaneity with Respect to Entropy 17:58 The Fundamental Equations 18:30 Maxwell's Relations 19:04 The Thermodynamic Equations of State 20:07 Energy & Enthalpy Differentials 21:08 Joule's Law & Joule-Thomson Coefficient 21:59 Relationship Between Constant Pressure & Constant Volume Heat Capacities 23:14 One Final Equation - Just for Fun 24:04 Section 10: Quantum Mechanics Preliminaries Complex Numbers 34m 25s Intro 0:00 Complex Numbers 0:11 Representing Complex Numbers in the 2-Dimmensional Plane 0:56 
2:35 Subtraction of Complex Numbers 3:17 Multiplication of Complex Numbers 3:47 Division of Complex Numbers 6:04 r & θ 8:04 Euler's Formula 11:00 Polar Exponential Representation of the Complex Numbers 11:22 Example I 14:25 Example II 15:21 Example III 16:58 Example IV 18:35 Example V 20:40 Example VI 21:32 Example VII 25:22 Probability & Statistics 59m 57s Intro 0:00 Probability & Statistics 1:51 Normalization Condition 1:52 Define the Mean or Average of x 11:04 Example I: Calculate the Mean of x 14:57 Example II: Calculate the Second Moment of the Data in Example I 22:39 Define the Second Central Moment or Variance 25:26 Define the Second Central Moment or Variance 25:27 1st Term 32:16 2nd Term 32:40 3rd Term 34:07 Continuous Distributions 35:47 Continuous Distributions 35:48 Probability Density 39:30 Probability Density 39:31 Normalization Condition 46:51 Example III 50:13 Part A - Show that P(x) is Normalized 51:40 Part B - Calculate the Average Position of the Particle Along the Interval 54:31 Important Things to Remember 58:24 Schrӧdinger Equation & Operators 42m 5s Intro 0:00 Schrӧdinger Equation & Operators 0:16 Relation Between a Photon's Momentum & Its Wavelength 0:17 Louis de Broglie: Wavelength for Matter 0:39 Schrӧdinger Equation 1:19 Definition of Ψ(x) 3:31 Quantum Mechanics 5:02 Operators 7:51 Example I 10:10 Example II 11:53 Example III 14:24 Example IV 17:35 Example V 19:59 Example VI 22:39 Operators Can Be Linear or Non Linear 27:58 Operators Can Be Linear or Non Linear 28:34 Example VII 32:47 Example VIII 36:55 Example IX 39:29 Schrӧdinger Equation as an Eigenvalue Problem 30m 26s Intro 0:00 Schrӧdinger Equation as an Eigenvalue Problem 0:10 Operator: Multiplying the Original Function by Some Scalar 0:11 Operator, Eigenfunction, & Eigenvalue 4:42 Example: Eigenvalue Problem 8:00 Schrӧdinger Equation as an Eigenvalue Problem 9:24 Hamiltonian Operator 15:09 Quantum Mechanical Operators 16:46 Kinetic Energy Operator 19:16 Potential Energy Operator 20:02 Total Energy Operator 21:12 Classical Point of View 21:48 Linear Momentum Operator 24:02 Example I 26:01 The Plausibility of the Schrӧdinger Equation 21m 34s Intro 0:00 The Plausibility of the Schrӧdinger Equation 1:16 The Plausibility of the Schrӧdinger Equation, Part 1 1:17 The Plausibility of the Schrӧdinger Equation, Part 2 8:24 The Plausibility of the Schrӧdinger Equation, Part 3 13:45 Section 11: The Particle in a Box The Particle in a Box Part I 56m 22s Intro 0:00 Free Particle in a Box 0:28 Definition of a Free Particle in a Box 0:29 Amplitude of the Matter Wave 6:22 Intensity of the Wave 6:53 Probability Density 9:39 Probability that the Particle is Located Between x & dx 10:54 Probability that the Particle will be Found Between o & a 12:35 Wave Function & the Particle 14:59 Boundary Conditions 19:22 What Happened When There is No Constraint on the Particle 27:54 Diagrams 34:12 More on Probability Density 40:53 The Correspondence Principle 46:45 The Correspondence Principle 46:46 Normalizing the Wave Function 47:46 Normalizing the Wave Function 47:47 Normalized Wave Function & Normalization Constant 52:24 The Particle in a Box Part II 45m 24s Intro 0:00 Free Particle in a Box 0:08 Free Particle in a 1-dimensional Box 0:09 For a Particle in a Box 3:57 Calculating Average Values & Standard Deviations 5:42 Average Value for the Position of a Particle 6:32 Standard Deviations for the Position of a Particle 10:51 Recall: Energy & Momentum are Represented by Operators 13:33 Recall: Schrӧdinger Equation in Operator Form 
15:57 Average Value of a Physical Quantity that is Associated with an Operator 18:16 Average Momentum of a Free Particle in a Box 20:48 The Uncertainty Principle 24:42 Finding the Standard Deviation of the Momentum 25:08 Expression for the Uncertainty Principle 35:02 Summary of the Uncertainty Principle 41:28 The Particle in a Box Part III 48m 43s Intro 0:00 2-Dimension 0:12 Dimension 2 0:31 Boundary Conditions 1:52 Partial Derivatives 4:27 Example I 6:08 The Particle in a Box, cont. 11:28 Operator Notation 12:04 Symbol for the Laplacian 13:50 The Equation Becomes… 14:30 Boundary Conditions 14:54 Separation of Variables 15:33 Solution to the 1-dimensional Case 16:31 Normalization Constant 22:32 3-Dimension 28:30 Particle in a 3-dimensional Box 28:31 In Del Notation 32:22 The Solutions 34:51 Expressing the State of the System for a Particle in a 3D Box 39:10 Energy Level & Degeneracy 43:35 Section 12: Postulates and Principles of Quantum Mechanics The Postulates & Principles of Quantum Mechanics, Part I 46m 18s Intro 0:00 Postulate I 0:31 Probability That The Particle Will Be Found in a Differential Volume Element 0:32 Example I: Normalize This Wave Function 11:30 Postulate II 18:20 Postulate II 18:21 Quantum Mechanical Operators: Position 20:48 Quantum Mechanical Operators: Kinetic Energy 21:57 Quantum Mechanical Operators: Potential Energy 22:42 Quantum Mechanical Operators: Total Energy 22:57 Quantum Mechanical Operators: Momentum 23:22 Quantum Mechanical Operators: Angular Momentum 23:48 More On The Kinetic Energy Operator 24:48 Angular Momentum 28:08 Angular Momentum Overview 28:09 Angular Momentum Operator in Quantum Mechanic 31:34 The Classical Mechanical Observable 32:56 Quantum Mechanical Operator 37:01 Getting the Quantum Mechanical Operator from the Classical Mechanical Observable 40:16 Postulate II, cont. 
43:40 Quantum Mechanical Operators are Both Linear & Hermetical 43:41 The Postulates & Principles of Quantum Mechanics, Part II 39m 28s Intro 0:00 Postulate III 0:09 Postulate III: Part I 0:10 Postulate III: Part II 5:56 Postulate III: Part III 12:43 Postulate III: Part IV 18:28 Postulate IV 23:57 Postulate IV 23:58 Postulate V 27:02 Postulate V 27:03 Average Value 36:38 Average Value 36:39 The Postulates & Principles of Quantum Mechanics, Part III 35m 32s Intro 0:00 The Postulates & Principles of Quantum Mechanics, Part III 0:10 Equations: Linear & Hermitian 0:11 Introduction to Hermitian Property 3:36 Eigenfunctions are Orthogonal 9:55 The Sequence of Wave Functions for the Particle in a Box forms an Orthonormal Set 14:34 Definition of Orthogonality 16:42 Definition of Hermiticity 17:26 Hermiticity: The Left Integral 23:04 Hermiticity: The Right Integral 28:47 Hermiticity: Summary 34:06 The Postulates & Principles of Quantum Mechanics, Part IV 29m 55s Intro 0:00 The Postulates & Principles of Quantum Mechanics, Part IV 0:09 Operators can be Applied Sequentially 0:10 Sample Calculation 1 2:41 Sample Calculation 2 5:18 Commutator of Two Operators 8:16 The Uncertainty Principle 19:01 In the Case of Linear Momentum and Position Operator 23:14 When the Commutator of Two Operators Equals to Zero 26:31 Section 13: Postulates and Principles Example Problems, Including Particle in a Box Example Problems I 54m 25s Intro 0:00 Example I: Three Dimensional Box & Eigenfunction of The Laplacian Operator 0:37 Example II: Positions of a Particle in a 1-dimensional Box 15:46 Example III: Transition State & Frequency 29:29 Example IV: Finding a Particle in a 1-dimensional Box 35:03 Example V: Degeneracy & Energy Levels of a Particle in a Box 44:59 Example Problems II 46m 58s Intro 0:00 Review 0:25 Wave Function 0:26 Normalization Condition 2:28 Observable in Classical Mechanics & Linear/Hermitian Operator in Quantum Mechanics 3:36 Hermitian 6:11 Eigenfunctions & Eigenvalue 8:20 Normalized Wave Functions 12:00 Average Value 13:42 If Ψ is Written as a Linear Combination 15:44 Commutator 16:45 Example I: Normalize The Wave Function 19:18 Example II: Probability of Finding of a Particle 22:27 Example III: Orthogonal 26:00 Example IV: Average Value of the Kinetic Energy Operator 30:22 Example V: Evaluate These Commutators 39:02 Example Problems III 44m 11s Intro 0:00 Example I: Good Candidate for a Wave Function 0:08 Example II: Variance of the Energy 7:00 Example III: Evaluate the Angular Momentum Operators 15:00 Example IV: Real Eigenvalues Imposes the Hermitian Property on Operators 28:44 Example V: A Demonstration of Why the Eigenfunctions of Hermitian Operators are Orthogonal 35:33 Section 14: The Harmonic Oscillator The Harmonic Oscillator I 35m 33s Intro 0:00 The Harmonic Oscillator 0:10 Harmonic Motion 0:11 Classical Harmonic Oscillator 4:38 Hooke's Law 8:18 Classical Harmonic Oscillator, cont. 
10:33 General Solution for the Differential Equation 15:16 Initial Position & Velocity 16:05 Period & Amplitude 20:42 Potential Energy of the Harmonic Oscillator 23:20 Kinetic Energy of the Harmonic Oscillator 26:37 Total Energy of the Harmonic Oscillator 27:23 Conservative System 34:37 The Harmonic Oscillator II 43m 4s Intro 0:00 The Harmonic Oscillator II 0:08 Diatomic Molecule 0:10 Notion of Reduced Mass 5:27 Harmonic Oscillator Potential & The Intermolecular Potential of a Vibrating Molecule 7:33 The Schrӧdinger Equation for the 1-dimensional Quantum Mechanic Oscillator 14:14 Quantized Values for the Energy Level 15:46 Ground State & the Zero-Point Energy 21:50 Vibrational Energy Levels 25:18 Transition from One Energy Level to the Next 26:42 Fundamental Vibrational Frequency for Diatomic Molecule 34:57 Example: Calculate k 38:01 The Harmonic Oscillator III 26m 30s Intro 0:00 The Harmonic Oscillator III 0:09 The Wave Functions Corresponding to the Energies 0:10 Normalization Constant 2:34 Hermite Polynomials 3:22 First Few Hermite Polynomials 4:56 First Few Wave-Functions 6:37 Plotting the Probability Density of the Wave-Functions 8:37 Probability Density for Large Values of r 14:24 Recall: Odd Function & Even Function 19:05 More on the Hermite Polynomials 20:07 Recall: If f(x) is Odd 20:36 Average Value of x 22:31 Average Value of Momentum 23:56 Section 15: The Rigid Rotator The Rigid Rotator I 41m 10s Intro 0:00 Possible Confusion from the Previous Discussion 0:07 Possible Confusion from the Previous Discussion 0:08 Rotation of a Single Mass Around a Fixed Center 8:17 Rotation of a Single Mass Around a Fixed Center 8:18 Angular Velocity 12:07 Rotational Inertia 13:24 Rotational Frequency 15:24 Kinetic Energy for a Linear System 16:38 Kinetic Energy for a Rotational System 17:42 Rotating Diatomic Molecule 19:40 Rotating Diatomic Molecule: Part 1 19:41 Rotating Diatomic Molecule: Part 2 24:56 Rotating Diatomic Molecule: Part 3 30:04 Hamiltonian of the Rigid Rotor 36:48 Hamiltonian of the Rigid Rotor 36:49 The Rigid Rotator II 30m 32s Intro 0:00 The Rigid Rotator II 0:08 Cartesian Coordinates 0:09 Spherical Coordinates 1:55 r 6:15 θ 6:28 φ 7:00 Moving a Distance 'r' 8:17 Moving a Distance 'r' in the Spherical Coordinates 11:49 For a Rigid Rotator, r is Constant 13:57 Hamiltonian Operator 15:09 Square of the Angular Momentum Operator 17:34 Orientation of the Rotation in Space 19:44 Wave Functions for the Rigid Rotator 20:40 The Schrӧdinger Equation for the Quantum Mechanic Rigid Rotator 21:24 Energy Levels for the Rigid Rotator 26:58 The Rigid Rotator III 35m 19s Intro 0:00 The Rigid Rotator III 0:11 When a Rotator is Subjected to Electromagnetic Radiation 1:24 Selection Rule 2:13 Frequencies at Which Absorption Transitions Occur 6:24 Energy Absorption & Transition 10:54 Energy of the Individual Levels Overview 20:58 Energy of the Individual Levels: Diagram 23:45 Frequency Required to Go from J to J + 1 25:53 Using Separation Between Lines on the Spectrum to Calculate Bond Length 28:02 Example I: Calculating Rotational Inertia & Bond Length 29:18 Example I: Calculating Rotational Inertia 29:19 Example I: Calculating Bond Length 32:56 Section 16: Oscillator and Rotator Example Problems Example Problems I 33m 48s Intro 0:00 Equations Review 0:11 Energy of the Harmonic Oscillator 0:12 Selection Rule 3:02 3:27 Harmonic Oscillator Wave Functions 5:52 Rigid Rotator 7:26 Selection Rule for Rigid Rotator 9:15 Frequency of Absorption 9:35 Wave Numbers 10:58 Example I: Calculate the Reduced Mass 
of the Hydrogen Atom 11:44 Example II: Calculate the Fundamental Vibration Frequency & the Zero-Point Energy of This Molecule 13:37 Example III: Show That the Product of Two Even Functions is even 19:35 Example IV: Harmonic Oscillator 24:56 Example Problems II 46m 43s Intro 0:00 Example I: Harmonic Oscillator 0:12 Example II: Harmonic Oscillator 23:26 Example III: Calculate the RMS Displacement of the Molecules 38:12 Section 17: The Hydrogen Atom The Hydrogen Atom I 40m Intro 0:00 The Hydrogen Atom I 1:31 Review of the Rigid Rotator 1:32 Hydrogen Atom & the Coulomb Potential 2:50 Using the Spherical Coordinates 6:33 Applying This Last Expression to Equation 1 10:19 13:26 Angular Equation 15:56 Solution for F(φ) 19:32 Determine The Normalization Constant 20:33 Differential Equation for T(a) 24:44 Legendre Equation 27:20 Legendre Polynomials 31:20 The Legendre Polynomials are Mutually Orthogonal 35:40 Limits 37:17 Coefficients 38:28 The Hydrogen Atom II 35m 58s Intro 0:00 Associated Legendre Functions 0:07 Associated Legendre Functions 0:08 First Few Associated Legendre Functions 6:39 s, p, & d Orbital 13:24 The Normalization Condition 15:44 Spherical Harmonics 20:03 Equations We Have Found 20:04 Wave Functions for the Angular Component & Rigid Rotator 24:36 Spherical Harmonics Examples 25:40 Angular Momentum 30:09 Angular Momentum 30:10 Square of the Angular Momentum 35:38 Energies of the Rigid Rotator 38:21 The Hydrogen Atom III 36m 18s Intro 0:00 The Hydrogen Atom III 0:34 Angular Momentum is a Vector Quantity 0:35 The Operators Corresponding to the Three Components of Angular Momentum Operator: In Cartesian Coordinates 1:30 The Operators Corresponding to the Three Components of Angular Momentum Operator: In Spherical Coordinates 3:27 Z Component of the Angular Momentum Operator & the Spherical Harmonic 5:28 Magnitude of the Angular Momentum Vector 20:10 Classical Interpretation of Angular Momentum 25:22 Projection of the Angular Momentum Vector onto the xy-plane 33:24 The Hydrogen Atom IV 33m 55s Intro 0:00 The Hydrogen Atom IV 0:09 The Equation to Find R( r ) 0:10 Relation Between n & l 3:50 The Solutions for the Radial Functions 5:08 Associated Laguerre Polynomials 7:58 1st Few Associated Laguerre Polynomials 8:55 Complete Wave Function for the Atomic Orbitals of the Hydrogen Atom 12:24 The Normalization Condition 15:06 In Cartesian Coordinates 18:10 Working in Polar Coordinates 20:48 Principal Quantum Number 21:58 Angular Momentum Quantum Number 22:35 Magnetic Quantum Number 25:55 Zeeman Effect 30:45 The Hydrogen Atom V: Where We Are 51m 53s Intro 0:00 The Hydrogen Atom V: Where We Are 0:13 Review 0:14 Let's Write Out ψ₂₁₁ 7:32 Angular Momentum of the Electron 14:52 Representation of the Wave Function 19:36 28:02 Example: 1s Orbital 28:34 33:46 1s Orbital: Plotting Probability Densities vs. r 35:47 2s Orbital: Plotting Probability Densities vs. r 37:46 3s Orbital: Plotting Probability Densities vs. r 38:49 4s Orbital: Plotting Probability Densities vs. r 39:34 2p Orbital: Plotting Probability Densities vs. r 40:12 3p Orbital: Plotting Probability Densities vs. r 41:02 4p Orbital: Plotting Probability Densities vs. r 41:51 3d Orbital: Plotting Probability Densities vs. r 43:18 4d Orbital: Plotting Probability Densities vs. 
r 43:48 Example I: Probability of Finding an Electron in the 2s Orbital of the Hydrogen 45:40 The Hydrogen Atom VI 51m 53s Intro 0:00 The Hydrogen Atom VI 0:07 Last Lesson Review 0:08 Spherical Component 1:09 Normalization Condition 2:02 Complete 1s Orbital Wave Function 4:08 1s Orbital Wave Function 4:09 Normalization Condition 6:28 Spherically Symmetric 16:00 Average Value 17:52 Example I: Calculate the Region of Highest Probability for Finding the Electron 21:19 2s Orbital Wave Function 25:32 2s Orbital Wave Function 25:33 Average Value 28:56 General Formula 32:24 The Hydrogen Atom VII 34m 29s Intro 0:00 The Hydrogen Atom VII 0:12 p Orbitals 1:30 Not Spherically Symmetric 5:10 Recall That the Spherical Harmonics are Eigenfunctions of the Hamiltonian Operator 6:50 Any Linear Combination of These Orbitals Also Has The Same Energy 9:16 Functions of Real Variables 15:53 Solving for Px 16:50 Real Spherical Harmonics 21:56 Number of Nodes 32:56 Section 18: Hydrogen Atom Example Problems Hydrogen Atom Example Problems I 43m 49s Intro 0:00 Example I: Angular Momentum & Spherical Harmonics 0:20 Example II: Pair-wise Orthogonal Legendre Polynomials 16:40 Example III: General Normalization Condition for the Legendre Polynomials 25:06 Example IV: Associated Legendre Functions 32:13 The Hydrogen Atom Example Problems II 1h 1m 57s Intro 0:00 Example I: Normalization & Pair-wise Orthogonal 0:13 Part 1: Normalized 0:43 Part 2: Pair-wise Orthogonal 16:53 Example II: Show Explicitly That the Following Statement is True for Any Integer n 27:10 Example III: Spherical Harmonics 29:26 Angular Momentum Cones 56:37 Angular Momentum Cones 56:38 Physical Interpretation of Orbital Angular Momentum in Quantum mechanics 1:00:16 The Hydrogen Atom Example Problems III 48m 33s Intro 0:00 Example I: Show That ψ₂₁₁ is Normalized 0:07 Example II: Show That ψ₂₁₁ is Orthogonal to ψ₃₁₀ 11:48 Example III: Probability That a 1s Electron Will Be Found Within 1 Bohr Radius of The Nucleus 18:35 Example IV: Radius of a Sphere 26:06 Example V: Calculate <r> for the 2s Orbital of the Hydrogen-like Atom 36:33 The Hydrogen Atom Example Problems IV 48m 33s Intro 0:00 Example I: Probability Density vs. 
Radius Plot 0:11 Example II: Hydrogen Atom & The Coulombic Potential 14:16 Example III: Find a Relation Among <K>, <V>, & <E> 25:47 Example IV: Quantum Mechanical Virial Theorem 48:32 Example V: Find the Variance for the 2s Orbital 54:13 The Hydrogen Atom Example Problems V 48m 33s Intro 0:00 Example I: Derive a Formula for the Degeneracy of a Given Level n 0:11 Example II: Using Linear Combinations to Represent the Spherical Harmonics as Functions of the Real Variables θ & φ 8:30 Example III: Using Linear Combinations to Represent the Spherical Harmonics as Functions of the Real Variables θ & φ 23:01 Example IV: Orbital Functions 31:51 Section 19: Spin Quantum Number and Atomic Term Symbols Spin Quantum Number: Term Symbols I 59m 18s Intro 0:00 Quantum Numbers Specify an Orbital 0:24 n 1:10 l 1:20 m 1:35 4th Quantum Number: s 2:02 Spin Orbitals 7:03 Spin Orbitals 7:04 Multi-electron Atoms 11:08 Term Symbols 18:08 Russell-Saunders Coupling & The Atomic Term Symbol 18:09 Example: Configuration for C 27:50 Configuration for C: 1s²2s²2p² 27:51 Drawing Every Possible Arrangement 31:15 Term Symbols 45:24 Microstate 50:54 Spin Quantum Number: Term Symbols II 34m 54s Intro 0:00 Microstates 0:25 We Started With 21 Possible Microstates 0:26 ³P State 2:05 Microstates in ³P Level 5:10 ¹D State 13:16 ³P State 16:10 ²P₂ State 17:34 ³P₁ State 18:34 ³P₀ State 19:12 9 Microstates in ³P are Subdivided 19:40 ¹S State 21:44 Quicker Way to Find the Different Values of J for a Given Basic Term Symbol 22:22 Ground State 26:27 Hund's Empirical Rules for Specifying the Term Symbol for the Ground Electronic State 27:29 Hund's Empirical Rules: 1 28:24 Hund's Empirical Rules: 2 29:22 Hund's Empirical Rules: 3 - Part A 30:22 Hund's Empirical Rules: 3 - Part B 31:18 Example: 1s²2s²2p² 31:54 Spin Quantum Number: Term Symbols III 38m 3s Intro 0:00 Spin Quantum Number: Term Symbols III 0:14 Deriving the Term Symbols for the p² Configuration 0:15 Table: MS vs. ML 3:57 ¹D State 16:21 ³P State 21:13 ¹S State 24:48 J Value 25:32 Degeneracy of the Level 27:28 When Given r Electrons to Assign to n Equivalent Spin Orbitals 30:18 p² Configuration 32:51 Complementary Configurations 35:12 Term Symbols & Atomic Spectra 57m 49s Intro 0:00 Lyman Series 0:09 Spectroscopic Term Symbols 0:10 Lyman Series 3:04 Hydrogen Levels 8:21 Hydrogen Levels 8:22 Term Symbols & Atomic Spectra 14:17 Spin-Orbit Coupling 14:18 Selection Rules for Atomic Spectra 21:31 Selection Rules for Possible Transitions 23:56 Wave Numbers for The Transitions 28:04 Example I: Calculate the Frequencies of the Allowed Transitions from (4d) ²D →(2p) ²P 32:23 Helium Levels 49:50 Energy Levels for Helium 49:51 Transitions & Spin Multiplicity 52:27 Transitions & Spin Multiplicity 52:28 Section 20: Term Symbols Example Problems Example Problems I 1h 1m 20s Intro 0:00 Example I: What are the Term Symbols for the np¹ Configuration? 0:10 Example II: What are the Term Symbols for the np² Configuration? 20:38 Example III: What are the Term Symbols for the np³ Configuration? 
40:46 Example Problems II 56m 34s Intro 0:00 Example I: Find the Term Symbols for the nd² Configuration 0:11 Example II: Find the Term Symbols for the 1s¹2p¹ Configuration 27:02 Example III: Calculate the Separation Between the Doublets in the Lyman Series for Atomic Hydrogen 41:41 Example IV: Calculate the Frequencies of the Lines for the (4d) ²D → (3p) ²P Transition 48:53 Section 21: Equation Review for Quantum Mechanics Quantum Mechanics: All the Equations in One Place 18m 24s Intro 0:00 Quantum Mechanics Equations 0:37 De Broglie Relation 0:38 Statistical Relations 1:00 The Schrӧdinger Equation 1:50 The Particle in a 1-Dimensional Box of Length a 3:09 The Particle in a 2-Dimensional Box of Area a x b 3:48 The Particle in a 3-Dimensional Box of Area a x b x c 4:22 The Schrӧdinger Equation Postulates 4:51 The Normalization Condition 5:40 The Probability Density 6:51 Linear 7:47 Hermitian 8:31 Eigenvalues & Eigenfunctions 8:55 The Average Value 9:29 Eigenfunctions of Quantum Mechanics Operators are Orthogonal 10:53 Commutator of Two Operators 10:56 The Uncertainty Principle 11:41 The Harmonic Oscillator 13:18 The Rigid Rotator 13:52 Energy of the Hydrogen Atom 14:30 Wavefunctions, Radial Component, and Associated Laguerre Polynomial 14:44 Angular Component or Spherical Harmonic 15:16 Associated Legendre Function 15:31 Principal Quantum Number 15:43 Angular Momentum Quantum Number 15:50 Magnetic Quantum Number 16:21 z-component of the Angular Momentum of the Electron 16:53 Atomic Spectroscopy: Term Symbols 17:14 Atomic Spectroscopy: Selection Rules 18:03 Section 22: Molecular Spectroscopy Spectroscopic Overview: Which Equation Do I Use & Why 50m 2s Intro 0:00 Spectroscopic Overview: Which Equation Do I Use & Why 1:02 Lesson Overview 1:03 Rotational & Vibrational Spectroscopy 4:01 Frequency of Absorption/Emission 6:04 Wavenumbers in Spectroscopy 8:10 Starting State vs. Excited State 10:10 Total Energy of a Molecule (Leaving out the Electronic Energy) 14:02 Energy of Rotation: Rigid Rotor 15:55 Energy of Vibration: Harmonic Oscillator 19:08 Equation of the Spectral Lines 23:22 Harmonic Oscillator-Rigid Rotor Approximation (Making Corrections) 28:37 Harmonic Oscillator-Rigid Rotor Approximation (Making Corrections) 28:38 Vibration-Rotation Interaction 33:46 Centrifugal Distortion 36:27 Anharmonicity 38:28 Correcting for All Three Simultaneously 41:03 Spectroscopic Parameters 44:26 Summary 47:32 Harmonic Oscillator-Rigid Rotor Approximation 47:33 Vibration-Rotation Interaction 48:14 Centrifugal Distortion 48:20 Anharmonicity 48:28 Correcting for All Three Simultaneously 48:44 Vibration-Rotation 59m 47s Intro 0:00 Vibration-Rotation 0:37 What is Molecular Spectroscopy? 
0:38 Microwave, Infrared Radiation, Visible & Ultraviolet 1:53 Equation for the Frequency of the Absorbed Radiation 4:54 Wavenumbers 6:15 Diatomic Molecules: Energy of the Harmonic Oscillator 8:32 Selection Rules for Vibrational Transitions 10:35 Energy of the Rigid Rotator 16:29 Angular Momentum of the Rotator 21:38 Rotational Term F(J) 26:30 Selection Rules for Rotational Transition 29:30 Vibration Level & Rotational States 33:20 Selection Rules for Vibration-Rotation 37:42 Frequency of Absorption 39:32 Diagram: Energy Transition 45:55 Vibration-Rotation Spectrum: HCl 51:27 Vibration-Rotation Spectrum: Carbon Monoxide 54:30 Vibration-Rotation Interaction 46m 22s Intro 0:00 Vibration-Rotation Interaction 0:13 Vibration-Rotation Spectrum: HCl 0:14 Bond Length & Vibrational State 4:23 Vibration Rotation Interaction 10:18 Case 1 12:06 Case 2 17:17 Example I: HCl Vibration-Rotation Spectrum 22:58 Rotational Constant for the 0 & 1 Vibrational State 26:30 Equilibrium Bond Length for the 1 Vibrational State 39:42 Equilibrium Bond Length for the 0 Vibrational State 42:13 Bₑ & αₑ 44:54 The Non-Rigid Rotator 29m 24s Intro 0:00 The Non-Rigid Rotator 0:09 Pure Rotational Spectrum 0:54 The Selection Rules for Rotation 3:09 Spacing in the Spectrum 5:04 Centrifugal Distortion Constant 9:00 Fundamental Vibration Frequency 11:46 Observed Frequencies of Absorption 14:14 Difference between the Rigid Rotator & the Adjusted Rigid Rotator 16:51 21:31 Observed Frequencies of Absorption 26:26 The Anharmonic Oscillator 30m 53s Intro 0:00 The Anharmonic Oscillator 0:09 Vibration-Rotation Interaction & Centrifugal Distortion 0:10 Making Corrections to the Harmonic Oscillator 4:50 Selection Rule for the Harmonic Oscillator 7:50 Overtones 8:40 True Oscillator 11:46 Harmonic Oscillator Energies 13:16 Anharmonic Oscillator Energies 13:33 Observed Frequencies of the Overtones 15:09 True Potential 17:22 HCl Vibrational Frequencies: Fundamental & First Few Overtones 21:10 Example I: Vibrational States & Overtones of the Vibrational Spectrum 22:42 Example I: Part A - First 4 Vibrational States 23:44 Example I: Part B - Fundamental & First 3 Overtones 25:31 Important Equations 27:45 Energy of the Q State 29:14 The Difference in Energy between 2 Successive States 29:23 Difference in Energy between 2 Spectral Lines 29:40 Electronic Transitions 1h 1m 33s Intro 0:00 Electronic Transitions 0:16 Electronic State & Transition 0:17 Total Energy of the Diatomic Molecule 3:34 Vibronic Transitions 4:30 Selection Rule for Vibronic Transitions 9:11 More on Vibronic Transitions 10:08 Frequencies in the Spectrum 16:46 Difference of the Minima of the 2 Potential Curves 24:48 Anharmonic Zero-point Vibrational Energies of the 2 States 26:24 Frequency of the 0 → 0 Vibronic Transition 27:54 Making the Equation More Compact 29:34 Spectroscopic Parameters 32:11 Franck-Condon Principle 34:32 Example I: Find the Values of the Spectroscopic Parameters for the Upper Excited State 47:27 Table of Electronic States and Parameters 56:41 Section 23: Molecular Spectroscopy Example Problems Example Problems I 33m 47s Intro 0:00 Example I: Calculate the Bond Length 0:10 Example II: Calculate the Rotational Constant 7:39 Example III: Calculate the Number of Rotations 10:54 Example IV: What is the Force Constant & Period of Vibration? 
16:31 Example V: Part A - Calculate the Fundamental Vibration Frequency 21:42 Example V: Part B - Calculate the Energies of the First Three Vibrational Levels 24:12 Example VI: Calculate the Frequencies of the First 2 Lines of the R & P Branches of the Vib-Rot Spectrum of HBr 26:28 Example Problems II 1h 1m 5s Intro 0:00 Example I: Calculate the Frequencies of the Transitions 0:09 Example II: Specify Which Transitions are Allowed & Calculate the Frequencies of These Transitions 22:07 Example III: Calculate the Vibrational State & Equilibrium Bond Length 34:31 Example IV: Frequencies of the Overtones 49:28 Example V: Vib-Rot Interaction, Centrifugal Distortion, & Anharmonicity 54:47 Example Problems III 33m 31s Intro 0:00 Example I: Part A - Derive an Expression for ∆G( r ) 0:10 Example I: Part B - Maximum Vibrational Quantum Number 6:10 Example II: Part A - Derive an Expression for the Dissociation Energy of the Molecule 8:29 Example II: Part B - Equation for ∆G( r ) 14:00 Example III: How Many Vibrational States are There for Br₂ before the Molecule Dissociates 18:16 Example IV: Find the Difference between the Two Minima of the Potential Energy Curves 20:57 Example V: Rotational Spectrum 30:51 Section 24: Statistical Thermodynamics Statistical Thermodynamics: The Big Picture 1h 1m 15s Intro 0:00 Statistical Thermodynamics: The Big Picture 0:10 Our Big Picture Goal 0:11 Partition Function (Q) 2:42 The Molecular Partition Function (q) 4:00 Consider a System of N Particles 6:54 Ensemble 13:22 Energy Distribution Table 15:36 Probability of Finding a System with Energy 16:51 The Partition Function 21:10 Microstate 28:10 Entropy of the Ensemble 30:34 Entropy of the System 31:48 Expressing the Thermodynamic Functions in Terms of The Partition Function 39:21 The Partition Function 39:22 Pi & U 41:20 Entropy of the System 44:14 Helmholtz Energy 48:15 Pressure of the System 49:32 Enthalpy of the System 51:46 Gibbs Free Energy 52:56 Heat Capacity 54:30 Expressing Q in Terms of the Molecular Partition Function (q) 59:31 Indistinguishable Particles 1:02:16 N is the Number of Particles in the System 1:03:27 The Molecular Partition Function 1:05:06 Quantum States & Degeneracy 1:07:46 Thermo Property in Terms of ln Q 1:10:09 Example: Thermo Property in Terms of ln Q 1:13:23 Statistical Thermodynamics: The Various Partition Functions I 47m 23s Intro 0:00 Lesson Overview 0:19 Monatomic Ideal Gases 6:40 Monatomic Ideal Gases Overview 6:42 Finding the Parition Function of Translation 8:17 Finding the Parition Function of Electronics 13:29 Example: Na 17:42 Example: F 23:12 Energy Difference between the Ground State & the 1st Excited State 29:27 The Various Partition Functions for Monatomic Ideal Gases 32:20 Finding P 43:16 Going Back to U = (3/2) RT 46:20 Statistical Thermodynamics: The Various Partition Functions II 54m 9s Intro 0:00 Diatomic Gases 0:16 Diatomic Gases 0:17 Zero-Energy Mark for Rotation 2:26 Zero-Energy Mark for Vibration 3:21 Zero-Energy Mark for Electronic 5:54 Vibration Partition Function 9:48 When Temperature is Very Low 14:00 When Temperature is Very High 15:22 Vibrational Component 18:48 Fraction of Molecules in the r Vibration State 21:00 Example: Fraction of Molecules in the r Vib. 
State 23:29 Rotation Partition Function 26:06 Heteronuclear & Homonuclear Diatomics 33:13 Energy & Heat Capacity 36:01 Fraction of Molecules in the J Rotational Level 39:20 Example: Fraction of Molecules in the J Rotational Level 40:32 Finding the Most Populated Level 44:07 Putting It All Together 46:06 Putting It All Together 46:07 Energy of Translation 51:51 Energy of Rotation 52:19 Energy of Vibration 52:42 Electronic Energy 53:35 Section 25: Statistical Thermodynamics Example Problems Example Problems I 48m 32s Intro 0:00 Example I: Calculate the Fraction of Potassium Atoms in the First Excited Electronic State 0:10 Example II: Show That Each Translational Degree of Freedom Contributes R/2 to the Molar Heat Capacity 14:46 Example III: Calculate the Dissociation Energy 21:23 Example IV: Calculate the Vibrational Contribution to the Molar Heat Capacity of Oxygen Gas at 500 K 25:46 Example V: Upper & Lower Quantum State 32:55 Example VI: Calculate the Relative Populations of the J=2 and J=1 Rotational States of the CO Molecule at 25°C 42:21 Example Problems II 57m 30s Intro 0:00 Example I: Make a Plot of the Fraction of CO Molecules in Various Rotational Levels 0:10 Example II: Calculate the Ratio of the Translational Partition Function for Cl₂ and Br₂ at Equal Volume & Temperature 8:05 Example III: Vibrational Degree of Freedom & Vibrational Molar Heat Capacity 11:59 Example IV: Calculate the Characteristic Vibrational & Rotational Temperatures for Each DOF 45:03

2 answers. Last reply by: Kimberly, Tue Jan 22, 2019 6:08 PM
Post by Kimberly on January 19, 2019: Hi Professor Hovasapian, thank you for the wonderful lectures. I have a question regarding topics in quantum mechanics. I've skimmed through the list, but I don't think you covered this topic in your videos. I want to ask how we know whether or not the wave character of an object is meaningful. In an example in class, we were asked to compare the wavelength of a proton that travels at a speed of 2.50*10^3 m/s with the size of a hydrogen atom (50 pm) and decide whether or not the wave character of the object is meaningful. Through the de Broglie equation, the wavelength of the proton is calculated to be 1.58*10^-10 m, and he concluded that the wave character is meaningful. I still don't get why he's able to conclude this with the given information. Please explain this to me. Thank you so much, Professor Hovasapian. I really appreciate it. (A short numerical check of this comparison is sketched below.)

### Example Problems II

Lecture Slides are screen-captured images of important points in the lecture. Students can download and print out these lecture slide images to do practice problems as well as take notes while watching the lecture.
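As an editorial aside (not part of the original page or the instructor's reply), here is a minimal Python sketch of the de Broglie comparison raised in the comment above; the speed and atom size are the values quoted in the comment, and the comparison rests on the usual rule of thumb that wave character matters when the wavelength is comparable to or larger than the relevant length scale:

```python
# Values quoted in the student's comment; constants are standard approximate values.
h = 6.626e-34        # Planck constant, J*s
m_p = 1.673e-27      # proton mass, kg
v = 2.50e3           # proton speed, m/s (from the comment)
atom_size = 50e-12   # hydrogen-atom size used for comparison, m (50 pm)

wavelength = h / (m_p * v)   # de Broglie relation: lambda = h / (m * v)
print(f"de Broglie wavelength: {wavelength:.2e} m")              # ~1.58e-10 m
print(f"wavelength / atom size: {wavelength / atom_size:.1f}")   # ~3, i.e. larger than the atom
```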
• Intro 0:00 • Review 0:25 • Wave Function • Normalization Condition • Observable in Classical Mechanics & Linear/Hermitian Operator in Quantum Mechanics • Hermitian • Eigenfunctions & Eigenvalue • Normalized Wave Functions • Average Value • If Ψ is Written as a Linear Combination • Commutator • Example I: Normalize The Wave Function 19:18 • Example II: Probability of Finding of a Particle 22:27 • Example III: Orthogonal 26:00 • Example IV: Average Value of the Kinetic Energy Operator 30:22 • Example V: Evaluate These Commutators 39:02 ### Transcription: Example Problems II Hello and welcome back to www.educator.com and welcome back to Physical Chemistry.0000 I apologize if I sat a little bit today, I’m just getting over a cold.0004 If I have some sniffles and things like that, I hope you will forgive me.0007 Today, we are going to continue on with our example problems.0011 We already did one set and then we talked a little bit more about the quantum mechanics, the formal hypotheses of the quantum mechanics.0014 Now, we are just going to do several lessons of problems.0022 Let us just jump right on in.0024 Before we start the example problems, I did want to go over just some of the high points.0027 Just recall some of the equations because there was a lot going on mathematically with quantum,0032 as there is with thermal, and anything else in physical chemistry.0038 Sometimes, you have to pull back and just make a listing of some of things that are important that we remember.0041 We solve the Schrӧdinger equation and we find this wave function ψ.0049 That is a wave function and it represents the particle that we are interested in a particular quantum mechanical system.0057 Instead of looking at the particle like a particle, we look at it like a wave.0067 What we do is we play with this wave function to extract information from it.0070 That is all that is actually happening in quantum mechanics.0076 The ψ conjugate × ψ, we said was the probability of finding the particle whose wave function is ψ0080 in a differential volume element called the DV at the point XYZ.0112 You have this wave function which is going to be a function of XYZ.0131 At some random XYZ, if you actually multiply, it is going to end up being the probability of0135 finding the particle in that little differential volume element.0143 Now we have the equation this ψ DV = 1.0149 Actually, I should say this is not the probability, the ψ* × ψ is the probability density.0163 But for all purposes, we can think of it as the probability.0168 The actual probability is the ψ × ψ* × the differential volume element so that you actually have the probability.0171 When we integrate all of the probabilities, we are going to get 1.0178 This is the normalization condition.0182 This was very important normalization condition.0184 Again, one of the frustrating things about quantum mechanics is wrapping your mind about around things conceptually.0192 But what is nice about it is, because it is so purely mathematical, even if you do not completely understand what is going on,0198 as long as you have a certain set of equations at your disposal,0206 You will at least get the right answer.0210 Eventually, if you become more comfortable and solve for problems, conceptually it will start to make more sense.0212 Now every observable in classical mechanics, corresponds to a linear hermitian operator in quantum mechanics.0218 If we observe a linear momentum in classical mechanics, we have a linear momentum operator in quantum.0254 If we observe angular 
momentum in classical mechanics,0261 for some particle moving in a circular path or curved path we have a angular momentum operator in quantum mechanics.0266 That operator, we apply it to wave function to give this information.0273 This is what I mean by we extract information from the wave function by operating on the wave function.0276 Doing something to it mathematically.0282 An operator applied to some wave function in a particular state, is equal to A sub N ψ sub N.0287 It is an Eigen value problem.0296 Remember, we can express the Schrӧdinger equation as an Eigen value problem.0300 Again, we are just going over some highlights of what is that we covered so that we have them0308 in a one quick place before we start the example problems.0311 That the ψ sub N or called the Eigen functions of the operator A.0315 The A sub N are called Eigen values of A corresponding to the Eigen function, corresponding to the ψ sub N Eigen function.0329 When a particular function is in a given state, let us say ψ sub 3, it is in that Eigen state for the operator.0357 We speak of Eigen states, we speak of Eigen functions, we speak of Eigen values.0366 Let us talk about what hermitian means.0372 Hermitian also has a mathematical definition.0379 Hermitian operator means it has to satisfy this.0385 F* AG= the integral of GA* F*.0394 If you have 2 wave functions F and G, if you do the left integral and if you do the right integral, those equal each other,0405 then this operator is something that we call hermitian.0419 If it is hermitian, if it satisfies this property.0423 The hermitian operator implies that the Eigen values are real numbers.0427 It is very important and I will actually do a lot to demonstrate why this hermitian property implies reality0434 and implies orthogonality in some of the example problems.0442 One of the first things that hermitian implies is the fact that the Eigen values are real.0447 The other thing, that this hermitian property of the operator implies, double arrow for application.0451 It implies that the Eigen functions are actually orthogonal.0457 The integral of ψsub N conjugate × ψ sub P is equal to 0 for N not equal to P.0463 If I have one Eigen function ψ sub 1 and I have another Eigen function ψ sub 15,0472 If the operator is hermitian, the operator that gave rise to the Eigen functions, the Eigen functions are going to be orthogonal.0478 That is analogous to two vectors being perpendicular.0484 Two vectors are orthogonal when their dot product is equal to 0.0487 Two Eigen functions are orthogonal when their integral of their product is equal to 0.0491 It is completely analogous, that is all that is happening here.0497 When measuring an observable in quantum mechanics, we only get the Eigen value of0505 the operator corresponding to the observable when the wave function is an Eigen function of the operator.0538 In other words, when a quantum mechanical system happens to be in a state that is represented0572 by a wave function that happens to be an Eigen function of the operator,0581 then what we observe when we take a measurement is going to be one of the Eigen values.0585 When the quantum mechanical system is in a state that is represented by the Eigen function of0592 the operator of interest then what we observe is going to be one of the Eigen values of the operator.0599 If the wave function ψ is equal to ψ 1 ψ 1 + ψ 2 ψ 2 + so so, is written.0609 Ψ is the wave function of the quantum mechanical system.0631 If this wave function happens to be written as a linear 
combination also called the super position.0634 I do not like the word super position but that is fine.0650 It is written as a linear combination of Eigen functions of the operator of interest, whatever operator we happen to be dealing with.0653 Then what we observe are the Eigen values A sub 1, A sub 2, A sub 3, and so on.0680 Let me go to the next page.0702 With probabilities C sub 1² C sub 2² C sub 3² and so on.0704 This is for normalized wave functions.0725 For the most part, all of our wave functions are going to be normalized.0734 If they are not normalized, we are going to normalize them.0738 That is not a problem.0739 Basically, what we are saying is if we have some wave function ψ of a quantum mechanical system and0740 let us say it is represented by 1/2 I × ψ sub 1 – 1/5 ψ sub 3 + 2/7 ψ sub 14.0745 Let us say it is represented as a linear combination of Eigen functions of the operator of interest.0767 Then what I'm going to observe are the Eigen values A sub 1, A sub 3, A sub 14, every time I make a measurement,0775 I’m going to see one of these gets one of these three numbers.0783 The extent to which I get one number over the other is going to be square of that, the square of that, the square of that.0787 Those are the probabilities.0796 1/5² is going to be 1/25.0798 1/ 25 of the time, at every 25 measurements, one of those measurements I'm going to get an A3.0802 That is all this is saying.0809 That is all this represents.0814 This probably will not play a bigger role in what we do.0816 These are one of the hypotheses that we discussed.0818 Let us go ahead and say a little bit more.0823 After many measurements, the average value also called the expectation value.0828 The average value is symbolized like that and it is going to be the integral of ψ sub * the operator and ψ.0845 And this is for normalized.0860 We will go ahead and put the one for un normalized.0863 This definition right here, it applies when the ψ is written as a linear combination or not.0868 If it is if this thing, then this thing goes in here and here.0873 The definition is universal.0877 The average value of a particular observable is this.0879 The general definition for an un normalized wave function, it is just good to see it.0884 We have the average value of A is going to equal the integral of ψ sub *.0894 These are just integrals, all you are doing is literally plugging the functions in.0899 Operating on this, multiplying it by to ψ conjugate.0905 Putting it in the integrand and integrating it with respect to the variables.0908 If it is a one dimensional system, it is a single integral.0911 If it is a 2 dimensional system, it is a double integral.0914 If it is a 3 dimensional system, XYZ, it is a triple integral.0916 You have your software to do the integral for you.0918 The integral of ψ A ψ ÷ the integral of the normalization condition, this thing.0924 Remember, when it is normalize, this thing is equal to 1 which is why it is equal this, just the numerator.0934 This is the definition for an un normalized wave function.0938 If ψ is a linear combination is written as a linear combination.0946 In other words, ψ = C1 C1 + C2 C2 +… ,0960 Then the average value is really simple.0972 It is actually equal to C 1² × A1, the Eigen value + C 2² × the Eigen value + …,0978 It is equal to the sum I, C sub I² A sub I.0992 There is another way of actually finding it when it is written as a linear combination.1001 The final thing you want to review is something called the commutator.1005 We have operator AB, the symbol 
this means this is called the commutator of the 2 operators.1010 And it is defined as AB – BA.1019 You apply AB to the function then you apply BA to the function and you subtract one from the other.1024 This is called the commutator.1030 And we also have sigma A² = A² - A², there is that one.1036 The uncertainty in the measurement, the variance, if you take the square root of that you get the standard deviation.1049 And the sigma of B² is equal to squared.1056 Of course, the final relation which is the general expression for the Heisenberg uncertainty principle is the following.1067 The sigma of A, sigma of B is greater than or equal to ½ the absolute value of the integral of ψ sub *.1074 The commutator of AB applied to ψ.1088 That is the general expression for the uncertainty principle and it is based on this commutator.1094 If you do AB of the function of the BA of the function.1101 If you subtract one from the other you get 0 and those operators commute.1105 If they commute then you can measure any of those 2 things to an arbitrary degree of precision.1110 If they do not commute like for example the position of the momentum,1119 the position of the momentum operator do not commute.1122 Based on the original thing that we saw, the original version of the Heisenberg uncertainty principle that we saw,1126 we know that we cannot measure the momentum and1132 the position of a particle to an arbitrary degree of an accuracy or precision simultaneously.1137 We have to sacrifice one for the other and we have to find the balance.1143 Whatever it is that we happen to want depending on the situation.1148 With that, let us go ahead and start some example problems.1153 I do not know it that helped or not but that was nice to see.1156 Let ψ sub θ = E ⁺I θ for θ greater than or equal to 0 and less than or equal to 2 π.1160 We want to normalize this wave function.1167 Quite nice and easy.1168 Normalize the wave function.1171 Let me go ahead and do this in blue, just to change the color a little bit.1172 Normalized means we have some constant that we have multiply the wave function by, to make the normalization condition satisfied.1176 Normalized is ψ of θ is equal to some normalization constant × the function.1188 The normalization condition is this.1200 It is that equal to 1.1202 We need to solve this integral and find N, the normalization constant.1206 That is what we do.1211 If we take the integral of ψ sub *.1214 In this particular case, ψ*= E ⁻I θ because it is a conjugate and ψ is equal to E ⁺I θ.1222 We do not have to watch out for it.1235 Sometimes the conjugate is not the same as the real number.1236 This become N × E ⁻I θ × ψ which is NE ⁺I θ.1241 It is going to be E θ and we are going to set it equal to 1.1252 We are going to get N² × the integral of E ⁻I θ × E ⁺I θ E θ.1256 This is going to equal 1.1265 We are going to get N² of E ⁺I E ⁻I θ × E ⁺I θ is E⁰ which is 1.1267 It is going to be D θ.1275 We are integrating from 0 to 2 π.1276 D θ is equal to 1.1280 This is going to be N² × 2 π is equal to 1.1287 N² is equal to 1/ 2 π which implies that N is equal to 1/ 2 π ^½, or if you like 1/ √2 π if you prefer older notation.1296 I should do it down here.1319 Ψ sub θ of the normalize wave function is equal to 1/, 2 π ^½ E ⁺I θ.1322 That is your normalize wave function.1336 You want to normalize a wave function, apply the normalization condition.1339 Give me that extra page here.1347 There is a little one missing here.1349 The wave function in example 1 is that a particle moving in a circle, what 
is the probability that the particle will be found between π/ 6 and π/ 3?1354 The probability density we said is ψ * ψ which is also equal to the modulus of that.1365 This was equal to the probability density.1377 Ψ is equal to 1/ radical 2 π × E ⁺I θ.1383 Ψ* is equal to 1/ radical 2 π × E ⁻I θ.1394 So far so good, let us go ahead and find the probability density.1402 We will just multiply these 2 together.1406 Ψ* × ψ is going to equal 1/ 2 π × E ⁺I θ × E ⁻I θ which is going to equal 1/ 2 π.1409 The probability is equal to the probability density × the differential element.1425 D θ in this case because we are working with θ.1436 Therefore, our probability is going to equal 1/ 2 π which is equal to this part, D θ.1439 Now, we want to find the total probability of finding it within a particular region and we said π/ 6 and π/ 3.1452 We are going to integrate from π/ 6 to π/ 3.1458 Therefore, the probability of finding the particle when θ is between π/ 6 and π/ 3 is equal to the integral π/ 6 to π/ 3 of the probability.1462 I actually prefer to write it differently.1487 I prefer my differential element to be separate.1489 I do not like to write it on top.1492 This is going to equal 1/ 2 π × θ as it goes from π/ 6.1495 2 π/ 3 which is equal to 1/ 2 π × π/ 3 - π/ 6, which is going to equal 1/ 2 π.1509 Π/ 3 – π/ 6, 2 π/ 6 – π/ 6 is π/ 6.1525 The π cancels, leaving you with the probability of 1/ 12.1531 The probability density is ψ* ψ.1537 The probability ψ* ψ D θ.1539 If you want the probability between two certain points, in this case two certain angles, use integrate from the point to the other point.1543 We will see later that the general wave equation for a particle moving in a circle is ψ sub θ = E ⁺I × M sub L θ.1563 Where M sub L is a quantum number like the N in the equation for the particle in a box.1573 Just another quantum number for a circular motion.1577 Shows that ψ sub 2 and ψ sub 3 are orthogonal.1580 In order to show orthogonality, we need to show the following.1588 We need to show that the integral of ψ* of 2, ψ of 3 is equal to 0.1592 We need to show that they are perpendicular.1607 We need to show that they are orthogonal.1609 Orthogonal was the general definition.1610 We need to show that the integral of their product is equal to 0.1613 Let us go ahead and do it.1617 The integral of ψ* to ψ 3, it does not matter which order you do it.1621 You can do ψ* ψ 3, it really does not matter.1628 That is going to equal the integral from 0 to 2 π, that is our space from 0 to 2 π.1632 We are talking about circular motion.1638 Ψ sub 2 is equal to, that is the 2 and 3, that is the NL.1642 We have 1/ radical 2 π × E ^- I 2 θ × 1/ radical 2 π.1650 All I’m doing is just putting in the equation, plugging them into the equations that I have developed already.1663 That is the nice thing about quantum mechanics.1668 There is a lot going on but at least it is reasonably handle able because you have the equations.1671 In fact, you just plugged them in.1679 As far as the integration is concerned, sometimes you are going to have something that you can integrate really easily like these.1681 Sometimes you are going to have to use your software, not a big deal.1686 If you have long integration problems, please do not do the integration yourself.1689 If you want to use tables, that is fine.1693 I think it is nice but at this level you want to concentrate more on what is going on underneath.1694 You want to leave the mechanics to machines.1699 Let the machines do it for us.1701 That is what they are 
for.1703 We have E ⁺I × 3 θ D θ.1707 This is the integral that we have to solve.1711 It turns out to be really nice integral.1714 We have 1/ 2 π, let us pull that out.1717 0 to 2 π E ^- I 2 θ or 2 I θ × E³ I θ.1721 Just add them up and you are going to end up with E ⁺I θ D θ.1729 That is going to equal 1/ 2 π × when I integrate this, I'm going to get 1/ IE ⁺I θ.1735 I'm going to take it from 0 to 2 π.1746 I will do all of this in one page.1751 = 1/ 2 π I × E² π I – E⁰ which is equal to 1/ 2 π I.1756 Remember, E² π is cos of 2 π + I × sin of 2 π.1776 The Euler’s relation, cos of 2 π + I × the sin of 2 π -1 is equal to 1/ 2 π I × cos of 2 π is 1 + sin of 2 π 0 -1 = 0.1782 They are orthogonal, nice and simple.1811 Let us see what we got here.1820 Let ψ be a wave function for a particle in a 1 dimensional box.1824 Calculate the expectation value, average value of the kinetic energy operator for this function.1829 The average value of the kinetic energy operator is equal to the integral of ψ* × the operator and apply to ψ.1837 That is the integral that we have to solve.1848 We just plug everything in.1850 The particular ψ, I had written here.1855 Sometimes the problem will give you the equation.1861 Sometimes it will not give you the equation.1862 You have to be able to go to the tables or places in your book where you are going to find the equations you need.1864 Much of the work that you actually do will knowing where to get the information you need.1870 You do not necessarily have to keep the information in your head, you just have to know where to get it.1874 If we recall or if we can look it up, the equation for a particle in a 1 dimensional box is equal to 2/ A¹/2 × sin of N Π/ A × X.1879 The length of the box is from 0 to A.1895 That is the equation that we want to work with, that is ψ.1899 In this particular case, this is a real.1903 Ψ* is equal to ψ so we can go ahead and write that down.1905 Ψ* is equal to ψ, it is not a problem.1910 The kinetic energy operator, let us go ahead and write down what that is.1916 The kinetic energy operator is –H ̅/ 2 M D² DX².1919 We are going to apply that.1928 We are going to do this part first.1930 We are going to apply the kinetic energy operator to ψ.1931 K apply to ψ is equal to –H ̅.1936 I would recommend you actually write everything out during the entire course.1941 If you want to get in the habit of writing everything out, do not do anything in your head.1948 There is too much going on.1951 I do not do anything in my head.1952 I write everything out.1954 D² DX² of ψ which is 2/ A.1956 Do not let the notation intimidate you.1964 Most of it is just constants that go away.1966 Sin N π/ A × X.1970 Like I said, most of it is just constant.1977 When I take the derivative of the sin N π of A twice, the derivative of sin is cos.1979 The derivative of cos is –sin.1984 The - and – go away and I'm left with a +.1988 Let me write everything out here.1992 We are going to get the H ̅²/ 2 M.2004 We are going to pull this one out 2/ A¹/2.2011 Again, we have the sin when we differentiate twice but because of this N π/ A × X,2016 that is going to come out twice and it is going to be N² π²/ A².2022 And you are going to get sin of N π A/X.2029 This is just basic differential from first year calculus.2033 Nothing going on here.2035 This is the K of ψ, the ψ* × K ψ is going to equal 2/ A¹/2 sin of N π/ A × X × H ̅² N² π²/ 2 MA² × 2/ A¹/2.2037 I’m just putting things together.2070 Sin of N π/ A × X and that is going to equal 2/ A.2073 Let me write everything.2089 2/ 
A ^½ and 2/ A¹/2, I’m going to do it like this.2092 It is going to be 2 on top, there is going to be A × A ^½, that is A on the bottom.2096 A and A² becomes A³.2103 We get H ̅² N² π²/ 2MA³ and we get sin² N π/ A × X.2105 The 2 and 2 cancel.2121 Now, we need to integrate this thing so we are going to have.2124 I hope I have not forgotten any of my symbols here.2130 H ̅², I should have an N², I should have a π², I should have an M and I should have an A³ ×2132 the integral from 0 to A of sin² N π/ A × X.2141 This is going to equal H ̅² N² π²/ MA³.2150 This is going to be, when I look this up in a table or in this particular case I will use the table entry.2159 You can have the software do it for you.2164 This integral is going to end up being A/ 2.2169 I will go ahead and write it out.2172 -A × sin of N π/ A × X/ 4 N π from 0 to A.2174 And it is going to equal H ̅² N² π²/ MA³ × A/ 2.2186 This is A/ 2, A cancels one of these and turns it into A² and we are left with H ̅² N² π²/ 2 MA².2195 That is correct, yes.2211 That was what we wanted.2213 Let me see, do I have an extra page here?2215 Yes, I do.2216 The expectation value of the kinetic energy operator.2219 When I measure the kinetic energy, this is what I'm going to get.2223 Let us do another approach to this problem.2231 We are going to do that to the next page.2233 Another approach to this problem.2235 It was nice to revisit momentum every so often because momentum and2238 angular momentum are huge in quantum mechanics, in all physics actually.2244 Another approach to the problem.2250 We know that K is equal to P²/ 2 M.2258 That is just another way of writing the kinetic energy, ½ mass × velocity²2263 is actually equal to the mass × the velocity which is the momentum²/ 2M.2266 The average value of K is equal to the average value of P²/ 2.2276 2M is just a constant so it ends up being the average value of P²/ 2M.2282 From our previous lesson, we have already calculated this PM.2288 It was H ̅² M² π²/ A².2294 We have H ̅² M² π²/ A² / 2M.2306 Just put this over the 2 M.2316 We will put the 2M down here and we get the same answer as before.2318 You can do it with the definition of expectation value or you can do it with something else based on something that you already done.2323 This is a really great relation to remember.2331 Kinetic energy is the momentum² or twice the mass.2333 Where are we now?2340 Evaluate the commutator of P sub X P sub Y and the commutator X² P of X.2344 Let us go ahead and do this first one.2350 When we evaluate these commutator relations, use a generic function F.2353 Just use F, do not try to do these symbolically without a function.2358 At least until you become very comfortable with this.2363 I, myself is not comfortable with it.2366 I like to put my function in there because I know I’m operating on a function and the end just drop the function2368 and you are left with your operator symbol.2372 You write that down here.2378 When doing these, use a generic F and by generic F I mean just the symbol F.2379 Do not use just the operators until you become much more proficient and familiar with operators.2401 This P sub X P sub Y, the most exhausting part of quantum mechanics is writing everything down.2414 This is the symbolism, this is just so tedious.2421 Applied to some generic function F.2425 That is equal to P sub X P sub Y of F – P sub Y P sub X of F.2428 We know what we are doing here.2440 P sub X - IH DDX that is the P sub X operator and the P sub Y.2443 This is P sub Y applied to F, then do P of X applied 
what you got.2454 We are working from right to left.2460 Remember, sequential operators.2462 This is going to be - I H ̅ DF DY.2464 Notice, I put the F in there so operate on a function.2471 - I H ̅ DDY - I H ̅ DF DX.2475 Here we get – H ̅² D² F DX DY - H ̅² D ⁺F DY DX is actually = 0.2488 And the reason it is equal 0 because for all of the functions that you are going to be dealing with, use mixed partial derivatives.2521 And again, we saw this in thermodynamics.2544 Mixed partial derivatives, by mix partial we mean the partial with respect to X first then the partial with2546 the respect to Y is equal to the partial with respect to Y first and then the partial with respect to X.2553 The order in which you operate, the order in which you take the derivative, it does not matter for all well behaved functions.2559 By well behaved, it just means to satisfy certain continuity conditions.2566 For our purposes, you will never run across a function that does not satisfy this.2570 We will always be dealing with functions that satisfy this property.2575 This and this, even though the orders are different, they are actually equal.2578 - + you end up with 0.2583 Mixed partial derivatives are equal.2586 In other words, the D ⁺2F DX DY is absolutely equal to D² F DY DX.2591 That is a fundamental theorem in multivariable calculus.2601 The order of differentiation does not matter.2604 Let us try our next commutator.2609 We want to do the X² PX was going to equal X².2612 If you remember the X operator, the position operator just means multiply by X and the PX operator is - I H ̅.2623 Again, we are going to do DF DX -, now we are going to switch them.2633 We are going to do PX X² - IH DDX.2639 We are going to do X² F.2649 It is this × this - this × this and this × this order.2654 Be very careful here.2662 This is going to be - I H ̅ X² DF DX, I just change the order here.2665 Nothing strange happening.2676 And then this one is going to be +.2677 Notice, now I have an X² F.2683 Let me go ahead and write this up.2690 This is , X² PX - PX X².2694 We have X² F, this is a function × a function and differentiating that.2702 I have to use the product rule so it is going to be this × the derivative of that + that × the derivative of this.2706 It is going to be, the negative cancels so I get + I H ̅ this × the derivative of that is going to be X² DF DX2713 + that × the derivative of this + I H ̅ 2 XF – I X² DF DX + IHX² DF DX.2725 These go away, I'm left with I H ̅ 2 X F.2741 I know I can go ahead and drop that F in terms of we know it is not equal 0.2750 What is happening now, I can go ahead and drop the F part and just deal with the operator part.2755 It is equal I H ̅ 2 X which definitely does not equal the 0 operator.2765 This is the operator, this is our answer.2774 The a commutator of this is equal to that.2783 We include F in order to keep track of our differentiation properly.2788 If we did not include the F, we would not have F here, we would not have the F here.2793 It might cause some confusion as far as where is the product rule.2797 That is why we are putting it in there.2802 It is very important to put it in there until you become very accustomed to operators.2803 I, myself, do not, I use F.2808 That is it, thank you so much for joining us here at www.educator.com.2811 We will see you next time for a continuation of example problems.2814 Take good care, bye.2817 OR ### Start Learning Now Our free lessons will get you started (Adobe Flash® required).
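Not part of the lecture, but the final worked example (the commutator of x² with p_x) can be verified symbolically. The snippet below is a sketch of mine: p_x is taken as −iħ d/dx and f is an arbitrary test function, both notational choices of my own rather than anything from the transcript.

```python
# Independent check of [x**2, p_x] f = 2*i*hbar*x*f; not from the transcript.
import sympy as sp

x, hbar = sp.symbols('x hbar', positive=True)
f = sp.Function('f')(x)

def p_x(g):
    """Momentum operator applied to g: -i*hbar * dg/dx."""
    return -sp.I * hbar * sp.diff(g, x)

commutator = x**2 * p_x(f) - p_x(x**2 * f)
print(sp.simplify(commutator))   # 2*I*hbar*x*f(x), matching the lecture's result
```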
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9136142134666443, "perplexity": 4679.085752270371}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710662.60/warc/CC-MAIN-20221128203656-20221128233656-00603.warc.gz"}
http://link.springer.com/article/10.1007%2Fs00211-012-0511-7
Volume 124, Issue 1, pp 151-182, Date: 23 Oct 2012

# Efficient numerical realization of discontinuous Galerkin methods for temporal discretization of parabolic problems

## Abstract

We present an efficient and easy to implement approach to solving the semidiscrete equation systems resulting from time discretization of nonlinear parabolic problems with discontinuous Galerkin methods of order $r$. It is based on applying Newton's method and decoupling the Newton update equation, which consists of a coupled system of $r+1$ elliptic problems. In order to avoid the complex coefficients which arise inevitably in the equations obtained by a direct decoupling, we decouple not the exact Newton update equation but a suitable approximation. The resulting solution scheme is shown to possess fast linear convergence and consists of several steps with the same structure as implicit Euler steps. We construct concrete realizations for orders one to three and give numerical evidence that the required computing time is reduced significantly compared to assembling and solving the complete coupled system by Newton's method.
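For context, and not quoted from the paper itself: the following is the standard textbook form of the dG($r$) time discretization, with notation ($U$, $\varphi$, $V$, $I_n$) of my own choosing. For a parabolic problem $u' + A(u) = f$, the dG($r$) method seeks on each time interval $I_n = (t_{n-1}, t_n]$ a polynomial $U|_{I_n}$ of degree $r$ in time, with values in the spatial space $V$, such that

$$\int_{I_n} \bigl( \langle U',\varphi\rangle + \langle A(U),\varphi\rangle \bigr)\,dt \;+\; \langle [U]_{n-1},\, \varphi(t_{n-1}^{+})\rangle \;=\; \int_{I_n} \langle f,\varphi\rangle\,dt$$

for all test functions $\varphi$ that are polynomials of degree $r$ in time, where $[U]_{n-1} = U(t_{n-1}^{+}) - U(t_{n-1}^{-})$ is the jump at $t_{n-1}$. Since $U|_{I_n}$ is determined by $r+1$ coefficient functions, each Newton update for this nonlinear system couples $r+1$ spatial (elliptic) problems, which is precisely the coupling the paper's scheme decouples; for $r=0$ the method reduces to a variant of the implicit Euler step.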
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8473321795463562, "perplexity": 394.41801991577057}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928754.15/warc/CC-MAIN-20150521113208-00334-ip-10-180-206-219.ec2.internal.warc.gz"}
https://physics.fandom.com/wiki/Pressure
Pressure (symbol p, SI unit pascal (Pa), equal to N/m² or kg/(m·s²)) is the amount of force exerted per unit area:

$p = \frac{F}{A}$

where F is the force and A is the area over which it acts. In the case of a uniform fluid, the pressure is

$p = \rho g h$

where ρ is the density of the fluid, g is the acceleration due to gravity, and h is the height of the fluid above the point at which the pressure is being determined.
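A quick numerical illustration of both formulas; the values below are made up for the example and are not from the article.

```python
# Illustrative values only -- not from the wiki article.
rho = 1000.0   # density of water, kg/m^3
g = 9.81       # acceleration due to gravity, m/s^2
h = 10.0       # height of fluid above the point, m

print(rho * g * h)   # hydrostatic pressure p = rho*g*h -> 98100.0 Pa (~0.97 atm)

F = 500.0            # force, N
A = 0.25             # area, m^2
print(F / A)         # p = F/A -> 2000.0 Pa
```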
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9842606782913208, "perplexity": 530.563985406401}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541310866.82/warc/CC-MAIN-20191215201305-20191215225305-00094.warc.gz"}
http://export.arxiv.org/abs/2009.09131v1
# Title: 1D planar, cylindrical and spherical subsonic solitary waves in space electron-ion-positive dust plasma systems

Authors: A A Mamun

Abstract: The space electron-ion-positive dust plasma system containing isothermal inertialess electron species, cold inertial ion species, and stationary positive (positively charged) dust species is considered. The basic features of one-dimensional (1D) planar and nonplanar subsonic solitary waves are investigated by the pseudo-potential and reductive perturbation methods, respectively. It is observed that the presence of the positive dust species reduces the phase speed of the ion-acoustic waves, and consequently supports subsonic solitary waves with positive wave potential in such a space dusty plasma system. It is also observed that the cylindrical and spherical subsonic solitary waves evolve significantly with time, and that the time evolution of the spherical solitary waves is faster than that of the cylindrical ones. Applications of the work to many space dusty plasma systems, particularly Earth's mesosphere, cometary tails, and Jupiter's magnetosphere, are addressed.

Subjects: Plasma Physics (physics.plasm-ph)
Cite as: arXiv:2009.09131 [physics.plasm-ph] (or arXiv:2009.09131v1 [physics.plasm-ph] for this version)

## Submission history
From: A A Mamun
[v1] Sat, 19 Sep 2020 00:55:34 GMT (106kb,D)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.887336015701294, "perplexity": 4945.192399772944}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141191511.46/warc/CC-MAIN-20201127073750-20201127103750-00245.warc.gz"}
https://www.physicsforums.com/threads/bravais-lattices.232259/
# Bravais lattices 1. Apr 30, 2008 ### raintrek 1. The problem statement, all variables and given/known data A crystal has a basis of one atom per lattice point and a set of primitive translation vectors of a = 3i, b = 3j, c = 1.5(i+j+k) where i,j,k are unit vectors in the x,y,z directions of a Cartesian coordinate system. What is the Bravais lattice type of this crystal and what are the volumes of the primitive and conventional unit cells? 2. Relevant equations Primitive unit cell volume V = a . (b x c) 3. The attempt at a solution I'm slightly unsure about these Bravais lattices given the multiple permutations they can seem to take. My assumption, as $$a=b\neq c$$ is that it's Hexagonal. However that also requires that $$\alpha=\beta=90^{o},\gamma=120^{o}$$, where gamma is the angle between a,b, alpha between b,c, beta between c,a. But that seems to contradict that the a,b vectors are in i,j directions, ie at 90 degrees. Am I missing something here!? I've worked out the primitive unit cell volume to be 13.5, however I'm also at a loss how to calculate the conventional unit cell volume... Any help would be hugely appreciated
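As a sanity check on the primitive-cell volume quoted above, here is a small snippet of mine (not from the thread) evaluating the triple product for the given primitive vectors.

```python
# Checking V = a . (b x c) for a = 3i, b = 3j, c = 1.5(i+j+k); not part of the thread.
import numpy as np

a = np.array([3.0, 0.0, 0.0])
b = np.array([0.0, 3.0, 0.0])
c = np.array([1.5, 1.5, 1.5])

V = abs(np.dot(a, np.cross(b, c)))
print(V)   # 13.5, matching the value worked out in the post
```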
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9265597462654114, "perplexity": 912.3956059708315}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170624.14/warc/CC-MAIN-20170219104610-00164-ip-10-171-10-108.ec2.internal.warc.gz"}
http://www.mathworks.com/help/robust/examples/building-and-manipulating-uncertain-models.html?prodcode=RC&language=en&nocookie=true
Accelerating the pace of engineering and science # Robust Control Toolbox ## Building and Manipulating Uncertain Models This example shows how to use Robust Control Toolbox™ to build uncertain state-space models and analyze the robustness of feedback control systems with uncertain elements. We will show how to specify uncertain physical parameters and create uncertain state-space models from these parameters. You will see how to evaluate the effects of random and worst-case parameter variations using the functions usample and robuststab. Two-Cart and Spring System In this example, we use the following system consisting of two frictionless carts connected by a spring k: Figure 1: Two-cart and spring system. The control input is the force u1 applied to the left cart. The output to be controlled is the position y1 of the right cart. The feedback control is of the following form: We create this compensator using this code: s = zpk('s'); % The Laplace 's' variable C = 100*ss((s+1)/(.001*s+1))^3; Block Diagram Model The two-cart and spring system is modeled by the block diagram shown below. Figure 2: Block diagram of two-cart and spring model. Uncertain Real Parameters The problem of controlling the carts is complicated by the fact that the values of the spring constant k and cart masses m1,m2 are known with only 20% accuracy: , , and . To capture this variability, we will create three uncertain real parameters using th ureal function: k = ureal('k',1,'percent',20); m1 = ureal('m1',1,'percent',20); m2 = ureal('m2',1,'percent',20); Uncertain Cart Models We can represent the carts models as follows: Given the uncertain parameters m1 and m2, we will construct uncertain state-space models (USS) for G1 and G2 as follows: G1 = 1/s^2/m1; G2 = 1/s^2/m2; Uncertain Model of a Closed-Loop System First we'll construct a plant model P corresponding to the block diagram shown above (P maps u1 to y1): % Spring-less inner block F(s) F = [0;G1]*[1 -1]+[1;-1]*[0,G2] F = Uncertain continuous-time state-space model with 2 outputs, 2 inputs, 4 states. The model uncertainty consists of the following blocks: m1: Uncertain real, nominal = 1, variability = [-20,20]%, 1 occurrences m2: Uncertain real, nominal = 1, variability = [-20,20]%, 1 occurrences Type "F.NominalValue" to see the nominal value, "get(F)" to see all properties, and "F.Uncertainty" to interact with the uncertain elements. Connect with the spring k P = lft(F,k) P = Uncertain continuous-time state-space model with 1 outputs, 1 inputs, 4 states. The model uncertainty consists of the following blocks: k: Uncertain real, nominal = 1, variability = [-20,20]%, 1 occurrences m1: Uncertain real, nominal = 1, variability = [-20,20]%, 1 occurrences m2: Uncertain real, nominal = 1, variability = [-20,20]%, 1 occurrences Type "P.NominalValue" to see the nominal value, "get(P)" to see all properties, and "P.Uncertainty" to interact with the uncertain elements. The feedback control u1 = C*(r-y1) operates on the plant P as shown below: Figure 3: Uncertain model of a closed-loop system. We'll use the feedback function to compute the closed-loop transfer from r to y1. % Uncertain open-loop model is L = P*C L = Uncertain continuous-time state-space model with 1 outputs, 1 inputs, 7 states. 
The model uncertainty consists of the following blocks: k: Uncertain real, nominal = 1, variability = [-20,20]%, 1 occurrences m1: Uncertain real, nominal = 1, variability = [-20,20]%, 1 occurrences m2: Uncertain real, nominal = 1, variability = [-20,20]%, 1 occurrences Type "L.NominalValue" to see the nominal value, "get(L)" to see all properties, and "L.Uncertainty" to interact with the uncertain elements. Uncertain closed-loop transfer from r to y1 is T = feedback(L,1) T = Uncertain continuous-time state-space model with 1 outputs, 1 inputs, 7 states. The model uncertainty consists of the following blocks: k: Uncertain real, nominal = 1, variability = [-20,20]%, 1 occurrences m1: Uncertain real, nominal = 1, variability = [-20,20]%, 1 occurrences m2: Uncertain real, nominal = 1, variability = [-20,20]%, 1 occurrences Type "T.NominalValue" to see the nominal value, "get(T)" to see all properties, and "T.Uncertainty" to interact with the uncertain elements. Note that since G1 and G2 are uncertain, both P and T are uncertain state-space models. Extracting the Nominal Plant The nominal transfer function of the plant is Pnom = zpk(P.nominal) Pnom = 1 --------------------------- (s^2 + 5.995e-16) (s^2 + 2) Continuous-time zero/pole/gain model. Nominal Closed-Loop Stability Next, we will evaluate the nominal closed-loop transfer function Tnom, and then check that all the poles of the nominal system have negative real parts: Tnom = zpk(T.nominal); maxrealpole = max(real(pole(Tnom))) maxrealpole = -0.8232 Stability Margin Analysis (Robustness) Will the feedback loop remain stable for all possible values of k,m1,m2 in the specified uncertainty range? We can use the robuststab function to answer this question rigorously. The robuststab function computes the stability margins and the smallest destabilizing parameter variations in the variable Udestab (relative to the nominal values): [StabilityMargin,Udestab,REPORT] = robuststab(T); REPORT REPORT = Uncertain system is robustly stable to modeled uncertainty. -- It can tolerate up to 301% of the modeled uncertainty. -- A destabilizing combination of 500% of the modeled uncertainty was found. -- This combination causes an instability at 0.693 rad/seconds. -- Sensitivity with respect to the uncertain elements are: 'k' is 20%. Increasing 'k' by 25% leads to a 5% decrease in the margin. 'm1' is 60%. Increasing 'm1' by 25% leads to a 15% decrease in the margin. 'm2' is 58%. Increasing 'm2' by 25% leads to a 14% decrease in the margin. Udestab Udestab = k: 1.1094e-07 m1: 0.1415 m2: 1.4914e-04 The report indicates that the closed loop can tolerate up to three times as much variability in k,m1,m2 before going unstable. It also provides useful information about the sensitivity of stability to each parameter. Note that the smallest destabilizing perturbation Udestab requires varying m2 by 100%, or 5 times the specified uncertainty. Worst-Case Performance Analysis Note that the peak gain across frequency of the closed-loop transfer T is indicative of the level of overshoot in the closed-loop step response. The closer this gain is to 1, the smaller the overshoot. We use wcgain to compute the worst-case gain PeakGain of T over the specified uncertainty range. [PeakGain,Uwc] = wcgain(T); PeakGain PeakGain = LowerBound: 1.0475 UpperBound: 1.0477 CriticalFrequency: 7.0502 Substitute the worst-case parameter variation Uwc into T to compute the worst-case closed-loop transfer Twc. 
Twc = usubs(T,Uwc); % Worst-case closed-loop transfer T Finally, pick from random samples of the uncertain parameters and compare the corresponding closed-loop transfers with the worst-case transfer Twc. Trand = usample(T,4); % 4 random samples of uncertain model T clf subplot(211), bodemag(Trand,'b',Twc,'r',{10 1000}); % plot Bode response subplot(212), step(Trand,'b',Twc,'r',0.2); % plot step response Figure 4: Bode diagram and step response. In this analysis, we see that the compensator C performs robustly for the specified uncertainty on k,m1,m2.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8605438470840454, "perplexity": 4052.065371550302}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644065341.3/warc/CC-MAIN-20150827025425-00021-ip-10-171-96-226.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/quick-tangent-line-question-calc-final-tomorrow.204125/
# Quick tangent line question (calc final tomorrow) 1. Dec 11, 2007 ### erjkism 1. The problem statement, all variables and given/known i have to find all tangent lines in the equation below that pass though the point (0,0) $$y=x^{3}+ 6x^{2}+8$$ i took the derivative and got this. $$y=3x^{2}+12x$$ then i substituted the points x=0 and y=0 into the derivative equation and the got 0=0. i am kind of stuck here cause i havent done a problem like this in a while. what should i do next? Last edited: Dec 11, 2007 2. Dec 11, 2007 ### Dick The derivative equation is y'=3x^2+12x, that's the slope of the tangent line. Another way to express the slope of the tangent line is (delta y)/(delta x)=(y-0)/(x-0)=(x^3+6x^2+8)/x. Equate the two. Last edited: Dec 11, 2007 3. Dec 11, 2007 ### erjkism i know that the derivative is the slope/ but how can i find the equations of all of the tangent lines that go thru the origin? 4. Dec 11, 2007 ### Dick There are two different way to express the slope of the tangent line. i) use the derivative, ii) use the difference (delta y)/(delta x) for two points on the line, like (x,x^3+6x^2+8) and (0,0). Isn't that what I just said? Equate them. 5. Dec 12, 2007 ### ace123 Well since you found the slope and you have the point (0,0) why not just do the point slope forumla? That will give you the equation of the tangent line. Right? 6. Dec 12, 2007 ### Petkovsky First of all ask yourself how many tangent lines can you find in one point. Second, you have found the formula for the slope of the curve in ANY point, but you need to find the one for X=0, and then construct a line that has that slope and passes through the point that has been given to you. 7. Jan 14, 2009 ### sennyk Equation of a line: $$y_1-y_0 = m(x_1 - x_0)$$ Two points on the line: $$(0, 0) (x,y)$$ slope: $$m = y'$$ original equation: $$y = x^3+6x^2 +8$$ Start filling in the blanks. 8. Jan 14, 2009 ### HallsofIvy Staff Emeritus No, that's y', not y. The problem does not say the curve passes through (0,0) (nor is it true). There is no point in putting x= 0, y= 0 in any equation. Any line that goes through (0,0) is of the form y= mx and has slope m. The tangent line must go through a point on the graph and must have m equal to the derivative. At that point $y= mx= x^{3}+ 6x^{2}+8$ and $y'= m= 3x^2+ 12x$. Solve those two equation for m and x (it's only m you need but different values of x may give different slopes and different tangent lines). Try multiplying both sides of the second equation by x and setting the two values for mx equal.
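Not part of the original thread, but HallsofIvy's two equations are easy to check with SymPy; the symbol a below is my own name for the x-coordinate of the tangency point.

```python
# Tangency point (a, a**3 + 6*a**2 + 8); slope there is 3*a**2 + 12*a.
# A line through the origin with that slope must satisfy a**3 + 6*a**2 + 8 = (3*a**2 + 12*a)*a.
import sympy as sp

a = sp.symbols('a', real=True)
curve = a**3 + 6*a**2 + 8
slope = sp.diff(curve, a)                     # 3*a**2 + 12*a

points = sp.solve(sp.Eq(curve, slope * a), a)
for p in points:
    print(p, slope.subs(a, p))                # a = -2 -> slope -12, a = 1 -> slope 15
```

So the two tangent lines to $y=x^{3}+6x^{2}+8$ that pass through the origin are $y=15x$ and $y=-12x$.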
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8167622089385986, "perplexity": 719.6976685740066}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173866.98/warc/CC-MAIN-20170219104613-00301-ip-10-171-10-108.ec2.internal.warc.gz"}
https://cs.stackexchange.com/tags/algorithm-analysis/new
# Tag Info 2 Let $c$ be the constant from part 2, and assume for simplicity that $n$ is divisible by $c$ (otherwise, query the first $n \bmod c$ bits). Partition the string into intervals of length $c$, and go over them one by one in an arbitrary order. The order doesn't need to be random – you can go over them from left to right. For each interval $I$, choose a pair $(i,... 0 I cant think of one simply way to solve such questions, but I can show you a way to start tackling the questions and making them simpler. In such cases, that$f$recursively calls itself with their own return value, the running time of$f$might depend heavily on its output. So contrary to usual complexity computation tricks, here we want to know precisely ... 0 Case 3 does not apply. Indeed: $$f(n) = n \log n \not\in \Omega(n^{\log_9 10}) = \Omega(n^{\log_b a}).$$ However case$1$applies since, for$0<c \le 0.01$$$f(n) = n \log n \in O(n^{1.04 - 0.01}) \subset O(n^{\log_9 10 - 0.01} ) \subseteq O(n^{\log_b a - c} ).$$ This shows that$T(n) \in \Theta(n^{\log_9 10})$. 1 The big-O notation (and also similar notations, such as big-Omega) are not inherently limited only to describe running time of algorithms. Indeed, when the context is the running time of some algorithm, there are no negative functions since algorithms cannot run "negative time". That being said, the big-O notation is a general mathematical ... 1 Python arrays have 0-based indexing, like in C, and unlike Fortran, which uses 1-based indexing. You can check Wikipedia for information about other languages. A python array$A$of length$n$consists of the elements$A[0],\ldots,A[n-1]$. The python function range(n) goes over the indices$0,\ldots,n-1$, and your range(1,n) goes over the indices$1,\ldots,n-... 0 The number of multiplications in the algorithm is $2n - 1$, but the number of multiplications in the Horner's method is $n$, which means this algorithm is not optimal. As mentioned in the Wiki page, Horner's method in evaluating a polynomial is optimal, therefore we can change this algorithm so that it uses the Horner's method. p = a[n] for i = n-1 to 0: ... 0 Worse-case complexity gives an upper bound on the complexity of an algorithm in terms of some parameters. Often the parameter is the length of the input, either in bits or in words, but sometimes several parameters are pertinent. The standard example is graph algorithms, where complexity is often expressed in terms of both the number of vertices and the ... 0 If we analyze a Time Complexity dependent also on the values of a given input, then as you say a more defined notation would be O(max(n)). Though, saying O(max(n) + n) in O notation means O(max(n)) So it will still be accurate, since in both outcomes the Complexity is linear in the given input. 0 Suppose that you start with some value in $x$. The inner loop increases $x$ by $i$. Therefore the outer loop increases $x$ by $1+2+3+\cdots+n = n(n+1)/2$. So overall, your code increases the value in $x$ by $n(n+1)/2$. The running time of your code is proportional to the number of times that $x$ is increased, hence it is $\Theta(n^2)$. 1 In answer to both of your questions:   Firstly, note that during the maintenance phase of the loop invariant proof, we are in the process of inserting u into S, and the way that y is defined is that it is a node in V\S while this is happening, therefore u and y exist in V\S at the same time when u is inserted. This answers your first question.   Secondly, ... 
3 Let $I$ be your instance and consider the instance $I'$ obtained by replacing each bit vector $y$ in $L_1$ with $y' = y \oplus S$ (where $\oplus$ denotes bitwise xor). Call $L'_1$ the list containing all vectors $y'$. Consider the tuples $(x_1, x_2, \dots, x_l)$ and $(x'_1, x_2, \dots, x_l)$ where $x_i \in L_i$ and $x'_1 \in L'_1$. We have: $$x_1 \... 0 There's no need to do any calculation. There is absolutely nothing special about the numbers 25 and 75. If $0 < \alpha < \beta < 1$ and you are promised that the median is between the $\alpha$'th percentile and the $\beta$'th percentile, then the running time of the algorithm will be linear. Indeed, if $\gamma = \min(\alpha,1-\beta)$, the length ... 0 There is an unwritten assumption that the input is always at least 1 (otherwise your function never terminates). Under this assumption, it is easy to prove by induction that the function does terminate, and always returns 1. Therefore in the expression P(P(n/2)), we are first invoking P with the input n/2, and then with the input 1. It follows ... 0 Let me add a way with simple inequalities: $$n^2+n^3 \leqslant n^4+n^4 = 2n^4 \leqslant 2(n^4+n)$$ Now taking the constant $C=2$ we have $n^2+n^3 \in O(n^4+n) = O(n^4)$. 0 As Yuval has mentioned in the comments, the proof is based on complete induction. However, an important thing is missing in the screenshot that you have shared: the base case. Without the base case, the induction hypothesis and proof make no sense. Here, the base case is when $n = 1$. You might want to verify that $T[1] = O(1)$. ...
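One of the answers above (on evaluating a polynomial) sketches Horner's method with truncated pseudocode; here is a small runnable version, written purely for illustration and not taken from that answer.

```python
def horner(coeffs, x):
    """Evaluate a_n*x**n + ... + a_1*x + a_0 using exactly n multiplications.
    coeffs[i] is the coefficient of x**i."""
    p = coeffs[-1]
    for c in reversed(coeffs[:-1]):
        p = p * x + c
    return p

print(horner([1, 2, 3], 2))   # 3*x**2 + 2*x + 1 at x = 2 -> 17
```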
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9515686631202698, "perplexity": 789.9535328789318}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178359497.20/warc/CC-MAIN-20210227204637-20210227234637-00194.warc.gz"}
https://meangreenmath.com/2016/08/24/lessons-from-teaching-gifted-elementary-students-part-6b/
# Lessons from teaching gifted elementary students (Part 6b) Every so often, I’ll informally teach a class of gifted elementary-school students. I greatly enjoy interacting with them, and I especially enjoy the questions they pose. Often these children pose questions that no one else will think about, and answering these questions requires a surprising depth of mathematical knowledge. Here’s a question I once received: 255/256 to what power is equal to 1/2? And please don’t use a calculator. Here’s how I answered this question without using a calculator… in fact, I answered it without writing anything down at all. I thought of the question as $\displaystyle \left( 1 - \epsilon \right)^x = \displaystyle \frac{1}{2}$. $\displaystyle x \ln (1 - \epsilon) = \ln \displaystyle \frac{1}{2}$ $\displaystyle x \ln (1 - \epsilon) = -\ln 2$ I was fortunate that my class chose 1/2, as I had memorized (from reading and re-reading Surely You’re Joking, Mr. Feynman! when I was young) that $\ln 2 \approx 0.693$. Therefore, we have $x \ln (1 - \epsilon) \approx -0.693$. Next, I used the Taylor series expansion $\ln(1+t) = t - \displaystyle \frac{t^2}{2} + \frac{t^3}{3} \dots$ to reduce this to $-x \epsilon \approx -0.693$, or $x \approx \displaystyle \frac{0.693}{\epsilon}$. For my students’ problem, I had $\epsilon = \frac{1}{256}$, and so $x \approx 256(0.693)$. So all I had left was the small matter of multiplying these two numbers. I thought of this as $x \approx 256(0.7 - 0.007)$. Multiplying $256$ and $7$ in my head took a minute or two: $256 \times 7 = 250 \times 7 + 6 \times 7$ $= 250 \times (8-1) + 42$ $= 250 \times 8 - 250 + 42$ $= 2000 - 250 + 42$ $= 1750 + 42$ $= 1792$. Therefore, $256 \times 0.7 = 179.2$ and $256 \times 0.007 = 1.792 \approx 1.8$. Therefore, I had the answer of $x \approx 179.2 - 1.8 = 177.4 \approx 177$. So, after a couple minutes’ thought, I gave the answer of 177. I knew this would be close, but I had no idea it would be so close to the right answer, as $x = \displaystyle \frac{\displaystyle \ln \frac{1}{2} }{\displaystyle \ln \frac{255}{256}} \approx 177.0988786\dots$
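Not part of the original post, but both the exact answer and the mental estimate are easy to check numerically:

```python
# Checking the exact value and the ln-2-based estimate from the post; not in the original.
import math

exact = math.log(0.5) / math.log(255 / 256)   # x with (255/256)**x = 1/2
estimate = 256 * 0.693                        # the mental shortcut x ~ 0.693 / epsilon
print(exact)      # ~177.0989
print(estimate)   # ~177.4
```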
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 23, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8496270775794983, "perplexity": 967.7835727015023}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986675409.61/warc/CC-MAIN-20191017145741-20191017173241-00016.warc.gz"}
http://tex.stackexchange.com/questions/45673/how-can-i-create-a-table-where-one-cell-spans-several-rows-and-all-the-text-is-j
# How can I create a table where one cell spans several rows and all the text is justified? I'm trying to create a table where the first column spans 3 rows and the text will wrap several times. The second and third columns will each have 3 rows in them. The cells in the second column will have text that will also wrap several times. All the text must be left center justified. Below is an example of what I'm trying to create. Any help would be greatly appreciated. _________________________________________________________________ | | | |Some text will go here |This will | |that will need to wrap |be empty | | | | |_______________________|_______________| Some really long text | | | that i would like |Some text will go here |This will | left-center justified |that will need to wrap |be empty | in this column | | | |_______________________|_______________| | | | |Some text will go here |This will | |that will need to wrap |be empty | | | | _________________________________________________________________ - The multirow package provides for spanning rows. - Here's how to do it using multirow: \documentclass{article} \usepackage{multirow}% http://ctan.org/pkg/multirow \newcommand{\sometext}{Some text will go here that will need to wrap.} \begin{document} \noindent \begin{tabular}{|p{.4\linewidth}|p{.35\linewidth}|p{.25\linewidth}|} \hline \multirow{12}{\linewidth}% {\sometext\ \sometext\ \sometext} & \sometext\ \sometext & \\ \cline{2-3} & \sometext\ \sometext & \\ \cline{2-3} & \sometext\ \sometext & \\ \hline \end{tabular} \end{document} Note that the <lines> argument to \multirow \multirow{<lines>}{<width>}{<content>} denotes the number of \baselineskips, rather than the number of lines/rows within the table. So, in my example, the second column spans 12 lines, so I used \multirow{12}{\linewidth}{...} to vertically centre and span the full width of the cell. There is also an optional [<bigstrut>] argument for \multirow that you might want to play with. See the multirow documentation for more. Note that in the above example, the table stretches out into the margin causing an overfull \hbox warning, even though the paragraph column widths add up to \linewidth (a result of the MWE). This does not take into account the column rule widths (\arrayrulewidth) and separation (\tabcolsep). To accommodate for this (if you want your tabular to spread the entire \linewidth), use the tabularx package and at least one X-column: \usepackage{tabularx}% http://ctan.org/pkg/tabularx %... \noindent% \begin{tabularx}{\linewidth}{|p{.4\linewidth}|p{.35\linewidth}|X|} %... \end{tabularx} or correct for the column spacing and rule widths: \noindent% \begin{tabular}{% |p{\dimexpr.4\linewidth-2\tabcolsep-\arrayrulewidth}| p{\dimexpr.35\linewidth-2\tabcolsep-\arrayrulewidth}| p{\dimexpr.25\linewidth-2\tabcolsep-\arrayrulewidth}|} %... \end{tabular} - If you don't need the rules, you could get away with just the primitive \valign: \long\def\mytable#1\cr#2\endmytable{\begingroup \tabskip=\baselineskip % comes between rows \def\cr{\crcr\noalign{\hfil}} % comes between columns \def\cellformat{\raggedright\noindent\strut} \valign{&\vfil\hsize=.2\hsize\cellformat##\vfil\crcr \multispan3\vfil\hsize=.3\hsize\cellformat#1\vfil\cr #2\cr} \endgroup} \mytable Some really long text that I would like left-center justified in this column. Just some more text to show another paragraph and its indentation. 
\cr Some text will go here that will need to wrap& Some text will go here that will need to wrap& Some text will go here that will need to wrap \cr This will be empty& This will be empty& This will be empty \endmytable \bye -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9371271729469299, "perplexity": 2514.309449128581}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701171770.2/warc/CC-MAIN-20160205193931-00266-ip-10-236-182-209.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/integral-of-arcsin-x.89216/
# Integral of Arcsin[x] • Thread starter Jameson • Start date • #1 789 0 ## Main Question or Discussion Point I just can't see it. I would think that it would involve some form of trig substitution, but I'm just drawing a blank. I'll do the work if someone can please give me a nice little hint. I know that $$\int \arcsin{(x)}dx = \sqrt{1-x^2}+ x\arcsin{(x}) + C$$ from my calculator and mathematica. Hint please. :yuck: ## Answers and Replies • #2 Hurkyl Staff Emeritus Science Advisor Gold Member 14,916 17 If only you were working with its derivative! I bet you know how to integrate something that looks like its derivative! • #3 789 0 Sure... I'd draw a nice triangle and do trig substitution if it was $$\int \frac{dx}{\sqrt{1-x^2}}$$ Hmmm...I'll think out loud here. If the integral has a square-root and is in the form of $c^2-x^2$ then x is one leg, c is the hypotenuse, and the other leg is the previously mentioned radical. Is trig substitution with right triangles on the right track? Since the hypotenuse is $\sqrt{a^2+b^2}$, it seems that one leg might need to be $\sqrt{\arcsin{(x)}}$ Somehow I don't think I'm on track. • #4 665 0 Use integration by parts. Remember to let u=arcsin(x) and v=x. u sub: InverseLogAlgebraicTrigExp • #5 Hurkyl Staff Emeritus Science Advisor Gold Member 14,916 17 Sheesh, just give him the answer, why don't ya? :grumpy: • #6 789 0 apmcavoy said: Use integration by parts. Remember to let u=arcsin(x) and v=x. u sub: InverseLogAlgebraicTrigExp Ah, thank you! :surprised It's so simple. And thank you Hurkyl as well. I still have lots to learn. • #7 665 0 I apologize Hurkyl • #8 84 0 well, you know the integral of sinx with limits. Now arcsin x will be the limits, and you can make a rectangle. • #9 MalleusScientiarum Or you could just take the derivative of the right hand side and go "ta da!" and that's proof enough for me. • #10 hint equate the arc sine to another variable e.g y.making it a sine fxn.e.g the arc sine of 0.5=30,while sine30=0.5.this will simplify the integral and further substitution will conclude it • #11 Tide Science Advisor Homework Helper 3,076 0 Just for the fun of it ... The sum of the integrals $$\int \sin^{-1} x dx + \int \sin y dy$$ is just the area of the bounding rectangle: $x \times \sin^{-1} x$ Since $$\int \sin y dy = -\cos y + C$$ and $$\cos y = \cos \sin^{-1} x = \sqrt {1-x^2}$$ it follows that $$\int \sin^{-1} x dx = \sqrt {1-x^2} + x \sin^{-1} x + C$$ • #12 17 0 Integration by parts? Hey, Im really sorry to arise dead threads from the past (which i have seen though google) but somehting really weird happened me when I tried to use integration by parts on arcsinx. let me show you: S(arcsinx)= {v'(x)=1} {u(x)=arcsinx} xarcsinx-S(x*d(arcsinx))= xarcsinx-S(x/(1-x^2)^0.5= {u(x)=x v'(x)=arcsinx} xarcsinx-xarcsinx+S(arcsin)dx ==> S(arcsinx)=S(arcsinx) :\ I know I have done something really stupid here, but please be easy on me since I started studying Integrals only three days ago. In the second time I used integration by parts, do I miss something , is there another efficient choice of v and u? Thanks in advance, Aviv p.s: I will edit it better to use normal math signs once I figure out how. • #13 Gib Z Homework Helper 3,346 4 For the second time you integrate by parts, swap your choices. • #14 17 0 Im sorry, Tried it also and all i got is: xarcsine x -(x^2/2)(1/(Sqrt(1-x^2)))-(x/4)(sqrt(1/(1-x^2))+1/4(arcsinx) this isn't going anywhere :( • #15 dextercioby Science Advisor Homework Helper 12,977 540 I just can't see it. 
I would think that it would involve some form of trig substitution, but I'm just drawing a blank. I'll do the work if someone can please give me a nice little hint. I know that $$\int \arcsin{(x)}dx = \sqrt{1-x^2}+ x\arcsin{(x)} + C$$ from my calculator and mathematica. Hint please. :yuck: Make $x=\sin t$. Then apply part integration on the resulting integral. It's just a way to avoid the simple solution of part integrating directly. • #16 Office_Shredder Staff Emeritus Science Advisor Gold Member 3,750 99 After the first integration by parts, I would use a substitution • #17 17 0 Solved it officially. Fixed some major mistakes I had about derivatives and such. Did it without substitution, only using integration by parts. If you are interested in what I did then you are welcome to tell me to write up my solution. Thanks guys :) gg • #18 Use integration by parts: u = arcsin x, du = dx/(1-x^2)^1/2, dv = dx, v = x. Then uv - integral v du = x arcsin x - integral x/(1-x^2)^1/2 dx. Use u-substitution with u = 1-x^2, so du = -2x dx, and then you get x arcsin x + (1-x^2)^1/2 + c. • #19 1 0 f(x)=arcsinx f'(x)=1/radical(1-xsquare) g'(x)=1 g(x)=x // S means integral S arcsin x dx = x arcsin x - S x dx/radical(1-xsquare) = x arcsin x + S (radical(1-xsquare))' dx = x arcsin x + radical(1-xsquare) + C
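A one-line symbolic check of the antiderivative discussed in this thread (using SymPy, purely as a verification aid):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.integrate(sp.asin(x), x))   # x*asin(x) + sqrt(1 - x**2), matching the thread
```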
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9695301055908203, "perplexity": 2517.8397320796685}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250598800.30/warc/CC-MAIN-20200120135447-20200120164447-00193.warc.gz"}
http://quant.stackexchange.com/users/464/ilya?tab=activity&sort=comments
# Ilya less info reputation 321 bio website dcsc.tudelft.nl/~itkachev location Leiden, Netherlands age 26 member for 3 years, 1 month seen 1 hour ago profile views 195 I am a PhD student at TU Delft, working in applied probability and stochastic optimal control. My current focus is on approximate model-checking of stochastic systems via bisimulations (a part of computer science). I am interested in a wide field of applications, in particular in some areas of finance, such as risk theory. 2d comment How can the Wiener process be nowhere differentiable but still continuous? I would be careful with such explanation, though. For example, a straight line is extremely self-similar on various scales, however it is perfectly smooth. Also, one can say that smoothness is exactly local similarity to straight lines, isn't it? 2d comment How can the Wiener process be nowhere differentiable but still continuous? @Probilitator The gif actually looks like some old-school flight simulator in the mountain area :) Apr11 comment Convolution copula? There is no need or requirement for the two copulas above to be the same. Do you mean here that $$\mathbb{P}(X\leq x,y_{1}\leq Y\leq y_{2})=C_1(F_{X}(x),F_{Y}(y_{2}))-C_2(F_{X}(x),F_{Y}(y_{1}))$$ with $C_1\neq C_2$ in general? Apr10 comment How can the Wiener process be nowhere differentiable but still continuous? @Probilitator: thanks. 3d - is some quant joke I fail to understand? Apr9 comment unique equivalent martingale measure in incomplete markets Sure :) also, what does these square mean? That you take an expectation/integral of squared R-N derivative? Apr8 comment unique equivalent martingale measure in incomplete markets Are you missing come expectations in the right-hand side? Apr8 comment How to choose a risk-neutral measure when the market is incomplete? What is the reason to pick up $\Bbb Q$ to be closest to $\Bbb P$ w.r.t. some metric? Apr7 comment Simple pricing example confusion I think I see your point: when we are making an equivalent change of measure, we have to restrict ourselves to finite intervals of time, otherwise changing the drift changes the null events. Thanks Apr4 comment PDE pricing of barrier options in BS Regarding PDE approach, I'd say Wilmott follows it everywhere in his books. Actually that's the same as martingale approach + Markovian structure, but without mentioning the latter two things too often (as Shreve does, in contrast) and using instead $\Delta$-hedging-like arguments, which of course leads to the same PDE as the martingale approach does. So I'd be interested in a book with a similar approach, but slightly more formal on the PDE side (not necessarily on a stochastic side). Maybe there are some known textbooks of that kind, if not - nevermind. Apr4 comment PDE pricing of barrier options in BS Thanks for your answer. I actually didn't mean solution of PDEs, especially an analytic one, just a PDE formulation. At least one advantage it gives is useful formulas for Greeks. My point is that the PDE for barriers in BS is "derived" using arguments like "value of option satisfies BS before hitting the barrier", so obviously we need to solve BS equation with an additional boundary condition on the barrier. As usual, it is this obvious step that can make the whole result being incorrect - so I just wondered whether there is a detailed explanation of this. 
Apr4 comment Why use implied volatility For your second argument: I've only traded on FX a couple of years ago, and there the frequency of data seemed quite enough to make good estimates of volatility just based on a 5-minute-wide window. Of course, the market is quite dynamical, but even for such a fast market 5 minutes did not seem to be such a big window. Although that's a historical data, it seems to be more relevant to "current volatility" given the latter is continuous, than the IV. Apr4 comment Why use implied volatility I see a point in your first argument - but as in my comment to @Richard, isn't that argument only true given that a lot of people on the market are using BS model for vanilla, or at least using IV? It seems, that IV is of the following feature: if everybody uses it, then it is also of value to you as you are playing against others. If nobody uses it, it does not give you a lot of information, though. Am I correct? Apr4 comment Why use implied volatility In that case, don't we completely exclude from our glance the situation when "market prices derivatives incorrectly" which we may think of taking advantage of? Apr4 comment Why use implied volatility Thanks for the answer. Can you clarify a couple of points? 1. It is useful: yes. Do you mean here, that people in fact use BS to price simple contracts often enough? Cause in that case, I'd agree that it is interesting to take a look at implied volatility. 2. BS-implied vol of the prices calculated by these models fits the BS-implied vol that can be observed on the market. Isn't that equivalent to saying that model prices coincide with market ones? Apr1 comment Fundamental Theorem of Asset Pricing (FTAP) Hi, you may be interested in this question of mine Mar27 comment Does risk-neutral measure have anything to deal with risk-neutrality in utility theory? Thanks, I'm assuming in your case $E = E_P$: the expectation over a market measure. In such case, the map $X_T \mapsto \mathsf E_P[X_T]$ is linear as well, so nothing distincts it from the case of a martingale measure. Mar26 comment Does risk-neutral measure have anything to deal with risk-neutrality in utility theory? @Probilitator: indeed, I've modified this part - better now? Jan27 comment Why banks borrow from each other Thanks - I can see some reasons why bank may be short for cash: too much withdrawal, small capital inflow at exactly that moment etc. I also understand that it may happen that the bank has much more cash above the required level and it wants money to work. However, why would such a bank lend it to another bank rather than investing this money at a better return rate? Jan23 comment Why banks borrow from each other I see, the reserve requirements specify how much high-liquid assets (e.g.) cash shall be in the balance sheet given the class of liabilities (deposits), whereas capital requirements specify how much liabilities (equity) shall be in the balance sheet given the class of assets (actually, all assets being risk-weighted). Thanks, I think I have no more questions here - just needed to clarify your answer for myself. I think the answer and comments are good enough for a little bounty. Jan22 comment Why banks borrow from each other One more clarification, if I may. Bank has to keep enough high-liquidity assets (say cash) to above the sum of reserve and capital requirements, or above their maximum (thus, satisfying each of them independently)?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8368211984634399, "perplexity": 1025.3165294084968}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00071-ip-10-147-4-33.ec2.internal.warc.gz"}
https://brilliant.org/problems/what-a-function-2/
# Two Versus Three $\begin{array} { l l l l l l l l } & \color{red}{y} & +\frac { 1 }{ 2 } & +\frac { \color{red}{y} }{ 4 } & +\frac { 1 }{ 8 } & +\frac { \color{red}{y} }{ 16 } & +\frac { 1 }{ 32 }& + \dots\\ = & 1 & +\frac { \color{blue} {x} }{ 3 } & +\frac { 1 }{ 9 } & +\frac { \color{blue} {x} }{ 27 } & +\frac { 1 }{ 81 } & +\frac { \color{blue} {x} }{ 243 } & +\dots \end{array}$ Find the smallest positive integer $$\color{blue} {x}$$, such that there exists an integer $$\color{red}{y}$$ which satisfies the above equation.
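Reading both sides as geometric series (ratio 1/4 on the left, 1/9 on the right), here is a small brute-force sketch of the search implied by the problem. This is my own reading of the pattern, so treat both the closed forms and the printed result as assumptions rather than an official solution:

```python
from fractions import Fraction

def smallest_x(limit=1000):
    # Assumed closed forms: LHS = y*(4/3) + (1/2)*(4/3),  RHS = (9/8)*(1 + x/3)
    for x in range(1, limit):
        rhs = Fraction(9, 8) * (1 + Fraction(x, 3))
        y = (rhs - Fraction(2, 3)) / Fraction(4, 3)
        if y.denominator == 1:        # y must be an integer
            return x, int(y)

print(smallest_x())   # under this reading: x = 13 (with y = 4)
```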
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9406673312187195, "perplexity": 268.45422693802254}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945484.58/warc/CC-MAIN-20180422022057-20180422042057-00550.warc.gz"}
https://www.physicsforums.com/threads/proof-of-electric-dipole-equations.210322/
Proof of Electric Dipole Equations

1. Jan 22, 2008 Cheetox

1. The problem statement, all variables and given/known data

An electric dipole of moment p is placed at a distance r from a point charge +q. The angle between p and r is $$\phi$$. Show that the energy of interaction between the dipole and the charge is $$-\dfrac{pq\cos\phi}{4\pi\epsilon_0 r^2}$$ Derive equations for a) a radial force on the dipole b) a force on the dipole normal to r c) a couple on the dipole

2. Relevant equations

3. The attempt at a solution

I have proved the first part of the question using the integral of the torque between $$\phi_0$$ and $$\phi$$ and setting $$\phi_0$$ to 90 degrees, and I believe that questions a, b and c are simple manipulations of the proved equation, but no book I read will give me a proof or an explanation of the 'radial force' and how to prove questions a, b and c. Could anyone help?

2. Jan 22, 2008 Mindscrape

So you have found the work, or I guess energy of interaction, done to bring the charge to where it is. a) How would you find the radial force on a monopole? (Hint: potential) b) Similar idea to a), but different dot product c) No idea what this means, sorry.
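One hedged way to read the hints in post #2, written out in LaTeX. This is a sketch of the standard relations between an interaction energy $U(r,\phi)$ and the associated generalized forces, not the posters' own worked solution, so signs and conventions should be checked against the course notes:

```latex
% Sketch: with U(r,\phi) = -\dfrac{pq\cos\phi}{4\pi\epsilon_0 r^2},
% the three requested quantities follow from partial derivatives of U:
\begin{align*}
  F_r    &= -\frac{\partial U}{\partial r}
           = -\frac{2pq\cos\phi}{4\pi\epsilon_0 r^3}  &&\text{(radial force)}\\
  F_\phi &= -\frac{1}{r}\frac{\partial U}{\partial \phi}
           = -\frac{pq\sin\phi}{4\pi\epsilon_0 r^3}   &&\text{(force normal to } r\text{)}\\
  \Gamma &= -\frac{\partial U}{\partial \phi}
           = -\frac{pq\sin\phi}{4\pi\epsilon_0 r^2}   &&\text{(couple on the dipole)}
\end{align*}
```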
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9668325781822205, "perplexity": 577.7884898851097}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00269-ip-10-171-10-70.ec2.internal.warc.gz"}
https://www.siyavula.com/read/science/grade-12/electrodynamics/11-electrodynamics-05
# End Of Chapter Exercises

Exercise 11.3

[SC 2003/11] Explain the difference between alternating current (AC) and direct current (DC).

Direct current (DC) is electricity flowing in a constant direction. DC is the kind of electricity made by a battery, with definite positive and negative terminals. However, we have seen that the electricity produced by some generators alternates and is therefore known as alternating current (AC). So the main difference is that in AC the movement of electric charge periodically reverses direction while in DC the flow of electric charge is only in one direction.

Explain how an AC generator works. You may use sketches to support your answer.

Solution not yet available

What are the advantages of using an AC motor rather than a DC motor?

While DC motors need brushes to make electrical contact with moving coils of wire, AC motors do not. The problems involved with making and breaking electrical contact with a moving coil are sparking and heat, especially if the motor is turning at high speed. If the atmosphere surrounding the machine contains flammable or explosive vapours, the practical problems of spark-producing brush contacts are even greater.

Explain how a DC motor works.

Instead of rotating the loops through a magnetic field to create electricity, as is done in a generator, a current is sent through the wires, creating electromagnets. The outer magnets will then repel the electromagnets and rotate the shaft as an electric motor. If the current is DC, split-ring commutators are required to create a DC motor.

At what frequency is AC generated by Eskom in South Africa?

In South Africa the frequency is $$\text{50}$$ $$\text{Hz}$$

(IEB 2001/11 HG1) - Work, Energy and Power in Electric Circuits

Mr. Smith read through the agreement with Eskom (the electricity provider). He found out that alternating current is supplied to his house at a frequency of $$\text{50}$$ $$\text{Hz}$$. He then consulted a book on electric current, and discovered that alternating current moves to and fro in the conductor. So he refused to pay his Eskom bill on the grounds that every electron that entered his house would leave his house again, so therefore Eskom had supplied him with nothing! Was Mr. Smith correct? Or has he misunderstood something about what he is paying for? Explain your answer briefly.

Mr Smith is not correct. He has misunderstood what power is and what Eskom is charging him for. AC voltage and current can be described as: \begin{align*} i &= I_{\max} \sin(2\pi ft + \phi)\\ v &= V_{\max} \sin(2\pi ft) \end{align*} This means that for $$\phi = 0$$, i.e. if resistances have no complex component or if a student uses a standard resistor, the voltage and current waveforms are in-sync. Power can be calculated as $$P = VI$$. If there is no phase shift, i.e. if resistances have no complex component or if a student uses a standard resistor, then power is always positive since:

• when the voltage is negative (−), the current is negative (−), resulting in positive (+) power.
• when the voltage is positive (+), the current is positive (+), resulting in positive (+) power.

You are building a laser that takes alternating current and it requires a very high peak voltage of $$\text{180}$$ $$\text{kV}$$. By your calculations the entire laser setup can be treated as a single resistor with an equivalent resistance of $$\text{795}$$ $$\text{ohms}$$.
What is the rms value for the voltage and the current and what is the average power that your laser is dissipating? At peak voltage the peak current will be: \begin{align*} V&=IR \\ I&=\frac{V}{R} \\ &=\frac{\text{180} \times \text{10}^{\text{3}}}{795} \\ &= \text{226,42}\text{ A} \end{align*} \begin{align*} P_{rms}&= V_{rms}I_{rms} \\ &= \frac{\text{180} \times \text{10}^{\text{3}}}{\sqrt{2}} \frac{\text{226,415094}}{\sqrt{2}} \\ & = \text{20,38} \times \text{10}^{\text{6}}\text{ W} \end{align*} $$\text{20,38} \times \text{10}^{\text{6}}$$ $$\text{W}$$
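A short numerical cross-check of the worked answer above (values rounded; decimal points here rather than the comma notation used in the text):

```python
import math

V_peak = 180e3                   # peak voltage in volts
R = 795.0                        # equivalent resistance in ohms
I_peak = V_peak / R              # ~226.4 A, as in the solution
V_rms = V_peak / math.sqrt(2)    # ~127.3 kV
I_rms = I_peak / math.sqrt(2)    # ~160.1 A
print(V_rms, I_rms, V_rms * I_rms)   # average power ~2.038e7 W, i.e. ~20.38 MW
```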
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.992526113986969, "perplexity": 1310.4389943397634}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247482347.44/warc/CC-MAIN-20190217172628-20190217194628-00249.warc.gz"}
https://pureportal.strath.ac.uk/en/publications/on-the-behaviour-of-time-discretisations-of-the-electric-field-in
# On the behaviour of time discretisations of the electric field integral equation

Penny Davies, D.B. Duncan

Research output: Contribution to journal › Article

7 Citations (Scopus)

### Abstract

We derive a separation of variables solution for time-domain electromagnetic scattering from a perfectly conducting infinite flat plate. The time-dependent part of the equations is then used as a model problem in order to study the effects of various time discretisations on the full scattering problem. We examine and explain how exponential and polynomial instabilities arise in the approximation schemes, and show that the time averaging which is often used in an attempt to stabilise solutions of the full problem acts to destabilise some of the schemes. Our results show that two of the time discretisations can produce good results when coupled with a space-exact approximation, and indicate that they will be useful when coupled with an accurate enough spatial approximation.

Original language: English. Pages: 1-26 (26 pages). Journal: Applied Mathematics and Computation, volume 107. https://doi.org/10.1016/S0096-3003(98)10146-7. Published - 2000.

### Keywords

• discretisations
• electric field integral equation
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9390864968299866, "perplexity": 1122.103976127119}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107878879.33/warc/CC-MAIN-20201022024236-20201022054236-00404.warc.gz"}
http://en.wikipedia.org/wiki/Barker_code
# Barker code

*Figure: Graphical representation of a Barker-7 code*
*Figure: Autocorrelation function of a Barker-7 code*

A Barker code or Barker sequence is a finite sequence of N values of +1 and −1, $a_j$ for $j = 1, 2, \dots, N$ with the ideal autocorrelation property, such that the off-peak (non-cyclic) autocorrelation coefficients $c_v = \sum_{j=1}^{N-v} a_j a_{j+v}$ are as small as possible: $|c_v| \le 1\,$ for all $1 \le v < N$.[1] Only nine[2] Barker sequences are known, all of length N at most 13.[3] Barker's 1953 paper asked for sequences with the stronger condition $c_v \in \{-1, 0\}$; only four such sequences are known, shown in bold in the table below.[3]

## Known Barker codes

Here is a table of all known Barker codes, where negations and reversals of the codes have been omitted. A Barker code has a maximum autocorrelation sequence which has sidelobes no larger than 1. It is generally accepted that no other perfect binary phase codes exist.[4][5] (It has been proven that there are no further odd-length codes,[6] nor even-length codes of N < 1022.[7])

| Length | Codes | Sidelobe level ratio[8][9] |
|---|---|---|
| 2 | +1 −1, +1 +1 | −6 dB |
| 3 | +1 +1 −1 | −9.5 dB |
| 4 | +1 +1 −1 +1, +1 +1 +1 −1 | −12 dB |
| 5 | +1 +1 +1 −1 +1 | −14 dB |
| 7 | +1 +1 +1 −1 −1 +1 −1 | −16.9 dB |
| 11 | +1 +1 +1 −1 −1 −1 +1 −1 −1 +1 −1 | −20.8 dB |
| 13 | +1 +1 +1 +1 +1 −1 −1 +1 +1 −1 +1 −1 +1 | −22.3 dB |

Barker codes of length N equal to 11 and 13 are used in direct-sequence spread spectrum and pulse compression radar systems because of their low autocorrelation properties (the sidelobe level of amplitude of the Barker codes is 1/N that of the peak signal).[10] A Barker code resembles a discrete version of a continuous chirp, another low-autocorrelation signal used in other pulse compression radars.

The positive and negative amplitudes of the pulses forming the Barker codes imply the use of biphase modulation or binary phase-shift keying; that is, the change of phase in the carrier wave is 180 degrees. Similar to the Barker codes are the complementary sequences, which cancel sidelobes exactly when summed; the even-length Barker code pairs are also complementary pairs. There is a simple constructive method to create arbitrarily long complementary sequences. For the case of cyclic autocorrelation, other sequences have the same property of having perfect (and uniform) sidelobes, such as prime-length Legendre sequences and $2^n-1$ Maximum length sequences (MLS). Arbitrarily long cyclic sequences can be constructed.

## Barker modulation

*Figure: Barker code used in BPSK modulation*

In wireless communications, sequences are usually chosen for their spectral properties and for low cross correlation with other sequences likely to interfere. In the 802.11b standard, an 11-chip Barker sequence is used for the 1 and 2 Mbit/sec rates. The value of the autocorrelation function for the Barker sequence is 0 or −1 at all offsets except zero, where it is +11. This makes for a more uniform spectrum, and better performance in the receivers.[11]

## References

1. ^ Barker, R. H. (1953). "Group Synchronizing of Binary Digital Sequences". Communication Theory. London: Butterworth. pp. 273–287.
2. ^ https://oeis.org/A091704
3. ^ a b Borwein, Peter; Mossinghoff, Michael J. (2008). "Barker sequences and flat polynomials". In James McKee; Chris Smyth. Number Theory and Polynomials. LMS Lecture Notes 352. Cambridge University Press. pp. 71–88. ISBN 978-0-521-71467-9.
4. ^
5. ^ http://www.math.wpi.edu/MPI2008/TSC/TSC-MPI.pdf
6.
^ Turyn and Storer, "On binary sequences", Proceedings of the AMS, volume 12 (1961), pages 394-399 7. ^ Leung, K., and Schmidt, B., "The Field descent method", Design, Codes and Cryptography, volume 36, pages 171-188
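A quick numerical check of the defining autocorrelation property for the length-13 code in the table above (NumPy sketch):

```python
import numpy as np

barker13 = np.array([+1, +1, +1, +1, +1, -1, -1, +1, +1, -1, +1, -1, +1])

def aperiodic_autocorr(a):
    """Non-cyclic autocorrelation c_v = sum_j a_j * a_{j+v}."""
    N = len(a)
    return [int(np.dot(a[:N - v], a[v:])) for v in range(N)]

c = aperiodic_autocorr(barker13)
print(c)                                  # [13, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
assert all(abs(x) <= 1 for x in c[1:])    # off-peak sidelobes no larger than 1
```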
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 7, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8253573775291443, "perplexity": 4344.446271049947}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657120446.62/warc/CC-MAIN-20140914011200-00248-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
https://www.jiskha.com/questions/294299/what-is-the-initial-temperature-in-degrees-c-of-a-system-that-has-the-pressure-decreased
# chemistry

what is the initial temperature (in degrees C) of a system that has the pressure decreased by 10 times while the volume increased by 5 times with a final temperature of 150 K? a)27 b)75 c)300 d)-198 e) none of the above Please explain--I need to know how to do this--this is only a practice question.

1. Use (P1V1)/T1 = (P2V2)/T2 Make up values for those not listed, then multiply or divide by 10 or 5 to obtain the new values. Don't forget to use Kelvin for temperature. When you solve for the final T, remember to subtract 273 to change Kelvin back to C.
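A small sketch of the suggested approach, with made-up initial values as the answer recommends (the units cancel in the ratio):

```python
# Pick arbitrary initial pressure and volume, apply the stated changes,
# then solve P1*V1/T1 = P2*V2/T2 for the initial temperature T1.
P1, V1 = 1.0, 1.0
P2, V2, T2 = P1 / 10, 5 * V1, 150.0   # pressure /10, volume *5, final T in kelvin
T1 = T2 * (P1 * V1) / (P2 * V2)
print(T1, T1 - 273)                   # 300.0 K  ->  27 degrees C  (choice a)
```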
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8843173384666443, "perplexity": 1533.249172209233}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439739370.8/warc/CC-MAIN-20200814190500-20200814220500-00034.warc.gz"}
http://tex.stackexchange.com/questions/496/what-is-wrong-with-the-line-spacing-of-my-lists-of-figures-and-tables/552
# What is wrong with the line spacing of my lists of figures and tables? Since a good picture is better than a long text, here is the problem: And here is most of my Latex file up to this point: \documentclass[12pt,a4paper,oneside]{book} \usepackage{setspace} \usepackage[francais, english]{babel} \usepackage[applemac]{inputenc} \usepackage{textcomp} \usepackage{graphicx} \usepackage{enumerate} \usepackage{array} \usepackage[bf,figurewithin=none,tablewithin=none]{caption} \usepackage{verbatim} \usepackage{tocloft} % Margins \usepackage[left=4cm, right=2cm, top=3cm, bottom=2cm]{geometry} % Nomenclature -> List of Abbreviations \usepackage[intoc]{nomencl} \renewcommand{\nomname}{List of Abbreviations} \makenomenclature % Make magic URL \usepackage{hyperref} \usepackage[all]{hypcap} \hypersetup{ pdfborder = {0 0 0 0} } % APA style referencing \usepackage{natbib} % Hack for URL serif font formating \let\oldUrl\url \renewcommand{\url}[1] { \urlstyle{same}\oldUrl{#1} } % ToC & Abbreviations in the ToC \usepackage[chapter]{tocbibind} % Widows & Orphans \widowpenalty=10000 \clubpenalty=10000 \begin{document} \selectlanguage{english} \frontmatter \setcounter{tocdepth}{1} \tableofcontents % --------------------------List of Abbreviations---------- \clearpage \printnomenclature[3 cm] % --------------------------List of Figures------------------- \clearpage \listoffigures \listoftables Do you have any idea what is going on with the line spacing? - Could you provide a minimal example that reproduces the problem? Do you really need to load all those packages to get the bug to kick in? –  Juan A. Navarro Jul 28 '10 at 17:20 In addition to Juans comment I would suggest that your example should compile without further edits. So if I copy your example into my editor and compile it, it should produce the output you wish, –  qbi Jul 28 '10 at 17:57 Maybe this could be taken to the meta. In a sense how should (minimal) examples be, so best answers are possible. –  Nils Schmidt Jul 28 '10 at 22:51 @Nils: I'm not trying to silence any discussion, but I do think the notion of a minimal example is straightforward. It is an example which: 1) will actually compile as given on any standard tex system, and 2) contains only that which is necessary to reproduce the behaviour exemplified. –  vanden Jul 29 '10 at 14:59 @vanden: I totally agree, but maybe it is a topic that should be addressed in meta, or maybe taken into the FAQs. –  Nils Schmidt Jul 30 '10 at 11:11 On my system, appending the following to your example \begin{figure}\caption{Something}\end{figure} \begin{figure}\caption{Something}\end{figure} \begin{figure}\caption{Something}\end{figure} \begin{table}\caption{Something}\end{table} \begin{table}\caption{Something}\end{table} \begin{table}\caption{Something}\end{table} \end{document} produces a document where spacing between lines are all fine. Please do try to build a minimal and complete example that reproduces the bug. - After much trying to replicate the behavior in a "minimal example", I actually found out what is happening. First let me say that I tried both ways: trying to recreate the behavior from scratch by adding bricks, and trying to provide a minimal example by removing bricks from my original document. Finally, and sort of thanks to this other question, I found out that my "behavior" is not actually a bug, but a feature! 
In fact, what you can't really guess from my picture up there, is that all my figures were in the same chapter (document class is "book"), while all tables are in different chapters. Hence the line spacing for the tables, but not the figures, which symbolizes the chapter changes. Anyway, sorry for bothering you and for not getting back to you earlier. Now I am going to try and see how I can add a line space between all figures and tables, independently from the fact that they are in the same chapter or not. - We should eventually add this to the faq, including some hints on how to build minimal examples. I can give you some preview: usually the removing bricks option works best; also usually, as you did, it is pretty common that you find the cause of your problem just by trying to build a minimal example. –  Juan A. Navarro Jul 29 '10 at 19:44
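For the OP's last step (adding a uniform gap between all List of Figures / List of Tables entries, independent of chapter boundaries), one possible route is via tocloft, which the preamble already loads. This is a sketch based on my reading of the tocloft interface; the length names are assumptions to verify against the package documentation:

```latex
% Sketch (assumes tocloft's \cftbeforefigskip / \cftbeforetabskip lengths):
% insert a constant vertical gap before every LoF / LoT entry,
% regardless of whether a chapter boundary occurs.
\setlength{\cftbeforefigskip}{\baselineskip}
\setlength{\cftbeforetabskip}{\baselineskip}
```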
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8486368060112, "perplexity": 1115.5428214391445}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207925201.39/warc/CC-MAIN-20150521113205-00257-ip-10-180-206-219.ec2.internal.warc.gz"}
http://rinterested.github.io/statistics/tensors.html
#### TENSORS, TANGENT AND ORTHOGONAL VECTORS IN GENERALIZED CURVILINEAR COORDINATES:

##### Abstract:

A vector is a physical quantity and does not depend on any co-ordinate system. It needs to be expanded in some basis for practical calculation, and its components do depend on the chosen basis. The expansion in an orthonormal basis is mathematically simple, but in many physical situations we have to choose a non-orthogonal basis (or oblique co-ordinate system), and the expansion of a vector in a non-orthogonal basis is not convenient to work with. With the notion of contravariant and covariant components of a vector, we make a non-orthogonal basis behave like an orthonormal one. We introduce $$\vec a = e_1, \; \vec b=e_2,\; \vec c=e_3$$ for the contravariant basis and $$\vec a' = e^1, \; \vec b'=e^2,\; \vec c'=e^3$$ for the covariant basis. With this notation the completeness relation $I = \vec a \vec a' + \vec b \vec b' + \vec c \vec c'$ becomes $$I = e_\mu e^\mu\tag {23}$$ and the orthogonality relations $\vec a\cdot \vec a' = \vec b\cdot \vec b'=\vec c\cdot \vec c'=1;\; \vec a\cdot \vec b'=\vec a\cdot \vec c'=0;\;\vec b\cdot \vec a'=\vec b\cdot \vec c'=0;\;\vec c\cdot \vec a'=\vec c\cdot \vec b'=0$ become $$e_i\cdot e^j =\delta_i^j\tag {24}$$ where summation over dummy indices is understood and $$\delta_i^j$$ is the standard Kronecker delta function. With the introduction of superscript and subscript notation we generalise equation (23) and equation (24) to n-dimensional Euclidean space. The contravariant component of any arbitrary vector $$\vec A$$ is $$A^i$$ with a superscript index, and the covariant component is $$A_i$$ with a subscript index. The dimension of a contravariant component is the inverse of that of the corresponding covariant component, and hence we expect the behaviour of contravariant and covariant vectors under co-ordinate transformations to be inverse to each other.

KEY point: In a Cartesian system, covariant and contravariant components are the same.

Imagine a differential displacement vector in two different coordinate systems, $$X$$ and $$Y.$$ What follows is predicated on the assumption that we know the equations relating each component ($$m$$) in the $$X$$ coordinate system to the $$Y$$ coordinate frame: $$Y^n = f(X^m)$$ and $$X^p = g(Y^z).$$ The change in coordinates of the differential displacement vector, knowing the transformation equations, is given by: \begin{align} dy^1 &= \frac{\partial y^1}{\partial x^1} dx^1 + \frac{\partial y^1}{\partial x^2} dx^2 + \frac{\partial y^1}{\partial x^3} dx^3 + \cdots + \frac{\partial y^1}{\partial x^n} dx^n\\ dy^2 &= \frac{\partial y^2}{\partial x^1} dx^1 + \frac{\partial y^2}{\partial x^2} dx^2 + \frac{\partial y^2}{\partial x^3} dx^3+\cdots+\frac{\partial y^2}{\partial x^n} dx^n\\ \vdots\\ dy^d &= \frac{\partial y^d}{\partial x^1} dx^1 + \frac{\partial y^d}{\partial x^2} dx^2 + \frac{\partial y^d}{\partial x^3} dx^3+\cdots+\frac{\partial y^d}{\partial x^n} dx^n \end{align} So any particular component in the new coordinate system would be of the form: $dy^n = \frac{\partial y^n}{\partial x^\color{blue}{m}} dx^{\color{blue}{m}} \tag{Ref.1}$ with the color coding indicating Einstein's convention.
Expressed in matrix form: $\begin{bmatrix} dy^1\\dy^2\\dy^3\\\vdots\\dy^d \end{bmatrix}= {\begin{bmatrix} \frac{\partial y^1}{\partial x^1} & \frac{\partial y^1}{\partial x^2} & \frac{\partial y^1}{\partial x^3} &\cdots& \frac{\partial y^1}{\partial x^n}\\ \frac{\partial y^2}{\partial x^1} & \frac{\partial y^2}{\partial x^2} & \frac{\partial y^3}{\partial x^3} &\cdots& \frac{\partial y^n}{\partial x^n}\\ \vdots&\vdots&\vdots&&\vdots\\ \frac{\partial y^d}{\partial x^1} & \frac{\partial y^d}{\partial x^2} & \frac{\partial y^d}{\partial x^3} &\cdots& \frac{\partial y^d}{\partial x^n}\\ \end{bmatrix}} \large\color{red}{\begin{bmatrix} dx^1\\dx^2\\dx^3\\\vdots\\dx^n \end{bmatrix}}$ We can generalize to a vector $$V$$ (column vector in red), which can be transfored from $$X$$ to $$Y$$ coordinate systems as: $\bbox[yellow, 5px]{V^n_{(Y)} = \frac{\partial y^{n}}{\partial x^{\color{red}{m}}}\;V^{\color{red}{m}}_{(X)}}$ Notice that in this case $$n = d$$ (the $$d$$ in the matrix above), while $$m$$ is a dummy index, but it is in this case equal to $$n$$. So $$n = d$$ is the dimension of the vector in $$Y$$, or the number of rows of the transformation matrix; and $$m$$ is the number of columns, or the dimension of the vector in the $$X$$ coordinate system. If we can find the vector component $$n$$ in $$Y$$ of $$V$$ by taking these types of derivatives, we talk about a contravariant vector. The components are expressed as a superscript. Now let’s take two contravariant vectors: $$A_Y^m$$ ($$m$$-th component of the $$A$$ vector in the $$Y$$ frame): $$\large A_{(Y)}^m= \frac{\partial y^m}{\partial x^r} A_{(X)}^r$$; and the second vector $$\large B_{(Y)}^n= \frac{\partial y^n}{\partial x^s} B_{(X)}^s.$$ If we multiply these two vectors together: $\large A^m_{(Y)} B^n_{(Y)}$ we have to take the $$d$$ number of components that $$A$$ has and modify each one by the $$d$$ number of components $$B$$ has, expressing it as: $\large \bbox[10px, border:2px solid red]{T^{mn}_{(Y)} = \large A^m_{(Y)} B^n_{(Y)} =\frac{\partial y^m}{\partial x^r} \; \frac{\partial y^n}{\partial x^s}\; A_{(X)}^r\; B_{(X)}^s= \frac{\partial y^m}{\partial x^\color{blue}{r}} \; \frac{\partial y^n}{\partial x^\color{blue}{s}}\;T^{\color{blue}{rs}}_{(X)}}.$ These are contravariant tensors of the second rank. The $$m,n,r,s$$ superscript are the vector components (elements or entries), while $$(X),(Y)$$ are coordinate systems. So we note that tensors enter when there is a transformation between coordinate systems of more than one vector. This is consistent with the Wikipedia entries both of vectors as multilinear maps: A downside to the definition of a tensor using the multidimensional array approach is that it is not apparent from the definition that the defined object is indeed basis independent, as is expected from an intrinsically geometric object. Although it is possible to show that transformation laws indeed ensure independence from the basis, sometimes a more intrinsic definition is preferred. One approach is to define a tensor as a multilinear map. In that approach a type $$(p, q)$$ tensor $$T$$ is defined as a map, $T:\underbrace{V^{*}\times \dots \times V^{*}}_{p{\text{ copies}}}\times \underbrace{V\times \dots \times V}_{q{\text{ copies}}}\rightarrow \mathbf {R}$ where $$V$$ is a (finite-dimensional) vector space and $$V^∗$$ is the corresponding dual space of covectors, which is linear in each of its arguments. 
By applying a multilinear map $$T$$ of type $$(p, q)$$ to a basis $$\{e_j\}$$ for $$V$$ and a canonical cobasis $$\{\epsilon^i\}$$ for $$V^∗$$, $T_{j_{1}\dots j_{q}}^{i_{1}\dots i_{p}}\equiv T(\mathbf{\varepsilon }^{i_{1}},\ldots ,\mathbf {\varepsilon }^{i_{p}},\mathbf{e}_{j_{1}},\ldots ,\mathbf{e}_{j_{q}})$ a $$(p + q)$$-dimensional array of components can be obtained. However, the most fitting definition is as multidimensional arrays: Just as a vector in an n-dimensional space is represented by a one-dimensional array of length n with respect to a given basis, any tensor with respect to a basis is represented by a multidimensional array. For example, a linear operator is represented in a basis as a two-dimensional square n × n array. The numbers in the multidimensional array are known as the scalar components of the tensor or simply its components. They are denoted by indices giving their position in the array, as subscripts and superscripts, following the symbolic name of the tensor. For example, the components of an order $$2$$ tensor $$T$$ could be denoted $$T_{ij}$$, where $$i$$ and $$j$$ are indices running from $$1$$ to $$n$$, or also by $$T_i^j$$. Whether an index is displayed as a superscript or subscript depends on the transformation properties of the tensor, described below. The total number of indices required to identify each component uniquely is equal to the dimension of the array, and is called the order, degree or rank of the tensor. However, the term “rank” generally has another meaning in the context of matrices and tensors. Just as the components of a vector change when we change the basis of the vector space, the components of a tensor also change under such a transformation. Each tensor comes equipped with a transformation law that details how the components of the tensor respond to a change of basis. The components of a vector can respond in two distinct ways to a change of basis (see covariance and contravariance of vectors), where the new basis vectors $$\displaystyle \mathbf {\hat {e}} _{i}$$ are expressed in terms of the old basis vectors $$\displaystyle \mathbf {e} _{j}$$ as, $\displaystyle \mathbf {\hat {e}}_{i}=\sum_{j=1}^{n}\mathbf {e}_{j}R_{i}^{j}=\mathbf {e} _{j}R_{i}^{j}.$ Here $$R^j_i$$ are the entries of the change of basis matrix, and in the rightmost expression the summation sign was suppressed: this is the Einstein summation convention. The components $$v^i$$ of a column vector $$v$$ transform with the inverse of the matrix $$R$$, $\displaystyle {\hat {v}}^{i}=(R^{-1})_{j}^{i}v^{j},$ where the hat denotes the components in the new basis. This is called a contravariant transformation law, because the vector transforms by the inverse of the change of basis. In contrast, the components, $$w_i$$, of a covector (or row vector), $$w$$ transform with the matrix $$R$$ itself, $\displaystyle {\hat {w}}_{i}=w_{j}R_{i}^{j}.$ This is called a covariant transformation law, because the covector transforms by the same matrix as the change of basis matrix. The components of a more general tensor transform by some combination of covariant and contravariant transformations, with one transformation law for each index. If the transformation matrix of an index is the inverse matrix of the basis transformation, then the index is called contravariant and is traditionally denoted with an upper index (superscript). If the transformation matrix of an index is the basis transformation itself, then the index is called covariant and is denoted with a lower index (subscript). 
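The transformation laws quoted above (components of a column vector go with $R^{-1}$, components of a covector with $R$ itself) can be checked numerically. A small NumPy sketch with an arbitrary invertible change-of-basis matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.normal(size=(3, 3))     # change-of-basis matrix (assumed invertible)
v = rng.normal(size=3)          # contravariant (column-vector) components
w = rng.normal(size=3)          # covariant (covector / row-vector) components

v_hat = np.linalg.inv(R) @ v    # contravariant law: transforms with R^{-1}
w_hat = w @ R                   # covariant law: transforms with R itself

# the pairing w(v) is basis independent, as expected for a covector acting on a vector
print(np.allclose(w @ v, w_hat @ v_hat))   # True
```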
To move on to covariant tensors it is necessary to discuss what a gradient vector is: So if a scalar $$\varphi$$ is a function of $$X^1$$ and $$X^2$$, and we see a differential displacement $$\vec{dl}$$, the change in $$\varphi$$ will be given by: $\underset{\color{red}{\text{SCALAR}}}{\underbrace{\Huge{d\varphi}}} =\underset{\text{grad. vec.}}{\underbrace{\frac{\partial \varphi}{\partial x^1}}}\,dx^1 + \underset{\text{grad. vec.}}{\underbrace{\frac{\partial \varphi}{\partial x^2}}}\,dx^2\tag 1$ KEY POINT: The gradient vector is in the dual space, taking in a “regular” vector and producing a scalar. In the case of the contravariant vector, a vector in a coordinate frame was transformed into another vector in a different frame. We also have that $\vec{dl}= dx^1 \vec{X^1} + dx^2 \vec{X^2}\tag 2$ with $$\vec{X^1}$$ and $$\vec{X^2}$$ representing the unit vectors. We want a vector that dotted with equation $$(2)$$ results in equation $$(1).$$ Keeping in mind that $$\vec{X^1}$$ and $$\vec{X^2}$$ are unit vectors, the vector we are looking for is the gradient of the scalar $$\varphi$$: $\vec \nabla \varphi= \frac{\partial \varphi}{\partial x^1}\,\vec{X^1} + \frac{\partial \varphi}{\partial x^2}\,\vec{X^2}\tag 3$ Here’s the dot product: $d\varphi=\vec{dl}\,\vec{\nabla}\varphi=\color{brown}{ \begin{bmatrix}dx^1 \vec{X^1} & dx^2 \vec{X^2} \end{bmatrix} \begin{bmatrix} \frac{\partial \varphi}{\partial x^1}\,\vec{X^1} \\ \frac{\partial \varphi}{\partial x^2}\,\vec{X^2} \end{bmatrix}}=\frac{\partial \varphi}{\partial x^1}\,dx^1 + \frac{\partial \varphi}{\partial x^2}\,dx^2$ So, $d\varphi = \vec{dl}\,\vec{\nabla}\varphi$ Generalizing equation $$(3)$$, $\vec\nabla\varphi=\underset{coord. comp. grad. vec.}{\underbrace{\Large{\frac{\partial\varphi}{\partial x^\color{blue}{m}}}}}\;\vec{X^\color{blue}{m}}$ is the expression of the gradient in the $$X$$ coordinate frame. In the $$Y$$ coordinate frame it would be: $\vec\nabla\varphi=\Large{\frac{\partial\varphi}{\partial y^\color{blue}{n}}}\;\vec{Y^\color{blue}{n}}$ Applying the chain rule: $\color{red}{\frac{\partial \varphi}{\partial y^n}}= \frac{\partial \varphi}{\partial x^m} \frac{\partial x^m}{\partial y^n}=\frac{\partial x^m}{\partial y^n}\color{red}{\frac{\partial \varphi}{\partial x^m}}$ This last equation relates the components of the gradient vector in the $$X$$ coordinate frame to the components in the $$Y$$ frame. 
Notice that the arrangement of the dummy indices is: $\frac{\partial \varphi}{\partial y^n}= \frac{\partial x^{\color{red}{m}}}{\partial y^n}\frac{\partial \varphi}{\partial x^{\color{red}{m}}}$ In matrix form: $\begin{bmatrix} \frac{\partial \varphi}{\partial y^1}\\\frac{\partial \varphi}{\partial y^2}\\\frac{\partial \varphi}{\partial y^3}\\\vdots\\\frac{\partial \varphi}{\partial y^d} \end{bmatrix}= {\begin{bmatrix} \frac{\partial x^1}{\partial y^1} & \frac{\partial x^2}{\partial y^1} & \frac{\partial x^3}{\partial y^1} &\cdots& \frac{\partial x^d}{\partial y^1}\\ \frac{\partial x^1}{\partial y^2} & \frac{\partial x^2}{\partial y^2} & \frac{\partial x^3}{\partial y^2} &\cdots& \frac{\partial x^d}{\partial y^2}\\ \vdots&\vdots&\vdots&&\vdots\\ \frac{\partial x^1}{\partial y^d} & \frac{\partial x^2}{\partial y^d} & \frac{\partial x^3}{\partial y^d} &\cdots& \frac{\partial x^d}{\partial y^d}\\ \end{bmatrix}} \large\color{red}{\begin{bmatrix} \frac{\partial \varphi}{\partial x^1}\\\frac{\partial \varphi}{\partial x^2}\\\frac{\partial \varphi}{\partial x^3}\\\vdots\\\frac{\partial \varphi}{\partial x^d} \end{bmatrix}}$ This arrangement (the red column vector - a gradient vector in coordinate system $$X$$) is the form that defines covariant vectors - for example $$W:$$ $\bbox[yellow, 5px]{W^{(Y)}_n = \frac{\partial x^{\color{red}{m}}}{\partial y^n}\, W^{(X)}_{\color{red}{m}}}$ Their components transform from one coordinate system to another the way the components of a gradient vector do. The components carry subscripts! Let's say we have two covariant vectors $$C$$ and $$D$$, each with $$d$$ components: $C_m^{(Y)}=\frac{\partial x^r}{\partial y^m} C_r^{(X)}$ $D_n^{(Y)}=\frac{\partial x^s}{\partial y^n} D_s^{(X)}$ Multiplying them, $C_m^{(Y)}D_n^{(Y)}=\frac{\partial x^r}{\partial y^m}\frac{\partial x^s}{\partial y^n}C_r^{(X)}D_s^{(X)}$ $\Large \bbox[10px, border:2px solid red]{T_{mn}^{\small(Y)}= \frac{\partial x^{\color{blue}{r}}}{\partial y^m}\frac{\partial x^{\color{blue}{s}}}{\partial y^n}T_{\color{blue}{rs}}^{(X)}}$ This is a covariant tensor! There are also mixed tensors, such as: $\Large T^n_m{\small (Y)} =\frac{\partial x^{\color{red}{r}}}{\partial y^m}\frac{\partial y^n}{\partial x^{\color{blue}{s}}}T^{\color{blue}{s}}_{\color{red}{r}}\small (X)$ In a generalized curvilinear coordinate system in $$3$$-dimensional space, the three coordinate lines can represent, for example, spherical coordinates: say $$u_1$$ represents the radial distance (the magnitude); $$u_2$$ is for the $$\theta$$ angle; and $$u_3$$ stands for $$\phi.$$
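As an illustration of the boxed covariant transformation law, the NumPy sketch below transforms the components of a rank-2 covariant tensor using the Jacobian $$\partial x^r/\partial y^m$$. The choice of the Euclidean metric and of Cartesian/polar coordinates is an assumption made for the example, not something taken from the text; the expected output, diag(1, r²), is the familiar polar-coordinate form of the metric.

```python
# NumPy check (illustrative) of T^(Y)_mn = (dx^r/dy^m)(dx^s/dy^n) T^(X)_rs,
# applied to the Euclidean metric with X = Cartesian and Y = polar coordinates.
import numpy as np

def jacobian_x_wrt_y(r, t):
    # Rows indexed by x^r, columns by y^m: J[r, m] = dx^r / dy^m
    return np.array([[np.cos(t), -r * np.sin(t)],
                     [np.sin(t),  r * np.cos(t)]])

T_x = np.eye(2)            # metric components in Cartesian coordinates
r, t = 2.0, 0.7            # an arbitrary point
J = jacobian_x_wrt_y(r, t)

# T^(Y)_mn = J[r, m] * J[s, n] * T^(X)_rs  (summed over r and s)
T_y = np.einsum('rm,sn,rs->mn', J, J, T_x)

print(np.round(T_y, 10))   # approximately [[1, 0], [0, r**2]] = [[1, 0], [0, 4]]
```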
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.971214771270752, "perplexity": 243.72126206835085}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574018.53/warc/CC-MAIN-20190920113425-20190920135425-00329.warc.gz"}
http://www.math.columbia.edu/~woit/wordpress/?p=492
# More On Geometric Langlands (a Grand Unified Theory of Math?) After mentioning in the last posting that Witten is giving talks in Berkeley and Cambridge this week, I found out about various recent developments in Geometric Langlands, some of which Witten presumably will be talking about. Edward Frenkel has put a draft version of his new book Langlands Correspondence for Loop Groups on his web-site. In the introduction he describes the Langlands Program as “a kind of Grand Unified Theory of Mathematics”, initially linking number theory and representation theory, now expanding into relations with geometry and quantum field theory. The book is nearly 400 pages long, and to be published by Cambridge University Press. Frenkel also notes that recent developments in geometric Langlands have focused on extending the story from the case of flat connections on a Riemann surface to connections with ramification (i.e. certain point singularities are allowed). He has a new paper out on the arXiv about this, entitled Ramifications of the geometric Langlands program, and he writes that: in a forthcoming paper [by Gukov and Witten] the geometric Langlands correspondence with tame ramification is studied from the point of view of dimensional reduction of four-dimensional supersymmetric Yang-Mills theory. The title of the forthcoming Gukov-Witten paper is supposedly “Gauge theory, ramification, and the geometric Langlands program.” At first I thought Ed Frenkel’s claim that geometric Langlands was going to give a Grand Unified Theory of mathematics was completely over the top, but seeing how some of these very different and fascinating relations between new kinds of mathematics and quantum field theory seem to be coming together, I’m more and more willing to believe that investigating them will come to dominate mathematical physics in the coming years. Update: Slides from Witten’s Berkeley lectures are here. And many thanks to David Ben-Zvi for the informative comments! This entry was posted in Uncategorized. Bookmark the permalink. ### 50 Responses to More On Geometric Langlands (a Grand Unified Theory of Math?) 1. A.J. says: Hi Peter, Witten has only delivered one lecture so far, and it was devoted to reviewing background material: mostly S-duality and a few words about topological twisting, all of which can be found in the Kapustin-Witten paper. 2. Peter Woit says: Thanks A.J.! It would be great if you could keep us informed about the rest of the lectures… 3. SFB says: It sounds like they are doing interesting math, but leaving physics to the LQG crowd. 4. atrings says: I agree with SFB for the “interesting math “,but not for the”LQG crowd”. 5. Richard says: “At first I thought Ed Frenkel’s claim that geometric Langlands was going to give a Grand Unified Theory of mathematics was completely over the top, but seeing how some of these very different and fascinating relations between new kinds of mathematics and quantum field theory seem to be coming together, I’m more and more willing to believe that investigating them will come to dominate mathematical physics in the coming years.” Perhaps a domination of mathematical physics, but the claim of a grand unification of mathematics is in fact way over the top unless you believe that mathematics is nothing but mathematical physics. It probably all depends on your own personal values, biases, points of view, and even whom you believe owns mathematics. Recall Lubos’ wild claim that someday mathematics will be completely subsumed by string theory? 6. 
onymous says: I expect many of the people who have been working on geometric Langlands for years would be kind of shocked to be called mathematical physicists, Richard. Do they all instantly become mathematical physicists just because Witten got interested in what they’re doing? 7. Richard says: Onymous – I don’t believe I said that. 8. onymous says: Apologies, I misread Peter’s original statement — didn’t notice that he specifically singled out mathematical physics — and so misinterpreted your “…unless you believe that mathematics is nothing but mathematical physics” as an implication that geometric Langlands is mathematical physics. Never mind. 9. David Ben-Zvi says: Hi and thanks for the references! (all notes on my page should be taken with many grains of salt..) I should point out that the preprint by Gukov and Witten doesn’t actually talk at all about link homology, so my talk description was perhaps premature, but a connection between geometric Langlands and some kind of link homology is to be expected following their ideas (cf Gukov’s Strings talk). Cautis and Kamnitzer also have very interesting work in progress on such a relation. After all, geometric Langlands is a very general categorification program in representation theory, so one would expect it to relate to the kinds of categorifications that give rise to Khovanov homology. There just aren’t too many fundamental structures associated with a semisimple Lie group, and they all connect.. Of course it’s a joke to speak of geometric Langlands as a grand unified theory… but the Langlands duality is certainly among the broadest themes in math, a kind of nonabelian generalization of the Fourier transform, and it’s extremely exciting that we can view it in the geometric setting as electric-magnetic duality in four dimensional gauge theories! 10. relativist says: David Ben-Zvi says “it’s extremely exciting that we can view it in the geometric setting as electric-magnetic duality in four dimensional gauge theories!” Can you expand on that? Sounds very interesting. 11. urs says: it’s extremely exciting that we can view it in the geometric setting as electric-magnetic duality in four dimensional gauge theories! Can you expand on that? Sounds very interesting. This is the insight of the Kapustin-Witten paper. You can find a summary here. 12. urs says: a kind of nonabelian generalization of the Fourier transform Is it a nonabelian generalization, or isn’t it rather a categorification of the Fourier transform? It seemed to me that much of Langlands can be nicely understood as taking place in categorified linear algebra. I have made remarks on how the Hecke operator looks like a 2-linear map for instance here. 13. urs says: It would be great if you could keep us informed about the rest of the lectures… If anyone feels like reporting on interesting lectures online, we have a guest account for that over on the n-Café. For instance we had David Roberts guest-reporting from a lecture by Brian Wang here, similar to the many guest reports we had # at the string coffee table. 14. David Ben-Zvi says: Urs: “Is it a nonabelian generalization, or isn’t it rather a categorification of the Fourier transform?” well it’s both.. the main difficulty is the nonabelian nature rather than the categorification, and that is where Langlands tells us what to do (in the geometric or classical, noncategorified setting). 
Categorifications of the Fourier transform have been used for almost 30 years I think (starting with the Fourier-Deligne transform, see eg Laumon’s first ICM), and the geometric Langlands program suggests that one can extend this to nonabelian settings (G-bundles on curves). By the way maybe this is an excuse to air one of my pet peeves, the use of the term “Fourier-Mukai” to refer to any functor between derived categories given by an integral kernel.. I would be surprised if an analyst referred to any map on function spaces given by integration against a kernel (or any matrix) as a Fourier transform, and the same should hold in the categorified setting — in some precise sense (due to Toen and which I’m badly paraphrasing) all functors between derived categories are given by integral kernels! “Honest” Fourier-Mukai transforms should have additional structure and properties (for example taking convolution to tensor product). Similarly not any duality is a T-duality! 15. urs says: well it’s both.. […] Categorifications of the Fourier transform have been used […] and the geometric Langlands program suggests that one can extend this to nonabelian settings […] Great, thanks! That’s what I was hoping some expert would say. Probably I just talked to the wrong experts so far! Because each time I’d ask a question along the lines “isn’t an eigenbrane just a categorified eigenvector in some 2-vector space” the answer I’d get would be something like “no, 2-vector space only appear after we categorify Langlands itself, like Kapranov discussed.” http://golem.ph.utexas.edu/category/2006/10/quantization_and_cohomology_we_1.html#c005444 16. Peter Orland says: I don’t know much category theory, but I thought that the non-Abelian generalization of the Fourier transform is the character expansion (or Plancherel transform in the non-compact case) for functions on non-Abelian groups. Aside from a character formula, that is the simplest generalization. Obviously, I am missing the point and something deeper is meant. Can anyone explain this to a dumb theoretical physicist? 17. Peter Orland says: I just wanted to add that the sort of examples I mentioned don’t help much with non-Abelian duality in classical or quantum field theory. To perform a duality transformation, a zero-curvature condition is Fourier transformed and the parameter integrated over is the dual field. This only really works in the Abelian case. There are non-Abelian generalizations of duality done this way, but they are rather messy, and not obviously useful. 18. A.J. says: Well, Witten finished his lectures, but ran out of time to say much of anything about ramification. There’s just too much information to be covered in (somewhat less than) 3 hours. Most of what he said is pretty well covered in David Ben-Zvi’s notes, and in Urs’s posts on the subject, or in the Kapustin-Witten paper for that matter. We did get scans of his notes, so perhaps those will be available online one of these days. 19. urs says: Can anyone explain this to a dumb theoretical physicist? One way to get an intuition for what is going on with these Hecke operators and similar transformations is to consider the drastically oversimplified baby toy example situation where the underlying spaces are in fact just – finite sets. A vector bundle over a finite set is then just an array of finitely many vector spaces. Think of that as a vector whose entries are vector spaces. Such a beast is known as a (Kapranov-Voevodsky) 2-vector. 
The categorification involved here is that which takes the monoid of complex numbers and replaces it by the monoidal category of complex vector spaces. So we can imagine doing linear algebra with these vectors whose entries are vector spaces by replacing sums of complex numbers by direct sums of vector spaces and products of complex numbers by tensor products of vector spaces. In particular, let X any Y be two finite sets and consider a vector bundle L over X x Y . By the above, this is now like a |X| x |Y| matrix with entries being vector spaces. Using the above dictionary, we can define the categorified matrix product of L with a 2-vector over Y, simply by using the ordinary prescription for matrix multiplication but replacing sums of numbers by direct sums of vector spaces and products of numbers by tensor products of vector spaces. One can convince onself, that this categorified action of a 2-matrix on a 2-vector can equivalently be reformulated in a more arrow-theoretic way as follows: We have projections p1 and p2 from X x Y to X and to Y, respectively. This makes X x Y into a span http://golem.ph.utexas.edu/category/2006/10/klein_2geometry_vi.html#c005232 Given a 2-vector V -> X over X, we may pull it back along p1 to X x Y, tensor the result componentwise with L and push the result of that back along p2. This operation produces precisely the naive categorified matrix product that I mentioned above. But the nice thing is that this pullback-tensor-pushforward along a “correspondence” like X x Y generalizes to vastly more interesting situations. There is an entire zoo of well-known operations of this kind. The Fourier-Moukai transformation is one example. The Hecke transformation that appears in geometric Langlans is another. In the above sense, all of these operations can be understodd as linear maps on 2-vector spaces. A description of what I just said, including some helpful diagrams and links to further material can be found here: 20. urs says: Concerning the abelian vs. nonabelian categorified Fourier transform: there is something called the “classical limit” of geometric Langlands, as decribed for instance here: Pantev on Langlands, II The Hecke operation in geometric Langlands is a generalization of the categorified Fourier transformation: is a “2-linear map” in the sense of my comment above http://www.math.columbia.edu/~woit/wordpress/?p=492#comment-19258 such that it coincides with the Fourier-Moukai transformation in this “classical limit”. In other words, the Hecke operation is a deformation of the Fourier-Moukai transformation. 21. Bert Schroer says: I never understood what is the relation of elliptic cohomology (not that I don’t know what it presents mathematically since I have followed the area with an ever increasing distance since the days of the Atiyah-Singer index theorem) with particle physics except that Witten has generated a certain enthusiasmus with some particle physicists. Since I have learned to make a distiction between physics and what (some) physicists are doing and since this blog (as Peter’s book) is primarily about the present state of particle physics I think it is a legitimate question to ask about its relation to particle physics. If this is not permitted then this will be my last contribution to this blog. 22. Anon says: To Peter Orland: You are correct about the Plancheral theorem. But that tells you that if you know the irreducible representations, and their dimensions/characters, you know how to decompose functions. 
It doesnt tell you what the characters are. In the first instance Langlands is a parameterization of irreducible reps, and a determination of their character; roughly they are in bijection with conjugacy classes in another group. The categorification nonsense is an elaboration of this, to say *all* information you can extract comes from this dual group. 23. Peter Orland says: Urs and Anon, Thanks for the responses. I understand that a character formula of some sort is need to make Plancheral meaningful. What I worry about is that even with such a character formula, there isn’t enough for non-Abelian electromagnetic duality. In fact, I am skeptical a USEFUL duality for pure Yang-Mills theorists exists. To carry out a duality transformation, the Bianchi identity needs to be imposed by integrating over a new field (in 3+1 dimensions, this field is a one-form). Then we would like to integrate out the orginal gauge field to obtain a action in this new field. Doing this in practice is tough. There are tricks for doing it with certain character formulas, but the dual theory is a mess, since the dual fields are discretely valued (o.k. on the lattice, but without a good continuum interpretation). Are these new techniques are somehow better? If so, it would be very interesting. 24. urs says: In the first instance Langlands is a parameterization of irreducible reps, and a determination of their character; roughly they are in bijection with conjugacy classes in another group. That’s the original “algebraic” Langlands thing. The categorification nonsense is an elaboration of this, to say *all* I think the categorification nonsense comes in when you pass from the original to the geometric Langlands correspondence. In the original Langlands setup, the Hecke operator is an ordinary linear map, acting on a space of modular forms. In the geometric version of the theory, it becomes the Hecke operator that acts on derived coherent sheaves on some moduli space. And that guy is no longer an ordinary linear map. But it is a categorified linear map, if you like (and also if you don’t like it). In particular, in a special limit it is nothing but a certain categorification of the Fourier transformation. 25. urs says: I never understood what is the relation of elliptic cohomology […] with particle physics Elliptic cohomology is not about particle physics. It is about string physics. Elliptic cohomology is to strings like particles are to K-cohomology #. But what is the direct relation of elliptic cohomology to geometric Langlands, that made you bring this up here? 26. Bert Schroer says: Interesting, so after all elliptic cohomology isn’t about particle physics it is rather about ST. That’s precisely what I expected. 27. Bert Schroer says: Urs Interesting, so after all elliptic cohomology isn’t about particle physics it is rather about ST. That’s precisely what I expected. I guess I got into the Langland’s column by accident, but without this accident I probably would not have received such a precise answer. 28. urs says: Interesting, so after all elliptic cohomology isn’t about particle physics it is rather about ST. Yes, check out the table at the beginning of the introduction of those notes. Generalized cohomology theories are labelled by something called their “chromatic filtration”. The idea is that a cohomology theory of chromatic level p comes from the physics of “p-particles” – otherwise known as (p-1)-branes. K-cohomology has filtration 1. It corresponds to 1-particles (0-branes). Ordinary points, that is. 
Elliptic cohomology has filtration 2. It corresponds to 2-particles, otherwise known as 1-branes or strings. Ordinary (singular) cohomology has filtration level 0. There is a precise sense in which it corresponds to 0-particles (or (-1) branes). I expect this table is open ended. But I have never seen anything about cohomology theories of chromatic filtration larger than 2. 29. urs says: I am skeptical a USEFUL duality for pure Yang-Mills theorists exists. It is a famous conjecture that 4-dimensional Yang-Mills theory has a duality called S-duality. Yang-Mills theories (in a given dimension, for a fixed number of supercharges) are parameterized by a complex number tau , the coupling constant, and a Lie group G, the gauge group. For N=4 supersymmetric Yang-Mills, there is conjectured to be an isomorphism between Yang-Mills theory for (tau,G) and that for (-1/tau , G^L) . -1/tau is, roughly, the inverted coupling constant (therefore: “weak-strong coupling duality”) and G^L is the Lie group that is Langlands dual to G. See the first few paragraphs of this, for instance. That this is indeed an isomorphism of field theories is not a theorem, but it is supported by enough evidence that makes everybody assume it is indeed true. This is the S-duality conjecture. Since the Langlands dual group appears in this conjecture, it has long been speculated that there is indeed a relation between S-duality and the Langlands program. But until recently nobody could really substantiate this. The achievement of the Kapustin-Witten work is to show that for the special case that the 4-dimensional Yang-Mills theory is suitably compactified down to two dimensions, the S-dualiy conjecture for Yang-Mills theory is essentially equivalent to the geometric Langlands conjecture. All the ingredients of geometric Langlands, like those moduli spaces of bundles and the derived coherent sheaves on them, can be understood in terms of field configurations and boundary conditions of compactified N=4 super Yang-Mills theory. Notice that this amounts to further support for the S-duality conjecture, because it increases the number of people that truest the S-duality conjecture by those mathematicians that trust the geometric Langland conjecture. But it might also be noteworthy that this suggests that the geometric Langlands duality is only a tiny aspect of a much bigger story – since it is (apparently) just the special case of S-duality applied to a very specific compactification of Yang-Mills theory only. 30. Peter Orland says: Urs, Yes, I know about the S-duality conjecture (I would much more interested in a similar conjecture about pure Yang-Mills than N=2 or N=4 Yang-Mills. Theories with adjoint matter are very different from those we know about in nature). Though a conjecture is nice, to really prove it operator equivalences are needed. The procedure I discussed before, character expansions of the Bianchi identity, etc., is the first step to find such equivalences. In Abelian theories, this is how Kramers-Wannier duality works. There are some non-Abelian constructions due to Sharachandra and Anishetty, they haven’t proved useful yet. 31. A.J. says: Urs, All the ingredients of geometric Langlands, like those moduli spaces of bundles and the derived coherent sheaves on them, can be understood in terms of field configurations and boundary conditions of compactified N=4 super Yang-Mills theory. The geometric Langlands correspondence is stated in terms of D-modules on the moduli stack of not-necessarily stable G-bundles. 
Kapustin & Witten’s work doesn’t quite give full information about the moduli stack, but only its semi-stable locus. As I far as I can tell, the relation between N=4 SYM and the Langlands correspondence for D-modules on the full stack hasn’t been completely spelled out. 32. ks says: Question to Urs. First of all thanks a lot for all your explanations. You work on cool stuff anyway ( though it is a little over my head at this time ). Do I understand your research program correctly when I assume that You try to link the standard model and ST in purely algebraic terms by means of higher category theory? Hence when changing the algebraic setting they do not look much different but are connected through certain higher morphisms? 33. Bert Schroer says: Peter Orland Conceptual realism demands to separate Kramers-Wannier duality (and its structural extension the order-disorder issue) from speculative ideas. The o-d duality is a local quantum physical phenomenon which has no known analog in higher dimensions. Whereas o-d is a phenomenon which has a solid operator algebraic intrinsic understanding (if you want I can provide you with recent literature) there is nothing like this for the S conjecture. By now Wikipedia has more material on wild conjectures than about genuine results. There is the danger that we may be fooled to our own simulacrums and metaphors in particular that conjectures solidify because they comes from somebody with a high status in the community or because they have been hanging around for a long time so that several generations have stepped on them. 34. urs says: wild conjectures S-duality is certainly a conjecture, but hardly a wild conjecture. I mean, that’s the point: S-duality is apparently as wild as geometric Langlands. 35. urs says: Kapustin & Witten’s work doesn’t quite give full information about the moduli stack, but only its semi-stable locus. Right, thanks. There are probably a couple of such technicalities. I am not working on this stuff, so it’s hard to keep them all in mind. So what about that “classical limit” in which, apparently, geometric Langlands is only proven so far. Does compactified SYM exactly coincide with the geometric Langlands data in that limit? 36. David says: A couple of comments: Kapustin-Witten’s theory does (as far as I understand) cover the full stack of bundles, not just the semistable locus. The sigma-model/mirror symmetry description fails outside the semistable locus, but they emphasize in the paper that the gauge theory sees the entire stack of bundles — I think the problem is us geometers have only been able in the past really to process the classical aspects of the theory (solns of the equations of motion etc) but quantum gauge theory is a lot smarter than we are (speaking for myself at least). As far as I know they can’t completely say what S-duality predicts off the semistable locus, but the important point is it does actually apply there. The classical limit of Langlands is only proven generically, missing the hardest locus — it’s a beautiful result and one of the best in the subject, but saying classical geometric Langlands is understood is on the same level as saying you understand (noncompact) Lie groups when you understand their diagonalizable elements — the hardest part involved unipotents.. Also I’m not sure I would think of Hecke operators as Fourier transforms – the Hecke operators are the symmetries of moduli of bundles (and sheaves on them), while the Fourier-Mukai type transforms relate G and G^ the dual group. 
One sense (of many) in which geometric Langlands is a nonabelian categorified generalization of the Fourier transform is that while Plancherel helps you decompose spaces of functions on a group, geometric Langlands aims to describe the CATEGORY of all representations of a group; since these categories are not semisimple, there's a big difference between listing irreducibles and their characters and actually describing the structure of general representations. (Geometric Langlands ideas can be used to study for example the category of Harish-Chandra modules for a real semisimple Lie group). 37. Peter Orland says: Bert, I cannot understand your explanation especially well. In my attempt to translate your statement into simple language, I conclude you mean more conjectures than solid statements dominate our field. I don't need to be reminded of this, since I have seen it all over the literature for the last decade or so. I was asking if the experts on Langlands believe a useful concrete electric-magnetic duality transformation can be constructed from non-Abelian Fourier transforms (character expansions). I suspect the answer is no, since no one gave me a simple "yes". 38. I'm probably mixing algebraic number theory with analytic number theory, but is there a relationship between elliptic cohomology and elliptic Mobius transformations? 39. urs says: Also I'm not sure I would think of Hecke operators as Fourier transforms – the Hecke operators are the symmetries of moduli of bundles Oh, sorry, I misspoke if I said that. The Langlands correspondence is analogous to the Fourier transform, exchanging skyscraper sheaves (analogous to delta-functions) with Hecke-eigensheaves (analogous to plane waves). So, in this analogy, the Hecke operator is like a categorified derivative. 40. Bert Schroer says: I am afraid the sad truth is the answer is "no". It is better to live in quantum reality than to become complacent with a Disney version of it. I was not trying to explain anything in technical terms but only pointing to the obvious observation that Kramers-Wannier on a microscopic level (achieved by Leo Kadanoff) was quantum from the beginning, whereas the Seiberg-Witten duality is from a physical Disney dreamland which precisely because of this is so useful to a large part of mathematics. The kind of mathematics for which it had no use is the operator-algebraic mathematical setting of QT which dates back to von Neumann and has been enriched by the locality principle in AQFT. By the way, the manner in which Kadanoff extracted (noncommutative) operator commutation relations for what we nowadays call the Ising primary fields from the Euclidean lattice setting (via partially guessed properties of the transfer matrix formalism) had my deep admiration; the Leitmotiv of all my work with Swieca in the early 70s was to adapt Kadanoff's order/disorder ideas to the continuous setting of QFT; in many cases we even succeeded in reading this back into a continuous functional integral setting by using an Aharonov-Bohm analog language. Later, when I was working with Rehren on an algebraic approach to chiral conformal QFT, I remembered those Kadanoff ideas and we found a completely explicit operator version of an "exchange algebra" for the conformal Ising field theory from which it was possible to compute its n-point Wightman functions. A historical review can be found in http://br.arxiv.org/abs/hep-th/0504206 In those days we also convinced ourselves that this order-disorder idea has no electric-magnetic counterpart in the full QFT setting. 41. 
Peter Orland says: Bert, I also worked extensively on duality. Like you, I concluded that there is no simple operator equivalence between a non-Abelian gauge theory and its dual. But there are intriguing exceptions of systems with non-Abelian systems which do have duality transformations and disorder operators. In my Ph.D. thesis I found lattice systems with permutation-group $S_{N}$ symmetry which have nontrivial duals. But I will spare people here from a list of more publications on the subject. Regards, Peter (O.) Conceptual realism demands to separate Kramers-Wannier duality (and its structural extension the order-disorder issue) from speculative ideas. The o-d duality is a local quantum physical phenomenon which has no known analog in higher dimensions. ??? The 3D Ising model on a cubic lattice is Kramers-Wannier dual to Ising gauge theory on the same lattice. Why is this not o-d duality in higher dimensions? 43. urs says: I concluded that there is no simple operator equivalence between a non-Abelian gauge theory and its dual. Is this saying that you consider the S-duality conjecture to be in fact false? If so, I’d be interested in the details of the assumptions that go into this. I recall that Bert Schroer was (similarly ?) claiming that the AdS/CFT duality conjecture (in the sense of Maldacena) is false, and that the correct duality statement was along the lines of Rehren’s work. In that case I got the impression that two rather different concepts were being compared, and that in fact Rehren’s work had little relation to the setup considered by Maldacena et al. Compare for instance Jacques Distler’s account. The crucial difference in this case is that Rehren’s work was based on a fixed and precise axiom set, while Maldacena’s work uses notions of quantum field theory that have not been axiomatized yet. For people like Bert Schroer this is reason enough to completely reject all QFT that does not fit into the AQFT axioms. For other people, in contrast, the restrictive applicability of the AQFT axioms is reason enough to reject those. To some extent it is a matter of taste concerning which role of rigour you find useful in physics research. I can easily tolerate both these standpoints. But I would like to know in each case which one is assumed by which participant. 44. woit says: Urs, You keep ignoring the fact that Peter Orland is asking about pure YM theory, not N=4 SYM. There’s a beautiful story about duality in non-supersymmetric abelian gauge theories, and many people (including Peter) have tried hard to generalize this to the non-abelian case. I gather that he’s trying to understand whether geometric Langlands gives any insight into that problem, and as far as I can tell, the answer is just no. 45. Peter Orland says: Urs, Sorry that I am giving long-winded answers to your questions. I am mainly interested in advancing methods in asymptotically-free field theories and in constructions which could eventually facilitate calculations. I try to learn other stuff, because I can’t predict what I may need to know in the future. But I am more interested in theoretical, rather than mathematical physics (as people abuse use the term nowadays, to study mathematical techniques, rather than to prove theorems). I believe (after some years of trying to show the contrary) there is no USEFUL version of Kramers-Wannier duality which is true for PURE non-Abelian gauge theories. 
There are non-Abelian dualities for some special $S_N$-invariant systems, which I mentioned above (there is also non-Abelian Bosonization in two dimensions). The general problem for duality in non-Abelian theories is constructing dual fields with local commutation or anti-commutation relations. Supersymmetric or other theories with adoint matter have some sort of charge-monopole duality – but such theories are effectively Abelian. These theories are interesting in their own right, but to my way of thinking, they are not as important as Yang-Mills theories coupled only to fundamental (not adjoint) Fermion color charges, or pure Yang-Mills theories. There are other notions of duality in QCD. The ‘t Hooft loop is the disorder operator. Unfortunately, there is probably no useful local dual-field-theory formulation for which it is the order parameter. 46. urs says: You keep ignoring the fact that Peter Orland is asking about pure YM theory, not N=4 SYM. In as far as I am ignoring anything, it is not on purpose. I’d be glad to be enlightened. Maybe I found Peter Orland’s statement I concluded that there is no simple operator equivalence between a non-Abelian gauge theory and its dual. # seemed to refer to arbitrary gauge theories. I gather that he’s trying to understand whether geometric Langlands gives any insight into that problem, and as far as I can tell, the answer is just no. Hm, maybe here is the source of the misunderstanding. Kapustin-Witten show that geometric Langlands does give insight into the type of duality present in N=4 SYM. So in far as this is different to other types of duality, geometric Langlands apparently does not apply to these. Supersymmetric or other theories with adoint matter have some sort of charge-monopole duality – but such theories are effectively Abelian. Could you expand on what you mean by “effectively abelian” here? Thanks! 47. Peter Orland says: Urs, By “effectively Abelian”, I mean that that the magnetic-monopole charge is well-defined and quantized. In QCD or pure Yang-Mills, there is no precise definition of magnetic-monopole charge. In the Georgi-Glashow model (an the related deformation of N=2 supersymmetric gauge theory) a Higgs field breaks the gauge group down to the Cartan subgroup. Thus there are Abelian monopoles, with quantized charge, etc. These theories have a confined phase for sufficiently small monopole mass, which goes back to Polyakov’s observations in the 70’s. Duality for such theories is not so different from those of Abelian Wilson lattice gauge theories. They are, however, quite different from QCD. Now there is an old result made by many people (Fradkin, Shenker, Rabinovici and others) that there is little difference between a Higgs field in a gauge theory and a scalar field in that gauge theory without a Higgs potential. The basic point is that the operator creating a massive vector Boson in the Higgs theory looks just like the operator creating a “meson” built from scalars in the confined phase. From this point of view, any theory with scalar matter is not so different from a Higgs theory. In particular, it is possible to define magnetic charge, no matter what the scalar potential happens to be. So in such theories charge-monopole duality is a sensible concept. The reason why the possibility of duality for Yang-Mills theories is interesting is because it could yield insight into the confinement phase. Some sort of magnetic condensation occurs, producing confinement and a mass gap, as simulations show, but we want to know why. 48. 
anonymous says: Off-topic mathematical physics fun: Andre LeClair is claiming there’s a physical system, which, on physical grounds, suggests the Riemann hypothesis is true. Are there any experts around to comment on whether it’s plausible? http://www.arxiv.org/abs/math-ph/0611043 49. relativist says: For those like me who don’t know much about the Langlands programme but would like to, a useful account is an older one by Frenkel: `Lectures on the Langlands Program and conformal field theory’, at http://www.arxiv.org/PS_cache/hep-th/pdf/0512/0512172.pdf
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8026428818702698, "perplexity": 955.1827262427911}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00158-ip-10-171-10-70.ec2.internal.warc.gz"}
http://mathhelpforum.com/algebra/198410-algebra-2-a.html
# Math Help - Algebra 2 1. ## Algebra 2 True or False. Please explain, give an example, or provide the correct answer to support your choice. 1. y = 2x + 1 is an example of a quadratic equation. 2. ln is a special logarithm where the base equals 10. 3. The graph of y = 2^x passes through the point (1, 0). 2. ## Re: Algebra 2 Hello, eric132! True or False. Please explain, give an example, or provide the correct answer to support your choice. 1. y = 2x + 1 is an example of a quadratic equation. . false A quadratic equation has a variable to the second power (which is the variable's highest power). Examples: . $\begin{array}{ccc}y \:=\:2x^2+1 \\ y^2 \:=\:2x+1 \end{array}$ 2. ln is a special logarithm where the base equals 10. . false $ln$ is a logarithm with the base e, which is approximately $2.71828...$ 3. The graph of y = 2^x passes through the point (1, 0). . false $\text{The point }(1, 0)\text{ means: }\:x = 1,\;y = 0$ $\text{Substituting, we get: }\:0 \,=\,2^1\:\text{ which is }not\text{ a true statement.}$ 3. ## Re: Algebra 2 Originally Posted by Soroban Hello, eric132! A quadratic equation has a variable to the second power (which is the variable's highest power). Examples: . $\begin{array}{ccc}y \:=\:2x^2+1 \\ y^2 \:=\:2x+1 \end{array}$ Actually, every linear function can be considered a degenerate form of a quadratic equation, just with the coefficient of x^2 being 0.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8303940892219543, "perplexity": 989.7275000690108}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637900031.50/warc/CC-MAIN-20141030025820-00212-ip-10-16-133-185.ec2.internal.warc.gz"}
http://tex.stackexchange.com/questions/5797/how-exactly-does-def-pfigurefigure-work/5799
# How exactly does \def\p@figure{Figure~} work? I took this line from my thesis style and added it into my document (preceded by \makeatletter), and what it does is that whenever I use \ref{figureid}, instead of putting just the figure number it puts "Figure 1". I can do the same thing for tables by having \def\p@table{Table~}, but what I don't know is what exactly these are: \p@figure \p@table I assume they're used internally by ref, but is there anything more (such as \p@(object) is always used as the beginning of something ...) - Great question! Finally I understand what's happening in this post and the answers to this post. (Also nice answers so far.) –  Hendrik Vogt Nov 23 '10 at 15:13 These are LaTeX kernel macros that are associated with environments. In simple terms, anything that is enclosed in a \begin{foo}...\end{foo} is an environment, for example a figure or a table. Every time you insert a table, a counter is incremented. This counter, let us call it foo, has an associated macro named \p@foo. This macro expands to the 'printed reference prefix' of counter foo. Any \ref to a value created by counter foo will produce the expansion of \p@foo\thefoo when the \label command is executed. \thefoo just prints the value of counter foo. Change foo to figure or table and it will make more sense. The \def part defines the macro. You can for example say \def\milan{Milan Ramaiyan} and every time you type \milan it will expand to Milan Ramaiyan. The @ symbol is just a special symbol used by TeX and LaTeX to avoid overriding commands accidentally. It needs special treatment, and that is why \makeatletter and \makeatother are required. - In addition to Yiannis' fine explanation: you may even extend these \p@ macros such as \p@figure to become more than just a prefix. source2e.pdf gives hints in section 53.2, An extension of counter referencing. This modification \makeatletter \renewcommand*\refstepcounter[1]{\stepcounter{#1}% \protected@edef\@currentlabel {\csname p@#1\expandafter\endcsname\csname the#1\endcsname}} \makeatother allows you to define \p@figure with an argument, which allows more sophisticated formatting. And it's still backwards compatible with the original version. An arbitrary example: \renewcommand*{\p@figure}[1]{\emph{figure~(#1)}} gives "see figure (1)". This way you may add a prefix and a suffix plus formatting. - I would strongly advise against redefining one of the base LaTeX commands. It can break packages that redefine it, e.g. hyperref. Rather go down to a deeper level, for example to change the reference to a second-level enumerated list from 2a to 2(a): \makeatletter \renewcommand{\p@enumii}{\expandafter\p@@enumii} \newcommand{\p@@enumii}[1]{\theenumi(#1)} \makeatother –  Danie Els Mar 4 '11 at 8:47 A packaged alternative to Stefan's answer is to use the fncylab package, which makes the change he mentions and also defines a \labelformat macro to manipulate \p@counters easily. For example, \labelformat{equation}{(#1)} makes \ref act like \eqref when applied to labels on equations. (I have this activated by default in my documents, actually. The bare number is rarely useful.) - I used \usepackage{hyperref} and applied \autoref{tab:minimum-cut-set}. It generates "Table 1.2" (for instance), which is what we want to see. However, for an algorithm, although "Algorithm" should be capitalized, it shows "algorithm 2.1" (for instance). The captions for tables and figures are shown properly; only algorithm has the issue mentioned above. - Welcome to TeX.sx! 
A tip: You can use backticks to mark your inline code as I did in my edit. :) –  Paulo Cereda May 9 '12 at 16:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9551147222518921, "perplexity": 3489.173299170968}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657137906.42/warc/CC-MAIN-20140914011217-00335-ip-10-234-18-248.ec2.internal.warc.gz"}
https://brilliant.org/problems/sunny-shoot-out/
# Sunny Shoot-out You are planting 5 sunflowers in each of 2 gardens, and the plants shoot up to varying heights. The graph (not reproduced here) depicts the height of each sunflower, with the red line indicating the mean height of the sunflower population $\mu$. For example, the shortest sunflower in Garden A is 5 cm shorter than average, while the tallest one in Garden B is 7 cm taller than average. Which set of sunflowers has the higher population variance?
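Since the graph with the actual heights is not included here, the deviations used below are purely hypothetical; the sketch only shows how the two population variances would be computed and compared.

```python
# Hypothetical deviations (in cm) from the mean, one per sunflower; the real values
# would be read off the graph. Population variance = mean of squared deviations
# about the population mean.
garden_a = [-5, -2, 0, 3, 4]    # hypothetical Garden A deviations
garden_b = [-7, -1, 0, 1, 7]    # hypothetical Garden B deviations

def population_variance(values):
    mu = sum(values) / len(values)
    return sum((v - mu) ** 2 for v in values) / len(values)

var_a = population_variance(garden_a)
var_b = population_variance(garden_b)
print(var_a, var_b, "Garden A" if var_a > var_b else "Garden B")
```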
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9387437105178833, "perplexity": 2444.5407853129664}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710473.38/warc/CC-MAIN-20221128034307-20221128064307-00126.warc.gz"}
https://studyadda.com/question-bank/co-ordinate-geometry_q20/2130/202090
Find a point on the X-axis which is equidistant from the points (5, 4) and (-2, 3). A)  (2, 0)                     B)         (0, 3) C)  (-2, 2)                   D)         (3, 0) Since the required point (say P) is on the X-axis, its ordinate will be zero. Let the abscissa of the point be x. Therefore, the coordinates of the point P are $(x, 0)$. Let A and B denote the points (5, 4) and (-2, 3) respectively. Given that AP = BP, we have $(x-5)^2+(0-4)^2=(x+2)^2+(0-3)^2$ i.e., $-10x+41=4x+13$ $\Rightarrow$ $x=2$ Thus, the required point is (2, 0), option A.
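A quick numerical check of the distances (assuming, as in the worked solution, that the intended points are A(5, 4) and B(-2, 3)):

```python
# Verify that P(2, 0) is equidistant from A(5, 4) and B(-2, 3).
from math import dist   # available in Python 3.8+

A, B, P = (5, 4), (-2, 3), (2, 0)
print(dist(P, A), dist(P, B))   # both 5.0
```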
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9789516925811768, "perplexity": 1508.5734703735873}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401643509.96/warc/CC-MAIN-20200929123413-20200929153413-00264.warc.gz"}
http://tex.stackexchange.com/questions/77932/the-right-way-to-get-sans-serif-math?answertab=oldest
The right way to get sans-serif math? I notice that beamer has everything in sans-serif by default, including math. In a regular article, simply using \sffamily doesn't cause math to be set in sans-serif. Using \renewcommand{\familydefault}{\sfdefault} doesn't work and the sansmath package sort of works, but seems to produce varying results with respect to whether letters are italic or not (e.g., in beamer \Gamma is not italicized, but with sansmath it is.) Is there one "right" way to do this? Edit: another problem is that \sansmath seems to turn \beta into "fi". - There are not many real sans serif math fonts. You can try \usepackage{cmbright} that has math symbol fonts, except for the "large symbols". Perhaps decent results can be obtained by loading the Iwona font: \documentclass{article} \usepackage{cmbright} \SetSymbolFont{largesymbols}{normal}{OMX}{iwona}{m}{n} \begin{document} $abc+\sum_{k=1}^{n}\int_{0}^{k}\sqrt{2}f(x)\,dx$ \end{document} A different approach could be with the Arev fonts; changing the preamble above into \usepackage{arevtext,arevmath} you'd get the following You find an extensive description of (free) math fonts at this address http://mirrors.ctan.org/info/Free_Math_Font_Survey/en/survey.pdf - Is it possible to switch to cmbright or some other font for a certain portion of the document and then switch back? –  jtbandes Oct 16 '12 at 21:51 See this question: tex.stackexchange.com/questions/33165/… –  egreg Oct 16 '12 at 21:54 That solution is helpful, but when I try it the uppercase Greek letters are not sans-serif... any ideas why? –  jtbandes Oct 18 '12 at 22:09 This happens when using mathpazo only. –  jtbandes Oct 18 '12 at 22:18 The survey by Stephen Hartke is a bit dated. There is a more extensive survey by Günter Milde (2008, 2010) milde.users.sourceforge.net/Matheschriften/matheschriften.xhtml (Freie Mathematikschriften für LaTeX). In german, though (but see the links at the bottom of the document). –  jfbu Oct 19 '12 at 12:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8702044486999512, "perplexity": 3850.0708578356553}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246640001.64/warc/CC-MAIN-20150417045720-00221-ip-10-235-10-82.ec2.internal.warc.gz"}
https://arxiv.org/abs/1809.02308
Title: Splittings and symbolic powers of square-free monomial ideals Abstract: We study the symbolic powers of square-free monomial ideals via symbolic Rees algebras and methods in prime characteristic. In particular, we prove that the symbolic Rees algebra and the symbolic associated graded algebra are split with respect to a morphism which resembles the Frobenius map and that exists in all characteristics. Using these methods, we recover a result by Hoa and Trung which states that the normalized $a$-invariants and the Castelnuovo-Mumford regularity of the symbolic powers converge. In addition, we give a sufficient condition for the equality of the ordinary and symbolic powers of this family of ideals, and relate it to the Conforti-Cornuéjols conjecture. Finally, we interpret this condition in the context of linear optimization. Comments: 12 pages. To appear in IMRN Subjects: Commutative Algebra (math.AC); Combinatorics (math.CO); Optimization and Control (math.OC) MSC classes: 05C65, 90C27, 13F20, 13F55, 13A35 Cite as: arXiv:1809.02308 [math.AC] (or arXiv:1809.02308v2 [math.AC] for this version) Submission history From: Jonathan Montaño [v1] Fri, 7 Sep 2018 04:28:47 UTC (15 KB) [v2] Fri, 26 Jul 2019 02:26:38 UTC (17 KB)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8252593278884888, "perplexity": 1004.8357620205865}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027321351.87/warc/CC-MAIN-20190824172818-20190824194818-00509.warc.gz"}
https://brilliant.org/discussions/thread/raising-a-mathematician-ram-program-for-ages-13-15/
# Raising A Mathematician (RAM) Program for ages 13-15 Hi friends! There's a program called 'Raising A Mathematician', which is going to take place in Mumbai from the 5th to the 10th of May! For more details regarding this program and registration, follow this link. The last date for registration for this program is Feb. 15th. Note by Sanghamitra Anand 3 years, 8 months ago Sort by: Is it online? (I'm not in Mumbai!) - 3 years, 8 months ago It is not online, Satvik Golechha! It is a six-day residential camp, which will be held at Mumbai from May 5th to 10th. The camp is free of charge (inclusive of food & stay)! The students will have to bear only the travel charges to that place! I think it takes place at Ram Ratna Vidya Mandir school! Did you visit the link I shared? - 3 years, 8 months ago What age of students is it for? Have you applied for it? - 3 years, 7 months ago Soham Zemse, it is for ages 13 to 15 and I've applied for that! - 3 years, 7 months ago Nice initiative. - 3 years, 8 months ago Good morning sir, I'm from Hyderabad. Mumbai is very far from me, which is why I'm not coming to that program, and my request is that, if you get any chance, please conduct this program in Hyderabad also. Thank you. - 3 years, 8 months ago I think the program is for children in the age group 13-15. - 3 years, 8 months ago I have a certain query regarding this that I would like you to clear. Actually, the thing is that I am going to turn 16 on the 9th of May, 2014, and since the program will be held during this duration, am I eligible to enter this program which is for children from ages 13-15? Also, if I am eligible, technically, what age should I specify in the admission form for the program if I wish to enter? Also, is this a competition or a free coaching sort of thing? P.S. - Are you also considering joining this program? - 3 years, 8 months ago - 3 years, 8 months ago Yes, I followed the link and downloaded the 3 forms. Thanks for clearing the whole thing out. I just have one last question. It is obvious that if we get selected and attend the program, then we would bring our parents along with us. Now, the thing that I want to ask is whether the program will be facilitating the stay of the parents/guardians of the students there, or would they have to bear the accommodation costs there by themselves? - 3 years, 8 months ago Uhm..., I hope parents will not be facilitated with staying! Parents/guardians will have to take care of their accommodation, bearing the costs. The staying will only be facilitated for the students! This camp will be held at Ram Ratna Vidya Mandir School in Mumbai! How far is Kolkata from Mumbai? If it is a considerably long distance, the staying preference will be given to you! - 3 years, 8 months ago Well, I looked it up on Google! It says the distance is 1954.3 km. Do you think that this would be a considerably long distance for getting the staying preference? - 3 years, 8 months ago Yeah Prasun Biswas, definitely this would be a considerably long distance for getting staying preference! So don't worry, good luck! Have you sent those forms through your school? Only 2 days are left! - 3 years, 8 months ago Please organize something like this in Chandigarh too. - 3 years, 8 months ago Good. - 3 years, 8 months ago Is it open to people of all races, or should one be a Hindu? - 3 years, 8 months ago Is this program only available in Mumbai? - 3 years, 8 months ago Yes Harish Kp, this program is available only in Mumbai! - 3 years, 8 months ago Is that only for Indians? 
- 3 years, 8 months ago I hope, if you are in India, you could participate! I don't think that they've specified the camp only for Indians! - 3 years, 8 months ago
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8500759601593018, "perplexity": 3719.084345398633}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187820930.11/warc/CC-MAIN-20171017072323-20171017092323-00782.warc.gz"}
https://www.tutorialspoint.com/voltage-regulation-of-alternator-or-synchronous-generator
# Voltage Regulation of Alternator or Synchronous Generator

The voltage regulation of an alternator or synchronous generator is defined as the rise in the terminal voltage when the load is decreased from its full-load rated value to zero, the speed and field current of the alternator remaining constant. In other words, the voltage regulation of the alternator is the change in terminal voltage from no-load to the full-load rated value, divided by the full-load rated voltage, i.e.,

$$\mathrm{Per\:unit\:voltage\:regulation} = \frac{|E_a| - |V|}{|V|}$$

Also, the percentage voltage regulation of the alternator is given by,

$$\mathrm{Percentage\:voltage\:regulation} = \frac{|E_a| - |V|}{|V|} \times 100\%$$

Where,
• |E_a| is the magnitude of the generated voltage (or no-load voltage) per phase
• |V| is the magnitude of the full-load rated terminal voltage per phase

The voltage regulation is like a figure of merit of an alternator. The smaller the voltage regulation of a synchronous generator or alternator, the better its performance. For an ideal alternator, the voltage regulation is zero.

The voltage regulation of an alternator depends upon the power factor of the load, i.e.,
• An alternator operating at unity power factor has a small positive voltage regulation.
• An alternator operating at a lagging power factor has a large positive voltage regulation.
• For an alternator operating at low leading power factors, the terminal voltage rises with increasing load and hence the voltage regulation is negative.
• For a certain leading power factor, the full-load voltage regulation is zero. In this case, the full-load and no-load terminal voltages are the same.

## Voltage Regulation of Alternator using Direct Loading Method

In the direct load test, the alternator is run at synchronous speed and its terminal voltage is adjusted to its rated value (V). The load is then varied until the ammeter and wattmeter connected in the test circuit indicate the rated values at the given power factor. Then the load is removed, the speed and field excitation of the alternator are kept constant, and the no-load voltage (Ea) of the alternator is recorded. The voltage regulation of the alternator can be determined from these values as follows:

$$\mathrm{Percentage\:voltage\:regulation} = \frac{|E_a| - |V|}{|V|} \times 100\%$$

The direct load test for determining the voltage regulation is suitable only for small alternators with a power rating of less than 5 kVA. For large alternators, the following three indirect methods are used to determine the voltage regulation:
• Synchronous impedance method or EMF method.
• Ampere-turn method or MMF method.
• Zero power factor method or Potier triangle method.
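As a quick illustration of the definition above, the short sketch below computes the per-unit and percentage voltage regulation from a no-load voltage and a full-load terminal voltage. The numerical readings are made up purely for illustration; they are not from the article.

```python
def voltage_regulation(e_no_load, v_full_load):
    """Per-unit voltage regulation from the per-phase magnitudes |Ea| and |V|."""
    return (abs(e_no_load) - abs(v_full_load)) / abs(v_full_load)

# Hypothetical direct-load-test readings (volts per phase), for illustration only.
Ea = 245.0   # no-load voltage recorded after removing the load
V = 230.0    # rated full-load terminal voltage

vr = voltage_regulation(Ea, V)
print(f"Per-unit regulation:   {vr:.4f}")
print(f"Percentage regulation: {vr * 100:.2f} %")
```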
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.898700475692749, "perplexity": 1830.752960186459}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949642.35/warc/CC-MAIN-20230331113819-20230331143819-00296.warc.gz"}
https://ccjou.wordpress.com/2015/09/14/%E6%AF%8F%E9%80%B1%E5%95%8F%E9%A1%8C-september-14-2015/
## Weekly Problem, September 14, 2015

Let $A=[a_{ij}(t)]$ be an $n\times n$ matrix, where each entry $a_{ij}(t)$ is a differentiable function of $t$. Prove that $\displaystyle \frac{d\det A}{dt}=\det B_1+\cdots+\det B_n$, where $B_k$ is identical to $A$ except that the entries in the $k^{th}$ column are replaced by their derivatives, i.e., $(B_k)_{ij}=a_{ij}$ if $k\neq j$, $\displaystyle(B_k)_{ij}=\frac{da_{ij}}{dt}$ if $k=j$.

Solution: by the Leibniz (permutation) expansion of the determinant,

$\displaystyle \det A=\sum_{p}\sigma(p)a_{p_1 1}a_{p_2 2}\cdots a_{p_n n}$

where the sum runs over all permutations $p$ and $\sigma(p)$ is the sign of $p$. Differentiating term by term with the product rule,

\displaystyle \begin{aligned} \frac{d\det A}{dt}&=\frac{d}{dt}\sum_{p}\sigma(p)a_{p_1 1}a_{p_2 2}\cdots a_{p_n n}\\ &=\sum_p\sigma(p)\frac{d(a_{p_1 1}a_{p_2 2}\cdots a_{p_n n})}{dt}\\ &=\sum_p\sigma(p)\left(\frac{da_{p_1 1}}{dt}a_{p_2 2}\cdots a_{p_n n}+a_{p_1 1}\frac{da_{p_2 2}}{dt}\cdots a_{p_n n}+\cdots+a_{p_1 1}a_{p_2 2}\cdots \frac{da_{p_n n}}{dt}\right)\\ &=\sum_p\sigma(p)\frac{da_{p_1 1}}{dt}a_{p_2 2}\cdots a_{p_n n}+\sum_p\sigma(p)a_{p_1 1}\frac{da_{p_2 2}}{dt}\cdots a_{p_n n}+\cdots\\ &~~~~~+\sum_p\sigma(p)a_{p_1 1}a_{p_2 2}\cdots \frac{da_{p_n n}}{dt}\\ &=\det B_1+\det B_2+\cdots+\det B_n. \end{aligned}

The $k$-th sum in the last step is exactly the Leibniz expansion of $\det B_k$, which proves the claim.
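A quick numerical sanity check of this identity (my own addition, not part of the original post): differentiate $\det A(t)$ by a central finite difference and compare it with $\sum_k \det B_k$, where $B_k$ has column $k$ of $A$ replaced by its derivative. The particular matrix function below is an arbitrary example.

```python
import numpy as np

# A(t): a small matrix whose entries are differentiable functions of t.
def A(t):
    return np.array([[np.sin(t), t**2,       1.0],
                     [np.exp(t), np.cos(t),  t  ],
                     [t**3,      2.0*t,      5.0]])

# dA/dt, written out by hand for this particular A(t).
def dA(t):
    return np.array([[np.cos(t), 2.0*t,      0.0],
                     [np.exp(t), -np.sin(t), 1.0],
                     [3.0*t**2,  2.0,        0.0]])

t, h = 0.7, 1e-6

# Left-hand side: central finite difference of det A(t).
lhs = (np.linalg.det(A(t + h)) - np.linalg.det(A(t - h))) / (2.0 * h)

# Right-hand side: sum of det(B_k), where column k is replaced by its derivative.
rhs = 0.0
for k in range(3):
    Bk = A(t).copy()
    Bk[:, k] = dA(t)[:, k]
    rhs += np.linalg.det(Bk)

print(lhs, rhs)   # the two numbers should agree to roughly 1e-5 or better
```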
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 21, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9951832890510559, "perplexity": 4135.095991979695}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591332.73/warc/CC-MAIN-20180719222958-20180720002958-00300.warc.gz"}
http://numericalmethodsaleja.blogspot.com/
## Tuesday, July 20, 2010

### ITERATIVE METHODS FOR SOLUTION OF SYSTEMS OF LINEAR EQUATIONS

In general, these methods are based on the fixed-point idea: a formula is applied repeatedly, substituting each approximation back in to produce the next one. Compared with direct methods, iterative methods do not guarantee a better answer, but they are more efficient when working with large matrices. In computational mathematics, an iterative method solves a problem (such as an equation or a system of equations) by successive approximations to the solution, starting from an initial estimate. This approach contrasts with direct methods, which attempt to solve the problem in a single pass (for example, solving a system Ax = b by finding the inverse of the matrix A). Iterative methods are useful for problems involving a large number of variables (sometimes in the millions), where direct methods would be prohibitively expensive even with the best computing power available.

• JACOBI METHOD

The basis of the method is to construct a convergent sequence defined iteratively. The limit of this sequence is precisely the solution of the system. For practical purposes, stopping the algorithm after a finite number of steps yields an approximation to the solution x of the system.

• GAUSS-SEIDEL METHOD

The Gauss and Cholesky methods are direct (finite) methods: after a finite number of operations, and in the absence of rounding errors, they produce the exact solution x of the system Ax = b. The Gauss-Seidel method, in contrast, belongs to the so-called indirect or iterative methods. One starts with x0 = (x0_1, x0_2, ..., x0_n), an initial approximation to the solution. From x0 a new approximation x1 = (x1_1, x1_2, ..., x1_n) is built; from x1 one builds x2 (here the superscript indicates the iteration number, not a power). Continuing in this way one constructs a sequence of vectors {xk}, with the aim, not always guaranteed, that lim xk = x as k goes to infinity. Generally, indirect methods are a good option when the matrix is very large and sparse, i.e. when the number of nonzero elements is small compared with n², the total number of elements of A. In these cases an appropriate data structure should be used that stores only the nonzero elements. In each iteration of the Gauss-Seidel method there are n sub-iterations: in the first sub-iteration only x_1 is updated, while the other coordinates x_2, x_3, ..., x_n are left unchanged, and so on. A small code sketch of both iterations is given right after this post.

Source
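To make the Jacobi and Gauss-Seidel updates concrete, here is a minimal NumPy sketch (my own illustration, not from the original post); the test system is made up and chosen to be diagonally dominant so that both iterations converge.

```python
import numpy as np

def jacobi(A, b, x0, iters=50):
    """Jacobi iteration: every component is updated from the *previous* vector."""
    x = x0.astype(float).copy()
    D = np.diag(A)                    # diagonal entries
    R = A - np.diagflat(D)            # off-diagonal part
    for _ in range(iters):
        x = (b - R @ x) / D
    return x

def gauss_seidel(A, b, x0, iters=50):
    """Gauss-Seidel iteration: each component uses the components already updated."""
    x = x0.astype(float).copy()
    n = len(b)
    for _ in range(iters):
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (b[i] - s) / A[i, i]
    return x

# A small diagonally dominant test system (made up for illustration).
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([15.0, 10.0, 10.0])
x0 = np.zeros(3)

print(jacobi(A, b, x0))
print(gauss_seidel(A, b, x0))
print(np.linalg.solve(A, b))   # reference solution for comparison
```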
### DIRECT METHODS FOR SOLVING SYSTEMS OF LINEAR EQUATIONS

In this lesson we study the solution of a Cramer system Ax = B, which means that A is invertible (regular), i.e. det(A) ≠ 0, using a direct method. A direct method is one that, in the absence of errors, obtains the exact solution after a finite number of steps. In practice this does not happen in general, because of the inevitable rounding errors.

• GAUSS-JORDAN ELIMINATION

In mathematics, Gaussian elimination and Gauss-Jordan elimination, named after Carl Friedrich Gauss and Wilhelm Jordan, are linear algebra algorithms for determining the solutions of a system of linear equations and for finding inverses of matrices. A system of equations is solved by the Gauss method when its solutions are obtained by reducing it to an equivalent system in which each equation has one fewer variable than the one before. When this process is applied, the resulting matrix is said to be in echelon (staircase) form.

• GAUSS-JORDAN ALGORITHM

1. Go to the leftmost column that is not zero.
2. If the first row has a zero in this column, swap it with another row that does not.
3. Then obtain zeros below this leading entry by adding appropriate multiples of the top row to the rows below it.
4. Cover the top row and repeat the above process with the remaining submatrix; repeat with the rest of the rows (at this point the matrix is in echelon form).
5. Starting with the last nonzero row, work upward: for each row obtain a leading 1 and introduce zeros above it by adding appropriate multiples of this row to the corresponding rows.

An interesting variant of Gaussian elimination is what we call Gauss-Jordan (after Gauss and the aforementioned Wilhelm Jordan): it consists of performing steps 1 to 4 (called the forward pass) and, once these are completed, continuing until the matrix is in reduced row echelon form.

• LU DECOMPOSITION

Its name is derived from the English words "Lower" and "Upper"; studying the process followed in the LU decomposition makes it clear why, considering how the original matrix is decomposed into two triangular factors. LU decomposition involves only operations on the coefficient matrix [A], providing an efficient means of computing the inverse matrix or solving systems of linear algebraic equations. First the matrices [L] and [U] must be obtained: [L] is a lower triangular matrix with ones on the diagonal, and [U] is an upper triangular matrix whose diagonal entries need not be one. The first step is to decompose or transform [A] into [L] and [U], that is, to obtain the lower triangular matrix [L] and the upper triangular matrix [U]. (A small code sketch follows this post.)

• INVERSE MATRIX

The transpose is the matrix obtained by exchanging rows and columns; the transpose of A is denoted AT. In mathematics, especially in linear algebra, a square matrix of order n is said to be invertible, nonsingular, nondegenerate or regular if there exists another square matrix of order n, called the inverse matrix of A and denoted A-1.

Source
• Shen Kangshen et al. (ed.) (1999). Nine Chapters of the Mathematical Art, Companion and Commentary, Oxford University Press. Cited by Otto Bretscher (2005).
• Linear Algebra and Its Applications, Thomson Brooks/Cole, pp. 46.
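As an illustration of the LU factorization just described (my own sketch, not from the original post), SciPy's `scipy.linalg.lu` returns the permutation, lower and upper triangular factors, and `lu_factor`/`lu_solve` use the factorization to solve Ax = b by forward and back substitution. The matrix and right-hand side are arbitrary test data.

```python
import numpy as np
from scipy.linalg import lu, lu_factor, lu_solve

A = np.array([[ 2.0,  1.0, 1.0],
              [ 4.0, -6.0, 0.0],
              [-2.0,  7.0, 2.0]])
b = np.array([5.0, -2.0, 9.0])

# P, L, U such that A = P @ L @ U (L has ones on its diagonal).
P, L, U = lu(A)
print(np.allclose(A, P @ L @ U))   # True

# Solving Ax = b via the factorization (forward + back substitution).
lu_piv = lu_factor(A)
x = lu_solve(lu_piv, b)
print(x, np.allclose(A @ x, b))    # solution vector, True
```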
## Saturday, May 15, 2010

### MATHEMATICAL APPROXIMATION

A numerical approximation is a figure X* that represents a number whose exact value is X; the closer the figure X* is to the exact value X, the better the approximation of that number.

SIGNIFICANT FIGURES

The significant figures (or significant digits) express the level of uncertainty carried by an approximation. The convention is that the last digit of an approximation is uncertain. For example, determining the volume of a liquid with a graduated cylinder whose smallest division is 1 ml implies an uncertainty of 0.5 ml: a reading of 6 ml may really be anywhere between 5.5 ml and 6.5 ml, and the volume is reported as (6.0 ± 0.5) ml. For values closer to the true one, an instrument of greater precision would be needed, for example a cylinder with finer divisions, giving (6.0 ± 0.1) ml or whatever the required accuracy demands.

ACCURACY AND PRECISION

Precision refers to the dispersion of the set of values obtained from repeated measurements of a quantity: the smaller the spread, the greater the precision. A common measure of variability is the standard deviation of the measurements, and the precision can be estimated as a function of it.

Accuracy refers to how close the measured value is to the true value. In statistical terms, accuracy is related to the bias of an estimate: the smaller the bias, the more accurate the estimate. The accuracy of a result is expressed by the absolute error, the difference between the experimental value and the true value.

NUMERICAL STABILITY

In the mathematical subfield of numerical analysis, numerical stability is a property of numerical algorithms. It describes how errors in the input data are propagated through the algorithm. In a stable method, errors due to the approximations are damped as the computation proceeds. In an unstable method, any error in the processing is magnified as the calculation advances. Unstable methods quickly accumulate garbage and are useless for numerical processing.

CONVERGENCE

In mathematical analysis, the concept of convergence refers to the property that some numerical sequences have of tending to a limit. This concept is very general and, depending on the nature of the set on which the sequence is defined, it can take several forms.

Source
• George E. Forsythe, Michael A. Malcolm, and Cleve B. Moler. Computer Methods for Mathematical Computations. Englewood Cliffs, NJ: Prentice-Hall, 1977. (See Chapter 5.)
• William H. Press, Brian P. Flannery, Saul A. Teukolsky, William T. Vetterling. Numerical Recipes in C. Cambridge, UK: Cambridge University Press, 1988. (See Chapter 4.)

### MATHEMATICAL MODEL

A model is an abstraction of a real system: the complexities are stripped away, suitable assumptions are made, a mathematical technique is applied, and a symbolic representation of the system is obtained. A mathematical model comprises at least three basic sets of elements:

• Decision variables and parameters. The decision variables are unknowns to be determined from the solution of the model. The parameters represent values that are known for the system or that can be controlled.

• Constraints. Constraints are relations between the decision variables and quantities that give meaning to the solution of the problem and delimit the feasible values. For example, if one of the decision variables represents the number of employees of a workshop, it is clear that the value of that variable cannot be negative.

• Objective function. The objective function is a mathematical relation between the decision variables, the parameters, and a quantity representing the objective or output of the system. For example, if the objective is to minimize the operating cost of the system, the objective function should express the relation between cost and the decision variables; the optimal solution is obtained when the cost is minimal over the set of feasible values of the variables. That is, one has to determine the variables x1, x2, ..., xn that optimize the value of Z = f(x1, x2, ..., xn) subject to constraints of the form g(x1, x2, ..., xn) ≤ b, where x1, x2, ..., xn are the decision variables, Z is the objective function, and f is a mathematical function.

HOW TO DEVELOP A MATHEMATICAL MODEL

1. Find a real-world problem.
2. Formulate a mathematical model of the problem, identifying the variables (dependent and independent) and establishing hypotheses simple enough to be treated mathematically.
3. Apply the available mathematical knowledge to reach mathematical conclusions.
4. Compare the predictions obtained with real data. If the data differ, the process is restarted.
CLASSIFICATION OF MODELS

• Heuristic models (Greek euriskein, 'to find, to invent'): those based on explanations of the natural causes or mechanisms that give rise to the phenomenon studied.

• Empirical models (Greek empeirikos, 'based on experience'): those that use direct observations or the results of experiments on the phenomenon studied.

Mathematical models also receive different names in different applications. The following are some of the types to which a model of interest may belong. According to their scope:

• Conceptual models: those that reproduce, by mathematical formulas and more or less complex algorithms, physical processes that occur in nature.

• Mathematical optimization models: these are widely used in various branches of engineering to solve problems that are by nature indeterminate, i.e. that have more than one possible solution.

CATEGORIES BY APPLICATION

Models are commonly used in the following three areas, although there are many others such as finance, science and so on.

• Simulation: for situations that are precisely measurable or random, for example linear programming when they are precise, and probabilistic or heuristic models when they are random.

• Optimization: to determine the exact operating point that resolves an administrative, production or other problem. When the optimization is integer, nonlinear or combinatorial, it refers to mathematical models that are not very predictable, but that can be fitted to some existing alternative and approximately quantified.

• Control: to find out precisely how something stands in an organization, a piece of research, an area of operation, etc.

Sources:
• http://www.investigacion-operaciones.com/Formulacion%20Problemas.htm
• Ríos, Sixto (1995). Modelización. Alianza Universidad.

## Wednesday, May 12, 2010

### ROOTS OF EQUATIONS

The purpose of calculating the roots of an equation is to determine the values of x for which

f(x) = 0    (28)

holds. The determination of the roots of an equation is one of the oldest problems in mathematics, and much effort has gone into it. Its importance lies in the fact that if we can determine the roots of an equation, we can also determine maxima and minima, eigenvalues of matrices, solutions of systems of linear differential equations, and so on. Determining the solutions of equation (28) can be a very difficult problem. If f(x) is a polynomial of degree 1 or 2, we know simple expressions that give its roots. For polynomials of degree 3 or 4 it is necessary to use complex and laborious methods. However, if f(x) is of degree greater than four, or is not a polynomial, there is no known formula for the zeros of the equation (except in very special cases). In general, the methods for finding the real roots of algebraic and transcendental equations are divided into interval (bracketing) methods and open methods.

• INTERVAL METHODS: these exploit the fact that a function typically changes sign in the vicinity of a root. They get this name because two initial values are needed to "bracket" the root. Such methods gradually reduce the size of the interval, so that repeated application always produces approximations closer and closer to the true value of the root; for this reason these methods are said to be convergent. In Figure 2.1 one can see how the function changes from +f(x) to -f(x) as it passes through the root c: this is because f(c) = 0, so the function necessarily passes from the positive to the negative side of the x axis. In some cases, to be seen later, this does not happen, but for now it will be assumed to be as shown.
These methods use the sign change to locate the root (point c), but an interval containing it (such as [a, b]) must first be established. The same thing happens when the function passes through the point e: the change is from -f(x) to +f(x), and to find that root the method requires an interval such as [d, f]. The main interval methods are:
a. Graphical method
b. Bisection method
c. Linear interpolation method
d. False position method

• OPEN METHODS: in contrast, these are based on formulas that require a single initial value of x (an initial approximation to the root). Sometimes these methods drift away from the true value of the root as the number of iterations grows. The main open methods are:
a. Newton-Raphson method
b. Secant method
c. Multiple roots

(A small bisection sketch in code is given after the references below.)

Sources:
· Burden Richard L. & Faires J. Douglas, Análisis numérico. 2ª ed., México, Grupo Editorial Iberoamérica, 1993.
· Chapra Steven C. & Canale Raymond P., Métodos numéricos para ingenieros. 4ª ed., México, McGraw-Hill, 2003.
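To make the bracketing idea concrete, here is a minimal bisection sketch (my own illustration, not part of the original post); it assumes f is continuous and that f(a) and f(b) have opposite signs, and the test function is an arbitrary example.

```python
def bisection(f, a, b, tol=1e-10, max_iter=200):
    """Bracketing method: repeatedly halve [a, b], keeping the sign change inside."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a + b) / 2.0
        fc = f(c)
        if fc == 0.0 or (b - a) / 2.0 < tol:
            return c
        if fa * fc < 0:          # root lies in [a, c]
            b, fb = c, fc
        else:                    # root lies in [c, b]
            a, fa = c, fc
    return (a + b) / 2.0

# Example: the real root of f(x) = x**3 - 2x - 5 lies between 2 and 3.
root = bisection(lambda x: x**3 - 2*x - 5, 2.0, 3.0)
print(root)   # approximately 2.0945514815
```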
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8308584690093994, "perplexity": 683.8860037815502}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936465599.34/warc/CC-MAIN-20150226074105-00324-ip-10-28-5-156.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/abstract-algebra.118212/
# Abstract algebra

• #1
Hello, I have two questions and I need answers for them. First one: show that nZ intersection mZ = lZ, where l is the least common multiple of m and n. The second question is: given H and K two subgroups of a group G, show the following: (H union K) is a subgroup of G if and only if H is a subset of K or K is a subset of H.

• #2

• #3 matt grime, Homework Helper
Post them in the correct place and you might get some answers. Try the homework forum, or the maths forum, not this one. Plus, saying things like 'i need the answers quickly' indicates this is for a homework assignment. You won't just be given the answers; this isn't a place where you get your homework done, so bear that in mind.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8549607992172241, "perplexity": 2837.8667596800174}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488257796.77/warc/CC-MAIN-20210620205203-20210620235203-00435.warc.gz"}
https://rd.springer.com/chapter/10.1007/978-3-319-75599-1_16
# Waves, Light Waves, Sound Waves and Ultrasound (The Physics of)

• Martin Caon

Chapter

## Abstract

Mechanical waves (sound waves, waves on water) are a mechanism for transferring energy through a medium (the air or water) without transferring matter. Another definition is a periodic disturbance in some property of the medium, the medium itself remaining relatively at rest. Waves have the following measurable properties:

1. Wavelength (symbol λ) is the distance between two successive crests (in metres, m). A typical value is ~500 nm for light and ~20 cm for sound.
2. Frequency (f) is the number of λ that passes by in 1 s (in hertz, Hz). Typical values are 500 THz for light and 500 Hz for sound. Frequency is related to pitch (for sound) and colour (for visible light).
3. Period (T) is the time it takes for one λ to pass by (in seconds, s).
4. Speed (v) is how fast a wave is moving in the direction of propagation (in metres per second, m/s). The speed of light travelling through air is 3 × 10⁸ m/s, while for sound, speed in air is about 330 m/s. In tissue, sound moves faster, at about 1560 m/s.
5. Amplitude (A) is the maximum displacement from the mean (or rest) position. For example, the vertical distance between a trough and a crest of a wave in water is two times the amplitude. Amplitude (or intensity) is related to loudness of sound and brightness of light and to the amount of energy being carried by the wave.
6. Phase refers to how far out of step the oscillation of one part of a wave is when compared with another part. A phase of 0° or 360° means that the two parts are in step, while a phase difference of 180° means that the two points are completely out of step. Differences in phase between the sounds entering each ear allow us to localise the source of a sound.
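These quantities are tied together by v = f·λ (equivalently T = 1/f). As a quick illustration (my own, not from the chapter), the sketch below uses the speeds quoted above to compare the wavelength of a 500 Hz sound in air and in soft tissue.

```python
def wavelength(speed, frequency):
    """Wavelength (m) from wave speed (m/s) and frequency (Hz), via v = f * lambda."""
    return speed / frequency

f = 500.0          # Hz, the typical sound frequency quoted above
v_air = 330.0      # m/s, speed of sound in air
v_tissue = 1560.0  # m/s, speed of sound in tissue

print(wavelength(v_air, f))     # ~0.66 m in air
print(wavelength(v_tissue, f))  # ~3.12 m in tissue
print(1.0 / f)                  # period T = 0.002 s
```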
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8227640986442566, "perplexity": 1048.0304876123607}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125936981.24/warc/CC-MAIN-20180419150012-20180419170012-00058.warc.gz"}
https://iwaponline.com/hr/article/34/4/343/494/Saltation-Layer-of-Particles-in-Water-Flows
A theoretical model has been developed to determine the maximum saltation layer thickness of sediment particles in water associated with the migration velocity of particle in the bed layer. This is consistent with Owen's (1964) hypothesis for saltation of uniform grain in air. The equation for mean particle velocity at the bed is derived by balancing the horizontal forces acting on the particle in the bed. The modified expression for mean particle velocity includes the effects of drag and lift coefficients, bed shear stress, coefficient of dynamic friction, settling velocity and pivoting angle. The saltation layer model presented here extends a reasonable physical assumption by converting the average horizontal particle velocity to a vertical component of velocity due to collisions with particles resting on the bed. This explicitly shows a functional dependence of saltation height on mean particle velocity and take-off angle. The proposed model has been tested using available experimental data and the agreement with particle velocities and saltation heights is excellent. An interesting outcome is that a quadratic relationship is suggested between the higher transport stage (upper plane bed) and the take-off angle of particle. This shows that the take-off angle decreases with increase in transport stage.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9317797422409058, "perplexity": 925.0619552398849}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986675316.51/warc/CC-MAIN-20191017122657-20191017150157-00527.warc.gz"}
https://mathhelpboards.com/threads/calculating-partial-derivatives-in-different-coordinate-systems.2831/
calculating partial derivatives in different coordinate systems

oblixps Member May 20, 2012 38

Let $f = x^2 + 2y^2$ and $$x = r\cos(\theta), y = r\sin(\theta)$$. I have $$\frac{\partial f}{\partial y}$$ (while holding x constant) $$= 4y$$, and $$\frac{\partial f}{\partial y}$$ (while holding r constant) $$= 2y$$. I found these partial derivatives by expressing f in terms of only x and y, and then in terms of only r and y. But I am sure there are times where it can be very difficult to solve for one variable or to express some function in terms of specific variables. Is there a way to relate the 2 partial derivatives with respect to y (one holding x constant and one holding r constant) using the chain rule or something?

Ackbach Indicium Physicus Staff member Jan 26, 2012 4,202

Let $f = x^2 + 2y^2$ and $$x = r \cos(\theta), y = r \sin(\theta)$$. I have $$\frac{\partial f}{\partial y}$$ (while holding x constant) $$= 4y$$, and $$\frac{\partial f}{\partial y}$$ (while holding r constant) $$= 2y$$.

This is a very confusing procedure. I would agree with your first result. That's a straightforward application of the definition of partial derivative. However, for your second result, you seem to be defining the function $f=f(r,y)$. I'm not sure I would consider that to be a very good definition, because $y=y(r,\theta)$, so the variables you are putting forth as "independent" are not actually independent. Typically, you would write $f=f(r,\theta)=r^{2}(1+\sin^{2}(\theta))$, and then compute either $\partial f/ \partial r$ or $\partial f/ \partial \theta$.

I found these partial derivatives by expressing f in terms of only x and y, and then in terms of only r and y. But I am sure there are times where it can be very difficult to solve for one variable or to express some function in terms of specific variables. Is there a way to relate the 2 partial derivatives with respect to y (one holding x constant and one holding r constant) using the chain rule or something?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9618478417396545, "perplexity": 323.2424670222943}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585916.29/warc/CC-MAIN-20211024081003-20211024111003-00117.warc.gz"}
https://www.physicsforums.com/threads/the-magnus-effect.59046/
# The magnus effect

1. Jan 8, 2005

For a soccer game I'm programming, I want to calculate the position and velocity of the ball. I can get those values when I have constant acceleration, but I don't understand how to add the Magnus force. I have read some articles on the internet, and I found a formula I just can't understand: $$F_{m}=\frac{2\pi^2 \rho\ \omega vr^4} {2r}$$ Is it right? So I could say: $$F_{m}=\pi^2 \rho\ \omega vr^3$$ But which velocity does it refer to? And how can I relate it to v(t) and x(t)? Could somebody explain it to me, and how to calculate the position when the Magnus force is added?
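One way to answer the "how do I relate it to v(t) and x(t)" part in code (my own sketch, not from the thread): treat the Magnus force as a vector proportional to ω × v, here using the magnitude coefficient π²ρr³ from the formula quoted above, which is only one possible model, and integrate numerically with small time steps, since the acceleration is no longer constant. All numbers below are assumed example values, and air drag is ignored.

```python
import numpy as np

rho = 1.2      # kg/m^3, air density (assumed)
r = 0.11       # m, ball radius (assumed)
m = 0.43       # kg, ball mass (assumed)
g = np.array([0.0, 0.0, -9.81])
omega = np.array([0.0, 0.0, 10.0])   # rad/s, spin about the vertical axis

def magnus_force(v):
    # Direction from omega x v; magnitude coefficient pi^2 * rho * r^3 as in
    # the quoted formula (an assumption -- other models use different constants).
    return np.pi**2 * rho * r**3 * np.cross(omega, v)

# Simple Euler integration of x'(t) = v, v'(t) = g + F_magnus / m.
dt = 0.001
x = np.array([0.0, 0.0, 0.0])
v = np.array([25.0, 0.0, 5.0])       # initial kick velocity, m/s

for _ in range(int(1.0 / dt)):       # simulate one second of flight
    a = g + magnus_force(v) / m
    v = v + a * dt
    x = x + v * dt

print(x)   # position after 1 s; the y-component shows the sideways curve
```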
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8369097113609314, "perplexity": 356.48965764062456}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039742483.3/warc/CC-MAIN-20181115033911-20181115055911-00358.warc.gz"}
https://socratic.org/questions/the-point-p-lies-in-the-first-quadrant-on-the-graph-of-the-line-y-7-3x-from-the-
Algebra Topics

# The point P lies in the first quadrant on the graph of the line y = 7 - 3x. From the point P, perpendiculars are drawn to both the x-axis and y-axis. What is the largest possible area for the rectangle thus formed?

Apr 9, 2017

$\frac{49}{12}$ sq. units

#### Explanation:

Let $M$ and $N$ be the feet of the perpendiculars from $P(x, y)$ to the $x$-axis and $y$-axis, respectively, where $P \in l = \{(x,y) \mid y = 7-3x,\ x>0,\ y>0\} \subset \mathbb{R}^2 \quad (\ast)$

If $O(0,0)$ is the origin, then we have $M(x, 0)$ and $N(0, y)$.

Hence the area $A$ of the rectangle $OMPN$ is given by $A = OM \cdot PM = xy$, and, using $(\ast)$, $A = x(7 - 3x)$.

Thus $A$ is a function of $x$, so let us write $A(x) = x(7-3x) = 7x - 3x^2$.

For $A_{\max}$: (i) $A'(x) = 0$, and (ii) $A''(x) < 0$.

$A'(x) = 0 \Rightarrow 7 - 6x = 0 \Rightarrow x = \frac{7}{6} > 0$.

Also, $A''(x) = -6$, which is already $< 0$.

Accordingly, $A_{\max} = A\left(\frac{7}{6}\right) = \frac{7}{6}\left(7 - 3 \cdot \frac{7}{6}\right) = \frac{49}{12}$.

Therefore, the largest possible area of the rectangle is $\frac{49}{12}$ sq. units.

Enjoy Maths.!
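A short symbolic check of this maximization (my own addition, not part of the original answer):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
A = x * (7 - 3*x)                  # rectangle area as a function of x

crit = sp.solve(sp.diff(A, x), x)  # critical point(s): [7/6]
print(crit, A.subs(x, crit[0]))    # maximum area 49/12
```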
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 19, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9009921550750732, "perplexity": 1112.064707244809}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657154789.95/warc/CC-MAIN-20200715003838-20200715033838-00256.warc.gz"}
https://zbmath.org/?q=an:0192.04401
# zbMATH — the first resource for mathematics

The independence of the continuum hypothesis. I, II. (English) Zbl 0192.04401

These two papers contain the solution of the famous Continuum Hypothesis. The author shows that the continuum hypothesis cannot be derived from the axioms of set theory. The proof is based on his method of forcing, which has since become very popular and led to numerous discoveries in the foundations of mathematics. The author starts with the assumption that there is a countable transitive model $$M$$ of Zermelo-Fraenkel set theory ZF which satisfies the axiom of constructibility and constructs an extension $$N$$ of $$M$$ which is a model of ZF + Axiom of Choice + $$2^{\aleph_0}>\aleph_1$$. The idea is to obtain $$N$$ by adjoining to $$M$$ a sequence $$\{a_{\delta}: \delta<\aleph_{\tau}\}$$ of “generic” sets of integers, where $$\aleph_{\tau}$$ is a cardinal number in $$M$$, greater than $$\aleph_1$$. The key device is the notion of forcing. The forcing language consists of names of all elements of $$M$$ and of the generic sets $$a_\delta$$, and of expressions using logical symbols and set-theoretical operations. A condition is a finite consistent set of expressions $$n\in \widehat{a_{\delta}}$$ or $$n\notin \widehat{a_{\delta}}$$. A condition $$P$$ forces $$n\in \widehat{a_{\delta}}$$ if $$(n\in a_{\delta})\in P$$. Similarly, we can define the relation “$$P$$ forces $$\varphi$$” for any condition $$P$$ and any formula $$\varphi$$ of the forcing language. This definition is carried out inside $$M$$ and has the following properties: (a) for each $$\varphi$$, no $$P$$ forces both $$\varphi$$ and $$\neg \varphi$$; (b) if $$P$$ forces $$\varphi$$ and $$Q\supset P$$ then $$Q$$ forces $$\varphi$$; (c) for each $$\varphi$$ and each $$P$$, there is $$Q\supset P$$ which decides $$\varphi$$ (i.e. $$Q$$ forces $$\varphi$$ or $$Q$$ forces $$\neg \varphi$$). Since $$M$$ is countable, there is a sequence $$P_0\subseteq P_1\subseteq \dots P_s\dots$$ (outside $$M$$) such that each formula is decided by some $$P_s$$. The extension $$N$$ is then obtained by adjoining to $$M$$ the sequence $$\{a_{\delta}: \delta<\aleph_{\tau}\}$$ where $$a_{\delta}=\{n: (n\in a_{\delta})\text{ belongs to some } P_s\}$$. The significance of the forcing method in the construction of $$N$$ is expressed by the following Lemma: A formula is true in $$N$$ if and only if it is forced by some $$P_s$$. Using this, the author proves that $$N$$ is a model of ZF + Axiom of Choice and that $$\{a_{\delta}: \delta<\aleph_{\tau}\}$$ is a sequence of distinct sets of integers. The proof is completed when it is verified that every cardinal number in $$M$$ is also a cardinal number in $$N$$, so that $$N$$ satisfies $$2^{\aleph_0}>\aleph_1$$. Finally, it is shown how the construction described above yields the relative consistency proof of ZF + Axiom of Choice + $$2^{\aleph_0}>\aleph_1$$. To verify the truth of a statement in $$N$$ we need the truth of only finitely many axioms in $$M$$; and since every finite collection of axioms of ZF has a countable transitive model, every contradiction in ZF + AC + $$2^{\aleph_0}>\aleph_1$$ leads to a contradiction in ZF.

Reviewer: Tomas Jech

##### MSC:
03Exx Set theory

Keywords: set theory
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9816849827766418, "perplexity": 125.2306847278686}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178363211.17/warc/CC-MAIN-20210302003534-20210302033534-00248.warc.gz"}
http://mathhelpforum.com/advanced-algebra/167920-finding-root-polynomial.html
# Math Help - Finding Root of a Polynomial 1. ## Finding Root of a Polynomial Let p be a prime number. Find all roots of x^(p-1) in Z_p I have this definition. Let f(x) be in F[x]. An element c in F is said to be a root of multiplicity m>=1 of f(x) if (x-c)^m|f(x), but (x-c)^(m+1) does not divide f(x). I'm not sure if I use this idea somehow or not. 2. Originally Posted by kathrynmath Let p be a prime number. Find all roots of x^(p-1) in Z_p I have this definition. Let f(x) be in F[x]. An element c in F is said to be a root of multiplicity m>=1 of f(x) if (x-c)^m|f(x), but (x-c)^(m+1) does not divide f(x). I'm not sure if I use this idea somehow or not. If you meant $\mathbb{Z}_p:=\mathbb{Z}/p\mathbb{Z}$ , then we have here a field, and in it $x^r=0\Longleftrightarrow x=0\,,\,\,\forall r\in\mathbb{N}$ , so now you can solve your problem. Tonio 3. Originally Posted by tonio If you meant $\mathbb{Z}_p:=\mathbb{Z}/p\mathbb{Z}$ , then we have here a field, and in it $x^r=0\Longleftrightarrow x=0\,,\,\,\forall r\in\mathbb{N}$ , so now you can solve your problem. Tonio I think, perhaps, the OP meant the polynomail $x^{p-1}-1$...otherwise the question is boring and we are left wondering why the exponent was p-1; why is it not just arbitrary? If this is the case, then a simple application of Fermat's Little Theorem does the trick. 4. No the question was just x^(p-1) 5. I don't understand how you ot x^r=0 6. Originally Posted by kathrynmath I don't understand how you ot x^r=0 The reason is that $\mathbb{Z}_p$ is a field. As a field has no zero divisors, then you are immediately done. If you have no idea what a field is, then continue reading... If $x \in \{1, \ldots, p-1\}$ then $gcd(x, p)=1$, as $p$ is prime. Therefore, there exists $a, b \in \mathbb{Z}$ such that $ax+bp=1$ (using the Euclidean algorithm, etc.) That is, there exists $a^{\prime} \in \mathbb{Z}_p$ such that $a^{\prime}x \equiv 1 \text{ mod } p$ ( $a^{\prime}$ is just $a \text{ mod } p$). So, if $x^r=0$ then $1=a^{-r}x^r=0$, a contradiction.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 17, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.96823650598526, "perplexity": 481.39823303257526}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500832662.33/warc/CC-MAIN-20140820021352-00457-ip-10-180-136-8.ec2.internal.warc.gz"}
http://cms.math.ca/10.4153/CJM-2001-021-3
location:  Publications → journals → CJM Abstract view Bivariate Polynomials of Least Deviation from Zero Published:2001-06-01 Printed: Jun 2001 • Borislav D. Bojanov • Werner Haußmann • Geno P. Nikolov Format: HTML LaTeX MathJax PDF PostScript Abstract Bivariate polynomials with a fixed leading term $x^m y^n$, which deviate least from zero in the uniform or $L^2$-norm on the unit disk $D$ (resp. a triangle) are given explicitly. A similar problem in $L^p$, $1 \le p \le \infty$, is studied on $D$ in the set of products of linear polynomials. MSC Classifications: 41A10 - Approximation by polynomials {For approximation by trigonometric polynomials, see 42A10} 41A50 - Best approximation, Chebyshev systems 41A63 - Multidimensional problems (should also be assigned at least one other classification number in this section)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8884420990943909, "perplexity": 3909.2514322500483}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928817.29/warc/CC-MAIN-20150521113208-00122-ip-10-180-206-219.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/angular-velocity-and-acceleration-stumped.630994/
# Angular Velocity and Acceleration - Stumped!

1. Aug 25, 2012

### bruvvers

Hi guys, I'm stumped on just one question and not sure where to go with it now. Any help would be greatly appreciated... Question: Calculate the angular acceleration and angular velocity of a 2 kg object rotating in a circle of 1.5 m radius in a time of 3 s. My first answer, I realise now, was wrong due to calculating linear velocity. Can anyone offer some assistance on where I'm going wrong here please?

2. Aug 25, 2012

### kushan

The acceleration you are calculating is centripetal acceleration. :O

3. Aug 25, 2012

### voko

To compute the angular velocity, all you need is the period of one rotation. It does not depend on the radius, nor does it depend on the mass. To compute the angular acceleration, you need to know how the angular velocity changes. The problem has no data on this.

4. Aug 25, 2012

### tiny-tim

welcome to pf!

hi bruvvers! welcome to pf!

hmm … you're obviously completely confused about the difference between angular and linear measurements, and between angular acceleration and centripetal acceleration. ω²r (= v²/r) is the formula for centripetal acceleration. Centripetal acceleration is simply the component of linear acceleration in the (negative) radial direction. Centripetal acceleration is measured in m/s². Centripetal acceleration has nothing to do with angular acceleration! Angular acceleration is measured in rad/s². I'm not sure what you've done here (and your arithmetic isn't correct anyway). The question is … does this mean that it is rotating at a constant angular speed? If so, the angular acceleration is obviously zero! … or does it mean that it starts from rest, accelerates uniformly, and completes its first circle in 3 s? If so, use the standard constant acceleration formulas, adapted for constant angular acceleration. Show us what you get.
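To make the two readings of the question concrete (my own sketch, assuming that "in a time of 3 s" means one full revolution takes 3 s, as the replies above suggest): with constant speed, ω = 2π/T and α = 0; starting from rest with uniform angular acceleration through one revolution, θ = ½αt² gives α, and ω = αt. Note that neither the mass nor the radius enters.

```python
import math

T = 3.0                      # s, time for one full revolution (assumed)
theta = 2 * math.pi          # rad, one revolution

# Interpretation 1: constant angular speed.
omega_const = theta / T      # ~2.09 rad/s
alpha_const = 0.0

# Interpretation 2: starts from rest, uniform angular acceleration,
# first revolution completed in T seconds: theta = 0.5 * alpha * T**2.
alpha_uniform = 2 * theta / T**2        # ~1.40 rad/s^2
omega_uniform = alpha_uniform * T       # ~4.19 rad/s at t = T

print(omega_const, alpha_const)
print(omega_uniform, alpha_uniform)
```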
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9847636222839355, "perplexity": 947.8129485845986}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825497.18/warc/CC-MAIN-20171023001732-20171023021732-00232.warc.gz"}
http://math.stackexchange.com/questions/149373/does-the-fact-that-sum-n-1-infty-1-2n-converges-to-1-mean-that-it-equal
# Does the fact that $\sum_{n=1}^\infty 1/2^n$ converges to $1$ mean that it equals $1$? I have a clueless friend who believes that $$\sum_{n=1}^\infty \frac{1}{2^n}$$ doesn't equal $1$ in the 'normal arithmetical sense'. He doesn't believe that this series flat out equals $$1$$ Is he correct? - What is the "normal arithmetical sense" in which we can interpret infinite series? –  Chris Eagle May 24 '12 at 20:40 @Fried: $\sum \frac{1}{2^n}$ is a symbol. It denotes that least upper bound. –  Qiaochu Yuan May 24 '12 at 20:44 @Marvis: I don't understand your reasoning. –  Qiaochu Yuan May 24 '12 at 20:44 The point is that since there is no way to sum an infinite collection of nonzero numbers one-by-one, the meaning we ascribe to $\sum_{n=1}^\infty a_n$ is the limit of the partial sums, if that limit exists. You might not call this "the normal arithmetical sense", but then it's up to you to say what (if anything) is "the normal arithmetical sense" for such a series. –  Robert Israel May 24 '12 at 20:53 possible duplicate of Does .99999... = 1? –  MJD May 24 '12 at 20:54 As far as I can tell, your friend is not distinguishing appropriately between a series and its sum. A series is a sequence of numbers $s_1, s_2, s_3, ...$ which one specifies by specifying $a_1 = s_1, a_n = s_n - s_{n-1}$. The notation $$\sum_{n=1}^{\infty} a_n$$ denotes the limit of the sequence $s_i$, if it exists, and is called the sum of the series. It is a number which is unique if it exists and should not be identified with the series itself. - +1) You should perhaps also mention that $\sum_1^\infty a_n$ denotes the series $(s_n)$, whose limit when it converges also is denoted by $\sum_1^\infty a_n$. –  AD. May 24 '12 at 21:53 @AD.: I think this is a bad convention for precisely the reason that one ought to distinguish a series from its sum. –  Qiaochu Yuan May 24 '12 at 22:14 That might be, but still it is very common. –  AD. May 25 '12 at 5:17 If you want to give your friend a visual approach, try this. Draw a square. Bisect it vertically and fill in the left side (that corresponds to $1/2$). Then bisect the right rectangle horizontally, and fill in the bottom ($1/2+1/4$). Bisect the unfilled square vertically, and fill in the left ($1/2+1/4+1/8$). Continue on like this to give your friend the general idea. The reason that it is equal to $1$ (i.e.: to the whole square) is that for every point inside the square, we can iterate this procedure far enough so that that point gets shaded over. In other words, this procedure fills in the square, taken to the limit, so the corresponding area (number) is at least $1$. But at every stage, we are only filling in sections inside the square, so it is at most $1$, too, and thus, equal to $1$. - I had a question closed a while back about something very similar - write the sum in base 2. It comes out as 0.1111111 ... Is this the same as 1? Well, yes, because that's how limits are defined. But my daughter was struggling towards some language about open sets and limit points (therefore closed sets) and wanted 0.999999 (use base 10) to be different from 1 to indicate (effectively) that the set of partial sums did not include the limit point. I reckon that a 13-year-old who can even think of conceptualising that kind of thing (no suggestion from me) is doing pretty well. Especially as this is one of the harder conceptual leaps between what most students get at High School and what they have to deal with at university. 
The question is mathematically resolved, but the resolution is much more subtle than the indoctrinated elite (like me) sometimes imagine. -
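As a small numerical companion to the "limit of partial sums" point made in these answers (my addition, not from the thread): the partial sums $s_N = \sum_{n=1}^{N} 1/2^n = 1 - 2^{-N}$ get arbitrarily close to $1$, and the symbol $\sum_{n=1}^{\infty} 1/2^n$ denotes exactly that limit.

```python
from fractions import Fraction

s = Fraction(0)
for n in range(1, 21):
    s += Fraction(1, 2**n)
    if n in (1, 2, 5, 10, 20):
        print(n, s, float(1 - s))   # s_N = 1 - 2**(-N), so the gap halves each step
```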
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9146055579185486, "perplexity": 500.17661440099175}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928923.21/warc/CC-MAIN-20150521113208-00195-ip-10-180-206-219.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/1746485/decreasing-sequence-of-non-negative-lebesgue-measurable-functions-and-mct
# Decreasing sequence of non-negative Lebesgue measurable functions and MCT

I'm learning about measure theory, specifically the Lebesgue integral of nonnegative functions, and need help with the following problem:

Suppose that $f$ and $f_n$ are nonnegative measurable functions, that $f_n$ decreases pointwise to $f$, and that $\int_{\mathbb{R}}f_1 < \infty$. Prove that $\int_{\mathbb{R}}f = \lim\int_{\mathbb{R}}f_n$. [Hint: Consider $g_n = f_1 - f_n$.] Show with a counterexample that the assumption that $\int_{\mathbb{R}}f_1 < \infty$ is necessary.

The assumptions are (I always rewrite the problem to see if I understand it correctly):

1. $\forall x \in \mathbb{R}, f_n(x) \geq f_{n+1}(x)$.
2. $\int_{\mathbb{R}}f_1 < \infty$, that is, $f_1 \in L^1(\mathbb{R})$.
3. $f_n \to f$ pointwise in $\mathbb{R}$.

My work and thoughts: As $f_n$ is monotone non-increasing, $g_n = f_1 - f_n$ is a monotone non-decreasing sequence of non-negative Lebesgue measurable functions, i.e. $0 \leq g_1 \leq g_2 \leq \cdots \leq f_1 - f$ and $\lim g_n = f_1 - f$. Therefore, by the monotone convergence theorem $$\int_{\mathbb{R}}\lim_{n \to \infty} g_n = \int_{\mathbb{R}}(f_1 - f) = \lim_{n \to \infty}\int_{\mathbb{R}}(f_1 - f_n).$$ Because $\int_{\mathbb{R}}f_1 < \infty$ and $f_n \leq f_1$ for all $n \in \mathbb{N}$, this implies that $\int_{\mathbb{R}}f_n < \infty$ for all $n \in \mathbb{N}$.

This is where I'm stuck. I think I'm really close to the desired result. How do I continue from here? Also, how do I show that the assumption that $\int_{\mathbb{R}}f_1 < \infty$ is necessary using a counterexample?

$$\int_{\mathbb{R}}f_1 - \int_{\mathbb{R}}f =\int_{\mathbb{R}}(f_1 - f) =\int_{\mathbb{R}}\lim_{n \to \infty} g_n = \lim_{n \to \infty}\int_{\mathbb{R}}(f_1 - f_n) = \int_{\mathbb{R}}f_1 - \lim_{n \to \infty}\int_{\mathbb{R}}f_n$$

Subtract $\int_{\mathbb{R}}f_1$ from both sides and you are done.

• @VonKar You can subtract $\int_{\mathbb{R}}f_1$ from both sides, because $\int_{\mathbb{R}}f_1<\infty$. – Ramiro Apr 18 '16 at 2:40
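For the counterexample asked about at the end of the question, one standard choice (an editorial note, not part of the original thread) is $$f_n = \chi_{[n,\infty)} \quad \text{on } \mathbb{R}.$$ Each $f_n$ is nonnegative and measurable, $f_n \geq f_{n+1}$, and $f_n \to 0$ pointwise, so $f = 0$ and $\int_{\mathbb{R}} f = 0$. Yet $\int_{\mathbb{R}} f_n = \infty$ for every $n$, in particular $\int_{\mathbb{R}} f_1 = \infty$, so $\lim_n \int_{\mathbb{R}} f_n = \infty \neq \int_{\mathbb{R}} f$.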
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9878865480422974, "perplexity": 96.46881452779898}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999539.60/warc/CC-MAIN-20190624130856-20190624152856-00476.warc.gz"}
http://mathoverflow.net/questions/7687/clifford-algebra-as-an-adjunction/101209
# Clifford algebra as an adjunction?

## Background

For definiteness (even though this is a categorical question!) let's agree that a vector space is a finite-dimensional real vector space and that an associative algebra is a finite-dimensional real unital associative algebra.

Let $V$ be a vector space with a nondegenerate symmetric bilinear form $B$ and let $Q(x) = B(x,x)$ be the associated quadratic form. Let's call the pair $(V,Q)$ a quadratic vector space. Let $A$ be an associative algebra and let's say that a linear map $\phi:V \to A$ is Clifford if $$\phi(x)^2 = - Q(x) 1_A,$$ where $1_A$ is the unit in $A$.

One way to define the Clifford algebra associated to $(V,Q)$ is to say that it is universal for Clifford maps from $(V,Q)$. Categorically, one defines a category whose objects are pairs $(\phi,A)$ consisting of an associative algebra $A$ and a Clifford map $\phi: V \to A$ and whose arrows $$h:(\phi,A)\to (\phi',A')$$ are morphisms $h: A \to A'$ of associative algebras such that the obvious triangle commutes: $$h \circ \phi = \phi'.$$ Then the Clifford algebra of $(V,Q)$ is the universal initial object in this category. In other words, it is a pair $(i,Cl(V,Q))$ where $Cl(V,Q)$ is an associative algebra and $i:V \to Cl(V,Q)$ is a Clifford map, such that for every Clifford map $\phi:V \to A$, there is a unique morphism $$\Phi: Cl(V,Q) \to A$$ extending $\phi$; that is, such that $\Phi \circ i = \phi$. (This is the usual definition one can find, say, in the nLab.)

## Question

I would like to view the construction of the Clifford algebra as a functor from the category of quadratic vector spaces to the category of associative algebras. The universal property says that if $(V,Q)$ is a quadratic vector space and $A$ is an associative algebra, then there is a bijection of hom-sets $$\mathrm{hom}_{\mathbf{Assoc}}(Cl(V,Q), A) \cong \mathrm{cl-hom}(V,A)$$ where the left-hand side are the associative algebra morphisms and the right-hand side are the Clifford morphisms.

My question is whether I can view $Cl$ as an adjoint functor in some way. In other words, is there some category $\mathbf{C}$ such that the right-hand side is $$\mathrm{hom}_{\mathbf{C}}((V,Q), F(A))$$ for some functor $F$ from associative algebras to $\mathbf{C}$. Naively I'd say $\mathbf{C}$ ought to be the category of quadratic vector spaces, but I cannot think of a suitable $F$.

I apologise if this question is a little vague. I'm not a very categorical person, but I'm preparing some notes for a graduate course on spin geometry next semester and the question arose in my mind.

• I was just wondering this a week ago! I'd also be very interested in the answer to this question. –  Qiaochu Yuan Dec 3 '09 at 18:04
• Are categorical questions not definite? :P –  Mariano Suárez-Alvarez Dec 3 '09 at 18:14
• I (and likely others) would appreciate it if you posted these notes for download. There are some references that I know of that address relationships between Clifford algebras and other branches of modern physics (Girard, Iordănescu), but I would like to see what perspective you are coming from on this. Thanks for the question though, it is interesting! –  B. Bischof Dec 3 '09 at 22:17
• Certainly. I put all my notes online as a matter of principle. I've only just started preparing them, though... so it may be a while. Thanks for your interest! –  José Figueroa-O'Farrill Dec 4 '09 at 7:06

Disqualifier: this isn't a complete answer. There's a basic "chalk and cheese" problem here.
The "categories" that you are comparing are of two different types, although they do seem similar on the surface. On the one hand you have an honest algebraic category: that of associative algebras. But the other category (which, admittedly, is not precisely defined) is "vector spaces plus quadratic forms". This is not algebraic (over Set). There's no "free vector space with a non-degenerate quadratic form" and there'll (probably) be lots of other things that don't quite work in the way one would expect for algebraic categories. For example, as you require non-degeneracy, all morphisms have to be injective linear maps which severely limits them. You could add degenerate quadratic forms (which means, as AGR hints, that you regard exterior algebras as a sort of degenerate Clifford algebra - not a bad idea, though!) but this still doesn't get algebraicity: the problem is that the quadratic form goes out of the vector space, not into it, so isn't an "operation". However, you may get some mileage if you work with pointed objects. I'm not sure of my terminology here, but I mean that we have a category $\mathcal{C}$ and some distinguished object $C_0$ and consider the category $(C,\eta,\epsilon)$ where $\eta : C_0 \to C$, $\epsilon : C \to C_0$ are such that $\epsilon \eta = I_{C_0}$. In Set, we take $C_0$ as a one-point set. In an algebraic category, we take $C_0$ as the free thing on one object. Then the corresponding pointed algebraic category is algebraic over the category of pointed sets (I think!). The point (ha ha) of this is that in the category of pointed associative algebras one does have a "trace" map: $\operatorname{tr} : A \to \mathbb{R}$ given by $(a,b) \mapsto \epsilon(a \cdot b)$. Thus one should work in the category of pointed associative $\mathbb{Z}/2$-graded algebras whose trace map is graded symmetric. In the category of pointed vector spaces, one can similarly define quadratic forms as operations. You need a binary operation $b : |V| \times |V| \to |V|$ (only these products are of pointed sets) and the identity $\eta \epsilon b = b$ to ensure that $b$ really lands up in the $\mathbb{R}$-component of $V$ (plus symmetry). Whilst adding the pointed condition is non-trivial for algebras, it is effectively trivial for vector spaces since there's an obvious functor from vector spaces to pointed vector spaces, $V \mapsto V \oplus \mathbb{R}$ that is an equivalence of categories. Assuming that all the $\imath$s can be crossed and all the $l$s dotted, the functor that you want is now the forgetful functor from pointed associative algebras to pointed quadratic vector spaces. - I'm accepting this answer; although to be honest I'm still not close to understanding the "pointed" point of view. I'm learning category theory in earnest, so perhaps I can come back to this a little later. Thanks! –  José Figueroa-O'Farrill Mar 3 '10 at 0:02 Gosh, I'd forgotten about this completely! Well done for going back through your questions and sorting them out. If, at some point, you want to work through the details then I'd like to see the workings (even help if that's allowed). You should feel free to do this on the nlab if you wanted! –  Andrew Stacey Mar 3 '10 at 9:20 If I understand the definitions correctly: Let $C$ be the category of pairs (V,q) where V is a vector space on a fixed field and q is a quadratic form. A morphism $f: (V,q) \rightarrow (V',q')$ is a linear map $V \to V'$ preserving the quadratic form. Let $D$ be the category of unital algebras over the field. 
Morphisms are linear maps preserving multiplication and identity. We've got a forgetful functor $D \rightarrow C$ that maps an algebra $V$ to the quadratic vector space $(V,q)$ where $q(x)=(x \cdot x) \cdot 1$. This functor has as left adjoint the Clifford algebra construction. (I'm inexperienced, so this might be plain wrong. But surely an adjoint functor is hiding here.)

• q(x) isn't a quadratic form. But I think if e denotes the identity and e* denotes its dual then defining q(x) = e* x^2 works. –  Qiaochu Yuan Dec 3 '09 at 18:11
• Also, I'm not sure if it matters or not whether you want algebra homomorphisms to preserve the identity. –  Qiaochu Yuan Dec 3 '09 at 18:19
• Thanks, corrected. –  sdcvvc Dec 3 '09 at 18:45
• "its dual"? What is the dual to the identity? –  Theo Johnson-Freyd Dec 3 '09 at 19:39
• I am not sure if this is the whole answer, but it seems to be in the right direction. First, $q(x) = - e^* x^2$ seems better, the way I have defined Clifford maps. My main concern is that $e^*$ is not canonically defined. Perhaps one has to add more structure to the algebras... –  José Figueroa-O'Farrill Dec 3 '09 at 19:41

This answer builds on sdcvvc's answer and the comments below it, and in particular concerns the (non)existence of a canonical quadratic form $q$ (in sdcvvc's notation). Let me denote by $\mathcal{Q}$ the category of quadratic real vector spaces (where the symmetric bilinear form is not necessarily nondegenerate), and by $\mathcal{A}$ some subcategory of the category $\mathcal{A}ss$ of finite-dimensional real unital associative algebras that contains the image of the Clifford functor $\mathcal{C}l: \mathcal{Q} \to \mathcal{A}ss$. Notice that $\mathcal{Q}$ contains $\mathrm{\mathbf{Vect}}_\mathbb{R}$ as the full subcategory whose objects are of the form $(V, 0)$, and that the restriction of the functor $\mathcal{C}l: \mathcal{Q} \to \mathcal{A}$ to this subcategory is the exterior algebra functor $V \mapsto \Lambda^{\ast}V$. Then, $$\mathrm{Hom}_{\mathcal{A}}(\Lambda^\ast V, A) \cong \lbrace \phi: V \to A \; | \; \phi(v)^2 = 0 \rbrace$$ You can make $\Lambda^{\ast}(-)$ into a left adjoint by restricting $\mathcal{A}$ to be the category of $\mathbb{Z}_2$-graded supercommutative algebras (maybe you can take a bigger subcategory?). The right adjoint should then be the functor taking such an algebra to its odd-degree part considered as a vector space. This makes the Clifford condition $\phi(v)^2 = 0$ trivially true. It is the latter observation that allows us to cook up such an $\mathcal{A}$. However, in the general case the Clifford condition does involve the quadratic form on the vector space that is the domain, and so it doesn't seem possible to me that we could do something like the above universally.

UPDATE: the following argument is wrong, see the comments.

If $\mathcal{C}l$ admits a right adjoint then it preserves colimits, and coproducts in particular. Now, in your category of quadratic vector spaces, the coproduct of $(V, Q)$ and $(V', Q')$ is $(V \oplus V', Q \oplus Q')$; for associative algebras $A$ and $A'$, their coproduct is given by the tensor product over $\mathbb{R}$. Hence, it is necessary that $$\mathcal{C}l(V \oplus V', Q \oplus Q') \cong \mathcal{C}l(V, Q) \otimes_{\mathbb{R}} \mathcal{C}l(V', Q')$$ Here's a counterexample: take $V = V' = \mathbb{R}$ with $Q = Q' = -1$. By the classification of Clifford algebras, we know that $\mathcal{C}l(\mathbb{R}, -1) \cong \mathbb{C}$ and $\mathcal{C}l(\mathbb{R}^2, \mathrm{diag}(-1,-1)) \cong \mathbb{H}$.
It is now enough to observe that $$\mathbb{H} \not\cong \mathbb{C} \otimes_{\mathbb{R}} \mathbb{C}$$

• The coproduct in the category of associative algebras is not the tensor product: two maps $f:A\to C$ and $g:B\to C$ give a map $A\otimes B\to C$ only if the images of $f$ and of $g$ commute. –  Mariano Suárez-Alvarez Dec 3 '09 at 18:42
• Curiously, since Clifford algebras are $\mathbb{Z}_2$-graded, if you were to take the $\mathbb{Z}_2$-graded tensor product in your first displayed equation, then this would be a true statement. This perhaps suggests that I have to consider the category of $\mathbb{Z}_2$-graded associative algebras. –  José Figueroa-O'Farrill Dec 3 '09 at 18:53
• My bad: Clifford algebras are Z_2-graded, and hence the coproduct is the Z_2-graded tensor product. Proposition 1.5 on page 11 of Lawson and Michelsohn's "Spin Geometry" asserts that Cl indeed preserves coproducts. –  Alberto García-Raboso Dec 3 '09 at 18:54
• In any case, the Clifford algebra of an orthogonal direct sum of quadratic spaces is isomorphic to the twisted tensor product of the corresponding Clifford algebras: this is the "super" tensor product, the one which introduces signs by the parity graduation of the Clifford algebras. –  Mariano Suárez-Alvarez Dec 3 '09 at 18:55
• Yes, but there are useful ways of being wrong :) –  José Figueroa-O'Farrill Dec 3 '09 at 19:44

I'm hoping for a second opinion on this question. The same question occurred to me, and Google led me to this thread. At first glance, the consensus answer here (there is no right adjoint to $Cl$) seems plausibly argued. But after some thought, I'm not convinced.

We know that a universal construction, if it exists for every object in the source category, always gives an adjunction between categories. An object satisfying the universal property for a Clifford algebra can be explicitly constructed from any vector space with quadratic form as a quotient of the tensor algebra. So an object satisfying the universal property always exists, and therefore $Cl$ is a left adjoint.

And what should the right adjoint to the Clifford functor be? Why, nothing other than the functor sending an associative algebra to its underlying quadratic space, with quadratic form $q(x)=x^2$. This is the only possible quadratic form on the underlying vector spaces which will make the stipulation in the universal construction about the linear maps into morphisms in the category of quadratic vector spaces. I should conclude that the right adjoint of $Cl$ is a forgetful functor $k\text{-Alg}\to k\text{-Quad}$ which takes an associative algebra and forgets multiplication but remembers how to square vectors. The unit of this adjunction is the Clifford algebra structure map, and the counit is the map from the Clifford algebra on the quadratic vector space underlying any algebra $A$ to $A$ which takes $a_1\cdot a_2\mapsto a_1a_2$. This is of course exactly the unaccepted answer that sdcvvc gives above, though without much detail.

Qiaochu Yuan says that the claimed quadratic form $q(x)=x^2$ on the underlying vector space of an associative algebra is not actually quadratic. I cannot see why not. Why is sdcvvc's answer incorrect? Alberto García-Raboso gives an answer as well, where in the discussion it is settled that $Cl$ preserves finite coproducts. If we can also show that it preserves cokernels then we know that it must have a right adjoint, by the Freyd adjoint functor theorem, right? And have I misunderstood the relationships between universal morphisms and adjunctions?
Is it not the case that we can simply read off the adjoint functor from the universal property? And do we really need to consider, as Andrew Stacey suggests, some kind of pointed vector spaces? If so, why?

I wanted to post my questions as comments, not an answer, but I guess I don't have enough rep. Please forgive me.

• $q(x) = x^2$ doesn't take values in the underlying field! –  Qiaochu Yuan Jul 3 '12 at 5:44
• Right, duh. Thank you, Qiaochu, for explaining to the slow kid. So how do we reconcile the fact that the Clifford functor isn't a left adjoint with the fact that every universal property determines an adjunction? I guess we conclude that the universal property which characterizes the Clifford algebra doesn't actually meet the technical definition of a universal morphism, in the sense that there is no functor for which the Clifford construction is initial in the slice over it? Are there other examples of universal properties which are not universal morphisms like this? –  Joe Hannon Jul 3 '12 at 14:32
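A small worked check of the graded tensor product point raised in the comments above, added editorially (not part of the original thread) and phrased so as not to depend on the sign conventions used for $Q$: write the rank-one Clifford algebra that the counterexample identifies with $\mathbb{C}$ as $\mathbb{R}\langle e \rangle$ with $e$ odd and $e^2 = -1$. In the $\mathbb{Z}_2$-graded tensor product of two copies, the elements $i = e \otimes 1$ and $j = 1 \otimes e$ satisfy $$i^2 = j^2 = -1, \qquad ij = e \otimes e = -ji,$$ and $k = ij$ also satisfies $k^2 = -1$, so $i, j, k$ obey the quaternion relations and $$\mathbb{C} \,\hat{\otimes}_{\mathbb{R}}\, \mathbb{C} \cong \mathbb{H},$$ in agreement with $\mathcal{C}l(\mathbb{R}^2) \cong \mathbb{H}$. By contrast, the ungraded tensor product $\mathbb{C} \otimes_{\mathbb{R}} \mathbb{C} \cong \mathbb{C} \oplus \mathbb{C}$ is not $\mathbb{H}$, which is exactly the failure exhibited in the UPDATE above.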
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9594206809997559, "perplexity": 265.3654376175348}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00213-ip-10-147-4-33.ec2.internal.warc.gz"}
http://homepages.wmich.edu/~zhang/abs6.html
### Extremal Problems in Geodetic Graph Theory

For two vertices $u$ and $v$ of a graph $G$, the set $H(u, v)$ consists of all vertices lying on some $u-v$ geodesic in $G$. For a set $S$ of vertices of $G$, $H(S)$ is the union of all sets $H(u,v)$ for $u, v \in S$. The geodetic number $g(G)$ is the minimum cardinality among the subsets $S$ of $V(G)$ with $H(S)=V(G)$. For integers $n$ and $m$ with $n-1 \leq m \leq {n \choose 2}$, $\min (g; n, m)$ and $\max (g; n, m)$ represent the minimum and maximum geodetic numbers, respectively, among all connected graphs of order $n$ and size $m$. It is shown that $\min (g; n, m) =2$ unless $m = {n \choose 2}$. The number $\max(g; n, m)$ is investigated.
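Two small illustrative examples of the definition, added editorially (they are not part of the abstract): in the complete graph $K_n$ every geodesic is a single edge, so $H(u,v)=\{u,v\}$ and hence $H(S)=S$ for every $S \subseteq V(K_n)$; the only geodetic set is $V(K_n)$ itself, so $g(K_n)=n$. At the other extreme, if $P_n$ is the path with endpoints $u$ and $v$, the unique $u-v$ geodesic is the whole path, so $H(\{u,v\})=V(P_n)$ and $g(P_n)=2$. This is consistent with $\min(g; n, n-1)=2$ and with the exclusion of the case $m={n \choose 2}$, where the only connected graph of that size is $K_n$.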
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8115838170051575, "perplexity": 43.04274718061431}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187826840.85/warc/CC-MAIN-20171023221059-20171024001059-00333.warc.gz"}
http://mathhelpforum.com/advanced-statistics/78905-regression-chart-analysis.html
# Math Help - Regression chart Analysis

1. ## Regression chart Analysis

Can someone please help me with this problem? I do not know where to look for the answer on the chart. The chart is in the attachment folder.

2. The test for dropping all three variables in one step is the F test, with test statistic 12.82165585, which you compare to an $F_{3,36}$ distribution. The p-value is $P(F_{3,36}>12.82165585)$, and the printout gives that value as 7.48476E-06. So as long as $\alpha>7.48476E-06$, you reject $H_0$, which is that all three variables can be dropped. But with such a small p-value the conclusion is that we do indeed need at least one of those three terms to explain these people's salary.
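A quick way to reproduce that p-value numerically (an editorial sketch, not from the original thread; it assumes SciPy is available and uses the test statistic and degrees of freedom quoted in the reply above):

```python
from scipy import stats

# F statistic and degrees of freedom quoted in the reply above
f_stat = 12.82165585
dfn, dfd = 3, 36

# p-value = P(F_{3,36} > f_stat); sf is the survival function, 1 - cdf
p_value = stats.f.sf(f_stat, dfn, dfd)
print(f"p-value = {p_value:.5e}")  # should be roughly 7.5e-06, matching the printout
```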
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8674110770225525, "perplexity": 449.50380804985895}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246638820.85/warc/CC-MAIN-20150417045718-00302-ip-10-235-10-82.ec2.internal.warc.gz"}