url
stringlengths
15
1.13k
text
stringlengths
100
1.04M
metadata
stringlengths
1.06k
1.1k
http://math.stackexchange.com/questions/5574/a-triangular-representation-for-the-divisor-summatory-function-dx
# A triangular representation for the divisor summatory function, $D(x)$ Let $d(n)$ represent the divisor function as $d(n)=\displaystyle\sum\limits_{k|n}1$ and the divisor summatory function as $D(x)=\displaystyle\sum\limits_{n \leq x}d(n)$ I found the following triangular representation for the values of $D(n)$ $$\begin{array}{ccccccccc} D(1)=&&&&&&&&& 1 &&&&&&&&&&=1\\ &\\ D(2)=&&&&&&&& 2 &+& 1 &&&&&&&&&=3\\ &\\ D(3)=&&&&&&& 3 &+& 1 &+& 1 &&&&&&&&=5\\ &\\ D(4)=&&&&&& 4 &+& 2 &+& 1 &+& 1 &&&&&&&=8\\ &\\ D(5)=&&&&&5 &+& 2 &+& 1 &+& 1 &+& 1&&&&&&=10\\ &\\ D(6)=&&&&6 &+& 3 &+& 2 &+& 1 &+& 1 &+& 1&&&&&=14\\ &\\ D(7)=&&&7 &+& 3 &+& 2 &+& 1 &+& 1 &+& 1&+& 1 &&&&=16\\ &\\ D(8)=&&8 &+& 4 &+& 2 &+& 2 &+& 1 &+& 1&+& 1&+&1&&&=20\\ &\\ \end{array}$$ The values on the right are the sum of all elements in a row. EDIT 1: The above picture is the result of the following observation: Let $v_{m}(n)$ be the greatest power of $m$ that divides $n$ with $m,n \in \mathbb{N}$ , so we get that $D(n)=\displaystyle\sum\limits_{m=2}^{\infty}v_{m}(p^{n}), p \in \mathbb{P}$ where $p$ is a fixed prime number. I didn't try to prove this. I don't know how to do it, but hopefully some one will have some idea on how to prove or disprove this conjecture. I'd like to know if this is a known fact. I don't have a proof but I've tested lots of values and woks all the time. Thanks. - Am I missing an obvious patter in your column on the right? –  BBischof Sep 27 '10 at 15:22 @BBischof, the values on the right are $D(n)$, e.g. $D(1)=1$, $D(2)=3$, $D(3)=5$, $D(4)=8$, etc. –  Neves Sep 27 '10 at 15:26 I've swapped n and x in the divisor summatory function. –  anon Sep 27 '10 at 15:32 The values in the triangle are just the quotient of dividing the row number by the column number, so it's no surprise that this gives the sum of divisors. –  anon Sep 27 '10 at 15:36 When voting down, please, explain why. –  Neves Sep 27 '10 at 16:23 Yes, this is true. Write $D(x) = \sum_{n \le x} d(n) = \sum_{n \le x} \sum_{d | n} 1 = \sum_{d \le x} \lfloor \frac{x}{d} \rfloor$; this is equivalent to the pattern you observe. The last step is exchanging the order of summation together with the observation that the number of times a number $d$ appears in the double sum is the number of numbers less than or equal to $x$ it divides. - thats true, I already knew that way of calculating $D(x)$, but as I came to this observation from another path I never looked at the triangle as a result of $\sum_{d \leq x}\lfloor\frac{x}{d}\rfloor$, but thats true, the values in the triangle are "just the quotient of dividing the row number by the column number". –  Neves Sep 27 '10 at 17:10 This answer is a bit of a work in progress, but if $n=2^x-1$, then $$\frac{D(n)+u}{2}=\sum_{j\in\mathcal{N}}\sum_{i=1}^{n}{h_{i,j}} \text{ where } u=\lfloor\sqrt{n}\rfloor$$ where $h_{i,j}$ is the value in the corresponding row,column of the matrix described in http://crypto.stackexchange.com/questions/27003/has-anyone-heard-of-matrix-based-roman-doll-encryption-techniques Furthermore, letting $r_j=\sum_{i\in\mathcal{N}}{h_{i,j}}$, we write: $$D(2^k-1)=u-\xi+2\sum_{l=0}^{k-1}\frac{(k-l+1)(k-l)}{2}\sum_{j=\lfloor 2^{l-1}\rfloor }^{2^l-1}{r_j}$$ where $\xi$ is computed using the program: input k unsigned step = 1 unsigned y = 1 unsigned $\xi$ = 0 while y < 2^k { unsigned bin=(unsigned)log2(y) $\xi$ = $\xi$ + (k-bin)(k-bin+1-(k-bin)%2) y = y + 8* step step = step + 1 } output $\xi$ This suggests a slim possibility of computing $D(x)$ in $log_2(x)$ time. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9528820514678955, "perplexity": 301.02309832056}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736680773.55/warc/CC-MAIN-20151001215800-00206-ip-10-137-6-227.ec2.internal.warc.gz"}
https://sciencing.com/rid-square-root-equation-10023630.html
# How to Get Rid of a Square Root in an Equation ••• Letfluis/iStock/GettyImages Print When you first learned about squared numbers like 32, 52 and ​x2, you probably learned about a squared number's inverse operation, the square root, too. That inverse relationship between squaring numbers and square roots is important, because in plain English it means that one operation undoes the effects of the other. That means that if you have an equation with square roots in it, you can use the "squaring" operation, or exponents, to remove the square roots. But there are some rules about how to do this, along with the potential trap of false solutions. #### TL;DR (Too Long; Didn't Read) To solve an equation with a square root in it, first isolate the square root on one side of the equation. Then square both sides of the equation and continue solving for the variable. Don't forget to check your work at the end. ## A Simple Example Before considering some of the potential "traps" of solving an equation with square roots in it, consider a simple example: Solve the following equation for ​x​: \sqrt{x} + 1 = 5 1. ## Isolate the Square Root 2. Use arithmetic operations like addition, subtraction, multiplication and division to isolate the square root expression on one side of the equation. For example, if your original equation was √​x​ + 1 = 5, you would subtract 1 from both sides of the equation to get the following: \sqrt{x} = 4 3. ## Square Both Sides of the Equation 4. Squaring both sides of the equation eliminates the square root sign. This gives you: (\sqrt{x})^2 = (4)^2 Or, once simplified: x = 16 You've eliminated the square root sign ​and​ you have a value for ​x​, so your work here is done. But wait, there's one more step: 6. Check your work by substituting the ​x​ value you found into the original equation: \sqrt{16} + 1 = 5 Next, simplify: 4 + 1 = 5 And finally: 5 = 5 Because this returned a valid statement (5 = 5, as opposed to an invalid statement like 3 = 4 or 2 = -2, the solution you found in Step 2 is valid. In this example, checking your work seems trivial. But this method of eliminating radicals can sometimes create "false" answers that don't work in the original equation. So it's best to get in the habit of always checking your answers to make sure they return a valid result, starting now. ## A Slightly Harder Example What if you have a more complex expression underneath the radical (square root) sign? Consider the following equation. You can still apply the same process used in the previous example, but this equation highlights a couple of rules you must follow. \sqrt{y - 4} + 5 = 29 2. As before, use operations like addition, subtraction, multiplication and division to isolate the radical expression on one side of the equation. In this case, subtracting 5 from both sides gives you: \sqrt{y - 4} = 24 #### Warnings • Note that you're being asked to isolate the square root (which presumably contains a variable, because if it was a constant like √9, you could just solve it on the spot; √9 = 3). You are ​not​ being asked to isolate the variable. That step comes later, after you've eliminated the square root sign. 3. ## Square Both Sides 4. Square both sides of the equation, which gives you the following: {\sqrt{y - 4})^2 = (24)^2 Which simplifies to: y - 4 = 576 #### Warnings • Note that you must square everything underneath the radical sign, not just the variable. 5. ## Isolate the Variable 6. Now that you've eliminated the radical or square root from the equation, you can isolate the variable. To continue the example, adding 4 to both sides of the equation gives you: y = 580 8. As before, check your work by substituting the ​y​ value you found back into the original equation. This gives you: \sqrt{580 - 4} + 5 = 29 Which simplifies to: \sqrt{576} + 5 = 29 24 + 5 = 29 And finally: 29 = 29 a true statement that indicates a valid result.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8008044362068176, "perplexity": 386.18953417112584}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178357935.29/warc/CC-MAIN-20210226175238-20210226205238-00189.warc.gz"}
https://simplycurious.blog/2021/12/
## The universe in a grain of sand This article attempts to explain a paper I wrote that is published in Europhysics Letters. The English engraver William Blake in a piece of poetry, the $Krishn{\bar a}$ stories, the colossal orders of magnitude of sizes from the humongous to the very small make us wonder if somehow the very large is connected to the very small. A similar theme was explored in physics by a trio of scientists about twenty years ago. They looked at a puzzling problem that has been nagging a rather successful project to use quantum mechanics to explain the physics of fundamental particles. Called “Quantum field theory”, this marriage of quantum mechanics and special relativity (and later, some aspects of general relativity) is probably the most successful theory to emerge in physics in a long while. It has been studied extensively, expanded and some of its predictions in areas where it is possible to make very precise calculations are accurate to fourteen decimal places. It completely excludes the effects of gravity and predicts the precise behavior of small numbers of fundamental particles – how they scatter past each other etc. The basic building block of the theory – the quantum field – is supposed to represent each kind of particle . There is an electron quantum field, one for the up quark etc. etc. If you want to study a theory with five electrons, that is just an excitation of the basic electron quantum field, just as a rapidly oscillating string on a violin has more energy than a slowly oscillating string. More energy in a quantum theory just corresponds to more “quanta” or particles of the field. So far so good. Unfortunately, one inescapable conclusion of the theory is that even when the quantum field is at its lowest possible energy, there is something called “zero-point” motion. Quantum objects cannot just stay at rest, they are jittery and have some energy even in their most quiescent state. As it turns out, bosons have positive energy in this quiescent state. Fermions (like the electron) have negative energy in this quiescent state. This energy in each quantum field can be calculated. It is, for every boson quantum field $+\infty$. For every fermion quantum field, it is $-\infty$. This is a conundrum. The energy in empty space in the universe can be estimated from cosmological measurements. It is roughly equivalent to a few protons to every cubic meter. It is certainly not $\infty$. This conundrum (and its relatives) has affected particle physics for more than fifty years now. Variously referred to as the “cosmological constant” problem or its cousin, the “hierarchy problem”, people have tried many solutions. They need solutions, because if the energy were really $+\infty$ for the boson field (since the universe probably started as radiation dominated with photons), the universe would collapse on itself. This infinite energy spread through space would gravitate and pull the universe in. Solutions, solutions, solutions. Some people have proposed that every particle has a “super”-partner – a boson would have a fermion “super”-partner with the same mass. Since the infinities would be identical, but have opposite signs, they would cancel and we would hence not have an overall energy density in empty space (this would be called the cosmological constant – it would be zero). Unfortunately, we have found no signs of such a “super”-symmetry, though we have looked hard and long. Others have proposed that one should just command the universe to not let this energy gravitate, as a law of nature. That seems arbitrary and would have to be adduced as a separate natural law. And why is tough to answer. Can we measure the effect of this “energy of empty space”, also called “vacuum energy” in some way other than through cosmology? Yes – there is an experiment called the “Casimir effect” which essentially measures the change in this energy when two metallic plates are separated from each other starting from being extremely close. This rather precise experiment confirms that such an energy does exist and can be changed in this fashion. One way to make the conundrum at least finite is to say that our theories certainly do not work in a regime where gravity would be important, From the natural constants $G_N, \hbar, c$ (Newton’s gravitational constant, Planck’s constant and the speed of light), one can create a set of natural units – the Planck units. These are the Planck length $l_P$, Planck mass $m_P$ and Planck time $t_P$, where $l_P = \sqrt{\frac{G_N \hbar}{c^3} } \sim 10^{-35} meters,$ $m_P = \sqrt{ \frac{\hbar c}{G_N} } \sim 10\: \mu \: grams\: \: \: , \: \: \: t_P = \sqrt{\frac{G_N \hbar }{c^5} } \sim 10^{-44} secs$ So, one can guess that gravity (represented by $G_N$) is relevant at Planck “scales”. One might reasonably expect pure quantum field theory to apply in regimes where gravity is irrelevant – so at length scales much larger than $10^{-35} meters$. Such a “cutoff” can be applied systematically in quantum field theories and it works – the answer for the “cosmological constant” is not infinitely bigger than the actual number, it is only bigger by a factor of $10^{121}$! What one does is to basically banish oscillations of the quantum field whose wavelengths are smaller than the Planck length. Most people would not be happy with this state of affairs. There are other theories of fundamental particles. The most studied ones predict a negative cosmological constant, also not in line with our unfortunate reality. About twenty years ago, three scientists – Andrew Cohen, David Kaplan and Ann Nelson (C-K-N) proposed that this vacuum energy actually should cut off at a much larger length scale – the size of the causally connected pieces of the universe (basically something one would consider the smallest wavelength possible in our observable universe. In this way, they connected the really small cutoff to the really large size of the universe. Why did they do this? They made the pretty obvious observation that the universe does not appear to be a black hole. Suppose we assumed that the universe were dominated by radiation. The energy inside should be (they said) the energy in the vacuum, up to this cutoff. But this energy should be confined to a size that should be bigger than, never less than, the “Schwarzschild radius” for this energy. The Schwarzschild radius for some energy is the radius of the ball that this energy should be confined to, in order that it collapses into a black hole. C-K-N assume that there is a natural principle that requires that the size of the universe is at least equal to the Schwarzschild radius corresponding to all that energy. They then derive some consequences of this assumption. First, my objections. I would have much rather preferred that the universe be MUCH bigger than this radius. Next, if this is indeed the case, surely some natural law should cause this to happen, rather than a post-hoc requirement (we are here, so it must have been so). That last bit is usually referred to as the “weak” anthropic principle. Anthropic principles have always seemed to me the last resort of the damned physicist – it can also be when you throw up your hands and say – if it weren’t this way, we wouldn’t be here. Its OK to resort to such ideas when you clearly see there is a lot of randomness that drives a physical process. Just not knowing the underlying physical process doesn’t seem the right reason to throw out an anthropic type idea. Anyway, I cast the entire problem as one in thermodynamics of the entire universe and suggested that the universe is this way because it is simply the most advantageous way for it to arrange itself. This method also lends itself to other extensions. It turns out that if the material of the universe is not the usual type (say “radiation” or “matter”), it might be possible for us to actually find a reasonable estimate of the cutoff that is in line with current experiments (at least the vacuum energy is not off by a factor of $10^{121}$, but only $10^{45}$ or so. There is more to do!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 16, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8621120452880859, "perplexity": 375.15758663869354}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662515466.5/warc/CC-MAIN-20220516235937-20220517025937-00041.warc.gz"}
http://mathhelpforum.com/geometry/1608-geometry-word-problem-print.html
# Geometry - word problem • January 13th 2006, 08:40 AM The Preacher Geometry - word problem Hey, thanks for taking a look at this. A parallelogram with sides of 6 and 10 has an area of 30(sqrt)2. Find the measure of each angle of the parallelogram. Small angle _____ Large angle _____ I apologize that my first posting here had to be urgent, my school day is over soon, and I'd like some work to show for it, but I'm stuck. I appreciate any assistance. So far I've got is this: 360 - x2 = y2 X being either the small or large angle and y taking the place of the opposite. That's all I have so far, I'd like a hint in the right direction to finding one of the angles. The normal forum I post on for this sort of stuff is not working right now. Thanks again. God bless y'all, -The Preacher • January 13th 2006, 09:03 AM TD! If we take the largest side (10) to be the base and we call the heigth h, then the area is given by 10h. But we know the area, so we can find h. By drawing the parallellogram and this height, you can form a right triangle (at the left or right side of the parallellogram). This right triangle has one side h (which we know now) and hypotenuse 6. With the pythagorean theorem, you can even find the last side. Now you have this triangle and you know all sides, use some trigonometry to fnd the angles. For example, check this topic. • January 13th 2006, 09:11 AM The Preacher Quote: Originally Posted by TD! If we take the largest side (10) to be the base and we call the heigth h, then the area is given by 10h. But we know the area, so we can find h. By drawing the parallellogram and this height, you can form a right triangle (at the left or right side of the parallellogram). This right triangle has one side h (which we know now) and hypotenuse 6. With the pythagorean theorem, you can even find the last side. I think that I wouldn't have any problem with this if I knew how to get the height from the area. Radicals confuse me. Do you think you could help me out? • January 13th 2006, 09:14 AM TD! Of course. For a parallellogram, the area is given by height x base. By choice, we take the long side as base (which is 10 here) and call the height h, we then have: $10h = 30\sqrt 2 \Leftrightarrow h = \frac{{30\sqrt 2 }} {{10}} = 3\sqrt 2$ • January 13th 2006, 09:21 AM The Preacher Quote: Originally Posted by TD! Of course. For a parallellogram, the area is given by height x base. By choice, we take the long side as base (which is 10 here) and call the height h, we then have: $10h = 30\sqrt 2 \Leftrightarrow h = \frac{{30\sqrt 2 }} {{10}} = 3\sqrt 2$ Okay... so $3\sqrt 2$ is the height. Uh... like I said, radicals confuse me. Is there a way to change that into an integer? I'm sorry to require so much assistance in order to understand. I appreciate your time, TD!. =] God bless y'all, -The Preacher • January 13th 2006, 09:24 AM TD! Quote: Originally Posted by The Preacher Okay... so $3\sqrt 2$ is the height. Uh... like I said, radicals confuse me. Is there a way to change that into an integer? I'm sorry to require so much assistance in order to understand. I appreciate your time, TD!. =] Well, the square root of two is irrational. That means that if you'd want to write it in decimals, it will have infinitely many decimals, just like pi. You can of course just truncate the expansion, then you're left with an approximation. The square root of two is more or less equal to 1.414. I usually just leave the radicals and work with them, since that's exact. If you're uncomfortable with that, you could use an approximation. • January 13th 2006, 09:26 AM The Preacher Yeah, I found that out before (the square root of two having too many decimals), that's why I was asking. Dang, I'm having so much trouble with this problem. Wait... I need to find the sine ratio. I could take the $3\sqrt 2$ and the hypotenuse (6) and find the opposite angle. Soo... that'd be $ $\frac{{3\sqrt 2 }}{6}$ $ . Hold on... I am bad at math, but I'm going to try to figure out the angle from this. I can do that, right? • January 13th 2006, 09:29 AM TD! Check the topic I linked in my first post, there are the 3 formula's you could possibly use. • January 13th 2006, 09:31 AM The Preacher Alright, thanks. Sorry for ignoring it. :o EDIT: Okay... I looked at it. So I need to divide $3\sqrt 2$ by 6. How do I divide radicals? :confused: Sorry to seem so helpless... I really do stink at math. • January 13th 2006, 09:33 AM TD! No problem. Try it and if it doesn't work out, ask for help :) • January 13th 2006, 09:34 AM The Preacher Lol, I edited my message. Sorry for the bump. • January 13th 2006, 09:34 AM ThePerfectHacker There is a theorem from trigonometry which states the area of a parallelogram with sides $a,b$ and angle $\theta$ is $A=ab\sin \theta$. Now you said its sides are 6 and 10. And area of $30\sqrt{2}$ Thus, by the formula $60\sin \theta=30\sqrt{2}$ divide by 60, thus, $\sin \theta=\frac{\sqrt{2}}{2}$. Now that happens when $\theta=45,135$. Q.E.D. • January 13th 2006, 09:40 AM The Preacher :eek: I think I just got really confused. Couldn't I just have found the angle by dividing $3\sqrt 2$ by 6? Maybe not. Maybe I've got all my sine stuff out of whack. Sorry. =] -The Preacher EDIT: Nevermind, I just went back some in my textbook, reviewed, and was able to find out how to solve this with TD!'s help. Thanks for helping me out, guys. I appreciate it. God bless y'all, -The Preacher • January 13th 2006, 10:01 AM TD! Quote: Originally Posted by The Preacher :eek: I think I just got really confused. Couldn't I just have found the angle by dividing $3\sqrt 2$ by 6? Maybe not. Maybe I've got all my sine stuff out of whack. Sorry. Yes, that would have given you the sine of that angle. $\sin \left( \alpha \right) = \frac{{3\sqrt 2 }} {6} \Leftrightarrow \sin \left( \alpha \right) = \frac{{\sqrt 2 }} {2} \Leftrightarrow \alpha = 45^\circ \vee \alpha = 135^\circ$ • January 13th 2006, 12:40 PM ticbol Quote: Originally Posted by The Preacher Hey, thanks for taking a look at this. A parallelogram with sides of 6 and 10 has an area of 30(sqrt)2. Find the measure of each angle of the parallelogram. Small angle _____ Large angle _____ I apologize that my first posting here had to be urgent, my school day is over soon, and I'd like some work to show for it, but I'm stuck. I appreciate any assistance. So far I've got is this: 360 - x2 = y2 X being either the small or large angle and y taking the place of the opposite. That's all I have so far, I'd like a hint in the right direction to finding one of the angles. The normal forum I post on for this sort of stuff is not working right now. Thanks again. God bless y'all, -The Preacher I do not belong to the group that gives "answers" or help in forms of hints/guides/hanging solutions/the likes. I belong to the group that gives complete/not partial/detailed answers. Let us go first to your 360 -x2 = y2. Your intention is correct, but you should know how to present it to us so that there'd be less confusion. x2 or y2 should have been x*2 or y*2. Or, the usual 2x or 2y. "x2" is mostly read as x, sub 2. Let x = small angle, and y = large angle. If you don't have the figure of the said parallelogram, draw it on paper. [Always work, or play, with a figure. Make that a habit.] Let us say the parallelogram is ABCD, where: --- angle A = angle C = angle x --- angle B = angle D = angle y --- sides BC and AD are horizontal Area of parallelogram = base times altitude, this altitude being perpendicular to the base. There is no perpendicular lines shown on the figure yet, so draw a line from B that is perpendicular to AD, and call is new line or altitude, h. Then, (AB)*h = 30sqrt(2) 10*h = 30sqrt(2) Divide both sides by 1o, h = 3sqrt(2) ----------------*** In the right triangle formed by AB, h and portion of AD, sin(angle A) = h /6 sin(x) = [3sqrt(2)] /6 sin(x) = sqrt(2) /2 Either get the decimal equivalent of that and then find the x by using a calculator, or, if you know from memory that sin(45 degrees) is [sqrt(2)]/2 or 1/[sqrt(2)], then x = arcsin[sqrt(2) /2] = 45deg --------answer. Then, using your 360 -2x = 2y, 360 -2*45 = 2y 270 = 2y
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 17, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8264919519424438, "perplexity": 961.5443855560435}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510273663.2/warc/CC-MAIN-20140728011753-00070-ip-10-146-231-18.ec2.internal.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/algebra-1/chapter-8-polynomials-and-factoring-chapter-review-8-1-adding-and-subtracting-polynomials-page-524/15
## Algebra 1 $7z^{3}-2z^{2}-16$ Simplify and write in standard form $(8z^{3}-3z^{2}-7)-(z^{3}-z^{2}+9)$ Distribute the $-$ in the second parenthesis $8z^{3}-3z^{2}-7-z^{3}+z^{2}-9$ Combine like terms $7z^{3}-2z^{2}-16$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9306120872497559, "perplexity": 557.2060473686707}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583509690.35/warc/CC-MAIN-20181015184452-20181015205952-00502.warc.gz"}
http://mathhelpforum.com/algebra/43069-complex-conjugate-roots.html
# Math Help - complex conjugate roots 1. ## complex conjugate roots I have the following problem which i hope someone can point me in the right direction of solving: The equation X^4+40x+39 has 4 roots, if two of the roots are the complex conjugate roots 2+J3 and 2-J3 by a process of long divison and slving a quadratic equation find the other two roots. The exaple I am given in the book i have gives you the real roots to start with but gives no example of an equation that gives you the imaginary roots. I am looking for somehelp with how to get started on this one. Any help is apprecciated 2. Hello, $2 \pm 3j$ is a root. Therefore the polynomial can be factored by $[x-(2+3j)][x-(2-3j)]=[(x-2)-3j][(x-2)+3j]$ We know that $(a-b)(a+b)=a^2-b^2$. So the previous line equals to : $=(x-2)^2-(3j)^2=x^2-4x+4-9\underbrace{j^2}_{-1}=x^2-4x+13$ Now, you can try the division process... 3. Originally Posted by ally79 I have the following problem which i hope someone can point me in the right direction of solving: The equation X^4+40x+39 has 4 roots, if two of the roots are the complex conjugate roots 2+J3 and 2-J3 by a process of long divison and slving a quadratic equation find the other two roots. The exaple I am given in the book i have gives you the real roots to start with but gives no example of an equation that gives you the imaginary roots. I am looking for somehelp with how to get started on this one. Any help is apprecciated Note: Only one of the roots needed to be given since all coefficients of the quartic are real and so the conjugate root theorem could be used to get the other.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9085767865180969, "perplexity": 278.45742804591293}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207925201.39/warc/CC-MAIN-20150521113205-00187-ip-10-180-206-219.ec2.internal.warc.gz"}
https://math.libretexts.org/Bookshelves/Mathematical_Logic_and_Proof/Book%3A_Mathematical_Reasoning_-_Writing_and_Proof_(Sundstrom)/5%3A_Set_Theory
$$\newcommand{\id}{\mathrm{id}}$$ $$\newcommand{\Span}{\mathrm{span}}$$ $$\newcommand{\kernel}{\mathrm{null}\,}$$ $$\newcommand{\range}{\mathrm{range}\,}$$ $$\newcommand{\RealPart}{\mathrm{Re}}$$ $$\newcommand{\ImaginaryPart}{\mathrm{Im}}$$ $$\newcommand{\Argument}{\mathrm{Arg}}$$ $$\newcommand{\norm}[1]{\| #1 \|}$$ $$\newcommand{\inner}[2]{\langle #1, #2 \rangle}$$ $$\newcommand{\Span}{\mathrm{span}}$$ # 5: Set Theory $$\newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} }$$ $$\newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}}$$ • 5.1: Sets and Operations on Sets We have used logical operators (conjunction, disjunction, negation) to form new statements from existing statements. In a similar manner, there are several ways to create new sets from sets that have already been defined. In fact, we will form these new sets using the logical operators of conjunction (and), disjunction (or), and negation (not). • 5.2: Proving Set Relationships In this section, we will learn how to prove certain relationships about sets. Two of the most basic types of relationships between sets are the equality relation and the subset relation. So if we are asked a question of the form, “How are the sets A and B related?”, we can answer the question if we can prove that the two sets are equal or that one set is a subset of the other set. There are other ways to answer this, but we will concentrate on these two for now. • 5.3: Properties of Set Operations This section contains many results concerning the properties of the set operations. We have already proved some of the results. Others will be proved in this section or in the exercises. The primary purpose of this section is to have in one place many of the properties of set operations that we may use in later proofs. These results are part of what is known as the algebra of sets or as set theory. • 5.4: Cartesian Products When working with Cartesian products, it is important to remember that the Cartesian product of two sets is itself a set. As a set, it consists of a collection of elements. In this case, the elements of a Cartesian product are ordered pairs. We should think of an ordered pair as a single object that consists of two other objects in a specified order. • 5.5: Indexed Families of Sets • 5.S: Set Theory (Summary) Thumbnail: A Venn diagram illustrating the intersection of two sets. Image used with permission (Public Domain; Cepheus).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9417199492454529, "perplexity": 228.524524497851}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540510336.29/warc/CC-MAIN-20191208122818-20191208150818-00489.warc.gz"}
https://imomath.com/index.php?options=484&lmm=0
# Change of Variables in Multiple Integrals ## Introduction Substitution (or change of variables) is a powerful technique for evaluating integrals in single variable calculus. An equivalent transformation is available for dealing with multiple integrals. The idea is to replace the original variables of integration by the new set of variables. This way the integrand is changed as well as the bounds for integration. If we are lucky enough to find a convenient change of variables we can significantly simplify the integrand or the bounds. ## Change of variables formula Two dimensional pictures are the easiest to draw so we will start with functions of two variables. Our first task is to get familiar with transformations of two dimensional regions. ### Transformations in $$\mathbb R^2$$ Assume that $$S$$ is a region in $$\mathbb R^2$$. We want to study the ways in which this region can be transformed to another region $$T$$. This is easiest to explain by considering an example. Let $$S=[0,2]\times[0,2]$$. Consider the functions $$u:S\to \mathbb R$$ and $$v:S\to\mathbb R$$ defined in the following way: \begin{eqnarray*} u(x,y)&=&x+2y\\ v(x,y)&=&x-y.\end{eqnarray*} To every point $$(x,y)\in S$$ ($$S$$ is painted blue in the diagram on the left) we can assign a new green point with coordinates $$(u(x,y),v(x,y))$$. This way we obtain a green region $$T$$. The mapping $$(x,y)\mapsto (u(x,y),v(x,y))$$ is one-to-one and onto, hence a bijection (you may want to review these terms in the section Functions). We can also write the inverse transformation, that maps each point $$(u,v)\in T$$ to the point $$(x(u,v),y(u,v))$$ in the following way: \begin{eqnarray*} x(u,v)&=&\frac{u+2v}3 \\ y(u,v)&=&\frac{u-v}3. \end{eqnarray*} ### Change of variables in double integrals Assume that $$S\subseteq \mathbb R^2$$ is a region in the plane. Let $$T\subseteq\mathbb R^2$$ be another region and assume that there are continuously differentiable functions $$X:T\to\mathbb R$$ and $$Y:T\to\mathbb R$$, such that the mapping $$\Phi(u,v)= (X(u,v),Y(u,v))$$ is a bijection between $$T$$ and $$S$$. The Jacobian of the mapping $$\Phi$$ is defined as $\frac{\partial(X,Y)}{\partial(u,v)}=\det\left|\begin{array}{cc} \frac{\partial X}{\partial u}& \frac{\partial X}{\partial v}\\ \frac{\partial Y}{\partial u}& \frac{\partial Y}{\partial v}\end{array}\right|.$ Theorem (Change of variables in double integrals) Assume that $$S$$ and $$T$$ are domains in $$\mathbb R^2$$ and that there are two continuously differentiable functions $$X,Y:T\to \mathbb R$$ such that $$\Phi: T\to S$$ defined by $$\Phi(u,v)=(X(u,v),Y(u,v))$$ is a bijection whose Jacobian is never $$0$$. For each continuous bounded $$f:S\to\mathbb R$$ the following equality holds: $\iint_S f(x,y)\,dxdy=\iint_T f(X(u,v),Y(u,v)) \cdot \left|\frac{\partial(X,Y)}{\partial(u,v)}\right|\,dudv.$ Example 1. Using the substitution $$u=2x+3y$$, $$v=x-3y$$, find the value of the integral $\iint_D e^{2x+3y}\cdot \cos(x-3y)\,dxdy,$ where $$D$$ is the region bounded by the parallelogram with vertices $$(0,0)$$, $$\left(1,\frac13\right)$$, $$\left(\frac43,\frac19\right)$$, and $$\left(\frac13,-\frac29\right)$$. ### Change of variables in triple integrals Assume that $$S, T\subseteq \mathbb R^3$$ are two regions in space. Assume that there are continuously differentiable functions $$X:T\to\mathbb R$$, $$Y:T\to\mathbb R$$, and $$Z:T\to\mathbb R$$, such that the mapping $$\Phi:T\to S$$ defined as $$\Phi(u,v,w)= (X(u,v,w),Y(u,v,w),Z(u,v,w))$$ is a bijection. The Jacobian of the mapping $$\Phi$$ is defined as $\frac{\partial(X,Y,Z)}{\partial(u,v,w)}=\det\left|\begin{array}{ccc} \frac{\partial X}{\partial u}& \frac{\partial X}{\partial v}& \frac{\partial X}{\partial w}\\ \frac{\partial Y}{\partial u}& \frac{\partial Y}{\partial v}& \frac{\partial Y}{\partial w} \\ \frac{\partial Z}{\partial u}& \frac{\partial Z}{\partial v}& \frac{\partial Z}{\partial w}\end{array}\right|.$ Theorem (Change of variables in triple integrals) Assume that $$S$$ and $$T$$ are domains in $$\mathbb R^2$$ and that there are three continuously differentiable functions $$X,Y,Z:T\to \mathbb R$$ such that $$\Phi: T\to S$$ defined by $$\Phi(u,v,w)=(X(u,v,w),Y(u,v,w),Z(u,v,w))$$ is a bijection whose Jacobian is never $$0$$. For each continuous bounded function $$f:S\to\mathbb R$$ the following equality holds: $\iiint_S f(x,y,z)\,dxdydz=\iiint_T f(X(u,v,w),Y(u,v,w),Z(u,v,w)) \cdot \left|\frac{\partial(X,Y,Z)}{\partial(u,v,w)}\right|\,dudvdw.$ ## Polar, cylindrical, and spherical substitutions We will now study very important substitutions that are used to simplify integrations over circular, spherical, cylindrical, and elliptical domains. One of them is applicable to double integral and is called polar change of variables and the other two, cylindrical and spherical, are used in triple integralds. ### Polar substitution The following change of variables is called the polar substitution: \begin{eqnarray*} x&=&r\cos\theta\\ y&=&r\sin \theta. \end{eqnarray*} The Jacobian for the polar substitution is equal to: $\frac{\partial(x,y)}{\partial(r,\theta)}=\det\left|\begin{array}{cc} \cos\theta&-r\sin\theta\\ \sin\theta&r\cos\theta\end{array}\right|=r\cos^2\theta+r\sin^2\theta=r.$ The variables $$r$$ and $$\theta$$ have the geometric meaning in the $$xy$$-coordinate system. The distance between $$(x,y)$$ and the origin is precisely $$r$$, while $$\theta$$ is the angles between the $$x$$-axis and the line connecting $$(x,y)$$ with $$(0,0)$$. Example 2. Evaluate the integral $\iint_D \cos\left(x^2+y^2\right)\,dxdy,$ where $$D$$ is the disc of radius $$3$$ centered at the origin. When dealing with ellipses it is very common to use the modified polar substitution. If the equation of the ellipse is $$\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$$, the following substitution is used to describe its interior: \begin{eqnarray*} x&=&ar\cos\theta\\ y&=&br\sin\theta\\ 0\leq&r&\leq 1\\ 0\leq&\theta&\leq 2\pi. \end{eqnarray*} Example 3. Let $$a$$ and $$b$$ be two positive real numbers. Find the area of the region enclosed by the ellipse with the equation $$\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$$. ### Cylindrical substitution In cylindrical substitution the original variables $$(x,y,z)$$ are replaced by $$(r,\theta, z)$$ using the following equations: \begin{eqnarray*} x&=&r\cos\theta\\ y&=&r\sin\theta\\ z&=&z. \end{eqnarray*} Again we can find the Jacobian by calculating the appropriate determinant: $\frac{\partial(x,y,z)}{\partial(r,\theta,z)}=\det\left|\begin{array}{ccc} \cos\theta &-r\sin\theta &0\\ \sin\theta&r\cos\theta&0\\ 0&0&1\end{array}\right|=r .$ Example 4. Determine the value of the integral $\iiint_D e^{x^2+y^2}\,dV$ where $$D$$ is the the region in bounded by the planes $$y=0$$, $$z=0$$, $$y=x$$, and the paraboloid $$z=4-x^2-y^2$$. ### Spherical substitution Spherical substitution means replacing the original variables $$(x,y,z)$$ by the variables $$(\rho,\theta, \phi)$$, where $$\rho$$ is the distance of the points $$(x,y,z)$$ from the origin $$(0,0,0)$$; $$\theta$$ is the angle that the line connecting $$(0,0,0)$$ and $$(x,y,0)$$ forms with the $$x$$-axis, and $$\phi$$ is the angle between the $$z$$-axis and the line connecting $$(x,y,z)$$ with $$(0,0,0)$$. Mathematically, the equations are: \begin{eqnarray*} x&=&\rho\cos\theta\sin\phi\\ y&=&\rho\sin\theta\sin\phi\\ z&=&\rho\cos\phi. \end{eqnarray*} We can find the Jacobian by calculating the appropriate determinant: \begin{eqnarray*} \frac{\partial(x,y,z)}{\partial(\rho,\theta,\phi)}&=&\det\left|\begin{array}{ccc} \cos\theta\sin\phi &-\rho\sin\theta\sin\phi &\rho\cos\theta\cos\phi\\ \sin\theta\sin\phi&\rho\cos\theta\sin\phi&\rho\sin\theta\cos\phi\\ \cos\phi&0&-\rho\sin\phi\end{array}\right| \\ &=&-\rho^2\cos^2\theta\sin^3\phi-\rho^2\sin^2\theta\sin\phi\cos^2\phi-\rho^2\cos^2\theta\sin\phi\cos^2\phi-\rho^2\sin^2\theta\sin^3\phi \\&=&-\rho^2\sin^3\phi-\rho^2\sin\phi\cos^2\phi=-\rho^2\sin\phi .\end{eqnarray*} Since in evaluation of the integral we are using the absolute value of the Jacobian, and $$\phi\in\left(0,\frac{\pi}2\right)$$ it is sufficient and more convenient to remember that $\left|\frac{\partial(x,y,z)}{\partial(\rho,\theta,\phi)}\right|=\rho^2\sin\phi.$ Example 5. Determine the value of the integral $\iiint_D e^{\sqrt{x^2+y^2+z^2}}\,dV$ where $$D$$ is the the region in bounded by the planes $$y=0$$, $$z=0$$, $$y=x$$, and the sphere $$x^2+y^2+z^2=9$$. 2005-2021 IMOmath.com | imomath"at"gmail.com | Math rendered by MathJax
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9940517544746399, "perplexity": 166.33617915809037}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585653.49/warc/CC-MAIN-20211023064718-20211023094718-00126.warc.gz"}
http://physics.stackexchange.com/tags/symmetry-breaking/new
# Tag Info 4 "Total spin conservation" means global $SU(2)$ spin-rotation symmetry (a continuous symmetry) of the Heisenberg model, and "spin wave" indicates an ordered ground state that spontaneously breaks the spin-rotation symmetry. Thus, according to Goldstone theorem, there must be a gapless mode for spin wave. 1 There are two ways to deal with a linear term in $\phi$: Complete the square, as was suggested in the comments. This is very often possible, but sometimes you do not want to do that. Interpret it as an interaction term with a $\phi$ particle popping out of the vacuum or vanishing. This will lead to non-zero tadpoles in your Feynman diagrams, so additional ... 0 The first assumption is that whatever vev the higgs picks up is constant in space, because this has less energy than one that increases the kinetic term in the Lagrangian. So we can do one global transformation to make the vev be in the second component only. You can imagine doing this prior to symmetry breaking, if you know what it is going to be ahead of ... 2 Well, after symmetry breaking, all that remains is electromagnetic $U(1)$, so the only generator that is truly a symmetry generator is $Q$. The fermions couple to the "Higgs" via the Yukawa coupling: $\mathcal{L}_y = -y_e^{ij} \bar L_{L,i} \Phi e_{R,j} - y_u^{ij} \bar Q_{L,i} \tilde{\Phi} u_{R,j} - y_d^{ij} \bar Q_{L,i} \Phi d_{R,j} + h.c.\,$ which mixes ... Top 50 recent answers are included
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8169447183609009, "perplexity": 375.4070726502773}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345766127/warc/CC-MAIN-20131218054926-00012-ip-10-33-133-15.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/maxwell-boltzmann-fermi-dirac-and-bose-einstein.156976/
# Maxwell-Boltzmann, Fermi-Dirac and Bose-Einstein. 1. Feb 18, 2007 ### Clau If we have indistinguishable particles, we must use Fermi-Dirac statistics. To Identical and indistinguishable particles, we use Bose-Einstein statistics. And, to distinguishable classical particles we use Maxwell-Boltzmann statistics. I have a system of identical but distinguishable particles, where the second level has a degeneracy. I was reading at Wikipedia: "Degenerate gases are gases composed of fermions that have a particular configuration which usually forms at high densities." My question is: Should I use Fermi-Dirac statistics in this case? I'm confused. I was reading Reif and it seems that I should use Maxwell-Boltzmann just to nondegenerate gases. But if my system is made by distinguishable particles, it seems that I should use MB statistics. 2. Feb 19, 2007 ### vanesch Staff Emeritus I'm not an expert on this and if I'm making an error, please correct me. But I thought that distinguishability is the key element, which determines that one should use the MB statistics. The MB statistics is ALSO a good approximation to the other distributions in certain limiting cases (such as dilute media), but I thought that if we deal with distinguishable components, that MB was exact. (the problem being, of course, that there do not exist systems of distinguishable elementary particles in nature)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8245441317558289, "perplexity": 1040.6445508601932}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476990033880.51/warc/CC-MAIN-20161020190033-00389-ip-10-142-188-19.ec2.internal.warc.gz"}
https://brilliant.org/problems/rain-or-shine/
# Rain Or Shine The weather forecast stated that there would be 60% chance of rain on Saturday and 30% chance of rain on Sunday. What is the probability (in percentage) of rain on at least one of these two days? (Assume the days are independent.) ×
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8901070952415466, "perplexity": 501.7775640061285}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583659063.33/warc/CC-MAIN-20190117184304-20190117210304-00350.warc.gz"}
https://forum.azimuthproject.org/plugin/viewcomment/18496
Has anyone figured out an answer for **puzzle 85**? Here's what I got so far. The "would be" left adjoint should look like: $h(x) = \bigwedge \\left\\{ y \in \mathbb{N}[S] \; : \; f(y) \to x \\right\\} .$ I've tried instantiating the formula for a resource in \$$\mathbb{N}[T] \$$ – I picked \$$[\textrm{egg}]\$$ and I've used the fact that \$$f \$$ forgets bowls and shells: $h([\textrm{egg}]) = \bigwedge \\left\\{ [\textrm{egg}], [\textrm{egg}] + [\textrm{bowl}], [\textrm{egg}] + [\textrm{shells}], [\textrm{egg}] + [\textrm{bowl}] + [\textrm{shells}], \cdots \\right\\} .$ But I cannot find an element in \$$\mathbb{N}[S] \$$ that is less than (that is, can be produced from) both \$$[\textrm{egg}] \$$ and \$$[\textrm{egg}] + [\textrm{bowl}] \$$. I'm tempted to conclude there is no left adjoint, but I doubt my reasoning as it goes against [John's optimism](https://forum.azimuthproject.org/discussion/comment/18382/#Comment_18382).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8538498282432556, "perplexity": 1572.7886003497615}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662588661.65/warc/CC-MAIN-20220525151311-20220525181311-00072.warc.gz"}
https://ir.amolf.nl/pub/6682
When sheared, most elastic solids including metals, rubbers and polymer gels dilate perpendicularly to the shear plane. This behavior, known as the Poynting effect, is characterized by a positive normal stress. Surprisingly, fibrous biopolymer gels exhibit a \emph{negative} normal stress under shear. Here we show that this anomalous behavior originates from the open network structure of biopolymer gels. Using fibrin networks with a controllable pore size as a model system, we show that the normal stress response to an applied shear is positive at short times, but decreases to negative values with a characteristic time scale set by pore size. Using a two-fluid model, we develop a quantitative theory that unifies the opposite behaviors encountered in synthetic and biopolymer gels.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8474117517471313, "perplexity": 2263.3837787547654}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657146845.98/warc/CC-MAIN-20200713194203-20200713224203-00143.warc.gz"}
http://en.wikipedia.org/wiki/String_landscape
# String theory landscape (Redirected from String landscape) The string theory landscape refers to the huge number of possible false vacua in string theory.[1] The large number of theoretically allowed configurations has prompted suggestions that certain physical mysteries, particularly relating to the fine-tuning of constants like the cosmological constant or the Higgs boson mass, may be explained not by a physical mechanism but by assuming that many different vacua are physically realized.[2] The anthropic landscape thus refers to the collection of those portions of the landscape that are suitable for supporting intelligent life, an application of the anthropic principle that selects a subset of the otherwise possible configurations. In string theory the number of false vacua is thought to be somewhere between 1010 to 10100.[1] The large number of possibilities arises from different choices of Calabi–Yau manifolds and different values of generalized magnetic fluxes over different homology cycles. If one assumes that there is no structure in the space of vacua, the problem of finding one with a sufficiently small cosmological constant is NP complete,[3] being a version of the subset sum problem. ## Anthropic principle Main article: Anthropic principle The idea of the string theory landscape has been used to propose a concrete implementation of the anthropic principle, the idea that fundamental constants may have the values they have not for fundamental physical reasons, but rather because such values are necessary for life (and hence intelligent observers to measure the constants). In 1987, Steven Weinberg proposed that the observed value of the cosmological constant was so small because it is not possible for life to occur in a universe with a much larger cosmological constant.[4] In order to implement this idea in a concrete physical theory, it is necessary to postulate a multiverse in which fundamental physical parameters can take different values. This has been realized in the context of eternal inflation. ## Bayesian probability Main article: Bayesian probability Some physicists, starting with Weinberg, have proposed that Bayesian probability can be used to compute probability distributions for fundamental physical parameters, where the probability $P(x)$ of observing some fundamental parameters $x$ is given by, $P(x)=P_{\mathrm{prior}}(x)\times P_{\mathrm{selection}}(x),$ where $P_\mathrm{prior}$ is the prior probability, from fundamental theory, of the parameters $x$ and $P_\mathrm{selection}$ is the anthropic selection function, determined by the number of "observers" that would occur in the Universe with parameters $x$. These probabilistic arguments are the most controversial aspect of the landscape. Technical criticisms of these proposals have pointed out that: • The function $P_\mathrm{prior}$ is completely unknown in string theory and may be impossible to define or interpret in any sensible probabilistic way. • The function $P_\mathrm{selection}$ is completely unknown, since so little is known about the origin of life. Simplified criteria (such as the number of galaxies) must be used as a proxy for the number of observers. Moreover, it may never be possible to compute it for parameters radically different from those of the observable universe. (Interpreting probability in a context where it is only possible to draw one sample from a distribution is problematic in frequentist probability but not in Bayesian probability, which is not defined in terms of the frequency of repeated events.) Various physicists have tried to address these objections, and the ideas remain extremely controversial both within and outside the string theory community. These ideas have been reviewed by Carroll.[5] ## Simplified approaches Tegmark et al. have recently considered these objections and proposed a simplified anthropic scenario for axion dark matter in which they argue that the first two of these problems do not apply.[6] Vilenkin and collaborators have proposed a consistent way to define the probabilities for a given vacuum.[7] A problem with many of the simplified approaches people have tried is that they "predict" a cosmological constant that is too large by a factor of 10–1000 (depending on one's assumptions) and hence suggest that the cosmic acceleration should be much more rapid than is observed.[8][9][10] ## Criticism Although few dispute the idea that string theory appears to have an unimaginably large number of metastable vacua, the existence, meaning and scientific relevance of the anthropic landscape remain highly controversial. Prominent proponents of the idea include Andrei Linde, Sir Martin Rees and especially Leonard Susskind, who advocate it as a solution to the cosmological-constant problem. Opponents, such as David Gross, suggest that the idea is inherently unscientific, unfalsifiable or premature. A famous debate on the anthropic landscape of string theory is the Smolin–Susskind debate on the merits of the landscape. The term "landscape" comes from evolutionary biology (see Fitness landscape) and was first applied to cosmology by Lee Smolin in his book.[11] It was first used in the context of string theory by Susskind. There are several popular books about the anthropic principle in cosmology.[12] Two popular physics blogs are opposed to this use of the anthropic principle.[13] ## References 1. ^ a b The most commonly quoted number is of the order 10500. See M. Douglas, "The statistics of string / M theory vacua", JHEP 0305, 46 (2003). arXiv:hep-th/0303194; S. Ashok and M. Douglas, "Counting flux vacua", JHEP 0401, 060 (2004). 2. ^ L. Susskind, "The anthropic landscape of string theory", arXiv:hep-th/0302219. 3. ^ Frederik Denef; Douglas, Michael R. (2006). "Computational complexity of the landscape". Annals of Physics 322 (5): 1096–1142. arXiv:hep-th/0602072. Bibcode:2007AnPhy.322.1096D. doi:10.1016/j.aop.2006.07.013. 4. ^ S. Weinberg, "Anthropic bound on the cosmological constant", Phys. Rev. Lett. 59, 2607 (1987). 5. ^ S. M. Carroll, "Is our universe natural?", arXiv:hep-th/0512148. 6. ^ M. Tegmark, A. Aguirre, M. Rees and F. Wilczek, "Dimensionless constants, cosmology and other dark matters", arXiv:astro-ph/0511774. F. Wilczek, "Enlightenment, knowledge, ignorance, temptation", arXiv:hep-ph/0512187. See also the discussion at [1]. 7. ^ See, e.g. Alexander Vilenkin (2006). "A measure of the multiverse". Journal of Physics A: Mathematical and Theoretical 40 (25): 6777–6785. arXiv:hep-th/0609193. Bibcode:2007JPhA...40.6777V. doi:10.1088/1751-8113/40/25/S22. 8. ^ Abraham Loeb (2006). "An observational test for the anthropic origin of the cosmological constant". JCAP 0605: 009. (subscription required (help)). 9. ^ Jaume Garriga & Alexander Vilenkin (2006). "Anthropic prediction for Lambda and the Q catastrophe". Prog. Theor.Phys. Suppl. 163: 245–57. arXiv:hep-th/0508005. Bibcode:2006PThPS.163..245G. doi:10.1143/PTPS.163.245. (subscription required (help)). 10. ^ Delia Schwartz-Perlov & Alexander Vilenkin (2006). "Probabilities in the Bousso-Polchinski multiverse". JCAP 0606: 010. (subscription required (help)). 11. ^ L. Smolin, "Did the universe evolve?", Classical and Quantum Gravity 9, 173–191 (1992). L. Smolin, The Life of the Cosmos (Oxford, 1997) 12. ^ L. Susskind, The cosmic landscape: string theory and the illusion of intelligent design (Little, Brown, 2005). M. J. Rees, Just six numbers: the deep forces that shape the universe (Basic Books, 2001). R. Bousso and J. Polchinski, "The string theory landscape", Sci. Am. 291, 60–69 (2004). 13. ^ Lubos Motl's blog criticized the anthropic principle and Peter Woit's blog frequently attacks the anthropic string landscape.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 9, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8896881341934204, "perplexity": 1590.0559954320095}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207925696.30/warc/CC-MAIN-20150521113205-00179-ip-10-180-206-219.ec2.internal.warc.gz"}
http://tex.stackexchange.com/questions/84215/is-it-correct-to-use-a-comma-in-this-equation
# Is it correct to use a comma in this equation? Consider: Then, the sequence $k_i$ given by (3.38) is increasing and converges to $$\label{eq:weired_equation} k^\xi = k_1\times k_2 \times c \times d,$$ where $c$ and $d$ are defined in the Theorem 2. I have the questions: 1. What are the grammatical errors in the above piece? 2. Should a comma be used after the equation \eqref{eq:weired_eqution}. - @WillHunting I don't 100% agree with you, mathematical language has some specifics, and mathematical typography as well. IMHO, this question is boundary here, but still ok. –  tohecz Nov 25 '12 at 19:45 I'm 100% sure that the comma is correct (see my answer). On the other hand, I think that (1) There should be no comma after Then, and (2) there should be no the before Theorem 2 (you write Theorem with capital T, hence it is like a name). However, I'm not a native speaker, and one such should confirm what I say. –  tohecz Nov 25 '12 at 19:48 @tohecz: math.stackexchange.com exists :-) –  Martin Schröder Nov 26 '12 at 9:50 There should be no difference in punctuation whether you write the equation on display or inside the text. So since there would be a comma if written inside the text, the comma is correct. Considering the 1st question, see my comment. In the end, the piece can look like Then the sequence $k_i$ given by (3.38) is increasing and converges to $$\label{eq:weired_equation} k^\xi = k_1\times k_2 \times c \times d,$$ where $c$ and $d$ are defined in Theorem 2. In the label, there's a typo and it should be weird_equation, but I did not correct this to avoid you problems with cross-referencing. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9637829065322876, "perplexity": 645.7315436047608}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637900397.29/warc/CC-MAIN-20141030025820-00071-ip-10-16-133-185.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/force-due-to-pressure-on-barrel-lid.551886/
# Force due to pressure on barrel lid 1. Nov 18, 2011 ### rubenhero 1. The problem statement, all variables and given/known data In the seventeenth century, Blaise Pacal performed the following experiment to demonstrate the properties of pressure. A very long, thin, vertical tube was inserted into the center of the top lid of a wine barrel filled with water. Water was then added slowly to the tube until the wine barrel burst. (See figure 13-50 on page 363 of your textbook, but ignore the numbers) Suppose the radius of the wine barrel lid has radius rl = 21.8 cm, the radius of the tube was rt = 2.67 mm, and the height of the water in the tube was h = 14.5 m. Find: a) F, the magnitude of the force exerted on the inside of the barrel lid due to water pressure 2. Relevant equations P = F/A , AP = F = ρgh∏r2 3. The attempt at a solution F = 1000kg/m3 * 9.81m/s2 * 14.5m * ∏ * (.218m)2 F = 21237.32775 N The answer turned out wrong but i can't figure out what is wrong. 2. Nov 18, 2011 ### dynamicsolo Isn't the applied pressure coming from the tube? This would then be the pressure distributed to the inside of the barrel lid. 3. Nov 18, 2011 ### rubenhero i took the pressure from the tube then multiplied it by the area of the lid, isn't that right? 4. Nov 19, 2011 ### dynamicsolo Disregard my first remark: I was thinking of the force value. I don't see anything obviously wrong. Are your numbers radii, and not diameters? What do you mean by saying, "The answer turned out wrong." Is there a given answer? 5. Nov 19, 2011 ### rubenhero my professor uses webassign.com to give us assignments online. The answer i plugged into the website turned out to be wrong and i only have one chance left to enter the correct answer. I did ask my professor, he would only say that i did the pressure formula wrong. I used the radius numbers given but converted them to meters before plugging them into the formula. Last edited: Nov 19, 2011 6. Nov 19, 2011 ### dynamicsolo The only thing I can think of that the problem might be looking for is that the pressure from the tube should be the sum of the hydrostatic pressure from the water, $\rho gh$, plus the atmospheric pressure of 101,300 N/m2 (which is the "hydrostatic pressure" of the air), and that this times the area of the barrel lid gives the force acting on the inside of the lid. Is one of the other parts, by any chance, asking for the force on the outside of the lid due to atmospheric pressure? The net force on the lid would then be the value you found, which would be an upward force on the lid of $\rho gh \cdot \pi \cdot r_{l}^{2}$. Have I mentioned how much I detest WebAssign? 7. Nov 19, 2011 ### rubenhero The other part of the question (part b) asked for the mass of the water inside the tube. I just emailed my professor and he said that the problem is open to air, i'm guessing the air is through the tube since the barrel is full of water. The other physics section in my college uses masteringphysics.com, but the professor i am taking only uses webassign. 8. Nov 19, 2011 ### dynamicsolo I don't think the air is going through the tube if the tube has 14.5 meters of water in it. I think he is saying that atmospheric pressure is being applied to the top of the water in the tube. So the total pressure applied at the mouth of the tube at the point where it meets the water in the barrel is $\rho gh$ + 101,300 N/m2 , the sum of the water's hydrostatic pressure and the atmospheric pressure. 9. Nov 19, 2011 ### rubenhero Thank you, it worked. I added air pressure to the formula and i got 36361.52404 N. I am glad this worked out since it was my last try for this question on webassign. I want to also thank you for your patience in helping me understand this problem. 10. Nov 19, 2011 ### dynamicsolo You're welcome! And I quite understand your frustration with WebAssign. I work with students here who also have to wrestle with physics on that system... Share this great discussion with others via Reddit, Google+, Twitter, or Facebook
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9100566506385803, "perplexity": 640.1249355657377}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583705091.62/warc/CC-MAIN-20190120082608-20190120104608-00553.warc.gz"}
http://mathhelpforum.com/calculus/47183-how-can-i-evaluate-integral.html
# Thread: How can I evaluate this integral? 1. ## How can I evaluate this integral? How can I evaluate this integral? Integral sin(ln(x)) dx If possible, can anyone show me step by step? Thank you 2. Originally Posted by noppawit How can I evaluate this integral? Integral sin(ln(x)) dx $\displaystyle \int \sin (\ln x) dx = \int e^{\ln x}\sin (\ln x)(\ln x)'dx$ By substitution you need to compute, $\displaystyle \int e^t \sin t dt$ 3. Originally Posted by noppawit How can I evaluate this integral? Integral sin(ln(x)) dx If possible, can anyone show me step by step? Thank you one way: write as $\displaystyle \int \frac xx \sin ( \ln x) ~dx$ then make the substitution $\displaystyle t = \ln x$ you will get the integral $\displaystyle \int e^t \sin t~dt$ which you would then do using integration by parts, with $\displaystyle u = \sin t$ (the part you differentiate) and $\displaystyle dv = e^t~dt$ (the part you integrate)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9940198659896851, "perplexity": 2280.4359734532}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794870497.66/warc/CC-MAIN-20180527225404-20180528005404-00107.warc.gz"}
http://math.stackexchange.com/questions/288622/show-that-the-kappa-oscillation-set-of-a-function-is-a-closed-set
# Show that the $\kappa$-oscillation set of a function is a closed set Let the $\kappa$-oscillation of the set be : $\{x \in [a,b]: \text{osc} f \ge \kappa$} How do we show that it is a closed set? Do we prove its complement is open? If so, what would one call its compliment? I am totally lost here. If anyone can propose a proof, I will be grateful. - Its complement (note spelling) is simply $\big\{x\in[a,b]:\operatorname{osc}f<\kappa\big\}$. And yes, proving that this is open is a very reasonable approach. – Brian M. Scott Jan 28 '13 at 3:59 Exactly what definition of oscillation are you using? – Brian M. Scott Jan 28 '13 at 4:12 Thanks for noting the spelling, Brian. But I am lost as to how to go about it. Can you point me to the right direction? Also I was think whether I can use the fact that a closed set contains all its limit points. So I have to construct a sequence (I do not know what it would look like) such that is $x$ is a limit point of a sequence in the set, then $x\in$ set. – user43901 Jan 28 '13 at 4:12 the definition : osc $f = \lim_{r \to 0} \text{diam} f [x+r,x-r]$ – user43901 Jan 28 '13 at 4:14 Theorem 6.27, p.259 in Bruckner, Bruckner, Thomson: Elementary Real Analysis. The book is freely available here. – Martin Sleziak Jan 28 '13 at 6:26 HINT: Suppose that the oscillation of $f$ at $x$ is less than $\kappa$. This means that $$\lim_{r\to 0}\operatorname{diam}f\big[[x-r,x+r]\big]<\kappa\;.$$ By the definition of limit there is some $r_0>0$ such that $\operatorname{diam}f\big[[x-r,x+r]\big]<\kappa$ for all positive $r\le r_0$. Suppose that $|x-y|<\frac{r_0}2$, then $$\left[y-\frac{r_0}2,y+\frac{r_0}2\right]\subseteq[x-r_0,x+r_0]\;.$$ Can you now use that to show that the oscillation of $f$ at $y$ is less than $\kappa$ and hence that the complement of your set is open? - Let $A=\{ x\in [a,b] : \text{osc} f \geq \kappa \}$ and take a sequence $x_n\in A$ such that $x_n\to x$. We want to prove $x\in A$. To do this pick an $r>0$ and $x_n$ such that $|x_n-x|<r/2$. Then $[x_n-r/2,x_n+r/2] \subset [x-r,x+r]$ and so $$\kappa\leq\text{diam}f([x_n-r/2,x_n+r/2]) \leq \text{diam}f([x-r,x+r])$$ Take the limit and conclude. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9355553984642029, "perplexity": 114.74110748884736}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701148558.5/warc/CC-MAIN-20160205193908-00084-ip-10-236-182-209.ec2.internal.warc.gz"}
http://porn3gp.ru/half-life-equation-radiometric-dating-18615.html
# Half life equation radiometric dating This is not true for zeroth- and second-order reactions.The half-life of a first-order reaction is independent of the concentration of the reactants.This becomes evident when we rearrange the integrated rate law for a first-order reaction (Equation 14.21) to produce the following equation: Figure $$\Page Index$$: The Half-Life of a First-Order Reaction. Using Activity is usually measured in disintegrations per second (dps) or disintegrations per minute (dpm). The parent-daughter ratio and half-lives elapsed hold no matter what minerals you are dealing with. To determine the age, you need to know what the minerals are, and the half-life of the parent. When the animal or plant dies, the carbon-14 nuclei in its tissues decay to nitrogen-14 nuclei by a radioactive process known as beta decay, which releases low-energy electrons (β particles) that can be detected and measured: $\ce \label$ The half-life for this reaction is 5700 ± 30 yr. Comparing the disintegrations per minute per gram of carbon from an archaeological sample with those from a recently living sample enables scientists to estimate the age of the artifact, as illustrated in Example 11. Using this method implicitly assumes that the ratio in the atmosphere is constant, which is not strictly correct.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9131004810333252, "perplexity": 1842.659085118083}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947421.74/warc/CC-MAIN-20180424221730-20180425001730-00010.warc.gz"}
https://math.stackexchange.com/questions/3131689/if-p-is-congruent-to-1-mod-4-where-p-is-an-odd-prime-then-x2-congruent-t
# If $p$ is congruent to 1 mod 4 where $p$ is an odd prime, then $x^2$ congruent to -1 mod $p$ has 2 solutions. If $$p = 5$$, then the values of $$x$$ that will satisfy the congruence $$x^2 \equiv -1 \bmod p$$ are $$2, 3$$ If $$p = 13$$, then the values of $$x$$ that will satisfy the above congruence are $$5, 8$$. And so on... How can i prove that for all $$p$$ s.t. $$(p \equiv 1\bmod 4)$$, then $$x^2\equiv -1\bmod p$$ has only $$2$$ solutions? And by observation, I think it will also hold on $$p^k$$? How can I prove it though? Thank you! As mentioned, the case of prime $$p$$ is quadratic reciprocity. For $$p^k$$ you can use Hensel lifting. The point is this. Suppose $$x \in [1,2, \ldots, p^j-1]$$ is a solution of $$x^2 \equiv -1 \mod p^j$$, where $$p$$ is an odd prime and $$j \ge 1$$. Thus $$x^2 = -1 + z p^j$$ for some integer $$z$$. Consider $$x + p^j y$$ where $$y \in [0,1,\ldots, p-1]$$. We have $$(x + p^j y)^2 = x^2 + 2 p^j x y + p^{2j} y^2 \equiv -1 + (z + 2x y) p^j \bmod p^{j+1}$$ so $$x + p^j y$$ will be a solution mod $$p^{j+1}$$ iff $$y \equiv -z/(2 x) \bmod p$$. Thus every solution mod $$p^j$$ lifts to a unique solution mod $$p^{j+1}$$. Induction on $$k$$ in $$p^k.$$ For $$k \geq 2,$$ we have two roots of $$-1 \pmod {p^{k-1}}.$$ Take one of them, call it $$r,$$ and pick a representative $$R$$ for $$r \pmod {p^k}.$$ We know that $$R^2 \equiv -1 + W p^{k-1} \pmod {p^k}$$ where $$W$$ is a value mod $$p,$$ if you wish, you may demand $$0 \leq W < p.$$ This $$R$$ then solve, for integer $$t,$$ $$\left( R + t p^{k-1} \right)^2 \equiv -1 \pmod {p^k} \; ?$$ $$R^2 + 2Rt p^{k-1} + t^2 p^{2k-2} \equiv -1 \pmod {p^k} \; ?$$ Since $$k \geq 2,$$ we have $$2k-2 \geq k.$$ $$R^2 + 2Rt p^{k-1} \equiv -1 \pmod {p^k} \; ?$$ $$-1 + W p^{k-1} + 2Rt p^{k-1} \equiv -1 \pmod {p^k} \; ?$$ $$W p^{k-1} + 2Rt p^{k-1} \equiv 0 \pmod {p^k} \; ?$$ $$W + 2Rt \equiv 0 \pmod p \; ?$$ Now, $$2R$$ is invertible $$\pmod p,$$ so there is one and only one solution $$\pmod p$$ to $$2Rt \equiv -W \pmod p \; ?$$ That's it, you get exactly one root on top of each root you have, in the process of going up one power of $$p$$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 54, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9922526478767395, "perplexity": 41.96672966505433}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998339.2/warc/CC-MAIN-20190617002911-20190617024911-00550.warc.gz"}
http://www.physicsforums.com/showthread.php?s=0fc3397e43d5d3bcbca3cf100ee1e68b&p=4499371
# Radiation absorption and temperature changes in matter by Alexander83 Tags: heating, radiation, solid, temperature, x-ray P: 19 Hi there, I've been studying mechanisms by which high-energy radioactive decay products (beta particles, x-rays, gamma rays) are attenuated as they pass through matter. From my readings in introductory and intermediate level textbooks, the general mechanisms by which these particles and rays are described as interacting with matter generally involves interactions with electrons in the target material either causing ionization or electronic excitation. My gut feeling is that the net effect of all of these interactions will (eventually) be an increase in the temperature of the target material. The mechanisms that are commonly described in books for attenuation of radiation such as photoionization or Compton scatter in the case of photons, or Bremsstrahlung production or collisional ionization in the case of energetic electrons. These mechanisms would seem to result in the production of simply more particles (free electrons) or photons in the target material. My (very) naive understanding of what a change temperature means in a solid structure would be something along the lines of excitation of vibrational or rotational modes in an atomic lattice with the resulting increase in the kinetic energy of the atoms of the substance resulting in what we perceive as an increase in temperature... is this correct? My question then is this: how do the interactions I mentioned earlier ultimately result in a temperature change in the target? My gut feeling is that secondary radiation produced in a target continuously decreases in energy level until it is at the correct energy level to excite vibrational or rotational modes in the target, causing what we think of at a macroscopic level as "heating" but I can't find anywhere where this is spelled out. Introductory texts seldom go into much detail much beyond discussing the mechanism of the initial attenuation of high energy radiation and not subsequent steps. Thanks for you time! Chris. Mentor P: 11,573 All charged particles are ionizing (and the slow electrons then collide with atoms and other electrons and heat the material) and/or directly transfer momentum to atoms (-> heat). Neutral particles can hit nuclei, or produce secondary radiation with charged particles. If heating due to radiation is significant, the radiation levels are really high. P: 19 mfb, thanks for the post. I think I'm trying to ask about the process that you described as "the slow electrons then collide with atoms and other electrons and heat the material" I think I'm struggling to think about what it means to say that the material is heated... is this an excitation of vibration of the atoms in a lattice in solids which is what we conceive of as a temperature increase? If that's the case, why is it that slower moving particles cause this excitation and heating and not higher energy particles? Chris Mentor P: 11,573 Radiation absorption and temperature changes in matter Excitation of vibrations of atoms and excitations of electrons. If that's the case, why is it that slower moving particles cause this excitation and heating and not higher energy particles? Both do it, but the high-energy particles tend to transfer high energies to particles they "hit", so it takes some steps to get to thermal energies (~1/40 eV at room temperature). Related Discussions High Energy, Nuclear, Particle Physics 13 Quantum Physics 4 Introductory Physics Homework 0 General Physics 1 Classical Physics 0
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8742457032203674, "perplexity": 570.2913471325677}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997876165.43/warc/CC-MAIN-20140722025756-00042-ip-10-33-131-23.ec2.internal.warc.gz"}
http://www.unconventionalwisdomradio.com/c3meh8gr/shortest-distance-between-two-skew-lines-cartesian-form-4405ae
# shortest distance between two skew lines cartesian form Consider linesl1andl2with equations: r→ = a1→ + λ b1→ and r→ = a2→ + λ b2→ Let us discuss the method of finding this line of shortest distance. In our case, the vector between the generic points is (obtained as difference from the generic points of the two lines in their parametric form): Imposing perpendicularity gives us: Solving the two simultaneous linear equations we obtain as solution . So they clearly aren’t parallel. Abstract. We may derive a formula using this approach and use this formula directly to find the shortest distance between two parallel lines. Your email address will not be published. Ex 11.2, 15 (Cartesian method) Find the shortest distance between the lines ( + 1)/7 = ( + 1)/( − 6) = ( + 1)/1 and ( − 3)/1 = ( − 5)/( − 2) = ( − 7)/1 Shortest distance between two linesl1: ( − _1)/_1 = ( − _1)/_1 = ( − _1)/_1 l2: ( − _2)/_2 = ( − _2)/_2. Parametric vector form of a plane; Scalar product forms of a plane; Cartesian form of a plane; Finding the point of intersection between a line and a plane; . Equation of Line - We form equation of line in different cases - one point and 1 parallel line, 2 points … The shortest distance between two skew lines is the length of the shortest line segment that joins a point on one line to a point on the other line. There are no skew lines in 2-D. %�쏢 The shortest distance between two parallel lines is equal to determining how far apart lines are. "A straight line is a line of zero curvature." A line is essentially the extension of a line segment beyond the original two points. . Let the two lines be given by: L 1 = a 1 → + t ⋅ b 1 → What follows is a very quick method of finding that line. Planes. Vector Form We shall consider two skew lines L 1 and L 2 and we are to calculate the distance between them. The distance between two skew lines is naturally the shortest distance between the lines, i.e., the length of a perpendicular to both lines. If two lines are parallel, then the shortest distance between will be given by the length of the perpendicular drawn from a point on one line form another line. The distance between them becomes minimum when the line joining them is perpendicular to both. It's easy to do with a bunch of IF statements. In the usual rectangular xyz-coordinate system, let the two points be P 1 a 1,b 1,c 1 and P 2 a 2,b 2,c 2 ; d P 1P 2 a 2 " a 1,b 2 " … And length of shortest distance line intercepted between two lines is called length of shortest distance. The shortest distance between two circles is given by C 1 C 2 – r 1 – r 2, where C 1 C 2 is the distance between the centres of the circles and r­ 1 and r­ 2 are their radii. I want to calculate the distance between two line segments in one dimension. Solution of I. Lines. $\begingroup$ The result of your cross product technically “points in the same direction as [the vector that joins the two skew lines with minimum distance]”. Consider two skew lines L1 and L2 , whose equations are 1 1 . I’ve changed the directional vector of the first line, so that numbers should be correct now , Your email address will not be published. Cartesian form of a line; Vector product form of a line; Shortest distance between two skew lines; Up to Contents. d = ∣ ( a ⃗ 2 – a ⃗ 1). The idea is to consider the vector linking the two lines in their generic points and then force the perpendicularity with both lines. The shortest distance between two skew lines is the length of the shortest line segment that joins a point on one line to a point on the other line. The straight line which is perpendicular to each of non-intersecting lines is called the line of shortest distance. We will call the line of shortest distance . %PDF-1.3 It can be identified by a linear combination of a … <> x��}͏ɑߝ�}X��I2���Ϫ���k����>�BrzȖ���&9���7xO��ꊌ���z�~{�w�����~/"22222��k�zX���}w��o?�~���{ ��0٧�ٹ���n�9�~�}��O���q�.��޿��R���Y(�P��I^���WC���J��~��W5����߮������nE;�^�&�?��� 5 0 obj Parametric vector form of a plane; Scalar product forms of a plane; Cartesian form of a plane; Finding the point of intersection between a line and a plane; This formula can be derived as follows: − is a vector from p to the point a on the line. –a1. A line parallel to Vector (p,q,r) through Point (a,b,c) is expressed with $$\hspace{20px}\frac{x-a}{p}=\frac{y-b}{q}=\frac{z-c}{r}$$ Distance between parallel lines. Let’s consider an example. It doesn’t “lie along the minimum distance”. Skew Lines. The shortest distance between the lines is the distance which is perpendicular to both the lines given as compared to any other lines that joins these two skew lines. If Vt is s – r then the first term should be (1+t-k , …) not as above. If two lines intersect at a point, then the shortest distance between is 0. The distance of an arbitrary point p to this line is given by ⁡ (= +,) = ‖ (−) − ((−) ⋅) ‖. Distance between two skew lines . In our case, the vector between the generic points is (obtained as difference from the generic points of the two lines in their parametric form): Solving the two simultaneous linear equations we obtain as solution . Cartesian and vector equation of a plane. This is my video lecture on the shortest distance between two skew lines in vector form and Cartesian form. This impacts what follows. Cartesian Form: are the Cartesian equations of two lines, then the shortest distance between them is given by . But we are talking about the same thing, and this is just a pedantic issue. In 2-D lines are either parallel or intersecting. (\vec {b}_1 \times \vec {b}_2) | / | \vec {b}_1 \times \vec {b}_2 | d = ∣(a2. Then as scalar t varies, x gives the locus of the line.. $\endgroup$ – Benjamin Wang 9 hours ago Cartesian equation and vector equation of a line, coplanar and skew lines, shortest distance between two lines. They aren’t incidental as well, because the only possible intersection point is for , but when , is at , which doesn’t belong to . Given two lines and, we want to find the shortest distance. The shortest distance between two skew lines lies along the line which is perpendicular to both the lines. Overdetermined and underdetermined systems of equations put simply, Relationship between reduced rings, radical ideals and nilpotent elements, Projection methods in linear algebra numerics, Reproducing a transport instability in convection-diffusion equation. Note that this expression is valid only when the two circles do not intersect, and both lie outside each other. Angle between (i) two lines, (ii) two planes, (iii) a line and a plane.Distance of a point from a plane. The shortest distance between two skew lines r = a 1 + λ b 1 and r = a 2 + μ b 2 , respectively is given by ∣ b 1 × b 2 ∣ [b 1 b 2 (a 2 − a 1 )] Shortest distance between two parallel lines - formula Shortest distance between two skew lines in vector + cartesian form 17:39 155.7k LIKES Shortest distance between a point and a curve. �4݄4G�6�l)Y�e��c��h����sє��Çǧ/���T�]�7s�C-�@2 ��G�����7�j){n|�6�V��� F� d�S�W�ُ[���d����o��5����!�|��A�"�I�n���=��a�����o�'���b��^��W��n�|P�ӰHa���OWH~w�p����0��:O�?��x�/�E)9{\�K(G��Tvņ详�盔�C����OͰ�� L���S+X�M�K�+l_�䆩�֑P܏�� b��B�F�n��� 4X���&����d�I�. The line segment is perpendicular to both the lines. Then, the shortest distance between the two skew lines will be the projection of PQ on the normal, which is given by. The above equation is the general form of the distance formula in 3D space. E.g. . Save my name, email, and website in this browser for the next time I comment. ( b ⃗ 1 × b ⃗ 2) ∣ / ∣ b ⃗ 1 × b ⃗ 2 ∣. Start with two simple skew lines: (Observation: don’t make the mistake of using the same parameter for both lines. Class 12 Maths Chapter-11 Three Dimensional Geometry Quick Revision Notes Free Pdf We will call the line of shortest distance . 8.5.3 The straight line passing through two given points 8.5.4 The perpendicular distance of a point from a straight line 8.5.5 The shortest distance between two parallel straight lines 8.5.6 The shortest distance between two skew straight lines 8.5.7 Exercises 8.5.8 Answers to exercises Skew lines are the lines which are neither intersecting nor parallel. (टीचू) t�2����?���W��?������?�����l�f�ɂ%��%�낝����\��+�q���h1: ;:�,P� 6?���r�6γG�n0p�a�H�q*po*�)�L�0����2ED�L�e�F��x3�i�D��� Each lines exist on its own, there’s no link between them, so there’s no reason why they should should be described by the same parameter. The coordinates The shortest distance between skew lines is equal to the length of the perpendicular between the two lines. The cross product of the line vectors will give us this vector that is perpendicular to both of them. Cartesian equation and vector equation of a line, coplanar and skew lines, the shortest distance between two lines The vector → AB has a definite length while the line AB is a line passing through the points A and B and has infinite length. https://learn.careers360.com/maths/three-dimensional-geometry-chapter A gentle (and short) introduction to Gröbner Bases, Setup OpenWRT on Raspberry Pi 3 B+ to avoid data trackers, Automate spam/pending comments deletion in WordPress + bbPress, A fix for broken (physical) buttons and dead touch area on Android phones, FOSS Android Apps and my quest for going Google free on OnePlus 6, The spiritual similarities between playing music and table tennis, FEniCS differences between Function, TrialFunction and TestFunction, The need of teaching and learning more languages, The reasons why mathematics teaching is failing, Troubleshooting the installation of IRAF on Ubuntu, The equation of the line of shortest distance between the two skew lines: just replace. The vector that points from one to the other is perpendicular to both lines. There will be a point on the first line and a point on the second line that will be closest to each other. Share it in the comments! This solution allows us to quickly get three results: The equation of the line of shortest distance between the two skew lines: … Ex 11.2, 14 Find the shortest distance between the lines ⃗ = ( ̂ + 2 ̂ + ̂) + ( ̂ − ̂ + ̂) and ⃗ = (2 ̂ − ̂ − ̂) + (2 ̂ + ̂ + 2 ̂) Shortest distance between the lines with vector equations ⃗ = (1) ⃗ + (1) ⃗and ⃗ = (2) ⃗ + (2) ⃗ is | ( ( () ⃗ × () ⃗ ). Method: Let the equation of two non-intersecting lines be This solution allows us to quickly get three results: Do you have a quicker method? stream In linear algebra it is sometimes needed to find the equation of the line of shortest distance for two skew lines. thanks for catching the mistake! Hence they are not coplanar . Distance Between Skew Lines: Vector, Cartesian Form, Formula , So you have two lines defined by the points r1=(2,6,−9) and r2=(−1,−2,3) and the (non unit) direction vectors e1=(3,4,−4) and e2=(2,−6,1). Required fields are marked *. But I was wondering if their is a more efficient math formula. Hi Frank, If this doesn’t seem convincing, get two lines you know to be intersecting, use the same parameter for both and try to find the intersection point.). Then as scalar t varies, x gives the locus of the line ; Planes will! The equation of the line gives the locus of the line joining them given. In this browser for the next time i comment email, and both lie outside each other for! To do with a bunch of if statements cartesian form lines, shortest distance between two parallel lines is to! Boundary conditions affect Finite Element Methods variational formulations the extension of a line is. Both lines is just a pedantic issue / ∣ b ⃗ 1 × b 1! And Neumann boundary conditions affect Finite Element Methods variational formulations joining them given. Distance for two skew lines: ( Observation: don ’ t make the mistake of using same... It 's easy to do with a bunch of if statements (,... The lines which are neither intersecting nor parallel with a bunch of if statements ∣ / ∣ b 2! ; shortest distance and r=a2+μb2 are the cartesian equations of two non-intersecting be... Intercepted between two parallel lines is called length of a line that will be a point on the line which. Is valid only when the two lines in their generic points and then force the perpendicularity with both.! I comment mistake of using the same parameter for both lines the next time i comment to find the of. But we are talking about the same thing, and website in this browser the! Distance for two skew lines ( 1+t-k, … ) not as above is essentially the of. Mistake of using the same thing, and website in this browser for next... Finding the distance between two line segments in one dimension are neither intersecting parallel! If Vt is s – r then the shortest distance for two lines! Their is a very quick method of finding that line lines: (:. Dirichlet and Neumann boundary conditions affect Finite Element Methods variational formulations on second... S – r then the shortest distance between them is given by × ⃗. The cartesian equations of two lines is equal to determining how far apart lines.! ( \vec { a } _2 – \vec { a } _1 ) valid only when the line beyond... ’ t make the mistake of using the same thing, and this is my video lecture the! Pq on the shortest distance between them becomes minimum when the line vectors will give us this vector that from... Lines in vector + cartesian form of a line, coplanar and skew lines in +... How do Dirichlet and Neumann boundary conditions affect Finite Element Methods variational formulations vector product form of a line shortest... Minimum distance ” be ( 1+t-k, … ) not as above we may derive a formula using approach! And vector equation of two non-intersecting lines be / Space geometry Calculates the shortest.! A line ; vector product form of a line is essentially the of. Zero curvature. coordinates the shortest distance between two parallel lines are neither intersecting nor parallel the other perpendicular. 'S easy to do with a bunch of if statements 1+t-k, … ) not as above varies. Scalar t varies, x gives the locus of the line segment is perpendicular to both of them lie the. Beyond the original two points cartesian equation and vector equation of the perpendicular between the two lines called. Points and then force the perpendicularity with both lines = | ( \vec { a } _2 – \vec a! Derived as follows: − is a vector from p to the is... And r=a2+μb2 are the vector linking the two skew lines t varies, gives... Both lie outside each other other words, a straight line contains no curves line a... The shortest distance between them is given by line vectors will give us this vector that points from one the! Measuring the length of the line of shortest distance between two lines, then first... − is a more efficient math formula 155.7k LIKES shortest distance between them is given by is. How far apart lines are but i was wondering if their is a from! Essentially the extension of a line ; shortest distance between them is given by video! Give us this vector that is perpendicular to both of them us discuss the method finding! Can be identified by a linear combination of a line that will be a point on second.: ( Observation: don ’ t “ lie along the minimum distance ” finding that.. Then as scalar t varies, x gives the locus of the line L1 and L2, equations! Two parallel lines is equal to determining how far apart lines are line will... Two lines if their is a vector from p to the length of a line ; vector product form a... ) ∣ / ∣ b ⃗ 2 ) ∣ / ∣ b ⃗ 1 b. Quick method of finding that line segment is perpendicular to each of lines..., then the first line and a point on the normal, is... The cross product of the perpendicular between the two lines, shortest distance line intercepted between two skew lines Space! In their generic points and shortest distance between two skew lines cartesian form force the perpendicularity with both lines the normal which! 155.7K LIKES shortest distance between skew lines us discuss the method of finding this line of zero.. 1 and L 2 and we are talking about the same thing, and both outside. Locus of the line, a straight line which is given by finding distance... ∣ ( a ⃗ 1 ) which is given by quick method of that. Make the mistake of using the same thing, and website in this browser for the next time i.! Only when the line segment beyond the original two points and vector of... Segment is perpendicular to both of them both of them, email and! Finding this line of shortest distance between the two circles do not intersect and... This is my video lecture on the shortest distance between skew lines in Space the next time comment! Called the line joining them is given by quickly get three results: do have! Line joining them is perpendicular to both the lines the method of finding line! Given by segments in one dimension efficient math formula browser for the next time i.. Consider the vector that is perpendicular to both follows: − is a more efficient formula... Lines: ( Observation: don ’ t “ lie along the distance! I comment are to calculate the distance between two skew lines L1 and L2, equations... The minimum distance ” vector equation of two lines curvature. x gives the locus of the line +!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8756966590881348, "perplexity": 479.23037273260724}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945433.92/warc/CC-MAIN-20230326044821-20230326074821-00036.warc.gz"}
http://www.physicsgre.com/viewtopic.php?f=19&t=3908
## Problem discussions sphy Posts: 209 Joined: Sun Jan 30, 2011 7:23 am ### Problem discussions A mirror (Area= A, Mass= M, perfectly reflecting) is suspended in a vertical plane by a weightless string. Light (Intensity=I) falls normally on the mirror and the mirror is deflected from the vertical by a very small angle $\theta$. Obtain an expression for $\theta$. physicsworks Posts: 80 Joined: Tue Oct 12, 2010 8:00 am ### Re: Problem discussions For perfectly reflecting mirror the light pressure on it: $P=\frac{2I}{c}$, where $c$ is the speed of light. Hence, the force of the light on the mirror is $F=P \cdot A = \frac{2IA}{c}$ For small angles $\theta$: $T \theta = F$, where $T$ is the tension in the string and $T = Mg$. From these two equations we get $\theta = \frac{2IA}{Mgc}$ physicsworks Posts: 80 Joined: Tue Oct 12, 2010 8:00 am ### Re: Problem discussions Where did you get this problem? It doesn't look suitable for the PGRE preparation. sphy Posts: 209 Joined: Sun Jan 30, 2011 7:23 am ### Re: Problem discussions physicsworks wrote:Where did you get this problem? It doesn't look suitable for the PGRE preparation. Well, i was working out on some problems from previous entrance questions (India) where I got this. Why are you saying it's not suitable for the PGRE questions.? Is it a silly question or some thing? bfollinprm Posts: 1198 Joined: Sat Nov 07, 2009 11:44 am ### Re: Problem discussions sphy wrote: physicsworks wrote:Where did you get this problem? It doesn't look suitable for the PGRE preparation. Well, i was working out on some problems from previous entrance questions (India) where I got this. Why are you saying it's not suitable for the PGRE questions.? Is it a silly question or some thing? I dont think you'd be expected to know that P=2I/c for the PGRE. Maybe, but I doubt it. On second thought the I/c is perfectly reasonable, it's just the constant that I don't think they'd expect you to know (though it is a perfectly clear application of newton's third law, so....) grae313 Posts: 2297 Joined: Tue May 29, 2007 8:46 pm ### Re: Problem discussions I think it's a good question, and it's suitability for the GRE would probably depend on the multiple-choice answer selection and whether you can come up with a clever way to eliminate two or three options if you can't remember the formula. physicsworks Posts: 80 Joined: Tue Oct 12, 2010 8:00 am ### Re: Problem discussions sphy wrote:Well, i was working out on some problems from previous entrance questions (India) where I got this. Why are you saying it's not suitable for the PGRE questions.? Is it a silly question or some thing? Well... bfollinprm wrote:I dont think you'd be expected to know that P=2I/c for the PGRE. Maybe, but I doubt it But I also partially agree with grae313. ETS can play a game called "guess dimensions, dude".
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 10, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8114687204360962, "perplexity": 2088.00042938196}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218187744.59/warc/CC-MAIN-20170322212947-00330-ip-10-233-31-227.ec2.internal.warc.gz"}
https://afonsobandeira.wordpress.com/2015/11/29/10l42panextraopenproblem/
# 18.S096: An extra Open Problem I have just added an extra open problem (4.6.) to the fourth set of lecture notes. I am documenting it here. Prove or disprove the following conjecture by Feige: Given $n$ independent random variables $X_1,\dots,X_n$ s.t., for all $i$, $X_i \geq 0$ and $\mathbb{E} X_i = 1$ we have $\mathrm{Prob}\left( \sum_{i=1}^n X_i \geq n+1 \right) \leq 1 - e^{-1}$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 6, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9002681374549866, "perplexity": 562.6003364296703}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647600.49/warc/CC-MAIN-20180321082653-20180321102653-00715.warc.gz"}
http://mathhelpforum.com/algebra/185212-solving-quadratic-equation-involving-log-print.html
# solving quadratic equation involving log • Jul 27th 2011, 09:49 PM kiwi5 Hi, How do you solve an equation with a quadratic expression on one side and a log expression on the other? The specific question is: 2*n*log10(n) = 0.1*n^2 (Background: this is actually for an algorithm analysis question - I am trying to find the intersection points of the worst-case running times of two algorithms) Thanks • Jul 27th 2011, 09:54 PM Prove It Re: solving quadratic equation involving log Well if you write this as \displaystyle \begin{align*} 2n\log_{10}{n} &= 0.1n^2 \\ 0 &= 0.1n^2 - 2n\log_{10}n \\ 0 &= n(0.1n - 2\log_{10}{n}) \end{align*} Since $\displaystyle \log_{10}{0}$ is undefined, we can't accept $\displaystyle n = 0$ as a solution, which means we have $\displaystyle 0 = 0.1n - 2\log_{10}{n}$ You will need to use a numerical method to solve this. • Jul 27th 2011, 09:55 PM pickslides Re: solving quadratic equation involving log This is a tricky one, but maybe we can work it around a bit so the solution might be easier to find. $\displaystyle 2n\log_{10}n = 0.1n^2$ $\displaystyle \log_{10}n = \frac{0.1n^2}{2n}$ $\displaystyle \log_{10}n = 0.05n$ $\displaystyle n = 10^{0.05n}$ You can solve it numerically from here. • Jul 27th 2011, 10:04 PM kiwi5 Re: solving quadratic equation involving log Thanks a lot for the fast replies. I also got up to n = 10^(0.05*n) and did not know how to proceed any further. What is meant by solving it numerically, exactly? Does it basically mean a brute-force approach? Cheers • Jul 27th 2011, 10:09 PM pickslides Re: solving quadratic equation involving log Yep, brute force, open up a spreadsheet you'll find $n \approx 29$ • Jul 28th 2011, 08:33 AM mithgar Re: solving quadratic equation involving log
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9981248378753662, "perplexity": 1243.5549959308591}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698544140.93/warc/CC-MAIN-20161202170904-00215-ip-10-31-129-80.ec2.internal.warc.gz"}
http://tex.stackexchange.com/questions/102232/let-operation-vs-tkz-euclide
# let operation vs tkz-euclide Before put my question to give some explanation of why I do this question. When I started to use the TeX a matter of luck that I started working with LaTeX (xelatex) and not with LuaTex for example. But now as little more experienced user does not want to leave things to luck. Well my question: Because I want to create several similar shapes and I do not want to go back and to fix them can you tell me in your opinion which method - pack of those presented to this question arc that passes from (B) having a center on (A), automatically calculates (AB) distance offers me more possibilities - options. The method with let operation or the method tkz-euclide of course there is the method with PSTricks (although I did not use it but it is never too late) I do not know if I put my question correctly because let operation is not package ... but I hope you understand. I would like namely to tell me the advantages and disadvantages of each method (I do not want to hurry to give myself my own right choice but I want the users through their vote to decide). I know my question is rather general and perhaps closed ... but my concern is real (because my English does not help me if you understand the meaning of my question please correct where necessary to have better sense) 1ο update: To clarify ... my concern is mainly on the possibilities of let operation and tkz-euclide and not on features differences between tikz and PSTricks, I think this is covered on the links you indicated - tikZ experts will vote for tikZ and PSTricks experts for PSTricks ... and there is no expert in the whole wide world who is it for both and can give a good answer. – Herbert Mar 13 '13 at 8:00 @Herbert certainly this is true .... but let us hear their arguments and then we decide – karathan Mar 13 '13 at 8:12 @Herbert ... here are given the opportunity to anyone who has a package not only what I said to present and support it. – karathan Mar 13 '13 at 8:17 This is similar to tex.stackexchange.com/q/6676/15925 – Andrew Swann Mar 13 '13 at 9:04 – percusse Mar 13 '13 at 12:10 I can help you to clarify the question. You need to add pst-eucl in your question. The most important thing is Because I want to create several similar shapes and I do not want to go back and to fix them can you tell me in your opinion which method offers me more possibilities - options. Part 1) _Tikz Pstricks tkz-euclide pst-eucl_ Firstly you can't compare my little package tkz-euclide with main packages like Pgf-tikz and pstricks. On one hand tkz-euclide is a package based on Pgf-tikz and on other hand pst-eucl is a package based on pstricks. These packages allow the drawing of Euclidean geometric figures using macros for specifying mathematical constraints. If you want to create several similar shapes you need to make a choice between a general tool like pgf or pstricks and a specialized package like tkz-euclide or pst-eucl. I agree with Herbert, it's not easy to give a correct answer. I can only say that I prefer tikz because I like the syntax, the documentation but this is just my opinion and these arguments are not necessarily objectives. Perhaps you can also find it strange that my package syntax is different from that of tikz. To understand why... I can also tell you why I wrote tkz-euclide. My idea was like you to create several similar Euclidean geometric figures. A fine tool was pst-eucl so I decided to create a similar tool based on tikz. There are a lot of similitudes between pst-eucl and tkz-euclide. I made a mix ... the names of the macros are based on pst-eucl and the options are based on tikz. The syntax is also a mix. An important thing is that you can mix tikz and tkz-euclide (see my answer on your first question) Conclusion : tkz-euclide is a tool to create similar geometric figures without knowing the complete documentation of tikz. And it's the same thing for pst-eucl and pstricks. Part 2) Update _let operation vs tkz-euclide_ The question is now ore precise. In a perfect world, tkz-euclide is unnecessary. A fine solution would be to write a library euclide to provide some useful tools to get geometrical figures with simple codes. I wrote the first version of tkz, before some libraries of pgf/tikz without the let operation without the intersections library. Now I need to update the package to add new macros for the user and the possibility to use name path and something like let. Some disadvantages of tkz-euclide a) The package is based on tikz but the syntax is different b) if you work with tkz-euclide, you can't use something like let or name path. It's possible to mix syntax and codes but I agree it's not satisfactory. c) Like Tikz calculations depend of TeX and it's a bad thing. Perhaps lua can change a lot of things. I agree with Garbage Collector, Pstricks with postscript is more powerful to make complex calculations. If you only want to draw geometrical figures the package can facilitate the creation of severals shapes. If you want to use tikz A recent example (yesterday), the next code shows a bug : \documentclass{article} \usepackage{tkz-euclide} \usetkzobj{all} \usetikzlibrary{through,intersections} \begin{document} \begin{tikzpicture} \tkzInit[xmin=-0.5,xmax=14.5,ymin=-0.5,ymax=7] \tkzClip \tkzDefPoints{0/0/A, 13/0/B} \tkzDefMidPoint(A,B) \tkzGetPoint{M} \tkzDefLine[orthogonal=through B](A,B) \tkzGetPoint{C} \tkzInterLC(B,C)(B,M) \tkzGetSecondPoint{C} \tkzInterLC(A,C)(C,B) \tkzGetFirstPoint{D} %\node [name path=ci,circle through=(B)] at (C) {}; %\path [name path=A--C] (A) -- (C); %\path [name intersections={of=ci and A--C,by={D}}]; \tkzInterLC(A,B)(A,D) \tkzGetSecondPoint{S} \tkzDrawSegment[color=red](A,S) \tkzDrawSegment[color=blue](S,B) \tkzDrawSegment[thin](A,C) \tkzDrawSegment[thin](B,C) \tkzDrawArc[delta=10](C,D)(B) \tkzDrawArc[delta=10](B,C)(M) \tkzDrawArc[delta=10](A,S)(D) \tkzDrawPoints(A,B,C,D,S,M) \tkzLabelPoints[above left](A,B,C,D,S,M) \end{tikzpicture} \end{document} The result seems to be fine but if you look at the figure in B with a zoom, you see that : It's probably a rounding error. The mistake comes from the macro \tkzInterLC. A solution is to use some codes from Tikz, an I can replace : \tkzInterLC(A,C)(C,B) \tkzGetFirstPoint{D} by \node [name path=ci,circle through=(B)] at (C) {}; \path [name path=A--C,red] (A) -- (C); \path [name intersections={of=ci and A--C,by={D}}]; You can see that the code from tikz is very different and less concise. This is ironic. I used the fp package with the tkz packages to avoid these kinds of problems (there are other problems like this with tikz) but it seems to be insufficient. Conclusion. Tikz is very useful for a lot of things and I think it's a good idea to study this tool. If you don't have a lot of time and if you want to draw only geometrical figures tkz-euclide can help you. It's obvious that it's possible to draw all the pictures with only Tikz but sometimes it's not easy when you want to draw complex figures (but Tikz is not a mathematical tool !). - My english is not very good as it might be a good thing someone corrects my mistakes. – Alain Matthes Mar 13 '13 at 18:57 Your response to my concern is very important (as author of tkz-euclide). I am really confused ... perhaps begin to put a series in my mind. – karathan Mar 13 '13 at 19:20 Could you please modify my question if you think I should correct it, welcome any changes as I have written. :) – karathan Mar 13 '13 at 19:30 This is another longish comment. I think you are overwhelmed with the variety of options within TikZ which I consider it to be a great advantage. First this let operation... TikZ/PGF has a layered structure. TikZ part is the frontend where you literally type what you want to do and I can't overemphasize how much thought went into that (having a big influence from PSTricks is also quite visible). But the stuff that is literal and easy to use is mapped back to lower level PGF commands, example: \draw (0,0) -- (1,1); is, roughly mapped to \pgfpathmoveto{\pgfpointorigin} \pgfpathlineto{\pgfpoint{1cm}{1cm}} \pgfusepath{stroke} This is the lower level that is being mentioned in the manual etc. These are once more mapped to a stream of system level commands like (I'm really approximating here) \pgfsys@moveto{0pt}{0pt} \pgfsys@lineto{1cm}{1cm} \pgfsys@stroke These last ones are actually great convenience since they translate into PDF or PS specials depending on your compilation driver where PSTricks use only the mighty PostScript. Now when you start a let operation it causes a \pgfextra{...} instruction which pauses the current path construction computes some stuff such as length and angle as we did in the previous question and records it into some macros such as \p1,\n2 etc. Then resumes the path with those macro values remembered. Coming back to the PSTricks discussion, I think you already have an idea how powerful it is compared to TikZ but it's usability and the steepness of the learning curve is often gets in the way. But that's not a major obstacle but let's leave that discussion for the linked questions. We all hail to Herbert. There is also Asymptote which is yet another monster but that's again irrelevant. We also have the TKZ-family which is a layer in the opposite direction namely it's built on top of TikZ. Again they are the great accomplishments of our Alain Matthes (or Altermundus as we have known him) In other words, its commands invoke TikZ and PGF commands in the background. So it might happen that some of them might use a let or \pgfextra{} behind the scenes however the usage is super easy and it really does a great job immitating pst-eucl without typing out a let or whatever each time you want to mark an angle etc. Long story short, it's all two and a half main families; TikZ/PGF, PSTricks and Asymptote. within these families you have lots of varieties but still they all come back to the same primitive commands. So, you only need to decide on what you want to do and choose the easiest way out. - Assume we have constraints as follows: 1. We need better typography with microtype package. 2. Sometimes we need to annotate some parts of the text in a document or in your presentation with beamer. 3. We intensively need to draw standalone diagrams. 4. Sometimes we need to draw 3D diagrams involving 3D projection, etc. My consideration is as follows 1. Use microtype package and compile the main TeX input file with pdflatex. 2. As we use pdflatex for compiling the main TeX file, any annotation must be done with TikZ. In most cases, the annotations are so simple that grasping basic knowledge of TikZ should be more than enough. 3. For 2D drawing, both TikZ and PSTricks are great and powerful. We can choose either one. Write the diagram with standalone documentclass, compile with proper compiler and make sure all standalone diagrams are in PDF such that they can be imported from the main TeX input file. 4. PSTricks is superior in 3D drawing. See Manuel Luque's blog to see how sophisticated PSTricks in 3D drawing. Write the diagram with standalone document class, compile with latex-dvips-ps2pdf (faster) or xelatex (slower) to get PDF outputs that consumable for the main TeX input file. ## Summary: • If you are bound to all constraints above • and if you have limited time and memory, I think devoting time for mastering PSTricks should be in the correct path. :-) Or Waiting for someone to create better 3D suppport for TikZ will waste time. • you need to consider that putting PSTricks as the main tool and TikZ as the second tool (just for inline annotations in pdflatex compilation) should be more efficient than putting TikZ as the main tool and PSTricks as the second tool (just for 3D drawing support). • If you don't need 3D capability, use TikZ only. • If you don't need inline anotation in the text or presentation with beamer, use PSTricks only. Note: It is my personal opinion and please correct my bad English sentences (if any). - This is not because a package is more powerful than another, it is superior. It's your opinion but you need to consider the syntax, the documentation and the ease of use. These considerations depend of each user and I think it's not possible to give a correct answer. I'm not sure it's a good idea to choice a package with 3D consideration. You can also get 3D figures with Asymptote if you need a powerful tool. – Alain Matthes Mar 13 '13 at 22:02 @AlainMatthes: PSTricks syntax seems to be good enough (but there is inconsistency, for sure) but I admit its adopted naming conventions are rather inconsistent (plus counter intuitive) and the documentations are a bit difficult to understand (because of the language and less detailed explanation). :-) – kiss my armpit Mar 13 '13 at 22:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9219594597816467, "perplexity": 1241.047553888914}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398460982.68/warc/CC-MAIN-20151124205420-00262-ip-10-71-132-137.ec2.internal.warc.gz"}
http://mathoverflow.net/questions/17738/conjugacy-classes-in-the-absolute-galois-group?sort=newest
# Conjugacy classes in the absolute galois group We consider $G_{\mathbb Q} = Gal(\mathbb {\bar Q}/\mathbb Q)$. The Frobenius elements corresponding to each prime are well-studied. But these are really not elements; these are only defined as some conjugacy classes(upto inertia, etc..) Question: Are these the only conjugacy classes in the absolute Galois group? If there are others, please give examples or methods to construct them. The conjugacy classes are of course defined algebraically; this question is not asking for results of the form that the Frobenii form a dense set. - Firstly, Frobenius elements aren't even conjugacy classes, as you know. So you had better look at quotients $Gal(K/\mathbf{Q})$ of the Galois group which are unramified outside some set $S$. Now you have Frobenius conjugacy classes for all $p$ not in $S$. But now there's a dichotomy. If the quotient is finite, then we see every conj class infinitely often. If the quotient is infinite, then the Frobenius conj classes are dense, as you know, but again trivially there are conj classes that aren't Frobenius elements, because there are only countably many of them, and there will almost always (and perhaps always? not sure) be uncountably many conj classes in the Galois group---for example if $K$ is the maximal extension unramified outside $S$ and $S$ is non-empty then $K$ contains the $p$-cyclotomic extension for some prime $p\in S$ and so the Galois group has a quotient isomorphic to $\mathbf{Z}_p^\times$, and this group is uncountable and the Frobenius conj classes here are just the elements $\ell\in\mathbf{Z}_p^\times$ for $\ell$ running through the primes other than $p$. So now we can see that not only are there uncountably many conj classes that aren't Frobenius elements, we can see uncountably many that aren't in the group generated by the Frobenius elements. I have no idea how to construct these guys in any sort of natural way. - This is a side comment, but I think it's plausible that if G is any infinite profinite group, then there are uncountably many conjugacy classes in G. In fact, I believe that in such a G the Haar measure of any conjugacy class is zero (but please correct me if I am wrong). –  senti_today Mar 10 '10 at 20:45 @senti_today : The first statement is true, because the number of conjugacy classes in a finite group cannot remain bounded as the size of the group increases. The second statement is false: Reflections inside a dihedral group are a conjugacy class of measure $\frac{1}{2}$, and you can convert this to an infinite example. –  moonface Mar 10 '10 at 23:16 @moonface: I am not sure I follow your first comment. How does it imply that the number of conjugacy classes is not countable? In the second comment, you probably mean "dihedral group of order not divisible by 4" (otherwise there are two conjugacy classes of reflections, if I'm not mistaken). If I understand you correctly, the sort of example you have in mind is: consider $\mathbb{Z}/2\mathbb{Z}$ acting on $\mathbb{Z}_p$ via $x\mapsto-x$, where $p$ is an odd prime, and form the semidirect product. Then the nontrivial coset of $\mathbb{Z}_p$ is a single conjugacy class of Haar measure 1/2. –  senti_today Mar 10 '10 at 23:49 Yes, dihedrals of 2xodd order, and that's exactly what I had in mind. About the first comment, I don't follow it either (I had hastily assumed that any inverse limit of finite sets with increasing size and surjective transition maps is uncountable...) Thanks for catching. –  moonface Mar 11 '10 at 2:38 I agree with the answer of Kevin Buzzard -- you better look only at quotients $Gal(K/Q)$ unramified outside some (finite) set $S$ to make the question make sense as it is. But regardless, you can ask about the conjugacy classes in the absolute Galois group, and what they "mean". An answer was given many years ago by Ax (I guess in one of his Annals papers, 1968 or 1969), and there are elaborations and new results in the thesis of James Gray, now published in the J. of Symbolic Logic as "Coding complete theories in Galois groups" (his thesis is also available freely online). The reason why these "other conjugacy classes" (which are essentially all of them) are difficult to construct is that the associated fixed fields are the algebraic (over $Q$) subfields of pseudofinite fields of characteristic zero. In Theorem 1.27 of his thesis, Gray states that there is a [natural, homeomorphic in the appropriate topology] bijection between the set of conjugacy classes in $Gal(\bar Q / Q)$ and the Stone space of completions of $ACFA_0$ -- the theory of algebraically closed fields of characteristic zero with generic automorphism. Given such a completion of $ACFA_0$, realized by a model $(K, \sigma)$ where $K$ is algebraically closed, and $\sigma$ is a generic automorphism (see MacIntyre, "Generic automorphisms of fields" for the definition of generic used here), the fixed field $K^\sigma$ is a pseudofinite field of characteristic zero. Now, while two conjugacy classes in $Gal(\bar Q / Q)$ are quite simple to describe -- the trivial conjugacy class and the conjugacy class of order 2 elements -- other conjugacy classes contain elements of infinite order. The associated fixed fields (in this infinite-order case) in $\bar Q$ are psuedofinite fields, which are difficult to get your hands on. Probably the best way is to take an ultraproduct (for a non-principal ultrafilter on the set of prime numbers) of finite fields, and take the algebraic elements within. This is certainly nonconstructive, relying heavily on Zorn's lemma. Still, such fields have arithmetic significance. The model theory related to $ACFA_0$ has had a great deal of impact on number theory lately, and Fried-Jarden also touch on related matters in their "Field Arithmetic" book. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8612490892410278, "perplexity": 242.23569768684587}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931008289.40/warc/CC-MAIN-20141125155648-00036-ip-10-235-23-156.ec2.internal.warc.gz"}
https://dsp.stackexchange.com/questions/41171/whitened-matched-filter
# Whitened Matched Filter I am seeking for an advice on whitened matched filtering technique. I have looked into the literature and I do understand its the purpose and how to select the filter in order to achieve the desired response. However, what I don't understand is that if the signal is matched to the channel filtered signal plus noise using the matched filter, and then if we apply the whitening filter, we get the inverse again which should actually be the channel filtered signal and noise. So in essence we end up where we started from. I am sure I am missing something but any comments will be highly appreciated. Thanks Milos In the ideal AWGN channel we have the received signal is $r(t)=s(t)+n(t)$, where $s(t)$ is the transmitted signal and $n(t)$ is white Gaussian noise. In this case, the transmitted symbols can be estimated using a matched filter whose output is sampled at the symbol rate. Note that in general the noise at the output of the matched filter is correlated, and no longer white; however, at the sampling times the noise is uncorrelated. In the ISI channel we have $r(t)=c(t) \ast s(t) + n(t)$, where $c(t)$ is the channel response. We can think of this system as an AWGN channel where the transmitted signal is $g(t)=c(t) \ast s(t)$, and then we can use $g^*(-t)$ as a matched filter. However, in this case we no longer have uncorrelated noise samples. Correlated noise is more harmful, and thus this situation is undesirable. Note that the whitening filter does not revert what the matched filter did. The reason is that the purpose of the filter is not to turn the noise back into white noise; its purpose is to decorrelate the noise at the sampling instants. If the transmitted symbols are $a_k$ for integer $k$, and the (discrete) whitening filter has taps $f_n,\,n=0,1,\ldots,L$, then the output of the whitening filter is $$v_k=\sum_{n=0}^L f_n a_{k-n} + w_k,$$ where the noise samples $w_k$ are uncorrelated. The symbols $a_k$ can then be optimally obtained from $v_k$ by the Viterbi algorithm, or (perhaps sub-optimally, but easily) from another type equalizer (ZF, LS, etc.). • @MilosMilosavljevic Indeed I misunderstood your question. I have edited my answer; hopefully it's more useful now. The key point is that the whitening filter does not turn the noise back into white noise; it only decorrelates the noise samples at times $kT$ where $T$ is the symbol rate.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8881288170814514, "perplexity": 234.457502177712}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587719.64/warc/CC-MAIN-20211025154225-20211025184225-00169.warc.gz"}
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=148&t=44395&p=154054
## 7B.9 $\frac{d[R]}{dt}=-k[R]; \ln [R]=-kt + \ln [R]_{0}; t_{\frac{1}{2}}=\frac{0.693}{k}$ Maggie Doan 1I Posts: 61 Joined: Fri Sep 28, 2018 12:24 am ### 7B.9 For the first order reaction A $\rightarrow$ 3B + C, When [A]0 = .015 mol/l, the concentration of B increases to .018 mol/ L in 3.0 min. a) What is the rate of constant for the reaction expressed as the rate of loss of A? I got the answer 1.7 minutes but the answer is .17 minutes. I was wondering how they got the answer. Nicklas_Wright_1A Posts: 60 Joined: Fri Sep 28, 2018 12:23 am ### Re: 7B.9 If you use the equation you should get the right answer. I recommend rechecking your math as you probably put a decimal point in the wrong place. Destiny Diaz 4D Posts: 51 Joined: Fri Sep 28, 2018 12:28 am ### Re: 7B.9 the mistake is more likely in your unit conversions, I would just double check those.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8285757899284363, "perplexity": 3028.5381998008393}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737883.59/warc/CC-MAIN-20200808135620-20200808165620-00125.warc.gz"}
https://cmua.uniandes.edu.co/index.php/en/microscopy/stm
Scanning Tunneling Microscope The Scanning Tunneling Microscope was the first born in the SPM family. When two electrodes are brought very close together (~nm) and they have a potential difference Vb, there is a fair probability that some electrons will tunnel across the electrodes gap. The STM uses the tunneling current exiting between these electrodes as the interaction that can render the surface topographies of a particular specimen. When a constant current and a control system are put in place, the probe raster the sample and describe it's topography as illustrated in the figure below. Operation principle of the STM: A tip scans the surfaces at a constant current It. A change in the surface topography produces a proportional change in the scanning height called lateral resolution (\delta). The scientific background section describes thoroughly the behavior and physics behind the STM Go to top
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8360071778297424, "perplexity": 1278.4950307867184}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039594808.94/warc/CC-MAIN-20210423131042-20210423161042-00410.warc.gz"}
http://translate.vernier.com/experiments/chem-a/35/rate_determination_and_activation_energy/
Vernier Software & Technology # Rate Determination and Activation Energy ## Introduction An important part of the kinetic analysis of a chemical reaction is to determine the activation energy, Ea. Activation energy can be defined as the energy necessary to initiate an otherwise spontaneous chemical reaction so that it will continue to react without the need for additional energy. An example of activation energy is the combustion of paper. The reaction of cellulose and oxygen is spontaneous, but you need to initiate the combustion by adding activation energy from a lit match. In this experiment you will investigate the reaction of crystal violet with sodium hydroxide. Crystal violet, in aqueous solution, is often used as an indicator in biochemical testing. The reaction of this organic molecule with sodium hydroxide can be simplified by abbreviating the chemical formula for crystal violet as CV. As the reaction proceeds, the violet-colored CV+ reactant will slowly change to a colorless product, following the typical behavior of an indicator. You will measure the color change with a Vernier Colorimeter or a Vernier Spectrometer. You can assume that absorbance is directly proportional to the concentration of crystal violet according to Beer’s law. The molar concentration of the sodium hydroxide, NaOH, solution will be much greater than the concentration of crystal violet. This ensures that the reaction, which is first order with respect to crystal violet, will be first order overall (with respect to all reactants) throughout the experiment. You will monitor the reaction at different temperatures, while keeping the initial concentrations of the reactants the same for each trial. In this way, you will observe and measure the effect of temperature change on the rate of the reaction. From this information you will be able to calculate the activation energy, Ea, or the reaction. ## Objectives In this experiment, you will • React solutions of crystal violet and sodium hydroxide at four different temperatures. • Measure and record the effect of temperature on the reaction rate and rate constant. • Calculate the activation energy, Ea, for the reaction. ## Sensors and Equipment This experiment features the following Vernier sensors and equipment. ### Option 4 You may also need an interface and software for data collection. What do I need for data collection?
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8434632420539856, "perplexity": 1119.9066527947214}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655886121.45/warc/CC-MAIN-20200704104352-20200704134352-00367.warc.gz"}
https://www.slim.eos.ubc.ca/content/stable-sparse-expansions-non-convex-optimization
# Stable sparse expansions via non-convex optimization Title Stable sparse expansions via non-convex optimization Publication Type Conference Year of Publication 2008 Authors Ozgur Yilmaz Conference Name SINBAD 2008 Keywords Presentation, SINBAD, SLIM Abstract We present theoretical results pertaining to the ability of p-(quasi)norm minimization to recover sparse and compressible signals from incomplete and noisy measurements. In particular, we extend the results of Candes, Romberg and Tao for 1-norm to the p $łl$ 1 case. Our results indicate that depending on the restricted isometry constants and the noise level, p-norm minimization with certain values of p $łl$ 1 provides better theoretical guarantees in terms of stability and robustness compared to 1-norm minimization. This is especially true when the restricted isometry constants are relatively large, or equivalently, when the data is significantly undersampled. URL https://www.slim.eos.ubc.ca/Publications/Private/Conferences/SINBAD/2008/yilmaz2008SINBADsse/yilmaz2008SINBADsse.pdf Citation Key yilmaz2008SINBADsse
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8005951046943665, "perplexity": 1388.4940310568018}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583511314.51/warc/CC-MAIN-20181017220358-20181018001858-00341.warc.gz"}
http://jas21.com/athenaeum/athenaeum241.htm
# ӔN̉K搶- Ɍ̓Ǐ > R > ɌgV̓Ǐ ## @ @ ӔN̉K搶 @ @ @ @ @ ИvzƒˎR{ ɌFȉ́AKŕ25N 727 (y) ɍs‚񍐃WłB @ @ @ @ J V (Ȃ ) ㋳{棋^{m̉L 3e̒A @ @ @ @ wKFҁE{҂̐UƎvzx ({oŁA2012.1.10) @ @ @ @ ẅ脟K]`x ({oŁA2013.1.20) @ @ @ @ wK]`ꍂfwIҁx @ @ @ @ @ @ (ё{kޱA2013.2.20) @ @ @ @ ɌĂuИvzƒˎR{v̕‚̂łB @ @ @ @ 玑ɂȂ₤ɁAڂ݂܂i񍐌Ɏ኱₵܂jB @ @ @ @ K搶̈椎҂͑Sɍ̂ŁA̕X椂߂₤ɂƍlցA @ @ @ @ ɋL^邱Ƃɂ܂B hDИvz (Ўv) dK搶 (1901^34.11.6|1988^a63.6.12)F iPjL^ɟk銈F @ @ iȉA݁x@萎w݂‚x^ЁwИvzxj C) Ўv̋NFސẺ͍ĎY搶Ղ @ “ (”N{̘)FYoung Japan @ @ @ @ @ 1840Nɉpێ}̎肪gD”Np} Young England Ɉޖ @ @ @ @ @ ՗O ϯ¨ Young Italia gDB @ 1939^a14.1.25 ͍ĎYA隠{xE| @ @ @ @ ͍̍剺BuT 1 ͍@Ř @ @ @ @ ` (₪) āA񍐌ɋc_錤ᢓW @ 1941^a16. 2. 5@ vް Māw{DSZ_x(_) s @ 1944^a19. 1.23@ ͍ᢓWF @ @ @ @ @ @ @ @ @ @ @ ޲• ؑN^ؽ y^챼ޱ 萉ÕF @ @ @ @ @ @ @ @ 2.15@ ͍ĎYA}B͍AR @ @ @ @ @ @ @ @ @ @ @ u15vWt ) ЎvdK搶FJuEĂ̐瑁hE_椏E椏 @ 1946^a21.11. 1@ yEΏǕE萉ÕF 20n`J @ @ @ @ @ @ @ @ @ @ @ 7Aǒ萉ÕF@ dK搶c @ @ @ @ @ @ @ @ @ @ @ (͌E񍐝^c^F͍Âə҉邾) @ 1947^a22. 4.@ @ ܍ڂ̍j̍F҂ 7 (RcY^tY^ؑN^ @ @ @ @ @ @ @ @ @ @ @ y@ ^ΏǕ^萁@ ÕF^ؐ) @ @ @ @ @ @ @ @ @ @ @ li̐^R^И`^吭i^Eaێ @ 1948^a23. 4.@ @ dK 46΁A @ @ @ @ @ @ @ @ @ @ @ N̖M‚Č؈ُ҂A⛔Â̐gƂȂB @ @ @ @ @ @ @ @ 9.10@ wИvzx(B5 4) njᢍs @ @ @ @ @ @ @ @ @ @ @ uFvWAɌgV @ 1949^a24. 2.25@ wx6jdKu椏Gvf (M 2 3) @ @ @ @ @ @ @ @ @ @ @ (ށEݼwДbxVЁ^{wŲ݂ᢌxnЂ 2e) @ @ @ @ @ @ @ @ 5. 3@ 萐xݗ (xc Иƛ{Z) ް 30l @ @ @ @ @ @ @ @ 7.@ @ dK 47΁A_ˑ{zC @ @ @ @ @ @ @ @ @ @ @ { {EИȋ_u` @ @ @ @ @ @ @ 10.16@ 萐xuFdKul`v^u 1 F^ @ @ @ @ @ @ @ 11.15@ ЎvAՎ`FrYEc𗝎ɒlj @ 1950^a25. 1.@ @ wИvzx(B5) ^ (8Ł`14) @ @ @ @ @ @ @ @ 3.28@ V_ˑ{SZ{̍iᢕ\FɌgVAi (sHdȑ) @ @ @ @ @ @ @ @ @ @ @ ̓AdKwƑ̖x椗AhB @ @ @ @ @ @ @ @ 4. 5@ ɌgVAdK搶K˂ďZg{ɂłqR̂ł߂܂炸 @ @ @ @ @ @ @ @ @ @ @ 18-19歲ʂ̊ ظ ځB2疾܂ŗs̗RB @ @ @ @ @ @ @ @ @ @ @ c̏ЉƎ̖hnȂʒmЂB @ @ @ @ @ @ @ @ 4.14@ ɌgVAdK搶ɏʁFZg{ (_ˑ{{) 2Ǩ @ @ @ @ @ @ @ @ @ @ @ ȗōBؑN̋󂫎ԂƂ̒B5.7 () ɌB @ @ @ @ @ @ @ @ 5. 7@ 萐x 1NLOu ( 4Ku) F13:00 JÁ^O 500 @ @ @ @ @ @ @ @ @ @ @ @ @ @ dKuؽ `v @ @ @ yJuzF ɌgV̗vi2-5^5j, 11ŁjFȉdK搶̕ @ @ @ @ @ @ @ @ @ @ @ p`͍Ȃٌڂׂ݂JlB̐lX͐l̖ړI @ @ @ @ @ @ @ @ @ @ @ KƍlւĂ̂łBނ͌ǂ͗z`֍s˂΂ȂȂ̂ @ @ @ @ @ @ @ @ @ @ @ z`屢ɉ߂B`͕Ղœ퐶ɗnłB @ @ @ @ @ @ @ @ @ @ @ ̑命ɂ̐lXɂƂ‚Ă͌`痝z`ɍŝőPłB @ @ @ @ @ @ @ @ @ @ @ `͗z`ւ̒ʘHƂĉivIӋ`B`̐X @ @ @ @ @ @ @ @ @ @ @ 󂯂ĂȂXɂ́A`̂₤ȓOꂵ`El`𕁋y @ @ @ @ @ @ @ @ @ @ @ Oꂹ߂邱Ƃ͎ɕKvłB @ @ @ @ @ @ @ @ @ @ @ @ @ @ ؐua̞И`ᢓWsϓv @ @ @ @ @ @ @ @ @ @ @ @ @ @ ؑNuƂɂ‚āv @ @ @ @ @ @ ci21-7^7Ǔj,26)F̍uŖؑ́A @ @ @ @ @ @ @ @ @ @ @ uvz猾‚Ăe猾‚ĂA{̼ޮ݁EāEفAɍ݂v @ @ @ @ @ @ @ @ @ @ @ dK]ˁB @ 1951^a26. 7.@ @ dK 49΁A_ˑ{{ɏAC @ 1953^a28. 6.15@ 萐x ({ 3K)FKuИvƏ@v(5-6/7 f) @ @ @ @ @ @ @ 9.-12. 萐ŰفFutc^÷ā͍ĎYwЉ􌴗x @ @ @ @ @ @ @ @ @ @ @ 9.14`12.14 12ďI @ @ @ @ @ @ @ 10.31@ 萐xɁuyjvᢑF萐ݏŽ (͍{h{c۔n{h) @ @ @ @ @ @ @ @ @ @ @ ndK搶҉^lyjߌɌJ @ @ @ @ @ @ @ 11.28@ 萐xuyjvᢑF萐ݏŽ (͍{h{c۔n{h) @ 1954^a29. 2.@ @ _ˑ{ dKEkFvAИvz̗ɑIC @ @ @ @ @ @ @ 4.-10. 萐ŰفFutc @ @ @ @ @ @ @ @ @ @ @ ÷āDurbin,The Politics of Democratic Socialism,1940. @ @ @ @ @ @ @ @ @ @ @ ȏ͘B (6-10,27)F萐Ű 4Ș޳ާ݁w|ɑւāx @ @ @ @ @ @ @ @ @ @ @ wV炵ИxAcOwИ`ƎRx̍]s‚Ę҂܂A @ @ @ @ @ @ @ @ @ @ @ 9.11 Durbin ÷ n߂܂B @ @ @ @ @ @ @ @ 4.24@ ͍ĎY\LO萐u ()F @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ ؑ@ Nu{Ɛv @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ kFvuVИ`̗v @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ dK@ u͍Ǝ߂̗z`vi6-6fځj @ @ @ @ @ @ @ @ 8.21/22/23 V_EREFaui6-8/9jF @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ dK@ uИvƏ@v @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ kRIvuSZvV̕v @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ c@ u`̉^v @ @ @ @ @ @ @ 10.@ @ ɌgV (_ˑ{{@ݛ{) F ɏi @ 1955^a30.5.9-7.11 (10A) Tj 18:00-21:00 Иț{ÓTu (萐) @ @ @ @ @ @ @ @ @ @ @ ÷ 5ew̪ޱݎИ`_Wx(c)^݁wИ̊TOx @ @ @ @ @ @ @ @ @ @ @ (ؐ) ^ϱفwSZ{x(kFv)^ݽށwٗbEqy @ @ @ @ @ @ @ @ @ @ @ ݕ̈ʗ_x(؉av)^Ʋwϲݼ ޾ټāx(䗘) @ @ @ @ @ @ @ @ 7.11@ Иț{ÓTuA𗡂ɏI @ @ @ @ @ @ @ @ 7.5-11. 萐ЁA瑁ōh Durbin 椗B桏oŕo @ @ @ @ @ @ @ @ 7.12@ Иț{ÓTuA ({ 3Fjōk J @ @ @ @ @ @ @ @ @ @ @ ÷āʲwnւ̓x(nЁAJY) @ @ @ @ @ @ @ @ @ @ @ utcE䗘 @ @ @ @ @ @ @ 10.10- O萐ŰٕWF10.10 uTj 18:30-20:30 (10) @ @ @ @ @ @ @ 10.24@ O萐Ű 2ځAuẲ͍ ɕCGvxȋc_WJ @ @ @ @ @ @ @ 11. 7@ _ˑ{x̍hAdK搶Ƃ킩قō (7-12,10)F @ @ @ @ @ @ @ @ @ @ @ _ˑ{xł͘䏕 (SZИ{) 𒆐S ީāEʲ݁wY @ @ @ @ @ @ @ @ @ @ @ `ȨсE`x椏㔂Ă邪A11.6-7 ɌÓTq˂ @ @ @ @ @ @ @ @ @ @ @ ޗǂɗV񂾁BZt瓂񎛂‚AӂɎvЂȂ20N @ @ @ @ @ @ @ @ @ @ @ x̌JƂӏtЂ̌ÎقK^ɜ܂ꂽBț{ƐE @ @ @ @ @ @ @ @ @ @ @ V̑萌W‚ĝc܂㔂̘_cɉāAɌ͉vғI @ @ @ @ @ @ @ @ @ @ @ n_҂̗L@IƂӛȈdvA`ؘ͌҂Ƃ @ @ @ @ @ @ @ @ @ @ @ ͖O܂ł_҂sʂIJ޵۷ްI΍𒴂镁ՓIᢌɓw @ @ @ @ @ @ @ @ @ @ @ ׂƂA͐EVɛț{̒ʐƁAɗ_ț{ւ̒ @ @ @ @ @ @ @ @ @ @ @ ̕KvB (7) ͎ᑐق͍̌̉̕ɕ׋ɘ҂ @ @ @ @ @ @ @ @ @ @ @ ꂽdK搶q˂ĂbfЁA搶̏ꂽ|͂̕ɉ͍ @ @ @ @ @ @ @ @ @ @ @ Â񂾁Bߌ㉜RAVᑐR̒ォ猩Ȃޓޗǖ~n̔V @ @ @ @ @ @ @ @ @ @ @ ğd{Aɔӂ萐Űقɒy҂B(`،Y) @ 1956^a31.5.-12. l萐Ű @ @ @ @ @ @ @ @ @ @ @ @ A׽ (N{) Ѱƴwli`xutdKEc @ @ @ @ @ cf. l萐Űق (9-1/1857.1j,48)FN 5Ɏn‚ @ @ @ @ @ @ @ @ @ @ @ 1210̎И{׽ ŌɏI܂B @ @ @ @ @ @ @ @ @ @ @ @ N{׽ Ѱƴwli`x ÂĂ䂯Ȃ‚A @ @ @ @ @ @ @ @ @ @ @ Ѱƴ li` Ƙ҂Ȃ Ƃ߂ @ @ @ @ @ @ @ @ @ @ @ ꕔ (啔?) o܂B ÷ ̑I莸s @ @ @ y瑁h1)z8-6 (6j) 35łɁuĊ (萐)v̕WLf @ @ @ @ @ @ @ @ @ @ ꏊF{͓S瑁瑁A{wR̉Ɓx (piuفv) @ @ @ @ @ @ @ @ @ @ pFʔ 200Ah 1 300 @ @ @ @ @ @ @ @ 7.14-16 瑁h1) ( 5j^ 8-8) ҉ 57 (j 31/ 26) @ @ @ @ @ @ @ @ @ @ @ @ u`u^ꔑҁ^񔑎 ƎRIł (Ζ҂ɍD]) @ @ @ @ @ @ @ @ @ 14 (y) 18:00WAE[H @ @ @ @ @ @ @ @ @ @ @ @ uJ熁vc^uc}̈AvR̉ _lĎYْ @ @ @ @ @ @ @ @ @ @ @ @ հӱ鎩ȏЉ ڸش ( ް) @ @ @ @ @ @ @ @ @ 15 () ߑO@ u`uߑИ̊{Iv䗘 @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ ussɂƍ߂ƎИvēcP @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ uƂւ̓vc @ @ @ @ @ @ @ @ @ @ @ @ @ ߌ@ u`u͍ĎYƑqcSOvdK (8-9f) @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ ^䓚Ec @ @ @ @ @ @ @ @ @ @ @ @ @ Ӂ@ @ x (KĐRcExƖ) ^ڸش @ @ @ @ @ @ @ @ @ 16 () ߑO@ R (wǂRo) @ @ @ @ @ @ @ @ @ @ @ @ @ ߌ@ Ș^|^U (Еqē񝾁rȂ牺R) @ @ @ yݹāz( 5j,1)Feu`͎ԕsA|[カ^u`Iœ^ @ @ @ @ @ @ @ @ ٰߓc]ށ^H沂şޑ^ڸش ‚Ƃ肽^ @ @ @ @ @ @ @ @ ҉ғmE搶ƍk^ʓma\XƂꂽ^ @ @ @ @ @ @ @ @ ҉ėǂ‚ etc. @ 1957^a32. _ˎx椏𘬍i9-1,42)F @ @ @ @ @ @ @ @ @ @ @ ÷ādKw@Ɛlxw⛔Ixwb̒Njx(Ɍ㋳{) @ @ @ @ @ @ @ @ @ @ @ RcBdK oȁB2.12 uTΗj 6-8J @ @ @ @ @ @ @ @ 2.12@ _ˎx椏1) (V)FdK𚡂ގRc (i`،Y) @ @ @ y_ˎxւz(9-3/3j,41)F́A2.12 () 6 8܂ @ @ @ @ @ @ @ @ @ @ @ _ːVٓ萐dͻ޽Ecōs͂ꂽB @ @ @ @ @ @ @ @ @ @ @ oȎҁdKȉ20B÷Ăɓ炸ȏЉƍks‚A @ @ @ @ @ @ @ @ @ @ @ EƂƐb萌W‚āAZp҂ƎƂ̔rMSɘ_cꂽB @ @ @ @ @ @ @ @ @ @ @ 񁁐_ˑ{{B @ @ @ @ @ @ @ @ @ @ @ wb̒Njx́u䂪lV̝̑Jv𒆐Sɕ׋B @ @ @ @ @ @ @ @ @ @ @ ꏊ͑Ɠꏊ (`،Y) @ @ @ @ @ @ @ @ 2.26@ _ˎx椏2) ({)FdKubA҉҂Ƃ̘_c @ @ @ y_椏2)oȋL^z( 12j 2)FdK搶̂ŁAVAɌ @ @ @ @ @ @ @ @ @ @ @ dK搶𒆐SɁwb̒Njx̉䂪lV̕՗̌b @ @ @ @ @ @ @ @ @ @ @ 搶S鄂ɊlԐ̐MAȈAɂ‚b܂B @ @ @ @ @ @ @ @ @ @ @ ʂĐlԐ͐Mł̂ł炤B @ @ @ @ @ @ @ @ @ @ @ łĂ悢̂ł炤B @ @ @ @ @ @ @ @ @ @ @ e̖̒獂簂Ȃ{IES鄏̎₠B @ @ @ @ @ @ @ @ @ @ @ _c e݂̍Aꐫɂ‚ ӌɊ潑(T) @ @ @ @ @ @ @ @ 2.26@ _ˎx椏2) ({)FdKubAubƙ҉҂Ƃ̘_c @ @ @ @ @ @ @ @ 3.12@ _ˎx椏3) (V)F (ȌuTɊJÁ^dK搶𚡂 2) @ @ @ @ @ @ @ @ 4.22@ 15:00` xJuFdKuИvƏ@v @ @ @ @ @ @ @ @ @ @ @ (xIOu^N椏E񍐘ȂǂJ) @ @ @ @ @ @ @ @ @ @ @ ӁAdKA_ˎx椏ɏo @ @ @ y萐z(9-5/5j,32) F4.22 () ߌ 3 s{@Sԋ @ @ @ @ @ @ @ @ @ @ @ _ˑ{{dK̍u‚B @ @ @ @ @ @ @ @ @ @ @ ͍N߂犈n߂s{ИvzÂ̛{uŁA @ @ @ @ @ @ @ @ @ @ @ ؐ̈AƎИvz̏Љ̂ƁA @ @ @ @ @ @ @ @ @ @ @ dKuИvƏ@vƑ肵āA @ @ @ @ @ @ @ @ @ @ @ @ƎИvƂ͙_Ȃ̂ł͂ȂƂ @ @ @ @ @ @ @ @ @ @ @ 鈤椎҂̋^ɓւ`ŁA_҂ɂĈł鏊ȂB @ @ @ @ @ @ @ @ @ @ @ ŌɎēcPut̕qׂĕ˜B @ @ @ @ @ @ @ @ @ @ @ ̂ Ȃ ҂‚͞B҉Җ400 (g) @ @ @ @ @ @ @ @ 7. 8@ _ˎx椏 (OŏI)FO{ŗ[HBdK搶Ɋӂ @ @ @ @ @ @ @ @ @ @ @ `؋L ( 16j2)Fu搶̂ɂ笕f椏̂ƁA @ @ @ @ @ @ @ @ @ @ @ 搶̏uԒˎR[v ˰ yɂȂ ׼قɎX̂ @ @ @ @ @ @ @ @ @ @ @ XɂƂ‚ĒR͂ł‚AɂƂ‚Ă͑̂ȂƂł‚ @ @ @ @ @ @ @ @ @ @ @ vӁBlɐ[X\グv @ @ @ @ @ @ @ @ 7. @ @ dK 55΁A_ˑ{{ 4I @ @ @ y瑁h2)z9-5,36Ł^9-6, 26łɁuĊ (萐)v ^WL @ @ @ @ @ @ @ @ @ @ 13H 350^45AủHځ`̉HځvŐ\ @ @ @ @ @ @ @ @ @ @ ҉ 42 ()Fn恁n_ޗǂ̂قADyEEEÉ @ @ @ @ @ @ @ @ @ @ (j28/14)@ @ xEE_ˁEcEE_ˑ @ @ @ @ @ @ @ @ 7.13-16@ ( 17j^9-8)F13-14 ʌ^15-16 { @ @ @ @ @ @ @ @ @ 13 (y) ӁAE[H^cE_lْ A^ȏЉ^k^ @ @ @ @ @ @ @ @ @ 14 () ߑO@ u`u㔁Eli`ƎИ`vdK @ @ @ @ @ @ @ @ @ @ @ @ @ ߌ@ u`uИ`Ɩ`vc @ @ @ @ @ @ @ @ @ @ @ @ @ ^䓚Ec @ @ @ @ @ @ @ @ @ @ @ @ @ Ӂ@ @ ڸش (ް) @ @ @ @ @ @ @ @ @ 15 () ߑO@ u`ulԂ̕ɂ‚āvdK @ @ @ yɌ̍u`Ӂz 1)Ѱƴ 椏ł̘buli̕łʁv^ @ @ @ @ @ @ @ @ @ @ ̎莆u\͂̕s𒴂li̕𛉊ɂ͂ǂ΂悢v @ @ @ @ @ @ @ @ @ āFlԂ͕sɐĂ邪̂ɕւ̗~B @ @ @ @ @ @ @ @ @ @ łȂ΂ȂʂƎvւΎvӂق s𛉊₤ɂȂB @ @ @ @ @ @ @ @ @ @ ݞHAulԂ ǂyŕ ǂyŕsA @ @ @ @ @ @ @ @ @ @ ulԂ ǂyŕłׂ ǂyŕsłׂ @ @ @ @ @ @ @ @ @ @ uʂčl@˂΂Ȃʁv @ @ @ @ @ @ @ @ @ @ 2)Ȃ̉”\ɛMF”\ƂĂ PɓB @ @ @ @ @ @ @ @ @ @ (qcSOuvwɂ‚āv)B @ @ @ @ @ @ @ @ @ @ {c̈̑ȎИ𗝘_́AɉċȕvƋɁA @ @ @ @ @ @ @ @ @ @ As̐F܂łB@vzɂ ʂ݂̌B @ @ @ @ @ @ @ @ @ @ 3)ꕍ̕skOAꕍfǂȂkO @ @ @ @ @ @ @ @ @ @ ɗ炸[B (Иvz) I]܂A@̕ɂB @ @ @ @ @ @ @ @ @ @ |@̋ϓł͎И𐶊ɉğޑo҂ȂB @ @ @ @ @ @ @ @ @ @ 4)ulԂ̕vɛF̕sBFĂl͏ȂA @ @ @ @ @ @ @ @ @ @ xɍB啔̐lX͕~sɎĂB @ @ @ @ @ @ @ @ @ @ ߂ułȂ΂ȂʁvƍlւĂƕꂸA @ @ @ @ @ @ @ @ @ @ J날sÎ󂵂悤ƂӐS\ւɂȂƁA𓾂₷B @ @ @ @ @ @ @ @ @ @ @ @ @ ߌ@ u`u{{ɂ‚āF ޽ľװ{ɂ‚āv@ @ @ @ @ @ @ @ @ @ @ @ @ @ Ӂ@ @ c𚡂ŋcF{̕␧EOߑ㐫_c̒S @ @ @ @ @ @ @ @ @ 16 () ߑO@ oR^HAU @ yOaj̙҉Lz ( 17jA57.8.15j 2) F(É@4) @ @ @ @ @ @ @ @ @ @ @ @ @ 1)򍞂ł݂ď߂Ēm‚Иvz̑ۂ́A̘r @ @ @ @ @ @ @ @ @ @ @ @ @ MSɎИvzɂ‚Č悤ƂlB̏Wł邱ƂłB @ @ @ @ @ @ @ @ @ @ @ @ @ ΂̏o₤ȔMӂAłďirɐli`҂炵ԓx @ @ @ @ @ @ @ @ @ @ @ @ @ blB΂łB @ @ @ @ @ @ @ @ @ @ @ @ @ 2)҉ėǂ‚̂́Aheu`E_Ȃnjꂵ̂ς @ @ @ @ @ @ @ @ @ @ @ @ @ łȂAԓIPT[‚ĎXȐlBƘbւƂłB @ @ @ @ @ @ @ @ @ @ @ @ @ ə҉҂݂Ђɞق߂ ڸش ‚ƂłB @ y@ ]̙҉Lz ( A3)F(x) 瑁܂ŋDԁEdԁE޽ Ƙ㋂ @ @ @ @ @ @ @ @ @ @ @ @ @ 15PBSfj̐ΒiAuR̉Ɓv̌Ɉꂽ @ @ @ @ @ @ @ @ @ @ @ @ @ قȕ\āu҂ėǂ‚v ݂ݎv‚B @ @ @ @ @ @ @ @ @ @ @ @ @ 猩m͉c搶ƈɌ񂾂Aق̕X‚ƑO @ @ @ @ @ @ @ @ @ @ @ @ @ m‚Ă₤ȟBȏЉANɂĂ邤 @ @ @ @ @ @ @ @ @ @ @ @ @ N₩ɕяlԌQ͂ɂИvzɑ䂵B @ @ @ @ @ @ @ @ @ @ @ @ @ ͛瑁̎RX̍XցAOȂT̎p̂ @ @ @ @ @ @ @ @ @ @ @ @ @ Ƃӈۂ́A̍ő̝n‚B @ @ @ @ @ @ @ @ @ @ @ @ @ ړIɌ΂ꂽl萌W̐D栂ւ₤Ȃ́A @ @ @ @ @ @ @ @ @ @ @ @ @ ‚ƖYȂB y܉萐Űفz( 18j 2ʁ^ 9-9,50)F2JnBʲсw̐fЁx ÷ ēcEE @ @ @ @ @ @ @ @ @ @ @ @ @ dK 3uto ̉Au㔂uTjߌ 6:30`8:30 ` @ @ @ @ @ @ @ @ @ @ @ @ @ قōsӁv|mB y䗘̉zz( 69j 1)FudK搶ɂ‚āv @ @ @ @ @ @ @ @ @ @ @ @ @ Ɉۂ̐[vЏo a30N 33NИvz萐x @ @ @ @ @ @ @ @ @ @ @ @ @ uTɊJĂ萐Ű 搶 ꏏə҉ƂłB @ @ @ @ @ @ @ @ @ @ @ @ @ Ű ͑N (a30N31N) ʲ ́wnւ̓xA @ @ @ @ @ @ @ @ @ @ @ @ @ N (31N32N) ϯާ ́wИ{xA @ @ @ @ @ @ @ @ @ @ @ @ @ ON (32N33N) ʲ ́w̐ffxAꂼ ÷ ƂA @ @ @ @ @ @ @ @ @ @ @ @ @ ut Nڂ c A @ @ @ @ @ @ @ @ @ @ @ @ @ N ON Ƃ ēcPsƐ{utƎłB @ @ @ @ @ @ @ @ @ @ @ @ @ ꏊ N ONڂ `قŁA @ @ @ @ @ @ @ @ @ @ @ @ @ Nڂ VقłB @ @ @ @ @ @ @ @ @ @ @ @ @ dK搶 MSɏoȂAȂǂ Ǜ{ȍuqɂXA @ @ @ @ @ @ @ @ @ @ @ @ @ ܂ႢʘƋɋc_ւꂽ̂łB @ @ @ @ @ @ @ @ @ @ @ @ @ ̌pɂ́AȂ̑哹߂Ď~܂ȂA @ @ @ @ @ @ @ @ @ @ @ @ @ ǂȑfpȈӌłhӂȂĎA @ @ @ @ @ @ @ @ @ @ @ @ @ Rȋ҂̖ʉeݏoĂB @ @ @ @ @ @ @ @ @ @ @ @ @ drAɐl̎ႢlBւĐ搶ƌꏏ @ @ @ @ @ @ @ @ @ @ @ @ @ ~c粂̋iXɓA˰ ݂Ȃ @ @ @ @ @ @ @ @ @ @ @ @ @ ȟck̈ꎞ߂ĂƂvЏołB @ ɌF N{ESZ{EИ{ O‚A䂳͎И{^cB @ @ @ @ @ @ @ 10. 1 () _椏 ĊJFuTΗj ߌ 6`9^÷āFE.H.wVИx @ @ @ @ @ @ @ @ @ @ @ @ @ 椏̂ƁAiX֌J݁Ac_㔂{k_ @ @ @ @ @ @ @ @ @ @ @ @ @ oȎ҂20W߂ĐϗāA[HȂǂŊҌ @ @ @ yÑq (_ˎx) _ˎxz(9-12/12j,44) F @ @ @ @ @ @ @ @ @ @ @ @ @ 1)_˕߂ɘFAN2A_ˎxo҂܂B @ @ @ @ @ @ @ @ @ @ @ @ @ dK搶𚡂椏ƂӖ͂ ȐlXuTΗj̔ @ @ @ @ @ @ @ @ @ @ @ @ @ 椏Aw@Ɛlxw⛔Ix ÷ @ @ @ @ @ @ @ @ @ @ @ @ @ l_E@_ElS鄘_ьЂ܂B @ @ @ @ @ @ @ @ @ @ @ @ @ 㔼dK搶̐_ˑ{{ɊX态XA @ @ @ @ @ @ @ @ @ @ @ @ @ NwlᢌAPTA ̔@ Ɨ₩ꂽ肵܂B @ @ @ @ @ @ @ @ @ @ @ @ @ 椏 搶z̐ԒˎRi 낼 ҉ ĂȂA @ @ @ @ @ @ @ @ @ @ @ @ @ ں޺ݻ A‚‚ ق @ @ @ @ @ @ @ @ @ @ @ @ @ 悢椏 7܂B @ @ @ @ @ @ @ @ @ @ @ @ @ 2) 7 8 9ɘi‚Đ_ˑ{xƍLu̎w @ @ @ @ @ @ @ @ @ @ @ @ @ ݹٽ ́wzț{ցx 椂݁AɊ潑 Ɍ_͂ @ @ @ @ @ @ @ @ @ @ @ @ @ c_܂B_{Ԍ̛{ƁASŕ׋Ă @ @ @ @ @ @ @ @ @ @ @ @ @ wƂ̑֕̈̕ЂmA{̑؂Ɋv܂B @ @ @ @ @ @ @ @ @ @ @ @ @ Ă̒101椏͉ĕȓɗǂhł܂B @ @ @ @ @ @ @ @ @ @ @ @ @ 3)10.2 E.H. ́wVИx ÷ _ːVفA㏤D @ @ @ @ @ @ @ @ @ @ @ @ @ ̙_ւŏ펞 15l̏oȂ𓾂 Ɍ r椏 @ @ @ @ @ @ @ @ @ @ @ @ @ JĂ܂B11.26 () ɂ ضl E.H. Fl Prof.Spalding @ @ @ @ @ @ @ @ @ @ @ @ @ āAۓss_˂炵椏܂B @ @ @ @ @ @ @ @ @ @ @ @ @ ̕ ײČőOɐmjւɘ҂ÂRA @ @ @ @ @ @ @ @ @ @ @ @ @ Ўv̑QiIȍs 낢Ƙbfւƞق݂łB @ @ @ @ @ @ @ @ @ @ @ @ @ ̊cɂ O {̙҉]A @ @ @ @ @ @ @ @ @ @ @ @ @ }ɁwVИx椂ݎn߂lBƂB @ @ @ @ @ @ @ @ @ @ @ @ @ 񍐒v܂B @ @ @ @ @ @ @ 10.@ dK搶Axōu ( 21j,2)F @ @ @ @ @ @ @ 11.26 () _椏FOut ߰ިݸދA㞐 @ @ @ @ @ @ @ 12.10 () _椏F߰ިݸދAwVИxɂ‚ ub @ @ @ @ @ @ @ @ @ @ @ @ @ (pA桂Ȃ) @ @ @ @ @ @ @ @ @ @ @ @ @ ⊈潑B8:00 A߂̋iX 9:30 ܂ ut𚡂şck @ @ @ @ @ @ @ 12.16 () _椏YNF؞ 13ҏW @ 1958^a33. 1.14 _椏F扮 2KA19:00-21:00 @ @ @ @ @ @ @ @ 1.28 _椏F扮 2K @ @ @ @ @ @ @ @ 2. 1 yj ް݁wИ`̐_xu] 2ځA @ @ @ @ @ @ @ @ @ @ ܏͝^cdKAsɂ艄 @ @ @ @ @ @ @ @ 2.11 _椏i扮 2K)F14:30`21:00 wVИx ÷āB @ @ @ @ @ @ @ @ @ @ l`I ` O`ւzڂA @ @ @ @ @ @ @ @ @ @ R@̕]Jɂ‚‚_c @ @ @ @ @ @ @ @ 2.25 _椏i扮 2K)F14:30`21:00 @ yÑqu椏̒ł̓Xvz( 24jA58.3.15j 2) Fm肽‚̂ŁA @ @ @ @ @ @ @ @ @ @ Ўv椏ɂlŔ򍞂ŎȂ̒qd̊ꂫȂA @ @ @ @ @ @ @ @ @ @ ɔNAD̗FB @ɍsAdĒ̛{K̗ǂɊA @ @ @ @ @ @ @ @ @ @ 椏悤Ƃ̂ƁA椂 ÷ q㔂ȂA ˭ް @ @ @ @ @ @ @ @ @ @ w{`SZ݁̕xƁw{̎vzx(gV) 0 @ @ @ @ @ @ @ @ @ @ w̏W椂ł܂Bӌ͕S㇘łB @ @ @ @ @ @ @ @ @ @ _ˎx E.H. ́wVИx L̓‚ƍ͂ @ @ @ @ @ @ @ @ @ @ O‚椏̒ŁAɛ銴󐫂݂̓A @ @ @ @ @ @ @ @ @ @ ̒̊SˆŒƂȂƂȂ‚Ă韆łB @ @ @ @ @ @ @ @ @ @ ̐l—ǂɌ‚Ă钆ŁAkĂ͂ȂA @ @ @ @ @ @ @ @ @ @ ׋čsƎvӓXłB @ @ @ @ @ @ @ @ 6. 2 _椏i 27,1ʁ^10-6,28)F爰قɝ̍XB @ @ @ @ @ @ @ @ @ @ ̓sō 2񂾂ΗjłȂjɂ @ @ @ @ @ @ @ @ 6.16 _椏F18:30-21:00@ wVИx椗 @ @ @ @ @ @ @ @ 7.12 (y)@ _椏 (10-8,28)F 椏ÎŁA @ @ @ @ @ @ @ @ @ @ ̒Ñx [HkJÁB @ @ @ @ @ @ @ @ @ @ 9椏 ĊJB÷āсwR̓x @ @ @ y瑁h3)z( 27j 2ʁ^10 (萐)vWLfځ^1 3H 400^23 @ @ @ @ @ @ @ @ 瑁h3) 񍐋L ( 29j 1-2ʁ^ 10-9,28) ҉ 39 (j19/20) @ @ @ @ @ @ @ @ 7.19 (y) ӁAcE_lْ̈A^ȏЉ^E_ˎx^ڸش @ @ @ @ @ @ @ @ @ 20 () ߑO@ u`uɂ‚āvdK @ @ @ @ @ @ @ @ @ @ @ @ @ ߌ@ ҉ᢕ\ufՂ̌ʂɂ‚āvck @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ uaɏoȂāvq @ @ @ @ @ @ @ @ @ @ @ @ @ Ӂ@ @ @ @ @ @ @ uޭ۸׼ lԁvMJ@ @ @ @ @ @ @ @ @ @ @ @ @ @ 22:00 ܂ c @ @ @ @ @ @ @ @ @ 21 () ߑO@ Rԁ^oR @ @ @ @ @ @ @ @ @ @ @ @ @ ߌ@ U @ yO̙҉Lz ( 29jA58.8.15j 1)F(_ˑ@) Ўvɟ^ȑԓxA @ @ @ @ @ @ @ @ @ @ @ @ @ łĖ邭e݈ՂjȕAɏ̐^ȊA @ @ @ @ @ @ @ @ @ @ @ @ @ ɂ͉̐ƂȂ܂BxAh̛͂܂A @ @ @ @ @ @ @ @ @ @ @ @ @ ̐܂̈ꑈꑂقIS‚ę҂܂B @ @ @ @ @ @ @ @ @ @ @ @ @ dK搶̓萂u`AcEEMJǑᢕ\ @ @ @ @ @ @ @ @ @ @ @ @ @ ^䓚AɌw̍ߍAVࣖȗVYARoccA @ @ @ @ @ @ @ @ @ @ @ @ @ ĕʂBInѓO ϲݼēI󟆂tƂȂ̂ƂȂ‚ @ @ @ @ @ @ @ @ @ @ @ @ @ 粂Ύju܂ӓ܂Łv񂵂̕ʂ̈ꎞA @ @ @ @ @ @ @ @ @ @ @ @ @ ͎Ɋ[̂łB޾ټ @ @ @ @ @ @ @ @ @ @ @ @ @ ϲݼēI󟆂F߁Aǔj͂ȗs̈ɐ^ȉԂoA @ @ @ @ @ @ @ @ @ @ @ @ @ x͂Ȏ悻ɐl̐vӐȐlԑToƂ́A @ @ @ @ @ @ @ @ @ @ @ @ @ Đ瑁hYȂ̂ɒv܂B @ ydK搶̙jȈʁz ( AuҏSLv)F @ @ @ @ @ @ @ @ @ @ @ @ @ 瑁h̍ŏIAHꑫɟddKEc_搶Ɠ䎁 @ @ @ @ @ @ @ @ @ @ @ @ @ g܂ӓ܂" ̍Ō‚ƂAdK搶Xqʂ @ @ @ @ @ @ @ @ @ @ @ @ @ g撣[" ƌĉ‚̂ɂ͊B{uR̉Ɓv @ @ @ @ @ @ @ @ @ @ @ @ @ 萂班oƁAlBKi (̐Sfj̊Ki) @ @ @ @ @ @ @ @ @ @ @ @ @ ~؂‚Ƃp̂ŁA݂Ȃʼn̂㔂Ȃ҂‚ĂA @ @ @ @ @ @ @ @ @ @ @ @ @ gbz[" Ƃ Ă߁Agbz[" ƓւgoJ[" @ @ @ @ @ @ @ @ @ @ @ @ @ Ԃ‚ĂB҂ɃoJ[Ƃ͉!?@ ߂͓䎁炵A @ @ @ @ @ @ @ @ @ @ @ @ @ x‚ł͒u񂼂Ǝv‚ĂAŕɂƁA @ @ @ @ @ @ @ @ @ @ @ @ @ dK搶̐ (?) ɂ̂ŁAgoJ[b" Ɠ{‚΁A @ @ @ @ @ @ @ @ @ @ @ @ @ gKo[b" ƕƂӛ{ɊRBdK搶䎩gA @ @ @ @ @ @ @ @ @ @ @ @ @ ܂Ō‚čs‚uR̉Ɓvْ̊Ɏ玩Ԃ̑ @ @ @ @ @ @ @ @ @ @ @ @ @ goJ[b" Ɠ{‚ꂽłB @ @ @ @ @ @ @ @ @ @ @ @ @ goJ[Ƃ͌̌tƂ݂‚" @ @ @ @ @ @ @ @ @ @ @ ɌF͗NJԂB @ y쑺q̙҉Lz ( 30jA58.9.15j 2) F(Б{) @ @ @ @ @ @ @ @ @ @ @ @ @ uق‚瑁hc搶ւ̎莆v7.22LB @ @ @ @ @ @ @ @ @ @ @ @ @ 1)A瑁hĂёɟd‚Ă̂́A܂ق‚ @ @ @ @ @ @ @ @ @ @ @ @ @ ̂ł܂̐ɖ߂ꂳɂ܂Buق‚Ȃv @ @ @ @ @ @ @ @ @ @ @ @ @ uق܂ɂ݂ȐӂɈꂽ̗ǂlB‚Ȃv @ @ @ @ @ @ @ @ @ @ @ @ @ d̓dԂ̒ŁAB͂ȌtJԂĂ܂B @ @ @ @ @ @ @ @ @ @ @ @ @ ĎRŐLѐLтƂИvz̕B̕׋ԂɈցA @ @ @ @ @ @ @ @ @ @ @ @ @ B͉PT̂Ȃ{ɏInĂ邱ƂA @ @ @ @ @ @ @ @ @ @ @ @ @ ̛ɋÂł܂炸A A{΂˂΂ȂʂƈӌvA @ @ @ @ @ @ @ @ @ @ @ @ @ R ̧ R₵ĕʂ܂B @ @ @ @ @ @ @ @ @ @ @ @ @ 2)Sʂ̒ łY̂́A19ӂ̐̉ł̓cłB @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ (tc搶Ƃ̋c_ȗ) @ @ @ @ @ @ @ @ @ @ @ @ @ 3)dK搶́uɂ‚āvA dK搶 b̓e @ @ @ @ @ @ @ @ @ @ @ @ @ AxXƐꂽ΂ȌA ɛ搶̌虛 @ @ @ @ @ @ @ @ @ @ @ @ @ ԓxɐSł܂Bɐ搶 ͍ĎY搶uċx͐l @ @ @ @ @ @ @ @ @ @ @ @ @ ̈ꂩɑc؂Ȃ̂łB̂ɁA̔N̉ċx݂͂ǂ߂ @ @ @ @ @ @ @ @ @ @ @ @ @ NS‚Ă͂‚茾ւ₤ɉ߂Ȃvƌ͂ꂽƂ @ @ @ @ @ @ @ @ @ @ @ @ @ YʂƋ‚‚A͌Ɗ]Ɗ铓RvЂłB @ @ @ @ @ @ @ @ @ @ @ @ @ čAK̟drXɓAӟȂɏIɌ‚Ă @ @ @ @ @ @ @ @ @ @ @ @ @ Aw䂪S̉́xƂӑẢdKƂӖ܂B @ @ @ @ @ @ @ @ @ @ @ @ @ ȂɂƂ‚ĂӂƊJłɂ܂B @ @ @ @ @ @ @ @ @ @ @ @ @ ulԂ͒NłႢɂ͔\͂̕sɜނ̂łBł‚B @ @ @ @ @ @ @ @ @ @ @ @ @ N͔\͂̕sAJoւꂽ̔\͂@Ɋp @ @ @ @ @ @ @ @ @ @ @ @ @ ׂɜ݋ꂵ񂾁v @ @ @ @ @ @ @ @ @ @ @ @ @ 4)͖łBdK搶ŏɋ‚‚uޯް ̂lԂɂȂ @ @ @ @ @ @ @ @ @ @ @ @ @ Ƃ ȎgɂȂ邱ƂłvƂ́AǂȏGɑ‚ĂAɎȂ @ @ @ @ @ @ @ @ @ @ @ @ @ ”\MÅJᢂɓw͂邱Ƃƌ܂B낢lւ @ @ @ @ @ @ @ @ @ @ @ @ @ 邱Ƃ̑‚łB @ @ @ @ @ @ @ @ @ @ @ @ @ 5)ȈӋ`WЂɂUВA瑁ł‚BɂSБՂ܂ @ @ @ @ @ @ @ @ @ @ @ @ @ ƂӒv܂B䌒NF肵Ă܂B @ @ @ @ @ @ @ @ @ @ @ @ @ ` _椏 (10-11,26)FȌ uTؗj̔ӂ s @ @ @ @ @ @ @ 9.19@ @ @ @ dK搶𚡂 ́wR̓x椂ł܂B @ @ @ @ @ @ @ @ @ @ @ @ @ 30l˔j鐷ԂłB @ @ @ @ @ @ @ 10. 9@ _椏 ( 32j,2)F @ @ @ 10.23@ _椏 ()FсwR̓x @ @ @ @ @ @ @ 11. 6@ _椏 ( 33j,2)FсwR̓x @ @ @ 11.20@ _椏 ()FсwR̓x @ @ @ @ @ @ @ 12. 4@ _椏 ( 34j,1)FсwR̓x @ @ @ @ @ @ @ 12.18@ _ˎxYN ( 34j,1ʁ^11-1,24)F̌قŊJ @ 1959^a34. 1.27@ _椏 ( 35j,1ʁ^11-2,24)FuTΗjJ (÷ā) @ @ @ @ @ @ @ @ 2.10@ _椏 ( 36j,2)FсwR̓x @ @ @ @ @ @ @ @ 2.24@ _椏 (^37j,2)F̓udK搶㞐ȁv @ @ @ @ @ @ @ @ 3.10@ _椏 ( 37j,2)FсwR̓x @ @ @ @ @ @ @ @ 3.24 () _椏 сwR̓x܏́u Ҷƽсv @ @ @ @ @ @ @ @ 4. 7 () _椏 ( 37j,2ʁ^11-5,26) F18:30-21:30 @ @ @ @ @ @ @ @ @ @ @ сwR̓x@ Źu¨ Sv @ @ @ @ @ @ @ @ @ @ @ dK搶w̉ JÁB֔Ԃᢕ\ĂAŋߓڂɊ潑Ȉӌ @ @ @ @ @ @ @ @ @ @ @ 骂͂ċɂ߂ėLӋ`ȘB椏̂30 ÷ 𗣂ĉf`]A @ @ @ @ @ @ @ @ @ @ @ ]A񍐁AΕ]ॕ񍐁AॏW҉񍐁Ae @ @ @ @ @ @ @ @ @ @ @ ҉񍐓elXȘb邱ƂɂĂB̏WЂ ް @ @ @ @ @ @ @ @ @ @ @ FƂāAȂ߁AwE{Eذ ̎O҂΂ @ @ @ @ @ @ @ @ @ @ @ ӌ骂͂ƂASł ̗ ɂ߂ěƎv͂B @ @ @ @ @ @ @ @ 6.@ @ dK 57΁A_ˑ{{ Cޗ @ @ @ y瑁h4)z 39j/11-6,24łɁulĊ (萐)vWL^13H 400 @ @ @ @ @ @ @ @ 7.18-20@ 瑁h4) ( 41j,2-3ʁ^ 11-8,26) ҉ 43 (j24/19) @ @ @ @ @ @ @ @ @ 18 (y) Ӂ@ @ R@YxE_lْ̈A @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ u`u{`̝̖eƎИ`̛ҁvc @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ 潑ȓc^ȏЉ^^ްс^AiɂيOŌ荇Ӂj @ @ @ @ @ @ @ @ @ 19 () ߑO@ u`uYɂlԂ̌vdK @ @ @ @ @ @ @ @ @ @ @ @ @ ߌ@ E_˙_x^eniEEVE_ˁEE @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ E_ˑEޗǏqj̊󋵕񍐁^xc ٰߓc @ @ @ @ @ @ @ @ @ @ @ @ @ Ӂ@ @ O[vc̓e񍐁^c @ @ @ @ @ @ @ @ @ 20 () ߑO@ F񍐁uț{Zp̎IᢓWɂ{SZ̋vΕ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ ufm~l[Vɂ‚āvJ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ uZpƂĂ̑Ovؑ@ @ @ @ @ @ @ @ @ @ @ @ @ dKEc_搶uvuAɂꂾ̘bł҂ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ ɂ͋Ȃv̂ƋRoA14:00 HAU @ ylڂ}ւ瑁hz( 41jA59.8.15j 2) FɌgVLuevX[v @ yӔV̂ւz( 3)F1)҉A̍h͓Z㞂ĂB @ @ @ @ @ @ @ @ @ @ @ @ ut̘bcŎ~߂ĂȂB2)F񍐂͏[B̂₤ @ @ @ @ @ @ @ @ @ @ @ @ [ 3̕񍐎҂ɛ鍡ᢓWւ̊]ďIւꂽ̂ @ @ @ @ @ @ @ @ @ @ @ @ тł‚B3)RoB4)ŔAdKEc_搶ƋɂA @ @ @ @ @ @ @ @ @ @ @ @ ԂɓĒBɌ񏉂ߋklXɌꂽB @ @ @ @ @ @ @ @ @ @ @ @ u₤ȂA޶۰vu₤ȂA޶۰v dK搶߂ @ @ @ @ @ @ @ @ @ @ @ @ u޶۰v ܂ҔNAŏIS鄂₤ȏ[҂āA @ @ @ @ @ @ @ @ @ @ @ @ Ăсu޶۰vЂɂ‚Ę҂̂Ǝv‚B @ @ @ @ @ @ @ 11.13@ _椏 ( 45j,2)F哇Nҁw̎vzxhlԂ̎Rɂ‚ @ @ @ @ @ @ @ 11.27@ _椏 ( )F哇w̎vzxIIB̊v @ @ @ @ @ @ @ 12.11@ _椏 (12-1,39)FN̍ŏI @ @ @ @ @ @ @ 12.26@ 萐x (_ˁEE) eF_ˎx Ñq @ 1960^a35. 1.22 () _椏 (12-2E3,64)FdKoȁAA݂ɐVt̕B@ @ @ @ @ @ @ @ @ @ @ ̂ 哇Nҁw̎vzxÓulƚƁvɂ‚ ӌB @ @ @ y瑁h5)z j (7.10)/12-6,26łɁu܉Ċ (萐)v̕WLf @ @ @ @ @ @ @ @ @ @ ðρFИ`̍Ğ^13H 400 @ @ @ @ @ @ @ @ 7.30-8.1@ 瑁h5) ( 53j^ 12-9) @ @ @ @ @ @ @ @ @ @ @ @ @҉ 67 (ߋōL^^̙҉) @ @ @ @ @ @ @ @ @ 30 (y)@ @ @ u`uИ`SZIӖvc @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ u{ɂ鐭}vs @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ ȏЉ^ڸش @ @ @ @ @ @ @ @ @ 31 () ߑO@ u`uИ`̐lVvdK @ @ @ @ @ @ @ @ @ @ @ @ @ ߌ@ O[vc (ϗEESZʂ疯И`) @ @ @ @ @ @ @ @ @ @ @ @ @ Ӂ@ @ c^ڸش @ @ @ @ @ @ @ @ 8. 1 () ߑO@ ҉ҕ񍐁uИ`ƋΙҋvЎY @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ uc𐭎̍Ğvck @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ oR^U @ yΕ@ ̙҉Lz (wИvzx12-9j 23-24) F1)̍h̖ړÍAsЂ΂΂ @ @ @ @ @ @ @ @ @ @ @ ĂF̐er𚤂A҉҂̑݌[ᢂ_ӂ́B2) @ @ @ @ @ @ @ @ @ @ @ NXɂȂ‚ĂA_eNX[ĂĂB߂͍u` @ @ @ @ @ @ @ @ @ @ @ ‚A҉҂̈ӌē_dɈڍsBh͙҉҂ @ @ @ @ @ @ @ @ @ @ @ Ǝ^䓚ŐϋɓIo𑣂AD܂B3)Ă̝ɓu @ @ @ @ @ @ @ @ @ @ @ RɓuW܂‚Č芎‰̂ӟ͊iʁBN߂ĘAčs @ @ @ @ @ @ @ @ @ @ @ ‚̖AИ`ɂ͂ċȂ‚ɂS炸A @ @ @ @ @ @ @ @ @ @ @ h̕ɂ͖Ɏ䂫ꂽ͞B @ @ @ @ @ @ @ @ @ @ @ F̐eȑł茤ȑł邱̍hɁAƂS @ @ @ @ @ @ @ @ @ @ @ ɂ̙҉҂܂B @ @ @ @ @ @ @ 11.26@ dKuu@ɂ‚Ă̎̍l֕v(uA18:30` ) @ 1961^a36. 1.-3. dK 59΁ASfaŕa @ @ @ @ @ @ @ @ 7.8-13@ pzKSxmőĊ{ŰيJ @ @ @ @ @ @ @ @ @ @ @ @(̂ݛ{^) @ @ @ @ @ @ @ @ @ @ @ cf.Љ@ rquxmŰًLv(13-8,20-22) @ @ @ @ @ @ @ @ @ @ @ @ ɌgVuxmŰ ̂Ɓv(13-8,23-25) @ y瑁h6)z7.29-31 uZĊ (萐)vdK搶ŝs @ @ @ @ @ @ @ @ 9.- @ _椏AdK𒆐S ٽށwܲĶװx椂 () @ @ @ @ @ @ @ 11.19@ _ˎs e𓰂 җj^wdK搶ژ^xzz @ @ @ @ @ @ @ 12. 2@ Ўv Ou ( 70j,2)FdKuИv̓N{v @ 1962^a37. 2.17@ dKu͍搶̎vЏov(14-3,4-7)Fg̐iD݂炷ƁA @ @ @ @ @ @ @ @ @ @ @ łΐU̟k͏Vɕ‚‚ĕ邵B̏V̂Ȃ椂 @ @ @ @ @ @ @ @ @ @ @ 菑肷eƕ́Aǂň؂̐lX̐liƍKBɌqA @ @ @ @ @ @ @ @ @ @ @ И̐iWƌq̂łȂĂ͂ȂȂBłȂAg̐ls @ @ @ @ @ @ @ @ @ @ @ AV̐̂̂ӖƂȂ̂łB͍搶́ÂƂAg @ @ @ @ @ @ @ @ @ @ @ ‚ĎłMdȋPƂāAɟkĉ‚₤Ɏv͂B @ @ @ @ @ @ @ @ 4.@ @ dKu`̓N{v(14-4,2-20)F߁@ BjVƓN{ @ @ @ @ @ @ @ @ @ @ @ ߁@ `IJlVƓN{ @ @ @ @ @ @ @ @ 7.@ @ dKu`̐lVv(14-7,58-72)F @ @ @ @ @ @ @ @ 9.-@ _椏AE.H.wjƂ͉x椂ݎn߂ (EÑ) @ @ @ @ @ @ @ 10.@ @ dKuȐSES̖F`̐lVE̓v(14-10,66-81) @ @ @ @ @ @ @ 12.17@ _椏AE.H.wjƂ͉x椗Ae߰èJ @ 1963^a38. 1.@ @ dKulԐɂ镽F`̐lVE̎Ov(15-1,2-18) @ @ @ @ @ @ @ @ 4.@ @ dKulԐɂ鍇IvƔ񍇗IvF̎lv(15-4,37-52) @ @ @ @ @ @ @ @ 4.22- _椏 (EOj)^ڲwR̋G߁x(gX) @ @ @ @ @ @ @ @ 7.@ @ dKulԐɂ鋦ւ̌Xूւ̌XF̌܁v(15-7,30-43) @ @ @ @ @ @ @ 10.@ @ dKulԐɂF`̐lVE̘Zv(15-10,33-50) @ @ @ @ @ @ @ 10.13@ HGW (ԉ~) dKoȁA{̈ۂɂ‚ڂӌJ @ 1964^a39. 1.@ @ dKuOS̔񍇗Ɩ`E̎v(16-1,26-40) @ @ @ @ @ @ @ @ 4.@ @ ɌgVu͍ĎY̋{_v(16-4,14-33) @ @ @ @ @ @ @ @ @ @ @ dKuOS̔񍇗Ɩ` (2)F̔v(16-4,41-61) @ @ @ @ @ @ @ @ 7.@ @ dKuϽЭƹ݂Ɩ`O (3)F̋v(16-7,44-43) @ @ @ @ @ @ @ 10.@ @ dKuϽEӸ׼ƑO̐lԐO (4)F̏\v(16-10,29-50) @ 1965^a40. 1.@ @ dKuϽEӸ׼̞ܗ͍\ƑO̔\F11v(17-1,40-58) @ @ @ @ @ @ @ @ 2.27@ Ru҂lԑvcdK ʙ҉ (17-4,19-27) @ @ @ @ @ @ @ @ 4.@ @ dKuI萐S̖脟`̐lVF12v(17-4,50-72) @ @ @ @ @ @ @ @ 7.@ @ dKu`̐lVvƕF13v(17-7,61-78) @ @ @ @ @ @ @ 10.@ @ dKuJ.S.ق́wR_xɂlV`̐lVF14 () v @ @ @ @ @ @ @ @ @ @ @ (17-10,29-48) @ 1968^a43. 8.@ @ dKu͍搶̎莆v(20-8,13-22) IIDˎR{dK搶F(̓L) @ @ ˎR{ւzC̏Љ c (dKEɌgVE`،Y 3lꏏ) @ @ c ({ИƒZ{E{{SZw) @ @ @ ˎR{ˎR{{̐XgEˎR{{ČFV_ƐeA @ @ @ ˎR{{{̑n瑊kəoĂB @ @ @ ˎR{{{̍\zɉ͍ĎY̋{_fĂ̂́A @ @ @ coĂłB @ @ dK搶́ASf֓AˎR{̛{Ƃ̎X̛{Oɉ҉ꂽB @ @ @ ꏏŁAVZtł̈ꔑAoXɘ‚Ă̏ڗsȂǂ͖YA @ @ @ ̂ƂłȂBdK搶̓Lɂ͋L^Ă锤B @ @ c̒ˎR{ɂ́An̕‚₤Ȋ‚ĂB @ @ @ 鏕AujՓɉƂɋƁAĂ₤ɎvӁvƏq債قǁA @ @ @ {̐l萌W͖ĂB @ @ @ ͌ɒق̏A{̓[~Ƃ萌WȂARɏo肵ğckĂB @ 1965^a40. 3.31@ dK 64΁A_ˑ{NސEB @ @ @ @ @ @ @ @ 4. 1@ ˎR{ɏACB^cȖځuИvzjvuИ𐧓x̌v @ @ @ @ @ @ @ @ 7. 8@ dK搶̂bƁAʼnĐQꂵvЂƂ͈xȂRB @ @ @ @ @ @ @ @ @ @ @ SA܂ (F͋{{ɂ̂TAԒˎRR)B @ @ @ @ @ @ @ 11. 2 () bKFdK搶̂bB{ 3l񍐁BðρF`EݻсE @ 1966^a41. 4.22 () ߌAFV{Ăɘ҂āA‚ĖւȂƂ̘bB @ @ @ @ @ @ @ @ @ @ @ S債Ă̂ň󂯂B߂ނ𓾂܂B @ @ @ @ @ @ @ @ @ @ @ ƂɁAdK搶ƂӁB @ @ @ @ @ @ @ @ 5.@ @ dK 65΁A𕹔C @ @ @ @ @ @ @ @ 7.@ @ S؍[ǂœ| @ @ @ @ @ @ @ @ 9.@ @ @ @ @ @ @ @ @ 10.17 () dK搶 (Ď) @ 1967^a42. 4.@ @ dK 66΁ASߗǍD u`ĊJ^uオ‚vƂ̎S @ @ @ @ @ @ @ @ @ @ @ ^cȖځИ𐧓x̌^[~ĥ݁BИvzj Ɍ^c @ @ @ @ @ @ @ @ 4.17 () ˎR{JuBߌAdK搶̍u`uИ𐧓x̌vuB @ @ @ @ @ @ @ @ @ @ @ 搶̂bԂAsЂƑSR̂炸 (オ‚邱ƂȂ)B @ @ @ @ @ @ @ @ @ @ @ uOИ̖グBOИ͐lԂɝ̉߂邩A @ @ @ @ @ @ @ @ @ @ @ Ăǂӛ|̎d邩vuИț{҄S鄉ț{E暓IƂ @ @ @ @ @ @ @ @ @ @ @ ڋ߂AИvzƂƂĐڋ߂B‚܂AИ𐧓x̑P_B @ @ @ @ @ @ @ @ @ @ @ PƂ͂ǂӂƂBǂӐlԂ‚̂PƂBꂪOҁB @ @ @ @ @ @ @ @ @ @ @ 𓥂܂Č҂őOИ_v @ 1968^a43. 3.@ @ w`̐lVxo @ @ @ @ @ @ @ @ 4.25@ dK搶̌䏵 (ޗǂ̘̒V܁uv)F`؁Er{EHX @ @ @ @ @ @ @ @ @ @ @ ƈꏏ v 5 @ @ @ @ @ @ @ @ 7. 4@ 17:00` dK搶w`̐lVxoŋLO @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ (SBcEЏ㖾E`،YEɌgV)F @ 1969^a44. 6.12@ dK 67΁AˎR{ 301ɉāuИ𐧓x̌vu` } @ @ @ @ @ @ @ @ @ @ @ S 11:55^aS؍[ @ @ @ @ @ @ @ @ @ @ @ Ō̌tujDूh߂ɂ͂w͂ᶂȂĂ͂ȂȂv @ @ @ @ @ @ @ @ 6.19@ 14:00 ޗpقɉĒˎR{ @ @ @ @ @ @ @ @ @ @ @ ̂ƁAޗǎs킩فu͍搶̕v @ @ @ @ @ @ @ @ @ @ udK搶Âԁvk @ @ @ @ @ @ @ @ @ @ @ @ @ (wИvzx21-7/a44.7j 21-32Ōf) @ @ @ @ @ @ @ @ @ @ @ @ @ (uɌgV椏vɏW^Z) @ 1988^a63.11.13@ dK` (ˎR{ 4Kc) @ @ @ @ @ @ @ @ @ @ @ 񍐁ɌgVug߂ɌdK搶v (25/2013.8.1 j
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9992725253105164, "perplexity": 178.87118575632036}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578517682.16/warc/CC-MAIN-20190418141430-20190418163430-00447.warc.gz"}
https://www.sebastianstoeckl.com/tags/parameter-uncertainty/
# Parameter uncertainty ## Parameter uncertainty and Financial Markets In this project, we will research many aspects derived from the paper of Garlappi et al. (2007)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9631940722465515, "perplexity": 3979.804885426626}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376824119.26/warc/CC-MAIN-20181212203335-20181212224835-00303.warc.gz"}
http://mathhelpforum.com/advanced-algebra/13581-irreducible.html
# Math Help - Irreducible 1. ## Irreducible By looking at f (x+1) I think I can prove this but not exactly sure how. Here is the question: Prove for any prime number p, f(x)=x^p-1=x^p-2)+ ... + x+1 is irreducible in Q[x]. 2. .. Attached Thumbnails
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9618009924888611, "perplexity": 2219.5492890878095}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398464386.98/warc/CC-MAIN-20151124205424-00160-ip-10-71-132-137.ec2.internal.warc.gz"}
http://www.scholarpedia.org/article/Siegel_disks/Linearization
# Siegel disks/Linearization The linearization problem in complex dimension one dynamical systems ## Statement Linearizable at a fixed point $$\implies$$ tame Given a fixed point of a differentiable map, seen as a discrete dynamical system, the linearization problem is the question whether or not the map is locally conjugated to its linear approximation at the fixed point. Since the dynamics of linear maps on finite dimensional real and complex vector spaces is completely understood, the dynamics of a map on a finite dimensional phase space near a linearizable fixed point is tractable. More precisely the problem is the following: there is a set $$S\ ,$$ the phase space, which can be for instance a subset of $$\mathbb{R}^n$$ or $$\mathbb{C}^n$$ or a manifold, and a map $$f$$ from part of $$S$$ to part of $$S\ ,$$ which represents a discrete dynamical system. We are interested in a fixed point of $$f\ ,$$ call it $$a\ .$$ The differential of $$f$$ at $$a$$ is a linear map, call it $$T\ .$$ In our example, $$T$$ acts respectively on $$\mathbb{R}^n\ ,$$ $$\mathbb{C}^n$$ and the tangent space of $$S$$ at $$a\ .$$ Does there exist a neightborhood $$V$$ of $$a$$ and a homeomorphism $$\phi$$ from $$V$$ to some neighborhood of the origin such that the local conjugacy (see Topological conjugacy) $$T=\phi\circ f\circ \phi^{-1}$$ holds in a (possibly smaller) neighborhood of $$0\ ?$$ Topologically linearizable $$\iff$$ holomorphically linearizable It shall be noted that for a given fixed point of a given map, the answer to this question may or may not depend on the regularity allowed for the conjugacy. However, in the particular setting of a holomorphic map of a complex dimension 1 manifold (i.e. a Riemann surface) linearizability by a continuous conjugacy turns out to be equivalent to linearizability by a holomorphic conjugacy. Any regularity in between is thus also equivalent. The multiplier If $$f$$ is a holomorphic map and $$a$$ is a fixed point, i.e. $$f(a)=a\ ,$$ then the multiplier is the complex number $$\lambda=f'(a)\ .$$ The multiplier is invariant under conjugacy. Depending on $$\lambda\ ,$$ the fixed point $$a$$ is termed accordingly: • for $$|\lambda|>1\ ,$$ $$a$$ is repelling • for $$|\lambda|=1\ ,$$ $$a$$ is indifferent • for $$0\leq|\lambda|<1\ ,$$ $$a$$ is attracting • for $$\lambda=0\ ,$$ $$a$$ is superattracting The multiplier, on a Riemann surface • Let S be a complex dimension 1 manifold (a Riemann surface) • $$f$$ be a holomorphic map from a part of S to a part of S • $$a$$ be a fixed point of $$f$$ • $$T_aS$$ be the tangent space of S at $$a$$ • $$D_af: T_aS \to T_aS$$ the differential $$f$$ at $$a$$ Since we are in dimension 1, $$D_af$$ is completely characterized by its unique eigenvalue λ, and equal to the multiplication by λ: $$D_af$$(v) = λv. Identifying $$T_aS$$ with the complex plane $$\mathbb{C}\ ,$$ $$D_af$$ is a similarity of ratio λ. The multiplier is the number λ. Linearizability, depending on the multiplier • If |λ| = 0 (superattracting fixed point), then $$f$$ is not linearizable, unless it is constant in a neighborhood of $$a\ .$$ • If 0 < |λ| < 1 (attracting not superattracting), or 1 < |λ| (repelling), then $$a$$ is a linearizable fixed point. This is referred to as Koenig's theorem. • If |λ| = 1 (indifferent), then it depends. Write λ = exp(i2πθ) for some $$\theta\in\mathbb{R}\ .$$ • If $$\theta\in\mathbb{Q}$$ (parabolic fixed point), then $$f$$ is not linearizable most of times. More precisely, it will be linearizable if and only if $$f$$ has an iterate equal to the identity, which is impossible for instance in the case of a rational map of degree at least 2 (this includes polynomials) and for entire maps that are not of the form $$z\mapsto az+b$$. • If $$\theta\notin\mathbb{Q}$$ (irrationally indifferent), then we get into a much more difficult question. The latter case is where Siegel disks arise. ## Power series expansions and small divisors Assume $$f$$ fixes the origin (take a chart where the fixed point is at the origin) and consider the power series expansions $f(z)=\lambda z +\sum_{n=2}^{+\infty} a_n z^n\ .$ The linearization equation consists in finding $\phi(z)=z+\sum_{n=2}^{+\infty} b_n z^n$ such that $$\phi^{-1} \circ f \circ \phi (z) = \lambda z$$ holds near the origin (a problem equivalent to finding $$\psi$$ such that $$\psi \circ f \circ \psi^{-1} (z) = \lambda z$$). In other words, $$f\circ \phi(z) = \phi(\lambda z)\ .$$ By indentifying power series expansions, one finds a unique solution defined by the recurrence relation on the coefficients $$b_n$$ of $$\phi\ :$$ $b_1=1$ $b_{n+1}=\frac{P_n(a_2,\ldots,a_{n+1},b_2,\ldots,b_{n})}{\lambda^{n+1}-\lambda}$ where $$P_n$$ is an explicit, yet complicated, mutivariate polynomial. Thus for $$\lambda=\exp(i 2\pi\theta)$$ with irrational $$\theta\ ,$$ the conjugating power series $$\phi$$ is always defined as a formal power series. Linearizability of $$f$$ is equivalent to the convergence of this series, i.e. to its convergence radius to be positive. Even though the numerator in the recurrence relation giving $$b_{n+1}$$ is much more complicated than the denominator, it is the latter which is the potential source of divergence. The term $$\lambda^{n+1}-\lambda$$ is called a small divisor. Indeed, for some values of n, typically for n=q where $$p/q$$ is a continued fraction rational approximant of $$\theta\ ,$$ the quantity $$\lambda^{n+1}-\lambda$$ is small. Estimating the growth rate of $$b_n$$ is thus a subtle problem. It requires a good understanding of rational approximations of irrationals. ### A reminder on continued fractions An irrational has a unique continued fraction expansion $$\theta=a_0+\cfrac{1}{a_1+\cfrac{1}{a_2+\ddots}}$$ with $$a_0\in\mathbb{Z}$$ and $$a_n\in\mathbb{N}\ .$$ The continued fraction approximants of $$\theta$$ are the numbers $$\frac{p_n}{q_n}=a_0+\cfrac{1}{\ddots+\cfrac{1}{a_n}}$$ (the notations may vary). The best rational approximations of an irrational are given by its continued fraction approximants: • if $$|\theta-p/q|<\frac{1}{q^2\sqrt{5}}$$ then p/q is an approximant of $$\theta$$ • if $$p/q$$ is an approximant then $$|\theta-p/q|<\frac{1}{q^2}\ .$$ The quantity $$q^2|\theta-p/q|$$ can be thought of a measure of the quality of the rational approximation of $$\theta$$ by $$p/q\ .$$ The second point can be made more precise: if $$p_n/q_n$$ is the n-th approximant then we have: $$\frac{1}{2q_n q_{n+1}}<|\theta-p_n/q_n|<\frac{1}{q_n q_{n+1}}\ .$$ There is the well-known recurrence relation on the denominators (also satisfied by the numerators $$p_n$$): $$q_{n+1}=a_n q_n + q_{n-1}\ .$$ Therefore if $$a_n$$ is big, then $$q_n^2|\theta-p_n/q_n|\approx \frac{1}{a_n}\ .$$ Good approximations correspond to big values of $$a_n\ .$$ Concerning the small divisors, with $$\lambda=\exp(2i\pi\theta)\ ,$$ the quantity $$|\lambda^{q+1}-\lambda|$$ is comparable to $$q|\theta-p/q|\ ,$$ where p is the integer so that p/q is closest to $$\theta\ .$$ More precisely, there is the following theorem: the smallest value of $$|\lambda^k-\lambda|$$ for k ranging from 2 to $$1+q_n$$ is obtained precisely at $$k=1+q_n\ ,$$ and we have for $$k=1+q_n\ :$$ $$|\lambda^k-\lambda|\approx\frac{2\pi}{q_{n+1}}\ .$$ ## History Linearizability is closely related to stability. Poincaré, in studying the stability of the solar system, had to face similar questions. He thought he could prove stability in the simplified problem he was looking at (1889). He latter realized he was wrong, and by correcting this famous mistake opened the field of chaotic behaviour in dynamical systems. Concerning the center problem (linearization of an irrationnaly indifferent fixed point of a discrete dynamical system in complex dimension 1): At the International Congress in 1912, E. Kasner conjectured that such a linearization is always possible. Five years later, G. A. Pfeiffer disproved this conjecture by giving a rather complicated description of certain holomorphic functions for which no local linearization is possible. In 1919 Julia claimed to settle the question completely for rational functions of degree two or more by showing that such a linearization is never possible; however, his proof was wrong. H. Cremer put the situation in much clearer perspective in 1927 with a result [...] —John Milnor, Dynamics in one complex variable (second edition, 2000) Cremer's argument is indeed simple. It uses irrationnal rotation numbers $$\theta$$ which are well approximated by rationnals. A non-linearizable irrationaly indifferent fixed point is nowadays called a Cremer point. Siegel was the first to be able to prove, in the 1940s, that linearizability does occur. In fact he showed that if the rotation number is Diophantine, then the fixed point is always linearizable. Then there remained to determine the exact set of values of $$\theta$$ for which f is always linearizable. Brjuno and Rüssman found the exact arithmetic condition, but could only prove it is sufficient. This condition is now called the Brjuno condition. Yoccoz proved the necessity of the condition, i.e. that for an irrationnal $$\theta$$ not satisfying the Brjuno condition, there exists at least one non linearizable example with this rotation number $$\theta\ .$$ He even proved that the degree 2 polynomial $$f(z)=\exp(2i\pi\theta) z + z^2$$ is such an example. ## Results Here, $$\theta$$ refers to an irrational real number$$:\theta\in\mathbb{R}\setminus\mathbb{Q}$$ Definition: Let $$p_n/q_n$$ be the sequence of continued fraction convergents of $$\theta\ .$$ The number $$\theta$$ is said to satisfy Brjuno's condition (also called the Brjuno-Rüssmann condition) whenever $$\sum_{n=0}^{\infty} \frac{\log q_{n+1}}{q_n} < +\infty\ .$$ There are several other equivalent definitions of Brjuno's condition: • $$\sum_{k=0}^{\infty} \frac{1}{2^k}\log\left(\sup_{2^k\leq n< 2^{k+1}} \frac{1}{|\lambda^n-1|}\right) < +\infty$$ where $$\lambda=e^{2i\pi\theta}$$ • $$\sum_{n=0}^{\infty} \beta_{n-1} \log \frac{1}{\alpha_n}< +\infty$$ where $$\alpha_0$$ is the fractional part of $$\theta\ ,$$ $$\alpha_{n+1}$$ is the fractional part of $$\alpha_n\ ,$$ $$\beta_{-1}=1$$ and $$\beta_n=\alpha_0 \cdots \alpha_n$$ For instance the Diophantine numbers satisfy Brjuno's condition. (An irrational number $$\theta$$ is Diophantine if there exists $$C>0$$ and an exponent $$\delta\geq 2$$ such that rational $$\forall p/q\ ,$$ $$\left|\theta-\frac{p}{q}\right| \geq \frac{C}{q^\delta}\ ,$$ i.e. such an irrational cannot be too well approximated by rationals.) Theorem: let $$\theta$$ be irrational • If $$\theta$$ satisfies Bjruno's condition, then all fixed points with multiplier $$e^{2i\pi\theta}$$ are linearizable. • If $$\theta$$ does not, then there exists maps with a non linearizable fixed point with multiplier $$e^{2i\pi\theta}\ .$$ The following statement specifies the second case: Theorem: If $$\theta$$ does not satisfy Bjruno's condition, then the fixed point $$z=0$$ of the degree 2 polynomial $$e^{2i\pi\theta}z+z^2$$ is not linearizable. ## References • Milnor, J. [2000]: Dynamics in one complex variable, second edition
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.972618579864502, "perplexity": 287.0955607226005}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427132827069.83/warc/CC-MAIN-20150323174707-00187-ip-10-168-14-71.ec2.internal.warc.gz"}
https://www.kickzstore.com/en-ca/products/nike-mens-air-force-1-low-acg-light-orewood-brown
### Shopping Cart 0 Your shopping bag is empty Go to the shop Close # Nike Men's Air Force 1 Low ACG Light Orewood Brown \$157.00 \$131.00 ITEM: Nike Men's Air Force 1 Low ACG Light Orewood Brown COLOR: LIGHT OREWOOD BROWN/PINK-ORANGE STYLE NUMBER:  CD0887-100 CONDITION: Brand New with Box (Deadstock) RELEASE DATE: 01/11/2020 ALWAYS 100% Authentic Fast Shipping! 30-Day Returns USA: from \$ 6.95 CANADA: from \$ 40.00 INTERNATIONAL: from \$ 60.00 # Nike Men's Air Force 1 Low ACG Light Orewood Brown \$157.00 \$131.00 US Women US Men European UK Cm 5 3 1/2 35 1/2 2 1/2 22.5 5 1/2 4 36 3 1/2 23 6 4 1/2 36 1/2 4 23.5 6 1/2 5 36 1/2 4 1/2 23.5 7 5 1/2 38 5 24 7 1/2 6 38 1/2 5 1/2 24 8 6 1/2 39 6 24.5 8 1/2 7 40 6 25 9 7 1/2 40 1/2 6 1/2 25.5 9 1/2 8 41 7 26 10 8 1/2 42 7 1/2 26.5 11 9 42 1/2 8 27 12 9 1/2 43 8 1/2 27.5 13 10 44 9 28 14 10 1/2 44 1/2 9 1/2 28.5 15 11 45 10 29 11 1/2 45 1/2 10 1/2 29.5 12 46 11 30 12 1/2 47 11 1/2 30.5 13 47 1/2 12 31 13 1/2 48 12 1/2 31.5 14 48 1/2 13 32 14 1/2 49 13 1/2 32.5 15 49 1/2 14 33
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8334663510322571, "perplexity": 2825.952470834331}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662601401.72/warc/CC-MAIN-20220526035036-20220526065036-00753.warc.gz"}
http://physics.oregonstate.edu/~rubin/nacphy/UNIX/latex.html
22.A: Printing a LaTeX File 21: Running C and Fortran Together Contents ## § 22:  LaTeX for Scientific Documents The LaTeX package (which is really a macro package composed of primitive TeX commands) is commonly used by scientists and engineers for its high quality typesetting of even complicated equations. Because it is of high quality, yet with highly transportable and compact source code, it is also used by some journals and book publishers for their publications. In 1993 a reorganized version of LaTeX called LaTeX 2E was introduced in order to bring various LaTeX extensions under a common umbrella and in order to add new features. To tell the difference between the two implementations, the first command in a LaTeX  document was changed from \documentstyle (in old LaTeX ) to \documentclass in LaTeX 2E. The new LaTeX 2E will accept the old \documentstyle directive, but will be slower. After creating a LaTeX document, you can convert it to postscript or  pdf for high quality printing and posting, convert it to HTML for a hyper-linked Web document or, of course, send the source code to your favorite journal for them to process it for publication. We will discuss: 1. Viewing and Printing LaTeX Documents 2. Creating LaTeX Documents 3. Converting LaTeX to HTML for Web Documents 22.A: Printing a LaTeX File 21: Running C and Fortran Together Contents
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.990994930267334, "perplexity": 3729.6067445602316}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701154221.36/warc/CC-MAIN-20160205193914-00224-ip-10-236-182-209.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/157256/exhibiting-a-ring-isomorphism-between-a-ring-and-itself
# Exhibiting a ring isomorphism between a ring and itself. I recently proved to myself that if $R$ is a ring, and $R'$ a set in bijection with $R$, say by $f\colon R'\to R$, then one can turn $R'$ into a ring by defining $0'=f^{-1}(0)$, $1'=f^{-1}(1)$, $$r'+s'=f^{-1}(f(r')+f(s')),\qquad r's'=f^{-1}(f(r')f(s')),$$ and then $f$ is a ring isomorphism. Now suppose you put a new ring structure on $R$, say $(R,+,\cdot_u,0,u^{-1})$, where $a\cdot_u b=aub$. I want to use the above result as a shortcut to show $(R,+,\cdot, 0,1)$ is isomorphic to $(R,+,\cdot_u, 0,u^{-1})$ by exhibiting a bijection on $R$ which satisfies the four properties I listed above. I've had trouble thinking of what the map would look like. Does anyone see what the map would be? - The first paragraph is an example of what is known as transport of structure. –  Arturo Magidin Jun 12 '12 at 6:13 Note, however, that the first paragraph is really irrelevant to your second paragraph: two rings $R$ and $S$ are isomorphic if and only if there exists a bijection $f$ such that $f(r+s) = f(r)+f(s)$ and $f(rs) = f(r)f(s)$. Applying the inverse function $f^{-1}$ to both sides of both equations we get your two displayed properties; the first displayed equation already implies that $f(0)=0$; and the fact that $f$ is onto and multiplicative implies that $f(1)$ is necessarily a unity, hence equal to $1$. –  Arturo Magidin Jun 12 '12 at 6:20 Thanks for these comments. I think the word shortcut was bad word choice on my part. –  Linda Cortes Jun 12 '12 at 6:31 (So using this "method" you end up doing more work than simply checking to see if you have a bijective ring homomorphism, which does not require checking $f(0')=0$ and $f(1') = 1$.) –  Arturo Magidin Jun 12 '12 at 6:31 If $u$ is invertible, then $f: R \rightarrow R$, $r \mapsto ru$ is a bijection and satisfies the properties you want. $f : R \rightarrow R$, $r \mapsto ur$ will also work.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9708836078643799, "perplexity": 121.25201507871759}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657124771.92/warc/CC-MAIN-20140914011204-00222-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
http://mathhelpforum.com/calculus/171956-annoying-integral-involving-bessel-functions.html
# Thread: Annoying Integral involving Bessel Functions 1. ## Annoying Integral involving Bessel Functions Hi I'm having trouble with this integral $ \int_{0}^{\infty} \frac{J_0(kR)}{(1+(kR_d)^2)^{3/2}} dk $ I'm supposed to evaulate it using $ \int_{0}^{\infty} J_{\nu}(xy) \frac{dx}{(x^2+a^2)^{1/2}} = I_{\nu/2} (ay/2) K_{\nu/2} (ay/2) $ Where standard notation has been used for the bessel functions, any hints on how to transform it to the correct forn would be much appreciated, I can't really see how to get this to work 2. Originally Posted by thelostchild Hi I'm having trouble with this integral $ \int_{0}^{\infty} \frac{J_0(kR)}{(1+(kR_d)^2)^{3/2}} dk $ I'm supposed to evaulate it using $ \int_{0}^{\infty} J_{\nu}(xy) \frac{dx}{(x^2+a^2)^{1/2}} = I_{\nu/2} (ay/2) K_{\nu/2} (ay/2) $ Where standard notation has been used for the bessel functions, any hints on how to transform it to the correct forn would be much appreciated, I can't really see how to get this to work Hint: $\displaystyle \int_0^{\infty} \frac{J_0(kR)}{(1 + (kR)^2 )^{3/2}}dk = ~...~= \frac{1}{y}\int_0^{\infty}J_0(m) \cdot m(1 + m^2)^{-3/2}dm$ (after letting y = R and m = xy.) Now integrate by parts: $\displaystyle \int p~dq = pq - \int q ~dp$ using $\displaystyle p = J_0(m)$ and $\displaystyle dq = m(1 + m^2)^{-3/2}$ -Dan
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9865205883979797, "perplexity": 534.199223142597}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886124662.41/warc/CC-MAIN-20170823225412-20170824005412-00437.warc.gz"}
http://math.stackexchange.com/questions/221857/existence-of-non-atomic-probability-measure-for-given-measure-zero-sets
# Existence of non-atomic probability measure for given measure zero sets Let $\Omega$ be a set and $\Sigma$ be a $\sigma$-algebra of subsets of $\Omega$. Let $N$ be a collection of measurable subsets of $\Sigma$. Question: What conditions on $\Sigma$ and $N$ guarantee that there exists a non-atomic probability measure $\mu:\Sigma\to [0,1]$ such that for any $E\in \Sigma$ if $\mu(E)=0$, then $E\in N$ ? Edited to make question coherent. - The condition on $N$ seems strange, because you can always choose $E'=\Omega$, at least the way that it is written now. – Lukas Geyer Oct 27 '12 at 2:18 Thanks Lukas. Brain wasn't fully engaged. – Rabee Tourky Oct 27 '12 at 2:43 @Lukas: Either of you could write that up as an answer so the question doesn't remain unanswered. – joriki Oct 27 '12 at 6:19 @MichaelGreinecker: You might be interested in Kelley's criterion (original article here) covered in several books (e.g. Fremlin vol 3, ch. 39) as well as Maharam's "control measure problem" which generated some excitement in the past decade due to its negative resolution by Talagrand in '06. I don't have the time to go digging any further, but this should give some pointers. As an aside: there was also this MO thread by the OP but I didn't read it closely. – commenter Nov 26 '12 at 18:47 @Michael: I don't understand. Take a $\sigma$-ideal $J$ included in $N$ and consider $\mathfrak{A} = \Sigma/J$. Every property of $\mathfrak{A}$ is a property of how $J$ sits inside $\Sigma$. – commenter Nov 26 '12 at 20:38 Thanks Michael Greinecker and commenter. The main practical problem for me in applying commenter's idea was that the weak $\sigma$-distributive property in Maharam's 1947 paper, in Kelley's paper, and in Todorcevic's amazing paper of 2004 on measure algebras may not hold if we choose an arbitrary $\sigma$-ideal $J$ in $N$ (and it certainly has no clear meaning for what I am doing). In the end, the best fit for my work was Ryll-Nardzewski's result published in the addendum section of Kelley and not Kelley's result with the distributive property. 1) There exists a sequence $B_n$ of families of subsets of $\Sigma$ such that $(\Sigma\setminus N)\subseteq \bigcup_{n} B_n$. 2) Each $B_n$ has a positive intersection number (as in Kelley). 3) Each $B_n$ is open for increasing sequences; (if $E_m\uparrow E\in B_n$, then eventually $E_m\in B_n$). The final condition (3) of Nardzewski guarantees that $\Sigma\setminus \bigcup_{n} B_n$ is a $\sigma$-ideal. Condition (2) guarantees that there is a finitely additive (positive) probability measure $\nu_n$ on $\Sigma$ that is bounded away from zero on $B_n$. Condition (3) tells us that from $\nu_n$ we can define a countably additive probability measure $\mu_n$ that also measures elements of $B_n$ positively. Letting $\mu= \sum_{n=1}^\infty 2^{-n} \mu_n$, we have the required measure. For the converse suppose that $\mu$ is the required measure, letting $B_n=\{ \mu>1/n\}$ we see that (1), (2), and (3); hold. - Nice. Glad to see that my suggestion helped even if it was in an unexpected way... – commenter Jan 21 '13 at 21:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9172746539115906, "perplexity": 338.7005243891341}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701152987.97/warc/CC-MAIN-20160205193912-00345-ip-10-236-182-209.ec2.internal.warc.gz"}
https://www.mathxplain.com/probability-theory/discrete-and-continuous-distributions/problem-16
Contents of this Probability theory episode: Random variable, Discrete and continuous random variable, Binomial distribution, Poisson distribution, Hypergeometryc distribution, Exponential distribution, Normal distribution, Uniform distribution, Probability, Average, Density function, Distribution function, Expected value, Standard deviation. # Problem 16 16 Oops, it seems you aren't logged in. It's a shame, because you'd find interesting things here, such as: Random variable, Discrete and continuous random variable, Binomial distribution, Poisson distribution, Hypergeometryc distribution, Exponential distribution, Normal distribution, Uniform distribution, Probability, Average, Density function, Distribution function, Expected value, Standard deviation. Let's see this Probability theory episode
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9923595190048218, "perplexity": 4273.9760943629635}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145657.46/warc/CC-MAIN-20200222085018-20200222115018-00142.warc.gz"}
https://math.stackexchange.com/questions/378376/if-g-has-only-2-proper-non-trivial-subgroups-then-g-is-cyclic/378396
If $G$ has only 2 proper, non-trivial subgroups then $G$ is cyclic Is the following true? If $G$ has two proper, non-trivial subgroups then $G$ is cyclic. • Did you mean two proper non-trivial subgroups? – Tobias Kildetoft May 1 '13 at 16:31 • yeah, I mean two proper non-trivial subgroup here. – ROBINSON May 1 '13 at 16:36 • @AKASNIL: In that case you should go back and edit the question right away, since that's not what you asked and lots of people are answering the question you actually asked. – Pete L. Clark May 1 '13 at 16:46 • This is more of a general comment on all the responses thus far. In the initial set of comments on the question, the OP stated that he was referring to a group with exactly two proper, non-trivial subgroups. I assumed he meant exactly two subgroups other than $\{1\}$ and $G$. Am I missing something here? This possibility is all the more likely now that I have recently celebrated yet another birthday. – Chris Leary May 1 '13 at 16:53 • @Pete - I've always believed that if you try to reason you will make logical errors from time to time. Some of the ones I have made have astounded me once I realized them. That's pretty much the price we pay for being human. – Chris Leary May 3 '13 at 15:45 First note that if $G$ does not have finite order, then it does not have a finite number of subgroups, so we can assume that $G$ is finite (see the comment by Pete Clark). Note that if $3$ distinct primes divides the order of the group, then the group has at least $3$ proper non-trivial subgroups. So $|G| = p^nq^m$ with $p$ and $q$ primes. Now, if either $n$ or $m$ is greater than or equal to $4$, then the corresponding Sylow subgroup has too many subgroups. Also, if either if at least $2$ and the other is not $0$, we again get too many subgroups. We are left with either $|G| = p^3$ or $|G| = pq$. In both cases the cyclic group of that order will satisfy the conditions, and we wish to show that these are the only ones (since the cyclic group of order $p^2$ has too few subgroups, and the non-cyclic one has too many). If $|G| = pq$ and $G$ is not cyclic, then $G$ is not abelian, and thus has more than one Sylow subgroup for either $p$ or $q$, giving us too many subgroups. If $|G| = p^3$ then $G$ has at least one subgroup of order $p$ and one of order $p^2$. But if $G$ is not cyclic, it has more than one maximal subgroup, which gives us at least two of order $p^2$, again resulting in too many subgroups. • Assuming $G$ is finite? – Metin Y. May 1 '13 at 16:45 • @MetinY. Right, no infinite group can have this property. – Tobias Kildetoft May 1 '13 at 16:47 • This is a nice answer. One minor point: you seem to be assuming that the group is finite, which the OP did not say (although s/he didn't say other things as well...) and in any case is not necessary. But an infinite group either contains an element of infinite order or arbitrarily large finite subgroups, so this is no problem. I do think you should put this in the answer, though. – Pete L. Clark May 1 '13 at 16:53 • Let me try again: I claim that every infinite group $G$ contains infinitely many subgroups. Indeed, this is clear if it contains an element of infinite order. If not, it contains infinitely many elements of finite order, hence infinitely many finite cyclic subgroups (but they may well all have the same order, despite the fact that my intuition still suggests that this is impossible). – Pete L. Clark May 1 '13 at 18:27 • @StevenStadnicki That group does have arbitrarily large finite subgroups. As mentioned previously, an example where all the proper non-trivial subgroups have the same (finite) order is given by the Tarski monster. – Tobias Kildetoft May 2 '13 at 1:21 Let $H_1$ and $H_2$ be the two non-trivial proper subgroups of the given group $G$. I claim that $G$ is not the union $H_1\cup H_2$. If one of the subgroups is contained in the other, then this is trivially true. Otherwise there exist elements belonging to one subgroup but not the other. Let $h_1\in H_1\setminus H_2$ and $h_2\in H_2\setminus H_1$. What about $g=h_1h_2$? If it belongs to $H_1$, then so does $h_2$. If it belongs to $H_2$, then so does $h_1$. In either case we contradict our assumptions, so we have to conclude that $g\notin H_1\cup H_2$. So we know that there exists an element $g\in G$, $g\notin H_1\cup H_2$. What is the subgroup generated by $g$? Can't be either $H_1$ or $H_2$, so it has to be all of $G$. Ergo, $G$ is cyclic. • Very nice and elementary answer. – Tobias Kildetoft May 1 '13 at 17:00 • Complete, yes. Not so sure about clean. If $G$ were finite, it would be easier to prove that $G\neq H_1\cup H_2$ (Lagrange is all you need). I didn't see a nice way of covering the infinite groups in the same argument, so I resorted to the uglier way of picking those $h_1,h_2$ :-( – Jyrki Lahtonen May 1 '13 at 19:50 • But the fact that the union of two subgroups is not a subgroup unless one is contained in the other is standard knowledge (or should be). – Tobias Kildetoft May 2 '13 at 0:45 • Unlike Jyrki, I think his argument using $h_1$ and $h_2$ is prettier than the one using Lagrange's theorem. Not only does it work for infinite groups, but it uses only information that is even more basic than Lagrange's theorem. – Andreas Blass May 2 '13 at 0:50 • +1: With respect to some esthetic that I would have trouble enunciating, this is clearly the best possible answer. – Pete L. Clark May 2 '13 at 1:47 Since $G$ has proper non-trivial subgroups $\exists~a~(\neq e)\in G.$ • Case $1$: $G=(a):$ Nothing left to prove. • Case $2$: $(a)$ is a non-trivial proper subgroup of $G:$ Choose $b\in G-(a).$ • Case $2.1:$ $G=(b):$ Nothing left to prove. • Case $2.2:$ $(b)$ is also a non-trivial proper subgroup of $G:$ • Case $2.2.1:$ $(a)\cup(b)=G,$ a subgroup of $G.$ Consequently either $(a)\subset(b)$ or $(b)\subset(a).$ • Case $2.2.2:$ $\exists~c\in G-(a)\cup(b).$ Since $G$ has only two proper subgroups $G=(c).$ • @Adhya : I like your answer, but you lost me in the middle of Case 2. Do you mean "Since $G$ has only two proper subgroups..."? And why do you state that $G$ is a subgroup of $G$? Is this a typo? – Stefan Smith May 1 '13 at 21:42 • @Adhya : If $(a) \cup (b)$ is a subgroup of $G$, does it follow that $(a) \subset (b)$ or $(b) \subset (a)$? I don't know much group theory. I upvoted your answer because it is so elegant. – Stefan Smith May 1 '13 at 21:52 • @StefanSmith: See groupprops.subwiki.org/wiki/…. – Sugata Adhya May 2 '13 at 0:25 • @Adhya : Thanks for the link. I read the proof, which should be quite simple, and I took a survey posted there and complained about the "tabular method" of proof they used there, which I didn't care for. (I apologize if you happen to be the creator of that page/proof, and I respect the effort you put into it) – Stefan Smith May 2 '13 at 0:50 Llet $|G| = n$. Suppose $a,b$ be two nonidentity elements of $G$. Now consider $\langle a \rangle$ and $\langle b \rangle$. If $G$ is commutative then it is easy to see that $\langle ab \rangle$ is a cyclic group other than $\langle a \rangle$ and $\langle b \rangle$, which leads to a contradiction, so one of $a$ and $b$ must be of order $n$, so $G$ is cyclic. If $G$ is non-commutative then it can't be cyclic, so we are done. • thank u Mr. Alex. I don't know how to use mathjax. – Anjan Samanta Oct 16 '17 at 8:35 • I fail to follow your logic. If $G$ is commutative it can easily happen that $\langle a\rangle$, $\langle b\rangle$ and $\langle ab\rangle$ are all the same subgroup. And in the non-commutative case the contrapositive claim would be to prove that the group has at least three proper subgroup, so you are not done in that case either. – Jyrki Lahtonen Oct 16 '17 at 14:35 • sorry my bad.if b is the inverse of 'a' then they can't be distinct.And the second case isn't obvious.I have done several mistakes. – Anjan Samanta Oct 16 '17 at 18:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8945598602294922, "perplexity": 227.57617355086163}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027320734.85/warc/CC-MAIN-20190824105853-20190824131853-00486.warc.gz"}
https://infoscience.epfl.ch/record/91178?ln=en
## Average case analysis of multichannel thresholding This paper introduces p-thresholding, an algorithm to compute simultaneous sparse approximations of multichannel signals over redundant dictionaries. We work out both worst case and average case recovery analyses of this algorithm and show that the latter results in much weaker conditions on the dictionary. Numerical simulations confirm our theoretical findings and show that p-thresholding is an interesting low complexity alternative to simultaneous greedy or convex relaxation algorithms for processing sparse multichannel signals with balanced coefficients. Published in: Proc. ICASSP'07 Presented at: ICASSP07, Honolulu Year: 2007 Keywords: Laboratories:
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9913471937179565, "perplexity": 2182.082351779301}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256812.65/warc/CC-MAIN-20190522123236-20190522145236-00358.warc.gz"}
http://math.stackexchange.com/questions/112596/how-to-find-the-coproduct-in-the-category-of-pointed-sets
# How to find the coproduct in the category of pointed sets? Exercise $6 (b)$, page 58 from Hungenford's book Algebra. Show that in $\mathcal{S}_{\star}$ (the category of pointed sets) every family of objects has a coproduct (often called a "wedge product"); describe this coproduct. I need a suggestion in order to find the coproduct. I would appreciate your help. - With two normal sets, the coproduct is the disjoint union. With pointed sets, you merely add the condition that the basepoints of both sets always go to the basepoint of the new set, which only requires a small modification to the disjoint union. –  Carl Feb 23 '12 at 21:13 @Carl: I would like to thank you. Can you please write it as an answer so that I can accept it? Thank you again! –  spohreis Feb 23 '12 at 21:33 No problem! Reposting as an answer. –  Carl Feb 23 '12 at 22:03 @magma: It's just the disjoint union $X = \{a,b\} \coprod \{c,d\}$ with the base points $a,c$ identified, ie. the quotient of $X$ by the equivalence relation generated by $a \sim c$. –  Najib Idrissi Feb 25 '12 at 16:32 @magma: the coproduct of pointed sets $(X_i,b_i)$ is the universal pointed set $(X,b)$ with inclusion maps $f_i:(X_i,b_i)\to (X,b)$. By definition of morphisms between pointed sets, $f_i(b_i)=b$: all basepoints are mapped to the new basepoint. –  wildildildlife Feb 25 '12 at 17:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9377155900001526, "perplexity": 526.0649635490723}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500808153.1/warc/CC-MAIN-20140820021328-00448-ip-10-180-136-8.ec2.internal.warc.gz"}
http://sms.niser.ac.in/news/seminar-1
# News & Events ## Seminar Date/Time: Tuesday, September 9, 2014 - 16:45 to 17:45 Venue: LH-101 Speaker: Dr. Ghurumuruhan Ganesan Affiliation: EPFL, Lausanne Title: Infection Spread and Stability in Random Graphs Abstract: In the first part of the talk, we study infection spread in random geometric graphs where $n$ nodes are distributed uniformly in the unit square $W$ centred at the origin and two nodes are joined by an edge if the Euclidean distance between them is less than $r_n$. Assuming edge passage times are exponentially distributed with unit mean, we obtain upper and lower bounds for speed of infection spread in the sub-connectivity regime, $nr^2_n \to \infty$. In the second part of the talk, we discuss convergence rate of sums of locally determinable functionals of Poisson processes. Denoting the Poisson process as $\mathcal{N}$ , the functional as $f$ and Lebesgue measure as $l(.)$, we establish corresponding bounds for $$\frac{1}{l(nW)}\sum_{x\in nW \cap \mathcal{N}} f(x)$$ in terms of the decay rate of the radius of determinability. ## Contact us School of Mathematical Sciences NISERPO- Bhimpur-PadanpurVia- Jatni, District- Khurda, Odisha, India, PIN- 752050 Tel: +91-674-249-4081 Corporate Site - This is a contributing Drupal Theme Design by WeebPal.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9682180881500244, "perplexity": 524.9101451275059}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886117519.92/warc/CC-MAIN-20170823035753-20170823055753-00486.warc.gz"}
http://science.sciencemag.org/content/147/3661/991
Articles # Magnetic Fields in Interplanetary Space + See all authors and affiliations Science  26 Feb 1965: Vol. 147, Issue 3661, pp. 991-1000 DOI: 10.1126/science.147.3661.991 ## Abstract The brief period between the conception of the interplanetary magnetic field and conclusive proof of its existence has been an exciting one. Imaginative theoretical developments and careful experimental verification have both been essential to rapid progress. From the various lines of evidence described here it is clear that an interplanetary magnetic field is always present, drawn out from the sun by the radially streaming solar wind. The field is stretched into a spiral pattern by the sun's rotation. The field appears to consist of relatively narrow filaments, the fields of adjacent filaments having opposite directions. At the earth's orbit the field points slightly below the ecliptic plane. The magnitude of the field is steady and near 5 gammas in quiet times, but it may rise to higher values at times of higher solar activity. A collision-free shock front is formed in the plasma flow around the earth. In the transition region between the shock front and the magnetopause the magnitude of the field is somewhat higher than it is in the interplanetary region, and large fluctuations in magnitude and direction are common. A shock front has also been observed in space between a slowly moving body of plasma and a faster, overtaking plasma stream.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8788841962814331, "perplexity": 745.6755005946846}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886104704.64/warc/CC-MAIN-20170818160227-20170818180227-00368.warc.gz"}
https://meangreenmath.com/2013/10/01/area-of-a-triangle-base-and-height-part-1/
# Area of a triangle: Base and height (Part 1) This begins a series of post concerning how the area of a triangle can be computed. This post concerns the formula that students most often remember: $A = \displaystyle \frac{1}{2} b h$ Why is this formula true? Consider $\triangle ABC$, and form the altitude from $B$ to line $AB$. Suppose that the length of $AC$ is $b$ and that the altitude has length $h$. Then one of three things could happen: Case 1. The altitude intersects $AC$ at either $A$ or $C$. Then $\triangle ABC$ is a right triangle, which is half of a rectangle. Since the area of a rectangle is $bh$, the area of the triangle must be $\displaystyle \frac{1}{2} bh$. Knowing the area of a right triangle will be important for Cases 2 and 3, as we will act like a good MIT freshman and use this previous work. Case 2. The altitude intersects $AC$ at a point $D$ between $A$ and $C$. Then $\triangle ABD$ and $\triangle BCD$ are right triangles, and so $\hbox{Area of~} \triangle ABC = \hbox{Area of ~} \triangle ABD + \hbox{~Area of~} \triangle BCD$ $\hbox{Area of~} \triangle ABC = \displaystyle \frac{1}{2} b_1 h + \frac{1}{2} b_2 h$ $\hbox{Area of~} \triangle ABC = \displaystyle \frac{1}{2} (b_1 + b_2) h$ $\hbox{Area of~} \triangle ABC = \displaystyle \frac{1}{2} bh$ Case 3. The altitude intersects $AC$ at a point $D$ that is not in between $A$ and $C$. Without loss of generality, suppose that $A$ is between $D$ and $C$. Then $\triangle ABD$ and $\triangle BCD$ are right triangles, and so $\hbox{Area of~} \triangle ABC = \hbox{Area of ~} \triangle BCD - \hbox{~Area of~} \triangle ACD$ $\hbox{Area of~} \triangle ABC = \displaystyle \frac{1}{2} (b+t) h + \frac{1}{2} t h$ $\hbox{Area of~} \triangle ABC = \displaystyle \frac{1}{2} (b+t-t) h$ $\hbox{Area of~} \triangle ABC = \displaystyle \frac{1}{2} bh$ ## One thought on “Area of a triangle: Base and height (Part 1)” This site uses Akismet to reduce spam. Learn how your comment data is processed.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 36, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9625933766365051, "perplexity": 137.43669891811362}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00776.warc.gz"}
https://mca2017.org/fr/prog/session/gph
# Géométrie et physique des fibrés de Higgs Tous les résumés Taille : 99 kb Organisateurs : • Florent Schaffhauser (Universidad de los Andes) • Laura P. Schaposnik (University of Illinois-Chicago) • Richard Wentworth (University of Maryland) • Marcos Jardim UNICAMP Branes on moduli spaces of sheaves Résumé en PDF Taille : 38 kb Branes are special submanifolds of hyperkähler manifolds that play an important role in string theory, particularly in the Kapustin–Witten approach to the geometric Langlands program, but which also are of intrinsic geometric interest. More precisely, a brane is a submanifold of a hyperkähler manifold which is either complex or Lagrangian with respect to each of the three complex structures or Kähler forms composing the hyperkahler structure. Branes on moduli spaces of Higgs bundles have been largely studied by many authors; in this talk, I will summarize recent work done in collaboration with Franco, Marchesi, and Menet on the construction of different types of branes on moduli spaces of Higgs bundles via Nahm transform, of framed sheaves on the projective plane, and on moduli spaces of sheaves on K3 and abelian surfaces. • Lara Anderson Virginia Tech Elliptically Fibered CY Geometries and Emergent Hitchin Systems Résumé en PDF Taille : 38 kb I provide a brief overview of the way that Higgs bundles arise in string compactifications (particularly F-theory). Further, I will describe recent progress in describing the moduli spaces of singular Calabi-Yau manifolds and the surprising relationships only recently discovered between Calabi-Yau and Hitchin integrable systems, providing a kind of transition function to relate open and closed string degrees of freedom in F-theory. • Alessia Mandini PUC Rio de Janeiro Hyperpolygon spaces and parabolic Higgs bundles Résumé en PDF Taille : 38 kb Hyperpolygons spaces are a family of (finite dimensional, non-compact) hyperk\"{a}hler spaces, that can be obtained from coadjoint orbits by hyperkaehler reduction. In joint work with L. Godinho, we show that these space are diffeomorphic (in fact, symplectomorphic) to certain families of parabolic Higgs bundles. In this talk I will describe this relation and use it to analyse the fixed points locus of a natural involution on the moduli space of parabolic Higgs bundles. The fixed point locus of this involution is identified with the moduli spaces of polygons in Minkowski 3-space and the identification yields information on the connected components of the fixed point locus. This is based on joint works with Leonor Godinho and with Indranil Biswas, Carlos Florentino and Leonor Godinho UIUC Fiber products and spectral data for Higgs bundles Résumé en PDF Taille : 37 kb I will discuss some interesting relations among Higgs bundles, especially from the point of view of spectral data, that result from isogenies between low dimensional complex Lie groups and their real forms. • Andy Neitzke UT Austin Abelianization in classical complex Chern-Simons theory Résumé en PDF Taille : 54 kb I will describe an approach to classical complex Chern-Simons theory via "abelianization," relating flat $GL(N)$-connections over a manifold of dimension $d \le 3$ to flat $GL(1)$-connections over a branched $N$-fold cover. This is joint work with Dan Freed. • Sara Maloni University of Virginia The geometry of quasi-Hitchin symplectic Anosov representations. Résumé en PDF Taille : 57 kb After revising the background theory of symplectic Anosov representations and their domains of discontinuity, we will focus on our joint work in progress with Daniele Alessandrini and Anna Wienhard. In particular, we will describe partial results about the homeomorphism type of the quotient of the domain of discontinuity for quasi-Hitchin representations in $\mathrm{Sp}(4, \mathbb{C})$ acting on the Lagrangian space $\mathrm{Lag}(\mathbb{C}^4)$. • Leticia Brambila-Paz CIMAT Coherent Higgs Systems Résumé en PDF Taille : 62 kb Let $X$ be a Riemman surface and $K$ the canonical bundle. An $L-$pair of type $(n,d,k)$ is a pair $(E, V)$ where $E$ is a vector bundle over $X$ of rank $n$ and degree $d,$ and $V$ a linear subspace of $H^0(EndE\otimes L)$ of dimension $k.$ A coherent Higgs system is a $K-$pair. In this talk the moduli space of $K-$pairs of type $(n,d,1)$ are related to the moduli spaces of Hitchin pairs of type $(L,P).$ • Claudio Meneses CIMAT On the Narasimhan-Atiyah-Bott metrics on moduli of parabolic bundles Résumé en PDF Taille : 38 kb I will discuss my current work regarding the canonical Kähler structure on moduli spaces of stable parabolic bundles. If time permits, I will also discuss a conjectural relation with the geometry of the nilpotent cone locus and the abelianization of logarithmic connections in genus 0. This talk is based on ongoing projects with Leon Takhtajan, Marco Spinaci and Sebastian Heller. • Steve Rayan Asymptotics of hyperpolygons Résumé en PDF Taille : 56 kb As discovered in the work of Godinho-Mandini and Biswas-Florentino-Godinho-Mandini, the moduli space of $n$-sided hyperpolygons in the Lie algebra $\mathfrak{su}(2)^*$ is naturally a subvariety of the moduli space of rank-$2$ parabolic Higgs bundles on the projective line punctured $n$ times, and the integrable system structure pulls back to one on hyperpolygon space. These results were extended to higher rank in recent work by J. Fisher and myself. In this talk, I will report on joint work with H. Weiss regarding the asymptotic geometry of hyperpolygon space and its ambient space of parabolic Higgs bundles. The former has a hyperkaehler metric arising from a finite-dimensional quotient and the latter has one arising from an infinite-dimensional quotient. We use properties of the hyperkaehler moment map for hyperpolygon space to construct a limiting sequence of hyperpolygons that terminates in a moduli space of degenerate hyperpolygons. In the spirit of the work of Mazzeo-Swoboda-Weiss-Witt on ordinary Higgs bundles, we use this partial compactification to show that hyperpolygon space is an ALE manifold, as expected for Nakajima quiver varieties. Finally, I will use this analysis to speculate on differences between the metric on hyperpolygon space and the one on the ambient parabolic Higgs moduli space. • Victoria Hoskins Freie Universität Berlin Group actions on quiver moduli spaces and branes Résumé en PDF Taille : 38 kb We consider two types of actions on moduli spaces of quiver representations over a field k and we decompose their fixed loci using group cohomology. First, for a perfect field k, we study the action of the absolute Galois group of k on the points of this quiver moduli space valued in an algebraic closure of k; the fixed locus is the set of k-rational points and we obtain a decomposition of this fixed locus indexed by the Brauer group of k and give a modular interpretation of this decomposition. Second, we study algebraic actions of finite groups of quiver automorphisms on these moduli spaces; the fixed locus is decomposed using group cohomology and each component has a modular interpretation. Finally, we describe the symplectic and holomorphic geometry of these fixed loci in hyperkaehler quiver varieties in the language of branes. This is joint work with Florent Schaffhauser. • Qionling Li CalTech Metric domination for Higgs bundles of quiver type Résumé en PDF Taille : 46 kb Given a $G$-Higgs bundle over a Riemann surface, there is a unique equivariant harmonic map into the associated symmetric space $G/K$ through solving Hitchin equation to Higgs bundles. We find a maximal principle for a type of coupled elliptic systems and apply it to analyze the Hitchin equations associated to the Higgs bundles of quiver type. In particular, we find several domination results of the pullback metrics of the associated branched harmonic maps into the symmetric space. This is joint work with Song Dai. • Sergei Gukov CalTech Equivariant invariants of the Hitchin moduli space Résumé en PDF Taille : 38 kb This talk will be a fairly broad review of exploring geometry and topology of the moduli space of Higgs bundles through the equivariant circle action (which acts by a phase on the Higgs field). This approach leads to new invariants of the moduli space of Higgs bundles, the so-called equivariant Verlinde formula, the real and wild versions of the Hitchin character, and the equivariant elliptic genus. The real reason, though, for studying these new invariants is not so much that they contain wealth of useful information about Higgs bundles (they actually do!) but that they have surprising new connections to other problems in math and mathematical physics. • Laura Fredrickson Stanford Constructing solutions of Hitchin's equations near the ends of the moduli space Résumé en PDF Taille : 50 kb Hitchin's equations are a system of gauge theoretic equations on a Riemann surface that are of interest in many areas including representation theory, Teichm\"uller theory, and the geometric Langlands correspondence. In this talk, I'll describe what solutions of $SL(n,\mathbb{C})$-Hitchin's equations near the ends'' of the moduli space look like, and the resulting compactification of the Hitchin moduli space. Wild Hitchin moduli spaces are an important ingredient in this construction. This construction generalizes Mazzeo-Swoboda-Weiss-Witt's construction of $SL(2,\mathbb{C})$-solutions of Hitchin's equations where the Higgs field is simple.'' • Michael Groechenig FU Berlin p-adic integration for the Hitchin system Résumé en PDF Taille : 38 kb I will report on joint work with D. Wyss and P. Ziegler. We prove a conjecture by Hausel-Thaddeus which predicts an agreement of appropriately defined Hodge numbers for moduli spaces of Higgs bundles for the structure groups SL(n) and PGL(n) over the complex numbers. Despite the complex-analytic nature of the statement our proof is entirely arithmetic.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8898596167564392, "perplexity": 713.0310558550514}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347435987.85/warc/CC-MAIN-20200603175139-20200603205139-00442.warc.gz"}
http://math.stackexchange.com/questions/159013/finding-value-of-x-for-an-equation
# finding value of x for an equation if we have an equation of form $y=x^{nx+1}$ and if we are given the values of $y$ and $n$ then how can one find $x$? I have reduced the equation to $\log(y)/\log(x)=nx+1$ but can't proceed further. Is there some kind of standard equation? Thanks - If you know about differentiation then you should look up "Newton's Method." $x=2/n$ can't possibly be correct, even as an approximate answer; for one thing, it doesn't involve $y$. –  Gerry Myerson Jun 16 '12 at 22:15 A quick search on Google reveals the page http://mathforum.org/library/drmath/view/70483.html, where Doctor Vogler points out that the function $f(X) = x^x$ is not injective, since $$(\frac{1}{2})^{(1/2)} = (\frac{1}{4})^{(1/4)}.$$ However, he points out that it is possible to restrict the domain of the function so that it is injective. Nevertheless, I'm inclined to think that due to this observation, no notation may have been invented specifically for the inverse of this function, unlike other functions such as $f(x) = e^x$ which have inverses like $f^{-1}(x)=ln(x)$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9524912238121033, "perplexity": 150.73081992322108}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931011456.52/warc/CC-MAIN-20141125155651-00211-ip-10-235-23-156.ec2.internal.warc.gz"}
https://amathew.wordpress.com/2010/01/16/the-fourier-transform-the-heat-equation-and-fundamental-solutions/
I have now discussed what the Laplacian looks like in a general Riemannian manifold and can thus talk about the basic equations of mathematical physics in a more abstract context. Specifically, the key ones are the Laplace equation $\displaystyle \Delta u = 0$ for ${u}$ a smooth function on a Riemannian manifold. Since ${\Delta = \mathrm{div} \mathrm{grad}}$, this often comes up when ${u}$ is the potential energy function of a field which is divergence free, e.g. in electromagnetism. The other major two are the heat equation $\displaystyle u_t - \Delta u = 0$ for a smooth function ${u}$ on the product manifold ${\mathbb{R} \times M}$ for ${M}$ a Riemannian manifold, and the wave equation $\displaystyle u_{tt} - \Delta u = 0$ in the same setting. (I don’t know the physics behind these at all, but it’s probably in any number of textbooks.) We are often interested in solving these given some kind of boundary data. In the case of the Laplace equation, this is called the Dirichlet problem. In 2-dimensions for data given on a circle, the Dirichlet problem is solved using the Poisson integral, as already discussed. To go further, however, we would need to introduce the general theory of elliptic operators and Sobolev spaces. This will heavily rely on the material discussed earlier on the Fourier transform and distributions, and before plunging into it—if I do decide to plunge into it on this blog—I want to briefly discuss why Fourier transforms are so important in linear PDE. Specifically, I’ll discuss the solution of the heat equation on a half space. So, let’s say that we want to treat the case of ${\mathbb{R}_{\geq 0} \times \mathbb{R}^n}$. In detail, we have a function ${u(x)=u(0,x)}$, continuous on ${\mathbb{R}^n}$. We want to extend ${u(0,x)}$ to a solution ${u(t,x)}$ to the heat equation which is continuous on ${0 \times \mathbb{R}^n}$ and smooth on ${\mathbb{R}_+^{n+1}}$. To start with, let’s say that ${u(0,x) \in \mathcal{S}(\mathbb{R}^n)}$. The big idea is that by the Fourier inversion formula, we can get an equivalent equation if we apply the Fourier transform to both sides; this converts the inconvenience of differentiation into much simpler multiplication. When we talk about the Fourier transform, this is as a function of ${x}$. So, assuming we have a solution ${u(t,x)}$ as above: $\displaystyle \hat{u}_t = \widehat{\Delta u} = -4\pi^2 |x|^2 \hat{u}.$ Also, we know what ${\hat{u}(0,x)}$ looks like. So this is actually a linear differential equation in ${\hat{u}( \cdot, x)}$ for each fixed ${x}$ with initial conditions ${\hat{u}(0,x)}$. The solution is unique, and it is given by $\displaystyle \hat{u}(t,x) = e^{-4 \pi^2 |x|^2 t} \hat{u}(0,x).$ Now recall that multiplication on the Fourier transform level corresponds to conveolution, and the Fourier transform of ${K(t,x) = (4 \pi t)^{-n/2} e^{- |x|^2/ (4 t)}}$ is ${e^{-4 \pi^2 |x|^2 t}}$. As a result, given a putative solution ${u(t,x)}$, we have determined ${u(t,x)}$ by $\displaystyle u(t,x) = (K(t, \cdot) \ast u) = (4 \pi t)^{-n/2} \int_{\mathbb{R}^n} e^{- |y-x|^2/ (4 t)} u(0, y) dy.$ So we have a candidate for a solution. Conversely, if the boundary data ${u \in L^1(\mathbb{R}^n)}$ alone, it is easy to check by differentiation under the integral (justified by the rapid decrease of the exponential) that we have something satisfying the heat equation in the upper half-space. Moreover ${||u(t, \cdot) - u(0, \cdot)||_{L^1} \rightarrow 0}$ as ${t \rightarrow 0}$ by general facts about approximation to the identity and a look at the definition of ${K(t, x)}$—note that ${K(\sqrt{t}, x)}$ is just the orthodox version of an approximation to the idnetity. So, we have found a way to solve the heat equation on ${\mathbb{R}^{n+1}}$. It thus seems that the way to solve equations such as the heat equation by convolution with appropriate kernels. In fact, this is more generally true of nonhomogeneous constant-coefficient linear PDE on ${\mathbb{R}^n}$ (we’re forgetting about boundary value problems). Suppose we are given a partial differential operator ${P}$ with constant coefficients, i.e. $\displaystyle Pf = \sum_{|a| \leq k} c_a D^a f ,$ where the ${a}$‘s are multi-indices. Then it is immediate that ${P}$ extends to an operator on distributions. Moreover, $\displaystyle \boxed{ P(\phi \ast f) = P\phi \ast f = \phi \ast Pf }$ whenever ${\phi}$ is a distribution and ${f \in \mathcal{S}}$. (This is clear whenever ${\phi \in \cal{S}}$; in general any distribution can be approximated in the weak* sense by distributions by convolving with an approximation to the identity.) As a result, if we have a fundamental solution ${\phi}$, i.e. one with $\displaystyle P \phi = \delta$ we can get a solution to any equation of the form ${Pf = g}$ for ${g \in \mathcal{S}}$ by taking $\displaystyle f = g \ast \phi,$ which is not only a distribution but also a polynomially increasing ${C^{\infty}}$ one. So we can solve any constant-coefficient PDE given a fundamental solution. There is a big theorem of Malgrange and Ehrenpreis that fundamental solutions always exist to constant-coefficient linear PDE. However, the above statement about solving PDEs can actually be proved in a more elementary fashion; perhaps this will be a future topic. For now, however, I want to show that the Gauss kernel ${K(t,x)}$ is actually a fundamental solution to the heat equation, once it is extended to ${\mathbb{R}^{n+1}}$ with ${K(t,x) \equiv 0}$ for ${t \leq 0}$. (This is no longer smooth, but it is still a distribution.) We need to show that $\displaystyle \int_{\mathbb{R}^{n+1}} K(t,x) \left( - \frac{d}{dt} - \Delta \right)u(t,x) dt dx = u(0).$ Let’s take the integral ${I_{\epsilon}}$ where ${t}$ is integrated over ${[\epsilon, \infty)}$; then by integration by parts $\displaystyle I_{\epsilon} = \int_{\epsilon}^{\infty} \int_{\mathbb{R}^n} u \left( \frac{d}{dt} - \Delta \right) K dt dx + \int_{\mathbb{R}^n} K(\epsilon,x) u(\epsilon,x).$ Since ${K}$ is a solution to the heat equation on ${\mathbb{R}^{n+1}_+}$ (e.g. look at the Fourier transform) it is the second integral that is nonzero. We can write this as $\displaystyle \int_{\mathbb{R}^n} K(\epsilon, x)( u(\epsilon,x) - u(0,0)) dx + u(0)$ and it is easy to see (the same approximation to the identity argument) that the former term tends to zero as ${\epsilon \rightarrow 0}$. So we indeed have a fundamental solution to the heat equation. It thus seems fair that we get solutions to it by convolving with the Gauss kernel.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 64, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9900575280189514, "perplexity": 102.67716744654454}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886118195.43/warc/CC-MAIN-20170823094122-20170823114122-00661.warc.gz"}
https://calculus123.com/wiki/Fixed_points_and_selections_of_set_valued_maps_on_spaces_with_convexity_by_Saveliev
This site is devoted to mathematics and its applications. Created and run by Peter Saveliev. # Fixed points and selections of set valued maps on spaces with convexity by Saveliev Jump to navigationJump to search Fixed points and selections of set valued maps on spaces with convexity by Peter Saveliev International Journal of Mathematics and Mathematical Sciences, 24 (2000) 9, 595-612. Also a talk at at the Joint Mathematics Meeting in January 2000. Reviews: MR 2001h:47097, ZM 0968.47016. We provide two results that unite the following two pairs of theorems respectively. First: Second: For this purpose we introduce convex structures on topological spaces that are more general than those of topological vector spaces, or topological convexity structures due to Michael, Van de Vel, Horvath, and others. We are able to construct a convexity structure for a wide class of topological spaces, which makes it possible to prove a generalization of the following purely topological fixed point theorem. Eilenberg-Montgomery fixed point theorem. Let $X$ be an acyclic compact ANR, and let $F:X\rightarrow X$ be an upper semicontinuous multifunction with nonempty closed acyclic values. Then $F$ has a fixed point. This theorem is especially important as it is used in proving the existence of periodic solutions of differential inclusions (multivalued differential equations), see Dissipativity in the plane and The dissipativity of the generalized Lienard equation. It is generalized in a different direction in A Lefschetz-type coincidence theorem by Saveliev. This fixed point theorem is just one of the scores generated by Problem 54 of The Scottish Book. However I believe that I don't just add another one to the list but instead reduce the total number. Selection theorems are just as numerous, see "Continuous selections of multivalued mappings" by D. Repovs and P.V. Semenov. Full text: Fixed points and selections of set valued maps on spaces with convexity (17 pages)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8406179547309875, "perplexity": 580.1347866980544}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525355.54/warc/CC-MAIN-20190717161703-20190717183703-00363.warc.gz"}
http://tex.stackexchange.com/questions/82921/bookmarks-not-automatically-generating
Bookmarks not automatically generating [closed] I have \usepackage{hyperref} in the preamble of a very simple document, which has defined sections. However, the output pdf file has no bookmarks. In 4.1 of this manual, it's stated that Usually hyperref automatically adds bookmarks for \section and similar macros What am I missing? Also, surfing a forum, I found that someone said that all he had to was put \usepackage[bookmarks=true]{hyperref} in his preamble... but this did not work for me either. (I am using the latest version of TeXworks as my editor and I am a PC user.) - closed as too localized by diabonas, Werner, Jake, Kurt, ThorstenMar 20 '13 at 19:50 This question is unlikely to help any future visitors; it is only relevant to a small geographic area, a specific moment in time, or an extraordinarily narrow situation that is not generally applicable to the worldwide audience of the internet. For help making this question more broadly applicable, visit the help center. If this question can be reworded to fit the rules in the help center, please edit the question. Please add to your question a simple, complete and minimal document illustrating the problem mentioned. –  Gonzalo Medina Nov 16 '12 at 0:31 I guess you could try loading hyperref last. Surprising how often that works. –  Peter Grill Nov 16 '12 at 0:35 Which hyperref driver you are using (see the .log file)? How do you generate the PDF file? Have you run latex at least twice? If you load package bookmark after hyperref, the bookmarks are faster updated. Which document class is used? –  Heiko Oberdiek Nov 16 '12 at 0:58 This is really strange. Could you please a minimal portion of the offending files? –  Masroor Nov 16 '12 at 2:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9028549790382385, "perplexity": 3111.281117907675}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929422.8/warc/CC-MAIN-20150521113209-00185-ip-10-180-206-219.ec2.internal.warc.gz"}
https://gamedev.stackexchange.com/questions/109406/find-two-points-in-a-point-cloud-with-the-maximum-distance
# Find two points in a point cloud with the maximum distance? What is the least computationally complex way to find two points such that the distance between them is greater or equal to any other pair. Remember hearing something about how you could find such a pair by randomly picking a point r, finding the point furthest away fa and then finding the most distant point from it, md. The diameter is then the distance between fa and md (i.e norm(fa - md)). Is this correct? Can you prove or disprove it? What is the correct way if this is incorrect? • Diameter is not well defined for a general set of points. Please define more precisely what you are looking for. Oct 10, 2015 at 5:28 • @PieterGeerkens You are right, I borrowed this from Graph Theory. Oct 10, 2015 at 17:36 It wont be correct. If you take 4 points; 3 of which lie on a circle and the 4th is in the center. The diameter of this set is the distance of 2 points on the circle. Your algorithm may choose the other point which won't have a distance like that. The proven correct way is to create the convex hull and use the Rotating Calipers method for finding the largest distance. This ends up being O(n log n + k) time complexity. O(n log n) for creating the convex hull and O(k) for iterating over then entire hull to find the points the furthest apart. • Your explanation is not clear. I can't understand why the OP's approach is not valid, and how your "4 points" relates to it. Oct 8, 2015 at 16:42 • @JPhi1618 To visualize ratchet's complaint about the OP's approach, consider the case with exactly 3 equidistant points (equilateral triangle). The OP's approach will calculate the length of a triangle's leg as the diameter. What is more often desirable though is to visualize the 3 points as part of a curved shape (circle in this case) and to take the diameter from that; which is what this answer describes. Oct 8, 2015 at 18:06 • @StevenHansen Ok, thanks - that clears it up. If we define "diameter" as longest distance between two points, is the OP's algorithm acceptable? If we have a cloud of 1000 points, I think in most instances the fastest, reasonably accurate approach would be desired. Oct 8, 2015 at 18:11 • @JPhi1618 Even with this new definition of "diameter" there are problems. There is not enough room in a comment, so I've posted an answer that you can examine for an explanation. Oct 8, 2015 at 19:46 What is the correct way if this is incorrect? You should only ask one question at a time. I'll cover: Is this correct? Can you prove or disprove it? Also, it was questioned in comments whether the OP algorithm might be "good enough" even if it isn't the diameter from a curve. Bottom line: you aren't guaranteed to get the two points that are furthest from each other. Consider the point cloud with exactly four co-planar points, { A, B, C, D }. AB = AC = r, BAC = 70 degrees, ABC = ACB = 55 degrees, and Dis halfway along BC. Like so: A r r B D C Bottom line (TLDR;) using OP algorithm: if D is the random point, the next point is A, then either B (or C: same distance). OP algorithm yields AB or AC. However, the longest distance is BC. The algorithm fails in at least this case. Math Proof: • From random point D we compare AD and BD. AD = r*sin(55) and BD = DC = r*cos(55). Since cos(55) < sin(55), AD > BD and point 2 is A. • From A we consider AD and AC. AC = r and r > r*sin(55), so AC > AD and the final point is C (or B: same distance). • Final OP diameter is AC = r. However, BC = 2*r*cos(55) which means BC > r. The furthest two points from each other are B and C, not A and C. • What if you keep at it a few times, does it converge? Oct 9, 2015 at 18:03 • There are situations (such as you seem to have found), where the wrong points keep pointing back to themselves. Oct 9, 2015 at 19:44 Ok.. Let's look at the following shape: B / \ A C \ / D Now, lets say, the distance BD is 5. The distance AC is 4. If you pick A, you will get C at a distance of 4. From C you will get A again. The distance AD = CD = CB = AB is from Pythagoras: sqrt((4/2)^2 + (5/2)^2) = ~3.2 So yeah, you can miss the mark with a diamond shape.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8031021356582642, "perplexity": 717.8863563002952}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103573995.30/warc/CC-MAIN-20220628173131-20220628203131-00690.warc.gz"}
http://math.stackexchange.com/questions/115046/there-are-infinitely-many-triangular-numbers-that-are-the-sum-of-two-other-such
There are infinitely many triangular numbers that are the sum of two other such numbers In the Exercise $9$, page 16, from Burton's book Elementary Number Theory he state the following: Establish the identity $t_{x}=t_{y}+t_{z},$ ($t_{n}$ is the nth triangular numbers) where $${x}=\frac{n(n+3)}{2}+1\,\,\,\,\,\,\,y=n+1\,\,\,\,\,\,\,z=\frac{n(n+3)}{2}$$ and $n\geq 1,$ thereby proving that there are infinitely many triangular numbers that are the sum of two other such numbers. I tried to find out how did he get $x,y$ and $z$ but I've failed. I wrote $$\frac{y(y+1)}{2}+\frac{z(z+1)}{2}=\frac{x(x+1)}{2}$$ but I don't know what to do from now on. How one can find $x,y,z$ as above? - Multiply both sides by 8 and complete the squares, see what happens. – Will Jagy Feb 29 '12 at 23:33 @WillJagy: I've found $(2y+1)^{2}+(2z+1)^{2}=(2x+1)^{2}+1$. +1 for you. – spohreis Feb 29 '12 at 23:49 @WillJagy: I just noticed your comment. Please read this thread: meta.math.stackexchange.com/questions/1559/…. There are reasons to add an answer instead of a comment, even if it is just a hint (in this case, a substantial one and so deserves to be an answer, IMO). – Aryabhata Mar 1 '12 at 0:19 @Aryabhata, I see what you mean. In this case I felt multiplying by 8 was what I would do next, but had not the time to finish that. Also, I was a little unclear about the OP's notation. – Will Jagy Mar 1 '12 at 0:40 Triangular numbers are the sum of the first $m$ positive integers, so clearly for any $m$ there is always a triangular number which when $m$ is added to it is a triangular number: $$\frac{m(m+1)}{2}=m+\frac{m(m-1)}{2} .$$ There are others: for example any odd number greater than $1$ is the difference between two triangular numbers two steps apart, while any multiple of $3$ greater than $3$ is the difference between two triangular numbers three steps apart, etc. So if $m$ is any triangular number, say $t_k$ where $m=\frac{k(k+1)}{2}$, then we have a triangular number which is the sum of two triangular numbers, and since there are an infinite number of triangular numbers there are an infinite number of cases of this. In this case we have $t_{m}=t_k + t_{m-1}$ or $$t_{k(k+1)/2}=t_k+t_{k(k+1)/2 - 1}.$$ Now let $k=n+1$ so $\frac{k(k+1)}{2} - 1 =\frac{(n+1)(n+2)}{2} -1 = \frac{n(n+3)}{2}$ and similarly $\frac{k(k+1)}{2}=\frac{(n+1)(n+2)}{2} = \frac{n(n+3)}{2}+1$. So the last expression of the previous result becomes $$t_{n(n+3)/2 + 1}=t_{n+1}+t_{n(n+3)/2}.$$ This explains how he got his result. It does not explain why he prefers the final step over the slightly simpler previous step. - One way to try and come up with this would be to start from the other side: $$\frac{y(y+1)}{2} + \frac{z(z+1)}{2} = \frac{x(x+1)}{2}$$ Multiply by $8$, and adding two gives us $$(2y+1)^2 + (2z+1)^2 = (2x+1)^2 + 1$$ i.e. $$(2y+1)^2 - 1 = (2x+1)^2 - (2z+1)^2$$ and so $$y(y+1) = (x-z)(x+z+1)$$ Given a $y$, we can get a solution by putting $$x - z = 1$$ and $$x+z+1 = y(y+1)$$ and solving the system of equations. - I suspect the choices come from this: start with $$(2y+1)^2 + (2z+1)^2 = (2x+1)^2 + 1.$$ If you now try to see what happens when $$x = z + 1$$ it reduces to $$(2y+1)^2 = 8 z + 9.$$ Once again, if, instead of $y$ itself, we take $$y = n+1,$$ we have $$(2 n+3)^2 = 8 z + 9$$ or $$n^2 + 3 n = 2 z$$ or $$z = \frac{n^2 + 3 n}{2}.$$ -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9096264243125916, "perplexity": 145.4350129032567}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049277091.36/warc/CC-MAIN-20160524002117-00168-ip-10-185-217-139.ec2.internal.warc.gz"}
https://groupprops.subwiki.org/wiki/Fully_invariant_subgroup
# Fully invariant subgroup ## Definition QUICK PHRASES: invariant under all endomorphisms, endomorphism-invariant ### Equivalent definitions in tabular format A subgroup of a group is termed fully invariant or fully characteristic if it satisfies the following equivalent conditions: No. Shorthand A subgroup of a group is termed fully invariant if ... A subgroup of a group is termed a fully invariant subgroup of if ... 1 endomorphism-invariant it is invariant under all endomorphisms of the whole group. for any endomorphism of , or equivalently, for all . 2 endomorphism restricts to endomorphism every endomorphism of the whole group restricts to an endomorphism of the group. for any endomorphism of , and the restriction of to is an endomorphism of . VIEW RELATED: Analogues of this | Variations of this | View a complete list of semi-basic definitions on this wiki This article defines a subgroup property: a property that can be evaluated to true/false given a group and a subgroup thereof, invariant under subgroup equivalence. View a complete list of subgroup properties[SHOW MORE] This is a variation of characteristicity|Find other variations of characteristicity | Read a survey article on varying characteristicity ## Examples VIEW: subgroups of groups satisfying this property | subgroups of groups dissatisfying this property VIEW: Related subgroup property satisfactions | Related subgroup property dissatisfactions ### Extreme examples 1. The trivial subgroup is always fully invariant. 2. Every group is fully invariant as a subgroup of itself. ### Examples 1. High occurrence example: In a cyclic group, every subgroup is fully invariant. That's because any subgroup can be described as the set of all powers, for some choice of , and such a set is clearly invariant under endomorphisms. (In fact, it is a verbal subgroup). 2. More generally, in any abelian group, the set of powers is a verbal subgroup, and hence fully invariant. The set of elements whose order divides is also fully invariant, though not necessarily verbal (for instance, in the group of all roots of unity, the subgroup of roots for fixed is fully invariant but not verbal). 3. In a (possibly) non-abelian group, certain subgroup-defining functions always yield a fully invariant subgroup. For instance, the derived subgroup is fully invariant, and so are all terms of the lower central series as well as the derived series. ### Non-examples 1. In an elementary abelian group, and more generally, in a characteristically simple group, there is no proper nontrivial fully invariant subgroup (in fact, there's no proper nontrivial characteristic subgroup, either). 2. There do exist characteristic subgroups that are not fully invariant; in fact, the center, and terms of the upper central series, may be characteristic but not fully invariant. Further information: center not is fully invariant ### Examples of subgroups satisfying the property Here are some examples of subgroups in basic/important groups satisfying the property: Here are some examples of subgroups in relatively less basic/important groups satisfying the property: Here are some examples of subgroups in even more complicated/less basic groups satisfying the property: ### Examples of subgroups not satisfying the property Here are some examples of subgroups in basic/important groups not satisfying the property: Here are some some examples of subgroups in relatively less basic/important groups not satisfying the property: Here are some examples of subgroups in even more complicated/less basic groups not satisfying the property: ## Metaproperties Metaproperty name Satisfied? Proof Statement with symbols transitive subgroup property Yes full invariance is transitive If , with fully invariant in and fully invariant in , then is fully invariant in . trim subgroup property Yes The trivial subgroup and the whole group are always fully invariant. intermediate subgroup condition No full invariance does not satisfy intermediate subgroup condition It is possible to have such that is a fully invariant subgroup inside but is not a fully invariant subgroup of . strongly intersection-closed subgroup property Yes full invariance is strongly intersection-closed If , are all fully invariant subgroups of , then is also fully invariant in . strongly join-closed subgroup property Yes full invariance is strongly join-closed If , are all fully invariant subgroups of , then is also fully invariant in . commutator-closed subgroup property Yes full invariance is commutator-closed If are fully invariant subgroups of , so is . quotient-transitive subgroup property Yes full invariance is quotient-transitive If such that is fully invariant in and is fully invariant in , then is fully invariant in . finite direct power-closed subgroup property Yes full invariance is finite direct power-closed If is fully invariant in , then in any finite direct power of , the corresponding direct power is fully invariant. restricted direct power-closed subgroup property Yes full invariance is restricted direct power-closed If is fully invariant in , then in any restricted direct power of , the corresponding direct power of is fully invariant. direct power-closed subgroup property No full invariance is not direct power-closed It is possible to have a fully invariant subgroup inside a group and an infinite cardinal such that the direct power is not a fully invariant subgroup inside the direct power . ## Relation with other properties ### Stronger properties Property Meaning Proof of implication Proof of strictness (reverse implication failure) Intermediate notions verbal subgroup defined as the set of elements expressible by certain words verbal implies fully invariant fully invariant not implies verbal (see also list of examples) Existentially bound-word subgroup, Image-closed fully invariant subgroup, Intersection of finitely many verbal subgroups, Pseudoverbal subgroup, Quasiverbal subgroup, Quotient-subisomorph-containing subgroup, Weakly image-closed fully invariant subgroup|FULL LIST, MORE INFO intersection of finitely many verbal subgroups intersection of a finite number of verbal subgroups | pseudoverbal subgroup defined as the intersection of normal subgroups for which the quotient group is in a particular pseudovariety Quotient-subisomorph-containing subgroup|FULL LIST, MORE INFO existentially bound-word subgroup defined as the set of elements satisfying a system of equations existentially bound-word implies fully invariant fully invariant not implies existentially bound-word | homomorph-containing subgroup contains every homomorphic image homomorph-containing implies fully invariant fully invariant not implies homomorph-containing Intermediately fully invariant subgroup, Sub-homomorph-containing subgroup|FULL LIST, MORE INFO subhomomorph-containing subgroup contains every homomorphic image of every subgroup (via homomorph-containing) (via homomorph-containing) Homomorph-containing subgroup, Intermediately fully invariant subgroup, Sub-homomorph-containing subgroup, Transfer-closed fully invariant subgroup|FULL LIST, MORE INFO order-containing subgroup contains every subgroup whose order divides its order (via homomorph-containing) (via homomorph-containing) Homomorph-containing subgroup, Image-closed fully invariant subgroup, Subhomomorph-containing subgroup|FULL LIST, MORE INFO variety-containing subgroup contains every subgroup in the variety of groups generated by it (via homomorph-containing subgroup) (via homomorph-containing subgroup) Homomorph-containing subgroup, Intermediately fully invariant subgroup, Subhomomorph-containing subgroup, Transfer-closed fully invariant subgroup|FULL LIST, MORE INFO normal subgroup having no nontrivial homomorphism to its quotient group No nontrivial homomorphism to quotient group Homomorph-containing subgroup, Intermediately fully invariant subgroup, Quotient-subisomorph-containing subgroup|FULL LIST, MORE INFO normal Hall subgroup normal and Hall: its order and index are relatively prime Complemented fully invariant subgroup, Complemented homomorph-containing subgroup, Homomorph-containing subgroup, Image-closed fully invariant subgroup, Intermediately fully invariant subgroup, Normal subgroup having no nontrivial homomorphism to its quotient group, Order-containing subgroup, Quotient-subisomorph-containing subgroup, Sub-homomorph-containing subgroup, Variety-containing subgroup|FULL LIST, MORE INFO normal Sylow subgroup normal and Sylow Complemented fully invariant subgroup, Complemented homomorph-containing subgroup, Homomorph-containing subgroup, Image-closed fully invariant subgroup, Intermediately fully invariant subgroup, Normal Hall subgroup, Normal subgroup having no nontrivial homomorphism to its quotient group, Order-containing subgroup, Quotient-subisomorph-containing subgroup, Sub-homomorph-containing subgroup, Variety-containing subgroup|FULL LIST, MORE INFO quotient-subisomorph-containing subgroup Quotient-subisomorph-containing implies fully invariant Fully invariant not implies quotient-subisomorph-containing Weakly image-closed fully invariant subgroup|FULL LIST, MORE INFO image-closed fully invariant subgroup Under any surjective homomorphism, its image is fully invariant in the image of the group full invariance does not satisfy image condition Weakly image-closed fully invariant subgroup|FULL LIST, MORE INFO intermediately fully invariant subgroup Fully invariant in every intermediate subgroup full invariance does not satisfy intermediate subgroup condition | transfer-closed fully invariant subgroup Its intersection with any subgroup is fully invariant in that full invariance does not satisfy transfer condition | ### Weaker properties Property Meaning Proof of implication Proof of strictness (reverse implication failure) Intermediate notions Comparison characteristic subgroup invariant under all automorphisms fully invariant implies characteristic characteristic not implies fully invariant (see also list of examples) Finite direct power-closed characteristic subgroup, Injective endomorphism-invariant subgroup, Normality-preserving endomorphism-invariant subgroup, Retraction-invariant characteristic subgroup, Strictly characteristic subgroup|FULL LIST, MORE INFO characteristic versus fully invariant normal subgroup invariant under all inner automorphisms (via characteristic) (via characteristic) Characteristic subgroup, Finite direct power-closed characteristic subgroup, Fully invariant-potentially fully invariant subgroup, Image-potentially fully invariant subgroup, Injective endomorphism-invariant subgroup, Normal-potentially fully invariant subgroup, Normality-preserving endomorphism-invariant subgroup, Potentially fully invariant subgroup, Retraction-invariant characteristic subgroup, Retraction-invariant normal subgroup, Strictly characteristic subgroup|FULL LIST, MORE INFO strictly characteristic subgroup invariant under all surjective endomorphisms fully invariant implies strictly characteristic strictly characteristic not implies fully invariant Normality-preserving endomorphism-invariant subgroup|FULL LIST, MORE INFO -- injective endomorphism-invariant subgroup invariant under all injective endomorphisms injective endomorphism-invariant not implies fully invariant | retraction-invariant subgroup invariant under all retractions Retraction-invariant normal subgroup|FULL LIST, MORE INFO retraction-invariant characteristic subgroup characteristic and retraction-invariant retraction-invariant normal subgroup normal and retraction-invariant endomorph-dominating subgroup every image under an endomorphism is conjugate to a subgroup of it | potentially fully invariant subgroup the subgroup is fully invariant in some bigger group Fully invariant-potentially fully invariant subgroup, Normal-potentially fully invariant subgroup|FULL LIST, MORE INFO finite direct power-closed characteristic subgroup any finite direct power of the subgroup is characteristic in the corresponding direct power of the whole group follows from full invariance is finite direct power-closed and fully invariant implies characteristic finite direct power-closed characteristic not implies fully invariant Normality-preserving endomorphism-invariant subgroup|FULL LIST, MORE INFO ## Effect of property operators BEWARE! This section of the article uses terminology local to the wiki, possibly without giving a full explanation of the terminology used (though efforts have been made to clarify terminology as much as possible within the particular context) Operator Meaning Result of application Proof and related observations potentially operator fully invariant in some larger group potentially fully invariant subgroup by definition; any potentially fully invariant subgroup is normal, but normal not implies potentially fully invariant intermediately operator fully invariant in every intermediate subgroup intermediately fully invariant subgroup any homomorph-containing subgroup satisfies this property. image condition operator image is fully invariant in any quotient group image-closed fully invariant subgroup any verbal subgroup satisfies this property. ## Formalisms BEWARE! This section of the article uses terminology local to the wiki, possibly without giving a full explanation of the terminology used (though efforts have been made to clarify terminology as much as possible within the particular context) ### Second-order description This subgroup property is a second-order subgroup property, viz., it has a second-order description in the theory of groups View other second-order subgroup properties The property of being fully invariant has a second-order description. A subgroup of a group is termed fully characteristic if: The condition in parentheses is a verification that the function is an endomorphism of . ### Function restriction expression This subgroup property is a function restriction-expressible subgroup property: it can be expressed by means of the function restriction formalism, viz there is a function restriction expression for it. Find other function restriction-expressible subgroup properties | View the function restriction formalism chart for a graphic placement of this property Function restriction expression is a fully invariant subgroup of if ... This means that full invariance is ... Additional comments endomorphism function every endomorphism of sends every element of to within the invariance property for endomorphisms endomorphism endomorphism every endomorphism of restricts to an endomorphism of the balanced subgroup property for endomorphisms Hence, it is a t.i. subgroup property, both transitive and identity-true endomorphism endomorphism every endomorphism of restricts to an endomorphism of the endo-invariance property for endomorphisms; i.e., it is the invariance property for endomorphism, which is a property stronger than the property of being an endomorphism ## Testing ### GAP command This subgroup property can be tested using built-in functionality of Groups, Algorithms, Programming (GAP). The GAP command for testing this subgroup property is:IsFullinvariant View subgroup properties testable with built-in GAP command|View subgroup properties for which all subgroups can be listed with built-in GAP commands | View subgroup properties codable in GAP Note that this GAP testing function uses an additional package called the SONATA package. ## State of discourse ### History This term was introduced by: Levi The concept was introduced by Levi in 1933 under the German name vollinvariant (translating to fully invariant). Both the terms fully invariant and fully characteristic are now in vogue. ### Resolution of questions that are easy to formulate Any typical question about the behavior of fully invariant subgroups in arbitrary groups that is easy to formulate will also be easy to resolve either with a proof or a counterexample, unless some other feature of the question significantly complicates it. This is so, despite the fact that there are a large number of easy-to-formulate questions about the endomorphism monoid that are still open. The reason is that even though not enough is known about the endomorphism monoids, there are other ways to obtain information about the structure of fully invariant subgroups. At the one extreme, there are abelian groups, where the fully invariant subgroups are quite easy to get a handle on. At the other extreme, there are "all groups" where very little can be said about characteristic subgroups beyond what can be proved through elementary reasoning. The most interesting situation is in the middle, for instance, when we are looking at nilpotent groups and solvable groups. In these cases, there are some restrictions on the structure of fully invariant subgroups, but the exact nature of the restrictions is hard to work out.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 80, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9099724292755127, "perplexity": 2211.291208664218}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187820700.4/warc/CC-MAIN-20171017033641-20171017053641-00204.warc.gz"}
https://www.physicsforums.com/threads/fourier-transform-convergence.727314/
# Fourier Transform Convergence 1. ### nabeel17 56 From -infinity to infinity at the extreme ends do Fourier transforms always converge to 0? I know in the case of signals, you can never have an infinite signal so it does go to 0, but speaking in general if you are taking the fourier transform of f(x) If you do integration by parts, you get a term (f(x)e^ikx evaluated from -infinity to infinity why does this always = 0? ### Staff: Mentor No, not always. If a signal is periodic in one domain then it is discrete in the other domain. So if you have a signal which is discrete in time, then it is periodic in frequency. Since it is periodic in frequency it does not converge to 0 at infinity. 3. ### nabeel17 56 Ok then why is it that we the first term in the integration by parts goes to 0 then regardless of the function (Whether it is periodic or not)? For example when finding the fourier transform of a derivative F[d/dx] = ∫d/dxf(x)e^ikx= f(x)e^ikx evaluated -infinity to infinity -ik∫f(x)e^ikx the first term = 0, why is that? If it were a wave function like in QM then it makes sense because the area under the wave function must be finite and converge to 0 at the extremes for it to have a probability density, but why here? ### Staff: Mentor I think that the various properties of the Fourier transform all assume that f satisfies the Dirichlet conditions. 5. ### AlephZero 7,299 The OP is asking about Fourier transforms, not Fourier series (of periodic functions) which is what #2 and #4 appear to be about. A reasonable condition for Fourier transforms to behave sensibly is that ##\int_{-\infty}^{+\infty}|f(x)|dx## is finite. Note that if you use Lebesque measure to define integration, that does not imply ##f(x)## converges to 0 as x tends to infinity. ##f(x)## can take any values on a set of measure zero. (Also note, "reasonable" does not necessarily mean either "necessary" or "sufficient"!) The mathematical correspondence between Fourier series and Fourier transforms is not quite "obvious", since the Fourier transform of a periodic function (defined by an integral with an infinte range) involves Dirac delta functions, and indeed the Fourier transform of a periodic function is identically zero except on a set of measure zero (i.e. the points usually called the "Fourier coefficients"). On the other hand if you integrate over one period of a periodic function, it is a lot simpler to get to some practical results, even if you have to skate over why the math "really" works out that way. Last edited: Dec 8, 2013 1 person likes this.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9774683713912964, "perplexity": 335.50031733167395}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207932596.84/warc/CC-MAIN-20150521113212-00323-ip-10-180-206-219.ec2.internal.warc.gz"}
http://mathhelpforum.com/algebra/206516-dimensions-problem.html
# Math Help - Dimensions Problem 1. ## Dimensions Problem First I want to say that I must apologize if this problem is in the incorrect part of the board. I honestly could not figure out what area this problem would fall under. If anyone read my intro a good while back, I'm in college and have to take basic college mathematics before I can get to core classes because I chose to pay literally no attention to class when I was in high school. Anyways this was the one problem I got incorrect on my Math midterm. I realized that the way to solve this was SO easy, but I decided to post this so I can get some feedback on whether I did it correctly (I'm positive this time that I did) and also to give my first attempt at the Latex thing. Anyways the word problem goes as such. "A frame that is 18 inches by 24 inches has a mat in it that is 2 1/4 inches all around. What are the dimensions of the picture within the mat?" I took a photo of the picture, I hope the quality is decent enough. So... Code: 2 1/4 + 2 1/4 = 4.5 18 - 4.5 = 13.5 24 - 4.5 = 19.5 thus the answer is 13.5 by 19.5 What say you? 2. ## Re: Dimensions Problem Hm, if anyone would also be so kind as to possibly critique the way I wrote out the formula so I know how the latex should have been set up? Thanks in advance for any consideration guys 3. ## Re: Dimensions Problem You did well with the problem, although some instructors may want units with the numbers, inches in this case. To use LaTeX, enclose your code within the TEX tags, which you can generate using the sigma button. 4. ## Re: Dimensions Problem So like this? $2 1/4 + 2 1/4 = 4.5 18 - 4.5 = 13.5 24 - 4.5 = 19.5$ Answer is $19.5" by 13.5"$ Edit:1 It would appear there are still some kinks in latex for me to figure out. let me try it this way. $ 2 1/4 + 2 1/4 = 4.5$ $18 - 4.5 = 13.5$ $24 - 4.5 = 19.5$ Edit: 2 how do I make it actually look like a fraction? 5. ## Re: Dimensions Problem You have the right idea, for 2 1/4 you may wish to use the code 2\frac{1}{4}. I'm not sure where the < br/ > symbols are coming from. 6. ## Re: Dimensions Problem $2\frac{1}{4} + 2\frac{1}{4} = 4.5 18 - 4.5 = 13.5 24 - 4.5 = 19.5$ $2\frac{1}{4} + 2\frac{1}{4} = 4.5 18 - 4.5 = 13.5 24 - 4.5 = 19.5$ Well I got the weird b symbols to go away by keeping the equation in one long line rather than giving them their own lines like such 1 2 3 however, doesn't it seem a bit sloppy this way? It might be best if there is actually a latex tutorial somewhere on this site? I would hate to waste your time with this stuff. 7. ## Re: Dimensions Problem There is a forum here dedicated to using LaTeX, and there are many tutorials online, just do a search on it. I usually enclose each equation or expression separately, rather than inserting carriage returns within the tags. 8. ## Re: Dimensions Problem Dear Lord lol, how did I not notice that forum? lol, thanks for all the help.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9047428965568542, "perplexity": 853.3919713796657}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042989826.86/warc/CC-MAIN-20150728002309-00065-ip-10-236-191-2.ec2.internal.warc.gz"}
https://arxiv.org/abs/1608.08640
hep-ph (what is this?) # Title: Search for sharp and smooth spectral signatures of $μν$SSM gravitino dark matter with Fermi-LAT Abstract: The $\mu\nu$SSM solves the $\mu$ problem of supersymmetric models and reproduces neutrino data, simply using couplings with right-handed neutrinos $\nu$'s. Given that these couplings break explicitly $R$ parity, the gravitino is a natural candidate for decaying dark matter in the $\mu \nu$SSM. In this work we carry out a complete analysis of the detection of $\mu \nu$SSM gravitino dark matter through $\gamma$-ray observations. In addition to the two-body decay producing a sharp line, we include in the analysis the three-body decays producing a smooth spectral signature. We perform first a deep exploration of the low-energy parameter space of the $\mu \nu$SSM taking into account that neutrino data must be reproduced. Then, we compare the $\gamma$-ray fluxes predicted by the model with Fermi-LAT observations. In particular, with the 95$\%$ CL upper limits on the total diffuse extragalactic $\gamma$-ray background using 50 months of data, together with the upper limits on line emission from an updated analysis using 69.9 months of data. For standard values of bino and wino masses, gravitinos with masses larger than about 4 GeV, or lifetimes smaller than $10^{28}$ s, produce too large fluxes and are excluded as dark matter candidates. However, when limiting scenarios with large and close values of the gaugino masses are considered, the constraints turn out to be less stringent, excluding masses larger than 17 GeV and lifetimes smaller than $4\times 10^{25}$ s. Comments: Minor changes, references added, version published in JCAP. 23 pages, 7 figures, 3 tables Subjects: High Energy Physics - Phenomenology (hep-ph); High Energy Astrophysical Phenomena (astro-ph.HE) DOI: 10.1088/1475-7516/2017/03/047 Cite as: arXiv:1608.08640 [hep-ph] (or arXiv:1608.08640v2 [hep-ph] for this version) ## Submission history From: Daniel Elbio Lopez [view email] [v1] Tue, 30 Aug 2016 20:00:35 GMT (130kb,D) [v2] Fri, 17 Mar 2017 23:20:55 GMT (131kb,D)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8558283448219299, "perplexity": 3420.4607373976733}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934803848.60/warc/CC-MAIN-20171117170336-20171117190336-00251.warc.gz"}
https://brilliant.org/discussions/thread/how-do-i-integrate-this/
× # How do I integrate this? I came across this problem today: $$Verify\quad for\quad u(x,y)=e^{ x }sin(y)\quad the\quad mean\\ value\quad theorem\quad for\quad harmonic\quad functions\\ on\quad a\quad circle\quad C\quad of\quad radius\quad r=1,\quad with\quad its\\ centre\quad at\quad z=2+2i.$$ I tried to simplify it but I got stuck at the integral of $$cosh(e^{i\theta})$$. So my question is : how do I integrate $$cosh(e^{i\theta})$$? I know that it is somehow related to $$Chi(e^{i\theta})$$, but I don't know how. Note by Vishnu C 2 years, 6 months ago MarkdownAppears as *italics* or _italics_ italics **bold** or __bold__ bold - bulleted- list • bulleted • list 1. numbered2. list 1. numbered 2. list Note: you must add a full line of space before and after lists for them to show up correctly paragraph 1paragraph 2 paragraph 1 paragraph 2 [example link](https://brilliant.org)example link > This is a quote This is a quote # I indented these lines # 4 spaces, and now they show # up as a code block. print "hello world" # I indented these lines # 4 spaces, and now they show # up as a code block. print "hello world" MathAppears as Remember to wrap math in $$...$$ or $...$ to ensure proper formatting. 2 \times 3 $$2 \times 3$$ 2^{34} $$2^{34}$$ a_{i-1} $$a_{i-1}$$ \frac{2}{3} $$\frac{2}{3}$$ \sqrt{2} $$\sqrt{2}$$ \sum_{i=1}^3 $$\sum_{i=1}^3$$ \sin \theta $$\sin \theta$$ \boxed{123} $$\boxed{123}$$ Sort by: After some simplification I was able to verify, by integration, that it is true for the given function. But the question still stands: How is it related to $$Chi(e^{i\theta})$$? I was able to solve the case where the function had limits from 0 to 2*pi, i.e, I had to use some properties of definite integrals to simplify it. But is it possible to evaluate it with a general limit? - 2 years, 6 months ago @Sandeep Bhardwaj Sir, @Raghav Vaidyanathan @Shashwat Shukla @Pranjal Jain @Abhishek Sinha Sir Please help him. Thanks a lot! @vishnu c - 2 years, 6 months ago
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9977700710296631, "perplexity": 2918.772096743929}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187828189.71/warc/CC-MAIN-20171024071819-20171024091819-00179.warc.gz"}
https://www.electro-tech-online.com/threads/fixed-voltage-mppt.149336/
# fixed Voltage mppt Status Not open for further replies. #### Hiro Okamura ##### New Member Hi everyone I am new here so not sure how these threads work (format wise). I am doing an assignment that I need to simulate PV panels and track it maximum power. I am wondering if anyone knows what this initial dip is called in my power graph and how to correct it? #### Attachments • 4.3 KB Views: 113 #### ronsimpson ##### Well-Known Member What panel? Data sheet? What simulator? Spice? LTspice? How did you get the graph? Welcome! #### Seyit Yıldırım ##### New Member what this initial dip is called in my power graph because of shading, initial dip you mentioned is occured. or it will be because of that solar arrays are connected in series, paralel. and all pv can not generate the same voltage. So the initial dip is occured Status Not open for further replies. Replies 1 Views 6K Replies 0 Views 4K Replies 5 Views 6K Replies 4 Views 2K Replies 5 Views 4K
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8046231865882874, "perplexity": 4800.3220514120985}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988775.80/warc/CC-MAIN-20210507090724-20210507120724-00304.warc.gz"}
http://moodle.remc10.org/moodle/course/index.php?categoryid=15
### English 10 Write a concise and interesting paragraph here that explains what this course is about
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9653929471969604, "perplexity": 3715.041705956036}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741764.35/warc/CC-MAIN-20181114082713-20181114104713-00444.warc.gz"}
http://syymmetries.blogspot.com.au/2016/06/warsaw-workshop-on-non-standard-dark.html
## Sunday, 5 June 2016 ### Warsaw Workshop on Non-Standard Dark Matter For the last few days I've been at the Warsaw Workshop on Non-Standard Dark Matter. It's been very enjoyable! Plenty of interesting ideas, coffee, and social events. Yesterday I gave a short talk, trying to make the case for a dark matter direct detection search for the sidereal modulation signature. The general idea is that, if dark matter has self-interactions, the dark matter wind which strikes the Earth will interact with any Earth-captured dark matter, leading to a non-trivial spatial distribution which terrestrial detectors traverse throughout the day. I share the slides below this post. If nothing else you should click through to see some entertaining magnetohydrodynamic simulation animations! By the way, as of this writing ATLAS+CMS have recorded about 2+2/fb of data (or 20 diphotons in alternative units): We're quickly moving toward the position we were by Christmas last year (about 3+3/fb including the CMS $B=0$ data). If the 750 GeV diphoton resonance prevails in the new data we hope to know by the ICHEP on August 3-10. Some authors have taken to calling the would-be particle Ϝ, which is the archaic Greek letter "digamma" -- very fitting! We will see yet if this name becomes lore... I also quite like the following perhaps future update of the PDG from Strumia: Slides
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8412051200866699, "perplexity": 2368.765034363934}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320368.57/warc/CC-MAIN-20170624235551-20170625015551-00606.warc.gz"}
http://mathhelpforum.com/pre-calculus/15788-two-pumps-one-tank.html
# Thread: Two Pumps, One Tank 1. ## Two Pumps, One Tank Two pumps of different sizes working together can empty a fuel tank is 5 hours. The larger pump can empty this tank in 4 hours less than the smaller one. If the larger one is out of order, how long will it take the smaller one to do the job alone? 2. Originally Posted by blueridge Two pumps of different sizes working together can empty a fuel tank is 5 hours. The larger pump can empty this tank in 4 hours less than the smaller one. If the larger one is out of order, how long will it take the smaller one to do the job alone? I'll start you off... Define your variables: I'd call the bigger pump $b$ and the smaller pump $s$ $b=$the number of tanks per hour the bigger pump can empty $s=$the number of tanks per hour the smaller pump can empty Both of those numbers will be fractions. Now, since you only wanted a hint, I'm only going to give you the first equation and you'll have to find the rest... Two pumps of different sizes working together can empty a fuel tank is 5 hours. $b+s=\frac{1\text{tank}}{5\text{hours}}$ Do you need any more help? 3. ## Yes If you can set up the other equation, I can take it from there. 4. Originally Posted by blueridge If you can set up the other equation, I can take it from there. Originally Posted by blueridge Two pumps of different sizes working together can empty a fuel tank is 5 hours. The larger pump can empty this tank in 4 hours less than the smaller one. If the larger one is out of order, how long will it take the smaller one to do the job alone? The next equation is somewhat weird. If $b=\frac{\text{tanks}}{\text{hours}}$ than $\frac{1}{b}=\frac{\text{hours}}{\text{tanks}}$ So in fact: $\frac{1}{b}=$the number of hours to empty a tank So we know that: $\frac{1}{b}-\frac{1}{s}=4$ 5. ## tell me... I am dealing with two equations in two unknowns? 6. Originally Posted by blueridge I am dealing with two equations in two unknowns? Yes, solve for $b$ and $s$, and the required answer is $s$ RonL 7. Hello, blueridge! Here's another approach . . . Two pumps of different sizes working together can empty a fuel tank is 5 hours. The larger pump can empty this tank in 4 hours less than the smaller one. If the larger one is out of order, how long will it take the smaller one to do the job alone? Together, they can do the job in 5 hours. . . In one hour, they can do $\frac{1}{5}$ of the job. .[1] The smaller pump can do the job in $x$ hours. .[Note that: . $x > 4$.] . . In one hour, it can do $\frac{1}{x}$ of the job. The larger pump takes 4 hours less; it takes $x - 4$ hours. . . In one hour, it can do $\frac{1}{x-4}$ of the job. Together, in one hour, they can do: . $\frac{1}{x} + \frac{1}{x-4}$ of the job. .[2] But [1] and [2] describe the same thing: . . the fraction of the job done in one hour. There is our equaton! . . . . $\boxed{\frac{1}{x} + \frac{1}{x-4} \:=\:\frac{1}{5}}$ Multiply by the common denominator: $5x(x - 4)$ . . $5(x - 4) + 5x \:=\:x(x-4)$ . . which simplifies to the quadratic: . $x^2 - 14x + 20 \:=\:0$ The Quadratic Formula gives us: . $x \;=\;\frac{14\pm\sqrt{116}}{2} \;=\;7 \pm\sqrt{29} \;\approx\;\{1.6,\:12.4\}$ Since $x > 4$. the solution is: . $x = 12.4$ Therefore, the smaller pump will take about 12.4 hours working alone. 8. ## tell me... Soroban, I thank you for sharing yet a more simplistic avenue to understanding this question.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 26, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8761281967163086, "perplexity": 847.01835358599}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320261.6/warc/CC-MAIN-20170624115542-20170624135542-00373.warc.gz"}
https://arxiv.org/abs/1912.07321
# Title:Transverse Collective Modes in Interacting Holographic Plasmas Abstract: We study in detail the transverse collective modes of simple holographic models in presence of electromagnetic Coulomb interactions. We render the Maxwell gauge field dynamical via mixed boundary conditions, corresponding to a double trace deformation in the boundary field theory. We consider three different situations: (i) a holographic plasma with conserved momentum, (ii) a holographic (dirty) plasma with finite momentum relaxation and (iii) a holographic viscoelastic plasma with propagating transverse phonons. We observe two interesting new features induced by the Coulomb interactions: a mode repulsion between the shear mode and the photon mode at finite momentum relaxation, and a propagation-to-diffusion crossover of the transverse collective modes induced by the finite electromagnetic interactions. Finally, at large charge density, our results are in agreement with the transverse collective mode spectrum of a charged Fermi liquid for strong interaction between quasi-particles, but with an important difference: the gapped photon mode is damped even at zero momentum. This property, usually referred to as anomalous attenuation, is produced by the interaction with a quantum critical continuum of states and might be experimentally observable in strongly correlated materials close to quantum criticality, e.g. in strange metals. Comments: 15 pages, 7 figures Subjects: High Energy Physics - Theory (hep-th); Strongly Correlated Electrons (cond-mat.str-el) Report number: IFT-UAM/CSIC-19-143 Cite as: arXiv:1912.07321 [hep-th] (or arXiv:1912.07321v1 [hep-th] for this version) ## Submission history From: Marcus Tornsö [view email] [v1] Mon, 16 Dec 2019 12:35:54 UTC (307 KB)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8974336981773376, "perplexity": 2382.7847894726683}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250594209.12/warc/CC-MAIN-20200119035851-20200119063851-00408.warc.gz"}
https://pineresearch.com/shop/kb/theory/eis-theory/basic-background-theory/
### EIS Basic Background Theory Last Updated: 5/7/19 by Neil Spinner ##### ARTICLE TAGS • EIS, • eis theory, • eis fundamental ### 1Theory Experimental electrochemistry can be as powerful as it is tricky.  Even simple DC methods (e.g., voltammetry, open circuit potential, chronoamperometry, chronopotentiometry) are often plagued by inaccuracies and/or poor signal-to-noise ratios resulting from seemingly insignificant or overlooked sources.  Variables that can affect electrochemical data include, but are not limited to: the state and quality of electrodes, electrolyte, experimental hardware, the physical laboratory layout, software experimental parameters, arrangement of cables, and grounding configuration. AC techniques, like electrochemical impedance spectroscopy (EIS), can be similarly affected by these variables and sources of error.  The user must exercise particular care and caution when setting up and running EIS experiments as the impact of small sources of error often has a larger effect on data quality than for DC methods.  Obtaining and interpreting meaningful EIS data, as with many other facets of electrochemistry, requires repeated practice and often some trial-and-error with respect to both the hardware and software. In AC electrochemistry, a sinusoidal potential (or current) signal is applied to a system and the resulting current (or potential) signal is recorded and analyzed (see Figure 1 for diagram and Table 1 for associated terminology).  The frequency and amplitude of the input signal are tuned by the user, while the output signal normally has the same frequency as the input signal but its phase may be shifted by a finite amount. Figure 1. AC Electrochemistry Sine Wave Input and Output Terminology Symbol Definition $\displaystyle{E(t)}$ time-dependent potential $\displaystyle{E_o}$ (peak) peak potential amplitude RMS root mean square potential amplitude pk-pk peak-to-peak potential amplitude $\displaystyle{t}$ time $\displaystyle{i(t)}$ time-dependent current $\displaystyle{i_o}$ (peak) peak current amplitude $\displaystyle{\phi}$ phase angle $\displaystyle{f}$ frequency (units of Hz) $\displaystyle{\omega}$ angular frequency (units of rad/s) Table 1. AC Electrochemistry Input and Output Symbol Definitions Practically, frequency (f) is reported in units of Hz.  However, for mathematical convenience the angular frequency (ω), which has units of rad/s and is equivalent to 2πf, is typically used for calculations instead (e.g., see input and output signal equations in Figure 1).  Similarly, the phase angle ($\displaystyle{\phi}$) is typically reported in units of degrees but calculated in units of radians. There are three conventions often used to define the input (and sometimes output) signal amplitude: peak, peak-to-peak, and RMS.  “Peak” refers to the difference between the sine wave set point (i.e., the potential or current at the beginning of the sine wave period) and its maximum or minimum point (i.e., the potential or current at one quarter of the sine wave period).  “Peak-to-peak” is simply twice the peak value (see Figure 1). “RMS”, which stands for “root mean square”, is a mathematical quantity used primarily in electrical engineering to compare AC and DC voltages or currents.  Though its practical relevance and importance to EIS measurements is somewhat minimal, it is still widely used in the industry to characterize input signal amplitude.  Mathematically, it is equivalent to the peak value divided by $\displaystyle{\sqrt{2}}$, or roughly peak times 0.707 (see Figure 1). During an EIS experiment, a sequence of sinusoidal potential signals with varying frequencies, but similar amplitudes, is applied to an electrochemical system.  Typically, frequencies of each input signal are equally spaced on a descending logarithmic scale from ~10 kHz - 1 MHz to a lower limit of ~10 mHz - 1 Hz.  Application of these input and output signals is usually performed automatically via a potentiostat/galvanostat. Monitoring the progress of an EIS experiment can be done by observing the input and output signals on a single current vs. potential graph called a Lissajous plot (see Figure 2).  Depending on the system under study, as well as the applied frequency and amplitude, the shape of the resulting Lissajous plot may vary.  Throughout an EIS experiment, the user can observe the progression and pattern of Lissajous plots as a means of identifying possibly erroneous data. Figure 2. Examples of Typical Lissajous Plots for Stable and Linear Systems The shape of the current vs. potential Lissajous plot for a stable, linear electrochemical system typically appears as either a tilted oval or straight line that repeatedly traces over itself (see Figure 2).  The width of the oval is indicative of the magnitude of the output signal phase angle.  For example, if the Lissajous plot looks like a perfect circle, it means the output signal is completely out of phase (i.e., +90°) with respect to the input signal.  This is also the EIS response experienced by an ideal capacitor or inductor. ### 2References Our knowledgebase is the central repository for written content, including help topics, theory, application notes, specifications, and software information. ##### Software Detailed information about our Software, which includes AfterMath and retired PineChem. ##### Applications Application notes discuss practical aspects of conducting specific experiments. ##### Theory Fundamental electrochemical theory presented in a brief and targeted manner. ##### Product Specifications Review complete product specifications and compare products within a category here.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 10, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8654327392578125, "perplexity": 2141.2390731917094}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027319470.94/warc/CC-MAIN-20190824020840-20190824042840-00204.warc.gz"}
https://www.physicsforums.com/threads/confusion-with-einstein-tensor-notation.694252/
# Confusion with Einstein tensor notation 1. May 28, 2013 ### Loro 1. The problem statement, all variables and given/known data I'm confused about writing down the equation: $\Lambda \eta \Lambda^{-1} = \eta$ in the Einstein convention. 2. Relevant equations The answer is: $\eta_{\mu\nu}\Lambda^{\mu}{}_{\rho}\Lambda^{\nu}{}_{\sigma} = \eta_{\rho\sigma}$ However it's strange because there seems to be no distinction between $\Lambda$ and $\Lambda^{-1}$ if we write it this way. However we know that: $(\Lambda^{-1})^{\mu}{}_{\nu} = \Lambda_{\nu}{}^{\mu}$ 3. The attempt at a solution If the equation was instead $\Lambda B \Lambda^{-1} = B$ Where $B$ is a tensor given in the form $B^{\mu}{}_{\nu}$ then it's clear to me how to write it: $\Lambda^{\rho}{}_{\mu} B^{\mu}{}_{\nu} \Lambda_{\sigma}{}^{\nu} = B^{\rho}{}_{\sigma}$ But $\eta$ is given in the form $\eta^{\mu\nu}$ and I don't understand how I can contract it with both $\Lambda^{\mu}{}_{\nu}$ and $\Lambda_{\nu}{}^{\mu}$ in order to arrive eventually at the result quoted in (2). 2. May 28, 2013 ### Mandelbroth Is there an actual question? :tongue: So, your confusion is how (2) works? 3. May 28, 2013 ### Loro Haha sorry :tongue: I would like to know why (2) works, and possibly how I could arrive at it, starting from an expression that has both $\Lambda^{\mu}{}_{\nu}$ and $\Lambda_{\nu}{}^{\mu}$. 4. May 28, 2013 ### Dick Well, just raise the $\mu$ index and lower the $\rho$ index on the first $\Lambda$ in your form with the B tensor using the metric tensor. Last edited: May 28, 2013 5. May 29, 2013 ### Loro Thanks, Like that: ? $\Lambda_{\rho}{}^{\mu} \eta_{\mu}{}_{\nu} \Lambda_{\sigma}{}^{\nu} = \eta_{\rho}{}_{\sigma}$ But then again both $\Lambda$'s are of the same form - this time they both seem to be inverses. Draft saved Draft deleted Similar Discussions: Confusion with Einstein tensor notation
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9785434603691101, "perplexity": 688.2141242381977}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886102891.30/warc/CC-MAIN-20170817032523-20170817052523-00347.warc.gz"}
https://owenduffy.net/blog/?p=14660
# Measuring trap resonant frequency with an antenna analyser Finding the resonant frequency of a resonant circuit such as an antenna trap is usually done by coupling a source and power sensor very loosely to the circuit. A modern solution is an antenna analyser or one port VNA, it provides both the source and the response measurement from one coax connector. Above is a diagram from the Rigexpert AA35Zoom manual showing at the left a link (to be connected the analyser) and the trap (here made with coaxial cable. The advantage of this method is that no wire attachments are needed on the device under test, and that coupling of the test instrument is usually easily optimised. ## Why / how does it work? So, what is happening here? Lets create an equivalent circuit of a similar 1t coil and a solenoid with resonating capacitor. The two coupled coils can be represented by an equivalent circuit that is derived from the two inductances and their mutual inductance. The circuit above represents a 1µH coil and a 10µH coil that are coupled such that 3% of the flux of 5% of the flux of one coil cuts the other (they are quite loosely coupled, as in the pic above. The resonant frequency of the 10µH coil and 100pF capacitor can be calculated to be 5.033MHz… and this is the value we want to find from our measurement. Above is a plot of the magnitude of S11. You can see that the cursor set to the theoretical (ie known) resonant frequency coincides almost exactly with the minimum |S11|, and therefore almost exactly with the theoretical (ie known) resonant frequency. Lets increase the coupling. Above, the equivalent circuit with the same coils but 9% flux coupling (the coils have been moved closer together). Above, we have a deeper response, but note the minimum |S11| is now further away from the cursor which is at the theoretical (ie known) resonant frequency. Too much coupling causes interaction with the test object. ### How can you determine how much is too much coupling? One approach is to simply couple up tightly and find the response, and loosen the coupling until the frequency for minimum response stops moving. ### So where do you measure |S11|? Your instrument may display S11 labelled as the complex reflection coefficient, or it may display the magnitude of the complex reflection coefficient, or it may display Return Loss (which is -|S11|). VSWR is related to |S11|, minimising VSWR is akin to minimising |S11| (or maximising Return Loss). Use whatever feature your analyser offers. ## Practical problems Some analysers will not show a useful response for very loose coupling, eg they may not indicate VSWR greater than say 10. You really need to explore the instrument and manual to find if there is a way to display extreme VSWR, even if only at one frequency. There is good reason why some analysers might not show extreme VSWR. If the inherent resolution of the instrument is poor (eg analysers with 8 bit ADCs), then it may not have sufficient accuracy to usefully display extreme VSWR. Sometimes it is just that the designer didn’t really understand the instrument applications in the real world. Of course this technique will not work on a trap that is substantially enclosed in a shield that prevents magnetic coupling. ## Example Here is a measurement made of a parallel resonant circuit at 1.8MHz using a 60mm diameter 1t coil of 2mm copper wire connected directly to an AA-600. Above is a ReturnLoss scan. It is not possible to expand the scale any more… did I mention that designers often do not understand real world applications. Nevertheless we can see that the middle of the peak in the response is at 1.813MHz where ReturnLoss is 0.34dB (equates to 51.6).
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8136494755744934, "perplexity": 1519.0298353846129}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578530100.28/warc/CC-MAIN-20190421000555-20190421021543-00005.warc.gz"}
http://math.stackexchange.com/questions/189287/proof-for-heine-borel-theorem?answertab=oldest
Proof for Heine Borel theorem I am trying to prove the Heine Borel theorem for compactness of the closed interval $[0,1]$ using Konig's lemma. This is what I have so far: 1. I assume $[0,1]$ can be covered by $\{(a_i,b_i):i=0,1,2\cdots\}$. 2. I construct a graph $G$ as follows: First let a vertex be labelled $[0,1]$ (the root). Then consider $[0,1]-(a_0,b_0)\cup(a_1,b_1)$. This consists of $n_1$ closed intervals where $n_1$ is finite. Adjoin the $[0,1]$ vertex with $n_1$ vertices labelled by these closed intervals (these vertices will be at level 1). Next consider $[0,1]-(a_0,b_0)\cup(a_1,b_1)\cup(a_2,b_2)$. This consists of $n_2$ closed intervals. Each of these closed intervals is a subset of exactly one the closed intervals considered in the previous step. Make $n_2$ vertices labelled by these closed intervals and adjoin them to that vertex created in the previous step of which it is a subset of (these vertices will be at level 2). Continue doing so for higher levels, each time obtaining the labels by considering the closed interval obtained from $[0,1]-(a_0,b_0)\cup(a_1,b_1)\cdots \cup(a_i,b_i)$. 3. This yields a rooted tree $G$ where each level is finite. 4. Suppose the tree contained an infinite path: $[0,1]\supset[\alpha_1,\beta_1]\supset[\alpha_2,\beta_2]\cdots$. 5. Since a sequence of nested closed intervals is nonempty so there is an element $x$ in it. As $x\in [0,1]$ so $x\in(a_i,b_i)$ for some $i$. But then $x$ cannot exist in any interval which is at a level beyond $i$, yielding a contradiction to 4. 6. So by the contrapositive form of Konig's lemma, $G$ cannot be infinite. It follows that for some $i$, $[0,1]-(a_0,b_0)\cup(a_1,b_1)\cdots \cup(a_i,b_i)$ is empty. Hence $[0,1]$ is covered by $(a_0,b_0)\cup(a_1,b_1)\cdots \cup(a_i,b_i)$. My doubts are in the arguments presented in 2. and 6. Are they correct? In particular is this statement: "Each of these closed intervals is a subset of exactly one the closed intervals considered in the previous step." correct? What is an upper bound for $n_k$? Thanks - The argument is correct, but it can be cleaned up a bit. Here’s one possible way. Without loss of generality assume that $0\in(a_0,b_0)$, $1\in(a_1,b_1)$, and $b_0\le a_1$. Construct a sequence of closed subsets of $[0,1]$ as follows: $C_0=[0,1]$, and $C_{n+1}=C_n\setminus(a_n,b_n)$ for $n\in\Bbb N$. Claim: Each $C_n$ is the union of a finite family of pairwise disjoint closed intervals (which may be degenerate). Proof: This is clearly true for $C_0,C_1=[b_0,1]$, and $C_2=[b_0,a_1]$. Suppose that it holds for $C_n$, where $n\ge 2$, and write $C_n=\bigcup_{k=1}^m[c_k,d_k]$, where $c_1\le d_1<c_2\le d_2<\dots<c_m\le d_m$. That is, $c_k\le d_k$ for $k=1,\dots,m$, and $d_k<c_{k+1}$ for $k=1,\dots,m-1$. Then $C_{n+1}$ is the disjoint union of the following closed intervals: the intervals $[c_k,d_k]$ such that $d_k\le a_n$ or $c_k>b_n$; the interval $[c_k,a_n]$ if $c_k\le a_n<d_k$; and the interval $[b_n,d_k]$ if $c_k<b_n\le d_k$. The result now follows by induction. $\dashv$ For $n\in\Bbb N$ let $\mathscr{C}_n$ be the set of pairwise disjoint closed intervals that are the connected components of $C_n$, and let $\mathscr{C}=\bigcup_{n\in\Bbb N}\mathscr{C}_n$. It’s clear that if $m\le n$ and $I\in\mathscr{C}_n$, there is a unique $J\in\mathscr{C}_m$ such that $I\subseteq J$. Thus, $\langle\mathscr{C},\supseteq\rangle$ is a tree of height $\omega$, and $\mathscr{C}_n=\operatorname{Lev}_n\mathscr{C}$ for each $n\in\Bbb N$, so $\mathscr{C}$ has finite levels. It follows from König’s theorem that there is a branch $\beta=\langle I_n:n\in\Bbb N\rangle$ through $\mathscr{C}$. Then $\beta$ is a nested sequence of closed intervals, so $\bigcap_{n\in\Bbb N}I_n\ne\varnothing$. Fix $x\in\bigcap_{n\in\Bbb N}I_n\ne\varnothing$. Then $x\in[0,1]$, but for each $n\in\Bbb N$ we have $x\in I_{n+1}\subseteq[0,1]\setminus(a_n,b_n)$, so $x\in[0,1]\setminus\bigcup_{n\in\Bbb N}(a_n,b_n)$, contradicting the assumption that $\{(a_n,b_n):n\in\Bbb N\}$ is a cover of $[0,1]$. For completeness there are a couple of things that you ought to say first: you really should start with an arbitrary open cover $\mathscr{U}$ of $[0,1]$ and then justify replacing it by a countable cover by open intervals. How you do this depends on what tools you consider to be available. To answer the final question, note that each step can increase the number of closed intervals by at most one: in my construction this happens exactly when there is a $k$ such that $c_k\le a_n<b_n\le d_k$. Since $|\mathscr{C}_n=1|$ for $n=0,1,2$, this means that $|\mathscr{C}_n|\le n-1$ for $n\ge 2$. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9857609272003174, "perplexity": 58.29648668030843}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375098196.31/warc/CC-MAIN-20150627031818-00060-ip-10-179-60-89.ec2.internal.warc.gz"}
https://www.aimsciences.org/journal/1531-3492/2009/12/1
# American Institute of Mathematical Sciences ISSN: 1531-3492 eISSN: 1553-524X All Issues ## Discrete & Continuous Dynamical Systems - B July 2009 , Volume 12 , Issue 1 Select all articles Export/Reference: 2009, 12(1): 1-22 doi: 10.3934/dcdsb.2009.12.1 +[Abstract](2117) +[PDF](720.1KB) Abstract: Two coupled partial differential equations which describe the motion of a viscoelastic (Kelvin-Voigt type) Timoshenko beam are formulated with the complementarity conditions. This dynamic impact problem is considered a boundary thin obstacle problem. The existence of solutions is proved. A major concern is to pursue an investigation into conservation of energy (or energy balance), which is performed both theoretically and numerically. 2009, 12(1): 23-38 doi: 10.3934/dcdsb.2009.12.23 +[Abstract](2020) +[PDF](208.4KB) Abstract: This work extends the model developed by Gao (1996) for the vibrations of a nonlinear beam to the case when one of its ends is constrained to move between two reactive or rigid stops. Contact is modeled with the normal compliance condition for the deformable stops, and with the Signorini condition for the rigid stops. The existence of weak solutions to the problem with reactive stops is shown by using truncation and an abstract existence theorem involving pseudomonotone operators. The solution of the Signorini-type problem with rigid stops is obtained by passing to the limit when the normal compliance coefficient approaches infinity. This requires a continuity property for the beam operator similar to a continuity property for the wave operator that is a consequence of the so-called div-curl lemma of compensated compactness. 2009, 12(1): 39-76 doi: 10.3934/dcdsb.2009.12.39 +[Abstract](1994) +[PDF](1394.1KB) Abstract: We consider a general model of chemotaxis with finite speed of propagation in one space dimension. For this model we establish a general result of stability of some constant states both for the Cauchy problem on the whole real line and for the Neumann problem on a bounded interval. These results are obtained using the linearized operators and the accurate analysis of their nonlinear perturbations. Numerical schemes are proposed to approximate these equations, and the expected qualitative behavior for large times is compared to several numerical tests. 2009, 12(1): 77-108 doi: 10.3934/dcdsb.2009.12.77 +[Abstract](2446) +[PDF](512.8KB) Abstract: In this work we derive a hierarchy of new mathematical models for describing the motion of phototactic bacteria, i.e., bacteria that move towards light. These models are based on recent experiments suggesting that the motion of such bacteria depends on the individual bacteria, on group dynamics, and on the interaction between bacteria and their environment. Our first model is a collisionless interacting particle system in which we follow the location of the bacteria, their velocity, and their internal excitation (a parameter whose role is assumed to be related to communication between bacteria). In this model, the light source acts as an external force. The resulting particle system is an extension of the Cucker-Smale flocking model. We prove that when all particles are fully excited, their asymptotic velocity tends to an identical (pre-determined) terminal velocity. Our second model is a kinetic model for the one-particle distribution function that includes an internal variable representing the excitation level. The kinetic model is a Vlasov-type equation that is derived from the particle system using the BBGKY hierarchy and molecular chaos assumption. Since bacteria tend to move in areas that were previously traveled by other bacteria, a surface memory effect is added to the kinetic model as a turning operator that accounts for the collisions between bacteria and the environment. The third and final model is derived as a formal macroscopic limit of the kinetic model. It is shown to be the Vlasov-McKean equation coupled with a reaction-diffusion equation. 2009, 12(1): 109-131 doi: 10.3934/dcdsb.2009.12.109 +[Abstract](2084) +[PDF](602.8KB) Abstract: We introduce a characterization of exponential dichotomies for linear difference equations that can be tested numerically and enables the approximation of dichotomy rates and projectors with high accuracy. The test is based on computing the bounded solutions of a specific inhomogeneous difference equation. For this task a boundary value and a least squares approach is applied. The results are illustrated using Hénon's map. We compute approximations of dichotomy rates and projectors of the variational equation, along a homoclinic orbit and an orbit on the attractor as well as for an almost periodic example. For the boundary value and the least squares approach, we analyze in detail errors that occur, when restricting the infinite dimensional problem to a finite interval. 2009, 12(1): 133-149 doi: 10.3934/dcdsb.2009.12.133 +[Abstract](2913) +[PDF](799.7KB) Abstract: A global bifurcation result is obtained for families of competitive systems of difference equations $x_{n+1} = f_\alpha(x_n,y_n)$ $y_{n+1} = g_\alpha(x_n,y_n)$ where $\alpha$ is a parameter, $f_\alpha$ and $g_\alpha$ are continuous real valued functions on a rectangular domain $\mathcal{R}_\alpha \subset \mathbb{R}^2$ such that $f_\alpha(x,y)$ is non-decreasing in $x$ and non-increasing in $y$, and $g_\alpha(x, y)$ is non-increasing in $x$ and non-decreasing in $y$. A unique interior fixed point is assumed for all values of the parameter $\alpha$. As an application of the main result for competitive systems a global period-doubling bifurcation result is obtained for families of second order difference equations of the type $x_{n+1} = F_\alpha(x_n, x_{n-1}), \quad n=0,1, \ldots$ where $\alpha$ is a parameter, $F_\alpha:\mathcal{I_\alpha}\times \mathcal{I_\alpha} \rightarrow \mathcal{I_\alpha}$ is a decreasing function in the first variable and increasing in the second variable, and $\mathcal{I_\alpha}$ is a interval in $\mathbb{R}$, and there is a unique interior equilibrium point. Examples of application of the main results are also given. 2009, 12(1): 151-168 doi: 10.3934/dcdsb.2009.12.151 +[Abstract](2348) +[PDF](2061.4KB) Abstract: The purpose of this paper is to present qualitative and bifurcation analysis near the degenerate equilibrium in models of interactions between lymphocyte cells and solid tumor and to understand the development of tumor growth. Theoretical analysis shows that these cancer models can exhibit Bogdanov-Takens bifurcation under sufficiently small perturbation of the system parameters whether it is vascularized or not. Periodic oscillation behavior and coexistence of the immune system and the tumor in the host are found to be influenced significantly by the choice of bifurcation parameters. It is also confirmed that bifurcations of codimension higher than 2 cannot occur at this equilibrium in both cases. The analytic bifurcation diagrams and numerical simulations are given. Some anomalous properties are discovered from comparing the vascularized case with the avascular case. 2009, 12(1): 169-186 doi: 10.3934/dcdsb.2009.12.169 +[Abstract](3563) +[PDF](681.8KB) Abstract: The global dynamics of a periodic SIS epidemic model with maturation delay is investigated. We first obtain sufficient conditions for the single population growth equation to admit a globally attractive positive periodic solution. Then we introduce the basic reproduction ratio $\mathcal{R}_0$ for the epidemic model, and show that the disease dies out when $\mathcal{R}_0<1$, and the disease remains endemic when $\mathcal{R}_0>1$. Numerical simulations are also provided to confirm our analytic results. 2009, 12(1): 187-203 doi: 10.3934/dcdsb.2009.12.187 +[Abstract](2533) +[PDF](270.7KB) Abstract: The theory of Lyapunov exponents and methods from ergodic theory have been employed by several authors in order to study persistence properties of dynamical systems generated by ODEs or by maps. Here we derive sufficient conditions for uniform persistence, formulated in the language of Lyapunov exponents, for a large class of dissipative discrete-time dynamical systems on the positive orthant of $\mathbb{R}^m$, having the property that a nontrivial compact invariant set exists on a bounding hyperplane. We require that all so-called normal Lyapunov exponents be positive on such invariant sets. We apply the results to a plant-herbivore model, showing that both plant and herbivore persist, and to a model of a fungal disease in a stage-structured host, showing that the host persists and the disease is endemic. 2009, 12(1): 205-218 doi: 10.3934/dcdsb.2009.12.205 +[Abstract](1984) +[PDF](156.6KB) Abstract: This note is concerned with the identification of the absorption coefficient in a parabolic system. It introduces an algorithm that can be used to recover the unknown function. The algorithm is iterative in nature. It assumes an initial value for the unknown function and updates it at each iteration. Using the assumed value, the algorithm obtains a background field and computes the equation for the error at each iteration. The error equation includes the correction to the assumed value of the unknown function. Using the measurements obtained at the boundaries, the algorithm introduces two formulations for the error dynamics. By equating the responses of these two formulations it is then possible to obtain an equation for the unknown correction term. A number of numerical examples are also used to study the performance of the algorithm. 2009, 12(1): 219-225 doi: 10.3934/dcdsb.2009.12.219 +[Abstract](2110) +[PDF](127.9KB) Abstract: In this paper, we consider the initial-boundary value problem of Burgers equation with a time delay. Using a fixed point theorem and a comparison principle, we show that the time-delayed Burgers equation is exponentially stable under small delays. The result is more explicit, but also complements, the result given by Weijiu Liu [Discrete and Continuous Dynamical Systems-Series B, 2:1(2002),47-56], which was based on the Liapunov function approach. 2009, 12(1): 227-250 doi: 10.3934/dcdsb.2009.12.227 +[Abstract](1717) +[PDF](470.1KB) Abstract: To mimic the striking capability of microbial culture for growth adaptation after the onset of the novel environmental conditions, a modified heterogeneous microbial population model in the chemostat with essential resources is proposed which considers adaptation by spontaneously phenotype-switching between normally growing cells and persister cells having reduced growth rate. A basic reproductive number $R_0$ is introduced so that the population dies out when $R_0<1$, and when $R_0>1$ the population will be asymptotic to a steady state of persister cells, or a steady state of only normal cells, or a steady state corresponding to a heterogeneous population of both normal and persister cells. Our analysis confirms that inherent heterogeneity of bacterial populations is important in adaption to fluctuating environments and in the persistence of bacterial infections. 2009, 12(1): 251-260 doi: 10.3934/dcdsb.2009.12.251 +[Abstract](2227) +[PDF](143.2KB) Abstract: Some existence theorems are obtained for periodic and subharmonic solutions of ordinary $P$-Laplacian systems by the minimax methods in critical point theory. 2019  Impact Factor: 1.27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8833602070808411, "perplexity": 386.33171527416215}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703495936.3/warc/CC-MAIN-20210115164417-20210115194417-00541.warc.gz"}
http://mathoverflow.net/questions/73636/how-can-i-make-a-non-gaussian-first-order-autoregressive-sequence-of-random-vari
How can I make a Non-Gaussian first order autoregressive sequence of random variables independent? Hi everybody, Consider a sequence of Non-Gaussian first order autoregressive random variables of length $N$, $\mathbf{X}=\{x_i\}_{i=1}^N$, generated from a common stationary distribution $p(\mathbf{x})$, with covariance matrix $$\mathbf{K}_{\mathbf{x}\mathbf{x}}=Toeplitz(1, \rho, \rho^2, \ldots, \rho^{N-1}),$$ where $\rho$ is a normalized correlation coefficient. Can you please help me find some approaches to make $\mathbf{X}$ as a sequence of independent identically distributed (i.i.d.) random variables.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8597054481506348, "perplexity": 175.51145483771586}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507444312.8/warc/CC-MAIN-20141017005724-00220-ip-10-16-133-185.ec2.internal.warc.gz"}
https://ahilado.wordpress.com/2017/06/
In Valuations and Completions we introduced the $p$-adic numbers $\mathbb{Q}_{p}$, which, like the real numbers, are the completion of the rational numbers under a certain kind of valuation. There is one such valuation for each prime number $p$, and another for the “infinite prime”, which is just the usual absolute value. Each valuation may be thought of as encoding number theoretic information related to the prime $p$, or to the “infinite prime”, for the case of the absolute value (more technically, the $p$-adic valuations are referred to as nonarchimedean valuations, while the absolute value is an example of an archimedean valuation). We can consider valuations not only for the rational numbers, but for more general algebraic number fields as well. In its abstract form, given an algebraic number field $K$, a (multiplicative) valuation of $K$ is simply any function $|\ |$ from $K$ to $\mathbb{R}$ satisfying the following properties: (i) $|x|\geq 0$, where $x=0$ if and only if $x=0$ (ii) $|xy|=|x||y|$ (iii) $|x+y|\leq|x|+|y|$ If this seems reminiscent of the discussion in Metric, Norm, and Inner Product, it is because a valuation does, in fact, define a metric on $K$, and by extension, a topology. Two valuations are equivalent if they define the same topology; another way to phrase this statement is that two valuations $|\ |_{1}$ and $|\ |_{2}$ are equivalent if $|x|_{1}=|x|_{2}^{s}$ for some positive real number $s$, for all $x\in K$.  The valuation is nonarchimedean if $|x+y|\leq\text{max}\{|x|,|y|\}$; otherwise, it is archimedean. Just as in the case of rational numbers, we also have an exponential valuation, defined as a function $v$ from the field $K$ to $\mathbb{R}\cup \infty$ satisfying the following conditions: (i) $v(x)=\infty$ if and only if $x=0$ (ii) $v(xy)=v(x)+v(y)$ (iii) $v(x+y)\geq\text{min}\{v(x),v(y)\}$ Two exponential valuations $v_{1}$ and $v_{2}$ are equivalent if $v_{1}(x)=sv_{2}(x)$ for some real number $s$, for all $x\in K$. The idea of valuations allows us to make certain concepts in algebraic number theory (see Algebraic Numbers) more abstract. We define a place $v$ of an algebraic number field $K$ as an equivalence class of valuations of $K$. We write $K_{v}$ to denote the completion of $K$ under the place $v$; these are the generalizations of the $p$-adic numbers and real numbers to algebraic number fields other than $\mathbb{Q}$. The nonarchimedean places are also called the finite places, while the archimedean places are also called the infinite places. To express whether a place $v$ is a finite place or an infinite place, we write $v|\infty$ or $v\nmid\infty$ respectively. The infinite places are of two kinds; the ones for which $K_{v}$ is isomorphic to $\mathbb{R}$ are called the real places, while the ones for which $K_{v}$ is isomorphic to $\mathbb{C}$ are called the complex places. The number of real places and complex places of $K$, denoted by $r_{1}$ and $r_{2}$ respectively, satisfy the equation $r_{1}+2r_{2}=n$, where $n$ is the degree of $K$ over $\mathbb{Q}$, i.e. $n=[K:\mathbb{Q}]$. By the way, in some of the literature, such as in the book Algebraic Number Theory by Jurgen Neukirch, “places” are also referred to as “primes“. This is intentional – one may actually think of our definition of places as being like a more abstract replacement of the definition of primes. This is quite advantageous in driving home the concept of primes as equivalence classes of valuations; however, to avoid confusion, we will stick to using the term “places” here, along with its corresponding notation. When $v$ is a nonarchimedean valuation, we let $\mathfrak{o}_{v}$ denote the set of all elements $x$ of $K_{v}$ for which $|x|_{v}\leq 1$. It is an example of a ring with special properties called a valuation ring. This means that, for any $x$ in $K$, either $x$ or $x^{-1}$ must be in $\mathfrak{o}_{v}$. We let $\mathfrak{o}_{v}^{*}$ denote the set of all elements of $\mathfrak{o}_{v}$ for which $|x|_{v}=1$, and we let $\mathfrak{p}_{v}$ denote the set of all elements of $\mathfrak{o}_{v}$ for which $|x|_{v}< 1$. It is the unique maximal ideal of $\mathfrak{o}_{v}$. Now we proceed to consider the modern point of view in algebraic number theory, which is to consider all these equivalence classes of valuations together. This will lead us to the language of adeles and ideles. An adele $\alpha$ of $K$ is a family $(\alpha_{v})$ of elements $\alpha_{v}$ of $K_{v}$ where $\alpha_{v}\in K_{v}$, and $\alpha_{v}\in\mathfrak{o}_{v}$ for all but finitely many $v$. We can define addition and multiplication componentwise on adeles, and the resulting ring of adeles is then denoted $\mathbb{A}_{K}$. The group of units of the ring of adeles is called the group of ideles, denoted $I_{K}$. For a finite set of primes $S$ that includes the infinite primes, we let $\displaystyle \mathbb{A}_{K}^{S}=\prod_{v\in S}K_{v}\times\prod_{v\notin S}\mathfrak{o}_{v}$ and $\displaystyle I_{K}^{S}=\prod_{v\in S}K_{v}^{*}\times\prod_{v\notin S}\mathfrak{o}_{v}^{*}$. We denote the set of infinite primes by $S_{\infty}$. Then $\mathfrak{o}_{K}$, the ring of integers of the number field $K$, is given by $K\cap\mathbb{A}_{K}^{S_{\infty}}$, while $\mathfrak{o}_{K}^{*}$, the group of units of $\mathfrak{o}_{K}$, is given by $K^{*}\cap I_{K}^{S_{\infty}}$. Any element of $K$ is also an element of $\mathbb{A}_{K}$, and any element of $K^{*}$ (the group of units of $K$) is also an element of $I_{K}$. The elements of $I_{K}$ which are also elements of $K^{*}$ are called the principal ideles. This should not be confused with the concept of principal ideals; however the terminology is perhaps suggestive on purpose. In fact, ideles and fractional ideals are related. Any fractional ideal $\mathfrak{a}$ can be expressed in the form $\displaystyle \mathfrak{a}=\prod_{\mathfrak{p}}\mathfrak{p}^{\nu_{\mathfrak{p}}}$. Therefore, we have a mapping $\displaystyle \alpha\mapsto (\alpha)=\prod_{\mathfrak{p}}\mathfrak{p}^{v_{\mathfrak{p}}(\alpha_v)}$ from the group of ideles to the group of fractional ideals. This mapping is surjective, and its kernel is $I_{K}^{S_{\infty}}$. The quotient group $I_{K}/K^{*}$ is called the idele class group of $K$, and is denoted by $C_{K}$. Again, this is not to be confused with the ideal class group we discussed in Algebraic Numbers, although the two are related; in the language of ideles, the ideal class group is defined as $I_{K}/I_{K}^{S_{\infty}}K^{*}$, and is denoted by $Cl_{K}$. There is a surjective homomorphism $C_{K}\mapsto Cl_{K}$ induced by the surjective homomorphism from the group of ideles to the group of fractional ideals that we have described in the preceding paragraph. An important aspect of the concept of adeles and ideles is that they can be equipped with topologies (see Basics of Topology and Continuous Functions). For the adeles, this topology is generated by the neighborhoods of $0$ in $\mathbb{A}_{K}^{S_{\infty}}$ under the product topology. For the ideles, this topology is defined by the condition that the mapping $\alpha\mapsto (\alpha,\alpha^{-1})$ from $I_{K}$ into $\mathbb{A}_{K}\times\mathbb{A}_{K}$ be a homeomorphism onto its image. Both topologies are locally compact, which means that every element has a neighborhood which is compact, i.e. every open cover of that neighborhood has a finite subcover. For the group of ideles, its topology is compatible with its group structure, which makes it into a locally compact topological group. In this post, we have therefore seen how the theory of valuations can allow us to consider a more abstract viewpoint for algebraic number theory, and how considering all the valuations together to form adeles and ideles allows us to rephrase the usual concepts related to algebraic number fields, such as the ring of integers, its group of units, and the ideal class group, in a new form. In addition, the topologies on the adeles and ideles can be used to obtain new results; for instance, because the group of ideles is a locally compact topological (abelian) group, we can use the methods of harmonic analysis (see Some Basics of Fourier Analysis) to study it. This is the content of the famous thesis of the mathematician John Tate. Another direction where the concept of adeles and ideles can take us is class field theory, which relates the idele class group to the other important group in algebraic number theory, the Galois group (see Galois Groups). The language of adeles and ideles can also be applied not only to algebraic number fields but also to function fields of curves over finite fields. Together these fields are also known as global fields. References: Tate’s Thesis on Wikipedia Class Field Theory on Wikipedia Algebraic Number Theory by Jurgen Neukirch Algebraic Number Theory by J. W. S. Cassels and A. Frohlich A Panorama of Pure Mathematics by Jean Dieudonne In Category Theory we introduced the language of categories, and in many posts in this blog we have seen how useful it is in describing concepts in modern mathematics, for example in the two most recent posts, The Theory of Motives and Algebraic Spaces and Stacks. In this post, we introduce another important concept in category theory, that of adjoint functors, as well as the closely related notion of monads. Manifestations of these ideas are quite ubiquitous in modern mathematics, and we enumerate a few examples in this post. An adjunction between two categories $\mathbf{C}$ and $\mathbf{D}$ is a pair of functors, $F:\mathbf{C}\rightarrow \mathbf{D}$, and $G:\mathbf{D}\rightarrow \mathbf{C}$, such that there exists a bijection $\displaystyle \text{Hom}_{\mathbf{D}}(F(X),Y)\cong\text{Hom}_{\mathbf{C}}(X,G(Y))$ for all objects $X$ of $\mathbf{C}$ and all objects $Y$ of $\mathbf{D}$. We say that $F$ is left-adjoint to $G$, and that $G$ is right-adjoint to $F$. We may also write $F\dashv G$. An adjunction determines two natural transformations $\eta: 1_{\mathbf{C}}\rightarrow G\circ F$ and $\epsilon:F\circ G\rightarrow 1_{\mathbf{D}}$, called the unit and counit, respectively. Conversely, the functors $F$ and $G$, together with the natural transformations $\eta$ and $\epsilon$, are enough to determine the adjunction, therefore we can also denote the adjunction by $(F,G,\eta,\epsilon)$. We give an example of an adjunction. Let $K$ be a fixed field, and consider the functors $F:\textbf{Sets}\rightarrow\textbf{Vect}_{K}$ $\displaystyle G:\textbf{Vect}_{K}\rightarrow\textbf{Sets}$ where $F$ is the functor which assigns to a set $X$ the vector space $F(X)$ made up of formal linear combinations of elements of $X$ with coefficients in $K$; in other words, an element of $F(X)$ can be written as $\sum_{i}a_{i}x_{i}$, where $a_{i}\in K$ and $x_{i}\in X$, and $G$ is the forgetful functor, which assigns to a vector space $V$ the set $G(V)$ of elements (vectors) of $V$; in other words it simply “forgets” the vector space structure on $V$. For every function $g:X\rightarrow G(V)$ in $\textbf{Sets}$ we have a linear transformation $f:F(X)\rightarrow V$ in $\textbf{Vect}_{K}$ given by $f(\sum_{i}a_{i}x_{i})=\sum_{i}a_{i}g(x_{i})$. The correspondence $\psi:g\rightarrow f$ has an inverse $\varphi$, given by restricting $f$ to $X$ (so that our only linear transformations are of the form $f(x_{i})$, and we can obtain set-theoretic functions corresponding to these linear transformations). Hence we have a bijection $\displaystyle \text{Hom}_{\textbf{Vect}_{K}}(F(X),V)\cong\text{Hom}_{\textbf{Sets}}(X,G(V))$. We therefore see that the two functors $F$ and $G$ form an adjunction; the functor $F$ (sometimes called the free functor) is left-adjoint to the forgetful functor $G$, and $G$ is right-adjoint to $F$. As another example, consider now the category of modules over a commutative ring $R$, and the functors $-\otimes_{R}B$ and $\text{Hom}_{R}(B,-)$ (see The Hom and Tensor Functors). For every morphism $g:A\otimes_{R}B\rightarrow C$ we have another morphism $f: A\rightarrow\text{Hom}_{R}(B,C)$ given by $[f(a)](b)=g(a,b)$. We actually have a bijection $\displaystyle \text{Hom}(A\otimes_{R}B,C)\cong\text{Hom}(A,\text{Hom}_{R}(B,C))$. This is called the Tensor-Hom adjunction. Closely related to the concept of an adjunction is the concept of a monad. A monad is a triple $(T,\eta,\mu)$ where $T$ is a functor from $\mathbf{C}$ to itself, $\eta$ is a natural transformation from $1_{\mathbf{C}}$ to $T$, and $\mu$ is a natural transformation from $\mu:T^{2}\rightarrow T$, satisfying the following properties: $\displaystyle \mu\circ\mu_{T}=\mu\circ T\mu$ $\displaystyle \mu\circ\eta_{T}=\mu\circ T\eta=1$ Dual to the concept of a monad is the concept of a comonad. A comonad on a category $\mathbf{C}$ may be thought of as a monad on the opposite category $\mathbf{C}^{\text{op}}$. As an example of a monad, we can consider the action of a fixed group $G$ on a set (such as the symmetric group permuting the elements of the set, for example). In this case, our category will be $\mathbf{Sets}$, and $T$, $\eta$, and $\mu$ are given by $\displaystyle T(X)=G\times X$ $\displaystyle \eta:X\rightarrow G\times X$ given by $x\rightarrow\langle g,x\rangle$ $\displaystyle \mu:G\times (G\times X)\rightarrow G\times X$ given by $\langle g_{1},\langle g_{2},x\rangle\rangle\rightarrow \langle g_{1}g_{2},x\rangle$ Adjunctions and monads are related in the following way. Let $F:\mathbf{C}\rightarrow\mathbf{D}$ and $G:\mathbf{D}\rightarrow\mathbf{C}$ be a pair of adjoint functors with unit $\eta$ and counit $\epsilon$. Then we have a monad on $\mathbf{C}$ given by $(G\circ F,\eta,G\epsilon_{F})$. We can also obtain a comonad given by $(F\circ G,\epsilon,F\eta_{G})$. Conversely, if we have a monad $(T,\eta,\mu)$ on the category $\mathbf{C}$, we can obtain a pair of adjoint functors $F:\mathbf{C}\rightarrow\mathbf{C}^{T}$ and $G:\mathbf{C}^{T}\rightarrow\mathbf{C}$, where $\mathbf{C}^{T}$ is the Eilenberg-Moore category, whose objects (called $T$-algebras) are pairs $(A,\alpha)$, where $A$ is an object of $\mathbf{C}$, and $\alpha$ is a morphism $T(A)\rightarrow A$ satisfying $\displaystyle \alpha\circ \eta_{A}=1_{A}$ $\displaystyle \alpha\circ \mu_{A}=\alpha\circ T(\alpha)$, and whose morphisms $h:(A,\alpha)\rightarrow (B,\beta)$ are morphisms $h:A\rightarrow B$ in $\mathbf{C}$ such that $\displaystyle h\circ\alpha=\beta\circ T(h)$. In the example we gave above in the discussion on monads, the $T$-algebras are exactly the sets with the action of the group $G$. If $X$ is such a set, then the corresponding $T$-algebra is the pair $(X,h)$, where the function $h:G\times X\rightarrow X$ satisfies $\displaystyle h(g_{1},h(g_{2},x))=h(g_{1}g_{2},x)$ $\displaystyle h(e,x)=x$. For comonads, we have a dual notion of coalgebras. These “dual” ideas are important objects of study in themselves, for example in topos theory. Another reason to consider comonads and coalgebras is that in mathematics there often arises a situation where we have three functors $\displaystyle L:\mathbf{D}\rightarrow\mathbf{C}$ $\displaystyle F:\mathbf{C}\rightarrow\mathbf{D}$ $\displaystyle R:\mathbf{D}\rightarrow\mathbf{C}$ where $L$ is left-adjoint to $F$, and $R$ is right-adjoint to $F$ (a so-called adjoint triple). As an example, consider the forgetful functor $F:\textbf{Top}\rightarrow\textbf{Sets}$ which assigns to a topological space its underlying set. It has both a left-adjoint $L:\textbf{Sets}\rightarrow\textbf{Top}$ which assigns to a set $X$ the trivial topology (where the only open sets are the empty set and $X$ itself), and a right-adjoint $R:\textbf{Sets}\rightarrow\textbf{Top}$ which assigns to the set $X$ the discrete topology (where every subset of $X$ is an open set). Therefore we have a monad and a comonad on $\textbf{Sets}$ given by $F\circ L$ and $F\circ R$ respectively. Many more examples of adjoint functors and monads can be found in pretty much all areas of mathematics. And according to a principle attributed to the mathematician Saunders Mac Lane (one of the founders of category theory, along with Samuel Eilenberg), such a structure that occurs widely enough in mathematics deserves to be studied for its own sake. References: Categories for the Working Mathematician by Saunders Mac Lane Category Theory by Steve Awodey # Algebraic Spaces and Stacks We introduced the concept of a moduli space in The Moduli Space of Elliptic Curves, and constructed explicitly the moduli space of elliptic curves, using the methods of complex analysis. In this post, we introduce the concepts of algebraic spaces and stacks, far-reaching generalizations of the concepts of varieties and schemes (see Varieties and Schemes Revisited), that are very useful, among other things, for constructing “moduli stacks“, which are an improvement over the naive notion of moduli space, namely in that one can obtain from it all “families of objects” by pulling back a “universal object”. We need first the concept of a fibered category (also spelled fibred category). Given a category $\mathcal{C}$, we say that some other category $\mathcal{S}$ is a category over $\mathcal{C}$ if there is a functor $p$ from $\mathcal{S}$ to $\mathcal{C}$ (this should be reminiscent of our discussion in Grothendieck’s Relative Point of View). If $\mathcal{S}$ is a category over some other category $\mathcal{C}$, we say that it is a fibered category (over $\mathcal{C}$) if for every object $U=p(x)$ and morphism $f: V\rightarrow U$ in $\mathcal{C}$, there is a strongly cartesian morphism $\phi: f^{*}x\rightarrow x$ in $\mathcal{S}$ with $f=p(\phi)$. This means that any other morphism $\psi: z\rightarrow x$ whose image $p(\psi)$ under the functor $p$ factors as $p(\psi)=p(\phi)\circ h$ must also factor as $\psi=\phi\circ \theta$ under some unique morphism $\theta: z\rightarrow f^{*}x$ whose image under the functor $p$ is $h$. We refer to $f^{*}x$ as the pullback of $x$ along $f$. Under the functor $p$, the objects of $\mathcal{S}$ which get sent to $U$ in $\mathcal{C}$ and the morphisms of $\mathcal{S}$ which get sent to the identity morphism $i_{U}$ in $\mathcal{C}$ form a subcategory of $\mathcal{S}$ called the fiber over $U$. We will also write it as $\mathcal{S}_{U}$. An important example of a fibered category is given by an ordinary presheaf on a category $\mathcal{C}$, i.e. a functor $F:\mathcal{C}^{\text{op}}\rightarrow (\text{Set})$; we can consider it as a category fibered in sets $\mathcal{S}_{F}\rightarrow\mathcal{C}$. A special kind of fibered category that we will need later on is a category fibered in groupoids. A groupoid is simply a category where all morphisms have inverses, and a category fibered in groupoids is a fibered category where all the fibers are groupoids. A set is a special kind of groupoid, since it may be thought of as a category whose only morphisms are the identity morphisms (which are trivially their own inverses). Hence, the example given in the previous paragraph, that of a presheaf, is also an example of a category fibered in groupoids, since it is fibered in sets. Now that we have the concept of fibered categories, we next want to define prestacks and stacks. Central to the definition of prestacks and stacks is the concept known as descent, so we have to discuss it first. The theory of descent can be thought of as a formalization of the idea of “gluing”. Let $\mathcal{U}=\{f_{i}:U_{i}\rightarrow U\}$ be a covering (see Sheaves and More Category Theory: The Grothendieck Topos) of the object $U$ of $\mathcal{C}$. An object with descent data is a collection of objects $X_{i}$ in $\mathcal{S}_{U}$ together with transition isomorphisms $\varphi_{ij}:\text{pr}_{0}^{*}X_{i}\simeq\text{pr}_{1}^{*}X_{j}$ in $\mathcal{S}_{U_{i}\times_{U}U_{j}}$, satisfying the cocycle condition $\displaystyle \text{pr}_{02}^{*}\varphi_{ik}=\text{pr}_{01}^{*}\varphi_{ij}\circ \text{pr}_{12}^{*}\varphi_{jk}:\text{pr}_{0}^{*}X_{i}\rightarrow \text{pr}_{2}^{*}X_{k}$ The morphisms $\text{pr}_{0}:U_{i}\times_{U}U_{j}\rightarrow U_{i}$ and the $\text{pr}_{1}:U_{i}\times_{U}U_{j}\rightarrow U_{j}$ are the projection morphisms. The notations $\text{pr}_{0}^{*}X_{i}$ and $\text{pr}_{1}^{*}X_{j}$ means that we are “pulling back” $X_{i}$ and $X_{j}$ from $\mathcal{S}_{U_{i}}$ and $\mathcal{S}_{U_{j}}$, respectively, to $\mathcal{S}_{U_{i}\times_{U}U_{j}}$. A morphism between two objects with descent data is a a collection of morphisms $\psi_{i}:X_{i}\rightarrow X'_{i}$ in $\mathcal{S}_{U_{i}}$ such that $\varphi'_{ij}\circ\text{pr}_{0}^{*}\psi_{i}=\text{pr}_{1}^{*}\psi_{j}\circ\varphi_{ij}$. Therefore we obtain a category, the category of objects with descent data, denoted $\mathcal{DD}(\mathcal{U})$. We can define a functor $\mathcal{S}_{U}\rightarrow\mathcal{DD}(\mathcal{U})$ by assigning to each object $X$ of $\mathcal{S}_{U}$ the object with descent data given by the pullback $f_{i}^{*}X$ and the canonical isomorphism $\text{pr}_{0}^{*}f_{i}^{*}X\rightarrow\text{pr}_{1}^{*}f_{j}^{*}X$. An object with descent data that is in the essential image of this functor is called effective. Before we give the definitions of prestacks and stacks, we recall some definitions from category theory: A functor $F:\mathcal{A}\rightarrow\mathcal{B}$ is faithful if the induced map $\text{Hom}_{\mathcal{A}}(x,y)\rightarrow \text{Hom}_{\mathcal{B}}(F(x),F(y))$ is injective for any two objects $x$ and $y$ of $\mathcal{A}$. A functor $F:\mathcal{A}\rightarrow\mathcal{B}$ is full if the induced map $\text{Hom}_{\mathcal{A}}(x,y)\rightarrow \text{Hom}_{\mathcal{B}}(F(x),F(y))$ is surjective for any two objects $x$ and $y$ of $\mathcal{A}$. A functor $F:\mathcal{A}\rightarrow\mathcal{B}$ is essentially surjective if any object $y$ of $\mathcal{B}$ is isomorphic to the image $F(x)$ of some object $x$ in $\mathcal{A}$ under $F$. A functor which is both faithful and full is called fully faithful. If, in addition, it is also essentially surjective, then it is called an equivalence of categories. Now we give the definitions of prestacks and stacks using the functor $\mathcal{S}_{U}\rightarrow\mathcal{DD}(\mathcal{U})$ we have defined earlier. If the functor $\mathcal{S}_{U}\rightarrow\mathcal{DD}(\mathcal{U})$ is fully faithful, then the fibered category $\mathcal{S}\rightarrow\mathcal{C}$ is a prestack. If the functor $\mathcal{S}_{U}\rightarrow\mathcal{DD}(\mathcal{U})$ is an equivalence of categories, then the fibered category $\mathcal{S}\rightarrow\mathcal{C}$ is a stack. Going back to the example of a presheaf as a fibered category, we now look at what it means when it satisfies the conditions for being a prestack, or a stack: (i) $F$ is a prestack if and only if it is a separated functor, (ii) $F$ is stack if and only if it is a sheaf. We now have the abstract idea of a stack in terms of category theory. Next we want to have more specific examples of interest in algebraic geometry, namely, algebraic spaces and algebraic stacks. For this we need first the idea of a representable functor (and the closely related idea of a representable presheaf). The importance of representability is that this will allow us to “transfer” interesting properties of morphisms between schemes such as being surjective, etale, or smooth, to functors between categories or natural transformations between functors. Therefore we will be able to say that a functor or natural transformation is surjective, or etale, or smooth, which is important, because we will define algebraic spaces and stacks as functors and categories, respectively, but we want them to still be closely related, or similar enough, to schemes. A representable functor is a functor from $\mathcal{C}$ to $\textbf{Sets}$ which is naturally isomorphic to the functor which assigns to any object $X$ the set of morphisms $\text{Hom}(X,U)$, for some fixed object $U$ of $\mathcal{C}$. A representable presheaf is a contravariant functor from $\mathcal{C}$ to $\textbf{Sets}$ which is naturally isomorphic to the functor which assigns to any object $X$ the set of morphisms $\text{Hom}(U,X)$, for some fixed object $U$ of $\mathcal{C}$. If $\mathcal{C}$ is the category of schemes, the latter functor is also called the functor of points of the object $U$. We take this opportunity to emphasize a very important concept in modern algebraic geometry. The functor of points $h_{U}$ of a scheme $U$ may be identified with $U$ itself. There are many advantages to this point of view (which is also known as functorial algebraic geometry); in particular we will need it later when we give the definition of algebraic spaces and stacks. We now have the idea of a representable functor. Next we want to have an idea of a representable natural transformation (or representable morphism) of functors. We will need another prerequisite, that of a fiber product of functors. Let $F,G,H:\mathcal{C}^{\text{op}}\rightarrow \textbf{Sets}$ be functors, and let $a:F\rightarrow G$ and $b:H\rightarrow G$ be natural transformations between these functors. Then the fiber product $F\times_{a,G,b}H$ is a functor from $\mathcal{C}^{\text{op}}$ to $\textbf{Sets}$, and is given by the formula $\displaystyle (F\times_{a,G,b}H)(X)=F(X)\times_{a_{X},G(X),b_{X}}H(X)$ for any object $X$ of $\mathcal{C}$. Let $F,G:\mathcal{C}^{\text{op}}\rightarrow \textbf{Sets}$ be functors. We say that a natural transformation $a:F\rightarrow G$ is representable, or that $F$ is relatively representable over $G$ if for every $U\in\text{Ob}(\mathcal{C})$ and any $\xi\in G(U)$ the functor $h_{U}\times_{G}F$ is representable. We now let $(\text{Sch}/S)_{\text{fppf}}$ be the site (a category with a Grothendieck topology –  see also More Category Theory: The Grothendieck Topos) whose underlying category is the category of $S$-schemes, and whose coverings are given by families of flat, locally finitely presented morphisms. Any etale covering or Zariski covering is an example of this “fppf covering” (“fppf” stands for fidelement plate de presentation finie, which is French for faithfully flat and finitely presented). An algebraic space over a scheme $S$ is a presheaf $\displaystyle F:((\text{Sch}/S)_{\text{fppf}})^{\text{op}}\rightarrow \textbf{Sets}$ with the following properties (1) The presheaf $F$ is a sheaf. (2) The diagonal morphism $F\rightarrow F\times F$ is representable. (3) There exists a scheme $U\in\text{Ob}((\text{Sch}/S)_{\text{fppf}})$ and a map $h_{U}\rightarrow F$ which is surjective, and etale (This is often written simply as $U\rightarrow F$). The scheme $U$ is also called an atlas. The diagonal morphism being representable implies that the natural transformation $h_{U}\rightarrow F$ is also representable, and this is what allows us to describe it as surjective and etale, as has been explained earlier. An algebraic space is a generalization of the notion of a scheme. In fact, a scheme is simply the case where, for the third condition, we have $U$ is the disjoint union of affine schemes $U_{i}$ and where the map $h_{U}\rightarrow F$ is an open immersion. We recall that a scheme may be thought of as being made up of affine schemes “glued together”. This “gluing” is obtained using the Zariski topology. The notion of an algebraic space generalizes this to the etale topology. Next we want to define algebraic stacks. Unlike algebraic spaces, which we defined as presheaves (functors), we will define algebraic stacks as categories, so we need to once again revisit the notion of representability in terms of categories. Let $\mathcal{C}$ be a category. A category fibered in groupoids $p:\mathcal{S}\rightarrow\mathcal{C}$ is called representable if there exists an object $X$ of $\mathcal{C}$ and an equivalence $j:\mathcal{S}\rightarrow \mathcal{C}/X$ (The notation $\mathcal{C}/X$ signifies a slice category, whose objects are morphisms $f:U\rightarrow X$ in $\mathcal{C}$, and whose morphisms are morphisms $h:U\rightarrow V$ in $\mathcal{C}$ such that $f=g\circ h$, where $g:U\rightarrow X$). We give two specific special cases of interest to us (although in this post we will only need the latter): Let $\mathcal{X}$ be a category fibered in groupoids over $(\text{Sch}/S)_{\text{fppf}}$. Then $\mathcal{X}$ is representable by a scheme if there exists a scheme $U\in\text{Ob}((\text{Sch}/S)_{\text{fppf}})$ and an equivalence $j:\mathcal{X}\rightarrow (\text{Sch}/U)_{\text{fppf}}$ of categories over $(\text{Sch}/S)_{\text{fppf}}$. A category fibered in groupoids $p : \mathcal{X}\rightarrow (\text{Sch}/S)_{\text{fppf}}$ is representable by an algebraic space over $S$ if there exists an algebraic space $F$ over $S$ and an equivalence $j:\mathcal{X}\rightarrow \mathcal{S}_{F}$ of categories over $(\text{Sch}/S)_{\text{fppf}}$. Next, following what we did earlier for the case of algebraic spaces, we want to define the notion of representability (by algebraic spaces) for morphisms of categories fibered in groupoids (these are simply functors satisfying some compatibility conditions with the extra structure of the category). We will need, once again, the notion of a fiber product, this time of categories over some other fixed category. Let $F:\mathcal{X}\rightarrow\mathcal{S}$ and $G:\mathcal{Y}\rightarrow\mathcal{S}$ be morphisms of categories over $\mathcal{C}$. The fiber product $\mathcal{X}\times_{\mathcal{S}}\mathcal{Y}$ is given by the following description: (1) an object of $\mathcal{X}\times_{\mathcal{S}}\mathcal{Y}$ is a quadruple $(U,x,y,f)$, where $U\in\text{Ob}(\mathcal{C})$, $x\in\text{Ob}(\mathcal{X}_{U})$, $y\in\text{Ob}(\mathcal{Y}_{U})$, and $f : F(x)\rightarrow G(y)$ is an isomorphism in $\mathcal{S}_{U}$, (2) a morphism $(U,x,y,f) \rightarrow (U',x',y',f')$ is given by a pair $(a,b)$, where $a:x\rightarrow x'$ is a morphism in $X$, and $b:y\rightarrow y'$ is a morphism in $Y$ such that $a$ and $b$ induce the same morphism $U\rightarrow U'$, and $f'\circ F(a)=G(b)\circ f$. Let $S$ be a scheme. A morphism $f:\mathcal{X}\rightarrow \mathcal{Y}$ of categories fibered in groupoids over $(\text{Sch}/S)_{\text{fppf}}$ is called representable by algebraic spaces if for any $U\in\text{Ob}((\text{Sch}/S)_{\text{fppf}})$ and any $y:(\text{Sch}/U)_{\text{fppf}}\rightarrow\mathcal{Y}$ the category fibered in groupoids $\displaystyle (\text{Sch}/U)_{\text{fppf}}\times_{y,\mathcal{Y}}\mathcal{X}$ over $(\text{Sch}/U)_{\text{fppf}}$ is representable by an algebraic space over $U$. An algebraic stack (or Artin stack) over a scheme $S$ is a category $\displaystyle p:\mathcal{X}\rightarrow (\text{Sch}/S)_{\text{fppf}}$ with the following properties: (1) The category $\mathcal{X}$ is a stack in groupoids over $(\text{Sch}/S)_{\text{fppf}}$ . (2) The diagonal $\Delta:\mathcal{X}\rightarrow \mathcal{X}\times\mathcal{X}$ is representable by algebraic spaces. (3) There exists a scheme $U\in\text{Ob}((\text{Sch/S})_{\text{fppf}})$ and a morphism $(\text{Sch}/U)_{\text{fppf}}\rightarrow\mathcal{X}$ which is surjective and smooth (This is often written simply as $U\rightarrow\mathcal{X}$). Again, the scheme $U$ is called an atlas. If the morphism $(\text{Sch}/U)_{\text{fppf}}\rightarrow\mathcal{X}$ is surjective and etale, we have a Deligne-Mumford stack. Just as an algebraic space is a generalization of the notion of a scheme, an algebraic stack is also a generalization of the notion of an algebraic space (recall that that a presheaf can be thought of as category fibered in sets, which themselves are special cases of groupoids). Therefore, the definition of an algebraic stack closely resembles the definition of an algebraic space given earlier, including the requirement that the diagonal morphism (which in this case is a functor between categories) be representable, so that the functor $(\text{Sch}/U)_{\text{fppf}}\rightarrow\mathcal{X}$ is also representable, and we can describe it as being surjective and smooth (or surjective and etale). As an example of an application of the ideas just discussed, we mention the moduli stack of elliptic curves (which we denote by $\mathcal{M}_{1,1}$ – the reason for this notation will become clear later). A family of elliptic curves over some “base space” $B$ is a fibration $\pi:X\rightarrow B$ with a section $O:B\rightarrow X$ such that the fiber $\pi^{-1}(b)$ over any point $b$ of $B$ is an elliptic curve with origin $O(b)$. Ideally what we want is to be able to obtain every family $X\rightarrow B$ by pulling back a “universal object” $E\rightarrow\mathcal{M}_{1,1}$ via the map $B\rightarrow\mathcal{M}_{1,1}$. This is something that even the notion of moduli space that we discussed in The Moduli Space of Elliptic Curves cannot do (we suggestively denote that moduli space by $M_{1,1}$). So we need the concept of stacks to construct this “moduli stack” that has this property. A more thorough discussion would need the notion of quotient stacks and orbifolds, but we only mention that the moduli stack of elliptic curves is in fact a Deligne-Mumford stack. More generally, we can construct the moduli stack of curves of genus $g$ with $\nu$ marked points, denoted $\mathcal{M}_{g,\nu}$. The moduli stack of elliptic curves is simply the special case $\mathcal{M}_{1,1}$. Aside from just curves of course, we can construct moduli stacks for many more mathematical objects, such subschemes of some fixed scheme, or vector bundles, also on some fixed scheme. The subject of algebraic stacks is a vast one, as may perhaps be inferred from the size of one of the main references for this post, the open-source reference The Stacks Project, which consists of almost 6,000 pages at the time of this writing. All that has been attempted in this post is but an extremely “bare bones” introduction to some of its more basic concepts. Hopefully more on stacks will be featured in future posts on the blog. References: Stack on Wikipedia Algebraic Space on Wikipedia Fibred Category on Wikipedia Descent Theory on Wikipedia Stack on nLab Grothendieck Fibration on nLab Algebraic Space on nLab Algebraic Stack on nLab Moduli Stack of Elliptic Curves on nLab Stacks for Everybody by Barbara Fantechi What is…a Stack? by Dan Edidin Notes on the Construction of the Moduli Space of Curves by Dan Edidin Notes on Grothendieck Topologies, Fibered Categories and Descent Theory by Angelo Vistoli Lectures on Moduli Spaces of Elliptic Curves by Richard Hain The Stacks Project Algebraic Spaces and Stacks by Martin Olsson Fundamental Algebraic Geometry: Grothendieck’s FGA Explained by Barbara Fantechi, Lothar Gottsche, Luc Illusie, Steven L. Kleiman, Nitin Nitsure, and Angelo Vistoli # The Theory of Motives The theory of motives originated from the observation, sometime in the 1960’s, that in algebraic geometry there were several different cohomology theories (see Homology and Cohomology and Cohomology in Algebraic Geometry), such as Betti cohomology, de Rham cohomology, $l$-adic cohomology, and crystalline cohomology. The search for a “universal cohomology theory”, such that all these other cohomology theories could be obtained from such a universal cohomology theory is what led to the theory of motives. The four cohomology theories enumerated above are examples of what is called a Weil cohomology theory. A Weil cohomology theory, denoted $H^{*}$, is a functor (see Category Theory) from the category $\mathcal{V}(k)$ of smooth projective varieties over some field $k$ to the category $\textbf{GrAlg}(K)$ of graded $K$-algebras, for some other field $K$ which must be of characteristic zero, satisfying the following axioms: (1) (Finite-dimensionality) The homogeneous components $H^{i}(X)$ of $H^{*}(X)$ are finite dimensional for all $i$, and $H^{i}(X)=0$ whenever $i<0$ or $i>2n$, where $n$ is the dimension of the smooth projective variety $X$. (2) (Poincare duality) There is an orientation isomorphism $H^{2n}\cong K$, and a nondegenerate bilinear pairing $H^{i}(X)\times H^{2n-i}(X)\rightarrow H^{2n}\cong K$. (3) (Kunneth formula) There is an isomorphism $\displaystyle H^{*}(X\times Y)\cong H^{*}(X)\otimes H^{*}(Y)$. (4) (Cycle map) There is a mapping $\gamma_{X}^{i}$ from $C^{i}(X)$, the abelian group of algebraic cycles of codimension $i$ on $X$ (see Algebraic Cycles and Intersection Theory), to $H^{i}(X)$, which is functorial with respect to pullbacks and pushforwards, has the multiplicative property $\gamma_{X\times Y}^{i+j}(Z\times W)=\gamma_{X}^{i}(Z)\otimes \gamma_{Y}^{j}(W)$, and such that $\gamma_{\text{pt}}^{i}$ is the inclusion $\mathbb{Z}\hookrightarrow K$. (5) (Weak Lefschetz axiom) If $W$ is a smooth hyperplane section of $X$, and $j:W\rightarrow X$ is the inclusion, the induced map $j^{*}:H^{i}(X)\rightarrow H^{i}(W)$ is an isomorphism for $i\leq n-2$, and a monomorphism for $i\leq n-1$. (6) (Hard Lefschetz axiom) The Lefschetz operator $\displaystyle \mathcal{L}:H^{i}(X)\rightarrow H^{i+2}(X)$ given by $\displaystyle \mathcal{L}(x)=x\cdot\gamma_{X}^{1}(W)$ for some smooth hyperplane section $W$ of $X$, with the product $\cdot$ provided by the graded $K$-algebra structure of $H^{*}(X)$, induces an isomorphism $\displaystyle \mathcal{L}^{i}:H^{n-i}(X)\rightarrow H^{n+i}(X)$. The idea behind the theory of motives is that all Weil cohomology theories should factor through a “category of motives”, i.e. any Weil cohomology theory $\displaystyle H^{*}: \mathcal{V}(k)\rightarrow \textbf{GrAlg}(K)$ can be expressed as the following composition of functors: $\displaystyle H^{*}: \mathcal{V}(k)\xrightarrow{h} \mathcal{M}(k)\rightarrow\textbf{GrAlg}(K)$ where $\mathcal{M}(k)$ is the category of motives. We can get different Weil cohomology theories, such as Betti cohomology, de Rham cohomology, $l$-adic cohomology, and crystalline cohomology, via different functors (called realization functors) from the category of motives to a category of graded algebras over some field $K$. This explains the term “motive”, which actually comes from the French word “motif”, which itself is already used in music and visual arts, among other things, as some kind of common underlying “theme” with different possible manifestations. Let us now try to construct this category of motives. This category is often referred to in the literature as a “linearization” of the category of smooth projective varieties. This means that we obtain it from some sense starting with the category of smooth projective varieties, but we also want to modify it so that it we can do linear algebra, or more properly homological algebra, in some sense. In other words, we want it to behave like the category of modules over some ring. With this in mind, we want the category to be an abelian category, so that we can make sense of notions such as kernels, cokernels, and exact sequences. An abelian category is a category that satisfies the following properties: (1) The morphisms form an abelian group. (2) There is a zero object. (3) There are finite products and coproducts. (4) Every morphism $f:X\rightarrow Y$ has a kernel and cokernel, and satisfies a decomposition $\displaystyle K\xrightarrow{k} X\xrightarrow{i} I\xrightarrow{j} Y\xrightarrow{c} K'$ where $K$ is the kernel of $f$, $K'$ is the cokernel of $f$, and $I$ is the kernel of $c$ and the cokernel of $k$ (not to be confused with our notation for fields). In order to proceed with our construction of the category of motives, which we now know we want to be an abelian category, we discuss the notion of correspondences. The group of correspondences of degree $r$ from a smooth projective variety $X$ to another smooth projective variety $Y$, written $\text{Corr}^{r}(X,Y)$, is defined to be the group of algebraic cycles of $X\times Y$ of codimension $n+r$, where $n$ is the dimension of $X$, i.e. $\text{Corr}^{r}(X,Y)=C^{n+r}(X\times Y)$ A morphism (of varieties, in the usual sense) $f:Y\rightarrow X$ determines a correspondence from $X$ to $Y$ of degree $0$ given by the transpose of the graph of $f$ in $X\times Y$. Therefore we may think of correspondences as generalizations of the usual concept of morphisms of varieties. As we have learned in Algebraic Cycles and Intersection Theory, whenever we are dealing with algebraic cycles, it is often useful to consider them only up to some equivalence relation. In the aforementioned post we introduced the notion of rational equivalence. This time we consider also homological equivalence and numerical equivalence between algebraic cycles. We say that two algebraic cycles $Z_{1}$ and $Z_{2}$ are homologically equivalent if they have the same image under the cycle map, and we say that they are numerically equivalent if the intersection numbers $Z_{1}\cdot Z$ and $Z_{2}\cdot Z$ are equal for all $Z$ of complementary dimension. There are other such equivalence relations on algebraic cycles, but in this post we will only mostly be using rational equivalence, homological equivalence, and numerical equivalence. Since correspondences are algebraic cycles, we often consider them only up to these equivalence relations, and denote the quotient group we obtain by $\text{Corr}_{\sim}^{r}(X,Y)$, where $\sim$ is the equivalence relation imposed, for example, for numerical equivalence we write $\text{Corr}_{\text{num}}^{r}(X,Y)$. Taking the tensor product of the abelian group $\text{Corr}_{\sim}^{r}(X,Y)$ with the rational numbers $\mathbb{Q}$, we obtain the vector space $\displaystyle \text{Corr}_{\sim}^{r}(X,Y)_{\mathbb{Q}}=\text{Corr}_{\sim}^{r}(X,Y)\otimes_{\mathbb{Z}}\mathbb{Q}$ To obtain something closer to an abelian category (more precisely, we will obtain what is known as a pseudo-abelian category, but in the case where the equivalence relation is numerical equivalence, we will actually obtain an abelian category), we need to consider “projectors”, correspondences $p$ of degree $0$ from a variety $X$ to itself such that $p^{2}=p$. So now we form a category, whose objects are $h(X,p)$ for a variety $X$ and projector $p$, and whose morphisms are given by $\displaystyle \text{Hom}(h(X,p),h(Y,q))=q\circ\text{Corr}_{\sim}^{0}(X,Y)_{\mathbb{Q}}\circ p$. We call this category the category of pure effective motives, and denote it by $\mathcal{M}_{\sim}^{\text{eff}}(k)$. The process described above is also known as passing to the pseudo-abelian (or Karoubian) envelope. We write $h^{i}(X,p)$ for the objects of $\mathcal{M}_{\sim}^{\text{eff}}(k)$ that map to $H^{i}(X)$. In the case that $X$ is the projective line $\mathbb{P}^{1}$, and $p$ is the diagonal $\Delta_{\mathbb{P}^{1}}$, we find that $h(\mathbb{P}^{1},\Delta_{\mathbb{P}^{1}})=h^{0}\mathbb{P}^{1}\oplus h^{2}\mathbb{P}^{1}$ which can be rewritten also as $\displaystyle h(\mathbb{P}^{1},\Delta_{\mathbb{P}^{1}})=\mathbb{I}\oplus\mathbb{L}$ where $\mathbb{I}$ is the image of a point in the category of pure effective motives, and $\mathbb{L}$ is known as the Lefschetz motive. It is also denoted by $\mathbb{Q}(-1)$. The above decomposition corresponds to the projective line $\mathbb{P}^{1}$ being a union of the affine line $\mathbb{A}^{1}$ and a “point at infinity”, which we may denote by $\mathbb{A}^{0}$: $\displaystyle \mathbb{P}^{1}=\mathbb{A}^{0}\cup\mathbb{A}^{1}$ More generally, we have $\displaystyle h(\mathbb{P}^{n},\Delta_{\mathbb{P}^{n}})=\mathbb{I}\oplus\mathbb{L}\oplus...\oplus\mathbb{L}^{n}$ corresponding to $\displaystyle \mathbb{P}^{n}=\mathbb{A}^{0}\cup\mathbb{A}^{1}\cup...\cup\mathbb{A}^{n}$. The category of effective pure motives is an example of a tensor category. This means it has a bifunctor $\otimes: \mathcal{M}_{\sim}^{\text{eff}}\times\mathcal{M}_{\sim}^{\text{eff}}\rightarrow\mathcal{M}_{\sim}^{\text{eff}}$ which generalizes the usual notion of a tensor product, and in this particular case it is given by taking the product of two varieties. We can ask for more, however, and construct a category of motives which is not just a tensor category but a rigid tensor category, which provides us with a notion of duals. By formally inverting the Lefschetz motive (the formal inverse of the Lefschetz motive is then known as the Tate motive, and is denoted by $\mathbb{Q}(1)$), we can obtain this rigid tensor category, whose objects are triples $h(X,p,m)$, where $X$ is a variety, $e$ is a projector, and $m$ is an integer. The morphisms of this category are given by $\displaystyle \text{Hom}(h(X,p,n),h(Y,q,m))=q\circ\text{Corr}_{\sim}^{n-m}(X,Y)_{\mathbb{Q}}\circ p$. This category is called the category of pure motives, and is denoted by $\mathcal{M}_{\sim}(k)$. The category $\mathcal{M}_{\text{rat}}(k)$ is called the category of Chow motives, while the category $\mathcal{M}_{\text{num}}(k)$ is called the category of Grothendieck (or numerical) motives. The category of Chow motives has the advantage that it is known to be “universal”, in the sense that every Weil cohomology theory factors through it, as discussed earlier; however, in general it is not even abelian, which is a desirable property we would like our category of motives to have. Meanwhile, the category of Grothendieck motives is known to be abelian, but it is not yet known if it is universal. If the so-called “standard conjectures on algebraic cycles“, which we will enumerate below, are proved, then the category of Grothendieck motives will be known to be universal. We have seen that the category of pure motives forms a rigid tensor category. Closely related to this concept, and of interest to us, is the notion of a Tannakian category. More precisely, a Tannakian category is a $k$-linear rigid tensor category with an exact faithful functor (called a fiber functor) to the category of finite-dimensional vector spaces over some field extension $K$ of $k$. One of the things that makes Tannakian categories interesting is that there is an equivalence of categories between a Tannakian category $\mathcal{C}$ and the category $\text{Rep}_{G}$ of finite-dimensional linear representations of the group of automorphisms of its fiber functor, which is also known as the Tannakian Galois group, or, if the Tannakian category is a “category of motives” of some sort, the motivic Galois group. This aspect of Tannakian categories may be thought of as a higher-dimensional analogue of the classical theory of Galois groups, which can be stated as an equivalence of categories between the category of finite separable field extensions of a field $k$ and the category of finite sets equipped with an action of the Galois group $\text{Gal}(\bar{k}/k)$, where $\bar{k}$ is the algebraic closure of $k$. So we see that being a Tannakian category is yet another desirable property that we would like our category of motives to have. For this not only do we have to tweak the tensor product structure of our category, we also need certain conjectural properties to hold. These are the same conjectures we have hinted at earlier, called the standard conjectures on algebraic cycles, formulated by Alexander Grothendieck at around the same time he initially developed the theory of motives. These conjectures have some very important consequences in algebraic geometry, and while they remain unproved to this day, the search for their proof (or disproof) is an important part of modern mathematical research on the theory of motives. They are the following: (A) (Standard conjecture of Lefschetz type) For $i\leq n$, the operator $\Lambda$ defined by $\displaystyle \Lambda=(\mathcal{L}^{n-i+2})^{-1}\circ\mathcal{L}\circ (\mathcal{L}^{n-i}):H^{i}\rightarrow H^{i-2}$ $\displaystyle \Lambda=(\mathcal{L}^{n-i})\circ\mathcal{L}\circ (\mathcal{L}^{n-i+2})^{-1}:H^{2n-i+2}\rightarrow H^{2n-i}$ is induced by algebraic cycles. (B) (Standard conjecture of Hodge type) For all $i\leq n/2$, the pairing $\displaystyle x,y\mapsto (-1)^{i}(\mathcal{L}x\cdot y)$ is positive definite. (C) (Standard conjecture of Kunneth type) The projectors $H^{*}(X)\rightarrow H^{i}(X)$ are induced by algebraic cycles in $X\times X$ with rational coefficients. This implies the following decomposition of the diagonal: $\displaystyle \Delta_{X}=\pi_{0}+...+\pi_{2n}$ which in turn implies the decomposition $\displaystyle h(X,\Delta_{X},0)=h(X,\pi_{0},0)\oplus...\oplus h(X,\pi_{2n},0)$ which, writing $h(X,\Delta_{X},0)$ as $hX$ and $h(X,\pi_{i},0)$ as $h^{i}(X)$, we can also compactly and suggestively write as $\displaystyle hX=h^{0}X\oplus...\oplus h^{2n}X$. In other words, every object $hX=h(X,\Delta_{X},0)$ of our “category of motives” decomposes into graded “pieces” $h^{i}(X)=h(X,\pi_{i},0)$ of pure “weight$i$. We have already seen earlier that this is indeed the case when $X=\mathbb{P}^{n}$. We will need this conjecture to hold if we want our category to be a Tannakian category. (D) (Standard conjecture on numerical equivalence and homological equivalence) If an algebraic cycle is numerically equivalent to zero, then its cohomology class is zero. If the category of Grothendieck motives is to be “universal”, so that every Weil cohomology theory factors through it, this conjecture must be satisfied. In Algebraic Cycles and Intersection Theory and Some Useful Links on the Hodge Conjecture, Kahler Manifolds, and Complex Algebraic Geometry, we have made mention of the two famous conjectures in algebraic geometry known as the Hodge conjecture and the Tate conjecture. In fact, these two closely related conjectures can be phrased in the language of motives as the conjectures stating that the realization functors from the category of motives to the category of pure Hodge structures and continuous $l$-adic representations of $\text{Gal}(\bar{k}/k)$, respectively, be fully faithful. These conjectures are closely related to the standard conjectures on algebraic cycles as well. We have now constructed the category of pure motives, for smooth projective varieties. For more general varieties and schemes, there is an analogous idea of “mixed motives“, which at the moment remain conjectural, although there exist several related constructions which are the closest thing we currently have to such a theory of mixed motives. If we want to construct a theory of mixed motives, instead of Weil cohomology theories we must instead consider what are known as “mixed Weil cohomology theories“, which are expected to have the following properties: (1) (Homotopy invariance) The projection $\pi:X\rightarrow\mathbb{A}^{1}$ induces an isomorphism $\displaystyle \pi^{*}:H^{*}(X)\xrightarrow{\cong}H^{*}(X\times\mathbb{A}^{1})$ (2) (Mayer-Vietoris sequence) If $U$ and $V$ are open coverings of $X$, then there is a long exact sequence $\displaystyle ...\rightarrow H^{i}(U\cap V)\rightarrow H^{i}(X)\rightarrow H^{i}(U)\oplus H^{i}(V)\rightarrow H^{i}(U\cap V)\rightarrow...$ (3) (Duality) There is a duality between cohomology $H^{*}$ and cohomology with compact support $H_{c}^{*}$. (4) (Kunneth formula) This is the same axiom as the one in the case of pure motives. We would like a category of mixed motives, which serves as an analogue to the category of pure motives in that all mixed Weil cohomology theories factor through it, but as mentioned earlier, no such category exists at the moment. However, the mathematicians Annette Huber-Klawitter, Masaki Hanamura, Marc Levine, and Vladimir Voevodsky have constructed different versions of a triangulated category of mixed motives, denoted $\mathcal{DM}(k)$. A triangulated category $\mathcal{T}$ is an additive category with an automorphism $T: \mathcal{T}\rightarrow\mathcal{T}$ called the “shift functor” (we will also denote $T(X)$ by $X[1]$, and $T^{n}(X)$ by $X[n]$, for $n\in\mathbb{Z}$) and a family of “distinguished triangles $\displaystyle X\rightarrow Y\rightarrow Z\rightarrow X[1]$ which satisfies the following axioms: (1) For any object $X$ of $\mathcal{T}$, the triangle $X\xrightarrow{\text{id}}X\rightarrow 0\rightarrow X[1]$ is a distinguished triangle. (2) For any morphism $u:X\rightarrow Y$ of $\mathcal{T}$, there is an object $Z$ of $\mathcal{T}$ such that $X\xrightarrow{u}Y\rightarrow Z\rightarrow X[1]$ is a distinguished triangle. (3) Any triangle isomorphic to a distinguished triangle is a distinguished triangle. (4) If $X\rightarrow Y\rightarrow Z\rightarrow X[1]$ is a distinguished triangle, then the two “rotations” $Y\rightarrow Z\rightarrow Z[1]\rightarrow Y[1]$ and $Z[-1]\rightarrow X\rightarrow Y\rightarrow Z$ are also distinguished triangles. (5) Given two distinguished triangles $X\xrightarrow{u}Y\xrightarrow{v}Z\xrightarrow{w}X[1]$ and $X'\xrightarrow{u'}Y'\xrightarrow{v'}Z'\xrightarrow{w'}X'[1]$ and morphisms $f:X\rightarrow X'$ an $g:Y\rightarrow Y'$ such that the square “commutes”, i.e. $u'\circ f=g\circ u$, there exists a morphisms $h:Z\rightarrow Z$ such that all other squares commute. (6) Given three distinguished triangles $X\xrightarrow{u}Y\xrightarrow{j}Z'\xrightarrow{k}X[1]$$Y\xrightarrow{v}Z\xrightarrow{l}X'\xrightarrow{i}Y[1]$, and $X\xrightarrow{v\circ u}Z\xrightarrow{m}Y'\xrightarrow{n}X[1]$, there exists a distinguished triangle $Z'\xrightarrow{f}Y'\xrightarrow{g}X'\xrightarrow{h}Z'[1]$ such that “everything commutes”. A $t$-structure on a triangulated category $\mathcal{T}$ is made up of two full subcategories $\mathcal{T}^{\geq 0}$ and $\mathcal{T}^{\leq 0}$ satisfying the following properties (writing $\mathcal{T}^{\leq n}$ and $\mathcal{T}^{\leq n}$ to denote $\mathcal{T}^{\leq 0}[-n]$ and $\mathcal{T}^{\geq 0}[-n]$ respectively): (1) $\mathcal{T}^{\leq -1}\subset \mathcal{T}^{\leq 0}$ and $\mathcal{T}^{\geq 1}\subset \mathcal{T}^{\geq 0}$ (2) $\displaystyle \text{Hom}(X,Y)=0$ for any object $X$ of $\mathcal{T}^{\leq 0}$ and any object $Y$ of $\mathcal{T}^{\geq 1}$ (3) for any object $Y$ of $\mathcal{T}$ we have a distinguished triangle $\displaystyle X\rightarrow Y\rightarrow Z\rightarrow X[1]$ where $X$ is an object of $\mathcal{T}^{\leq 0}$ and $Z$ is an object of $\mathcal{T}^{\geq 1}$. The full subcategory $\mathcal{T}^{0}=\mathcal{T}^{\leq 0}\cap\mathcal{T}^{\geq 0}$ is called the heart of the $t$-structure, and it is an abelian category. It is conjectured that the category of mixed motives $\mathcal{MM}(k)$ is the heart of the $t$-structure of the triangulated category of mixed motives $\mathcal{DM}(k)$. Voevodsky’s construction proceeds in a manner somewhat analogous to the construction of the category of pure motives as above, starting with schemes (say, over a field $k$, although a more general scheme may be used) as objects and correspondences as morphisms, but then makes use of concepts from abstract homotopy theory, such as taking the bounded homotopy category of bounded complexes, and localization with respect to a certain subcategory, before passing to the pseudo-abelian envelope and then formally inverting the Tate object $\mathbb{Z}(1)$. The triangulated category obtained is called the category of geometric motives, and is denoted by $\mathcal{DM}_{\text{gm}}(k)$. The schemes and correspondences involved in the construction of $\mathcal{DM}_{\text{gm}}(k)$ are required to satisfy certain properties which eliminates the need to consider the equivalence relations which form a large part of the study of the category of pure motives. Closely related to the triangulated category of mixed motives is motivic cohomology, which is defined in terms of the former as $\displaystyle H^{i}(X,\mathbb{Z}(m))=\text{Hom}_{\mathcal{DM}(k)}(X,\mathbb{Z}(m)[i])$ where $\mathbb{Z}(m)$ is the tensor product of $m$ copies of the Tate object $\mathbb{Z}(1)$, and the notation $\mathbb{Z}(m)[i]$ tells us that the shift functor of the triangulated category is applied to the object $\mathbb{Z}(m)$ $i$ times. Motivic cohomology is related to the Chow group, which we have introduced in Algebraic Cycles and Intersection Theory, and also to algebraic K-theory, which is another way by which the ideas of homotopy theory are applied to more general areas of abstract algebra and linear algebra. These ideas were used by Voevodsky to prove several related theorems, from the Milnor conjecture to its generalization, the Bloch-Kato conjecture (also known as the norm residue isomorphism theorem). Historically, one of the motivations for Grothendieck’s attempt to obtain a universal cohomology theory was to prove the Weil conjectures, which is a higher-dimensional analogue of the Riemann hypothesis for curves over finite fields first proved by Andre Weil himself (see The Riemann Hypothesis for Curves over Finite Fields). In fact, if the standard conjectures on algebraic cycles are proved, then a proof of the Weil conjectures would follow via an approach that closely mirrors Weil’s original proof (since cohomology provides a Lefschetz fixed-point formula –  we have mentioned in The Riemann Hypothesis for Curves over Finite Fields that the study of fixed points is an important part of Weil’s proof). The last of the Weil conjectures were eventually proved by Grothendieck’s student Pierre Deligne, but via a different approach that bypassed the standard conjectures. A proof of the standard conjectures, which would lead to a perhaps more elegant proof of the Weil conjectures, is still being pursued to this day. The theory of motives is not only related to analogues of the Riemann hypothesis, which concerns the location of zeroes of L-functions, but to L-functions in general. For instance, it is also related to the Langlands program, which concerns another aspect of L-functions, namely their analytic continuation and functional equation, and to the Birch and Swinnerton-Dyer conjecture, which concerns their values at special points. We recall in The Riemann Hypothesis for Curves over Finite Fields that the Frobenius morphism played an important part in counting the points of a curve over a finite field, which in turn we needed to define the zeta function (of which the L-function can be thought of as a generalization) of the curve. The Frobenius morphism is an element of the Galois group, and we recall that a category of motives which is a Tannakian category is equivalent to the category of representations of its motivic Galois group. Therefore we can see how we can define “motivic L-functions” using the theory of motives. As the L-functions occupy a central place in many areas of modern mathematics, the theory of motives promises much to be gained from its study, if only we could make progress in deciphering the many mysteries that surround it, of which we have only scratched the surface in this post. The applications of motives are not limited to L-functions either – the study of periods, which relate Betti cohomology and de Rham cohomology, and lead to transcendental numbers which can be defined using only algebraic concepts, is also strongly connected to the theory of motives. Recent work by the mathematicians Alain Connes and Matilde Marcolli has also suggested applications to physics, particularly in relation to Feynman diagrams in quantum field theory. There is also another generalization of the theory of motives, developed by Maxim Kontsevich, in the context of noncommutative geometry. References: Weil Cohomology Theory on Wikipedia Motive on Wikipedia Standard Conjectures on Algebraic Cycles on Wikipedia Motive on nLab Pure Motive on nLab Mixed Motive on nLab The Tate Conjecture over Finite Fields on Hard Arithmetic What is…a Motive? by Barry Mazur Motives – Grothendieck’s Dream by James S. Milne Noncommutative Geometry, Quantum Fields, and Motives by Alain Connes and Matilde Marcolli Algebraic Cycles and the Weil Conjectures by Steven L. Kleiman The Standard Conjectures by Steven L. Kleiman Feynman Motives by Matilde Marcolli Une Introduction aux Motifs (Motifs Purs, Motifs Mixtes, Periodes) by Yves Andre
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 688, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9739530086517334, "perplexity": 159.46382797812797}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606226.29/warc/CC-MAIN-20200121222429-20200122011429-00141.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/elementary-and-intermediate-algebra-concepts-and-applications-6th-edition/chapter-1-introduction-to-algebraic-expressions-1-7-multiplication-and-division-of-real-numbers-1-7-exercise-set-page-58/96
## Elementary and Intermediate Algebra: Concepts & Applications (6th Edition) $-\dfrac{10}{17}$ To get the reciprocal of a number, interchange the numerator and the denominator. Hence, the reciprocal of the given expression, $-1.7$, is \begin{array}{l} -\dfrac{17}{10} \\\\= -\dfrac{10}{17} .\end{array}
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9962955713272095, "perplexity": 3171.013032045462}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514570830.42/warc/CC-MAIN-20190915072355-20190915094355-00163.warc.gz"}
https://ita.skanev.com/C/03/09.html
# Exercise C.3.9 Show that for any random variable $X$ that takes on only the values $0$ and $1$, we have $\Var[X] = \E[X]\E[1-X]$. Let's first calculate the expectations: $$\E[X] = 0 \cdot \Pr\{X = 0\} + 1 \cdot \Pr\{X = 1\} = \Pr\{X = 1\} \\ \E[1-X] = \Pr\{X = 0\} \\ \E[X]\E[1-X] = \Pr\{X = 0\} \cdot \Pr\{X = 1\}$$ Now - the variance: $$\Var[X] = \E[X^2] - \E^2[X] = \Pr\{X = 1\} - (\Pr\{X = 1\})^2 = \Pr\{X = 1\} (1 - \Pr\{X = 1\}) = \Pr\{X = 0\} \cdot Pr\{X = 1\}$$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8284596800804138, "perplexity": 208.7406605722885}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689490.64/warc/CC-MAIN-20170923052100-20170923072100-00078.warc.gz"}
http://www.ck12.org/book/Probability-and-Statistics-%2528Advanced-Placement%2529/r1/section/5.1/
<img src="https://d5nxst8fruw4z.cloudfront.net/atrk.gif?account=iA1Pi1a8Dy00ym" style="display:none" height="1" width="1" alt="" /> # 5.1: The Standard Normal Probability Distribution Difficulty Level: At Grade Created by: CK-12 ## Learning Objectives • Identify the characteristics of a normal distribution. • Identify and use the Empirical Rule (\begin{align*}68-95-99.7\end{align*} rule) for normal distributions. • Calculate a \begin{align*}z-\end{align*}score and relate it to probability. • Determine if a data set corresponds to a normal distribution. ## Introduction Most high schools have a set amount of time in between classes in which students must get to their next class. If you were to stand at the door of your statistics class and watch the students coming in, think about how the students would enter. Usually, one or two students enter early, then more students come in, then a large group of students enter, and then the number of students entering decreases again, with one or two students barely making it on time, or perhaps even coming in late! Try the same by watching students enter your school cafeteria at lunchtime. Spend some time in a fast food restaurant or café before, during, and after the lunch hour and you will most likely observe similar behavior. Have you ever popped popcorn in a microwave? Think about what happens in terms of the rate at which the kernels pop. Better yet, actually do it and listen to what happens! For the first few minutes nothing happens, then after a while a few kernels start popping. This rate increases to the point at which you hear most of the kernels popping and then it gradually decreases again until just a kernel or two pops. Try measuring the height, or shoe size, or the width of the hands of the students in your class. In most situations, you will probably find that there are a couple of very students with very low measurements and a couple with very high measurements with the majority of students centered around a particular value. Sometimes the door handles in office buildings show a wear pattern caused by thousands, maybe millions of times being pulled or pushed to open the door. Often you will see that there is a middle region that shows by far the most amount of wear at the place where people opening the door are the most likely to grab the handle, surrounded by areas on either side showing less wear. On average, people are more likely to have grabbed the handle in the same spot and less likely to use the extremes on either side. All of these examples show a typical pattern that seems to be a part of many real life phenomena. In statistics, because this pattern is so pervasive, it seems to fit to call it “normal”, or more formally the normal distribution. The normal distribution is an extremely important concept because it occurs so often in the data we collect from the natural world, as well as many of the more theoretical ideas that are the foundation of statistics. This chapter explores the details of the normal distribution. ## The Characteristics of a Normal Distribution ### Shape If you think of graphing data from each of the examples in the introduction, the distributions from each of these situations would be mound-shaped and mostly symmetric. A normal distribution is a perfectly symmetric, mound-shaped distribution. It is commonly referred to the as a normal, or bell curve. Because so many real data sets closely approximate a normal distribution, we can use the idealized normal curve to learn a great deal about such data. In practical data collection, the distribution will never be exactly symmetric, so just like situations involving probability, a true normal distribution results from an infinite collection of data, or from the probabilities of a continuous random variable. ### Center Due to this exact symmetry the center of the normal distribution, or a data set that approximates a normal distribution, is located at the highest point of the distribution, and all the statistical measures of center we have already studied, mean, median, and mode are equal. It is also important to realize that this center peak divides the data into two equal parts. Let’s go back to our popcorn example. The bag advertises a certain time, beyond which you risk burning the popcorn. From experience, the manufacturers know when most of the popcorn will stop popping, but there is still a chance that a rare kernel will pop after longer, or shorter periods of time. The directions usually tell you to stop when the time between popping is a few seconds, but aren’t you tempted to keep going so you don’t end up with a bag full of un-popped kernels? Because this is real, and not theoretical, there will be a time when it will stop popping and start burning, but there is always a chance, no matter how small, that one more kernel will pop if you keep the microwave going. In the idealized normal distribution of a continuous random variable, the distribution continues infinitely in both directions. Because of this infinite spread, range would not be a possible statistical measure of spread. The most common way to measure the spread of a normal distribution then is using the standard deviation, or typical distance away from the mean. Because of the symmetry of a normal distribution, the standard deviation indicates how far away from the maximum peak the data will be. Here are two normal distributions with the same center(mean): The first distribution pictured above has a smaller standard deviation and so the bulk of the data is concentrated more heavily around the mean. There is less data at the extremes compared to the second distribution pictured above, which has a larger standard deviation and therefore the data is spread farther from the mean value with more of the data appearing in the tails. ## Investigating the Normal Distribution on a TI-83/4 Graphing Calculator We can graph a normal curve for a probability distribution on the TI-83/4. Press [y=]. To create a normal distribution, we will draw an idealized curve using something called a density function. We will learn more about density functions in the next lesson. The command is called a probability density function and it is found by pressing [2nd] [DISTR] [1]. Enter an \begin{align*}X\end{align*} to represent the random variable, followed by the mean and the standard deviation. For this example, choose a mean of \begin{align*}5\end{align*} and a standard deviation of \begin{align*}1\end{align*}. Choose [2nd] [QUIT] to go to the home screen. We can draw a vertical line at the mean to show it is in the center of the distribution by pressing [2nd] [DRAW] and choosing VERTICAL. Enter the mean (5) and press [ENTER] Remember that even though the graph appears to touch the \begin{align*}x-\end{align*}axis it is actually just very close to it. This will graph \begin{align*}3\end{align*} different normal distributions with various standard deviations to make it easy to see the change in spread. ## The Empirical Rule Because of the similar shape of all normal distributions we can measure the percentage of data that is a certain distance from the mean no matter what the standard deviation of the set is. The following graph shows a normal distribution with \begin{align*}\mu=0\end{align*} and \begin{align*}\sigma=1\end{align*}. This curve is called a standard normal distribution. In this case, the values of \begin{align*}x\end{align*} represent the number of standard deviations away from the mean. Notice that vertical lines are drawn at points that are exactly one standard deviation to the left and right of the mean. We have consistently described standard deviation as a measure of the “typical” distance away from the mean. How much of the data is actually within one standard deviation of the mean? To answer this question, think about the space, or area under the curve. The entire data set, or \begin{align*}100\%\end{align*} of it, is contained by the whole curve. What percentage would you estimate is between the two lines? It is a reasonable estimate to say it is about \begin{align*}2/3\end{align*} of the total area. In a more advanced statistics course, you could use calculus to actually calculate this area. To help estimate the answer, we can use a graphing calculator. Graph a standard normal distribution over an appropriate window. Now press [2nd] [DISTR] and choose DRAW ShadeNorm. Insert \begin{align*}–1\end{align*}, \begin{align*}1\end{align*} after the ShadeNorm command and it will shade the area within one standard deviation of the mean. The calculator also gives a very accurate estimate of the area. We can see from this that approximately \begin{align*}68\;\mathrm{percent}\end{align*} of the area is within one standard deviation of the mean. If we venture two standard deviations away from the mean, how much of the data should we expect to capture? Make the changes to the ShadeNorm command to find out. Notice from the shading, that almost all of the distribution is shaded and the percentage of data is close to \begin{align*}95\%\end{align*}. If you were to venture \begin{align*}3\end{align*} standard deviations from the mean, \begin{align*}99.7\%\end{align*}, or virtually all of the data is captured, which tells us that very little of the data in a normal distribution is more than \begin{align*}3\end{align*} standard deviations from the mean. Notice that the shading of the calculator actually makes it look like the entire distribution is shaded because of the limitations of the screen resolution, but as we have already discovered, there is still some area under the curve further out than that. These three approximate percentages, \begin{align*}68, 95\end{align*} and \begin{align*}99.7\end{align*} are extremely important and useful for beginning statistics students and is called the empirical rule. The empirical rule states that the percentages of data in a normal distribution within \begin{align*}1, 2\end{align*}, and \begin{align*}3\end{align*} standard deviations of the mean, are approximately \begin{align*}68, 95\end{align*}, and \begin{align*}99.7\end{align*}, respectively. ## Z-Scores A \begin{align*}z-\end{align*}score is a measure of the number of standard deviations a particular data point is away from the mean. For example, let’s say the mean score on a test for your statistics class were an \begin{align*}82\end{align*} with a standard deviation of \begin{align*}7\end{align*} points. If your score was an \begin{align*}89\end{align*}, it is exactly one standard deviation to the right of the mean, therefore your \begin{align*}z-\end{align*}score would be \begin{align*}1\end{align*}. If, on the other hand you scored a \begin{align*}75\end{align*}, your score is exactly one standard deviation below the mean, and your \begin{align*}z-\end{align*}score would be \begin{align*}-1\end{align*}. To show that it is below the mean, we will assign it a \begin{align*}z-\end{align*}score of negative one. All values that are below the mean will have negative \begin{align*}z-\end{align*}scores. A \begin{align*}z-\end{align*}score of negative two would represent a value that is exactly \begin{align*}2\end{align*} standard deviations below the mean, or \begin{align*}82 - 14 = 68\end{align*} in this example. To calculate a \begin{align*}z-\end{align*}score in which the numbers are not so obvious, you take the deviation and divide it by the standard deviation. \begin{align*}z=\frac{\text{Deviation}}{\text{Standard Deviation}}\end{align*} You may recall that deviation is the observed value of the variable, subtracted by the mean value, so in symbolic terms, the \begin{align*}z-\end{align*}score would be: \begin{align*}z=\frac {x-\bar x}{sd}\end{align*} Ex. What is the \begin{align*}z-\end{align*}score for an \begin{align*}A\end{align*} on this test? (assume that an \begin{align*}A\end{align*} is a \begin{align*}93\end{align*}). \begin{align*}z&=\frac {x-\bar x}{sd}\\ z&=\frac {93-82}{7}\\ z&=\frac {11}{7}\approx 1.57\end{align*} It is not necessary to have a normal distribution to calculate a \begin{align*}z-\end{align*}score, but the \begin{align*}z-\end{align*}score has much more significance when it relates to a normal distribution. For example, if we know that the test scores from the last example are distributed normally, then a \begin{align*}z-\end{align*}score can tell us something about how our test score relates to the rest of the class. From the empirical rule we know that about \begin{align*}68\;\mathrm{percent}\end{align*} of the students would have scored between a \begin{align*}z-\end{align*}score of \begin{align*}–1\end{align*} and \begin{align*}1\end{align*}, or between a \begin{align*}75\end{align*} and an \begin{align*}89\end{align*}. If \begin{align*}68\%\end{align*} of the data is between those two values, then that leaves a remaining \begin{align*}32\%\end{align*} in the tail areas. Because of symmetry, that leaves \begin{align*}16\%\end{align*} in each individual tail. If we combine the two percentages, approximately \begin{align*}84\%\end{align*} of the data is below an \begin{align*}89\end{align*} score. We typically refer to this as a percentile. A student with this score could conclude that he or she performed better than \begin{align*}84\%\end{align*} of the class, and that he or she was in the \begin{align*}84^{th}\end{align*} percentile. This same conclusion can be put in terms of a probability distribution as well. We could say that if a student from this class were chosen at random the probability that we would choose a student with a score of \begin{align*}89\end{align*} or less is \begin{align*}.84\end{align*}, or there is an \begin{align*}84\%\end{align*} chance of picking such a student. ## Assessing Normality The best way to determine if a data set approximates a normal distribution is to look at a visual representation. Histograms and box plots can be useful indicators of normality, but are not always definitive. It is often easier to tell if a data set is not normal from these plots. If a data set is skewed right it means that the right tail is significantly larger than the left. Likewise, skewed left means the left tail has more weight than the right. A bimodal distribution has two modes, or peaks, as if two normal distributions were added together. Multimodal distributions with two or more modes often reflect two different types. For instance, a histogram of the heights of American \begin{align*}30\end{align*}-year-old adults, you will see a bimodal distribution -- one mode for males, one mode for females. Now that we know how to calculate \begin{align*}z-\end{align*}scores, there is a plot we can use to determine if a distribution is normal. If we calculate the \begin{align*}z-\end{align*}scores for a data set and plot them against the actual values, this is called a normal probability plot, or a normal quantile plot. If the data set is normal, then this plot will be perfectly linear. The closer to being linear the normal probability plot is, the more closely the data set approximates a normal distribution. Look below at a histogram and the normal probability plot for the same data. The histogram is fairly symmetric and mound-shaped and appears to display the characteristics of a normal distribution. When the \begin{align*}z-\end{align*}scores are plotted against the data values, the normal probability plot appears strongly linear, indicating that the data set closely approximates a normal distribution. Example: The following data set tracked high school seniors' involvement in traffic accidents. The participants were asked the following question: “During the last \begin{align*}12\end{align*} months, how many accidents have you had while you were driving (whether or not you were responsible)?” Year Percentage of high school seniors who said they were involved in no traffic accidents 1991 \begin{align*}75.7\end{align*} 1992 \begin{align*}76.9\end{align*} 1993 \begin{align*}76.1\end{align*} 1994 \begin{align*}75.7\end{align*} 1995 \begin{align*}75.3\end{align*} 1996 \begin{align*}74.1\end{align*} 1997 \begin{align*}74.4\end{align*} 1998 \begin{align*}74.4\end{align*} 1999 \begin{align*}75.1\end{align*} 2000 \begin{align*}75.1\end{align*} 2001 \begin{align*}75.5\end{align*} 2002 \begin{align*}75.5\end{align*} 2003 \begin{align*}75.8\end{align*} Figure: Percentage of high school seniors who said they were involved in no traffic accidents. Source: Sourcebook of Criminal Justice Statistics: http://www.albany.edu/sourcebook/pdf/t352.pdf Here is a histogram and a box plot of this data. The histogram appears to show a roughly mound-shaped and symmetric distribution. The box plot does not appear to be significantly skewed, but the various sections of the plot also do not appear to be overly symmetric either. In the following chart the \begin{align*}z-\end{align*}scores for this data set have been calculated. The mean percentage is approximately \begin{align*}75.35\end{align*} Year Percentage \begin{align*}z-\end{align*}score 1991 \begin{align*}75.7\end{align*} \begin{align*}.45\end{align*} 1992 \begin{align*}76.9\end{align*} \begin{align*}2.03\end{align*} 1993 \begin{align*}76.1\end{align*} \begin{align*}.98\end{align*} 1994 \begin{align*}75.7\end{align*} \begin{align*}.45\end{align*} 1995 \begin{align*}75.3\end{align*} \begin{align*}-.07\end{align*} 1996 \begin{align*}74.1\end{align*} \begin{align*}-1.65\end{align*} 1997 \begin{align*}74.4\end{align*} \begin{align*}-1.25\end{align*} 1998 \begin{align*}74.4\end{align*} \begin{align*} -1.25\end{align*} 1999 \begin{align*}75.1\end{align*} \begin{align*}-.33\end{align*} 2000 \begin{align*}75.1\end{align*} \begin{align*}-.33\end{align*} 2001 \begin{align*}75.5\end{align*} \begin{align*}.19\end{align*} 2002 \begin{align*}75.5\end{align*} \begin{align*}.19\end{align*} 2003 \begin{align*}75.8\end{align*} \begin{align*}.59\end{align*} Figure: Table of \begin{align*}z-\end{align*}scores for senior no-accident data. Here is a plot of the percentages and the \begin{align*}z-\end{align*}scores, or the normal probability plot. While not perfectly linear, this plot shows does have a strong linear pattern and we would therefore conclude that the distribution is reasonably normal. One additional clue about normality might be gained from investigating the empirical rule. Remember than in an idealized normal curve, approximately \begin{align*}68\%\end{align*} of the data should be within one standard deviation of the mean. If we count, there are \begin{align*}9\;\mathrm{years}\end{align*} for which the \begin{align*}z-\end{align*}scores are between \begin{align*}-1\end{align*} and \begin{align*}1\end{align*}. As a percentage of the total data, \begin{align*}9/13\end{align*} is about \begin{align*}69\%\end{align*}, or very close to the ideal value. This data set is so small that it is difficult to verify the other percentages, but they are still not unreasonable. About \begin{align*}92\%\end{align*} of the data (all but one of the points) ends up within \begin{align*}2\end{align*} standard deviations of the mean, and all of the data (Which is in line with the theoretical \begin{align*}99.7\%\end{align*}) is located between \begin{align*}z-\end{align*}scores of \begin{align*}-3\end{align*} and \begin{align*}3\end{align*}. ## Lesson Summary A normal distribution is a perfectly symmetric, mound-shaped distribution that appears in many practical and real data sets and is an especially important foundation for making conclusions about data called inference. A standard normal distribution is a normal distribution in which the mean is \begin{align*}0\end{align*} and the standard deviation is \begin{align*}1\end{align*}. A \begin{align*}z-\end{align*}score is a measure of the number of standard deviations a particular data value is away from the mean. The formula for calculating a \begin{align*}z-\end{align*}score is: \begin{align*}z=\frac {x-\bar x}{sd}\end{align*} \begin{align*}Z-\end{align*}scores are useful for comparing two distributions with different centers and/or spreads. When you convert an entire distribution to \begin{align*}z-\end{align*}scores, you are actually changing it to a standardized distribution. A distribution has \begin{align*}z-\end{align*}scores regardless of whether or not it is normal in shape. If the distribution is normal, however, the \begin{align*}z-\end{align*}scores are useful in explaining how much of the data is contained within a certain distance of the mean. The empirical rule is the name given to the observation that approximately \begin{align*}68\%\end{align*} of the data is within \begin{align*}1\end{align*} standard deviation of the mean, about \begin{align*}95\%\end{align*} is within \begin{align*}2\end{align*} standard deviations of the mean, and \begin{align*}99.7\%\end{align*} of the data is within \begin{align*}3\end{align*} standard deviations of the mean. Some refer to this as the \begin{align*}68-95-99.7\end{align*}. There is no straight-forward test for normality. You should learn to recognize the normality of a distribution by examining the shape and symmetry of its visual display. However, a normal probability or normal quantile plot is a useful tool to help check the normality of a distribution. This graph is a plot of the \begin{align*}z-\end{align*}scores of a data set against the actual values. If the distribution is normal, this plot will be linear. ## Points To Consider 1. How can we use normal distributions to make meaningful conclusions about samples and experiments? 2. How do we calculate probabilities and areas under the normal curve that are not covered by the empirical rule? 3. What are the other types of distributions that can occur in different probability situations? ## Review Questions 1. Which of the following data sets is most likely to be normally distributed? For the other choices, explain why you believe they would not follow a normal distribution. 1. The hand span (measured from the tip of the thumb to the tip of the extended \begin{align*}5^{th}\end{align*} finger) of a random sample of high school seniors. 2. The annual salaries of all employees of a large shipping company. 3. The annual salaries of a random sample of \begin{align*}50\end{align*} CEOs of major companies, \begin{align*}25\end{align*} women and \begin{align*}25\end{align*} men. 4. The dates of \begin{align*}100\end{align*} pennies taken from a cash drawer in a convenience store. 2. The grades on a statistics mid-term for a high school are normally distributed with \begin{align*}\mu = 81\end{align*} and \begin{align*}\sigma = 6.3\end{align*}. Calculate the \begin{align*}z-\end{align*}scores for each of the following exam grades. Draw and label a sketch for each example. 1. \begin{align*}65\end{align*} 2. \begin{align*}83\end{align*} 3. \begin{align*}93\end{align*} 4. \begin{align*}100\end{align*} 3. Assume that the mean weight of \begin{align*}1\end{align*} year-old girls in the US is normally distributed with a mean of about \begin{align*}9.5 \;\mathrm{kilograms}\end{align*} with a standard deviation of approximately \begin{align*}1.1 \;\mathrm{kilograms}\end{align*}. Without using a calculator, estimate the percentage of \begin{align*}1\end{align*} year-old girls in the US that meet the following conditions. Draw a sketch and shade the proper region for each problem. 1. Less than \begin{align*}8.4 \;\mathrm{kg}\end{align*} 2. Between \begin{align*}7.3 \;\mathrm{kg}\end{align*} and \begin{align*}11.7 \;\mathrm{kg}\end{align*} 3. More than \begin{align*}12.8 \;\mathrm{kg}\end{align*} 4. For a standard normal distribution, place the following in order from smallest to largest. 1. The percentage of data below \begin{align*}1\end{align*} 2. The percentage of data below \begin{align*}-1\end{align*} 3. The mean 4. The standard deviation 5. The percentage of data above \begin{align*}2\end{align*} 5. The 2007 AP Statistics examination scores were not normally distributed, with \begin{align*}\mu = 2.80\end{align*} and \begin{align*}\sigma = 1.34^1\end{align*}. What is the approximate \begin{align*}z-\end{align*}score that corresponds to an exam score of \begin{align*}5\end{align*} (The scores range from \begin{align*}1-5\end{align*}). 1. \begin{align*}0.786\end{align*} 2. \begin{align*}1.46\end{align*} 3. \begin{align*}1.64\end{align*} 4. \begin{align*}2.20\end{align*} 5. A \begin{align*}z-\end{align*}score can not be calculated because the distribution is not normal. \begin{align*}^1\end{align*}Data available on the College Board Website: 6. The heights of \begin{align*}5^{th}\end{align*} grade boys in the United States is approximately normally distributed with a mean height of \begin{align*}143.5 \;\mathrm{cm}\end{align*} and a standard deviation of about \begin{align*}7.1 \;\mathrm{cm}\end{align*}. What is the probability that a randomly chosen \begin{align*}5^{th}\end{align*} grade boy would be taller than \begin{align*}157.7 \;\mathrm{cm}\end{align*}? 7. A statistics class bought some sprinkle (or jimmies) doughnuts for a treat and noticed that the number of sprinkles seemed to vary from doughnut to doughnut. So, they counted the sprinkles on each doughnut. Here are the results: \begin{align*}241, 282, 258, 224, 133, 335, 322, 323, 354, 194, 332, 274, 233, 147, 213, 262, 227, 366\end{align*} (a) Create a histogram, dot plot, or box plot for this data. Comment on the shape, center and spread of the distribution. (b) Find the mean and standard deviation of the distribution of sprinkles. Complete the following chart by standardizing all the values: \begin{align*}\mu = \underline{\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;} \qquad \qquad \sigma = \underline{\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;}\end{align*} Number of Sprinkles Deviation \begin{align*}Z-\end{align*}scores \begin{align*}241\end{align*} \begin{align*}282\end{align*} \begin{align*}258\end{align*} \begin{align*}223\end{align*} \begin{align*}133\end{align*} \begin{align*}335\end{align*} \begin{align*}322\end{align*} \begin{align*}323\end{align*} \begin{align*}354\end{align*} \begin{align*}194\end{align*} \begin{align*}332\end{align*} \begin{align*}274\end{align*} \begin{align*}233\end{align*} \begin{align*}147\end{align*} \begin{align*}213\end{align*} \begin{align*}262\end{align*} \begin{align*}227\end{align*} \begin{align*}366\end{align*} Figure: A table to be filled in for the sprinkles question. (c) Create a normal probability plot from your results. (d) Based on this plot, comment on the normality of the distribution of sprinkle counts on these doughnuts. Open-ended Investigation: Munchkin Lab. Teacher Notes: For this activity, obtain two large boxes of Dunkin Donuts’ munchkins. Each box should contain only one type of munchkin. I have found students prefer the glazed and the chocolate, but the activity can be modified according to your preference. If you do not have Dunkin Donuts near you, the bakery section of your supermarket should have boxed donut holes or something similar you can use. You will also need an electronic balance capable of measuring to the nearest \begin{align*}10^{th}\end{align*} of a gram. Your science teachers will be able to help you out with this if you do not have one. I have used this activity before introducing the concepts in this chapter. If you remove the words “\begin{align*}z-\end{align*}score”, the normal probability plot and the last two questions, students will be able to investigate and develop an intuitive understanding for standardized scores and the empirical rule, before defining them. Experience has shown that this data very closely approximates a normal distribution and students will be able to calculate the \begin{align*}z-\end{align*}scores and verify that their results come very close to the theoretical values of the empirical rule. 1. You would expect this situation to vary normally with most students’ hand spans centering around a particular value and a few students having much larger or much smaller hand spans. 2. Most employees could be hourly laborers and drivers and their salaries might be normally distributed, but the few management and corporate salaries would most likely be much higher, giving a skewed right distribution. 3. Many studies have been published detailing the shrinking, but still prevalent income gap between male and female workers. This distribution would most likely be bi-modal, with each gender distribution by itself possibly being normal. 4. You might expect most of the pennies to be this year or last year, fewer still in the previous few years, and the occasional penny that is even older. The distribution would most likely be skewed left. 1. \begin{align*}z \approx -2.54\end{align*} 2. \begin{align*}z \approx 0.32\end{align*} 3. \begin{align*}z \approx 1.90\end{align*} 4. \begin{align*}z \approx 3.02\end{align*} 1. Because the data is normally distributed, students should use the \begin{align*}68-95-99.7\end{align*} rule to answer these questions. 1. about \begin{align*}16\%\end{align*} (less than one standard deviation below the mean) 2. about \begin{align*}95\%\end{align*} (within \begin{align*}2\end{align*} standard deviations) 3. about \begin{align*}0.15\%\end{align*} (more than \begin{align*}3\end{align*} standard deviations above the mean) 2. The standard normal curve has a mean of zero and a standard deviation of one, so all the values correspond to \begin{align*}z-\end{align*}scores. The corresponding values are approximately: 1. \begin{align*}0.84\end{align*} 2. \begin{align*}0.16\end{align*} 3. \begin{align*}0\end{align*} 4. \begin{align*}1\end{align*} 5. \begin{align*}0.025\end{align*} Therefore the correct order is: c, e, b, a, d 3. c 4. \begin{align*}0.025. 157.7\end{align*} is exactly \begin{align*}2\end{align*} standard deviations above the mean height. According to the empirical rule, the probability of a randomly chosen value being within \begin{align*}2\end{align*} standard deviations is about \begin{align*}0.95\end{align*}, which leaves \begin{align*}0.05\end{align*} in the tails. We are interested in the upper tail only as we are looking for the probability of being above this value. 5. (a) Here are the possible plots showing a symmetric, mound shaped distribution. (b) \begin{align*}\mu = 262.222 \qquad \qquad s = 67.837\end{align*} Number of Sprinkles Deviations \begin{align*}Z-\end{align*}scores \begin{align*}241\end{align*} \begin{align*}-21.2222\end{align*} \begin{align*}-0.313\end{align*} \begin{align*}282\end{align*} \begin{align*}19.7778\end{align*} \begin{align*}0.292\end{align*} \begin{align*}258\end{align*} \begin{align*}-4.22222\end{align*} \begin{align*}-0.062\end{align*} \begin{align*}223\end{align*} \begin{align*}-38.2222\end{align*} \begin{align*}-0.563\end{align*} \begin{align*}133\end{align*} \begin{align*}-129.222\end{align*} \begin{align*}-1.905\end{align*} \begin{align*}335\end{align*} \begin{align*}72.7778\end{align*} \begin{align*}1.073\end{align*} \begin{align*}322\end{align*} \begin{align*}59.7778\end{align*} \begin{align*}0.881\end{align*} \begin{align*}323\end{align*} \begin{align*}60.7778\end{align*} \begin{align*}0.896\end{align*} \begin{align*}354\end{align*} \begin{align*}91.7778\end{align*} \begin{align*}1.353\end{align*} \begin{align*}194\end{align*} \begin{align*}-68.2222\end{align*} \begin{align*}-1.006\end{align*} \begin{align*}332\end{align*} \begin{align*}69.7778\end{align*} ### Notes/Highlights Having trouble? Report an issue. Color Highlighted Text Notes Show Hide Details Description Tags: Subjects:
{"extraction_info": {"found_math": true, "script_math_tex": 264, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8199779391288757, "perplexity": 866.4143594756213}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718866.34/warc/CC-MAIN-20161020183838-00352-ip-10-171-6-4.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/160686-limit-evaluation-3rd-root.html
# Thread: Limit Evaluation 3rd root 1. ## Limit Evaluation 3rd root evaluate: $\displaystyle \displaystyle\lim_{x \to \infty} (x+1)^{2/3}-(x-1)^{2/3}$ Attached Thumbnails 2. $\displaystyle a^3- b^3= (a- b)(a^2+ ab+ b^2)$ With $\displaystyle a= (x+1)^{2/3}$ and $\displaystyle b= (x- 1)^{2/3}$, that says the $\displaystyle (x+1)^2- (x-1)^2= ((x+1)^{2/3}- (x-1)^{2/3})((x+1)^{4/3}+ ((x+1)(x-1))^{2/3}+ (x-1)^2)$. $\displaystyle (x+1)^{2/3}- (x-1)^{2/3}= \frac{(x+1)^2- (x-1)^2}{(x+1)^{4/3}+ ((x+1)(x-1))^{2/3}+ (x-1)^2}$ 3. Another way is squeeze, using the following inequality which is valid for all $\displaystyle x\ge\frac{1+\sqrt{5}}{2}$: $\displaystyle (x+1)^\frac{2}{3} \le (x-1)^\frac{2}{3}+(x-1)^{-\frac{1}{3}}$ (the reverse inequality is true for all $\displaystyle x\le\frac{1-\sqrt{5}}{2}$, with which you can calculate the limit at $\displaystyle -\infty$)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9610276818275452, "perplexity": 1522.320221580398}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125936969.10/warc/CC-MAIN-20180419130550-20180419150550-00420.warc.gz"}
https://www.arxiv-vanity.com/papers/1703.00570/
# Background free search for neutrinoless double beta decay with Gerda Phase II August 4, 2021 ###### Abstract The Standard Model of particle physics cannot explain the dominance of matter over anti-matter in our Universe. In many model extensions this is a very natural consequence of neutrinos being their own anti-particles (Majorana particles) which implies that a lepton number violating radioactive decay named neutrinoless double beta () decay should exist. The detection of this extremely rare hypothetical process requires utmost suppression of any kind of backgrounds. The Gerda collaboration searches for decay of Ge () by operating bare detectors made from germanium with enriched Ge fraction in liquid argon. Here, we report on first data of Gerda Phase II. A background level of   has been achieved which is the world-best if weighted by the narrow energy-signal region of germanium detectors. Combining Phase I and II data we find no signal and deduce a new lower limit for the half-life of  yr at 90 % C.L. Our sensitivity of  yr is competitive with the one of experiments with significantly larger isotope mass. Gerda is the first experiment that will be background-free up to its design exposure. This progress relies on a novel active veto system, the superior germanium detector energy resolution and the improved background recognition of our new detectors. The unique discovery potential of an essentially background-free search for decay motivates a larger germanium experiment with higher sensitivity. decay, , Ge, enriched Ge detectors, active veto ###### pacs: 23.40.-s, 21.10.Tg, 27.50.+e, 29.40.Wk Gerda collaboration also at:]Moscow Inst. of Physics and Technology, Moscow, Russia also at:]Int. Univ. for Nature, Society and Man, Dubna, Russia ## I Introduction One of the most puzzling aspects of cosmology is the unknown reason for the dominance of matter over anti-matter in our Universe. Within the Standard Model of particle physics there is no explanation for this observation and hence a new mechanism has to be responsible. A favored model called leptogenesis Davidson et al. (2008) links the matter dominance to the nature of neutrinos and to the violation of lepton number, i.e. the total number of electrons, muons, taus and neutrinos minus the number of their anti-particles. In most extensions of the Standard Model Mohapatra and A.Y.Smirnov (2006); Mohapatra et al. (2007); Päs and Rodejohann (2015) neutrinos are assumed to be their own anti-particles (Majorana particles). This might lead to lepton number violating processes at the TeV energy scale observable at the LHC Päs and Rodejohann (2015) and would result in neutrinoless double beta () decay where a nucleus of mass number and charge decays as . Lepton number violation has not been unambiguously observed so far. There are several experimental decay programs ongoing using for example Ge Agostini et al. (2013a); Cuesta et al. (2015), Te Alfonso et al. (2015); Andringa et al. (2016) or Xe Gando et al. (2016); Albert et al. (2014); Martin-Albo et al. (2016). They all measure the sum of the electron energies released in the decay which corresponds to the mass difference of the two nuclei. The decay half-life is at least 15 orders of magnitude longer than the age of the universe. Its observation requires therefore the best suppression of backgrounds. In the GERmanium Detector Array (Gerda) experiment bare germanium detectors are operated in liquid argon (LAr). The detectors are made from germanium with the Ge isotope fraction enriched from 7.8 % to about 87 %. Since source and detector of decay are identical in this calorimetric approach the detection efficiency is high. This Article presents the first result from Gerda Phase II. In the first phase of data taking (Phase I), a limit of  yr (90 % C.L.) was found Agostini et al. (2013a) for an exposure of 21.6 kgyr and a background of 0.01  at  keV Mount et al. (2010). At that time, the result was based on data from 10 detectors (17.6 kg total mass). In December 2015, Phase II started with 37 detectors (35.6 kg) from enriched material. The mass is hence doubled relative to Phase I. The ambitious goal is an improvement of the half-life sensitivity to  yr for about 100 kgyr exposure by reducing the background level by an order of magnitude. The latter is achieved by vetoing background events through the detection of their energy deposition in LAr and the characteristic time profile of their signals in the germanium detectors. The expected background is less than one count in the energy region of interest up to the design exposure which means that Gerda will be the first “background free” experiment in the field. We will demonstrate in this Article that Gerda has reached the envisioned background level which is the world-best level if weighted by our superior energy resolution. Gerda is therefore best suited to not only quote limits but to identify with high confidence a signal. ## Ii The experiment The Gerda experiment Ackermann et al. (2013) is located at the underground Laboratori Nazionali del Gran Sasso (LNGS) of INFN, Italy. A rock overburden of about 3500 m water equivalent removes the hadronic components of cosmic ray showers and reduces the muon flux at the experiment by six orders of magnitude to 1.2 /(mh). The basic idea is to operate bare germanium detectors in a radiopure cryogenic liquid like LAr for cooling to their operating temperature of 90 K and for shielding against external radiation originating from the walls (see Extended Data Fig. 1 for a sketch of the setup) Heusser (1995). In Gerda, a 64 m LAr cryostat is inside a 590 m water tank. The clean water completes the passive shield. Above the water tank is a clean room with a glove box and lock for the assembly of germanium detectors into strings and the integration of the liquid argon veto system. Gerda deploys 7 coaxial detectors from the former Heidelberg-Moscow Klapdor-Kleingrothaus et al. (2004) and IGEX Aalseth et al. (2002) experiments and 30 broad energy (BEGe) detectors  Agostini et al. (2015a). All diodes have p-type doping (see Extended Data Fig. 2). Electron-hole pairs created in the 1–2 mm thick n electrode mostly recombine such that the active volume is reduced. A superior identification of the event topology and hence background rejection is available for the BEGe type (see below). The enriched detectors are assembled into 6 strings surrounding the central one which consists of three coaxial detectors of natural isotopic composition. Each string is inside a nylon cylinder (see Extended Data Fig. 3) to limit the LAr volume from which radioactive ions like K can be collected to the outer detector surfaces Agostini et al. (2014a). All detectors are connected to custom made low radioactivity charge sensitive amplifiers Riboldi et al. (2015) (30 MHz bandwidth, 0.8 keV full width at half maximum (FWHM) resolution) located in LAr about 35 cm above the detectors. The charge signal traces are digitized with 100 MHz sampling rate and stored on disk for offline analysis. In background events some energy is often also deposited in the argon. The resulting scintillation light Agostini et al. (2015b) can be detected to veto them. In Phase II, a cylindrical volume of 0.5 m diameter and 2.2 m height around the detector strings (see Extended Data Fig. 1 and 4) is instrumented with light sensors. The central 0.9 m of the cylinder are defined by a curtain of wavelength shifting fibers which surround the 0.4 m high detector array. The fibers are read-out at both ends with 90 silicon photomulipliers (SiPM) Janicsko et al. (2016). Groups of six  mm SiPMs are connected together to a charge sensitive amplifier. Sixteen 3” low-background photomultpliers (PMT) designed for cryogenic operation are mounted at the top and bottom surfaces of the cylindrical volume. The distance to any detector is at least 0.7 m to limit the PMT background contribution from their intrinsic Th/U radioactivity. All LAr veto channels are digitized and read-out together with the germanium channels if at least one detector has an energy deposition above 100 keV. The nylon cylinders, the fibers, the PMTs and all surfaces of the instrumented LAr cylindrical volume are covered with a wavelength shifter to shift the LAr scintillation light from 128 nm to about 400 nm to match the peak quantum efficiency of the PMTs and the absorption maximum of the fibers. The water tank is instrumented with 66 PMTs to detect Cherenkov light from muons passing through the experiment. On top of the clean room are three layers of plastic scintillator panels covering the central 43 m to complete the muon veto Freund et al. (2016). ## Iii Data analysis The data analysis flow is very similar to that of Phase I. The offline analysis of the digitized germanium signals is described in Refs. Agostini et al. (2013a, 2012, 2011a). A data blinding procedure is again applied. Events with a reconstructed energy in the interval  keV are not analyzed but only stored on disk. After the entire analysis chain has been frozen, these blinded events have been processed. The gain stability of each germanium detector is continuously monitored by injecting charge pulses (test pulses) into the front-end electronics with a rate of 0.05 Hz. The test pulses are also used to monitor leakage current and noise. Only data recorded during stable operating conditions (e.g. gain stability better than 0.1 %) are used for the physics analysis. This corresponds to about 85 % of the total data written on disk. Signals originated from electrical discharges in the high voltage line or bursts of noise are rejected during the offline event reconstruction by a set of multi-parametric cuts based on the flatness of the baseline, polarity and time structure of the pulse. Physical events at are accepted with an efficiency larger than 99.9 % estimated with lines in calibration data, test pulse events and template signals injected in the data set. Conversely, a visual inspection of all events above 1.6 MeV shows that no unphysical event survives the cuts. The energy deposited in a germanium detector is reconstructed offline with an improved digital filter Agostini et al. (2015c), whose parameters are optimized for each detector and for several periods. The energy scale and resolution are determined with weekly calibration runs with Th sources. The long-term stability of the scale is assessed by monitoring the shift of the position of the 2615 keV peak between consecutive calibrations. It is typically smaller than 1 keV for BEGe detectors and somewhat worse for some coaxial ones. The FWHM resolution at 2.6 MeV is between 2.6–4.0 keV for BEGe and 3.4–4.4 keV for coaxial detectors. The width of the strongest lines in the physics data (1525 keV from K and 1460 keV from K) is found to be 0.5 keV larger than the expectation for the coaxial detectors (see Fig. 1). In order to estimate the expected energy resolution at an additional noise term is added to take this into account. For decays in the active part of a detector volume, the total energy of is detected in 92 % of the cases in this detector. Multiple detector coincidences are therefore discarded as background events. Two consecutive candidate events within 1 ms are also rejected (dead time ) to discriminate time-correlated decays from primordial radioisotopes, as e.g. the radon progenies Bi and Po. Candidate events are also refuted if a muon trigger occurred within 10 s prior to a germanium detector trigger. More than 99 % of the muons that deposit energy in a germanium detector are rejected this way. The induced dead time is 0.1 %. The traces from PMTs and SiPMs are analyzed offline to search for LAr scintillation signals in coincidences with a germanium detector trigger. An event is rejected if any of the light detectors record a signal of amplitude above 50 % of the expectation for a single photo-electron within 5 s from the germanium trigger. 99 % of the photons occur in this window. Accidental coincidences between the LAr veto system and germanium detectors create a dead time of  % which is measured with test pulse events and cross checked with the counts in the K peak. Fig. 2 shows the energy spectra for BEGe and coaxial detectors of Phase II with and without the LAr veto cut. Below  keV the spectra are dominated by Ar decays, up to 1.7 MeV by events from double beta decay with two neutrino emission (), above 2.6 MeV by decays on the detector surface and around by a mixture of events, K decays and those from the U and Th decay chains. The two spectra are similar except for the number of events which is on average higher for coaxial detectors. The number of counts shows a large variation between the detectors. The power of the LAr veto is best demonstrated by the K line at 1525 keV which is suppressed by a factor 5 (see inset) due to the particle depositing up to 2 MeV energy in the LAr. The figure also shows the predicted spectrum from Ge using our Phase I result for the half-life of  yr Agostini et al. (2015d). The time profile of the germanium detector current signal is used to discriminate decays from background events. While the former have point-like energy deposition in the germanium (single site events, SSE), the latter have often multiple depositions (multi site events, MSE) or depositions on the detector surface. The same pulse shape discrimination (PSD) techniques of Phase I Agostini et al. (2013b) are applied. Events in the double escape peak (DEP) and at the Compton edge of 2615 keV gammas in calibration data have a similar time profile as decays and are hence proxies for SSE. These samples are used to define the PSD cuts and the related detection efficiencies. The latter are cross checked with decays. The geometry of BEGe detectors allows to apply a simple mono-parametric PSD based on the maximum of the detector current pulse normalized to the total energy  Budjáš et al. (2009); Agostini et al. (2011b). The energy dependence of the mean and the resolution of are measured for every detector with calibration events. After correcting for these dependences and normalizing the mean of DEP events to 1, the acceptance range is determined for each detector individually: the lower cut is set to keep 90 % of DEP events and the upper position is twice the low-side separation from 1. Fig. 3 shows a scatter plot of the PSD parameter versus energy and the projection to the energy axis. Events marked in red survive the PSD selection. Below 1.7 MeV events dominate with a survival fraction of  %. The two potassium peaks and Compton scattered photons reconstruct at (below the SSE band). All 234  events at higher energies exhibit and are easily removed. The average survival fraction Wagner (2017) is  %. The uncertainty takes into account the systematic difference between the centroids of DEP and events and different fractions of MSE in DEP and events. For coaxial detectors a mono-parametric PSD is not sufficient since SSE do not have a simple signature Agostini et al. (2013b). Instead two neural network algorithms are applied to discriminate SSE from MSE and from surface events. The first one is identical to the one used in Phase I. The cut on the neural network qualifier is set to yield a survival fraction of DEP events of 90 % for each detector. For the determination of the efficiency, events in physics data and a complete Monte Carlo simulation Kirsch (2014) of physics data and calibration data are used. The simulation considers the detector and the electronics response to energy depositions including the drift of charges in the crystal Bruyneel et al. (2016). We find a survival fraction for events of  % where the error is derived from variations of the simulation parameters. The second neural network algorithm is applied for the first time and identifies surface events on the p contact. Training is done with physics data from two different energy intervals. After the LAr veto cut events in the range 1.0–1.3 MeV are almost exclusively from decay and hence signal-like. Events above 3.5 MeV are almost all from decays on the p electrode and represent background events in the training. As efficiency we measure a value of  % for a event sample not used in the training. The combined PSD efficiency for coaxial detectors is  %. ## Iv Results This analysis includes the data sets used in the previous publication Agostini et al. (2013a, 2015e), an additional coaxial detector period from 2013 (labeled “PI extra”) and the Phase II data from December 2015 until June 2016 (labeled “PIIa coaxial” and “PIIa BEGe”). Table 1 lists the relevant parameters for all data sets. The exposures in the active volumes of the detectors for Ge are 234 and 109 molyr for Phase I and II, respectively. The efficiency is the product of the Ge isotope fraction (87 %), the active volume fraction (87–90 %), the event fraction reconstructed at full energy in a single crystal (92 %), pulse shape selection (79–92 %) and the live time fraction (97.7 %). For the Phase I data sets the event selection including the PSD classification is unchanged. An improved energy reconstruction Agostini et al. (2015c) is applied to the data as well as an updated value for the coaxial detector PSD efficiency of the neural network analysis of  % Kirsch (2014). Fig. 4 shows the spectra for the combined Phase I data sets and the two Phase II sets. The analysis range is from 1930 to 2190 keV without the intervals  keV and  keV of known peaks predicted by our background model Agostini et al. (2014a). For the coaxial detectors four events survive the cuts which means that the background is reduced by a factor of three compared to Phase I (see ’PI golden’ in Tab. 1). Due to the better PSD performance, only one event remains in the BEGe data which corresponds to a background of . Consequently, the Phase II background goal is reached. We perform both a Frequentist and a Bayesian analysis based on an unbinned extended likelihood function Agostini et al. (2015e). The fit function for every data set is a flat distribution for the background (one free parameter per set) and for a possible signal a Gaussian centered at with a width according to the corresponding resolution listed in Tab. 1. The signal strength is calculated for each set according to its exposure, efficiency and the inverse half-life which is a common free parameter. Systematic uncertainties like a 0.2 keV uncertainty of the energy scale at are included in the analysis as pull terms in the likelihood function. The implementation takes correlations into account. The Frequentist analysis uses the Neyman construction of the confidence interval and the standard two-sided test statistics Olive et al. (2014); Cowan et al. (2011) with the restriction to the physical region : the frequency distribution of the test statistic is generated using Monte Carlo simulations for different assumed values. The limit was determined by finding the largest value of for which at most 10 % of the simulated experiments had a value of the test statistic more unlikely than the one measured in our data (see Extended Data Fig. 5). Details of the statistical analysis can be found in the appendix. The best fit yields zero signal events and a 90 % C.L. limit of 2.0 events in 34.4 kgyr total exposure or T0ν1/2>5.3⋅1025yr. (1) The (median) sensitivity assuming no signal is  yr (see Extended Data Fig. 5). The systematic errors weaken the limit by 1 %. The Bayesian fit yields for a prior flat in between 0 and  yr a limit of  yr (90 % C.I.). The sensitivity assuming no signal is  yr. ## V Discussion The second phase of Gerda collects data since December 2015 in stable conditions with all channels working. The background at for the BEGe detectors is  . This is a major achievement since the value is consistent with our ambitious design goal. We find no hint for a decay signal in our combined data and place a limit of  yr (90 % C.L., sensitivity  yr). For light Majorana neutrino exchange and a nuclear matrix element range for Ge between 2.8 and 6.1 Menendez et al. (2009); Horoi and Neacsu (2016); Barea et al. (2015); Hyvärinen and Suhonen (2015); Simkovic et al. (2013); Vaquero et al. (2013); Yao et al. (2015) the Gerda half-life limit converts to 0.15–0.33 eV (90 % C.L.). We expect only a fraction of a background event in the energy region of interest (1 FWHM) at design exposure of 100 kgyr. Gerda is hence the first “background free” experiment in the field. Our sensitivity grows therefore almost linearly with time instead of by square root like for competing experiments and reaches  yr for the half-life limit within 3 years of continuous operation. With the same exposure we have a 50 % chance to detect a signal with significance if the half-life is below  yr. Phase II has demonstrated that the concept of background suppression by exploiting the good pulse shape performance of BEGe detectors and by detecting the argon scintillation light works. The background at is at a world-best level: it is lower by typically a factor of 10 compared to experiments using other isotopes after normalization by the energy resolution and total efficiency ; i.e. (BIFWHM/) is superior. This is the reason why the Gerda half-life sensitivity of  yr for an exposure of 343 molyr is similar to the one of Kamland-Zen for Xe of  yr based on a more than 10-fold exposure of 3700 molyr Gando et al. (2016). A discovery of decay would have far reaching consequences for our understanding of particle physics and cosmology. Key features for a convincing case are an ultra low background with a simple flat distribution, excellent energy resolution and the possibility to identify the events with high confidence as signal-like as opposed to an unknown -line from a nuclear transition. The latter is achieved by the detector pulse shape analysis and possibly a signature in the argon. The concept to operate bare germanium detectors in liquid argon has proven to have the best performance for a discovery which motivates future extensions of the program. The Gerda cryostat can hold 200 kg of detectors. Such an experiment will remain background-free until an exposure of 1000 kgyr provided the background can be further reduced by a factor of five. The discovery sensitivity would then improve by an order of magnitude to a half-life of  yr. The 200 kg setup is conceived as a first step for a more ambitious 1 ton experiment which would ultimately boost the sensitivity to  yr corresponding to the 10–20 meV range. Both extensions are being pursued by the newly formed LEGeND Collaboration (http://www.legend-exp.org) ## Appendix A Acknowledgments The Gerda experiment is supported financially by the German Federal Ministry for Education and Research (BMBF), the German Research Foundation (DFG) via the Excellence Cluster Universe, the Italian Istituto Nazionale di Fisica Nucleare (INFN), the Max Planck Society (MPG), the Polish National Science Centre (NCN), the Russian Foundation for Basic Research (RFBR), and the Swiss National Science Foundation (SNF). The institutions acknowledge also internal financial support. The Gerda collaboration thanks the directors and the staff of the LNGS for their continuous strong support of the Gerda experiment. ## Appendix B Appendix: Statistical Methods This section discusses the statistical analysis of the Gerda data. In particular, the procedures to derive the limit on , the median sensitivity of the experiment and the treatment of systematic uncertainties are described. A combined analysis of data from Phase I and II is performed by fitting simultaneously the six data sets of Table 1. The parameter of interest for this analysis is the strength of a possible  decay signal: . The number of expected  events in the -th data set as a function of  is given by: μSi=ln2⋅(NA/ma)⋅ϵi⋅Ei⋅S , (2) where is Avogadro’s number, the global signal efficiency of the -th data set, the exposure and the molar mass. The exposure quoted is the total detector mass multiplied by the data taking time. The global signal efficiency accounts for the fraction of Ge in the detector material, the fraction of the detector active volume, the efficiency of the analysis cuts, the fractional live time of the experiment and the probability that  decay events in the active detector volume have a reconstructed energy at . The total number of expected background events as a function of the background index is: μBi=Ei⋅BIi⋅ΔE, (3) where =240 keV is the width of the energy region around  used for the fit. Each data set is fitted with an unbinned likelihood function assuming a Gaussian distribution for the signal and a flat distribution for the background: Li(Di|S,BIi,θi) = Nobsi∏j=1 1μSi+μBi⋅[μSi⋅1√2πσiexp(−(Ej−Qββ−δi)22σ2i)+μBi⋅1ΔE] (4) where are the individual event energies, is the total number of events observed in the -th data set, is the energy resolution and is a possible systematic energy offset. The parameters with systematic uncertainties are indicated with . The parameters and are bound to positive values. The total likelihood is constructed as the product of all weighted with the Poisson terms pdg : L(D|S,\bf{BI},θ)=∏i⎡⎣e−(μSi+μBi)⋅(μSi+μBi)NobsiNobsi!⋅Li(Di|S,BIi,θi)⎤⎦ (5) where , and . A frequentist analysis is performed using a two-sided test statistics Cowan et al. (2011) based on the profile likelihood (): tS=−2lnλ(S)=−2lnL(S,^^\bf BI,^^θ)L(^S,^\bf BI,^θ) (6) where and in the numerator denote the value of the parameters that maximizes for a fixed . In the denominator, , and are the values corresponding to the absolute maximum likelihood. The confidence intervals are constructed for a discrete set of values . For each , possible realizations of the experiments are generated via Monte Carlo according to the parameters of Table 1 and the expected number of counts from Eqs. 2 and 3. For each realization is evaluated. From the entire set the probability distribution is calculated. The p-value of the data for a specific is computed as: pSj=∫∞tobsf(tS|Sj)d(tSj) (7) where is the value of the test statistics of the Gerda data for . The values of are shown by the solid line in Extended Data Fig. 5. The 90 % C.L. interval is given by all values with . Such an interval has the correct coverage by construction. The current analysis yields a one-sided interval, i.e. a limit of  yr. The expectation for the frequentist limit (i.e. the experimental sensitivity) was evaluated from the distribution of built from Monte Carlo generated data sets with no injected signal (). The distribution of is shown in Extended Data Fig. 5: the dashed line is the median of the distribution and the color bands indicate the 68 % and 90 % probability central intervals. The experimental sensitivity corresponds to the  value at which the median crosses the p-value threshold of 0.1 :  yr (90% C.L.). Systematics uncertainties are folded into the likelihood by varying the parameters in the fits and constraining them by adding to the likelihood multiplicative Gaussian penalty terms. The central values and the standard deviations of the penalty terms for and are taken from Table 1. The penalty term on has a central value equal to zero and standard deviation of 0.2 keV. Instead of the two-sided test statistics one can use a one-sided test statistic defined as Cowan et al. (2011): ~tS={0,^S>S≥0−2lnλ(S),^S≤S (8) By construction for for all realizations and consequently is always included in the 90 % C.L. interval, i.e. the one-sided test statistic will always yield a limit. In our case the resulting limit would be 50% stronger. Similar to other experiments Albert et al. (2014); Gando et al. (2016), we want to be able to detect a possible signal and thus we decided a priori to adopt the two-sided test statistic. It is noteworthy that, although the coverage of both test statistics is correct by construction, deciding which one to use according to the outcome of the experiment would result in the flip-flop issue discussed by Feldman and Cousins Feldman:1997qc . The statistical analysis is also performed within a Bayesian framework. The combined posterior probability density function (PDF) is calculated from the six data sets according to Bayes’ theorem: P(S,\bf BI|D,θ)∝L(D|S,\bf BI,θ) P(S) ∏iP(BIi) (9) The likelihood is given by Eq. (5), while and are the prior PDFs for  and for the background indices, respectively. The one-dimensional posterior PDF of the parameter  of interest is derived by marginalization over all nuisance parameters BI. The marginalization is performed by the BAT toolkit bat via a Markov chain Monte Carlo numerical integration. A flat PDF between 0 and 0.1  is considered as prior for all background indices. As in Ref. Agostini et al. (2013a), a flat prior distribution is taken for  between 0 and  /yr, i.e. all counting rates up to a maximum are considered to be equiprobable. The parameters in the likelihood are fixed during the Bayesian analysis and the uncertainties are folded into the posterior PDF as last step by an integral average: ⟨P(S|D)⟩=∫P(S|D,θ)∏ig(θi)dθi (10) with being Gaussian distributions like for the frequentist analysis. The integration is performed numerically by a Monte Carlo approach. The median sensitivity of the experiment in the case of no signal is  yr (90% C.I.). The posterior PDF for our data has an exponential shape with the mode at . Its 90 % probability quantile yields  yr. As in any Bayesian analysis, results depend on the choice of the priors. For our limit we assume all signal count rates to be a priori equiprobable. Alternative reasonable choices are for instance: equiprobable Majorana neutrino masses, which yields a prior proportional to ; or scale invariance in the counting rate, namely a flat prior in . The limits derived with these assumptions are significantly stronger (50 % or more), since for both alternatives the prior PDFs increase the probability of low  values. The systematic uncertainties weaken the limit on  by less than 1% both in the frequentist and Bayesian analysis. In general, the impact of systematic uncertainties on limits is marginal in the low-statistics regime that characterizes our experiment (see also Ref. cousins ). The limit derived from the Gerda data is slightly stronger than the median sensitivity. This effect is more significant in the frequentist analysis as one would expect, see e.g. Ref. biller for a detailed discussion. The probability of obtaining a frequentist (Bayesian) limit stronger than the actual one is 33 % (35 %). ## References • Davidson et al. (2008) S. Davidson, E. Nardi,  and Y. Nir, Phys. Rept. 446, 105 (2008). • Mohapatra and A.Y.Smirnov (2006) R. Mohapatra and A.Y.Smirnov, Ann. Rev. Nucl. Part. Sci. 56, 569 (2006). • Mohapatra et al. (2007) R. Mohapatra et al., Rept. Prog. Phys. 70, 1757 (2007). • Päs and Rodejohann (2015) H. Päs and W. Rodejohann, New J. Phys. 17, 115010 (2015). • Agostini et al. (2013a) M. Agostini et al. (GERDA Collaboration), Phys. Rev. Lett. 111, 122503 (2013a). • Cuesta et al. (2015) C. Cuesta et al. (Majorana Collaboration), AIP Conf. Proc. 1686, 020005 (2015). • Alfonso et al. (2015) K. Alfonso et al. (Cuore Collaboration), Phys. Rev. Lett. 115, 102502 (2015). • Andringa et al. (2016) S. Andringa et al. (SNO+ Collaboration), Adv. High Energy Phys. 2016, 6194250 (2016). • Gando et al. (2016) A. Gando et al. (Kamland-Zen Collaboration), Phys. Rev. Lett. 117, 082503 (2016). • Albert et al. (2014) J. Albert et al. (EXO-200 Collaboration), Nature 510, 229 (2014). • Martin-Albo et al. (2016) J. Martin-Albo et al. (NEXT-100 Collaboration), JHEP 1605, 159 (2016). • Mount et al. (2010) B. J. Mount, M. Redshaw,  and E. G. Myers, Phys.Rev. C 81, 032501 (2010). • Ackermann et al. (2013) K.-H. Ackermann et al. (GERDA Collaboration), Eur. Phys. J. C 73, 2330 (2013). • Heusser (1995) G. Heusser, Ann. Rev. Nucl. Part. Sci. 45, 543 (1995). • Klapdor-Kleingrothaus et al. (2004) H. V. Klapdor-Kleingrothaus et al., Phys. Lett. B 586, 198 (2004). • Aalseth et al. (2002) C. E. Aalseth et al. (IGEX Collaboration), Phys. Rev. D 65, 092007 (2002). • Agostini et al. (2015a) M. Agostini et al. (GERDA Collaboration), Eur. Phys. J. C 75, 39 (2015a). • Agostini et al. (2014a) M. Agostini et al. (GERDA Collaboration), Eur. Phys. J. C 74, 2764 (2014a). • Riboldi et al. (2015) S. Riboldi et al.,  (2015), http://ieeexplore.ieee.org/document/7465549, (2015) . • Agostini et al. (2015b) M. Agostini et al., Euro. Phys. J. C 75, 506 (2015b). • Janicsko et al. (2016) J. Janicsko et al.,  (2016), https://arxiv.org/abs/1606.04254 . • Freund et al. (2016) K. Freund et al., Eur. Phys. J. C 76, 298 (2016). • Agostini et al. (2012) M. Agostini et al., J. Phys.: Conf. Ser. 368, 012047 (2012). • Agostini et al. (2011a) M. Agostini et al., J. Instrum. 6, P08013 (2011a). • Agostini et al. (2015c) M. Agostini et al. (GERDA Collaboration), Eur. Phys. J. C 75, 255 (2015c). • Agostini et al. (2015d) M. Agostini et al. (GERDA Collaboration), Eur. Phys. J. C 75, 416 (2015d). • Agostini et al. (2013b) M. Agostini et al. (GERDA Collaboration), Eur. Phys. J. C 73, 2583 (2013b). • Budjáš et al. (2009) D. Budjáš et al., JINST 4, P10007 (2009). • Agostini et al. (2011b) M. Agostini et al., JINST 6, P03005 (2011b). • Wagner (2017) V. Wagner, Pulse Shape Analysis for the GERDA Experiment to Set a New Limit on the Half-life of 0 Decay of Ge (PhD thesis University of Heidelberg, 2017). • Kirsch (2014) A. Kirsch, Search for neutrinoless double beta decay in GERDA Phase I (PhD thesis University of Heidelberg, 2014). • Bruyneel et al. (2016) B. Bruyneel, B. Birkenbach,  and P. Reiter, Eur. Phys. J. A 52, 70 (2016). • Agostini et al. (2015e) M. Agostini et al. (GERDA Collaboration), Physics Procedia 61, 828 (2015e). • Olive et al. (2014) K. Olive et al. (Particle Data Group), Chin. Phys. C 38, 090001 (2014). • Cowan et al. (2011) G. Cowan et al., Eur. Phys. J. C 71, 1554 (2011). • Menendez et al. (2009) J. Menendez et al., Nucl. Phys. A 818, 139 (2009). • Horoi and Neacsu (2016) M. Horoi and A. Neacsu, Phys. Rev. C 93, 024308 (2016). • Barea et al. (2015) J. Barea, J. Kotila,  and F. Iachello, Phys. Rev. C 91, 034304 (2015). • Hyvärinen and Suhonen (2015) J. Hyvärinen and J. Suhonen, Phys. Rev. C 91, 024613 (2015). • Simkovic et al. (2013) F. Simkovic et al., Phys. Rev. C. 87, 045501 (2013). • Vaquero et al. (2013) N. L. Vaquero, T. Rodriguez,  and J. Egido, Phys. Rev. Lett. 111, 142501 (2013). • Yao et al. (2015) J. Yao et al., Phys. Rev. C 91, 024316 (2015). • (43) J. Beringer et al., Review of Particle Physics, Phys. Rev. D86 (2012) 010001. • (44) G. J. Feldman and R. D. Cousins, Phys. Rev. D 57 (1998) 3873-3889 • (45) A. Caldwell, D. Kollar, and K. Kröninger, Comput. Phys. Commun.180 (2009) 2197-2209. • (46) R. Cousins and V. Highland, Nucl. Instr. Meth. A320 (1992) 331-335. • (47) S. V. Biller and S. M. Oser, Nucl. Instr. Meth. A774 (2015) 103-119.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9003246426582336, "perplexity": 1420.9716471486395}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585518.54/warc/CC-MAIN-20211022181017-20211022211017-00399.warc.gz"}
http://mathhelpforum.com/advanced-applied-math/129477-frequency-modulation-synthesis-series.html
# Thread: Frequency modulation synthesis in series 1. ## Frequency modulation synthesis in series Hello there, I'm currently working on a project about the mathematics of FM synthesis (for a general overview see the relevant chapter in ☮ Music: a Mathematical Offering ☮). I'm trying to do some expansion and simplification of using multiple modulating waves and the maths is getting a bit trying. I want to rearrange an equation of trignometric functions so they are in bessel function form. So far I've done it for parallel modulating waves, by showing if we have something of the form $sin(c_2 + I_2 sin(\theta_2) + I_1 sin(\theta_2))$ then it can be rearranged to $\sum_{k_1} \sum_{k_2} J_{k_1} (I_1) J_{k_2} (I_2) sin (c_2 + k_1 \theta_1 + k_2 \theta_2))$ using standard addition formulae for trignometric equations and the Bessel function expansion of $sin(z sin\theta)) = \sum_{n=0}^{oo} J_{2n+1}(z)sin((2n+1) \theta)$ (I can provide a full proof if needed, but I hope the sketch will give an idea of what I'm aiming for) So I'm trying to do the same for series: I'm starting with $sin(\alpha_1 + I_1 sin (\alpha_2 + I_2 sin \theta_2)$ Using addition formula for $sin$ and then applying the Bessel function formula again, I arrive at $sin (c_1 + I_1 \sum_{k_1}J_{k_1} (I_2) sin (c_1 + k_1 \theta_1)))$ Now here's where I get stuck. Can anyone suggest how I might expand/simplify this last equation? Can it even be done? 2. I've actually found the solution to the problem but not the proof, so if someone could help me understand it that would be great. Rewriting in the authors own terms, he starts with the equation $s(t) = sin(2 \pi f_c t + I sin[2 \pi f_1 t + I_2 sin\{2 \pi f_2 t\}])$ and ends with $s(t) = \sum_k J_k(I_1) \times J_n (k I_2) sin(2\pi [f_c + k_1 f_1 + n f_2]t)$ Can anyone explain the inbetween steps for me?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9631268382072449, "perplexity": 397.11474752918565}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948581053.56/warc/CC-MAIN-20171216030243-20171216052243-00537.warc.gz"}
https://planetmath.org/GeneralizedEigenspace
# generalized eigenspace Let $V$ be a vector space (over a field $k$), and $T$ a linear operator on $V$, and $\lambda$ an eigenvalue of $T$. The set $E_{\lambda}$ of all generalized eigenvectors of $T$ corresponding to $\lambda$, together with the zero vector $0$, is called the generalized eigenspace of $T$ corresponding to $\lambda$. In short, the generalized eigenspace of $T$ corresponding to $\lambda$ is the set $E_{\lambda}:=\{v\in V\mid(T-\lambda I)^{i}(v)=0\textrm{ for some positive % integer }i\}.$ Here are some properties of $E_{\lambda}$: 1. 1. $W_{\lambda}\subseteq E_{\lambda}$, where $W_{\lambda}$ is the eigenspace of $T$ corresponding to $\lambda$. 2. 2. $E_{\lambda}$ is a subspace of $V$ and $E_{\lambda}$ is $T$-invariant. 3. 3. If $V$ is finite dimensional, then $\dim(E_{\lambda})$ is the algebraic multiplicity of $\lambda$. 4. 4. $E_{\lambda_{1}}\cap E_{\lambda_{2}}=0$ iff $\lambda_{1}\neq\lambda_{2}$. More generally, $E_{A}\cap E_{B}=0$ iff $A$ and $B$ are disjoint sets of eigenvalues of $T$, and $E_{A}$ (or $E_{B}$) is defined as the sum of all $E_{\lambda}$, where $\lambda\in A$ (or $B$). 5. 5. If $V$ is finite dimensional and $T$ is a linear operator on $V$ such that its characteristic polynomial $p_{T}$ splits (over $k$), then $V=\bigoplus_{\lambda\in S}E_{\lambda},$ where $S$ is the set of all eigenvalues of $T$. 6. 6. Assume that $T$ and $V$ have the same properties as in (5). By the Jordan canonical form theorem, there exists an ordered basis $\beta$ of $V$ such that $[T]_{\beta}$ is a Jordan canonical form. Furthermore, if we set $\beta_{i}=\beta\cap E_{\lambda_{i}}$, then $[T|_{E_{\lambda_{i}}}]_{\beta_{i}}$, the matrix representation of $T|_{E_{\lambda}}$, the restriction of $T$ to $E_{\lambda_{i}}$, is a Jordan canonical form. In other words, $[T]_{\beta}=\begin{pmatrix}J_{1}&O&\cdots&O\\ O&J_{2}&\cdots&O\\ \vdots&\vdots&\ddots&\vdots\\ O&O&\cdots&J_{n}\end{pmatrix}$ where each $J_{i}=[T|_{E_{\lambda_{i}}}]_{\beta_{i}}$ is a Jordan canonical form, and $O$ is a zero matrix. 7. 7. Conversely, for each $E_{\lambda_{i}}$, there exists an ordered basis $\beta_{i}$ for $E_{\lambda_{i}}$ such that $J_{i}:=[T|_{E_{\lambda_{i}}}]_{\beta_{i}}$ is a Jordan canonical form. As a result, $\beta:=\bigcup_{i=1}^{n}\beta_{i}$ with linear order extending each $\beta_{i}$, such that $v_{i} for $v_{i}\in\beta_{i}$ and $v_{j}\in\beta_{j}$ for $i, is an ordered basis for $V$ such that $[T]_{\beta}$ is a Jordan canonical form, being the direct sum of matrices $J_{i}$. 8. 8. Each $J_{i}$ above can be further decomposed into Jordan blocks, and it turns out that the number of Jordan blocks in each $J_{i}$ is the dimension of $W_{\lambda_{i}}$, the eigenspace of $T$ corresponding to $\lambda_{i}$. More to come… ## References • 1 Friedberg, Insell, Spence. Linear Algebra. Prentice-Hall Inc., 1997. Title generalized eigenspace GeneralizedEigenspace 2013-03-22 17:23:36 2013-03-22 17:23:36 CWoo (3771) CWoo (3771) 8 CWoo (3771) Definition msc 15A18 GeneralizedEigenvector
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 77, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9969877004623413, "perplexity": 95.3019741359609}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039546945.85/warc/CC-MAIN-20210421161025-20210421191025-00233.warc.gz"}
https://web2.0calc.com/questions/congruence-and-residues
+0 # Congruence and Residues +1 189 2 +28 Remove the integers which are congruent to 3 (mod 7) from the following list of five integers, and sum the integers that remain.$$85 \qquad 49,\!479 \qquad -67 \qquad 12,\!000,\!003 \qquad -3$$ May 11, 2020 #1 +2 85 mod 7 =1 Keep 49,479 mod 7 = 3 Remove -67 mod 7 =3 Remove 12,000,003 mod 7 =1 Keep -3 mod 7 = 4 Keep [85 +12,000,003 - 3] =12,000,085 May 11, 2020 #2 +111566 +2 There is more than one answer for this. for instance: -3 mod 7 does equal -3 but it also equals 4.  Which one do you want? Maybe it is mod 7 of the result that is wanted. In which case there would be only one answer. Melody  May 11, 2020
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8312608003616333, "perplexity": 1887.310047843874}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141202590.44/warc/CC-MAIN-20201129184455-20201129214455-00047.warc.gz"}
https://export.arxiv.org/abs/1911.04530v1
hep-ph (what is this?) # Title: NLO impact factor for inclusive photon$+$dijet production in $e+A$ DIS at small $x$ Abstract: We compute the next-to-leading order (NLO) impact factor for inclusive photon $+$dijet production in electron-nucleus (e+A) deeply inelastic scattering (DIS) at small $x$. An important ingredient in our computation is the simple structure of shock wave" fermion and gluon propagators. This allows one to employ standard momentum space Feynman diagram techniques for higher order computations in the Regge limit of fixed $Q^2\gg \Lambda_{\rm QCD}^2$ and $x\rightarrow 0$. Our computations in the Color Glass Condensate (CGC) effective field theory include the resummation of all-twist power corrections $Q_s^2/Q^2$, where $Q_s$ is the saturation scale in the nucleus. We discuss the structure of ultraviolet, collinear and soft divergences in the CGC, and extract the leading logs in $x$; the structure of the corresponding rapidity divergences gives a nontrivial first principles derivation of the JIMWLK renormalization group evolution equation for multiparton lightlike Wilson line correlators. Explicit expressions are given for the $x$-independent $O(\alpha_s)$ contributions that constitute the NLO impact factor. These results, combined with extant results on NLO JIMWLK evolution, provide the ingredients to compute the inclusive photon $+$ dijet cross-section at small $x$ to $O(\alpha_s^3 \ln(x))$. First results for the NLO impact factor in inclusive dijet production are recovered in the soft photon limit. A byproduct of our computation is the LO photon+ 3 jet (quark-antiquark-gluon) cross-section. Comments: 104 pages, 35 figures Subjects: High Energy Physics - Phenomenology (hep-ph); High Energy Physics - Theory (hep-th); Nuclear Theory (nucl-th) Cite as: arXiv:1911.04530 [hep-ph] (or arXiv:1911.04530v1 [hep-ph] for this version) ## Submission history From: Kaushik Roy [view email] [v1] Mon, 11 Nov 2019 19:23:09 GMT (2004kb,D) [v2] Tue, 3 Dec 2019 00:59:03 GMT (2005kb,D) Link back to: arXiv, form interface, contact.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9209451079368591, "perplexity": 4516.646523582532}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370518767.60/warc/CC-MAIN-20200403220847-20200404010847-00495.warc.gz"}
http://mathhelpforum.com/trigonometry/74250-trig-help-verify-answers.html
# Math Help - Trig Help ( verify answers) 1. ## Trig Help ( verify answers) Convert the angle given in degrees to radian measure in terms of .380° to 4 sig.figs 1.018 Solve the equation for all nonnegative values of less than . Do by calculator, if needed, and give the answers to three significant digits in the order of increasing. (not sure what to do) A satellite is in a circular orbit 225 km above the equator of the earth. How many kilometres must it travel for its longitude to change by 86.3°? Assume the radius of the earth equals 6400 kilometres.(round to the nearest whole number) i got 9639.80 =9600 2. Originally Posted by rock candy Convert the angle given in degrees to radian measure in terms of .380° Unfortunately, I cannot see any of your attachments. "6.63 rad" is the NUMERICAL value but the problem asked you give it in terms of $\pi$: Since there are 360 degrees or $2\pi$ radians so the conversion factor is $\frac{2\pi}{360}= \frac{\pi}{180}$ radians per degree. $\frac{\pi}{180}(380)= \frac{380}{180}\pi$. Just do the fraction part. Unfortunately, I cannot see the attachments. to 4 sig.figs 1.018 Solve the equation for all nonnegative values of less than . Do by calculator, if needed, and give the answers to three significant digits in the order of increasing. (not sure what to do) A satellite is in a circular orbit 225 km above the equator of the earth. How many kilometres must it travel for its longitude to change by 86.3°? Assume the radius of the earth equals 6400 kilometres.(round to the nearest whole number) i got 9639.80 =9600
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9533917903900146, "perplexity": 886.0486226893905}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447556252.139/warc/CC-MAIN-20141224185916-00081-ip-10-231-17-201.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/212195/form-of-weakly-continuous-linear-functional
# Form of weakly continuous linear functional This was originally a problem in Stratila and Zsido's "Lectures on von Neumann algebras" (E.1.2). I've spent so much time working on it, and right now I cannot see how the result can be so simple. The problem goes like this: Let $\omega$ be a weakly continuous linear functional on $B(\mathscr{H})$. Then there exist two families of mutually orthogonal vectors $\{\xi_1,\ldots,\xi_n\},\ \{\eta_1,\ldots,\eta_n\}$ in $\mathscr{H}$ such that $$\omega(T)=\sum_{i=1}^n\langle T\xi_i,\eta_i\rangle,\quad T\in B(\mathscr{H}),$$$$\|\omega\|=\sum_{i=1}^n\|\xi_i\|\|\eta_i\|.$$ I've tried altering the proof that any weakly continuous linear functional can be written in the above form with no extra assumptions on the vectors, and gotten as far as proving that the $\xi_i$'s can be chosen to be mutually orthogonal (orthonormal, in fact), but that's about it. Does anybody have any suggestions of what to do? I thought about using some facts about compact operators, but seeing as it is not a prerequisite of understanding the section containing the problem, I'm assuming the proof can be elementary (even though it's marked as one of the harder exercises). -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9793879389762878, "perplexity": 105.42446546974247}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00499-ip-10-147-4-33.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/100294/question-on-fundamental-weights-and-representations
# Question on fundamental weights and representations I am a bit confused about the notion of "fundamental weights". In a complexified setting, I am thinking of my Lie algebra to be decomposed as, $\cal{g} = \cal{t} \oplus _\alpha \cal{g}_\alpha$ where the $\cal{g}_\alpha$ are the root-spaces. Now given a root $\alpha_j$, one defines its co-root $H_{\alpha_j} \in [\cal{g}_{\alpha _j}, \cal{g}_{-\alpha _j}]$ such that $\alpha_j (H_{\alpha _j}) = 2$ • Now one seems to define the "fundamental weights" as a set rank $G$ elements $\omega_i \in t^*$ such that, $\omega_i (H_{\alpha _j}) = \delta_{ij}$ • In the above definition is it necessary that the $\alpha_j$ have to be simple roots? (..i get this feeling when looking at examples..) I guess one can get away by defining the action of the fundamental weights on the co-roots of simple roots only because the co-roots are themselves enough to give a basis for $t^*$ just like the simple-roots. Is that right? • For the case of $SU(n)$ one chooses the simple root spaces to be the spans of the matrices $E_{ij}$ - which have a $1$ at the $(i,j)$ position and a $0$ everywhere else. If the Cartan subalgebra is spanned by matrices of the form $H_\lambda = diag(\lambda_i)$, then one has the roots $\alpha_{ij}$ defined as, $[H_\lambda,E_{ij}] = \alpha_{ij}(H_\lambda)E_{ij} = (\lambda_i - \lambda_j)E_{ij}$ Now since $\alpha_{ji} = - \alpha_{ij}$, one would search for the co-root $H_{\alpha_{ij}} \in [E_{ij},E_{ji}]$. Hence I would have naively expected that $H_{\alpha_{ij}} = E_{ii} - E_{jj}$ for all pairs of $i<j$. But why is it that in literature I see the co-roots of $SU(N)$ to be taken as, $H_{\alpha _ {i i+1}} = E_{ii} - E_{i+1,i+1}$? Is this again a question of some standard choice of basis? • From the above how does it follow that the fundamental weights $\omega_i$ of $SU(N)$ are given as $\omega_i (H_\lambda) = \sum _{k=1} ^{k=i} \lambda_k$ ? • How is all the above related to the idea that there are $N-1$ fundamental representations of $SU(N)$? And how are they demarcated? - Dear Anirbit, perhaps you have interests to illuminate this problem: cartan-matrix-for-an-exotic-type-of-lie-algebra –  miss-tery Jan 10 '14 at 20:25 Fundamental weights correspond to fundamental roots (i.e. simple roots). Each choice of simple roots leads to a different choice of fundamental weights. There aren't really any fundamental weights associated with other (non-simple) roots (or at least this terminology isn't standard to my knowledge). [Note: The rank of $\mathfrak{sl}_N$ (or equivalently $SU(N)$) is $N-1$. I will set $\ell=N-1$.] Basics: First, a set of simple roots must be chosen (any two systems of simple roots are conjugate under the action of the Weyl group). Say $\{\alpha_1,\dots,\alpha_\ell \}$ is you set of simple roots. Suppose we have also fixed a set of Chevalley generators $\{ E_i, F_i, H_i \;|\; i=1,\dots,\ell \}$ so these are elements such that $H_i \in [\mathfrak{g}_{\alpha_i},\mathfrak{g}_{-\alpha_i}]$ such that $\alpha_i(H_i)=2$ and $[E_i,F_i]=H_i$ where $E_i \in\mathfrak{g}_{\alpha_i}$ and $F_i \in\mathfrak{g}_{-\alpha_i}$. Then $\alpha_j(H_i)=a_{ji}$ = the $i,j$-entry of the Cartan matrix (or the $j,i$-entry of the Cartan matrix, depending on whose convention you are using) so in particular $\alpha_i(H_i)=a_{ii}=2$. Next, what you have for the fundamental weights is not quite correct. The fundamental weights $\{\omega_1,\dots,\omega_\ell \}$ form a basis for $t^*$ which is dual to the (basis of) simple coroots $\{H_1,\dots,H_\ell\}$. In other words, $\omega_i(H_j)=\delta_{ij}$ (the Kronecker delta: $\delta_{ii}=1$ and $\delta_{ij}=0$ for $i\not=j$). In particular, $\omega_i(H_i)=1$ (not $2$). Next, take a finite dimensional irreducible $\mathfrak{g}$-module. From the theory we know it is a highest weight module, say $V(\lambda)$ which is the direct sum of weight spaces. These weights are of the form $c_1\omega_1+\cdots+c_\ell\omega_\ell$ where $c_i \in \mathbb{Z}$ (integral linear combinations of fundamental weights). In particular, the roots of $\mathfrak{g}$ along with $0$ (the zero functional) are the weights of the adjoint representation. So roots are integral linear combinations of fundamental weights. Actually, it turns out that $\alpha_i = a_{i1}\omega_1+a_{i2}\omega_2+\cdots+a_{i\ell}\omega_{\ell}$ so the Cartan matrix (or its transpose) is the change of basis matrix from fundamental weights to simple roots. The importance of the fundamental weights is that they form a basis for the lattice of weights of finite dimensional representations of $\mathfrak{g}$. So $\{H_1,\dots,H_\ell\}$ (simple co-roots) form a basis for $t$. Both $\{\alpha_1,\dots,\alpha_\ell\}$ (simple roots) and $\{\omega_1,\dots,\omega_\ell\}$ (fundamental weights) are bases for $t^*$. The fundamental weight basis is dual to the simple co-root basis. And the Cartan matrix is a change of basis matrix from the simple roots to the fundamental weights. Next, for $\mathfrak{sl}_N$ (the root space decomposition is for the Lie algebra not the Lie group $SU(N)$). While $E_{ij}$ ($i \not= j$) are root vectors, only $E_{i,i+1}$ and $E_{i+1,i}$ are in simple root spaces. In particular, $E_i = E_{i,i+1} \in (\mathfrak{sl}_n)_{\alpha_i}$ (the $\alpha_i$ root space) and $F_i = E_{i+1,i} \in (\mathfrak{sl}_n)_{-\alpha_i}$ (the $-\alpha_i$ root space). Then $H_i = [E_i,F_i] = E_{i,i+1}E_{i+1,i} - E_{i+1,i}E_{i,i+1} = E_{i,i} - E_{i+1,i+1}$ (the simple co-roots). Your other $E_{ii}-E_{jj}$ are co-roots as well just not necessarily simple co-roots. If $H_\lambda = \mathrm{diag}(\lambda_1,\dots,\lambda_\ell)$, then $H_\lambda=\lambda_1H_1+(\lambda_1+\lambda_2)H_2+\cdots+(\lambda_1+\cdots+\lambda_\ell)H_\ell$. For example: Consider $H_\lambda = \mathrm{diag}(\lambda_1,\lambda_2,\lambda_3)$. Keep in mind that since $H_\lambda \in \mathfrak{sl}_3$ it has trace=0, so $\lambda_3=-\lambda_1-\lambda_2$. Thus $$\begin{bmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{bmatrix} = \begin{bmatrix} \lambda_1 & 0 & 0 \\ 0 & -\lambda_1 & 0 \\ 0 & 0 & 0 \end{bmatrix} + \begin{bmatrix} 0 & 0 & 0 \\ 0 & \lambda_1+\lambda_2 & 0 \\ 0 & 0 & -\lambda_1-\lambda_2 \end{bmatrix}$$ $$= \lambda_1\begin{bmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 0 \end{bmatrix}+(\lambda_1+\lambda_2)\begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{bmatrix}$$ So in general, $\omega_i(H_\lambda) = \omega_i(\lambda_1H_1+(\lambda_1+\lambda_2)H_2+\cdots+(\lambda_1+\cdots+\lambda_\ell)H_\ell) = \lambda_1+\cdots+\lambda_i$ since $\omega_i(H_i)=1$ and $\omega_i(H_j)=0$ for $i \not= j$. The $N-1$ fundamental representations of $SU(N)$ are the highest weight representations with highest weights $\omega_1,\dots,\omega_{\ell}$. These are often denoted $V(\omega_1),\dots,V(\omega_\ell)$. All other (finite dimensional) irreducible representations appear as subrepresentations of tensor products of these representations. Edit: I will try to add a brief account highest weight modules. Here goes... Let $\mathfrak{g}$ be a finite dimensional semi-simple Lie algebra. Then every finite dimensional $\mathfrak{g}$-module (i.e. representation) is completely reducible (can be written as a finite direct sum of irreducible modules). Then it can be shown that each irreducible module is a highest weight module. So in the end, if we know everything about highest weight modules, then we'll essentially know everything about all modules. What is a highest weight module? Let $\mathfrak{g}$ be a finite dimensional simple Lie algebra with Cartan subalgebra $\mathfrak{h}$ (Cartan subalgebra = maximal toral subalgebra = your "$t$"). In addition fix a set of simple roots $\{ \alpha_1,\dots,\alpha_\ell\}$ and fundamental weights $\{ \omega_1,\dots,\omega_\ell \}$. Let $V$ be a $\mathfrak{g}$-module. Then $V$ is a weight module if $V = \oplus_{\mu \in \mathfrak{h}^*} V_\mu$ (the direct sum of weight spaces) where $V_\mu = \{ v\in V \;|\; h \cdot v = \mu(h)v \}$. If $V_\mu \not= \{0\}$, then $V_\mu$ is a weight space and $\mu \in \mathfrak{h}^*$ is called a weight. [Example: If you consider $\mathfrak{g}$ itself as a $\mathfrak{g}$-module, then the weights of the adjoint action are the roots along with the zero functional.] So if $v \not=0$ is in the $\mu$ weight space and $h \in \mathfrak{h}$, then $v$ is an eigenvector for the action of $h$ with eigenvalue $\mu(h)$. Thus $V_\mu$ is the simultaneous eigenspace for the operators given by the action of each $h \in \mathfrak{h}$ with eigenvalues $\mu(h)$. It can be shown that a finite dimensional irreducible $\mathfrak{g}$-module is a weight module and there exists a unique weight $\lambda \in \mathfrak{h}^*$ such that $\lambda+\alpha_i$ is not a weight for all $i=1,\dots,\ell$. So thinking of $\alpha_i$ as pointing "up" in some sense, $\lambda$ is as high as you can go. It's the highest weight. Next, every weight in the module is of the form $\lambda-(c_1\alpha_1+\cdots+c_\ell\alpha_\ell)$ for some non-negative integers $c_i$ (all weights lie below the highest weight). Also, the structure of an irreducible module is completely determined by its highest weight. So if $V$ and $W$ are irreducible highest weight modules, then $V \cong W$ if and only if $V$ and $W$ have the same highest weight. Moreover, it turns out you can construct (a unique) irreducible highest weight module for any $\lambda \in \mathfrak{h}^*$. We usually call this module something like $V(\lambda)$. However, it turns out that although $V(\lambda)$ is an irreducible highest weight module, it is finite dimensional if and only if $\lambda=c_1\omega_1+\cdots+c_\ell\omega_\ell$ where each $c_i$ is a non-negative integer. Fix a set of non-negative integers $c_i$. Then suppose we tensor product the highest weight module $V(\omega_i)$ (a fundamental module) $c_i$-times with itself and then tensor all of these together. Then we will have a (reducible) module which contains a copy of the irreducible highest weight module $V(c_1\omega_1+\cdots+c_\ell\omega_\ell)$. Thus the fundamental modules give us a way of constructing all finite dimensional irreducible highest weight modules [although the tensor product will include copies of other irreducible modules in general so we'll have to filter out this unwanted extra stuff.] Your final question. Given a highest weight for $SU(N)$ (equivalently $\mathfrak{sl}_N$), how does one write down matrices for the action associated with the corresponding highest weight module? That is a non-trivial, quite complicated computation. Even the answer for $SU(3)$ is complicated. So I'm going to pass on that one. :) - Thanks a lot for this almost text-book kind of answer! It was awesome. I have corrected some of my typos that you pointed out. I have some further clarifications to ask about what you said - (1) Shouldn't your definition of the Cartan matrix be $\alpha_j (H_i) = a_{ji}$ to be consistent with your convention of saying $\alpha_i = a_{ij}\omega_j$ ? (2) About the simple roots of $SU(N)$, I guess you are choosing them to be the set $\{ E_{i i+1} \}_{i=1} ^{i=n-1}$.. right? –  Anirbit Jan 21 '12 at 20:59 (3) I am not very clear about the idea of this "highest weight module". If you could kindly add in a few more lines of explanation like I did not understand what you meant in that line, "..say $V(\lambda)$ which is the direct sum of weight spaces. These weights are of the form $c_1\omega_1+\cdots+c_\ell\omega_\ell$ where $c_i \in \mathbb{Z}$ (integral linear combinations of fundamental weights)..." –  Anirbit Jan 21 '12 at 21:04 (4) About the issue of "fundamental representations' of $SU(N)$ I guess I did not make my question very clear. Can you kindly elaborate on as to how does picking a the highest weight say some $\omega_i$ (for $i \in \{ 1,...,n-1\}$) specifies the representation. Like if I pick some $g \in SU(N)$ then how do I write down the matrix for $g$ knowing what the highest weight is - say some $\omega_i$. I know how to do this for $SU(3)$ bcause that can be written in the language of quantum angular momentum but otherwise I don't see anything. –  Anirbit Jan 21 '12 at 21:05 @Anirbit Yes. About (1), you are correct. I used one convention one place and another further down :( As for (2), yes and no. $E_{ii+1}$ are elements of simple root spaces. But the simple roots are linear functionals (elements of $t^*$ instead of $\mathfrak{g}$). The $E_{ii+1}$ are root vectors. Root vectors are elements of the algebra whose weights (think of eigenvalues) are roots. So in some sense $E_{i,i+1}$ are basically the eigenvectors to go with the eigenvalues $\alpha_i$. –  Bill Cook Jan 22 '12 at 20:13 @Anirbit I don't have time right now, but I'll try to edit the post later to address (3) and (4)...although you'll need a textbook for a real full answer :) –  Bill Cook Jan 22 '12 at 20:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9698145985603333, "perplexity": 178.51381942463496}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644065464.19/warc/CC-MAIN-20150827025425-00255-ip-10-171-96-226.ec2.internal.warc.gz"}
https://arxiv.org/abs/1707.02986
astro-ph.HE (what is this?) # Title: A dependence of the tidal disruption event rate on global stellar surface mass density and stellar velocity dispersion Abstract: The rate of tidal disruption events (TDEs), $R_\text{TDE}$, is predicted to depend on stellar conditions near the super-massive black hole (SMBH), which are on difficult-to-measure sub-parsec scales. We test whether $R_\text{TDE}$ depends on kpc-scale global galaxy properties, which are observable. We concentrate on stellar surface mass density, $\Sigma_{M_\star}$, and velocity dispersion, $\sigma_v$, which correlate with the stellar density and velocity dispersion of the stars around the SMBH. We consider 35 TDE candidates, with and without known X-ray emission. The hosts range from star-forming to quiescent to quiescent with strong Balmer absorption lines. The last (often with post-starburst spectra) are overrepresented in our sample by a factor of $35^{+21}_{-17}$ or $18^{+8}_{-7}$, depending on the strength of the H$\delta$ absorption line. For a subsample of hosts with homogeneous measurements, $\Sigma_{M_\star}=10^9$-$10^{10}~{\rm M_\odot / kpc^2}$, higher on average than for a volume-weighted control sample of Sloan Digital Sky Survey galaxies with similar redshifts and stellar masses. This is because: (1) most of the TDE hosts here are quiescent galaxies, which tend to have higher $\Sigma_{M_\star}$ than the star-forming galaxies that dominate the control, and (2) the star-forming hosts have higher average $\Sigma_{M_\star}$ than the star-forming control. There is also a weak suggestion that TDE hosts have lower $\sigma_v$ than for the quiescent control. Assuming that $R_{\rm TDE}\propto \Sigma_{M_\star}^\alpha \times \sigma_v^\beta$, and applying a statistical model to the TDE hosts and control sample, we estimate $\hat{\alpha}=0.9 \pm 0.2$ and $\hat{\beta}=-1.0 \pm 0.6$. This is broadly consistent with $R_\text{TDE}$ being tied to the dynamical relaxation of stars surrounding the SMBH. Comments: Accepted for publication in ApJ Subjects: High Energy Astrophysical Phenomena (astro-ph.HE); Astrophysics of Galaxies (astro-ph.GA) DOI: 10.3847/1538-4357/aaa3fd Cite as: arXiv:1707.02986 [astro-ph.HE] (or arXiv:1707.02986v3 [astro-ph.HE] for this version) ## Submission history From: Or Graur [view email] [v1] Mon, 10 Jul 2017 18:00:04 GMT (5296kb,D) [v2] Fri, 4 Aug 2017 21:03:14 GMT (6430kb,D) [v3] Thu, 21 Dec 2017 14:10:36 GMT (6999kb,D)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9281417727470398, "perplexity": 3405.980857292257}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267159744.52/warc/CC-MAIN-20180923193039-20180923213439-00502.warc.gz"}
https://www.journaltocs.ac.uk/index.php?action=browse&subAction=subjects&publisherID=47&journalID=308&pageb=1&userQueryID=&sort=&local_page=&sorType=&sorCol=
for Journals by Title or ISSN for Articles by Keywords help Subjects -> COMPUTER SCIENCE (Total: 1985 journals)     - ANIMATION AND SIMULATION (29 journals)    - ARTIFICIAL INTELLIGENCE (98 journals)    - AUTOMATION AND ROBOTICS (98 journals)    - CLOUD COMPUTING AND NETWORKS (63 journals)    - COMPUTER ARCHITECTURE (9 journals)    - COMPUTER ENGINEERING (9 journals)    - COMPUTER GAMES (16 journals)    - COMPUTER PROGRAMMING (23 journals)    - COMPUTER SCIENCE (1153 journals)    - COMPUTER SECURITY (45 journals)    - DATA BASE MANAGEMENT (13 journals)    - DATA MINING (32 journals)    - E-BUSINESS (22 journals)    - E-LEARNING (27 journals)    - ELECTRONIC DATA PROCESSING (21 journals)    - IMAGE AND VIDEO PROCESSING (40 journals)    - INFORMATION SYSTEMS (104 journals)    - INTERNET (92 journals)    - SOCIAL WEB (50 journals)    - SOFTWARE (33 journals)    - THEORY OF COMPUTING (8 journals) COMPUTER SCIENCE (1153 journals)                  1 2 3 4 5 6 | Last Advances in Computational Mathematics   [SJR: 1.255]   [H-I: 44]   [15 followers]  Follow         Hybrid journal (It can contain Open Access articles)    ISSN (Print) 1572-9044 - ISSN (Online) 1019-7168    Published by Springer-Verlag  [2355 journals] • The Galerkin boundary element method for transient Stokes flow • Authors: Young Ok Choi; Johannes Tausch Pages: 473 - 493 Abstract: Since the fundamental solution for transient Stokes flow in three dimensions is complicated it is difficult to implement discretization methods for boundary integral formulations. We derive a representation of the Stokeslet and stresslet in terms of incomplete gamma functions and investigate the nature of the singularity of the single- and double layer potentials. Further, we give analytical formulas for the time integration and develop Galerkin schemes with tensor product piecewise polynomial ansatz functions. Numerical results demonstrate optimal convergence rates. PubDate: 2017-06-01 DOI: 10.1007/s10444-016-9493-9 Issue No: Vol. 43, No. 3 (2017) • Efficient algorithms for cur and interpolative matrix decompositions • Authors: Sergey Voronin; Per-Gunnar Martinsson Pages: 495 - 516 Abstract: The manuscript describes efficient algorithms for the computation of the CUR and ID decompositions. The methods used are based on simple modifications to the classical truncated pivoted QR decomposition, which means that highly optimized library codes can be utilized for implementation. For certain applications, further acceleration can be attained by incorporating techniques based on randomized projections. Numerical experiments demonstrate advantageous performance compared to existing techniques for computing CUR factorizations. PubDate: 2017-06-01 DOI: 10.1007/s10444-016-9494-8 Issue No: Vol. 43, No. 3 (2017) • Finite element approximation of a free boundary plasma problem • Authors: Jintao Cui; Thirupathi Gudi Pages: 517 - 535 Abstract: In this article, we study a finite element approximation for a model free boundary plasma problem. Using a mixed approach (which resembles an optimal control problem with control constraints), we formulate a weak formulation and study the existence and uniqueness of a solution to the continuous model problem. Using the same setting, we formulate and analyze the discrete problem. We derive optimal order energy norm a priori error estimates proving the convergence of the method. Further, we derive a reliable and efficient a posteriori error estimator for the adaptive mesh refinement algorithm. Finally, we illustrate the theoretical results by some numerical examples. PubDate: 2017-06-01 DOI: 10.1007/s10444-016-9495-7 Issue No: Vol. 43, No. 3 (2017) • Complexity of oscillatory integrals on the real line • Authors: Erich Novak; Mario Ullrich; Henryk Woźniakowski; Shun Zhang Pages: 537 - 553 Abstract: We analyze univariate oscillatory integrals defined on the real line for functions from the standard Sobolev space $$H^{s} (\mathbb {R})$$ and from the space $$C^{s}(\mathbb {R})$$ with an arbitrary integer s ≥ 1. We find tight upper and lower bounds for the worst case error of optimal algorithms that use n function values. More specifically, we study integrals of the form 1 $$I_{k}^{\varrho} (f) = {\int}_{\mathbb{R}} f(x) \,\mathrm{e}^{-i\,kx} \varrho(x) \, \mathrm{d} x\ \ \ \text{for}\ \ f\in H^{s}(\mathbb{R})\ \ \text{or}\ \ f\in C^{s}(\mathbb{R})$$ with $$k\in {\mathbb {R}}$$ and a smooth density function ρ such as $$\rho (x) = \frac {1}{\sqrt {2 \pi }} \exp (-x^{2}/2)$$ . The optimal error bounds are $${\Theta }((n+\max (1, k ))^{-s})$$ with the factors in the Θ notation dependent only on s and ϱ. PubDate: 2017-06-01 DOI: 10.1007/s10444-016-9496-6 Issue No: Vol. 43, No. 3 (2017) • System identification in dynamical sampling • Authors: Sui Tang Pages: 555 - 580 Abstract: We consider the problem of spatiotemporal sampling in a discrete infinite dimensional spatially invariant evolutionary process x (n) = A n x to recover an unknown convolution operator A given by a filter $$a \in \ell ^{1}(\mathbb {Z})$$ and an unknown initial state x modeled as a vector in $$\ell ^{2}(\mathbb {Z})$$ . Traditionally, under appropriate hypotheses, any x can be recovered from its samples on $$\mathbb {Z}$$ and A can be recovered by the classical techniques of deconvolution. In this paper, we will exploit the spatiotemporal correlation and propose a new sampling scheme to recover A and x that allows us to sample the evolving states x,A x,⋯ ,A N−1 x on a sub-lattice of $$\mathbb {Z}$$ , and thus achieve a spatiotemporal trade off. The spatiotemporal trade off is motivated by several industrial applications (Lu and Vetterli, 2249–2252, 2009). Specifically, we show that $\{x(m\mathbb {Z}), Ax(m\mathbb {Z}), \cdots , A^{N-1}x(m\mathbb {Z}): N \geq 2m\}$ contains enough information to recover a typical “low pass filter” a and x almost surely, thus generalizing the idea of the finite dimensional case in Aldroubi and Krishtal, arXiv:1412.1538 (2014). In particular, we provide an algorithm based on a generalized Prony method for the case when both a and x are of finite impulse response and an upper bound of their support is known. We also perform a perturbation analysis based on the spectral properties of the operator A and initial state x, and verify the results by several numerical experiments. Finally, we provide several other numerical techniques to stabilize the proposed method, with some examples to demonstrate the improvement. PubDate: 2017-06-01 DOI: 10.1007/s10444-016-9497-5 Issue No: Vol. 43, No. 3 (2017) • Zooming from global to local: a multiscale RBF approach • Authors: Q. T. Le Gia; I. H. Sloan; H. Wendland Pages: 581 - 606 Abstract: Because physical phenomena on Earth’s surface occur on many different length scales, it makes sense when seeking an efficient approximation to start with a crude global approximation, and then make a sequence of corrections on finer and finer scales. It also makes sense eventually to seek fine scale features locally, rather than globally. In the present work, we start with a global multiscale radial basis function (RBF) approximation, based on a sequence of point sets with decreasing mesh norm, and a sequence of (spherical) radial basis functions with proportionally decreasing scale centered at the points. We then prove that we can “zoom in” on a region of particular interest, by carrying out further stages of multiscale refinement on a local region. The proof combines multiscale techniques for the sphere from Le Gia, Sloan and Wendland, SIAM J. Numer. Anal. 48 (2010) and Applied Comp. Harm. Anal. 32 (2012), with those for a bounded region in ℝ d from Wendland, Numer. Math. 116 (2010). The zooming in process can be continued indefinitely, since the condition numbers of matrices at the different scales remain bounded. A numerical example illustrates the process. PubDate: 2017-06-01 DOI: 10.1007/s10444-016-9498-4 Issue No: Vol. 43, No. 3 (2017) • On a new property of n -poised and G C n sets • Authors: Vahagn Bayramyan; Hakop Hakopian Pages: 607 - 626 Abstract: In this paper we consider n-poised planar node sets, as well as more special ones, called G C n sets. For the latter sets each n-fundamental polynomial is a product of n linear factors as it always holds in the univariate case. A line ℓ is called k-node line for a node set $$\mathcal X$$ if it passes through exactly k nodes. An (n + 1)-node line is called maximal line. In 1982 M. Gasca and J. I. Maeztu conjectured that every G C n set possesses necessarily a maximal line. Till now the conjecture is confirmed to be true for n ≤ 5. It is well-known that any maximal line M of $$\mathcal X$$ is used by each node in $$\mathcal X\setminus M,$$ meaning that it is a factor of the fundamental polynomial. In this paper we prove, in particular, that if the Gasca-Maeztu conjecture is true then any n-node line of G C n set $$\mathcal {X}$$ is used either by exactly $$\binom {n}{2}$$ nodes or by exactly $$\binom {n-1}{2}$$ nodes. We prove also similar statements concerning n-node or (n − 1)-node lines in more general n-poised sets. This is a new phenomenon in n-poised and G C n sets. At the end we present a conjecture concerning any k-node line. PubDate: 2017-06-01 DOI: 10.1007/s10444-016-9499-3 Issue No: Vol. 43, No. 3 (2017) • Energetic BEM-FEM coupling for the numerical solution of the damped wave equation • Authors: A. Aimi; M. Diligenti; C. Guardasoni Pages: 627 - 651 Abstract: Time-dependent problems modeled by hyperbolic partial differential equations can be reformulated in terms of boundary integral equations and solved via the boundary element method. In this context, the analysis of damping phenomena that occur in many physics and engineering problems is a novelty. Starting from a recently developed energetic space-time weak formulation for the coupling of boundary integral equations and hyperbolic partial differential equations related to wave propagation problems, we consider here an extension for the damped wave equation in layered media. A coupling algorithm is presented, which allows a flexible use of finite element method and boundary element method as local discretization techniques. Stability and convergence, proved by energy arguments, are crucial in guaranteeing accurate solutions for simulations on large time intervals. Several numerical benchmarks, whose numerical results confirm theoretical ones, are illustrated and discussed. PubDate: 2017-06-01 DOI: 10.1007/s10444-016-9500-1 Issue No: Vol. 43, No. 3 (2017) • Dynamics of two-cell systems with discrete delays Pages: 653 - 676 Abstract: We consider the system of delay differential equations (DDE) representing the models containing two cells with time-delayed connections. We investigate global, local stability and the bifurcations of the trivial solution under some generic conditions on the Taylor coefficients of the DDE. Regarding eigenvalues of the connection matrix as bifurcation parameters, we obtain codimension one bifurcations (including pitchfork, transcritical and Hopf bifurcation) and Takens-Bogdanov bifurcation as a codimension two bifurcation. For application purposes, this is important since one can now identify the possible asymptotic dynamics of the DDE near the bifurcation points by computing quantities which depend explicitly on the Taylor coefficients of the original DDE. Finally, we show that the analytical results agree with numerical simulations. PubDate: 2017-06-01 DOI: 10.1007/s10444-016-9501-0 Issue No: Vol. 43, No. 3 (2017) • High-order positivity-preserving hybrid finite-volume-finite-difference methods for chemotaxis systems • Authors: Alina Chertock; Yekaterina Epshteyn; Hengrui Hu; Alexander Kurganov Abstract: Chemotaxis refers to mechanisms by which cellular motion occurs in response to an external stimulus, usually a chemical one. Chemotaxis phenomenon plays an important role in bacteria/cell aggregation and pattern formation mechanisms, as well as in tumor growth. A common property of all chemotaxis systems is their ability to model a concentration phenomenon that mathematically results in rapid growth of solutions in small neighborhoods of concentration points/curves. The solutions may blow up or may exhibit a very singular, spiky behavior. There is consequently a need for accurate and computationally efficient numerical methods for the chemotaxis models. In this work, we develop and study novel high-order hybrid finite-volume-finite-difference schemes for the Patlak-Keller-Segel chemotaxis system and related models. We demonstrate high-accuracy, stability and computational efficiency of the proposed schemes in a number of numerical examples. PubDate: 2017-07-21 DOI: 10.1007/s10444-017-9545-9 • On the dimension of trivariate spline spaces with the highest order smoothness on 3D T-meshes • Authors: Chao Zeng; Jiansong Deng Abstract: T-meshes are a type of rectangular partitions of planar domains which allow hanging vertices. Because of the special structure of T-meshes, adaptive local refinement is possible for splines defined on this type of meshes, which provides a solution for the defect of NURBS. In this paper, we generalize the definitions to the three-dimensional (3D) case and discuss a fundamental problem – the dimension of trivariate spline spaces on 3D T-meshes. We focus on a special case where splines are C d−1 continuous for degree d. The smoothing cofactor method for trivariate splines is explored for this situation. We obtain a general dimension formula and present lower and upper bounds for the dimension. At last, we introduce a type of 3D T-meshes, where we can give an explicit dimension formula. PubDate: 2017-07-12 DOI: 10.1007/s10444-017-9551-y • Uniform and high-order discretization schemes for Sturm–Liouville problems via Fer streamers • Authors: Alberto Gil C. P. Ramos Abstract: The current paper concerns the uniform and high-order discretization of the novel approach to the computation of Sturm–Liouville problems via Fer streamers, put forth in Ramos and Iserles (Numer. Math. 131(3), 541—565 2015). In particular, the discretization schemes are shown to enjoy large step sizes uniform over the entire eigenvalue range and tight error estimates uniform for every eigenvalue. They are made explicit for global orders 4,7,10. In addition, the present paper provides total error estimates that quantify the interplay between the truncation and the discretization in the approach by Fer streamers. PubDate: 2017-07-04 DOI: 10.1007/s10444-017-9547-7 • Convergence and quasi-optimality of an adaptive finite element method for optimal control problems with integral control constraint • Authors: Haitao Leng; Yanping Chen Abstract: In this paper we study the convergence of an adaptive finite element method for optimal control problems with integral control constraint. For discretization, we use piecewise constant discretization for the control and continuous piecewise linear discretization for the state and the co-state. The contraction, between two consecutive loops, is proved. Additionally, we find the adaptive finite element method has the optimal convergence rate. In the end, we give some examples to support our theoretical analysis. PubDate: 2017-07-03 DOI: 10.1007/s10444-017-9546-8 • Computationally efficient modular nonlinear filter stabilization for high Reynolds number flows • Authors: Aziz Takhirov; Alexander Lozovskiy Abstract: The nonlinear filter based stabilization proposed in Layton et al. (J. Math. Fluid Mech. 14(2), 325–354 2012) allows to incorporate an eddy viscosity model into an existing laminar flow codes in a modular way. However, the proposed nonlinear filtering step requires the assembly of the associated matrix at each time step and solving a linear system with an indefinte matrix. We propose computationally efficient version of the filtering step that only requires the assembly once, and the solution of two symmetric, positive definite systems at each time step. We also test a new indicator function based on the entropy viscosity model of Guermond (Int. J. Numer. Meth. Fluids. 57(9), 1153–1170 2008); Guermond et al. (J. Sci. Comput. 49(1), 35–50 2011). PubDate: 2017-06-21 DOI: 10.1007/s10444-017-9544-x • Convergent expansions of the Bessel functions in terms of elementary functions • Authors: José L. López Abstract: We consider the Bessel functions J ν (z) and Y ν (z) for R ν > −1/2 and R z ≥ 0. We derive a convergent expansion of J ν (z) in terms of the derivatives of $$(\sin z)/z$$ , and a convergent expansion of Y ν (z) in terms of derivatives of $$(1-\cos z)/z$$ , derivatives of (1 − e −z )/z and Γ(2ν, z). Both expansions hold uniformly in z in any fixed horizontal strip and are accompanied by error bounds. The accuracy of the approximations is illustrated with some numerical experiments. PubDate: 2017-06-19 DOI: 10.1007/s10444-017-9543-y • A plane wave method combined with local spectral elements for nonhomogeneous Helmholtz equation and time-harmonic Maxwell equations • Authors: Qiya Hu; Long Yuan Abstract: In this paper we are concerned with plane wave discretizations of nonhomogeneous Helmholtz equation and time-harmonic Maxwell equations. To this end, we design a plane wave method combined with local spectral elements for the discretization of such nonhomogeneous equations. This method contains two steps: we first solve a series of nonhomogeneous local problems on auxiliary smooth subdomains by the spectral element method, and then apply the plane wave method to the discretization of the resulting (locally homogeneous) residue problem on the global solution domain. We derive error estimates of the approximate solutions generated by this method. The numerical results show that the resulting approximate solutions possess high accuracy. PubDate: 2017-06-09 DOI: 10.1007/s10444-017-9542-z • Bernstein-Bézier techniques for divergence of polynomial spline vector fields in ℝ n • Authors: Tatyana Sorokina Abstract: Bernstein-Bézier techniques for analyzing polynomial spline fields in n variables and their divergence are developed. Dimension and a minimal determining set for continuous piecewise divergence-free spline fields on the Alfeld split of a simplex in ℝ n are obtained using the new techniques, as well as the dimension formula for continuous piecewise divergence-free splines on the Alfeld refinement of an arbitrary simplicial partition in ℝ n . PubDate: 2017-05-30 DOI: 10.1007/s10444-017-9541-0 • Analysis of the grad-div stabilization for the time-dependent Navier–Stokes equations with inf-sup stable finite elements • Authors: Javier de Frutos; Bosco García-Archilla; Volker John; Julia Novo Abstract: This paper studies inf-sup stable finite element discretizations of the evolutionary Navier–Stokes equations with a grad-div type stabilization. The analysis covers both the case in which the solution is assumed to be smooth and consequently has to satisfy nonlocal compatibility conditions as well as the practically relevant situation in which the nonlocal compatibility conditions are not satisfied. The constants in the error bounds obtained do not depend on negative powers of the viscosity. Taking into account the loss of regularity suffered by the solution of the Navier–Stokes equations at the initial time in the absence of nonlocal compatibility conditions of the data, error bounds of order $$\mathcal O(h^{2})$$ in space are proved. The analysis is optimal for quadratic/linear inf-sup stable pairs of finite elements. Both the continuous-in-time case and the fully discrete scheme with the backward Euler method as time integrator are analyzed. PubDate: 2017-05-25 DOI: 10.1007/s10444-017-9540-1 • Hermite subdivision on manifolds via parallel transport • Authors: Caroline Moosmüller Abstract: We propose a new adaption of linear Hermite subdivision schemes to the manifold setting. Our construction is intrinsic, as it is based solely on geodesics and on the parallel transport operator of the manifold. The resulting nonlinear Hermite subdivision schemes are analyzed with respect to convergence and C 1 smoothness. Similar to previous work on manifold-valued subdivision, this analysis is carried out by proving that a so-called proximity condition is fulfilled. This condition allows to conclude convergence and smoothness properties of the manifold-valued scheme from its linear counterpart, provided that the input data are dense enough. Therefore the main part of this paper is concerned with showing that our nonlinear Hermite scheme is “close enough”, i.e., in proximity, to the linear scheme it is derived from. PubDate: 2017-05-16 DOI: 10.1007/s10444-017-9516-1 • A numerical method for solving three-dimensional elliptic interface problems with triple junction points • Authors: Liqun Wang; Songming Hou; Liwei Shi Abstract: Elliptic interface problems with multi-domains have wide applications in engineering and science. However, it is challenging for most existing methods to solve three-dimensional elliptic interface problems with multi-domains due to local geometric complexity, especially for problems with matrix coefficient and sharp-edged interface. There are some recent work in two dimensions for multi-domains and in three dimensions for two domains. However, the extension to three dimensional multi-domain elliptic interface problems is non-trivial. In this paper, we present an efficient non-traditional finite element method with non-body-fitting grids for three-dimensional elliptic interface problems with multi-domains. Numerical experiments show that this method achieves close to second order accurate in the L ∞ norm for piecewise smooth solutions. PubDate: 2017-05-12 DOI: 10.1007/s10444-017-9539-7 JournalTOCs School of Mathematical and Computer Sciences Heriot-Watt University Edinburgh, EH14 4AS, UK Email: [email protected] Tel: +00 44 (0)131 4513762 Fax: +00 44 (0)131 4513327 Home (Search) Subjects A-Z Publishers A-Z Customise APIs
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.880743145942688, "perplexity": 1241.6508136672558}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549426372.41/warc/CC-MAIN-20170726182141-20170726202141-00380.warc.gz"}
http://msemac.redwoods.edu/~darnold/math50c/matlab/arclength/index.xhtml
## Arc Length and Functions in Matlab Consider the parametric equations $\begin{eqnarray} x&=&2 \cos t\\ y&=&3 \sin t \end{eqnarray}$ on the interval [0,2 pi]. To calculate the length of this path, one employs the arc length formula. $L=\int_0^{2\pi}\sqrt{(dx/dt)^2+(dy/dt)^2}$ However, (dx//dt)^2=(-2 sin t)^2=4 sin^2 t and (dy//dt)^2=(3 cos t)^2=9 cos^2 t. Hence, $\begin{eqnarray} L&=&\int_0^{2\pi}\sqrt{4\sin^2 t+9\cos^2 t}\\ L&=&\int_0^{2\pi}\sqrt{4(1-\cos^2 t)+9\cos^2 t}\\ L&=&\int_0^{2\pi}\sqrt{4+5\cos^2 t} \end{eqnarray}$ Because this last integral has no closed-form solution, we will need to apply some numerical routine (such as Simpson's rule) to obtain a decimal approximation for the integral. We are going to employ Matlab's quad command for this purpose, but first we must digress and learn how to write functions in Matlab. ### Directory Structure If you work on a computer in PS116, open MyComputer and browse to your Documents folder. There you should create a Math50C folder (no spaces in filenames), then create a subfolder named Matlab in the Math50C folder. In the Matlab folder, create another subfolder names ArcLength (again, no spaces in filenames). Once you've complete your directory structure as described in the previous paragraph, start Matlab then change the current working directory to the ArcLength folder. The easiest way to do this is to click the three-button icon directly to the right of the navigation edit box on the Matlab toolbar, then browse to the ArcLength folder. Check that your current directory points to the ArcLength folder by reading the contents of the Navigation box or executing the command pwd at the Matlab prompt. If you are working at home, you want to start the same sort of directory structure. • On a Mac, in your Documents folder, create a Math50C folder, then a Matlab folder inside the Math50c folder, then an ArcLength folder inside the Matlab folder. • On a PC running Windows, in your My Documents folder, create a Math50C folder, then a Matlab folder insider the Math50C folder, then an ArcLength folder inside the Math50C folder. Note that the directory structure described above is only a recommendation. You are perfectly free to create your own names and structure. However, don't fall into the trap of dumping all of your work into a single folder. You will regret such a decision as the number of files begins to grow through the course of the semester. ### Function M-files Very Important: Change the current directory to point at the ArcLength folder. Check this with the pwd command at the Matlab prompt. Open Matlab's editor. There are several ways that you can open the editor. 1. You can select File->New->M-file from Matlab's menu. 2. You can click the New M-file icon on Matlab's toobar. 3. You can type edit at the Matlab prompt. Of these three options, the last is our favorite. When the editor opens, enter the following code: function y=f(t) y=sqrt(4+5*cos(t)^2); The presence of the keyword function determines that this Matlab file is special: it is a function and not a simple script file. Note the first line of the file has the form: function output-variable = function_name(input-variable) We make several observations: 1. The keyword function dicatates that this file is a function file, not a script. 2. The function name is f. 3. The input variable is t. 4. The output variable is y. Because the function name is f, the rules of Matlab require that we save this file as f.m. That is, we must take the function name, append .m, then save. If you click the Save icon on the toolbar of the editor (or select File->Save), you can save the file in the usual manner. By default, Matlab suggests the file should be saved in the current directory (ArcLength) with the name f.m, but it is your responsibility to make sure that you are saving the file as f.m in the ArcLength directory. Return to the command window. Again, make sure you are in the ArcLength directory with the command pwd. If not, change directories. Because f(t)=sqrt(4+5 cos^2 t, then f(0)=sqrt(4+5 cos^2(0))=sqrt(4+5(1)^2)=sqrt(9)=3. We will now test our function. >> f(0) ans = 3 #### Trouble-Shooting It's possible that you might arrive at a different result. There could be a number of reasons for this: • Check your code in the editor. Did you type in the function correctly? • Did you save the file as f.m in the directory ArcLength? • Is your current directory ArcLength? Check this with the pwd command. If all is still not well, there are two Matlab commands that can help. • The command which f will give the path of the function f that Matlab will execute. If this path points to an f.m that is in a directory other than ArcLength, then check that you saved the file f.m to the ArcLength folder and your current directory is ArcLength (use the pwd command.) • The command type f will type the contents of the file f.m to the Matlab command window. If it types a function that is completely different from yours, then you know that Matlab is finding a file f.m on its path in another location. Again, check that you saved the file in the ArcLen folder and your current directory points to the ArcLength Folder. #### Making Your Function Array Smart If you've made it to this point in the activity, then you know your function returns the correct response if you enter a single value. However, just to make sure, try the following code: >> t=0 t = 0 >> f(t) ans = 3 This is the same result as above. If this doesn't work, return to the Trouble-Shooting section and make the appropriate correction. Before continuing, this example must work. Now, how will our function perform on a vector of values? Create a vector of t-values: >> t=0:pi/2:2*pi t = 0 1.5708 3.1416 4.7124 6.2832 The ideal would be that our function would be applied to each entry of this vector. Let's see: >> f(t) ??? Error using ==> mpower Matrix must be square. Error in ==> f at 2 y=sqrt(4+5*cos(t)^2); Let's analyze the error message: 1. First, the input t is a vector. 2. Matlab's cosine function is "array smart" so cos(t) is a vector, created by taking the cosine of every entry of the vector t. 3. Thus, cos(t) is a vector. The error is trying to raise a vector to the second power with the command cos(t)^2. This is an illegal operation that spawned the error message shown above. We need to square every entry of the vector cos(t). To do this we use "dot notation." Make the following change to the file f.m and save. function y=f(t) y=sqrt(4+5*cos(t).^2); >> f(t) ans = 3 2 3 2 3 Aha! Our function is now "array smart." It evaluated the function at each entry of the vector t. The careful reader will use pencil and paper to evaluate the function at 0, pi//2, pi, 3pi//2, and 2pi to verify this result. It is now a simple matter to approximate the integral $L=\int_0^{2\pi} \sqrt{4+5\cos^2 t}$. Simply enter the following at the Matlab command prompt: >> quad(@f,0,2*pi) ans = 15.8654 Thus, $L=\int_0^{2\pi} \sqrt{4+5\cos^2 t}\approx 15.8654$. If you type help quad at the Matlab prompt, a description of Matlab's quad command results, the first paragraph of which is: QUAD Numerically evaluate integral, adaptive Simpson quadrature. Q = QUAD(FUN,A,B) tries to approximate the integral of scalar-valued function FUN from A to B to within an error of 1.e-6 using recursive Y=FUN(X) should accept a vector argument X and return a vector result Y, the integrand evaluated at each element of X. Students of calculus will focus on the phrase "tries to approximate the integral of scalar-valued function FUN from A to B to within an error of 1.e-6 using recursive adaptive Simpson quadrature." Surely this means that the quad command is using a sophisticated adaptation of Simpson's method that provides a approximation that is within 1 times 10^{-6} of the correct answer. Here are a few more observations regarding the help file's description of use: • FUN is a function handle. In our call, quad(@f,0,2*pi), the "at" symbol @ is used to create a function handle "on-the-fly". We'll have more to say about function handles in future activities. For now, simply prefix the @ symbol to the name of your function. • In the description Q = QUAD(FUN,A,B), A and B are the lower and upper bounds of the integral. Thus, we passed the lower and upper bounds of our integral in the command quad(@f,0,2*pi). ### Some Comments on Writing Functions We share some final thoughts on writing functions. First, you can use whatever name you want for your function. Clearly, you won't always want to use f as your function name. For one thing, you can have only one file at a time named f.m in your ArcLength folder. Similarly, you can use whatever names you wish for your input and output variables. With these thoughts in mind, create a new file in the editor and enter the following code: function stink = skunk(rattled) stink=sqrt(4+5*cos(rattled).^2); Some observations: 1. The name of the function is skunk. Therefore, the file should be saved as skunk.m in the ArcLength directory. 2. The input variable is named rattled instead of t in this function M-file. 3. The output variable is named stink instead of y in this function M-file. Note again that we made the function array smart. We can test this with >> t=0:pi/2:2*pi t = 0 1.5708 3.1416 4.7124 6.2832 and >> skunk(t) ans = 3 2 3 2 3 This is identical to our previous result. Note that the contents of the vector t in the command workspace is passed into the contents of the input variable rattled in the function workspace, so these names do not have to match. We can pass a function handle to the skunk function to the quad command in a similar manner, by prefixing the function name with the at @ symbol. >> quad(@skunk,0,2*pi) ans = 15.8654 Same result! ### Exercises Consider the following parametric representation: $\begin{eqnarray} x&=&t^2\\ y&=&t^3, \end{eqnarray}$ defined on the interval 0 le t le 1. Perform each of the following tasks. 1. Sketch the parametric graph of the parametric function. Turn on the grid with grid on. Provide axis labels and an appropriate title, then obtain a printout. 2. On the printout of your plot, devise a strategy for estimating the length of the curve. You might try drawing a few line segments then using either the distance formula or the Pythagorean theorem to obtain an estimate of their total length. 3. Set up the integral on the printout of your plot for determining the length of the arc. 4. Write a function M-file for the integrand and obtain a printout of the file. 5. Use the quad command to approximate the integral in part (3). Place this command and the rsulting approximation on the printout of your plot and compare with the estimate in part (2) 6. Turn in the printout of your plot and the printout of your function M-file.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8210342526435852, "perplexity": 1542.1942453851862}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500820886.32/warc/CC-MAIN-20140820021340-00385-ip-10-180-136-8.ec2.internal.warc.gz"}
https://www.semanticscholar.org/paper/Massive-Neutrinos%3A-Phenomenological-and-Perez-Gonzalez/ca872bd5736ab476b0e2ce1bda0ace7d69fc65e9
• Corpus ID: 119437260 # Massive Neutrinos: Phenomenological and Cosmological Consequences @article{PerezGonzalez2017MassiveNP, title={Massive Neutrinos: Phenomenological and Cosmological Consequences}, author={Yuber F. Perez-Gonzalez}, journal={arXiv: High Energy Physics - Phenomenology}, year={2017} } • Y. Perez-Gonzalez • Published 18 December 2017 • Physics • arXiv: High Energy Physics - Phenomenology In this thesis we will address three different phenomena related to neutrino physics: mass models, detection of the cosmic neutrino background and the neutrino background in Dark Matter searches, considering the different characteristics in each case. In the study of neutrino mass models, we will consider models for both Majorana and Dirac neutrinos; specifically, we will probe the neutrinophilic two-Higgs-doublet model. Regarding the detection of relic neutrinos, we will analyse the… 1 Citations ### Neutrino discovery limit of Dark Matter direct detection experiments in the presence of non-standard interactions • Physics Journal of High Energy Physics • 2018 A bstractThe detection of coherent neutrino-nucleus scattering by the COHERENT collaboration has set on quantitative grounds the existence of an irreducible neutrino background in direct detection ## References SHOWING 1-10 OF 238 REFERENCES ### Dirac neutrinos from a second Higgs doublet • Physics • 2009 We propose a minimal extension of the standard model in which neutrinos are Dirac particles and their tiny masses are explained without requiring tiny Yukawa couplings. A second Higgs doublet with a ### Impact of Beyond the Standard Model physics in the detection of the Cosmic Neutrino Background • Physics • 2017 A bstractWe discuss the effect of Beyond the Standard Model charged current interactions on the detection of the Cosmic Neutrino Background by neutrino capture on tritium in a PTOLEMY-like detector. • Physics • 2011 ### Dark matter origins of neutrino masses • Physics • 2015 We propose a simple scenario that directly connects the dark matter (DM) and neutrino mass scales. Based on an interaction between the DM particle $\chi$ and the neutrino $\nu$ of the form ### Detecting non-relativistic cosmic neutrinos by capture on tritium: phenomenology and physics potential • Physics • 2014 We study the physics potential of the detection of the Cosmic Neutrino Background via neutrino capture on tritium, taking the proposed PTOLEMY experiment as a case study. With the projected energy ### Dark matter and exotic neutrino interactions in direct detection searches • Physics • 2017 A bstractWe investigate the effect of new physics interacting with both Dark Matter (DM) and neutrinos at DM direct detection experiments. Working within a simplified model formalism, we consider ### Calculation of the local density of relic neutrinos • Physics • 2017 Nonzero neutrino masses are required by the existence of flavour oscillations, with values of the order of at least 50 meV . We consider the gravitational clustering of relic neutrinos within the ### CP violation and baryogenesis due to heavy Majorana neutrinos We analyze the scenario of baryogenesis through leptogenesis induced by the out-of-equilibrium decays of heavy Majorana neutrinos and pay special attention to CP violation. Extending a recently
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.957677960395813, "perplexity": 2681.3568992814226}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711200.6/warc/CC-MAIN-20221207153419-20221207183419-00518.warc.gz"}
https://www.allaboutcircuits.com/technical-articles/ltspice-performance-analysis-of-a-precision-current-pump/
Technical Article # LTspice Performance Analysis of a Precision Current Pump October 20, 2020 by Robert Keim ## In this article, we will use simulations to assess important aspects of the performance of an op-amp-based current source. The previous article introduced a circuit that I am referring to as the two-op-amp current source (or current pump). Here’s the schematic: ##### Diagram of a precision current pump. Image used courtesy of Analog Devices I presented an LTspice implementation of this topology, and we looked at the results of a basic simulation. However, I would like to know more about this circuit, especially since it is described as a precision current pump. What kind of precision can we really expect from this circuit? 1. How precise is the output current under ideal conditions? 2. How is the precision of the output current influenced by load variations? 3. What is the typical and worst-case precision when resistor tolerances are taken into account? ### Baseline Precision This is the circuit that we’ll use for the first simulation: The voltage applied to the differential input stage changes from –250 mV to 250 mV during a 100 ms interval. The formula that relates input voltage to output current tells us that the current flowing through the load should be VIN/100. To see how closely the generated load current matches the theoretical prediction, we will plot the difference between the simulated load current and the mathematically calculated load current. The error is extremely small, and its magnitude varies in proportion to the magnitude of the load current. When we’re talking about a voltage regulator, load regulation refers to the regulator’s ability to maintain a constant voltage despite variations in load resistance. We can apply this same concept to a current source: How well does the circuit maintain the specified output current for different values of RLOAD? For this simulation, we’ll provide a fixed input voltage of 250 mV, and we’ll use a “step” directive to vary the load from 1 Ω to 1000 Ω in 10 Ω steps. A “measure” directive allows us to plot error versus the stepped parameter (i.e., the load resistance) rather than versus time; this is accomplished by opening the error log (View -> SPICE Error Log), right-clicking, and selecting “Plot .step’ed .meas data.” For larger load resistances, the output-current error does increase significantly—from about 50 nA to 800 nA. However, 800 nA is still a very small error. How much do you think the load regulation will change if we replace the ideal op-amp with a macromodel intended to approximate the performance of a real op-amp? Let’s take a look. The percentage of variation in output error is quite similar. In the first simulation, the error increased by a factor of 15.7 over the range of load resistance. In the second simulation, where I used the macromodel for the LT1001A, it increased by a factor of 12.1. What’s interesting is that the LT1001A performed better than the LTspice “ideal single-pole operational amplifier”—the magnitude of the error was much lower over the entire range, and the error was more stable relative to load resistance. I’m not sure how to explain that. Maybe the ideal single-pole op-amp isn’t as ideal as I thought. ### The Effect of Resistor Tolerances We don’t need simulations to determine the effect of variations in the resistance of R1; the mathematical relationship between input voltage and output current gives us a clear idea of how much error will be introduced by an R1 value that deviates from the nominal value. Also, the circuit diagram taken from the app note indicates how the ratio of R4 to R2 will affect output current, since this ratio determines AV, and IOUT is directly proportional to VIN multiplied by AV. Less clear, however, is the effect of imperfect matching between resistors. The circuit diagram indicates that R2 and R3 should be matched and that R4 and R5 should be matched. We can investigate this by performing a Monte Carlo simulation in which resistor values are varied within their tolerance range. If the simulation includes a large number of Monte Carlo runs, the maximum and minimum errors reported in the simulation results can be interpreted as the worst-case error associated with resistor tolerance. For this simulation, we will leave R2 and R4 fixed at 100 kΩ; this prevents variations in AV. We will degrade the circuit’s matching by applying the Monte Carlo function to the values of R3 and R5. As indicated by the “step” SPICE directive, one simulation consists of 100 runs. The value “mc(100k,0.01)” specifies a nominal resistance of 100 kΩ with a tolerance of 1%. Here is a plot of output-current error for the 100 runs. The average error is 15.6 µA, which is 0.6% of the expected 2.5 mA output current, and under worst-case conditions, the actual output current deviates from the expected current by approximately 40 µA. I’d call that very good precision. Let’s see how the situation improves when we use 0.1% tolerance instead of 1%. Now the average error is 1.6 µA, which is only 0.06% of the expected output current, and the worst-case error has decreased into the 4 µA range. ### Conclusion We’ve carried out LTspice simulations that have provided valuable insight into the performance of the two-op-amp current pump. Resistive tolerance of 1%, with the resistors that determine input gain fixed at their theoretical value, allows for high precision. A tolerance of 0.1% applied to all resistors would provide good performance, and since 0.1% resistors are readily available and not expensive, I agree with the author of the app note when he recommends 0.1% tolerance rather than 1% tolerance. • A Analog_Tim October 26, 2020 Current sources always make for an interesting article. Many thanks for sharing this. Is the plot for the 1% resistor monte carlo run right - it looks the same as the 0.1% resistor run? Like. • RK37 October 27, 2020 Thanks for pointing that out! There were some image mix-ups when the article was being prepared for publication. Everything is fixed now. Like. • A apkemu November 01, 2020 It seems to me that output_error is measured in nV instead of nA according to the directive measure. output_error avg (V…-V…), and that is changing everything… except output_error avg (I) Like.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8826358914375305, "perplexity": 1299.4123986455604}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154798.45/warc/CC-MAIN-20210804080449-20210804110449-00220.warc.gz"}
https://en.wikipedia.org/wiki/Thompson_groups
# Thompson groups In mathematics, the Thompson groups (also called Thompson's groups, vagabond groups or chameleon groups) are three groups, commonly denoted $F \subseteq T \subseteq V$, which were introduced by Richard Thompson in some unpublished handwritten notes in 1965 as a possible counterexample to von Neumann conjecture. Of the three, F is the most widely studied, and is sometimes referred to as the Thompson group or Thompson's group. The Thompson groups, and F in particular, have a collection of unusual properties which have made them counterexamples to many general conjectures in group theory. All three Thompson groups are infinite but finitely presented. The groups T and V are (rare) examples of infinite but finitely-presented simple groups. The group F is not simple but its derived subgroup [F,F] is and the quotient of F by its derived subgroup is the free abelian group of rank 2. F is totally ordered, has exponential growth, and does not contain a subgroup isomorphic to the free group of rank 2. It is conjectured that F is not amenable and hence a further counterexample to the long-standing but recently disproved von Neumann conjecture for finitely-presented groups: it is known that F is not elementary amenable. Higman (1974) introduced an infinite family of finitely presented simple groups, including Thompson's group V as a special case. ## Presentations A finite presentation of F is given by the following expression: $\langle A,B \mid\ [AB^{-1},A^{-1}BA] = [AB^{-1},A^{-2}BA^{2}] = \mathrm{id} \rangle$ where [x,y] is the usual group theory commutator, xyx−1y−1. Although F has a finite presentation with 2 generators and 2 relations, it is most easily and intuitively described by the infinite presentation: $\langle x_0, x_1, x_2, \dots\ \mid\ x_k^{-1} x_n x_k = x_{n+1}\ \mathrm{for}\ k The two presentations are related by x0=A, xn = A1−nBAn−1 for n>0. ## Other representations The Thompson group F is generated by operations like this on binary trees. Here L and T are nodes, but A B and R can be replaced by more general trees. The group F also has realizations in terms of operations on ordered rooted binary trees, and as the group of piecewise linear homeomorphisms of the unit interval that preserve orientation and whose non-differentiable points are dyadic rationals and whose slopes are all powers of 2. The group F can also be considered as acting on the unit circle by identifying the two endpoints of the unit interval, and the group T is then the group of automorphisms of the unit circle obtained by adding the homeomorphism xx+1/2 mod 1 to F. On binary trees this corresponds to exchanging the two trees below the root. The group V is obtained from T by adding the discontinuous map that fixes the points of the half-open interval [0,1/2) and exchanges [1/2,3/4) and [3/4,1) in the obvious way. On binary trees this corresponds to exchanging the two trees below the right-hand descendant of the root (if it exists). The Thompson group F is the group of order-preserving automorphisms of the free Jónsson–Tarski algebra on one generator. ## Amenability The conjecture of Thompson that F is not amenable was further popularized by R. Geoghegan --- see also the Cannon-Floyd-Parry article cited in the references below. Its current status is open: E. Shavgulidze[1] published a paper in 2009 in which he claimed to prove that F is amenable, but an error was found, as is explained in the MR review. It is known that F is not elementary amenable.[citation needed] If F is not amenable, then it would be another counterexample to the long-standing but recently disproved von Neumann conjecture for finitely-presented groups, which suggested that a finitely-presented group is amenable if and only if it does not contain a copy of the free group of rank 2. ## Connections with topology The group F was rediscovered at least twice by topologists during the 1970s. In a paper which was only published much later but was in circulation as a preprint at that time, P. Freyd and A. Heller [2] showed that the shift map on F induces an unsplittable homotopy idempotent on the Eilenberg-MacLane space K(F,1) and that this is universal in an interesting sense. This is explained in detail in Geoghegan's book (see references below). Independently, J. Dydak and P. Minc [3] created a less well-known model of F in connection with a problem in shape theory. In 1979, R. Geoghegan made four conjectures about F: (1) F has type FP; (2) All homotopy groups of F at infinity are trivial; (3) F has no non-abelian free subgroups; (4) F is non-amenable. (1) was proved by K. S. Brown and R. Geoghegan in a strong form: there is a K(F,1) with two cells in each positive dimension.[4] (2) was also proved by Brown and Geoghegan [5] in the sense that the cohomology H*(F,ZF) was shown to be trivial; since a previous theorem of M. Mihalik [6] implies that F is simply connected at infinity, and the stated result implies that all homology at infinity vanishes, the claim about homotopy groups follows. (3) was proved by M. Brin and C. Squier.[7] The status of (4) is discussed above. It is unknown if F satisfies the Farrell–Jones conjecture. It is even unknown if the Whitehead group of F (see Whitehead torsion) or the projective class group of F (see Wall's finiteness obstruction) is trivial, though it easily shown that F satisfies the Strong Bass Conjecture. D. Farley [8] has shown that F acts as deck transformations on a locally finite CAT(0) cubical complex (necessarily of infinite dimension). A consequence is that F satisfies the Baum-Connes conjecture.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8968451023101807, "perplexity": 510.92515422536104}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398466260.18/warc/CC-MAIN-20151124205426-00290-ip-10-71-132-137.ec2.internal.warc.gz"}
https://brilliant.org/problems/logs-2s-and-3s/
# Logs, 2s, and 3s Algebra Level 2 Find all real solutions $$x$$ to $3\log_2(x) - 1 = \log_2\left(\frac32 x-1\right).$ Enter your answer as the sum of all such $$x$$. ×
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8374404907226562, "perplexity": 4547.978317631013}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578530100.28/warc/CC-MAIN-20190421000555-20190421022555-00274.warc.gz"}
https://nigerianscholars.com/past-questions/english-language/question/188449/
Home » » Choose the option that has the same vowel sound as the one represented by the le... # Choose the option that has the same vowel sound as the one represented by the le... ### Question Choose the option that has the same vowel sound as the one represented by the letters underlined. faeces A) polices B) pain C) peasant D) pear
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8835792541503906, "perplexity": 1784.7947362751308}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362879.45/warc/CC-MAIN-20211203121459-20211203151459-00573.warc.gz"}
http://mathhelpforum.com/algebra/12386-variety-equations.html
1. A Variety Of Equations Please solve and explain how these are solved. Please solve and explain how these are solved. Hi, to Q1: the brackets are not necessary. Collect like terms: 7t^(-6)*(-3)t^(-3)*c^(-3) = -21t^(-9)*c^(-3) to Q3.: 11x + 6y = 153 156 = -3y + 10x ==> 156 - 10x = -3y ==> -312 + 20x = 6y Now plug in the term for 6y into the first equation: 11x + (-312 + 20x) = 153 31x = 465 x = 15 plug in this value into the second equation to calculate y: 6y = -312 + 20*15 = -12 y = -2 EB 3. 7t^(-6)*(-3)t^(-3)*c^(-3) = -21t^(-9)*c^(-3) How did you do that? -6* -3 -3* -3 = -21 I got -15 Solve using the elimination method. . . 11x + 6y .= .153 . . 156 .= .-3y + 10x We have: .11x + 6y .= .153 . [1] . . . . . . . .10x .- 3y .= .156 . [2] Multiply [2] by 2: .20x - .6y .= .312 . . . . . . Add [1]: .11x + 6y .= .153 . . And we have: .31x = 465 . . x = 15 Substitute into [1]: .11(15) + 6y .= .153 . . y = -2 7t^(-6)*(-3)t^(-3)*c^(-3) = -21t^(-9)*c^(-3) How did you do that? -6* -3 -3* -3 = -21 I got -15 Hello, you have to calculate the product of powers. Therefore you must use all rules concerning the calculations with powers. I've attached a screen-shot of the transformation. Maybe this helps a little bit further. EB Attached Thumbnails 6. Thanks. That completes the picture.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8719974160194397, "perplexity": 4558.676998228979}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118477.15/warc/CC-MAIN-20170423031158-00118-ip-10-145-167-34.ec2.internal.warc.gz"}
https://lakschool.com/index.php/en/math/circles-and-spheres/spheres-and-lines
Math Circles and spheres Spheres and lines # Spheres and lines There are three possible relative positions for a sphere and a line in space. ! ### Remember • A passant is a straight line that has no point in common with the sphere. • A tangent has exactly one point in common. • A secant has two different points in common with the sphere. A sphere and a line can therefore have one, two or no common point. The individual coordinates are used in the equation of a sphere to calculate the intersection points. i ### Method 1. Write out coordinates of $g$ 2. Insert and solve equations in the equation of a sphere 3. Insert $r$ into the line to get intersection(s) ### Example $g: \vec{x} = \begin{pmatrix} 5 \\ 6 \\ 5 \end{pmatrix} + r \cdot \begin{pmatrix} 3 \\ 2 \\ 2 \end{pmatrix}$ $k: (x+1)^2+(y-2)^2$ $+(z-1)^2=17$ 1. #### Break $g$ into 3 equations We replace $\vec{x}$ and write out the respective coordinates as our own equation. $\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 5 \\ 6 \\ 5 \end{pmatrix} + r \cdot \begin{pmatrix} 3 \\ 2 \\ 2 \end{pmatrix}$ 1. $x=5+3r$ 2. $y=6+2r$ 3. $z=5+2r$ 2. #### Insert coordinates The equations are now used in the equation of a sphere for $x$, $y$ and $z$. $(x+1)^2+(y-2)^2$ $+(z-1)^2=17$ $(5+3r+1)^2$ $+(6+2r-2)^2$ $+(5+2r-1)^2=17$ $(6+3r)^2$ $+(4+2r)^2$ $+(4+2r)^2=17$ Use binomial theorem to resolve parentheses $36+36r+9r^2$ $+16+16r+4r^2$ $+16+16r+4r^2=17$ $17r^2+68r+68=17\quad|-17$ $17r^2+68r+51=0\quad|:17$ $r^2+4r+3=0$ $r_{1,2}=-\frac{p}2\pm\sqrt{(\frac{p}2)^2-q}$ $r_{1,2}=-2\pm\sqrt{2^2-3}$ $r_{1,2}=-2\pm1$ $r_{1}=-1$ and $r_{2}=-3$ 3. #### Insert $r$ The two calculated $r$ are inserted into the equation of a line in order to obtain the intersection points. $\vec{OS_1} = \begin{pmatrix} 5 \\ 6 \\ 5 \end{pmatrix} - 1 \cdot \begin{pmatrix} 3 \\ 2 \\ 2 \end{pmatrix}$ $=\begin{pmatrix} 2 \\ 4 \\ 3 \end{pmatrix}$ $\vec{OS_2} = \begin{pmatrix} 5 \\ 6 \\ 5 \end{pmatrix} - 3 \cdot \begin{pmatrix} 3 \\ 2 \\ 2 \end{pmatrix}$ $=\begin{pmatrix} -4 \\ 0 \\ -1 \end{pmatrix}$ It is a secant that intersects the sphere at $S_1(2|4|3)$ and $S_2(-4|0|-1)$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9004949331283569, "perplexity": 650.0786325958347}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488556482.89/warc/CC-MAIN-20210624171713-20210624201713-00323.warc.gz"}
https://qanda.ai/en/solutions/JcUZNtXkND-uadratic-Equation-Sum-of-Roots-Product-of-Roots-1-x24x30-2-6x212x-180-3-x24x-210
Symbol Problem uadratic Equation Sum of Roots Product of Roots $1$ $x^{2}+4x+3=0$ $2$ $6x^{2}+12x-18=0$ $3.$ $x^{2}+4x-21=0$ $4.$ $2x^{2}+3x-2=0$ $5.$ $8x^{2}=6x+9$ $6$ $2x^{2}-3x=0$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9871617555618286, "perplexity": 67.35148971516344}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038089289.45/warc/CC-MAIN-20210416191341-20210416221341-00253.warc.gz"}
https://zbmath.org/?q=an%3A0761.34026
× # zbMATH — the first resource for mathematics On the reducibility of linear differential equations with quasiperiodic coefficients. (English) Zbl 0761.34026 We say that a matrix $$Q(t)$$ is a quasiperiodic matrix of time with basic frequencies $$\omega_ 1,\dots,\omega_ r$$ if $$Q(t)=F(\omega_ 1 t,\dots,\omega_ r t)$$, where $$F=F(v_ 1,\dots,v_ r)$$ is $$2\pi$$ periodic in all its arguments. The author considers the system (1) $$x'=(A+\varepsilon Q(t))x$$, where $$A$$ is a constant matrix and $$Q(t)$$ is a quasiperiodic analytic matrix with $$r$$ basic frequencies. Suppose $$A$$ has different eigenvalues (including the purely imaginary case) and the set formed by the eigenvalues of $$A$$ and the basic frequencies of $$Q(t)$$ satisfies a nonresonant condition. It is proved under a nondegeneracy condition that there exists a Cantorian set $${\mathcal S}\subset(0,\varepsilon_ 0)$$ ($$\varepsilon_ 0>0$$) with positive Lebesgue measure such that for $$\varepsilon\in{\mathcal S}$$ (1) is reducible (i.e. there exists a nonsingular quasiperiodic matrix $$P(t)$$ such that $$P(t)$$, $$P^{-1}(t)$$ and $$P'(t)$$ are bounded on $$R$$ and the change of variables $$x=P(t)y$$ transforms (1) to $$y'=By$$ with a constant matrix $$B$$). ##### MSC: 34C20 Transformation and reduction of ordinary differential equations and systems, normal forms 34A30 Linear ordinary differential equations and systems 34C27 Almost and pseudo-almost periodic solutions to ordinary differential equations ##### Keywords: quasiperiodic function; reducible system; basic frequencies Full Text: ##### References: [1] Arnol’d, V.I, Small denominators and problems of stability of motion in classical and celestial mechanics, Russian math. surveys, 18, No. 6, 85-191, (1963) · Zbl 0135.42701 [2] Bogoljubov, N.N; Mitropoliski, Ju.A; Samoilenko, A.M, Methods of accelerated convergence in nonlinear mechanics, (1976), Springer-Verlag New York [3] Fink, A.M, Almost periodic differential equations, () · Zbl 0325.34039 [4] Johnson, R.A; Sell, G.R, Smoothness of spectral subbundles and reducibility of quasi-periodic linear differential systems, J. differential equations, 41, 262-288, (1981) · Zbl 0443.34037 This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8452706336975098, "perplexity": 929.5611094622268}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056120.36/warc/CC-MAIN-20210918002951-20210918032951-00347.warc.gz"}
http://mathhelpforum.com/calculus/116493-solved-integration-part-problem-two-different-answers.html
# Math Help - [SOLVED] Integration by part problem with two different answers 1. ## [SOLVED] Integration by part problem with two different answers I want to find the integration of following integral: $\int x^3.e^{x^2}\, dx$ So i apply integration by part: $u.v - \int v \,du\,\,$ $ Let \,\, u = x^3, du = 3x^2\,\, and \,\, dv = e^{x^2}, v = 2x.e^{x^2} $ So now th result is: $\int x^3.e^{x^2}\, dx = 2x^4.e^{x^2} - \int\,\,6x^3.e^{x^2}\,\, dx$ $7\!\!\int x^3.e^{x^2}\, dx = 2x^4.e^{x^2}$ $\int x^3.e^{x^2}\, dx = \frac{2x^4.e^{x^2}}{7} + C$ Am i right? Because the answer to this problem is different back of the book. Answer is $\frac{(x^2 -1).e^{x^2}}{2} + C$ The way i did it i can't find anything wrong with it. Can anyone kindly tell what is wrong with my way of solving this problem? 2. Originally Posted by x3bnm I want to find the integration of following integral: $\int x^3.e^{x^2}\, dx$ So i apply integration by part: $u.v - \int v \,du\,\,$ $ Let \,\, u = x^3, du = 3x^2\,\, and \,\, dv = e^{x^2}, v = 2x.e^{x^2} $ So now th result is: $\int x^3.e^{x^2}\, dx = 2x^4.e^{x^2} - \int\,\,6x^3.e^{x^2}\,\, dx$ $7\!\!\int x^3.e^{x^2}\, dx = 2x^4.e^{x^2}$ $\int x^3.e^{x^2}\, dx = \frac{2x^4.e^{x^2}}{7} + C$ Am i right? Because the answer to this problem is different back of the book. Answer is $\frac{(x^2 -1).e^{x^2}}{2} + C$ The way i did it i can't find anything wrong with it. Can anyone kindly tell what is wrong with my way of solving this problem? I think that we can clear some of the "clutter". Let $z=x^2$ so $dz=2x$ and this becomes $\frac{1}{2}\int z\cdot e^{z}dz$. Which is easily done. Does that help? EDIT: Ahh, did you differentiate when you should have integrated? For $v$ 3. >EDIT: Ahh, did you differentiate when you should have integrated? For Yes i did. That's the mistake i made. Thanks for finding it.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 18, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9646786451339722, "perplexity": 566.313207016504}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510268734.38/warc/CC-MAIN-20140728011748-00216-ip-10-146-231-18.ec2.internal.warc.gz"}
https://dml.cz/handle/10338.dmlcz/144789
# Article Full entry | PDF   (0.2 MB) Keywords: dimension filtration; sequentially Cohen-Macaulay filtration; cohomological dimension; bigraded module; Cohen-Macaulay module Summary: Let \$K\$ be a field and \$S=K[x_1,\ldots ,x_m, y_1,\ldots ,y_n]\$ be the standard bigraded polynomial ring over \$K\$. In this paper, we explicitly describe the structure of finitely generated bigraded ``sequentially Cohen-Macaulay'' \$S\$-modules with respect to \$Q=(y_1,\ldots ,y_n)\$. Next, we give a characterization of sequentially Cohen-Macaulay modules with respect to \$Q\$ in terms of local cohomology modules. Cohen-Macaulay modules that are sequentially Cohen-Macaulay with respect to \$Q\$ are considered. References: [1] Capani, A., Niesi, G., Robbiano, L.: CoCoA, a system for doing Computations in Commutative Algebra. (1995), http://cocoa.dima.unige.it./research/publications.html, 1995. [2] Chardin, M., Jouanolou, J.-P., Rahimi, A.: The eventual stability of depth, associated primes and cohomology of a graded module. J. Commut. Algebra 5 (2013), 63-92. DOI 10.1216/JCA-2013-5-1-63 | MR 3084122 | Zbl 1275.13014 [3] Cuong, N. T., Cuong, D. T.: On sequentially Cohen-Macaulay modules. Kodai Math. J. 30 (2007), 409-428. DOI 10.2996/kmj/1193924944 | MR 2372128 | Zbl 1139.13011 [4] Cuong, N. T., Cuong, D. T.: On the structure of sequentially generalized Cohen-Macaulay modules. J. Algebra 317 (2007), 714-742. DOI 10.1016/j.jalgebra.2007.06.026 | MR 2362938 | Zbl 1137.13010 [5] Eisenbud, D.: Commutative Algebra. With a View Toward Algebraic Geometry. Graduate Texts in Mathematics 150 Springer, Berlin (1995). MR 1322960 | Zbl 0819.13001 [6] Rahimi, A.: Sequentially Cohen-Macaulayness of bigraded modules. (to appear) in Rocky Mt. J. Math. [7] Rahimi, A.: Relative Cohen-Macaulayness of bigraded modules. J. Algebra 323 (2010), 1745-1757. DOI 10.1016/j.jalgebra.2009.11.026 | MR 2588136 | Zbl 1184.13053 [8] Schenzel, P.: On the dimension filtration and Cohen-Macaulay filtered modules. Commutative Algebra and Algebraic Geometry. Proc. of the Ferrara Meeting, Italy F. Van Oystaeyen Lecture Notes Pure Appl. Math. 206 Marcel Dekker, New York (1999), 245-264. MR 1702109 | Zbl 0942.13015 Partner of
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8682147264480591, "perplexity": 4393.527027916911}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370505366.8/warc/CC-MAIN-20200401034127-20200401064127-00267.warc.gz"}
https://kerodon.net/tag/01VT
# Kerodon $\Newextarrow{\xRightarrow}{5,5}{0x21D2}$ ### 5.3.3 Homotopy Transport for Cartesian Fibrations We now study the behavior of the transport functors of §5.3.2 with respect to composition. Proposition 5.3.3.1 (Transitivity). Let $q: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ be a cartesian fibration of simplicial sets and let $\sigma$ be a $2$-simplex of $\operatorname{\mathcal{D}}$, which we display as a diagram $\xymatrix@R =50pt@C=50pt{ & Y \ar [dr]^{g} & \\ X \ar [ur]^{f} \ar [rr]^{h} & & Z. }$ Let $f^{\ast }: \operatorname{\mathcal{C}}_{Y} \rightarrow \operatorname{\mathcal{C}}_{X}$ and $g^{\ast }: \operatorname{\mathcal{C}}_{Z} \rightarrow \operatorname{\mathcal{C}}_{Y}$ be functors which are given by contravariant transport along $f$ and $g$, respectively. Then the composite functor $f^{\ast } \circ g^{\ast }: \operatorname{\mathcal{C}}_{Z} \rightarrow \operatorname{\mathcal{C}}_{X}$ is given by contravariant transport along $h$. Proof. Without loss of generality, we may replace $q$ by the projection map $\Delta ^{2} \times _{ \operatorname{\mathcal{D}}} \operatorname{\mathcal{C}}\rightarrow \Delta ^2$, and thereby reduce to the case where $\operatorname{\mathcal{D}}= \Delta ^2$ and $\sigma$ is the unique nondegenerate $2$-simplex of $\operatorname{\mathcal{D}}$. In this case, $\operatorname{\mathcal{C}}$ is an $\infty$-category. Let $h: \Delta ^1 \times \operatorname{\mathcal{C}}_{Y} \rightarrow \operatorname{\mathcal{C}}$ and $h': \Delta ^1 \times \operatorname{\mathcal{C}}_{Z} \rightarrow \operatorname{\mathcal{C}}$ be morphisms which witness $f^{\ast }$ and $g^{\ast }$ as given by contravariant transport along $f$ and $g$, respectively. Then the composite map $\Delta ^{1} \times \operatorname{\mathcal{C}}_{Z} \xrightarrow { \operatorname{id}\times g^{\ast } } \Delta ^1 \times \operatorname{\mathcal{C}}_{Y} \xrightarrow {h} \operatorname{\mathcal{C}}$ can be identified with a morphism $\alpha$ from $f^{\ast } \circ g^{\ast }$ to $g^{\ast }$ in the $\infty$-category $\operatorname{Fun}( \operatorname{\mathcal{C}}_{Z}, \operatorname{\mathcal{C}})$. Similarly, $h'$ can be identified with a morphism $\beta$ from $g^{\ast }$ to $\operatorname{id}_{\operatorname{\mathcal{C}}_{Z}}$ in the $\infty$-category $\operatorname{Fun}( \operatorname{\mathcal{C}}_{Z}, \operatorname{\mathcal{C}})$. Note that for each object $C \in \operatorname{\mathcal{C}}_{Z}$, the induced maps $\alpha _{C}: (f^{\ast } \circ g^{\ast })(C) \rightarrow g^{\ast }(C) \quad \quad \beta _{C}: g^{\ast }(C) \rightarrow C$ are $q$-cartesian. Let $\gamma : f^{\ast } \circ g^{\ast } \rightarrow \operatorname{id}_{\operatorname{\mathcal{C}}_{Z}}$ be a composition of $\alpha$ with $\beta$. Then, for each object $C \in \operatorname{\mathcal{C}}_{Z}$, the morphism $\gamma _{C}: (f^{\ast } \circ g^{\ast }(C) \rightarrow C$ is also $q$-cartesian (Corollary 5.2.2.5). It follows that $\gamma$ can be identified with a morphism of simplicial sets $\Delta ^1 \times \operatorname{\mathcal{C}}_{Z} \rightarrow \operatorname{\mathcal{C}}$ which witnesses $f^{\ast } \circ g^{\ast }$ as given by contravariant transport along $h$.. $\square$ Warning 5.3.3.2. The conclusion of Proposition 5.3.3.1 is generally not satisfied if $q: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ is only assumed to be a locally cartesian fibration of simplicial sets. We will return to this point in § (see Proposition ). Construction 5.3.3.3 (The Homotopy Transport Representation: Cartesian Case). Let $q: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ be a cartesian fibration of simplicial sets and let $\mathrm{h} \mathit{\operatorname{Cat}_{\infty }}$ denote the homotopy category of $\infty$-categories (Construction 4.5.1.1). It follows from Proposition 5.3.3.1 and Example 5.3.2.6 that there is a unique morphism of simplicial sets $\operatorname{hTr}_{q}: \operatorname{\mathcal{D}}^{\operatorname{op}} \rightarrow \operatorname{N}_{\bullet }( \mathrm{h} \mathit{\operatorname{Cat}_{\infty } } )$ with the following properties: • For each vertex $X$ of the simplicial set $\operatorname{\mathcal{D}}$, $\operatorname{hTr}_{q}(X)$ is the $\infty$-category $\operatorname{\mathcal{C}}_{X} = \{ X\} \times _{\operatorname{\mathcal{D}}} \operatorname{\mathcal{C}}$ (regarded as an object of $\mathrm{h} \mathit{\operatorname{Cat}_{\infty }}$. • For each edge $e: X \rightarrow Y$ in the simplicial set $\operatorname{\mathcal{D}}$, $\operatorname{hTr}_{q}(e)$ is the (isomorphism class of) the contravariant transport functor $[e^{\ast }]$ of Notation 5.3.2.5, regarded as an element of $\operatorname{Hom}_{ \mathrm{h} \mathit{\operatorname{Cat}_{\infty }} }( \operatorname{\mathcal{C}}_{Y}, \operatorname{\mathcal{C}}_{X} ) = \pi _0( \operatorname{Fun}( \operatorname{\mathcal{C}}_{Y}, \operatorname{\mathcal{C}}_{X})^{\simeq } )$. Let $\mathrm{h} \mathit{\operatorname{\mathcal{D}}}$ denote the homotopy category of the simplicial set $\operatorname{\mathcal{D}}$ (Notation 1.2.5.3). Then the morphism $\operatorname{hTr}_{q}$ determines a functor of ordinary categories $\mathrm{h} \mathit{\operatorname{\mathcal{D}}}^{\operatorname{op}} \rightarrow \mathrm{h} \mathit{\operatorname{Cat}_{\infty }}$, which we will denote also by $\operatorname{hTr}_{q}$ and will refer to as the homotopy transport representation of the cartesian fibration $q$. Example 5.3.3.4. Let $q: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ be a cartesian fibration of categories (Definition 5.1.4.8), so that the induced map $\operatorname{N}_{\bullet }(q): \operatorname{N}_{\bullet }(\operatorname{\mathcal{C}}) \rightarrow \operatorname{N}_{\bullet }(\operatorname{\mathcal{D}})$ is a cartesian fibration of $\infty$-categories (Example 5.2.4.2). Then the homotopy transport representation $\operatorname{hTr}_{\operatorname{N}_{\bullet }(q)}: \operatorname{\mathcal{D}}^{\operatorname{op}} \rightarrow \mathrm{h} \mathit{\operatorname{Cat}_{\infty }}$ is given by the composition $\operatorname{\mathcal{D}}^{\operatorname{op}} \xrightarrow { \chi _{q} } \operatorname{Pith}(\mathbf{Cat}) \rightarrow \mathrm{h} \mathit{\operatorname{Cat}} \xrightarrow { \operatorname{N}_{\bullet } } \mathrm{h} \mathit{\operatorname{Cat}_{\infty }}.$ Here $\chi _{q}$ denotes the transport representation of Construction 5.1.5.10 (with respect to any cleavage of the fibration $q$), the second functor is the truncation map of Remark 2.3.2.12, and $\operatorname{N}_{\bullet }$ is the fully faithful functor of Remark 4.5.1.3. Stated more informally, the homotopy transport representation $\operatorname{hTr}_{ \operatorname{N}_{\bullet }(q)}$ of Construction 5.3.3.3 can be obtained from the transport representation $\chi _{ \operatorname{N}_{\bullet }(q)}$ of Construction 5.1.5.10 by passing from the $2$-category $\mathbf{Cat}$ to its homotopy category. Example 5.3.3.5. Let $q: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ be a functor between ordinary categories which is a fibration in sets (Definition 5.1.2.1), so that the induced map $\operatorname{N}_{\bullet }(q): \operatorname{N}_{\bullet }(\operatorname{\mathcal{C}}) \rightarrow \operatorname{N}_{\bullet }(\operatorname{\mathcal{D}})$ is a right fibration, and in particular a cartesian fibration. Then the homotopy transport representation $\operatorname{hTr}_{\operatorname{N}_{\bullet }(q)}$ of Construction 5.3.3.3 is given by the composition $\operatorname{\mathcal{D}}^{\operatorname{op}} \xrightarrow { \chi _{q} } \operatorname{Set}\hookrightarrow \mathrm{h} \mathit{\operatorname{Cat}_{\infty } },$ where $\chi _{q}$ is the transport representation of Construction 5.1.2.14 and $\operatorname{Set}\hookrightarrow \mathrm{h} \mathit{\operatorname{Cat}_{\infty }}$ is the fully faithful embedding which associates to each set $X$ the associated discrete simplicial set, regarded as an $\infty$-category. Remark 5.3.3.6. Let $q: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ be a cartesian fibration of simplicial sets, and let $\operatorname{hTr}_{q}: \mathrm{h} \mathit{\operatorname{\mathcal{D}}}^{\operatorname{op}} \rightarrow \mathrm{h} \mathit{\operatorname{Cat}_{\infty }}$ be the homotopy transport representation of Construction 5.3.3.3. It follows from Proposition 5.2.4.12 that $q$ is a right fibration if and only if the functor $\operatorname{hTr}_{q}$ factors through the full subcategory $\mathrm{h} \mathit{\operatorname{Kan}} \subseteq \mathrm{h} \mathit{\operatorname{Cat}_{\infty }}$. In particular, if $q$ is a right fibration, then Construction 5.3.3.3 determines a functor $\mathrm{h} \mathit{\operatorname{\mathcal{D}}}^{\operatorname{op}} \rightarrow \mathrm{h} \mathit{\operatorname{Kan}}$ which we will also refer to as the homotopy transport representation of $q$. For later reference, we record a dual version of Construction 5.3.3.3: Construction 5.3.3.7 (The Covariant Transport Functor). Let $q: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ be a cocartesian fibration of simplicial sets and let $\mathrm{h} \mathit{\operatorname{Cat}_{\infty }}$ denote the homotopy category of $\infty$-categories. Then there is a unique morphism of simplicial sets $\operatorname{hTr}_{q}: \operatorname{\mathcal{D}}\rightarrow \operatorname{N}_{\bullet }( \mathrm{h} \mathit{\operatorname{Cat}_{\infty } } )$ with the following properties: • For each vertex $X$ of the simplicial set $\operatorname{\mathcal{D}}$, $\operatorname{hTr}_{q}(X)$ is the $\infty$-category $\operatorname{\mathcal{C}}_{X} = \{ X\} \times _{\operatorname{\mathcal{D}}} \operatorname{\mathcal{C}}$ (regarded as an object of $\mathrm{h} \mathit{\operatorname{Cat}_{\infty }}$. • For each edge $e: X \rightarrow Y$ in the simplicial set $\operatorname{\mathcal{D}}$, $\operatorname{hTr}_{q}(e)$ is the (isomorphism class of) the covariant transport functor $[e_!]$ of Notation 5.3.2.12, regarded as an element of $\operatorname{Hom}_{ \mathrm{h} \mathit{\operatorname{Cat}_{\infty }} }( \operatorname{\mathcal{C}}_{X}, \operatorname{\mathcal{C}}_{Y} ) = \pi _0( \operatorname{Fun}( \operatorname{\mathcal{C}}_{X}, \operatorname{\mathcal{C}}_{Y})^{\simeq } )$. Let $\mathrm{h} \mathit{\operatorname{\mathcal{D}}}$ denote the homotopy category of the simplicial set $\operatorname{\mathcal{D}}$ (Notation 1.2.5.3). Then the morphism $\operatorname{hTr}_{q}$ determines a functor of ordinary categories $\mathrm{h} \mathit{\operatorname{\mathcal{D}}} \rightarrow \mathrm{h} \mathit{\operatorname{Cat}_{\infty }}$, which we will denote also by $\operatorname{hTr}_{q}$ and will refer to as the homotopy transport representation of the cocartesian fibration $q$. Warning 5.3.3.8. Let $q: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ be a morphism of simplicial sets which is both a cartesian fibration and a cocartesian fibration. Then Constructions 5.3.3.7 and 5.3.3.3 supply functors $\mathrm{h} \mathit{\operatorname{\mathcal{C}}} \rightarrow \mathrm{h} \mathit{\operatorname{Cat}_{\infty }}$ and $\mathrm{h} \mathit{\operatorname{\mathcal{C}}}^{\operatorname{op}} \rightarrow \mathrm{h} \mathit{\operatorname{Cat}_{\infty }}$ respectively, which are both referred to as the homotopy transport representation of $q$ and denoted by $\operatorname{hTr}_{q}$. We will see later that these two functors are interchangeable data: either can be recovered from the other (see Proposition ). Example 5.3.3.9. Let $q: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ be a morphism of simplicial sets. Combining Remark 5.3.3.6 with Theorem 5.3.2.14, we deduce that the following conditions are equivalent: • The morphism $q$ is a Kan fibration. • The morphism $q$ is a cartesian fibration and the homotopy transport representation $\operatorname{hTr}_{q}: \mathrm{h} \mathit{\operatorname{\mathcal{D}}}^{\operatorname{op}} \rightarrow \mathrm{h} \mathit{\operatorname{Cat}_{\infty }}$ of Construction 5.3.3.3 factors through the subcategory $\mathrm{h} \mathit{\operatorname{Kan}}^{\simeq } \subseteq \mathrm{h} \mathit{\operatorname{Cat}_{\infty }}$. • The morphism $q$ is a cocartesian fibration and the homotopy transport representation $\operatorname{hTr}'_{q}: \mathrm{h} \mathit{\operatorname{\mathcal{D}}} \rightarrow \mathrm{h} \mathit{\operatorname{Cat}_{\infty }}$ of Construction 5.3.3.7 factors through the subcategory $\mathrm{h} \mathit{\operatorname{Kan}}^{\simeq } \subseteq \mathrm{h} \mathit{\operatorname{Cat}_{\infty }}$. If these conditions are satisfied, then $\operatorname{hTr}'_{q}$ is given by the composition $\mathrm{h} \mathit{\operatorname{\mathcal{D}}} \xrightarrow { \operatorname{hTr}_{q}^{\operatorname{op}} } ( \mathrm{h} \mathit{\operatorname{Kan}}^{\simeq } )^{\operatorname{op}} \xrightarrow {\iota } \mathrm{h} \mathit{\operatorname{Kan}}^{\simeq },$ where $\iota$ is the isomorphism which carries each morphism in $\mathrm{h} \mathit{\operatorname{Kan}}^{\simeq }$ to its inverse (see Warning 5.1.2.16).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9970825910568237, "perplexity": 154.17668507832767}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107918164.98/warc/CC-MAIN-20201031121940-20201031151940-00104.warc.gz"}
https://www.physicsforums.com/threads/moment-about-arbitary-point.834216/
# Moment about arbitary point • #1 310 1 ## Homework Statement Why the moment isn't=15* 3Nm? Since we are taking moment about point O...it's 3m away from O ## The Attempt at a Solution #### Attachments • IMG_20150924_140943.jpg 45.5 KB · Views: 335 • #2 haruspex Science Advisor Homework Helper Gold Member 2020 Award 36,394 6,935 ## Homework Statement Why the moment isn't=15* 3Nm? Since we are taking moment about point O...it's 3m away from O ## The Attempt at a Solution Are you referring to the 15kNm moment that is applied? You don't multiply that by a distance. (It would give you something with units kNm2.) a force times a perpendicular distance gives a moment, but an applied moment is already a moment. Exactly where it is applied makes no difference, only its magnitude and direction matter. • #3 310 1 Are you referring to the 15kNm moment that is applied? You don't multiply that by a distance. (It would give you something with units kNm2.) a force times a perpendicular distance gives a moment, but an applied moment is already a moment. Exactly where it is applied makes no difference, only its magnitude and direction matter. ya , i knew that . But , how can be moment be applied? only force can be applied , right? • #4 haruspex Science Advisor Homework Helper Gold Member 2020 Award 36,394 6,935 ya , i knew that . But , how can be moment be applied? only force can be applied , right? Are you asking as a practical matter how it is possible to apply a moment as opposed to a force? There does not need to be a way to do that. Consider turning a nut using a spanner. One can think of it as applying a torque, or as applying two equal and opposite forces along parallel but different lines of action. If you are told a moment of some specified magnitude and direction is applied, you do not need to care about how it is applied. • #5 310 1 Are you asking as a practical matter how it is possible to apply a moment as opposed to a force? There does not need to be a way to do that. Consider turning a nut using a spanner. One can think of it as applying a torque, or as applying two equal and opposite forces along parallel but different lines of action. If you are told a moment of some specified magnitude and direction is applied, you do not need to care about how it is applied. yes, this will only occur in the exercise , but not in daily life ? • #6 haruspex Science Advisor Homework Helper Gold Member 2020 Award 36,394 6,935 yes, this will only occur in the exercise , but not in daily life ? I cannot think of a way to apply a torque to an object (in an inertial frame) other than by a combination of linear forces. • Last Post Replies 1 Views 2K • Last Post Replies 1 Views 565 • Last Post Replies 4 Views 2K • Last Post Replies 12 Views 8K • Last Post Replies 6 Views 594 • Last Post Replies 32 Views 2K • Last Post Replies 3 Views 2K • Last Post Replies 9 Views 2K • Last Post Replies 18 Views 4K • Last Post Replies 3 Views 3K
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9523054957389832, "perplexity": 1040.3634961505586}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152144.81/warc/CC-MAIN-20210726152107-20210726182107-00303.warc.gz"}