http://chempaths.chemeddl.org/services/chempaths/?q=book/Using%20Chemical%20Equations%20in%20Calculations/2141/thermochemical-equations
# Thermochemical Equations

Submitted by jwmoore on Sat, 03/26/2011 - 10:37

Energy changes which accompany chemical reactions are almost always expressed by thermochemical equations, such as

CH4(g) + 2O2(g) → CO2(g) + 2H2O(l)     (25°C, 1 atm pressure)     ΔHm = –890.4 kJ      (1)

Here the ΔHm (delta H subscript m) tells us whether heat energy is released or absorbed when the reaction occurs as written, and also enables us to find the actual quantity of energy involved. By convention, if ΔHm is positive, heat is absorbed by the reaction; i.e., it is endothermic. More commonly, ΔHm is negative, as in Eq. (1), indicating that heat energy is released rather than absorbed by the reaction, and that the reaction is exothermic. This convention as to whether ΔHm is positive or negative looks at the heat change in terms of the matter actually involved in the reaction rather than its surroundings. In the reaction in Eq. (1), the C, H, and O atoms have collectively lost energy, and it is this loss which is indicated by a negative value of ΔHm.
It is important to notice that ΔHm is the energy for the reaction as written. In the case of Eq. (1), that represents the formation of 1 mol of carbon dioxide and 2 mol of water. The quantity of heat released or absorbed by a reaction is proportional to the amount of each substance consumed or produced by the reaction. Thus Eq. (1) tells us that 890.4 kJ of heat energy is given off for every mole of CH4 which is consumed. Alternatively, it tells us that 890.4 kJ is released for every 2 mol of H2O produced. Seen in this way, ΔHm is a conversion factor enabling us to calculate the heat absorbed or released when a given amount of substance is consumed or produced. If q is the quantity of heat absorbed or released and n is the amount of substance involved, then

$\Delta H_{\text{m}}=\frac{q}{n}$      (2)

EXAMPLE 1 How much heat energy is obtained when 1 kg of ethane gas, C2H6, is burned in oxygen according to the equation:

2C2H6(g) + 7O2(g) → 4CO2(g) + 6H2O(l)     ΔHm = –3120 kJ      (3)

Solution
The mass of C2H6 is easily converted to the amount of C2H6, from which the heat energy q is easily calculated by means of Eq. (2). The value of ΔHm is –3120 kJ per 2 mol C2H6. The road map is

$m_{\text{C}_{\text{2}}\text{H}_{\text{6}}}\text{ }\xrightarrow{M}\text{ }n_{\text{C}_{\text{2}}\text{H}_{\text{6}}}\text{ }\xrightarrow{\Delta H_{m}}\text{ }q$

so that

$q=\text{1 }\times \text{ 10}^{\text{3}}\text{ g C}_{\text{2}}\text{H}_{\text{6}}\text{ }\times \text{ }\frac{\text{1 mol C}_{\text{2}}\text{H}_{\text{6}}}{\text{30}\text{.07 g C}_{\text{2}}\text{H}_{\text{6}}}\text{ }\times \text{ }\frac{-\text{3120 kJ}}{\text{2 mol C}_{\text{2}}\text{H}_{\text{6}}}$ = −51 879 kJ = −51.88 MJ

Note: By convention a negative value of q corresponds to a release of heat energy by the matter involved in the reaction.

The quantity ΔHm is referred to as an enthalpy change for the reaction. In this context the symbol Δ (delta) signifies "change in" while H is the symbol for the quantity being changed, namely the enthalpy. We will deal with the enthalpy in some detail in Chap. 15. For the moment we can think of it as a property of matter which increases when matter absorbs energy and decreases when matter releases energy. It is important to realize that the value of ΔHm given in thermochemical equations like (1) or (3) depends on the physical state of both the reactants and the products.
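The road-map calculation in Example 1 is easy to script. A minimal sketch: the molar mass 30.07 g/mol and ΔHm = –3120 kJ per 2 mol C2H6 come from the example; the function name is mine.

```python
# Heat obtained from burning a given mass of ethane, following Eq. (2): q = n * dHm.
# dHm in Eq. (3) is -3120 kJ per 2 mol of C2H6, i.e. -1560 kJ per mol.

M_C2H6 = 30.07                 # molar mass of C2H6 in g/mol
DHM_PER_MOL = -3120 / 2        # kJ per mol of C2H6 consumed

def heat_from_mass(mass_g):
    """Return q in kJ for burning mass_g grams of C2H6 (negative = released)."""
    n = mass_g / M_C2H6        # amount of C2H6 in mol
    return n * DHM_PER_MOL

print(round(heat_from_mass(1000)))  # about -51879 kJ, i.e. -51.88 MJ
```

The same two-step road map (mass → amount → heat) works for any thermochemical equation once ΔHm is expressed per mole of the substance of interest.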
Thus, if water were obtained as a gas instead of a liquid in the reaction in Eq. (1), the value of ΔHm would be different from –890.4 kJ. It is also necessary to specify both the temperature and pressure, since the value of ΔHm depends very slightly on these variables. If these are not specified [as in Eq. (3)], they usually refer to 25°C and to normal atmospheric pressure.

Two more characteristics of thermochemical equations arise from the law of conservation of energy. The first is that writing an equation in the reverse direction changes the sign of the enthalpy change. For example,

H2O(l) → H2O(g)     ΔHm = 44 kJ      (4a)

tells us that when a mole of liquid water vaporizes, 44 kJ of heat is absorbed. This corresponds to the fact that heat is absorbed from your skin when perspiration evaporates, and you cool off. Condensation of 1 mol of water vapor, on the other hand, gives off exactly the same quantity of heat.

H2O(g) → H2O(l)     ΔHm = –44 kJ      (4b)

To see why this must be true, suppose that ΔHm [Eq. (4a)] = 44 kJ while ΔHm [Eq. (4b)] = –50.0 kJ. If we took 1 mol of liquid water and allowed it to evaporate, 44 kJ would be absorbed. We could then condense the water vapor, and 50.0 kJ would be given off. We could again have 1 mol of liquid water at 25°C, but we would also have 6 kJ of heat which had been created from nowhere! This would violate the law of conservation of energy.
The only way the problem can be avoided is for ΔHm of the reverse reaction to be equal in magnitude but opposite in sign to ΔHm of the forward reaction. That is,

ΔHm(forward) = –ΔHm(reverse)
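The sign rule can be illustrated numerically with Eqs. (4a) and (4b): evaporating and then condensing the same amount of water must give zero net heat. The helper function below is mine, not from the text; only the 44 kJ/mol value comes from the equations above.

```python
# Heat for vaporization/condensation of water, per Eqs. (4a)/(4b).
DHM_VAP = 44.0        # kJ/mol for H2O(l) -> H2O(g), Eq. (4a)
DHM_COND = -DHM_VAP   # reversing the equation flips the sign, Eq. (4b)

def q(n_mol, dhm):
    """q = n * dHm, from Eq. (2)."""
    return n_mol * dhm

# Evaporate 1 mol, then condense it: no heat is created from nowhere.
cycle = q(1, DHM_VAP) + q(1, DHM_COND)
print(cycle)  # 0.0
```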
http://forum.dominionstrategy.com/index.php?action=profile;area=showposts;sa=messages;u=102
# Dominion Strategy Forum • October 23, 2019, 08:46:25 am

### Show Posts — Messages - DStu

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

1 ##### Variants and Fan Cards / Re: Fixing Dominion – returns « on: March 06, 2016, 12:44:47 pm »

I'm not sure Donald made Chapel as strong with a plan. A lot of cards in base are grossly mispriced (looking at you, Adventurer). But I guess he knew that, even if Chapel was extremely strong, it wouldn't ruin the game, as trashing on its own isn't a strategy. It's like Alms in that respect.

Quote from: Donald http://forum.dominionstrategy.com/index.php?topic=115.0 This started out "trash any number of cards" and went to the ever-so-slightly weaker "trash up to 4 cards." I tested a version with "trash up to 3 cards." It was horrible. Just, way slower than the current version, like you wouldn't believe.

2 ##### General Discussion / Re: Maths thread. « on: February 26, 2016, 01:23:22 am »

Why is it all spiky? It should change direction just once right? Just not enough trials? the time it takes until some plant is drawn (which is mostly geometric with mean 16^3/3, slightly off from this because the 3 fields are not independent). Due to linearity of expectation, it is exactly this for any given plant plot, right?

It's not exactly geometric, as the three chosen fields are not independent. You can't choose one field twice. You can see this in the extreme case with only three fields, where you would always choose every field, so the time to choose field 1 is constant 1, and not geometric with mean 1. But with 4k fields and just 100 targets the deviation should be quite small, and I also think I know how to make it exact.

Why does the distribution matter rather than just the mean? Wait, duh. Nevermind.
I think you are right, they might really be exactly geometric; I had a wrong assumption before. I'm not sure if just the expectation matters: as the distribution controls the spawning of new plants, and these again spawn plants, you have some exponential behaviour here, which is nonlinear. Not sure if this is relevant in this context.

Edit: I think basically the question is, is https://en.wikipedia.org/wiki/Wald%27s_equation applicable here. And I don't think it's obvious to see that condition 2) is satisfied.

3 ##### General Discussion / Re: Maths thread. « on: February 25, 2016, 04:47:58 pm »

Why is it all spiky? It should change direction just once right? Just not enough trials? the time it takes until some plant is drawn (which is mostly geometric with mean 16^3/3, slightly off from this because the 3 fields are not independent). Due to linearity of expectation, it is exactly this for any given plant plot, right?

It's not exactly geometric, as the three chosen fields are not independent. You can't choose one field twice. You can see this in the extreme case with only three fields, where you would always choose every field, so the time to choose field 1 is constant 1, and not geometric with mean 1. But with 4k fields and just 100 targets the deviation should be quite small, and I also think I know how to make it exact.

4 ##### General Discussion / Re: Maths thread. « on: February 25, 2016, 12:53:25 pm »

Why is it all spiky? It should change direction just once right? Just not enough trials?

Yeah, 300 is still a bit low, even if the coupling is already quite strong. I think I can greatly increase the number of trials I can simulate by not drawing the 3 fields every tick (which will miss most of the time), but just jumping over all these by simulating the time it takes until some plant is drawn (which is mostly geometric with mean 16^3/3, slightly off from this because the 3 fields are not independent).
Just don't have time at the moment, but I think one sees what one should do anyway, somewhere around 100 stop planting...

5 ##### General Discussion / Re: Maths thread. « on: February 25, 2016, 04:57:08 am »

However I see no point in ever going past 138. Obviously. However, seems like the optimum is somewhere around 100. (100 samples each, coupled until the first one starts harvesting to reduce variance and improve speed)

Code: [Select]
```python
import numpy as np
import numpy.random as rnd
import pandas as pd
import matplotlib.pyplot as plt

# N (the number of fields, 16^3 = 4096 in this discussion) is set earlier in the thread.

def iterate(field, planted, harvested, stop):
    def grow(planted, i):
        if planted > i:
            field[i] = field[i] + 1
            if field[i] == 16:
                field[i] = 0
                return 1
        return 0

    i = rnd.randint(N)
    j = -1
    k = -1
    while (j < 0) or (i == j):
        j = rnd.randint(N)
    while (k < 0) or (i == k) or (j == k):
        k = rnd.randint(N)

    new_plants = 0
    new_plants += grow(planted, i)
    new_plants += grow(planted, j)
    new_plants += grow(planted, k)
    harvested += max(0, planted + new_plants - stop)
    planted = min(planted + new_plants, stop)

    return field, planted, harvested

rep = 100
res = 2
goal = 138
start = 90
end = 150
result = pd.Series(index=range(start, end, res))
samples = pd.DataFrame(columns=range(rep), index=result.index)
stops = np.asarray(result.index)
count = np.zeros(len(result.index))
for i in range(rep):
    burn_in = 0
    planted = 1
    harvested = 0
    field = np.zeros([N], dtype=int)
    avg = np.zeros(len(result.index))
    stop = result.index.min()
    while planted < stop:
        burn_in += 1
        field, planted, harvested = iterate(field, planted, harvested, stop)
    # until here the same for everyone
    save_field = field.copy()
    save_planted = planted
    for stop in range(len(stops)):
        s = stops[stop]
        count[stop] += burn_in
        while harvested < goal:
            field, planted, harvested = iterate(field, planted, harvested, s)
            count[stop] += 1
        field = save_field.copy()
        planted = save_planted
        harvested = 0
avg = pd.Series(count / rep, index=result.index)
avg
```

:edit Went a bit further with the coupling, and did 300 samples

Code: [Select]
```python
...
rep = 300
res = 2
goal = 138
start = 60
end = 140
result = pd.Series(index=range(start, end, res))
samples = pd.DataFrame(columns=range(rep), index=result.index)
stops = np.asarray(result.index)
count = np.zeros(len(result.index))
for i in range(rep):
    print(i)
    burn_in = 0
    planted = 1
    harvested = 0
    field = np.zeros([N], dtype=int)
    avg = np.zeros(len(result.index))
    for stop in range(len(stops)):
        s = stops[stop]
        while planted < stop:
            burn_in += 1
            field, planted, harvested = iterate(field, planted, harvested, s)
        # until here the same for everyone following
        np.copyto(save_field, field)
        save_planted = planted
        save_harvested = harvested
        count[stop] += burn_in
        while harvested < goal:
            field, planted, harvested = iterate(field, planted, harvested, s)
            count[stop] += 1
        np.copyto(field, save_field)
        planted = save_planted + save_harvested
        harvested = 0
avg = pd.Series(count / rep, index=result.index)
avg
```

6 ##### General Discussion / Re: Maths thread. « on: February 24, 2016, 01:05:43 pm »

Do you know the age of the plant? That would change things a bit... Basically you would want to stop earlier, because your expected time to grow 138 cones given the ages is smaller than without knowing the ages. Not unless you're cheating. Of course, you could estimate them, since you know when you planted it.

Of course this is very theoretical, because in real Minecraft life you just plant all of them since you can retrieve them after you've planted them. I think it's even true if you don't know the age. Liopoli assumes that it takes on average 16^4/3 ticks to spawn a cone for every plant.
That's only true if the time to spawn is geometrically distributed; here it's the sum of 16 geometric distributions. Basically, for a random plant far enough in the future, you would guess that on average it has age 8, so it only needs 8*16^3/3 ticks to spawn a new cone. It now gets a bit more complicated, because at the time you want to start harvesting, probably most of the plants are quite young (because you just planted them, but anyway even if they have age 2 they produce a new cone a bit faster compared to the geometric situation), and only the older ones can be assumed to have a random age. But basically I would guess because of this you want to start harvesting a bit earlier, as the first new cones spawn a bit faster, so you have less time for a new plant to earn back its investment (which now really takes 16^4/3 ticks on average).

Edit: also, if my simulations are correct, it doesn't really matter. Stop somewhere between 100 and 150, the variance is much larger than the difference in expectation values...

7 ##### General Discussion / Re: Maths thread. « on: February 24, 2016, 12:30:10 pm »

Do you know the age of the plant? That would change things a bit... Basically you would want to stop earlier, because your expected time to grow 138 cones given the ages is smaller than without knowing the ages.*

* This might be wrong, but in this case liopoil's approach is also wrong. There is a small (I think) inaccuracy in it, but if * is wrong it is too large to ignore.

8 ##### General Discussion / Re: Maths thread. « on: February 24, 2016, 05:57:37 am »

They don't grow when that 16^3/3 chance happens, they only age by one. When the age reaches 16, they grow. One thing I forgot to mention that's important, is that then the age goes back to zero. So, each one doesn't have a chance to grow on each tick, each is a part of the way through. What do you mean by "wait until the last one finished growing"? I feel like something's not right there.
They age with 16^3/3, and they need 16 ages to grow, that makes it 16^4/3, or?

9 ##### Dominion General Discussion / Re: Interview with Donald X. « on: February 10, 2016, 05:06:13 am »

Does this mean you stopped testing online halfway through Empires, or that you moved to 3/4 player testing online on Empires? If so, that's a very interesting decision. Doug found better things to do with his time midway through Empires. So, he stopped updating isotropic, so we couldn't test new cards on it anymore.

Does that mean isotropic is going down?

10 ##### General Discussion / Re: Maths thread. « on: February 07, 2016, 04:44:49 am »

I think your variance argument is basically Jensen in the special case k=2, so not really more gimmicky, more like a bit more on point... Edit: I think my inequalities above are both in the wrong direction, probably you can get them right in the case c>1 (and using k+2 instead of k+1), but not in c<1.

11 ##### General Discussion / Re: Maths thread. « on: February 07, 2016, 04:20:08 am »

Without absolute values and with real moments, you get any 0<c<1 just by Bernoulli(c)... Edit: Hmm, no negative values in here, somehow my reasoning for 0<c<1 above does not work. Not completely surprising though, didn't completely think it through, was more an analogy for c>1, might have some wrong sign or inequality in there....

12 ##### General Discussion / Re: Maths thread. « on: February 07, 2016, 04:18:35 am »

Yeah, I think my proof was for the version with absolute values. So E[|X|^k]. Otherwise, x^\alpha is not really concave for \alpha<1. c>1 should still work on first glance.

13 ##### General Discussion / Re: Maths thread. « on: February 07, 2016, 03:45:47 am »

More of a probability question, but I wondered about this randomly, and think I actually solved it. For what values of c does there exist a non-constant random variable X such that for all positive integers k, E[X^k] = c?
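The 16^4/3 figure from the growth discussion above (16 age steps, each waiting a roughly geometric time with mean 16^3/3 ticks) can be sanity-checked by simulation; a sketch under the simplifying assumption that the 16 waiting times are independent geometrics with success probability 3/16^3:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 3 / 16**3                  # approximate per-tick chance a given plot is hit
trials = 20000

# Time to grow = sum of 16 geometric waiting times, one per age step.
ticks = rng.geometric(p, size=(trials, 16)).sum(axis=1)

expected = 16**4 / 3           # = 16 * 16^3 / 3, about 21845.3 ticks
print(ticks.mean(), expected)
```

As the thread notes, the true per-tick draws are not quite independent (three distinct fields are drawn each tick), but with 4096 fields the deviation from this approximation is small.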
I would guess just 1 and 0. With Jensen's inequality (as x -> x^\frac{k+1}k is convex), you have E[X^{k+1}] >= E[X^k]^{\frac{k+1}k}, that excludes c<1. For 0<c<1, you take E[X^{k-1}] <= E[X^k]^\frac{k-1}k. 0 and 1 are constant under this mapping, so no problem there. Negative c can't be because of even k's.

Edit: Of course 0 also doesn't work because of non-constant X, so at least the second moment is larger than 0. Edit2: and of course 1 obviously works with 2*Bernoulli(1/2). Edit3: Ok, moments, not centralized moments. Then take a slightly different distribution, but still a Bernoulli variant, just center it yourself.

14 ##### General Discussion / Re: Maths thread. « on: February 06, 2016, 04:47:12 pm »

The problem is that the question is ambiguous, since variance can mean two different things, either s_{N-1}^2 or s_N^2. See Wolfram Mathworld. WW assumed that the variance is s_N^2, and in that case he is right: with n=6 you can get s_N^2 arbitrarily close to 6, but not exactly 6. His solution for n=7 is correct under this interpretation. Tables and DStu assume the variance is s_{N-1}^2, and in this case n=6 is possible and optimal.

I used VARIANCE in LO, it seems to be s_N, and I would argue that this is what should be used. s_{N-1} you take if you have a random sample from a distribution, to get an unbiased estimator. This is not what we want in this case, we talk about the variance of a list. This is as if the list is the complete distribution; to get the variance of this one you must take s_N. Which confuses me now a bit, as I nevertheless "proved" that N=6 is possible. Might have some mistake in there... Edit: Ah, fuck it, I'm too drunk for this shit. Was s_{N-1} all along in my formula, at least this clears that. My above post is wrong because of the wrong definition of variance, WW is right, use s_N!

15 ##### General Discussion / Re: Maths thread. « on: February 06, 2016, 01:10:46 pm »

What do you mean by the smallest list?
By definition the list must have n elements, so they will all be the same size. Unless I am looking for the smallest n that you can do this for? Yeah, I mean the smallest n. Sorry, it was late at night when I worded it. WW: Your solution to part 1 is correct, part 2 is not. Hint: Don't assume the list must be "symmetric" around the mean.

I can prove that 6 is possible:

2.75, 3.8, 6, 6, 8.7, 8.75 have mean value 6 and variance 6.051
2.75, 4, 6, 6, 8.5, 8.75 have mean value 6 and variance 5.675

so we have the numbers 2.75, 6, 6, 8.75 and 3.8+h, 8.7-h. For h in [0, 0.2], the range doesn't change, nor does the mode or median. Mean doesn't change for any h. The variance is continuous in h, so by the intermediate value theorem there exists an h \in (0, 0.2) such that the variance is 6. (The h is probably quite easy to find, as the variance should be monotonic in h, too. But what kind of mathematician would I be if I wouldn't be satisfied with proving the existence, and would continue to find out what the value is? Especially as it is already stated that the solution is not unique anyway.)

16 ##### General Discussion / Re: Maths thread. « on: February 01, 2016, 01:25:56 pm »

I see. Although you don't have to "upload": in case of lio's calculation, all I had to do was to copy the code into a wiki page, add [itex] tags, and it compiled flawlessly (then you can copy the image codes), which is why I wondered if maybe you use a tool that generates the code. It seems a bit strange to me, because I don't find TeX sourcecode pretty to look at, and isn't it easier to say SR(4) than \sqrt{4}?

I would not assume SR(4) means square root without some context. It looks like the name of some group or something. But \sqrt{} is pretty unambiguous. http://www.thesrgroup.com/

17 ##### General Discussion / Re: Maths thread. « on: February 01, 2016, 10:58:27 am »

I see.
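The h whose existence the intermediate-value argument above guarantees can also be computed numerically. A sketch (the list and the quoted variances 6.051 and 5.675 are from the post, which uses the s_{N-1} convention; the bisection is mine):

```python
import numpy as np

def var_h(h):
    # List from the post; range, mode, median, and mean are unchanged for h in [0, 0.2].
    xs = np.array([2.75, 3.8 + h, 6, 6, 8.7 - h, 8.75])
    return xs.var(ddof=1)  # s_{N-1}^2 convention, matching 6.051 and 5.675

# var_h(0) > 6 > var_h(0.2), and var_h is continuous and decreasing, so bisect.
lo, hi = 0.0, 0.2
for _ in range(60):
    mid = (lo + hi) / 2
    if var_h(mid) > 6:
        lo = mid
    else:
        hi = mid
h = (lo + hi) / 2
print(h)  # roughly 0.026
```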
Although you don't have to "upload": in case of lio's calculation, all I had to do was to copy the code into a wiki page, add [itex] tags, and it compiled flawlessly (then you can copy the image codes), which is why I wondered if maybe you use a tool that generates the code. It seems a bit strange to me, because I don't find TeX sourcecode pretty to look at, and isn't it easier to say SR(4) than \sqrt{4}?

Probably depends on how often you have written \sqrt{4}.

18 ##### General Discussion / Re: Math and lotto stuf « on: January 13, 2016, 03:58:21 am »

Or as my kid likes to count: 1 million trillion infinity, which is obviously > 1 infinity. You might want to take them on a trip to the Hilbert Hotel.

19 ##### General Discussion / Re: Math and lotto stuf « on: January 11, 2016, 02:10:29 pm »

Yeah, but does anyone use that definition anymore? I've only seen it used to explain the origin of the word. There are countries that do, so it might be a translation error.

20 ##### General Discussion / Re: Math and lotto stuf « on: January 11, 2016, 02:04:03 pm »

Lottery $1.3 Billion, Population 300 Million = $4.33 Million per person! ... I'm thinking they messed up some mundane detail.

https://en.wikipedia.org/wiki/Long_and_short_scales probably.

21 ##### General Discussion / Re: Math and lotto stuf « on: January 11, 2016, 11:13:20 am »

It's not hard to get positive expectation; the tricky part is how long it takes to pay off. The lottery itself won't be around long enough; the country hosting it won't be around long enough. More importantly, you also need infinite capital to be able to stick to your strategy, which makes winning a finite amount a bit of an edge case... Well, presumably you have a steady income throughout the infinite time you'll need. I can't see this being an issue.

True. So let's do it!
22 ##### General Discussion / Re: Math and lotto stuf « on: January 11, 2016, 11:04:25 am »

It's not hard to get positive expectation; the tricky part is how long it takes to pay off. The lottery itself won't be around long enough; the country hosting it won't be around long enough. More importantly, you also need infinite capital to be able to stick to your strategy, which makes winning a finite amount a bit of an edge case...

23 ##### Goko Dominion Online / Re: Four Weeks Without Dominion ... « on: November 11, 2015, 05:15:00 am »

Interesting that this also applies to the Google Play Store and Amazon Store, so werothegreat's troll post above is incorrect when he claims there's no restriction on Android. That might apply to the Play Store, but that doesn't apply to Android automatically, as you can easily get apps from other stores onto your Android device. Afaik, this is not true for iOS.

24 ##### Goko Dominion Online / Re: Be careful when installing Dominion Online « on: November 08, 2015, 04:09:35 pm »

hopefully no one accidentally points the Dominion Online installer at C:\Windows\System32 or C:\

25 ##### Goko Dominion Online / Re: Be careful when installing Dominion Online « on: November 08, 2015, 04:07:00 pm »

rm -rf $STEAMROOT/*
https://pub.uni-bielefeld.de/record/1796101
### THE RATE OF CONVERGENCE OF SPECTRA OF SAMPLE COVARIANCE MATRICES

Götze F, Tikhomirov A (2010) THEORY OF PROBABILITY AND ITS APPLICATIONS 54(1): 129-U7.

Journal article | Published | English

No files have been uploaded; publication record only.

Authors: Götze, Friedrich (UniBi); Tikhomirov, Alexander

Abstract / Remark: It is shown that the Kolmogorov distance between the spectral distribution function of a random covariance matrix 1/p XX^T, where X is an n x p matrix with independent entries, and the distribution function of the Marchenko-Pastur law is of order O(n^(-1/2)). The bounds hold uniformly for any p, including p/n equal or close to 1.

Keywords: sample covariance matrix; spectral distribution function; Marchenko-Pastur distribution

Year: 2010 | Journal: THEORY OF PROBABILITY AND ITS APPLICATIONS | Volume: 54 | Issue: 1 | Page(s): 129-U7 | ISSN: 0040-585X

Page URI: https://pub.uni-bielefeld.de/record/1796101

### Cite

Götze, F., & Tikhomirov, A. (2010). The rate of convergence of spectra of sample covariance matrices. Theory of Probability and Its Applications, 54(1), 129-U7. https://doi.org/10.1137/S0040585X97983985
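The O(n^(-1/2)) rate in the abstract can be illustrated (not verified) by a quick simulation. This sketch is my own construction, not the paper's: it samples (1/p) XX^T at p = n with standard normal entries and computes the empirical Kolmogorov distance to the Marchenko-Pastur law for aspect ratio 1, whose density is sqrt((4-x)/x)/(2*pi) on (0, 4].

```python
import numpy as np

rng = np.random.default_rng(1)
n = p = 400

# Spectral distribution of (1/p) X X^T for X with iid standard normal entries.
X = rng.standard_normal((n, p))
eig = np.sort(np.linalg.eigvalsh(X @ X.T / p))

# Marchenko-Pastur CDF at aspect ratio p/n = 1, by trapezoidal integration.
grid = np.linspace(1e-4, 4.0, 4000)
dens = np.sqrt((4.0 - grid) / grid) / (2.0 * np.pi)
cdf = np.concatenate([[0.0], np.cumsum(0.5 * (dens[1:] + dens[:-1]) * np.diff(grid))])
cdf /= cdf[-1]  # renormalize away the small truncation error near 0

# Kolmogorov distance between empirical and limiting distribution functions.
emp = np.searchsorted(eig, grid, side="right") / n
ks = np.abs(emp - cdf).max()
print(ks)  # expect something of the order n**-0.5 = 0.05
```

One seeded run at a single n only illustrates the scale of the distance; the theorem's content is the uniform n^(-1/2) bound over p, which a plot of ks against n would trace out.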
https://mathoverflow.net/questions/190802/zeros-of-the-derivative-of-riemanns-xi-function
# Zeros of the derivative of Riemann's $\xi$-function The Riemann xi function $\xi(s)$ is defined as $$\xi(s)=\frac12 s(s-1)\pi^{-s/2}\Gamma(s/2)\zeta(s).$$ It is an entire function whose zeros are precisely those of $\zeta(s)$. Since $\xi$ is real valued on the critical line $s=1/2+it$, there is a zero of the derivative $\xi^\prime$ between each successive pair of zeros of $\xi$, and thus the theorem of Levinson shows that at least $1/3$ (since improved) of the zeros of $\xi^\prime$ lie on the critical line. In Zeros of the derivative of Riemann's $\xi$-function BAMS v. 80 (5) 1974 pp. 951-954, Levinson adapted his method to show directly that more than $7/10$ of the zeros of $\xi^\prime(s)$ occur on the critical line. In the proof he writes, (with $H(s)=\frac12 s(s-1)\pi^{-s/2}\Gamma(s/2)$, $F(s)$ defined by $H(s)=\exp(F(s))$, and $G(s)$ complicated in terms of $\zeta(s)$ and $\zeta^\prime(s)$) "… then (3) becomes $$\xi^\prime(s)=F^\prime(s)H(s)G(s)-F^\prime(1-s)H(1-s)G(1-s)$$ … by Stirling's formula $\arg H(1/2+it)$ changes rapidly and by itself would supply the full quota of zeros of $\xi^\prime(s)$ on $\sigma=1/2$." This is as close as he comes in the paper to suggesting that all the zeros of $\xi^\prime$ are on the critical line. Does this conjecture explicitly appear anywhere in the literature? Is it folklore? • I think you intended to write "7/10 of the zeros of the derivative"...? – paul garrett Dec 15 '14 at 23:05 • A proof that, on RH, $\xi'(s)=0 \implies \mathrm{Re}(s)=1/2$ is outlined in exercise 1 on page 443 of Montgomery & Vaughan's "Multiplicative Number Theory." – Micah Milinovich Dec 16 '14 at 20:03 In exercise 1 on page 443 of their book "Multiplicative Number Theory," Montgomery & Vaughan outline a proof of the statement: "Assuming the Riemann Hypothesis, $\xi'(s)=0 \implies \mathrm{Re}(s)=1/2$." Assuming RH, let $s=\sigma+it$ and let $\rho=\frac{1}{2}+i\gamma$ denote a zero of $\xi(s)$. 
The main idea of their argument is that, on RH, it follows from the Hadamard product for $\xi(s)$ that $$\mathrm{Re} \frac{\xi'}{\xi}(s) = \sum_{\rho} \mathrm{Re}\frac{1}{s-\rho} = \sum_\gamma \frac{\sigma-1/2}{(\sigma-1/2)^2+(t-\gamma)^2}.$$ Now if $\xi'(s)=0$, then the left-hand side of the above expression is zero. On the other hand, the only way that the sum over $\gamma$ vanishes is if $\sigma=1/2$, i.e. $\mathrm{Re}(s)=1/2$. • I'd be very surprised if this argument was not known to Levinson, as it is similar to Levinson & Montgomery's proof of Speiser's Theorem: link.springer.com/article/10.1007%2FBF02392141 – Micah Milinovich Dec 17 '14 at 0:13 • Note This is the en.wikipedia.org/wiki/Gauss%E2%80%93Lucas_theorem adapted to an entire function with appropriate order and all its roots on some strip or some line. – reuns Nov 21 '16 at 1:09 • And hence: $\xi(s)$ has no zeros on $Re(s) > 1/2+\epsilon$ (and $Re(s) < 1/2-\sigma_0$) $\implies$ that $\xi'(s)$ has no zeros on $Re(s) > 1/2+\epsilon$ and $Re(s) < 1/2-\epsilon$ – reuns Nov 21 '16 at 1:10 The Riemann hypothesis implies that the zeros of derivatives of all orders of $\xi$ lie on the critical line. B. Conrey, Zeros of derivatives of Riemann's xi-function on the critical line, J. Number Theory 16 (1983), 49-74. • Conrey says "It can be shown that..." but does not prove it or cite a reference. Is it folklore? – Stopple Dec 16 '14 at 5:00 The Riemann hypothesis implies that the function $\Xi(z)=\xi(1/2+iz)$ is in the Laguerre-Pólya class. Therefore it is a limit, uniformly on compact sets, of a sequence of polynomials with real roots. The derivatives are again in the same class and therefore have only real zeros. It follows that all zeros of $\xi(s)$ and its derivatives will be on the critical line. I imagine that this is due to Pólya, but I do not have his Collected Works at hand to confirm. Given the simplicity of the proof I noticed, I will give a sketch of it.
We have $$\Xi(t)=\Xi(0)\prod_{n=1}^\infty \Bigl(1-\frac{t^2}{\alpha_n^2}\Bigr).$$ The Riemann hypothesis is that all $\alpha_n$ are real. Therefore, assuming RH, $\Xi(t)$ is the limit uniformly in compact sets of $\bf C$ of the polynomials $$P_N(t):=\Xi(0)\prod_{n=1}^N \Bigl(1-\frac{t^2}{\alpha_n^2}\Bigr).$$ We are assuming that all roots of these polynomials are real. Therefore the same will happen with any derivative $P_N^{(k)}(t)$. By the general Theorems of Complex Analysis $\lim_{N\to\infty}P_N^{(k)}(t)=\Xi^{(k)}(t)$ uniformly in compact sets. By the argument principle any zero of $\Xi^{(k)}(t)$ is limit of zeros of $P_N^{(k)}(t)$. Therefore any zero of $\Xi^{(k)}(t)$ is real. The relation $\Xi(t)=\xi(\frac12+it)$ implies that all the zeros of the derivatives of $\xi(s)$ are in the critical line. (It is essentially contained in the paper by G. Pólya, Bemerkung zur Theorie der ganzen Funktionen, Collected papers II, 154--162.) Related proof is in Lagarias paper p.1. The RH is equivalent to $$\Re\left(\frac{\xi'(s)}{\xi(s)}\right) > 0$$ When $\Re(s) > \frac12$. • It is not proven in that paper, but rather Lagarias cites Hinkkanen that the above is true. From this it follows that RH $\Rightarrow$ the zeros of $\xi^\prime$ lie on the critical line. – Stopple Dec 16 '14 at 18:18 • @Stopple Agreed. Though I believe some results from the paper are stronger and show the same. – joro Dec 17 '14 at 14:40 The same argument as in the Gauss-Lucas theorem (adapted to an entire function with appropriate order) works for showing that If $$\xi(1/2+s)$$ has no zeros on $$|Re(s)| > \alpha$$, then $$\xi'(1/2+s)$$ has no zeros on $$|Re(s)| > \alpha$$. 
Grouping the term $$\rho$$ with its complex conjugate $$\overline{\rho}$$, $$\displaystyle \frac{\xi'(s)}{\xi(s)} = \sum_\rho \frac{1}{s-\rho}$$ converges conditionally, so that $$\begin{array}{l}\displaystyle\ \ \ \ \xi'(\beta)=0 \\ \displaystyle\text{and }\xi(\beta)\ne 0\end{array}\implies \frac{\xi'(\beta)}{\xi(\beta)} = \sum_\rho \frac{\overline{\beta}-\overline{\rho}}{|\beta-\rho|^2}=0\quad\implies \quad\beta=\frac{\displaystyle\sum_\rho \frac{\rho}{|\beta-\rho|^2}}{\displaystyle\sum_{\rho}\frac{1}{|\beta-\rho|^2}}$$ which is a weighted sum of the zeros of $$\xi(s)$$, i.e. the zeros of $$\xi'(s)$$ lie in the convex hull of the zeros of $$\xi(s)$$. Now for $$\zeta(s)$$ it is different: the zeros of $$\zeta'(s)$$ are not in the convex hull of the zeros of $$\zeta(s)$$, since $$\zeta(s)$$ has a pole at $$s=1$$ and the trivial-zeros sum $$\sum_{n=2}^\infty \frac{1}{s+2n}$$ diverges. See zeros of $$\zeta'(s)$$ and the Riemann hypothesis for a proof that under the RH $$\zeta'(s)$$ has no zeros on $$0 < \mathrm{Re}(s) < 1/2$$. I'd like it if someone could derive a similar theorem assuming instead that $$\zeta(s)$$ has no zeros on $$Re(s) > \sigma_0$$.
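The convex-hull identity above is easy to sanity-check numerically on a toy model: a polynomial with real roots plays the role of $\Xi$, Rolle's theorem puts one zero of the derivative in each gap between adjacent roots, and each such zero should equal the weighted average of the roots with weights $1/|\beta-\rho|^2$. A minimal stdlib-Python sketch, with arbitrary illustrative roots:

```python
from math import isclose

roots = [-3.0, -1.0, 2.0, 5.0]  # arbitrary real roots, standing in for the zeros of Xi

def dp(x):
    # derivative of p(x) = prod (x - r), written via the product rule
    # so it has no singularity at the roots themselves
    total = 0.0
    for r in roots:
        term = 1.0
        for s in roots:
            if s != r:
                term *= (x - s)
        total += term
    return total

def bisect(f, a, b, tol=1e-12):
    # simple bisection; assumes f changes sign on [a, b]
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# One zero of p' between each pair of adjacent roots of p (Rolle).
crit = [bisect(dp, roots[i] + 1e-9, roots[i + 1] - 1e-9)
        for i in range(len(roots) - 1)]

# Each critical point is the weighted average of the roots,
# with weights 1/|beta - rho|^2 -- the identity in the answer above.
for b in crit:
    w = [1.0 / (b - r) ** 2 for r in roots]
    avg = sum(wi * r for wi, r in zip(w, roots)) / sum(w)
    assert isclose(b, avg, abs_tol=1e-6)
```

This is of course only the Gauss–Lucas mechanism for a finite product; the answer's point is that the same computation survives the limit to the genuine Hadamard product.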
https://www.physicsforums.com/threads/rocket-with-variable-mass-how-do-you-solve-for-ratio-of-weight-of-fuel-and-weight-of-rocket.887142/
# Rocket with variable mass, how do you solve for ratio of weight of fuel and weight of rocket? Tags: 1. Sep 28, 2016 ### Sudo 1. The problem statement, all variables and given/known data Rockets are propelled by the momentum of the exhaust gases expelled from the tail. Since these gases arise from the reaction of the fuels carried in the rocket, the mass of the rocket is not constant, but decreases as the fuel is expended. Show that for a rocket starting from rest, and taking the velocity of the exhaust gases relative to the rocket, $v'$ = 2.1 m/s, and a rate of mass loss per second L = 1/60 of the initial mass, to reach the escape velocity of the Earth ($v_e$ = 11.2 km/s), the ratio of the weight of the fuel to the weight of the rocket must be almost 300! 2. Relevant equations I know that $F=\frac {dp} {dt}=ma=m\frac {dv} {dt}$ 3. The attempt at a solution First, I work in the frame of the rocket; so we have two forces, the weight $W=mg$ and the force of the exhaust exiting the rocket $\frac {dm} {dt}v'$.
Both forces are pointing downward, thus I set up the following equation: $$ma=m\frac {dv} {dt}=-v'(\frac {dm} {dt})-mg$$ Now I rewrite $m\frac {dv} {dt}$ as $m\frac {dv} {dm}\frac {dm} {dt}$; now I have this expression:$$m\frac {dv} {dm}\frac {dm} {dt}=-v'(\frac {dm} {dt})-mg$$As $\frac {dm} {dt}=L=-\frac {m_0} {60}$ and dividing both sides by $m$ we have:$$\frac {dv} {dm}(-\frac {m_0} {60})=-\frac {v'} {m}(-\frac {m_0} {60})-g$$Dividing both sides by $L$ yields:$$\frac {dv} {dm}=-\frac {v'} {m}+\frac {60g} {m_0}$$Now we're ready to write our differential equation as follows:$$dv=-\frac {v'} {m}~dm+\frac {60g} {m_0}~dm$$We can now integrate, knowing that we start from $v_0=0$ and from a certain initial mass given by $m_0=m_{rocket}+m_{fuel}$ and a final mass $m_f=m_{rocket}$: $$\int_0^v dv=-v'\int_{m_0}^{m_f} \frac {1} {m} \ dm+\frac {60g} {m_0}\int_{m_0}^{m_f} dm$$After integrating and evaluating I get the following expression:$$v=-v'ln(\frac {m_f} {m_0})+\frac {60g} {m_0}(m_f-m_0)$$This can be rewritten as:$$v=-v'ln(\frac {m_{rocket}} {m_{rocket}+m_{fuel}})-\frac {60g} {m_{rocket}+m_{fuel}}(m_{fuel})$$Then, if $m_{fuel}>>m_{rocket}$ we can ignore the term $m_{rocket}$ in $\frac {1} {m_{rocket}+m_{fuel}}$, and the expression can be rewritten as:$$v=-v'ln(\frac {m_{rocket}} {m_{fuel}})-60g=v'ln(\frac {m_{fuel}} {m_{rocket}})-60g$$Solving for the desired ratio I get:$$\frac {m_{fuel}} {m_{rocket}}=e^{\frac {v+60g} {v'}}$$However, when plugging the values $v'=2.1$ m/s, $v=11.2$ km/s, and $g$ into the equation, the resulting ratio is much, much bigger than 300. What did I do wrong? Did I choose the incorrect frame? To my understanding my equation $ma=m\frac {dv} {dt}=-v'(\frac {dm} {dt})-mg$ should be true for this frame, thus $v'$ indeed is 2.1 m/s. Thank you for your time! As always I appreciate detailed answers, thanks again! 2. Sep 28, 2016 ### TSny The value given for $v'$ looks too small. Could it actually be 2.1 km/s?
You stated that both the thrust force and the gravitational force act downward. But the rocket goes upward. However, I think your equations are correct. Last edited: Sep 28, 2016 3. Sep 29, 2016 ### Sudo I'm not completely sure; the problem states m/s. Do you think that, even when considering that $v'$ is relative to the rocket (i.e., measured by someone in the rocket, not an observer on the ground), 2.1 m/s would still be too small? 4. Sep 29, 2016 ### jbriggs444 Too small by exactly a factor of 1000. An exhaust velocity so laughably low and a coincidence so striking that it is quite obviously an error in the problem statement. An arthritic cripple could throw bowling balls out the back end of a rocket faster than 2.1 m/s. 300 such throws would achieve a velocity around 25 miles per hour (*) -- somewhat less than escape velocity. (*) Assuming that the arthritic cripple starts with a pile of 300 bowling balls on his lap and has the same mass as one of the balls. Last edited: Sep 29, 2016
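Plugging numbers into the final formula of post #1 supports TSny's and jbriggs444's diagnosis: with the corrected exhaust speed $v' = 2.1$ km/s, the approximation $m_{fuel}/m_{rocket}=e^{(v+60g)/v'}$ comes out near 300, while the misprinted 2.1 m/s makes the exponent astronomically large. A quick stdlib-Python check (the 2.1 km/s value is TSny's suggested correction, not part of the original problem statement):

```python
from math import exp

v_escape = 11200.0  # m/s, escape velocity of the Earth
g = 9.8             # m/s^2; the gravity-loss term is 60*g (burn time ~60 s)

# With the corrected exhaust speed v' = 2.1 km/s:
v_exhaust = 2100.0
ratio = exp((v_escape + 60 * g) / v_exhaust)
print(ratio)        # about 274 -- "almost 300", as the problem claims

# With the misprinted v' = 2.1 m/s the exponent alone is ~5600,
# so e**exponent overflows a double -- clearly not "almost 300".
exponent_bad = (v_escape + 60 * g) / 2.1
print(exponent_bad)
```

The gravity-loss term $60g \approx 588$ m/s is small next to the 11.2 km/s target, so the answer is dominated by the Tsiolkovsky logarithm.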
http://mathoverflow.net/questions/97652/smooth-curve-in-the-hilbert-flag-scheme
# Smooth curve in the Hilbert flag scheme Let $d$ be an integer greater than $0$. Let $P_2$ be the Hilbert polynomial of a degree $d$ surface in $\mathbb{P}^3$. Recall, the Hilbert flag scheme $\mathrm{Hilb}_{P_1,P_2}$ parametrizes curves $C$ contained in a degree $d$ surface $X$ in $\mathbb{P}^3$ such that $P_1$ is the Hilbert polynomial of $C$. Now consider the first projection map from the above Hilbert flag scheme to the Hilbert scheme of curves with Hilbert polynomial $P_1$, denoted $\mathrm{Hilb}_{P_1}$. The question is: For which Hilbert polynomials $P_1$ can we say that there exists at least one smooth curve in every irreducible component of the image? More simply, for which Hilbert polynomials $P_1$ can we say that there exists a smooth curve $C$ in $\mathrm{Hilb}_{P_1}$ such that it is contained in some degree $d$ surface in $\mathbb{P}^3$? - I think there is a reasonable question buried in here somewhere. Could you perhaps clarify what you mean in the last sentence? – J.C. Ottem May 23 '12 at 1:16 For which Hilbert polynomials $P_1$ is there at least one smooth curve in the image of the Hilbert flag scheme under the first projection map (to $\mathrm{Hilb}_{P_1}$)? – Naga Venkata May 23 '12 at 9:17
http://www.physicsforums.com/showthread.php?t=233332
# 3 space and 1 time by K S Mallesh Tags: space, time P: 4 I often think that the three-dimensional space and the one-dimensional time can be viewed as arising due to a basic spinorial structure. I am drawing an analogy with the triplet and singlet states which arise when two spins 1/2 are added. Can we say that space and time basically arise from the coupling of two spinorial structures, the triplet combination giving the 3-d space and the singlet combination giving the 1-dimensional time? Please bear with me if my observation looks very crazy. P: 410 There is a relationship between spacetime and spinors, but it isn't the one that you think (although I will ponder upon it). You can assign to every 4-vector satisfying $$v^2 = 1$$ a 2x2 special unitary matrix satisfying $$\det u = 1$$, etc. Now SU(2) matrices transform as two copies of SU(2), whereas spinors transform as one copy. So it is often said that spinors are "square roots" of vectors. There's a related construction for null vectors, and this goes by the name of twistors. P: 27 the basic spacetime group, the Lorentz group, can be decomposed into SU(2)×SU(2). So you see all inertial frames are basically connected by a bispinor structure...
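The 4-vector ↔ 2×2 matrix correspondence alluded to above is usually set up by mapping $v=(t,x,y,z)$ to the Hermitian matrix $X = t\,\mathbb 1 + x\sigma_x + y\sigma_y + z\sigma_z$ built from the Pauli matrices, whose determinant is the Minkowski norm $t^2-x^2-y^2-z^2$; vectors with $v^2=1$ then land in matrices of unit determinant. A minimal stdlib-Python sketch of that bookkeeping, with an arbitrary sample vector:

```python
def to_matrix(t, x, y, z):
    # X = t*I + x*sigma_x + y*sigma_y + z*sigma_z, stored as ((a, b), (c, d))
    return ((t + z, x - 1j * y),
            (x + 1j * y, t - z))

def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

t, x, y, z = 2.0, 0.3, -0.4, 1.2   # arbitrary illustrative 4-vector
X = to_matrix(t, x, y, z)

# det X reproduces the Minkowski norm t^2 - x^2 - y^2 - z^2
assert abs(det2(X) - (t * t - x * x - y * y - z * z)) < 1e-12
print(det2(X))
```

Lorentz transformations then act as $X \mapsto A X A^\dagger$ with $\det A = 1$, which preserves this determinant; that is the sense in which spinors (on which $A$ acts once) are "square roots" of vectors (on which $A$ acts twice).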
https://physics.stackexchange.com/questions/396818/why-are-relativistic-quantum-field-theories-so-much-more-restrictive-than-non-re/396868
# Why are relativistic quantum field theories so much more restrictive than non-relativistic ones? Part of the reason that relativistic QFT is so hard to learn is that there are piles of 'no-go theorems' that rule out simple physical examples and physical intuition. A very common answer to the question "why can't we do X simpler, or think about it this way" is "because of this no-go theorem". To give a few examples, we have the Reeh–Schlieder theorem, the Coleman–Mandula theorem, Haag's theorem, the Weinberg–Witten theorem, the spin–statistics theorem, the CPT theorem, and the Coleman–Gross theorem. Of course all these theorems have additional assumptions I'm leaving out for brevity, but the point is that Lorentz invariance is a crucial assumption for every one. On the other hand, nonrelativistic QFT, as practiced in condensed matter physics, doesn't have nearly as many restrictions, resulting in much nicer examples. But the only difference appears to be that they work with a rotational symmetry group of $SO(d)$ while particle physicists use the Lorentz group $SO(d-1, 1)$, hardly a big change. Is there a fundamental, intuitive reason that relativistic QFT is so much more restricted? • I thought it might be linked to the fact that the Lorentz group isn't compact, but then again the salient comparison is actually the Galilean group versus the Poincaré group! Mar 31, 2018 at 13:30 • I think almost all of the things you mentioned are just a consequence of unitarity, not necessarily Lorentz invariance, although maybe in my thinking Lorentz invariance is entering in some subtle way. I think things like Reeh-Schlieder though can be stated nonrelativistically. Mar 31, 2018 at 17:47 • There is a no-go theorem which states that you cannot intuitively understand why relativistic QFT is so much more restricted. – ACat Apr 1, 2018 at 17:48 • @Dvij ... which is a corollary of the more general no-go theorem "you cannot intuitively understand anything in quantum mechanics" ;-P Apr 1, 2018 at 17:53 • @Dvij I would laugh if it weren't so true!
Apr 1, 2018 at 22:36 One of the reasons relativistic theories are so restrictive is the rigidity of the symmetry group. Indeed, its homogeneous part is simple, whereas that of non-relativistic systems is not. The isometry group of Minkowski spacetime is $$\mathrm{Poincar\acute{e}}=\mathrm{ISO}(\mathbb R^{1,d-1})=\mathrm O(1,d-1)\ltimes\mathbb R^d$$ whose homogeneous part is $\mathrm O(1,d-1)$, the so-called Lorentz Group1. This group is simple. On the other hand, the isometry group of Galilean space+time is2 $$\text{Bargmann}=\mathrm{ISO}(\mathbb R^1\times\mathbb R^{d-1})\times\mathrm U(1)=(\mathrm O(d-1)\ltimes\mathbb R^{d-1})\ltimes(\mathrm U(1)\times\mathbb R^1\times\mathbb R^{d-1})$$ whose homogeneous part is $\mathrm O(d-1)\ltimes\mathbb R^{d-1}$, the so-called (homogeneous) Galilei Group. This group is not semi-simple (it contains a non-trivial normal subgroup, that of boosts). There is in fact a classification of all physically admissible kinematical symmetry groups (due to Lévy-Leblond), which pretty much singles out Poincaré as the only group with the above properties. There is a single family of such groups, which contains two parameters: the AdS radius $\ell$ and the speed of light $c$ (and all the rotation-invariant İnönü-Wigner contractions thereof). As long as $\ell$ is finite, the group is simple. If you take $\ell\to\infty$ you get Poincaré, which has a non-trivial normal subgroup, the group of translations (and if you quotient out this group, you get a simple group, Lorentz). If you also take $c\to\infty$ you get Bargmann (or Galilei), which also has a non-trivial normal subgroup (and if you quotient out this group, you do not get a simple group; rather, you get Galilei, which has a non-trivial normal subgroup, that of boosts).
Another reason is that the postulate of causality is trivial in non-relativistic systems (because there is an absolute notion of time), but it imposes strong restrictions on relativistic systems (because there is no absolute notion of time). This postulate is translated into the quantum theory through the axiom of locality, $$[\phi(x),\phi(y)]=0\quad\forall x,y\quad \text{s.t.}\quad (x-y)^2<0$$ where $[\cdot,\cdot]$ denotes a supercommutator. In other words, any two operators whose supports are causally disconnected must (super)commute. In non-relativistic systems this axiom is vacuous because all spacetime intervals are timelike, $(x-y)^2>0$, that is, all spacetime points are causally connected. In relativistic systems, this axiom is very strong. These two remarks can be applied to the theorems you quote: • Reeh-Schlieder depends on the locality axiom, so it is no surprise it no longer applies to non-relativistic systems. • Coleman-Mandula (see here for a proof). The rotation group is compact and therefore it admits finite-dimensional unitary representations. On the other hand, the Lorentz group is non-compact and therefore the only finite-dimensional unitary representation is the trivial one. Note that this is used in step 4 in the proof above; it is here where the proof breaks down. • Haag also applies to non-relativistic systems, so it is not a good example of OP's point. See this PSE post for more details. • Weinberg-Witten. To begin with, this theorem is about massless particles, so it is not clear what such particles even mean in non-relativistic systems. From the point of view of irreducible representations they may be meaningful, at least in principle. But they need not correspond to helicity representations (precisely because the little group of the reference momentum is not simple). Therefore, the theorem breaks down (as it depends crucially on helicity representations). • Spin-statistics.
As in Reeh-Schlieder, in non-relativistic systems the locality axiom is vacuous, so it implies no restriction on operators. • CPT. Idem. • Coleman-Gross. I'm not familiar with this result so I cannot comment. I don't even know whether it is violated in non-relativistic systems. 1: More generally, the indefinite orthogonal (or pseudo-orthogonal) group $\mathrm O(p,q)$ is defined as the set of $(p+q)$-dimensional matrices, with real coefficients, that leave invariant the metric with signature $(p,q)$: $$\mathrm O(p,q):=\{M\in \mathrm{M}_{p+q}(\mathbb R)\ \mid\ M\eta M^T\equiv \eta\},\qquad \eta:=\mathrm{diag}(\overbrace{-1,\dots,-1}^p,\overbrace{+1,\dots,+1}^q)$$ The special indefinite orthogonal group $\mathrm{SO}(p,q)$ is the subset of $\mathrm O(p,q)$ with unit determinant. If $pq\neq0$, the group $\mathrm{SO}(p,q)$ has two disconnected components. In this answer, "Lorentz group" may refer to the orthogonal group with signature $(1,d-1)$; to its $\det(M)\equiv+1$ component; or to its orthochronous subgroup $M^0{}_0\ge+1$. Only the latter is connected. The topology of the group is mostly irrelevant for this answer, so we shall make no distinction between the three different possible notions of "Lorentz group". 2: One can prove that the inhomogeneous Galilei algebra, unlike the Poincaré algebra, has a non-trivial second co-homology group. In other words, it admits a non-trivial central extension. The Bargmann group is defined precisely as the centrally extended inhomogeneous Galilei group. Strictly speaking, all we know is that the central extension has the algebra $\mathbb R$; at the group level, it could lead to a factor of $\mathrm U(1)$ as above, or to a factor of $\mathbb R$. In quantum mechanics the first option is more natural, because we may identify this phase with the $\mathrm U(1)$ symmetry of the Schrödinger equation (which has a larger symmetry group, the so-called Schrödinger group).
Again, the details of the topology of the group are mostly irrelevant for this answer. • Can you really construct a counterexample to Reeh-Schlieder? Apr 1, 2018 at 6:03 • @RyanThorngren The concept of non-relativistic Reeh-Schlieder is meaningless (because the concept of local algebras is also meaningless: if $c\to\infty$, all points are causally connected). Therefore, strictly speaking there are no counter-examples, because the theorem is incompatible with the situation. But in OP's sense, which states "... forbids position operators in relativistic QFT", then the counter-example is straightforward: in non-relativistic systems, the position operator $\hat X$ is well-defined, Galilei covariant, and local. No such operator exists in the relativistic regime. Apr 1, 2018 at 14:15 • I would state Reeh-Schlieder as saying that $\langle O^\dagger(x) O(0) \rangle \ge 0$, with 0 only for the identity operator. This follows from reflection positivity. I don't know how to construct the position operator in many-body quantum mechanics. Apr 1, 2018 at 17:37 • @RyanThorngren 1) The version of R-S I had in mind is that of wikipedia, i.e., that the vacuum is a cyclic vector (for algebras over open sets in Minkowski). I don't know whether this is equivalent to your version. 2) In principle, you just take $\hat X=\bigoplus_i \hat X_i$, where $\hat X_i=1\otimes\cdots 1\otimes \hat X\otimes 1\cdots 1$, where the $\hat X$ is at the position $i$. Apr 1, 2018 at 17:46 • I think they are equivalent, or at least the version on wikipedia follows from the statement I made. Local observables can just mean ones whose support on Hilbert space is bounded in real space. I don't think the X operator you write down is so bounded in, say, a hopping Hamiltonian model. Apr 1, 2018 at 19:30 Lorentz invariance is also an indirect contributor to the restrictions that renormalizability places on the theory. The logic goes something like this: 1.
The action must be Lorentz invariant, thus the number of spatial derivatives must equal the number of time derivatives in the action. 2. We want the Hamiltonian to play the role of energy (has lower bound and provides stability), therefore the action can have no more than two time derivatives (for more, see: this previous answer on renormalization). 3. By 1 and 2, the propagator $\Delta(x,y)$ will, necessarily, diverge like $\left([x^\mu-y^\mu][x_\mu-y_\mu]\right)^{-1}$ as $x\rightarrow y$ (equivalently, the momentum space form looks like $(p^\mu p_\mu)^{-1}$ for $p\rightarrow \infty$). It is that third step that leads to the divergences that require renormalization, and renormalizability is very restrictive on what terms the action is allowed to contain. Without Lorentz invariance, we could add more spatial derivatives without time derivatives, produce a well-behaved finite propagator, and work with a much broader class of theories. Granted, as discussed in the linked answer, you could relax 2 somewhat, but that doesn't allow any theory, just more. Here's one (incomplete) perspective, mostly about the infrared: For a field with given charges under Lorentz and all other symmetries, there is essentially only one theory with quadratic action to first order in derivatives. For integer spins it's $\partial_\mu \phi \partial^\mu \phi + m^2 \phi^2$ and for half integer spins it's $\bar \psi \gamma^\mu D_\mu \psi + m \bar \psi \psi$. This is a fact of representation theory, and the fact that all you have to contract spacetime indices are $g_{\mu \nu}$, $\gamma^\mu$, and things with more spacetime indices. Note that the form of these actions determines the bare propagators to be the relativistic ones $1/(p^2 - m^2)$ and $1/(p-m)$, respectively.
On the other hand, if you were to break Lorentz symmetry, say by choosing a vector field $v^\mu$, then you could write terms like $\phi v^\mu \partial_\mu \phi$, which would change the dispersion relation for $\phi$ to be linear in $p$ for momenta parallel to $v$. Note that for timelike $v$ this breaks the Lorentz group $SO(1,d)$ to $SO(d)$. For an applied magnetic field $F_{ij}$ on fermions we could add a term $\bar \psi F_{ij} \gamma^i \gamma^j \psi$ which can mess up spin-statistics. I think these new "Gaussian fixed points" cause lots of things to go wonky (in the IR) when you do perturbation theory around them. On the other hand, there aren't that many terms that can lead to these ones, and because of that most of the theories we end up studying in condensed matter have emergent Lorentz invariance in the IR. Some significant exceptions are theories with a singular Fermi surface or others with "UV/IR mixing" that causes the field theory to see the lattice.
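The point made two answers up — that extra spatial derivatives, allowed once Lorentz invariance is dropped, tame the short-distance behavior of the propagator — can be illustrated with a crude radial integral: $\int^\Lambda p^2\,dp/(p^2+m^2)$ grows linearly with the cutoff $\Lambda$, while adding a $p^4$ term to the denominator makes it converge. A stdlib-Python sketch in illustrative units with $m=1$ (plain midpoint-rule integration, not a real loop computation):

```python
def integral(f, a, b, n=100000):
    # midpoint rule
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def lorentz_inv(p):
    # radial part of d^3p / (p^2 + m^2), m = 1: linearly divergent in the cutoff
    return p * p / (p * p + 1.0)

def higher_deriv(p):
    # same, with an extra p^4 from two more spatial derivatives: convergent
    return p * p / (p * p + 1.0 + p ** 4)

for cutoff in (10.0, 100.0):
    print(cutoff,
          integral(lorentz_inv, 0.0, cutoff),
          integral(higher_deriv, 0.0, cutoff))
# the lorentz_inv column keeps growing roughly like the cutoff;
# the higher_deriv column saturates
```

This is only the power-counting cartoon of point 3 in the renormalizability argument, but it shows concretely why the restricted relativistic propagator forces renormalization while a non-relativistic theory can simply buy convergence with spatial derivatives.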
http://math.stackexchange.com/questions/167957/closed-form-solution-of-fibonacci-like-sequence
# Closed form solution of Fibonacci-like sequence Could someone please tell me the closed form solution of the recurrence below. $$F(n) = 2F(n-1) + 2F(n-2)$$ $$F(1) = 1$$ $$F(2) = 3$$ Is there any way it can be easily deduced if the closed form solution of Fibonacci is known? - The same solution method should work.... – Hurkyl Jul 7 '12 at 19:11 Well, how would you derive the Binet formula for Fibonacci numbers in the first place? You can adapt the same technique to this case. – J. M. is back. Jul 7 '12 at 19:15 I think we should know two "base cases," e.g. $$F(0)=1\\F(1)=1$$ otherwise we can never get the numerical value of $F(n)$. – Argon Jul 7 '12 at 19:17 @Argon, of course, but in the absence of base cases, we could still have a formula with two arbitrary constants... – J. M. is back. Jul 7 '12 at 19:23 For your particular initial conditions, you should be getting $$\frac1{12}\left((3-\sqrt{3})\left(1-\sqrt{3}\right)^k+(3+\sqrt{3})(1+\sqrt 3)^k\right)$$ – J. M. is back. Jul 7 '12 at 19:43 Any of the standard methods for solving such recurrences will work. In particular, whatever method you would use to get the Binet formula for the Fibonacci numbers will work here, once you establish initial conditions. If you set $F(0)=0$ and $F(1)=1$, as with the Fibonacci numbers, the closed form is $$F(n)=\frac{(1+\sqrt3)^n-(1-\sqrt3)^n}{2\sqrt3}\;;$$ I don’t see any way to derive this directly from the corresponding closed form for the Fibonacci numbers, however. By the way, with those initial values the sequence is OEIS A002605. Added: The general solution is $$F(n)=A(1+\sqrt3)^n+B(1-\sqrt3)^n\;;\tag{1}$$ Argon’s answer already shows you one of the standard methods of obtaining this. To find $A$ and $B$ for a given set of initial conditions, just substitute the known values of $n$ in $(1)$. If you want $F(1)=1$, you must have $$1=F(1)=A(1+\sqrt3)^1+B(1-\sqrt3)^1\;,$$ or $A+B+\sqrt3(A-B)=1$.
To get $F(2)=3$, you must have $$\begin{aligned}3&=F(2)=A(1+\sqrt3)^2+B(1-\sqrt3)^2\\ &=A(4+2\sqrt3)+B(4-2\sqrt3)\;, \end{aligned}$$ or $4(A+B)+2\sqrt3(A-B)=3$. You now have the system $$\left\{\begin{aligned}&A+B+\sqrt3(A-B)=1\\ &4(A+B)+2\sqrt3(A-B)=3\;. \end{aligned}\right.$$ Multiply the first equation by $2$ and subtract from the second to get $2(A+B)=1$, and multiply the first equation by $4$ and subtract the second from it to get $2\sqrt3(A-B)=1$. Then you have the simple system $$\left\{\begin{aligned}&A+B=\frac12\\&A-B=\frac1{2\sqrt3}\;,\end{aligned}\right.$$ which you should have no trouble solving for $A$ and $B$.

- can you please tell the corresponding F(n) with F(1) = 1 and F(2) = 3 – Raj Jul 7 '12 at 19:40
- @Raj: you don't know how to derive it yourself? – J. M. is back. Jul 7 '12 at 19:47
- @Brian and J.M: Thank you so much for your help. It means a lot to me. – Raj Jul 7 '12 at 20:10

$$F(n)=2F(n-1)+2F(n-2)=2(F(n-1)+F(n-2))$$ Assuming a solution of the form $F(n)=r^n$ gives $$r^n=2(r^{n-1}+r^{n-2})$$ We divide by $r^{n-2}$ to get $$r^2=2(r+1) \implies r^2-2r-2=0$$ which is our characteristic equation. The characteristic roots are $$\lambda_1=1-\sqrt{3} \\ \lambda_2=1+\sqrt{3}$$ Thus (because we have two distinct roots) $$F(n)=c_1 \lambda_1^n+c_2\lambda_2^n = c_1(1-\sqrt{3})^n+c_2(1+\sqrt{3})^n$$ where $c_1$ and $c_2$ are constants chosen to match the base cases. Brian M. Scott's answer explains how to obtain $c_1$ and $c_2$.

Any solution sequence can be written, with real constants $A,B$, as $$A \; \left(1 + \sqrt 3 \right)^n + B \; \left(1 - \sqrt 3 \right)^n.$$ The set of such sequences is a vector space over $\mathbb R$, of dimension 2, and the expression above is a linear combination of basis elements for that vector space.
In comparison, suppose we took $$G(n) = 8 G(n-1) - 15 G(n-2).$$ Then, since the characteristic equation $r^2 - 8r + 15 = 0$ has roots $5$ and $3$, with real constants $A,B$ to be determined we would have $$G(n) = A \cdot 5^n + B \cdot 3^n$$

Define $g(z) = \sum_{n \ge 0} F(n + 1) z^n$ and write the recurrence as: $$F(n + 3) = 2 F(n + 2) + 2 F(n + 1) \qquad F(1) = 1, F(2) = 3$$ Multiply the recurrence by $z^n$, sum over $n \ge 0$ and get: $$\frac{g(z) - F(1) - F(2) z}{z^2} = 2 \frac{g(z) - F(1)}{z} + 2 g(z)$$ Solve for $g(z)$ and split into partial fractions: $$g(z) = \frac{1 + z}{1 - 2 z - 2 z^2} = \frac{2 + \sqrt{3}}{2 \sqrt{3}} \cdot \frac{1}{1 - (1 + \sqrt{3}) z} - \frac{2 - \sqrt{3}}{2 \sqrt{3}} \cdot \frac{1}{1 - (1 - \sqrt{3})z}$$ Expanding the two geometric series gives: $$F(n + 1) = \frac{2 + \sqrt{3}}{2 \sqrt{3}} \cdot (1 + \sqrt{3})^n - \frac{2 - \sqrt{3}}{2 \sqrt{3}} \cdot (1 - \sqrt{3})^n$$
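The closed form for $F(1)=1$, $F(2)=3$ can be sanity-checked numerically against the recurrence (an illustrative script, not from the thread; the helper names `F_rec` and `F_closed` are mine):

```python
from math import sqrt, isclose

def F_rec(n):
    """F(1) = 1, F(2) = 3, F(n) = 2*F(n-1) + 2*F(n-2), computed iteratively."""
    a, b = 1, 3
    if n == 1:
        return a
    for _ in range(n - 2):
        a, b = b, 2 * b + 2 * a
    return b

def F_closed(n):
    """Closed form from the thread: ((3+s)(1+s)^n + (3-s)(1-s)^n) / 12, s = sqrt(3)."""
    s = sqrt(3)
    return ((3 + s) * (1 + s) ** n + (3 - s) * (1 - s) ** n) / 12

print([F_rec(n) for n in range(1, 7)])   # [1, 3, 8, 22, 60, 164]
assert all(isclose(F_closed(n), F_rec(n)) for n in range(1, 20))
```

The two agree for every tested index, as the derivation above predicts.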
https://pgadey.wordpress.com/2013/03/26/antipodal-points-after-vilcu/
## Antipodal points after Vîlcu

Posted in Math by pgadey on 2013/03/26

I’ve been thinking a lot about convex bodies in ${{\mathbb R}^3}$ lately. This post is going to be a write up of a useful lemma in the paper: Vîlcu, Costin, On Two Conjectures of Steinhaus, Geom. Dedicata 79 (2000), 267-275.

Let ${S}$ be a centrally symmetric convex body in ${{\mathbb R}^3}$. Let ${d(x,y)}$ denote the intrinsic metric of ${S}$ and ${D = \sup_{x,y} d(x,y)}$ its intrinsic diameter. For a point ${x \in S}$ we write ${\bar{x}}$ for its image under the central symmetry.

Lemma 1 (Vîlcu) If ${d(x,y) = D}$ then ${y = \bar{x}}$.

This lemma says that if a pair realizes the intrinsic diameter of a centrally symmetric convex body, the pair has to be centrally symmetric. This aligns well with our intuition about the sphere and cube, for example.

Proof: Let ${d(x,y) = D}$ and suppose, for contradiction, that ${y \neq \bar{x}}$. Pick some length minimizing geodesic ${\gamma_{x\bar{x}}}$ connecting ${x}$ to ${\bar{x}}$. Let ${\Gamma = \gamma_{x\bar{x}} \cup \bar{\gamma}_{x\bar{x}}}$ denote the concatenation of the paths ${\gamma_{x\bar{x}}}$ and ${\bar{\gamma}_{x\bar{x}}}$. We check that ${\Gamma}$ is self-intersection free. Certainly ${\gamma_{x\bar{x}}}$ is self-intersection free, because it is a minimizing geodesic. Suppose that ${p \in \gamma_{x\bar{x}} \cap \bar{\gamma}_{x\bar{x}}}$. Then we have two minimizing geodesics from ${x}$ to ${\bar{x}}$ intersecting at a point on their interior. Hence they must coincide, a contradiction. Thus ${\Gamma}$ is self-intersection free and hence ${\Gamma}$ separates ${S}$ into two open regions. Let ${S \setminus \Gamma = S_1 \cup S_2}$. We know ${S_2 = \bar{S}_1}$ since the central symmetry has to swap the components. Suppose, for contradiction, that ${y \in \Gamma}$. Then ${d(x,y) \leq d(x,\bar{x})}$. If ${d(x,y) = d(x,\bar{x})}$ then ${y = \bar{x}}$, contradicting our hypothesis on ${y}$.
If ${d(x,y) < d(x, \bar{x})}$ then ${d(x,\bar{x}) > D}$, contradicting the maximality of the diameter. Without loss of generality, take ${y \in S_1}$. We have ${\bar{y} \in \bar{S}_1}$. Take a minimizing geodesic ${\gamma_{y\bar{y}}}$ joining ${y}$ to ${\bar{y}}$. We have that ${\gamma_{y\bar{y}} \cap \Gamma \neq \emptyset}$ by the Jordan curve theorem. Take ${z \in \gamma_{y\bar{y}} \cap \Gamma}$. Suppose ${z \in \gamma_{x\bar{x}}}$. The triangle inequality gives us: $\displaystyle \begin{array}{rcl} d(x,y) & \leq & d(x,z) + d(z,y)\\ d(\bar{x},\bar{y}) & \leq & d(\bar{x},z) + d(z,\bar{y}) \end{array}$ Thus: $\displaystyle d(x,y) + d(\bar{x},\bar{y}) \leq d(x,z) + d(z,\bar{x}) + d(y, z) + d(z, \bar{y}) = d(x,\bar{x}) + d(y,\bar{y})$ The maximality of the diameter and the equality ${d(x,y) = d(\bar{x},\bar{y})}$ then give: $\displaystyle d(x,y) + d(\bar{x}, \bar{y}) = d(x,\bar{x}) + d(y,\bar{y})$ Suppose that ${d(x,y) < d(x,z) + d(z,y)}$ and ${d(\bar{x}, \bar{y}) < d(\bar{x},z) + d(z,\bar{y})}$. Adding these would contradict the equality above. Thus, one of the two strict inequalities is an equality. Suppose without loss of generality that ${d(x,y) = d(x,z) + d(z,y)}$. Let ${\gamma_{xy}}$ denote a minimizing geodesic segment connecting ${x}$ to ${y}$. We then have that ${\gamma_{x\bar{x}} \subset \gamma_{xy}}$, otherwise we would have two geodesics diverging from one another at a point. We then form ${\gamma'_{xy}}$ by removing ${\gamma_{x\bar{x}}}$ from ${\gamma_{xy}}$ and replacing it with ${\bar{\gamma}_{x\bar{x}}}$. We then have that ${\gamma_{xy}}$ and ${\gamma'_{xy}}$ diverge at a point, a contradiction. $\Box$
http://groundstate.wikidot.com/yang-mills
Yang Mills Let $M$ be a 4-d manifold equipped with metric $g$, and denote the determinant of $g$ by $\lvert g \rvert$. Let $G \subset SO \left( l \right)$ be the compact structure group for the Yang-Mills action we will construct on M. Let $\mathfrak{g} \subset so \left( l \right)$ denote the Lie algebra of $G$, and $\left( A , B \right) \equiv \mathrm{tr} A B^{\dagger}$ the trace inner product on $\mathfrak{g}$. Let $*$ represent the Hodge dual induced by the metric $g$. For ordinary real-valued $p$-forms in $\Lambda^p M$, the Hodge dual is defined in terms of the inner product on forms, which in turn is linearly extended from the definition (1) \begin{align} \langle e^1 \wedge \cdots \wedge e^p , f^1 \wedge \cdots \wedge f^p \rangle = \mathrm{det} \left[ g^{\mu \nu} e^i_{\mu} f^j_{\nu} \right]. \end{align} In terms of this inner product, the Hodge dual is defined by the expression (2) \begin{align} \omega \wedge * \mu = \langle \omega , \mu \rangle \cdot \mathrm{vol}, \end{align} where $\mathrm{vol}$ is the volume form induced by the metric $g$. In coordinates, this reduces to (3) \begin{align} * \omega = \frac{\sqrt{ \lvert g \rvert } }{p! \left( n - p \right)!} \omega_{\alpha_1 \dots \alpha_p} \left. \epsilon^{\alpha_1 \dots \alpha_p} \right. _{\beta_{p+1} \dots \beta_n} dx^{\beta_{p+1}} \wedge \dots \wedge dx^{\beta_n}, \end{align} where $\epsilon_{\alpha_1 \dots \alpha_n}$ is the totally antisymmetric symbol and $\left. \epsilon^{\alpha_1 \dots \alpha_p}\right. _{\beta_{p+1} \dots \beta_n} = \epsilon_{\gamma_1 \dots \gamma_p \beta_{p+1} \dots \beta_n} g^{\alpha_1 \gamma_1} \cdots g^{\alpha_p \gamma_p}$. 
For a Lie-algebra-valued $p$-form $\omega \otimes T \in \Lambda^p M \otimes \mathfrak{g}$, the definition of the Hodge dual is extended so that (4) \begin{align} * \left( T \otimes \omega \right) \equiv T^{\dagger} \otimes * \omega \end{align} The inner product of two Lie-algebra valued forms is (5) \begin{align} \langle T \otimes \omega , S \otimes \mu \rangle &= \mathrm{tr} \left[ \left( T \otimes \omega \right) \wedge * \left( S \otimes \mu \right) \right] = \mathrm{tr} \left[ \left( T \otimes \omega \right) \wedge \left( S^{\dagger} \otimes * \mu \right) \right] \\ &= \mathrm{tr} \left( T S^{\dagger} \right) \left( \omega \wedge * \mu \right) = \mathrm{tr} \left( T S^{\dagger} \right) \langle \omega , \mu \rangle \cdot \mathrm{vol} \end{align} The Lorentzian Yang-Mills action is given by (6) \begin{align} S_{YM} \left( A \right) = - \frac{1}{2} \int_M \mathrm{tr} \left( F \wedge * F \right), \end{align} while the Euclidean Yang-Mills action is (7) \begin{align} S^E_{YM} \left( A \right) = \frac{1}{2} \int_M \mathrm{tr} \left( F \wedge * F \right). \end{align} Note that the signs would be reversed, as in Nakahara, Baez/Muniain, or Jackiw, if we did not incorporate the adjoint into the definition of the Hodge dual. 
In terms of the definitions given above, we can rewrite the Lorentzian action as (8) \begin{align} - \frac{1}{2} \int_M \langle F , F \rangle \, \mathrm{vol} &= - \frac{1}{2} \int_M \langle \frac{1}{2} F_{\mu \nu} dx^{\mu} \wedge dx^{\nu} , \frac{1}{2} F_{\rho \sigma} dx^{\rho} \wedge dx^{\sigma} \rangle \, \mathrm{vol} \\ &= - \frac{1}{8} \int_M \mathrm{tr} \left( F_{\mu \nu} F^{\dagger}_{\rho \sigma} \right) \Bigl\langle dx^{\mu} \wedge dx^{\nu} , dx^{\rho} \wedge dx^{\sigma} \Bigr\rangle \mathrm{vol} \\ &= - \frac{1}{8} \int_M \left( F_{\mu \nu} , F_{\rho \sigma} \right) \det \left( \begin{array}{ll} g^{\mu \rho} & g^{\mu \sigma} \\ g^{\nu \rho} & g^{\nu \sigma} \end{array} \right) \mathrm{vol} \\ &= - \frac{1}{4} \int_M g^{\mu \rho} g^{\nu \sigma} \left(F_{\mu \nu} , F_{\rho \sigma} \right) \sqrt{\lvert g \rvert} dx_M \end{align} or in terms of a normalized (with respect to the trace inner product) basis of the Lie algebra, (9) \begin{align} - \frac{1}{4} \int_M F^I_{\mu \nu} F^{\mu \nu}_I \sqrt{\lvert g \rvert} dx_M, \end{align} where $I$ runs over the Lie algebra basis, and up and down $I$ indices are the same. Thus for Minkowski space, the Lagrangian density for Yang-Mills is $\mathcal{L} = - \frac{1}{4} F^{I}_{\rho \sigma} F_{I}^{\rho \sigma}$. Alternatively, we could have reached this coordinate expression for the Yang-Mills action by substituting the coordinate form of $*F$ (use (4) and (3)) into (6). We find the momenta canonically conjugate to the components $A_{\mu}$ of the connection by (10) \begin{align} \Pi^{\mu} = \frac{\delta \mathcal{L}}{\delta \dot{A}_{\mu}}, \end{align} but since the Lagrangian does not depend on $\dot{A}_0$, we cannot assign a momentum canonically conjugate to $A_0$. We deal with this issue by defining our Hamiltonian formalism with the Weyl gauge, $A_0 = 0$, imposed, so that we now only have three configuration variables $A_1, A_2, A_3$.
We find their canonical momenta to be (11) \begin{align} \Pi^{i} = E^i = \dot{A}_i, \end{align} where the Latin indices run over space ($i = 1, 2, 3$) only. Note that $E$ is the negative of the variable that in Maxwell theory would be called the electric field. We can also define the analog of the magnetic field $B^i = \frac{1}{2} \epsilon^{ijk} F_{jk}$. Note that the squared vector norms of $E$ and $B$ are given by (12) \begin{align} \lvert E \rvert ^2 = - F^{I}_{0i} F_{I}^{0i} \\ \lvert B \rvert ^2 = \frac{1}{2} F^{I}_{jk} F_{I}^{jk}, \end{align} so in terms of these we can write (13) \begin{align} S_{YM} \left( A \right) &= - \frac{1}{4} \int_{M} F^{I}_{\rho \sigma} F_{I}^{\rho \sigma} dx^0 \cdots dx^3 \\ &= \frac{1}{2} \int_M \left( \lvert E \rvert ^2 - \lvert B \rvert ^2 \right) dx^0 \cdots dx^3 \end{align} and a Legendre transform $\mathcal{H} = \dot{A}^I_i E_I^i - \mathcal{L}$ gives the Hamiltonian density (14) \begin{align} \mathcal{H} = \frac{1}{2} \left( \lvert E \rvert ^2 + \lvert B \rvert ^2 \right) \end{align} A similar analysis of the Euclidean action would give (15) \begin{align} S^E_{YM} = \frac{1}{2} \int_M \left( \lvert E \rvert ^2 + \lvert B \rvert ^2 \right) dx^0 \cdots dx^3 \end{align} because of the differing sign in the definition of the action and the Euclidean signature of the metric. page revision: 78, last edited: 02 Feb 2017 19:24
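The $E$/$B$ decomposition of the Lagrangian density can be checked numerically for a single abelian (one-component) field strength in flat Minkowski space; the script below is an illustrative sketch (the variable names are mine) verifying that $-\frac{1}{4}F_{\mu\nu}F^{\mu\nu} = \frac{1}{2}\left(\lvert E \rvert^2 - \lvert B \rvert^2\right)$.

```python
import numpy as np

rng = np.random.default_rng(1)
E = rng.normal(size=3)   # "electric" components, E^i = F_{0i}
B = rng.normal(size=3)   # "magnetic" components, B^i = (1/2) eps^{ijk} F_{jk}

# 3-d Levi-Civita symbol
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[j, i, k] = -1.0

# abelian field strength F_{mu nu} (antisymmetric)
F = np.zeros((4, 4))
F[0, 1:] = E
F[1:, 0] = -E
F[1:, 1:] = np.einsum('jkl,l->jk', eps, B)   # F_{jk} = eps_{jkl} B^l

eta = np.diag([1.0, -1.0, -1.0, -1.0])       # Minkowski metric
F_up = eta @ F @ eta                         # F^{mu nu}: raise both indices

lagrangian = -0.25 * np.einsum('mn,mn->', F, F_up)   # -1/4 F_{mu nu} F^{mu nu}
assert np.isclose(lagrangian, 0.5 * (E @ E - B @ B))
```

The sign convention for $E$ (the text's $E$ is minus the Maxwell electric field) does not matter here, since only $\lvert E \rvert^2$ enters.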
http://2017s1-07mathematics.blogspot.com/2017/01/number-system.html
## Wednesday, January 18, 2017

### NUMBER SYSTEM

NUMBER SYSTEM AND CLASSIFICATION ===================================================================== Below is a diagram that shows the classification of the number system (i.e. the different types of numbers in the mathematical world). Your task is (1) to define the following types of numbers and (2) to give some examples. An example on Integers is done for you. Do (2) - (5) Name (index no.) (1) Integers (2) Whole Numbers (3) Natural Numbers (4) Rational Numbers (5) Irrational Numbers ======================================================================== Example: Mr Johari (008) (1) Integer An integer (pronounced IN-tuh-jer) is a whole number (not a fractional number) that can be positive, negative, or zero. Examples of integers are: -5, 1, 5, 8, 97, and 3,043. The set of integers, denoted Z, is formally defined as follows:   Z = {..., -3, -2, -1, 0, 1, 2, 3, ...} source: http://whatis.techtarget.com/ 1. This comment has been removed by the author. 2. Hansen (10) 2. Definition The numbers {0, 1, 2, 3, ...} etc. There is no fractional or decimal part. And no negatives. Example: 5, 49 and 980 are all whole numbers. Source: https://www.mathsisfun.com/definitions/whole-number.html 3. Natural Number The whole numbers from 1 upwards: 1, 2, 3, and so on ... Or from 0 upwards in some fields of mathematics: 0, 1, 2, 3 and so on ... No negative numbers and no fractions. Source: https://www.mathsisfun.com/definitions/natural-number.html 4. Rational Numbers A Rational Number is a real number that can be written as a simple fraction (i.e. as a ratio). Most numbers we use in everyday life are Rational Numbers. Example: 1.5 is a rational number because 1.5 = 3/2 (it can be written as a fraction) Source: http://www.mathsisfun.com/rational-numbers.html 5. Irrational Numbers An Irrational Number is a real number that cannot be written as a simple fraction.
Irrational means not Rational. Example: π = 3.1415926535897932384626433832795 (and more...) You cannot write down a simple fraction that equals Pi. Source: https://www.mathsisfun.com/irrational-numbers.html 1. A Distinction effort and quality 3. RATIONAL A rational number is a number that can be written as a ratio. That means it can be written as a fraction, in which both the numerator (the number on top) and the denominator (the number on the bottom) are whole numbers. The number 8 is a rational number because it can be written as the fraction 8/1. Likewise, 3/4 is a rational number because it can be written as a fraction. Even a big, clunky fraction like 7,324,908/56,003,492 is rational, simply because it can be written as a fraction. Every whole number is a rational number, because any whole number can be written as a fraction. For example, 4 can be written as 4/1, 65 can be written as 65/1, and 3,867 can be written as 3,867/1. IRRATIONAL All numbers that are not rational are considered irrational. An irrational number can be written as a decimal, but not as a fraction. An irrational number has endless non-repeating digits to the right of the decimal point. Here are some irrational numbers: π = 3.141592… square root of 2 = 1.414213… Although irrational numbers are not often used in daily life, they do exist on the number line. In fact, between 0 and 1 on the number line, there are an infinite number of irrational numbers! 1. A Distinction effort and quality 4. Ng Zen Haan A WHOLE NUMBER is a number without fractions; an integer. Example: 5, 49, 640 A NATURAL NUMBER is the positive integers (whole numbers) 1, 2, 3, etc., and sometimes zero as well. A RATIONAL NUMBER is any number that can be expressed as the quotient or fraction p/q of two integers, a numerator p and a non-zero denominator q. Since q may be equal to 1, every integer is a rational number. ... A real number that is not rational is called irrational.
AN IRRATIONAL NUMBER is a real number that cannot be expressed as a ratio of integers, i.e. as a fraction. Therefore, irrational numbers, when written as decimal numbers, do not terminate, nor do they repeat. 1. A Distinction effort and quality 5. Whole numbers: Whole Numbers are simply the numbers 0, 1, 2, 3, 4, 5, ... (and so on) Natural numbers: "Natural Numbers" can mean either "Counting Numbers" {1, 2, 3, ...}, or "Whole Numbers" {0, 1, 2, 3, ...}, depending on the subject. Rational numbers: In mathematics, a rational number is any number that can be expressed as the quotient or fraction p/q of two integers, a numerator p and a non-zero denominator q. Since q may be equal to 1, every integer is a rational number. Irrational numbers: In mathematics, an irrational number is a real number that cannot be expressed as a ratio of integers, i.e. as a fraction. Therefore, irrational numbers, when written as decimal numbers, do not terminate, nor do they repeat. Wesley yep 1. Good work but cite source please. 6. Ignatius Lim (014) (2) Whole Numbers Whole Numbers are numbers which do not have a fraction; an integer. (Example: 1, 2, 3, 4, 5, -1, -2, -3, -4, -5) (3) Natural Numbers Natural Numbers are numbers that are Whole Numbers, however not negative ones. (Example: 1, 2, 3, 4, 5) Source: http://whatis.techtarget.com/definition/natural-number (4) Rational Numbers Rational Numbers are numbers which can be turned into a ratio. (Examples: 5, 1.75, 1.5) Source: http://www.mathsisfun.com/rational-numbers.html (5) Irrational Numbers are numbers which cannot be turned into a ratio, e.g. the number pi. Source: Mr Johari 1. A Distinction effort and quality 7. (2) Whole Numbers The numbers {0, 1, 2, 3, ...} etc. There is no fractional or decimal part. And no negatives. Example: 5, 49 and 980 are all whole numbers. source: https://www.mathsisfun.com/definitions/whole-number.html (3) Natural Numbers The whole numbers from 1 upwards: 1, 2, 3, and so on ...
Or from 0 upwards in some fields of mathematics: 0, 1, 2, 3 and so on ... No negative numbers and no fractions source: https://www.mathsisfun.com/definitions/natural-number.html (4) Rational Numbers A number that can be made by dividing two integers. (Note: integers have no fractions.) The word comes from "ratio". Examples: • 1/2 is a rational number (1 divided by 2, or the ratio of 1 to 2) • 0.75 is a rational number (3/4) • 1 is a rational number (1/1) • 2 is a rational number (2/1) • 2.12 is a rational number (212/100) • −6.6 is a rational number (−66/10) But Pi is not a rational number, it is an "Irrational Number". source: https://www.mathsisfun.com/definitions/rational-number.html (5) Irrational Numbers An irrational number is simply the opposite of a rational number. An irrational number is one that cannot be represented as the ratio of two integers. Example - Pi - √ 2 1. A Distinction effort and quality 2. source for (5) http://www.mathopenref.com/irrational-number.html 3. Love the quality of work shown. 8. (2) Whole Numbers a whole number is a number without a fraction or decimal, and it is not negative. e.g. 1, 5, 17, 22. https://www.mathsisfun.com/definitions/whole-number.html (3) Natural Numbers natural numbers are positive integers (no negative numbers and no fractions), counting numbers e.g. 1, 2, 3, 4, 5... onwards. https://www.mathsisfun.com/definitions/natural-number.html (4) Rational Numbers a rational number is a number that can be written as a simple fraction (i.e. ratio) e.g. 1.5 is a rational number because 1.5 can be written as 3/2 http://www.mathsisfun.com/rational-numbers.html (5) Irrational Numbers an irrational number is a real number that cannot be expressed as a ratio of integers, i.e. as a fraction. e.g. π = 3.14159.... we cannot write down a simple fraction that equals to pi, thus it is an irrational number. https://www.mathsisfun.com/irrational-numbers.html 1. A Distinction effort and quality 9.
Ariel Chia (2) (2) Whole Numbers A whole number is a number that has no decimals, fractions, etc. An example of a whole number is 2. (3) Natural Numbers A natural number is similar to a whole number. An example of a natural number is 3. (4) Rational Numbers A rational number is any number that can be expressed as the quotient or fraction p/q of two integers. An example of a rational number is 8 because it can be written as the fraction 8/1. (5) Irrational Numbers An irrational number is a real number that cannot be expressed as a ratio of integers, i.e. as a fraction. It is the opposite of a rational number. An example of an irrational number is π. source: Wikipedia 1. A Distinction effort and quality 10. whole number- Whole numbers are numbers that have no fraction; an integer Natural numbers- The positive integers (whole numbers), and sometimes zero as well Rational numbers- Any number that can be expressed as the quotient or fraction of 2 integers Irrational number- A real number that cannot be expressed as a ratio of integers, i.e. as a fraction. Therefore, irrational numbers, when written as decimal numbers, do not terminate, nor do they repeat 1. A Distinction effort and quality 11. (2) whole numbers: a number without fractions; an integer. e.g.: 0, 1, 2, 3, 4, 5, ... (and so on) (3) natural numbers: the positive integers (whole numbers) 1, 2, 3, etc., and sometimes zero as well. 0 upwards in some fields of mathematics: 0, 1, 2, 3 and so on ... No negative numbers and no fractions. (4) Rational numbers: In mathematics, a rational number is any number that can be expressed as the quotient or fraction p/q of two integers, a numerator p and a non-zero denominator q. Since q may be equal to 1, every integer is a rational number. ... A real number that is not rational is called irrational. e.g. • 0.75 is a rational number (3/4) (5) irrational numbers: In mathematics, an irrational number is a real number that cannot be expressed as a ratio of integers, i.e. as a fraction.
Therefore, irrational numbers, when written as decimal numbers, do not terminate, nor do they repeat. E.g. π = 3.1415926535897932384626433832795 (and more...) You cannot write down a simple fraction that equals Pi. 1. Good work but cite source please. 12. 1) Whole Numbers are simply the numbers 0, 1, 2, 3, 4, 5, ... (and so on) No Fractions! Examples: 0, 7, 212 and 1023 are all whole numbers (But numbers like ½, 1.1 and 3.5 are not whole numbers.) 2) Natural Numbers "Natural Numbers" can mean either "Counting Numbers" {1, 2, 3, ...}, or "Whole Numbers" {0, 1, 2, 3, ...}, depending on the subject. Counting numbers are Whole Numbers, but without the zero. Because you can't "count" zero. So they are 1, 2, 3, 4, 5, ... (and so on). 3) In mathematics, a rational number is any number that can be expressed as the quotient or fraction p/q of two integers, a numerator p and a non-zero denominator q. Since q may be equal to 1, every integer is a rational number. ... The decimal expansion of an irrational number continues without repeating. Example: 1.5 is a rational number because 1.5 = 3/2 (it can be written as a fraction) 4) In mathematics, an irrational number is a real number that cannot be expressed as a ratio of integers, i.e. as a fraction. Therefore, irrational numbers, when written as decimal numbers, do not terminate, nor do they repeat. Example: π (Pi) is a famous irrational number. You cannot write down a simple fraction that equals Pi. The popular approximation of 22/7 = 3.1428571428571... is close but not accurate. Another clue is that the decimal goes on forever without repeating. sources: wikipedia and https://www.mathsisfun.com/whole-numbers.html 1. A Distinction effort and quality 13. Audrey (05) Whole numbers are the numbers 1, 2, 3, 4, 5, 6, 7, 8, 9 and so on. Whole numbers do not have decimals. Natural numbers are positive integers. It can mean counting numbers or whole numbers depending on the subject.
Natural numbers also do not have decimals Rational numbers are numbers that can be written as a simple fraction or ratio (e.g. 1.5) 1.5 is a rational number because 1.5 = 3/2 (it can be written as a fraction) An Irrational Number is a real number that cannot be written as a simple fraction. Pi is an irrational number because it cannot be written down as a simple fraction although we often estimate it as 22/7 or 3.14. those are close but not accurate. 1. Source: https://www.mathsisfun.com 2. A Distinction effort and quality 14. 2. Whole numbers The numbers {0, 1, 2, 3, ...} etc. There is no fractional or decimal part. And no negatives. https://www.mathsisfun.com/definitions/whole-number.html 3. Natural Numbers No negative numbers and no fractions. https://www.mathsisfun.com/definitions/natural-number.html 4. Rational Numbers A Rational Number is a real number that can be written as a simple fraction (i.e. as a ratio). http://www.mathsisfun.com/rational-numbers.html 5. Irrational Numbers An Irrational Number is a real number that cannot be written as a simple fraction. https://www.mathsisfun.com/irrational-numbers.html 1. A Distinction effort and quality 15. 1) A natural number is a number that occurs commonly and obviously in nature. As such, it is a whole, non-negative number. The set of natural numbers, denoted N, can be defined in either of two ways: N = {0, 1, 2, 3, ...} N = (1, 2, 3, 4, ...} 2) A whole number has no fractional or decimal part. And no negatives. The numbers {0, 1, 2, 3, ...} etc 3) A rational number is a number that can be written as a ratio. That means it can be written as a fraction, in which both the numerator (the number on top) and the denominator (the number on the bottom) are whole numbers. The number 8 is a rational number because it can be written as the fraction 8/1. 4) In mathematics, an irrational number is a real number that cannot be expressed as a ratio of integers, i.e. as a fraction.
Therefore, irrational numbers, when written as decimal numbers, do not terminate, nor do they repeat. Shaun Ho (19) 1. Good but cite source please. 16. Joel Tan (22) (4) Rational numbers A rational number is a number that can be written as a ratio. That means it can be written as a fraction, in which both the numerator (the number on top) and the denominator (the number on the bottom) are whole numbers. Every whole number is a rational number, because any whole number can be written as a fraction. Examples: The number 8 is a rational number because it can be written as the fraction 8/1. Likewise, 3/4 is a rational number because it can be written as a fraction. Even a big, clunky fraction like 7,324,908/56,003,492 is rational, simply because it can be written as a fraction. sources: http://www.factmonster.com/ipka/A0876704.html 1. good but where are the other numbers? 17. How Hong Jie (11) (2) Whole Numbers Also called counting number: one of the positive integers or zero; any of the numbers (0, 1, 2, 3, …). source: dictionary.com (3) Natural Numbers In mathematics, the natural numbers are those used for counting (as in "there are six coins on the table") and ordering (as in "this is the third largest city in the country").
In common language, words used for counting are "cardinal numbers" and words used for ordering are "ordinal numbers". The natural numbers are the basis from which many other number sets may be built by extension: the integers, by including an additive inverse (−n) for each natural number n (and zero, if it is not there already, as its own additive inverse); the rational numbers, by including a multiplicative inverse (1/n) for each nonzero integer n; the real numbers by including with the rationals the (converging) Cauchy sequences of rationals; the complex numbers, by including with the real numbers the unresolved square root of minus one; and so on.[6][7] These chains of extensions make the natural numbers canonically embedded (identified) in the other number systems. Source: en.wikipedia.org (4) Rational Numbers: In mathematics, a rational number is any number that can be expressed as the quotient or fraction p/q of two integers, a numerator p and a non-zero denominator q.[1] Since q may be equal to 1, every integer is a rational number. The set of all rational numbers, often referred to as "the rationals", is usually denoted by a boldface Q (or blackboard bold Q \mathbb {Q}, Unicode ℚ);[2] it was thus denoted in 1895 by Giuseppe Peano after quoziente, Italian for "quotient". Source: en.wikipedia.org (5) Irrational numbers: In mathematics, an irrational number is a real number that cannot be expressed as a ratio of integers, i.e. as a fraction. Therefore, irrational numbers, when written as decimal numbers, do not terminate, nor do they repeat. For example, the number π starts with 3.14159265358979, but no finite number of digits can represent it exactly and it does not end in a segment that repeats itself infinitely often. The same can be said for any irrational number.
As a consequence of Cantor's proof that the real numbers are uncountable and the rationals countable, it follows that almost all real numbers are irrational.[1] When the ratio of lengths of two line segments is irrational, the line segments are also described as being incommensurable, meaning they share no measure in common. Numbers that are irrational include the ratio π of a circle's circumference to its diameter, Euler's number e, the golden ratio φ, and the square root of two;[2][3][4] in fact all square roots of natural numbers, other than of perfect squares, are irrational. Source: en.wikipedia.org 1. A Distinction effort and quality - appreciate it! 18. 2. Also called counting number: one of the positive integers or zero; any of the numbers (0, 1, 2, 3, …). 3. In mathematics, the natural numbers are those used for counting (as in "there are six coins on the table") and ordering (as in "this is the third largest city in the country"). In common language, words used for counting are "cardinal numbers" and words used for ordering are "ordinal numbers". 4. In mathematics, a rational number is any number that can be expressed as the quotient or fraction p/q of two integers, a numerator p and a non-zero denominator q.[1] Since q may be equal to 1, every integer is a rational number. The set of all rational numbers is often referred to as "the rationals". 5. In mathematics, an irrational number is a real number that cannot be expressed as a ratio of integers, i.e. as a fraction. Therefore, irrational numbers, when written as decimal numbers, do not terminate, nor do they repeat. For example, the number π starts with 3.14159265358979, but no finite number of digits can represent it exactly and it does not end in a segment that repeats itself infinitely often. The same can be said for any irrational number. 1. Good but cite source please. 19. Name: Yeo Teng Jun Class: S107 Index: 24 2.
Whole Numbers Whole numbers are the non-negative integers, excluding fractions; they range from 0 to infinity. Example: 5 is a whole number as it is not a fraction and not negative. 3. Natural Numbers Natural numbers are used for counting and range from 1 to infinity. A natural number is a number that occurs commonly and obviously in nature; as such, it is a whole, non-negative number. Example: 5 is a natural number as it is commonly used in daily life. Source: https://en.wikipedia.org/wiki/Natural_number 4. Rational Numbers A rational number is any number that can be expressed as the quotient or fraction p/q of two integers, a numerator p and a non-zero denominator q. Since q may be equal to 1, every integer is a rational number. Example: 1/2 is a rational number as it is a fraction. Source: https://en.wikipedia.org/wiki/Rational_number 5. Irrational Numbers Irrational numbers are real numbers whose decimal expansions never terminate and never repeat. Example: Pi is an irrational number because Pi = 3.1415926535897932384626446... continues with no repeating pattern. (By contrast, 1/3 = 0.333... repeats, so it is rational, not irrational.) 1. A Distinction effort and quality 20. Silas Benaiah Lew ji hin (20) 1) An integer (pronounced IN-tuh-jer) is a whole number (not a fractional number) that can be positive, negative, or zero. Examples of integers are: -5, 1, 5, 8, 97, and 3,043. Examples of numbers that are not integers are: -1.43, 1 3/4, 3.14, .09, and 5,643.1. Source: Wikipedia 2) Whole numbers are numbers with no fractions; non-negative integers. E.g. 1, 5, 678. Source: Wikipedia 3) Natural numbers are the positive integers 1, 2, 3, etc., and sometimes zero. E.g. 1 2 3 4 5 6 7 8 9 10. Source: Wikipedia 4) In mathematics, a rational number is any number that can be expressed as the quotient or fraction p/q of two integers, a numerator p and a non-zero denominator q. Since q may be equal to 1, every integer is a rational number. ...
E.g. 0.4, 0.75, 5, and 4 are rational, since each can be written as a fraction. Source: Wikipedia 5) In mathematics, an irrational number is a real number that cannot be expressed as a ratio of integers, i.e. as a fraction. Therefore, irrational numbers, when written as decimal numbers, do not terminate, nor do they repeat. E.g. √2 = 1.41421356... and π = 3.14159265... Source: Wikipedia 1. A Distinction effort and quality 21. Timothy Luk (23) Irrational numbers: an irrational number is a real number that cannot be expressed as a ratio of integers, i.e. as a fraction. Therefore, irrational numbers, when written as decimal numbers, do not terminate, nor do they repeat. Pi is a good example of an irrational number: π = 3.141592653589793238462643383279502884197169399327980... etc. Source: Wikipedia Rational number: A rational number is a number that can be written as a ratio. That means it can be written as a fraction, in which both the numerator (the number on top) and the denominator (the number on the bottom) are whole numbers. The number 8 is a rational number because it can be written as the fraction 8/1. Source: Fact Monster 1. A Distinction effort and quality 22. Rational numbers: In mathematics, a rational number is any number that can be expressed as the quotient or fraction p/q of two integers, a numerator p and a non-zero denominator q.[1] Since q may be equal to 1, every integer is a rational number. The set of all rational numbers is often referred to as "the rationals". 23. 2. Also called counting number: one of the positive integers or zero; any of the numbers (0, 1, 2, 3, …). 3. In mathematics, the natural numbers are those used for counting (as in "there are six coins on the table") and ordering (as in "this is the third largest city in the country"). In common language, words used for counting are "cardinal numbers" and words used for ordering are "ordinal numbers".
4. In mathematics, a rational number is any number that can be expressed as the quotient or fraction p/q of two integers, a numerator p and a non-zero denominator q.[1] Since q may be equal to 1, every integer is a rational number. The set of all rational numbers is often referred to as "the rationals". 5. In mathematics, an irrational number is a real number that cannot be expressed as a ratio of integers, i.e. as a fraction. Therefore, irrational numbers, when written as decimal numbers, do not terminate, nor do they repeat. For example, the number π starts with 3.14159265358979, but no finite number of digits can represent it exactly and it does not end in a segment that repeats itself infinitely often. The same can be said for any irrational number. 1. good but cite source please. 24. Ryan Lee (13) (2) Whole numbers Whole numbers are like integers, except that they do not include negative numbers. Examples are 0, 1, 2, 3. (3) Natural numbers Natural numbers are the positive integers. Examples are 1, 2, 8. (4) Rational numbers Rational numbers are numbers that can be expressed as a fraction. Examples are 7/11. (5) Irrational numbers Irrational numbers are numbers that cannot be expressed as a fraction. Examples are pi, root 2, root 3 and root 5. 1. good but cite source please. 25. Chua Xing Zi (3) (2) Whole numbers Whole numbers are numbers without fractions and decimals. There are no negatives; all whole numbers are non-negative. Some examples of whole numbers are 5, 49 and 980. (3) Natural Numbers A natural number is a number that occurs commonly and obviously in nature. As such, it is a whole, non-negative number. Some natural numbers are 1, 2, 3, 4, and 5. (4) Rational numbers A rational number is a real number that can be written as a simple fraction or as a ratio. Most numbers we use in everyday life are rational numbers. Some examples of rational numbers are 1.5, 1.75 or even 5. (5) Irrational numbers An irrational number is a real number that cannot be expressed as a ratio of integers. 1.
good but cite source please.
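The definitions collected in the answers above can be checked concretely. The following short Python sketch (not part of any student's answer, using only the standard library) illustrates why every integer and every repeating decimal is rational, while the square root of 2 is not:

```python
from fractions import Fraction
from math import isqrt

# Every integer n is rational: n = n/1.
assert Fraction(8) == Fraction(8, 1)

# 1/3 has a repeating decimal (0.333...), but it is still a ratio
# of integers, hence rational -- not irrational.
third = Fraction(1, 3)
assert third.numerator == 1 and third.denominator == 3
assert third * 3 == 1  # exact arithmetic, no rounding error

# 2 is not a perfect square, which is why sqrt(2) cannot be written
# as a fraction p/q (the classical proof of its irrationality).
assert isqrt(2) ** 2 != 2

print("all checks passed")
```

`Fraction` keeps numerator and denominator as exact integers, so it models "rational number" directly, whereas no `Fraction` value squares to exactly 2.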
https://thuses.com/algebraic-geometry/the-torsion-component-of-the-picard-scheme/
## The torsion component of the Picard scheme This post is a continuation of Sean Cotner’s most recent post [see An example of a non-reduced Picard scheme]. Since writing that post, Bogdan Zavyalov shared some notes of his proving the following strengthened version of the results described there. Main Theorem. Let be a noetherian local ring and let be a finite flat commutative group scheme over . There exists a smooth projective scheme over with geometrically connected -dimensional fibers such that . If the Cartier dual is etale (i.e., is of multiplicative type), then we may take to have -dimensional fibers instead. Moreover, can be taken to be the quotient of a complete intersection under the free action of . A major motivation for the Main Theorem is that it can be used to construct examples of smooth projective schemes over a DVR of equicharacteristic such that the Hodge numbers of the special and generic fibers are not equal; for this phenomenon, see the final section. This answers a question asked in the Stanford Number Theory Learning Seminar in a way not easily findable in the literature. The basic construction of as in the Main Theorem will be very similar to the one given in Cotner’s previous post, but there are a few major simplifications owing to the use of descent techniques, as well as some technical difficulties to overcome coming from the new relative setting. For example, there are some small intermediate arguments with algebraic spaces owing to the fact that certain Picard functors are not obviously representable (though all the relevant ones for us are a posteriori representable). We also remove all mention of Igusa’s theorem from the previous post. Our basic strategy in the general case is as follows. (See the rest of the post for notation and definitions.) 1. Show that any complete intersection of dimension has trivial . 2. Show that if is a -torsor, then the pullback map has kernel , the Cartier dual of . 3. 
Construct a projective space over on which acts, freely outside of a codimension closed subset. 4. Use Bertini’s theorem to slice the quotient by hypersurfaces to obtain a smooth projective scheme of dimension so that the pullback of in is a complete intersection on which acts freely. 5. Conclude that is a -torsor and thus by points 1 and 2. ### 1. Complete intersections and Picard schemes Definition. Let be a scheme and let be a closed subscheme of . We say that is a complete intersection of dimension if it is a flat finitely presented -scheme with fibers of pure dimension which are complete intersections. Lemma 1.1. Let be a complete intersection of dimension over a field and let be an integer. We have for all , , and for all . In particular, is geometrically connected. Moreover, if is smooth and then . Proof. We induct on the codimension of inside . If , then this follows from the familiar computations of the cohomology of projective space. In general, by the definition of a complete intersection there is some complete intersection and hyperplane of degree such that and moreover there is a short exact sequence where is the natural inclusion. Tensoring by and passing to cohomology gives exact sequences By induction, if then the two outside terms vanish. If , then we have the exact sequence If , this shows . If , then again . Since , it follows from the Stein factorization that is geometrically connected. Now suppose that is smooth and . To compute , recall first the Euler exact sequence for -dimensional projective space: From this, we see that . In general, if a smooth is the intersection of hypersurfaces whose degrees sum to , then we look at the conormal exact sequence for the natural inclusion. Passing to cohomology gives the short exact sequence and it suffices by the above to show that . This now follows from the short exact sequence since and . This completes the cohomology calculations. 
Recall the definition of the Picard functor: if is a morphism of schemes then we define to be the fppf sheafification of the functor sending an -scheme to . The formation of this sheaf commutes with base change on S, i.e., for all -schemes . In this generality, this functor is essentially useless, but Grothendieck and Artin proved the following remarkable representability theorems, see FGA Explained, Theorem 9.4.8, and Artin, Algebraization of Formal Moduli: I, Theorem 7.3. Theorem 1.2. (i) (Grothendieck) Suppose that is flat, finitely presented, projective Zariski-locally on , with geometrically integral fibers. Then is represented by a separated, locally finitely presented -scheme. If , then naturally for all -schemes . (ii) (Artin) If is flat, proper, finitely presented, and cohomologically flat in dimension (i.e., holds universally), then is a quasi-separated algebraic space locally of finite presentation over . Algebraic spaces only intervene for us in a rather technical way, as one can see in the proof of Lemma 1.3; ultimately everything we discuss will be a scheme, but to apply geometric methods we will need to know a priori that is an algebraic space. There are two further important notions for us: the identity component and the torsion component . The identity component is defined as the subfunctor of whose set of -points consists of those -points of such that for every there exists an algebraically closed extension of , a connected -scheme , points such that is an extension of , and a -point of such that in and . Note that if is representable then this is the same as the set-theoretic union , where is the identity component of the locally finite type -group scheme . The torsion component is defined as , where denotes the multiplication by map; this is also a subgroup functor of . For our purposes, will not be a very useful functor: in general, when has non-reduced geometric fibers, need not be an open subscheme of , though it is on fibers. 
(We will see examples of this phenomenon later.) However, the morphism of functors is always representable by open immersions when is proper over ; if is projective over then this morphism is also representable by closed immersions. For both of these assertions, see SGA6, Exp. XIII, Thm. 4.7. Lemma 1.3. Let be a complete intersection of dimension over a scheme . If is smooth or then . (In particular, is representable.) Proof. First suppose for an algebraically closed field . By Theorem 1.2(ii) and Lemma 1.1, is a quasi-separated algebraic space locally of finite type over . By Lemma 4.2 of Artin’s paper cited above, it follows in this case that in fact is a -group scheme. (Note that quasi-separatedness is part of the definition of an algebraic space in Artin’s paper.) As , we see that by Lemma 1.1. So it suffices to show that has no torsion. If , then a Lefschetz theorem (see SGA2, Exp. XII, Cor. 3.7, and note that no smoothness hypotheses on are necessary) states that , so indeed is torsion-free in this case. Now suppose is smooth. Recall that for all since is algebraically closed. If does not divide then , so we have By a Lefschetz theorem (see SGA2, Exp. XII, Cor. 3.5, and note again the lack of smoothness hypotheses), we have , so that this Hom set is trivial. To prove that there is no -torsion, note that there is an exact sequence of Zariski sheaves where the first map is given by and the second map is given by . The th power map is clearly injective since is reduced, and the composition of the two maps is evidently . Exactness in the middle is more involved: the idea is to use normality and local freeness of to reduce to proving that for a finitely generated field extension and , implies that is a th power. After checking this for purely transcendental extensions, one uses the existence of a separating transcendence basis to deduce the general case. Since the complete argument is rather long, we omit it. 
Now let be the image of the map in the exact sequence above, so there is a corresponding short exact sequence for and we obtain, passing to cohomology, an exact sequence where the right map is multiplication by . Since by Lemma 1.1, it follows that , and indeed . Now we work in the case of general . Using Lemma 1.1, a simple argument using cohomology and base change shows that is cohomologically flat in dimension , so is an algebraic space by Theorem 1.2(ii) and is also an algebraic space since is representable by open immersions. Since formation of commutes with arbitrary base change, we see that for all by the first paragraph. Thus the identity section is an isomorphism on fibers, and since is trivially flat over itself, the fibral isomorphism theorem (see EGA IV, Cor. 17.9.5) implies that . (Note that the fibral isomorphism criterion holds for a morphism from a scheme to an algebraic space: morphisms may be checked to be isomorphisms after etale base change, so this follows immediately from the fact that algebraic spaces admit etale covers which are schemes and the relative diagonal of an algebraic space is representable.) This completes the proof. Question. The above proof shows that all complete intersections in dimension (smooth or not) have trivial and no -torsion for any prime . Do there exist (non-smooth) complete intersections of dimension in characteristic such that ? Lemma 1.4. Let be a commutative finite locally free group scheme over a scheme and let be an fppf -torsor, where is cohomologically flat in dimension and satisfies the hypotheses of the Theorem. The pullback map has kernel isomorphic to , the Cartier dual of . The same is therefore true of . Proof. We work with the presheaf defined by for all -schemes . There is a pullback map , and we claim that it has kernel . After fppf sheafification, this easily gives the result. 
To prove this claim, it suffices by base change on to show that the kernel of the map can be canonically identified with the group , and we will do this below. Recall that fppf descent for quasicoherent sheaves says that a line bundle on is equivalent to the data of a line bundle on along with an isomorphism , where are the two canonical projections, satisfying the cocycle condition , where is the projection onto the and coordinates. Given a line bundle on , we obtain a datum of this form (a descent datum) via taking and using the canonical isomorphism of functors . So a line bundle on which becomes trivial after pullback to is the same as the datum of an isomorphism satisfying the cocycle condition. Since canonically, this is the same as an automorphism of , i.e., an element of , satisfying the cocycle condition. We will show below that the cocycle condition can be described more concretely in terms of . Since is a -torsor, there is a canonical isomorphism given functorially by . There is also an identification given functorially by . Under these identifications, the maps , , and are identified with maps via Suppose that is such that the cocycle condition holds. Functorially, this means for all and , where ranges over the category of -schemes. Since is cohomologically flat in dimension , we have naturally via pullback of units, so may be regarded as a morphism , and the cocycle condition is precisely saying that is a group homomorphism, i.e., . This completes the proof. ### 2. Group actions Let be a scheme, a finite locally free -group scheme, and a separated -scheme. An action of on is an -morphism such that is a group action for every -scheme . (As usual, this is equivalent to the commutativity of various diagrams like those in the ordinary definition of a group action.) Definition. If is an -scheme and , then the stabilizer of in is the functor sending a -scheme to . The free locus of the action is the functor sending an -scheme to . Lemma 2.1. 
If acts on and for some -scheme , then is representable by a closed -subgroup scheme of , and for any -scheme we have . The functor is represented by an open subscheme of , and for every -scheme . Proof. The claims about base change are simple from the functorial definition, and will be omitted. For the first representability claim, note that there is a Cartesian diagram where denotes the action map . Since is separated, is a closed embedding, and it follows by base change that is a closed -subgroup of , hence in particular a finite -scheme. Now if is any -morphism, we see that there is a Cartesian diagram so that is a finite -group scheme. The morphism is a proper monomorphism, hence a closed embedding, and we obtain the claim. Recall that if is a finite morphism, then the function is an upper semicontinuous function: this follows directly from Nakayama’s lemma. If is moreover an -group scheme then for all (since there exists a section ), and is trivial if and only if the fiber is trivial for all (apply the fibral isomorphism criterion to the identity section). It follows from these considerations applied to that if then is an open subscheme of and represents . This proves the lemma. We will also need the following Theorem proved in SGA3, Exp. V, Thm. 4.1 and Rem. 5.1. With notation as in the first paragraph of this section, recall that a morphism of -schemes is a -torsor if acts on by -automorphisms and the natural map given by is an isomorphism. Theorem 2.2. If is a quasi-projective -scheme and is a finite locally free -group scheme acting on then the ringed space quotient exists as a quasi-projective -scheme, and the natural morphism is finite and open. If acts freely on , then is a -torsor and represents the fppf sheaf quotient for the equivalence relation on defined by the action of . In particular, in this latter case the formation of the quotient commutes with all base change on . 
The relevance of quasi-projectivity in this statement is that it implies that any finite collection of points lying over an affine open of all lie in an affine open of . This permits one, using some nontrivial formal arguments, to reduce to the case that and are affine, in which case one proves that can be formed as the spectrum of the ring of invariants for the action of on . If is an open -stable subscheme then exists as an open subscheme of . If is an open subscheme then also naturally. If acts freely on then many properties of follow from properties of . For example, if is flat, then is flat: in general, elementary commutative algebra arguments show that it is true that flatness may be checked on any faithfully flat cover. Moreover, if is smooth then is also smooth: this follows from the fact that “smooth = locally finite presentation + flat with geometrically regular fibers”, reduction to the Noetherian setting, and Matsumura, Commutative Ring Theory, Thm. 23.7. It is simple to see that properness of is equivalent to that of , so that is projective whenever is projective. ### 3. Construction Let be a noetherian local ring and let be a finite flat commutative group scheme of rank over , as in the Main Theorem. For any integer there is a natural action of on the projective space . The following lemma is established in Raynaud’s paper -torsion du schéma de Picard, paragraph preceding Lemme 4.2.2. Lemma 3.1. The free locus is an open subset of with complementary codimension on fibers. Proof. We offer a very brief sketch of the idea and refer to loc. cit. for a detailed argument. First, pass to geometric fibers to assume that S is the spectrum of an algebraically closed field, say of characteristic . If has nontrivial stabilizer, then it contains a subgroup isomorphic to one of , , or for some prime number . One first shows that the locus fixed by any of these subgroups in has large codimension. 
In general, contains only finitely many subgroups of the form or for , but it may contain infinitely many subgroups of the form and . However, such subgroups are determined by their Lie algebras, so the collection of subgroups of each of these forms is parameterized by some projective space. One shows then that this projective space is of relatively small dimension to obtain the lemma. By Theorem 2.2, the ringed space quotient exists as a projective -scheme, so because is local it is an -closed subscheme of some projective space . By the discussion directly following Theorem 2.2, is a smooth open subscheme of ; its closed complement has fibers of codimension because dimension is insensitive to finite surjective maps. By Bertini’s theorem [see Poonen, Bertini theorems over finite fields, or Gabber, On space filling curves and Albanese varieties for a proof over finite fields], if then there exists hypersurfaces in the special fiber such that is a smooth integral -dimensional closed subscheme of disjoint from the image of . Lift each to a hypersurface in and let . Let be the (schematic) preimage of in . Lemma 3.2. The scheme is -smooth with equidimensional fibers of dimension . Proof. We prove first that is -flat. Note that the special fiber is integral of dimension , and Chevalley’s theorem on upper-semicontinuity of fiber dimension (see EGA IV, 13.1.3 for a proof for general ) implies that in fact every fiber of over has dimension at most . But every fiber of over is integral of dimension , and intersection with a hyperplane can cut the dimension down by at most ; this follows from Krull’s height theorem. Since every zerodivisor in a noetherian local ring is contained in a minimal prime ideal, if is equidimensional and is a nonunit then is a nonzerodivisor if and only if . By induction on , using the above considerations and the fact that has integral fibers, we see that has equidimensional fibers of dimension . 
If each is defined locally on by some function , it follows that is a regular sequence since this may be checked on fibers over . So if is an affine open on which is defined by , there is an exact sequence , where the first map is given by multiplication by . This map is injective after base change to any residue field of (as we noted above), so that is flat over : we see that for all , so flatness follows from Stacks Project, 00M5 and a Zorn’s lemma-style argument. These same considerations show by induction on that is -flat. Now recall that the -smooth locus in is open, and is -smooth at all points of since is -smooth at a point if and only if it is flat over at and is smooth in its fiber. Since is proper, it follows immediately that is smooth over , completing the proof of the Lemma. Now we take in Lemmas 3.1 and 3.2. Note that is cut out of by the preimages of the in , and it is easy to see from Lemma 3.2 that is a complete intersection of dimension . So if then is smooth and projective over with -dimensional geometrically integral fibers and is a complete intersection of dimension . By Lemma 1.3 we see that , and it follows from Lemma 1.4 that . If is etale and , then is smooth, being a -torsor over the smooth , so the same argument shows again that . This completes the proof of the Main Theorem. Question. If is not of multiplicative type, then does there necessarily exist a -dimensional smooth projective with ? (This is related to the previous question: if the answer to this question is “no”, then the construction above will yield a -dimensional complete intersection with nontrivial -torsion in its Picard group.) What is certainly true is that the above argument can be modified in a simple way to show that if is connected over a field then there does exist a -dimensional smooth projective with . ### 4. Jumping Hodge numbers We now give two examples of “pathologies” we can deduce from the Main Theorem. 
Recall that if are integers then the Hodge number of a proper scheme over a field is defined by . Classically (i.e., over the complex numbers), the Hodge numbers satisfy various magical properties: for example, if is smooth and projective then one always has . As mentioned in Cotner’s previous post, taking for a field of characteristic and , we find an example of a variety in characteristic which does not satisfy Hodge symmetry: namely, and . Another magical property of Hodge numbers over (or, more generally, over a field of characteristic ) is that they are constant in smooth projective families; this is proved via analytic methods. In the following two examples, we will see that this fails away from equicharacteristic . Example. Jumping Hodge numbers in mixed characteristic . Let be a DVR of mixed characteristic and , so that the generic fiber is etale and the special fiber is connected. Let be as in the Main Theorem, so is smooth and projective over , and . Recall that for any scheme over a field such that is representable, we have . Thus we have and , i.e., “the Hodge number jumps”. Notice also that has special fiber and generic fiber , so that this is not open in . Example. Jumping Hodge numbers in equicharacteristic . Let be a DVR of equicharacteristic and let be a totally ramified generically separable extension of degree . Let denote the Weil restriction , so that has special fiber and generic fiber . Let denote the kernel of Frobenius on , so that is a finite flat commutative group scheme over with special fiber and generic fiber . So if is the Cartier dual of then we have and . Let be the smooth projective -scheme whose existence is guaranteed by the Main Theorem, so that . So as in the previous section we have and .
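For orientation, the standard definition being invoked can be written out explicitly (a sketch assuming the usual conventions, with X a proper scheme over a field k):

```latex
% Hodge numbers of a proper scheme X over a field k, for integers p, q >= 0:
\[
  h^{p,q}(X) \;=\; \dim_k H^q\!\bigl(X, \Omega^p_{X/k}\bigr),
\]
% and the classical Hodge symmetry over the complex numbers reads
\[
  h^{p,q}(X) \;=\; h^{q,p}(X)
  \qquad \text{for } X \text{ smooth and projective.}
\]
```

In particular, h^{0,1}(X) = dim H^1(X, O_X) computes the tangent space to the Picard scheme at the identity, which is presumably how the Picard scheme enters the Hodge-number examples above.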
http://math.stackexchange.com/questions/76006/convergence-in-distribution-of-product-of-two-sequences-of-random-variables
# Convergence in distribution of product of two sequences of random variables For random variables $\{X_n\}$, $\{Y_n\}$ that converge in distribution to X and Y, what would it mean if $\{(X_n,Y_n)\}$ converges in distribution to (X,Y)? - What is the question? If $(X_n,Y_n)\to(X,Y)$ in distribution, then, for any continuous function $u$, $u(X_n,Y_n)\to u(X,Y)$. For example $u$ can be the projection on some coordinates. So yes, this implies that $X_n\to X$ and that $Y_n\to Y$. –  Did Oct 26 '11 at 6:54 I hope you want to know the meaning of $(X_n,Y_n)$ converges in distribution to $(X,Y)$. As in the one dimensional case, $(X_n,Y_n)$ is said to converge in distribution to $(X,Y)$ if $F_{(X_n,Y_n)}(x,y) \to F_{(X,Y)}(x,y)$ at all points $(x,y)$ where $F_{(X,Y)}$ is continuous, where $F_{(X,Y)}(x,y)=P(X\le x, Y\le y)$.
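To make the definition in the answer concrete, here is a small Monte Carlo sketch (the particular sequences are invented for illustration only): with a single $Z \sim N(0,1)$, set $X_n = (1 + 1/n)Z$ and $Y_n = Z + 1/n$, so that $(X_n, Y_n)$ converges in distribution to $(Z, Z)$, and the empirical joint CDF at a fixed point approaches its limiting value as $n$ grows.

```python
import random

random.seed(0)

def joint_ecdf(pairs, x, y):
    """Empirical joint CDF P(X <= x, Y <= y) from paired samples."""
    hits = sum(1 for (a, b) in pairs if a <= x and b <= y)
    return hits / len(pairs)

# One sample of Z drives both coordinates, so (X_n, Y_n) -> (Z, Z).
z = [random.gauss(0.0, 1.0) for _ in range(100_000)]
x0, y0 = 0.3, 0.7  # a continuity point of the limiting joint CDF

limit = joint_ecdf([(v, v) for v in z], x0, y0)
for n in (2, 10, 1000):
    pairs = [((1 + 1 / n) * v, v + 1 / n) for v in z]
    gap = abs(joint_ecdf(pairs, x0, y0) - limit)
    print(f"n = {n:5d}: |F_n(x0, y0) - F(x0, y0)| = {gap:.4f}")
```

The gap shrinks visibly with $n$. Note that the limiting CDF here is $F_{(Z,Z)}(x,y) = \Phi(\min(x,y))$, which is continuous everywhere, so the defining condition applies at every point $(x, y)$.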
https://www.arxiv-vanity.com/papers/hep-lat/0305025/
# Thermodynamics and In-Medium Hadron Properties from Lattice QCD

F. Karsch and E. Laermann, Fakultät für Physik, Universität Bielefeld, D-33615 Bielefeld, Germany

###### Abstract

Non-perturbative studies of the thermodynamics of strongly interacting elementary particles within the context of lattice regularized QCD are reviewed. After a short introduction to thermal QCD on the lattice we report on the present status of investigations of bulk properties. In particular, we discuss the present knowledge of the phase diagram, including recent developments in QCD at non-zero baryon number density. We continue with the results obtained so far for the transition temperature as well as the temperature dependence of energy density and pressure, and comment on screening and the heavy quark free energies. A major section is devoted to the discussion of thermal modifications of hadron properties, taking special account of recent progress through the use of the maximum entropy method.

## 1 Introduction

Understanding the properties of elementary particles at high temperature and density is one of the major goals of contemporary physics. Through the study of properties of elementary particle matter exposed to such extreme conditions we hope to learn about the equation of state that controlled the evolution of the early universe as well as the structure of compact stars. A large experimental program is devoted to the study of hot and dense matter created in ultrarelativistic heavy ion collisions. Lattice studies of QCD thermodynamics have established a theoretical basis for these experiments by providing quantitative information on the QCD phase transition, the equation of state and many other aspects of QCD thermodynamics. Already 20 years ago lattice calculations first demonstrated that a phase transition in purely gluonic matter exists [1, 2] and that the equation of state of gluonic matter rapidly approaches ideal gas behavior at high temperature [3].
These observables have been of central interest in numerical studies of the thermodynamics of strongly interacting matter ever since. The formalism explored in these studies, its further development and refinement, has been presented in reviews [4], and the steady improvement of numerical results is regularly presented at major conferences [5]. Rather than discussing the broad spectrum of topics approached in lattice studies of QCD thermodynamics, we will concentrate here on basic parameters which are of direct importance for the discussion of experimental searches for the QCD transition to the high temperature and/or density regime, which generally is denoted as the Quark Gluon Plasma (QGP). In our discussion of the QCD phase diagram, the transition temperature and the equation of state we will also emphasize the recent progress made in lattice studies at non-zero baryon number density. A major part of this review, however, is devoted to a discussion of thermal modifications of hadron properties, a topic which is of central importance for the discussion of experimental signatures that can provide evidence for the thermal properties of the QGP as well as those of a dense hadronic gas.

### 1.1 QCD Thermodynamics

A suitable starting point for a discussion of the equilibrium thermodynamics of elementary particles interacting only through the strong force is the QCD partition function represented in terms of a Euclidean path integral. The grand canonical partition function, $Z$, is given as an integral over the fundamental quark ($\bar\psi,\psi$) and gluon ($A_\nu$) fields. In addition to its dependence on volume ($V$), temperature ($T$) and a set of chemical potentials ($\mu_f$), the partition function also depends on the gauge coupling $g$ and on the quark masses $m_f$ for the different quark flavors,

$$Z(V,T,\mu_f)=\int \mathcal{D}A_\nu\,\mathcal{D}\bar\psi\,\mathcal{D}\psi\; {\rm e}^{-S_E(V,T,\mu_f)}\ . \qquad (1)$$

Here the bosonic fields $A_\nu$ and the Grassmann valued fermion fields $\bar\psi,\psi$ obey periodic and anti-periodic boundary conditions in Euclidean time, respectively.
The Euclidean action contains a purely bosonic contribution ($S_G$) expressed in terms of the field strength tensor, $F_{\mu\nu}$, and a fermionic part ($S_F$), which couples the gluon and quark fields through the standard minimal substitution,

$$S_E(V,T,\mu_f) \equiv S_G(V,T)+S_F(V,T,\mu_f)\ , \qquad (2)$$

$$S_G(V,T) = \int_0^{1/T}\!{\rm d}x_4 \int_V {\rm d}^3x\; \tfrac{1}{2}\,{\rm Tr}\, F_{\mu\nu}F_{\mu\nu}\ , \qquad (3)$$

$$S_F(V,T,\mu_f) = \int_0^{1/T}\!{\rm d}x_4 \int_V {\rm d}^3x\; \sum_{f=1}^{n_f} \bar\psi_f \left(\gamma_\mu \left[\partial_\mu - igA_\mu\right] + m_f - \mu_f\gamma_0\right)\psi_f\ . \qquad (4)$$

Basic thermodynamic quantities like the pressure ($p$) and the energy density ($\epsilon$) can then easily be obtained from the logarithm of the partition function,

$$\frac{p}{T^4} = \frac{1}{VT^3}\,\ln Z(T,V,\mu_f)\ , \qquad (5)$$

$$\frac{\epsilon-3p}{T^4} = T\,\frac{{\rm d}}{{\rm d}T}\left(\frac{p}{T^4}\right)\bigg|_{{\rm fixed}\ \mu/T}\ . \qquad (6)$$

Moreover, the phase structure of QCD can be studied by analyzing observables which at least in certain limits are suitable order parameters for chiral symmetry restoration ($\langle\bar\psi\psi\rangle$) or deconfinement ($\langle L\rangle$), i.e. the chiral condensate and its derivative, the chiral susceptibility,

$$\langle\bar\psi_f\psi_f\rangle=\frac{T}{V}\,\frac{\partial}{\partial m_f}\ln Z(T,V,\mu_f)\ ,\qquad \chi_m=\frac{T}{V}\sum_{f=1}^{n_f}\frac{\partial^2}{\partial m_f^2}\ln Z(T,V,\mu_f)\ , \qquad (7)$$

as well as the expectation value of the trace of the Polyakov loop (a more formal definition, which leads to a well defined Polyakov loop expectation value also in the continuum limit, is given in Section 5),

$$\langle L\rangle=\frac{1}{V}\Big\langle\sum_{\vec x}{\rm Tr}\, L(\vec x)\Big\rangle\ , \qquad (8)$$

where the trace is suitably normalized. Here $L(\vec x)$ denotes a closed line integral over gluon fields which represents a static quark source,

$$L(\vec x)={\rm e}^{-\int_0^{1/T}{\rm d}x_0\, A_0(x_0,\vec x)}\ . \qquad (9)$$

We may couple these static sources to a constant external field, $h$, and consider its contribution to the QCD partition function. The corresponding susceptibility is then given by the second derivative with respect to $h$,

$$\chi_L=V\left(\langle L^2\rangle-\langle L\rangle^2\right)\ , \qquad (10)$$

where $h$ has been set to zero again after taking the derivatives.

### 1.2 Lattice formulation of QCD Thermodynamics

The path integral appearing in Eq. 1 is regularized by introducing a four dimensional space-time lattice of size $N_\sigma^3\times N_\tau$ with a lattice spacing $a$. Volume and temperature are then related to the number of points in space and time directions, respectively,

$$V=(N_\sigma a)^3\ ,\qquad T^{-1}=N_\tau a\ , \qquad (11)$$

and also chemical potentials and quark masses are expressed in units of the lattice spacing, $\mu a$, $m_f a$.
The lattice spacing then does not appear explicitly as a parameter in the discretized version of the QCD partition function. It is controlled through the bare couplings of the QCD Lagrangian, i.e. the gauge coupling $g^2$ (in the lattice community it is customary to introduce instead the coupling $\beta=6/g^2$) and the quark masses $m_f a$. At least on the naive level the discretization of the fermion sector is straightforwardly achieved by replacing derivatives by finite differences and by introducing dimensionless, Grassmann valued fermion fields. Enforced by the requirement of gauge invariance, the discretization of the gauge sector, however, is a bit more involved. Here one introduces link variables $U_\mu(x)$ which are associated with the link between two neighboring sites of the lattice and describe the parallel transport of the field from site $x$ to the neighboring site $x+\hat\mu a$ in the direction $\hat\mu$,

$$U_\mu(x)=\mathcal{P}\exp\left(ig\int_x^{x+\hat\mu a}{\rm d}x_\mu\, A_\mu(x)\right)\ , \qquad (12)$$

where $\mathcal{P}$ denotes path ordering. The link variables are elements of the SU(3) color group. We will not elaborate here any further on details of the lattice formulation, which is described in excellent textbooks [6, 7]. In recent years much progress has also been made in constructing improved discretization schemes for the gluonic as well as fermionic sector of the QCD Lagrangian, which have greatly reduced the systematic errors introduced by the finite lattice cut-off. This improvement program, and also the improvement of numerical algorithms, is reviewed regularly at lattice conferences and is discussed in review articles [8]. It has been crucial also for the calculation of thermodynamic observables and their extrapolation to the continuum limit. The impressive accuracy that can be achieved through a systematic analysis of finite cut-off effects on the one hand and the use of improved actions on the other hand is apparent in the heavy quark mass limit of QCD, i.e. in the SU(3) gauge theory.
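To make Eqs. 9 and 12 concrete, here is a minimal toy sketch (my own illustration, not code from the review): on the lattice the Polyakov loop at a fixed spatial site is the ordered product of the $N_\tau$ temporal link variables. For brevity the links below are random SU(2) matrices rather than the SU(3) matrices of QCD.

```python
import numpy as np

# Toy sketch (not from the review): Polyakov loop as the ordered product
# of N_tau temporal link variables at one spatial site, with SU(2) links
# standing in for the SU(3) links of QCD.
rng = np.random.default_rng(1)

def random_su2():
    # parametrize U = a0*1 + i a.sigma with a0^2 + |a|^2 = 1
    a = rng.standard_normal(4)
    a /= np.linalg.norm(a)
    return np.array([[a[0] + 1j * a[3], a[2] + 1j * a[1]],
                     [-a[2] + 1j * a[1], a[0] - 1j * a[3]]])

n_tau = 8
links = [random_su2() for _ in range(n_tau)]  # U_0(x_0, x), x_0 = 0..N_tau-1

# path-ordered product: later times multiply from the left
L = np.eye(2, dtype=complex)
for U in links:
    L = U @ L

# the product is again an SU(2) element: unitary, det L = 1, so |Tr L| <= 2
print(abs(np.trace(L)) <= 2.0 + 1e-12)   # True
```

The same structure carries over to SU(3), where the trace of the ordered product is complex and transforms non-trivially under the center symmetry probed by $\langle L\rangle$.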
In particular, for bulk thermodynamic quantities like the pressure or energy density the discretization errors can become large, as these quantities depend on the fourth power of the inverse lattice spacing, $T^4=1/(N_\tau a)^4$. Nonetheless, improved discretization schemes lead to a large reduction of discretization errors and allow a safe extrapolation from lattices with small temporal extent $N_\tau$ to the continuum limit ($a\to 0$) at fixed temperature $T$. This is illustrated in Fig. 1, which shows results for the pressure, $p(N_\tau)$, in a non-interacting gluon gas calculated on a lattice with finite temporal extent in comparison to the Stefan-Boltzmann value, $p_{SB}$, obtained in the continuum. A similarly strong reduction of cut-off effects can be achieved in the fermion sector when using improved fermion actions [4, 8].

## 2 The QCD phase diagram

There are many indications that strongly interacting matter at high temperatures/densities behaves fundamentally differently from that at low temperatures/densities. On the one hand it is expected that the copious production of resonances, which will occur in a hot interacting hadron gas, sets a natural limit to hadronic physics described in terms of ordinary hadronic states. On the other hand it is the property of asymptotic freedom which suggests that the basic constituents of QCD, quarks and gluons, should propagate almost freely at high temperatures/densities. This suggests that the non-perturbative features characterizing hadronic physics at low energies, confinement and chiral symmetry breaking, get lost when strongly interacting matter is heated up or compressed. Although the early discussions of the phase structure of hadronic matter, e.g. based on model equations of state, seemed to suggest that the occurrence of a phase transition is a generic feature of strong interaction physics, we know now that this is not at all the case.
Whether the transition from the low temperature/density hadronic regime to the high temperature/density regime is related to a true singular behavior of the partition function, leading to a first or second order phase transition, or whether it is just a more or less rapid crossover phenomenon, crucially depends on the parameters of the QCD Lagrangian, i.e. the number of light or even massless quark flavors. In particular, whether the transition in QCD with values of quark masses as they are realized in nature is a true phase transition or not is a detailed quantitative question. The answer to it most likely is also dependent on the physical boundary conditions, i.e. whether the transition takes place at vanishing or non-vanishing values of net baryon number density (chemical potential). At vanishing or small values of the chemical potential the crucial control parameter is the strange quark mass. Quite general symmetry considerations and universality arguments for thermal phase transitions [13] suggest that in the limit of light up and down quark masses the transition is first order if also the strange quark mass is small enough, whereas it is just a continuous crossover for strange quark masses larger than a certain critical value, $m_s^{\rm crit}$. This critical value also depends on the value of the light quark masses $m_{u,d}$. At $m_s=m_s^{\rm crit}$ the transition would be a second order phase transition with well defined universal properties, which are those of the three dimensional Ising model [14]. The dependence of the QCD phase transition on the number of flavors and the values of the quark masses has been analyzed in quite some detail for vanishing values of the chemical potential. The current understanding of this phase diagram is summarized in Fig. 2. Its basic features have been established in numerical calculations and are in agreement with the general considerations based on universality and the symmetries of the QCD Lagrangian.
Nonetheless, the numerical values for critical temperatures and critical masses given in this figure should just be taken as indicative; not all of them have been determined with sufficient accuracy. It is, however, obvious that there is a broad range of quark mass values, or equivalently pseudo-scalar meson masses, where the transition to the high temperature regime is not a phase transition but a continuous crossover. The regions of first order transitions at large and small quark masses are separated from this crossover regime through lines of second order transitions which belong to the universality class of the 3d Ising model [15]. The chiral critical line in the small quark mass region has been analyzed in quite some detail [15, 16, 17]. In the case of three degenerate quark masses (3-flavor QCD) it has been verified that the critical point belongs to the Ising universality class [15]. The critical quark mass at this point, however, is not a universal quantity and is not yet known to a satisfying precision: it corresponds to a pseudo-scalar mass varying between about 290 MeV in a standard discretization and about 200 MeV in an improved one. This result can be extended to the case of non-degenerate quark masses, $m_l\ne m_s$, where $m_l$ denotes the value of the two light quark masses, which are taken to be degenerate. The slope of the chiral critical line in the vicinity of the three flavor point can be obtained from a Taylor expansion,

$$m_s^{\rm crit}=m^{\rm crit}+2\,(m^{\rm crit}-m_l)\ . \qquad (13)$$

This relation has been verified explicitly in a numerical calculation and is consistent with all existing studies of the chiral transition line [15, 16, 17]. A collection of results taken from Ref. [18] is shown in Fig. 3. If we assume that the linear extrapolation of the chiral critical line, Eq. 13, holds down to light quark mass values which correspond to the physical pion mass, we can estimate the ratio of strange to light quark masses at this point; the numerical value depends on the action chosen.
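The linear extrapolation in Eq. 13 is easy to tabulate; a minimal sketch (the mass scale below is an arbitrary placeholder, not lattice data):

```python
# Hedged numerical reading of Eq. 13 (placeholder units, not lattice data):
# along the chiral critical line the strange quark mass m_s^crit grows as
# the light quark mass m_l is lowered below the three-flavor point m^crit.
def ms_crit(m_l, m_crit):
    """Linear extrapolation of the chiral critical line, Eq. 13."""
    return m_crit + 2.0 * (m_crit - m_l)

m_crit = 1.0                     # three-flavor critical mass, arbitrary units
print(ms_crit(m_crit, m_crit))   # degenerate point: m_s^crit = m^crit -> 1.0
print(ms_crit(0.0, m_crit))      # light-quark chiral limit: 3 m^crit -> 3.0
```

The factor of 2 encodes the slope of the critical line near the three-flavor point; whether the linear form survives down to physical light quark masses is, as the text notes, an assumption.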
This clearly is too small to put the physical QCD point into the first order regime of the phase diagram. At vanishing chemical potential the QCD transition thus most likely is a continuous crossover. The phase diagram shown in Fig. 2 for vanishing quark chemical potential (the baryon chemical potential is given by $\mu_B=3\mu$), $\mu=0$, can be extended to non-zero values of $\mu$. The chiral critical line discussed above then is part of a critical surface. For small values of the chemical potential it can be analyzed using a Taylor expansion of the fermion determinant in terms of $\mu/T$. A preliminary analysis performed for three flavor QCD [18] yields

$$\left(\frac{m^{\rm crit}}{T}\right)_{\!\mu}=\left(\frac{m^{\rm crit}}{T}\right)_{\!0}+0.21(6)\left(\frac{\mu}{T}\right)^2+\mathcal{O}\!\left((\mu/T)^4\right)\ ,\qquad n_f=3\ . \qquad (14)$$

The positive slope suggests that at the physical value of the strange quark mass a first order transition can occur for values of the chemical potential larger than a critical value determined from Eq. 14 (see Fig. 4). A first direct determination of the chiral critical point in QCD has been performed by Fodor and Katz [19]. As in the case of a Taylor expansion of the QCD partition function in terms of the chemical potential, which we have discussed so far, they have also performed numerical calculations at $\mu=0$. However, they then evaluate the exact ratios of fermion determinants calculated at $\mu=0$ and $\mu\ne 0$ and use these in the statistical reweighting of gauge field configurations (this approach is well known under the name of Ferrenberg-Swendsen reweighting [21] and finds widespread application in statistical physics as well as in lattice QCD calculations) to extend the calculation to $\mu>0$. They find an explicit location for the chiral critical point [20]. Although this result still has to be established more firmly through calculations with lighter up and down quark masses, on larger lattices and with improved actions, it also suggests that the dependence of the transition temperature on $\mu$ is rather weak.
An alternative approach to numerical calculations at non-zero values of the baryon number density ($\mu>0$) is based on numerical calculations with an imaginary chemical potential [45, 23, 24], $\mu=i\mu_I$. This allows straightforward numerical calculations for $\mu=i\mu_I$. The results obtained in this way then have to be analytically continued to real values of $\mu$. For the small values of the chemical potential which have been analyzed so far, they turn out to be consistent with the results obtained with reweighting techniques. Finally we want to note that the discussion of the dependence of the QCD phase diagram on the baryon chemical potential in general is a multi-parameter problem. As pointed out in Eq. 1, one generally has to deal with independent chemical potentials $\mu_f$ for each of the different quark flavors, which control the corresponding quark number densities,

$$d_f=\frac{1}{V}\,z_f\frac{\partial}{\partial z_f}\ln Z(V,T,\mu_f)\ ,\qquad z_f={\rm e}^{\mu_f/T}\ . \qquad (15)$$

The chemical potentials thus are constrained by boundary conditions which are enforced on the quark number densities by a given physical system. In the case of dense matter created in a heavy ion collision this is due to the requirement that the overall strangeness content of the system vanishes. In a dense star, on the other hand, weak decays will lead to an equilibration of strangeness, and it is the charge neutrality of a star which controls the relative magnitude of strange and light quark chemical potentials [25]. So far we have discussed a particular corner of the QCD phase diagram, i.e. the regime of small values of the chemical potential. In fact, the numerical techniques used today to simulate QCD with non-vanishing chemical potential seem to be reliable only for small $\mu/T$ and high temperature. Fortunately this is the regime which also seems to be accessible experimentally. On the other hand, there is the entire regime of low temperature and high density which is of great importance in the astrophysical context. In this regime it is expected that interesting new phases of dense matter exist.
At low temperature and asymptotically large baryon number densities asymptotic freedom ensures that the force between quarks will be dominated by one-gluon exchange. As this leads to an attractive force among quarks, it seems unavoidable that the naive perturbative ground state is unstable at large baryon number density and that the formation of a quark-quark condensate leads to a new color-superconducting phase of cold dense matter. As this part of the phase diagram is not in the focus of our following discussion, we refer the interested reader to the many excellent reviews which appeared in recent years [26]. In Fig. 5 we just show a sketch of the phase diagram of QCD for a realistic quark mass spectrum. The transition to the plasma phase is expected to be a continuous crossover (dashed line) for small values of the quark chemical potential and turns into a first order transition only beyond a critical value for the chemical potential or baryon number density. The details of this phase diagram, in particular in the low temperature regime, are, however, largely unexplored in lattice calculations.

## 3 The transition temperature

Although a detailed analysis of the QCD phase diagram clearly is of importance on its own, it is the thermodynamics in the region of small or vanishing chemical potential which is of most importance for a discussion of thermodynamic conditions created in heavy ion collisions at RHIC or LHC. Current estimates of baryon number densities obtained at central rapidities in heavy ion collisions at RHIC [27] suggest that the baryon chemical potential is below 50 MeV, corresponding to a quark chemical potential of about 15 MeV. We will see in the following that at these small values of $\mu$ the critical temperature is expected to change by less than a percent from that at $\mu=0$. For this reason, and of course also because much more quantitative results are known in this case, we will in the following focus our discussion on the thermodynamics at $\mu=0$.
As discussed in the previous section, the transition to the high temperature phase is continuous and non-singular for a large range of quark masses. Nonetheless, for all quark masses the transition proceeds rather rapidly in a small temperature interval. A definite transition point thus can be identified, for instance through the location of maxima of the susceptibilities of the chiral condensate, $\chi_m$ (Eq. 7), or the Polyakov loop, $\chi_L$ (Eq. 10). While the maximum of $\chi_m$ determines the point of maximal slope in the chiral condensate, $\chi_L$ characterizes the change in the long distance behavior of the heavy quark free energy (see Section 5). On a lattice with temporal extent $N_\tau$ and for a given value of the quark mass the susceptibilities define pseudo-critical couplings which are found to coincide within statistical errors. In order to determine the transition temperature one has to fix the lattice spacing through the calculation of an experimentally or phenomenologically known observable. For instance, this can be achieved through the calculation of a hadron mass, $m_H a$, or the string tension, $\sigma a^2$, at zero temperature and the same value of the lattice cut-off, i.e. at the pseudo-critical coupling. This yields $T_c/\sqrt{\sigma}=\left(N_\tau\sqrt{\sigma a^2}\right)^{-1}$ and similarly for a hadron mass. In the pure gauge theory the transition temperature has been analyzed in great detail and the influence of cut-off effects has been examined through calculations on different size lattices and with different actions. From this one finds for the critical temperature of the first order phase transition (this number is a weighted average of the data given in Refs. [28, 29], including a rephrasement of the result given in Ref. [30] using the string model value for the $1/r$ term in the heavy quark potential; we also use $\sqrt{\sigma}=425$ MeV for the string tension to set the scale for $T_c$)
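The scale setting described above can be sketched numerically (the lattice numbers below are illustrative placeholders, not measured values; the conversion assumes $\sqrt{\sigma}\approx 425$ MeV for the string tension):

```python
import math

# Sketch of the scale setting described in the text (illustrative numbers,
# not measured lattice values): at the pseudo-critical coupling one measures
# the string tension in lattice units, sigma*a^2, and converts the temporal
# lattice extent N_tau into a temperature via T_c = 1/(N_tau * a).
def tc_over_sqrt_sigma(n_tau, sigma_a2):
    a_sqrt_sigma = math.sqrt(sigma_a2)   # a * sqrt(sigma), dimensionless
    return 1.0 / (n_tau * a_sqrt_sigma)  # T_c / sqrt(sigma)

# e.g. sigma*a^2 = 0.0625 on an N_tau = 4 lattice gives a*sqrt(sigma) = 0.25
print(tc_over_sqrt_sigma(4, 0.0625))        # -> 1.0
# assuming sqrt(sigma) = 425 MeV then converts this to T_c in MeV
print(tc_over_sqrt_sigma(4, 0.0625) * 425)  # -> 425.0
```

The same dimensionless ratio computed on lattices with different $N_\tau$ is what allows the continuum extrapolation of $T_c$ at fixed physics.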
SU(3) gauge theory:

$$T_c/\sqrt{\sigma}=0.632\pm 0.002\ ,\qquad T_c=(269\pm 1)\,{\rm MeV}\ . \qquad (16)$$

Already the early calculations of the transition temperature in QCD with dynamical quark degrees of freedom [31, 32] indicated that the inclusion of light quarks leads to a significant decrease of the transition temperature. A compilation of newer results [32, 33, 34, 35, 36], which have been obtained using improved lattice regularizations for staggered as well as Wilson fermions, is presented in Fig. 6. The figure only shows results for 2-flavor QCD obtained from calculations with several bare quark mass values. In order to compare calculations performed with different actions, the results are presented in terms of a physical observable, the ratio of pseudo-scalar (pion) and vector (rho) meson masses, $m_{PS}/m_V$. From Fig. 6 it is evident that $T_c/m_V$ drops with increasing ratio $m_{PS}/m_V$, i.e. with increasing quark mass. This is not surprising, as $m_V$, of course, does not take on the physical $\rho$-meson mass value as long as the quark masses used in the calculations are larger than those realized in nature, and the ratio $m_{PS}/m_V$ thus does not attain its physical value (vertical line in Fig. 6). In fact, for large quark masses $m_V$ will continue to increase while $T_c$ will remain finite and eventually approach the value calculated in the pure gauge theory; the ratio $T_c/m_V$ thus has to approach zero in the heavy quark limit. Fig. 6 alone thus does not allow us to quantify the quark mass dependence of $T_c$. A simple percolation picture for the QCD transition would suggest that $T_c$ will increase with increasing quark mass; with increasing quark mass all hadrons will become heavier and it will become more and more difficult to excite these heavy hadronic states. It thus becomes more difficult to create a sufficiently high particle/energy density in the hadronic phase that can trigger a phase (percolation) transition. Such a picture also follows from chiral model calculations [37, 38].
In order to check to what extent this physically well motivated picture finds support in the actual numerical results obtained in lattice calculations of the quark mass dependence of the transition temperature, we should express $T_c$ in units of an observable which itself is not (or only weakly) dependent on the quark mass; the string tension (or also a hadron mass in the valence quark chiral limit, often called the partially quenched limit) seems to be suitable for this purpose. In fact, this is what we have tacitly assumed when converting the critical temperature of the SU(3) gauge theory into physical units, as it has been done in Eq. 16. This assumption is supported by the observation that already in the heavy quark mass limit the string tension calculated in units of quenched hadron masses [39] is in good agreement with values required in QCD phenomenology. To quantify the quark mass dependence of the transition temperature one may express $T_c$ in units of $\sqrt{\sigma}$. This ratio is shown in Fig. 7 as a function of $m_{PS}/\sqrt{\sigma}$. As can be seen, the transition temperature starts deviating from the quenched value below a certain value of $m_{PS}/\sqrt{\sigma}$. We also note that the dependence of $T_c/\sqrt{\sigma}$ on $m_{PS}/\sqrt{\sigma}$ is almost linear in the entire mass interval. Such a behavior might, in fact, be expected for light quarks in the vicinity of a 2nd order chiral transition, where the dependence of the pseudo-critical temperature on the mass of the Goldstone particle follows from the scaling relation

$$T_c(m_\pi)-T_c(0)\sim m_\pi^{2/\beta\delta}\ . \qquad (17)$$

For 2-flavor QCD the critical indices $\beta$ and $\delta$ are expected to belong to the universality class of 3-d, O(4) symmetric spin models, and one thus indeed would expect an exponent $2/\beta\delta$ close to unity. However, this clearly cannot be the origin for the quasi linear behavior, which is observed already for rather large hadron masses and, moreover, seems to be independent of the number of flavors. In fact, unlike in chiral models [37, 38], the dependence of $T_c$ on $m_{PS}$ turns out to be rather weak. The line shown in Fig. 7 is a fit to the 3-flavor data,

$$\left(\frac{T_c}{\sqrt{\sigma}}\right)_{\!m_{PS}/\sqrt{\sigma}}=\left(\frac{T_c}{\sqrt{\sigma}}\right)_{\!0}+0.04(1)\left(\frac{m_{PS}}{\sqrt{\sigma}}\right)\ . \qquad (18)$$
It seems that the transition temperature does not react strongly to changes of the lightest hadron masses. This favors the interpretation that the contributions of heavy resonance masses are equally important for the occurrence of the transition. In fact, this also can explain why the transition still sets in at quite low temperatures even when all hadron masses, including the pseudo-scalars, attain masses of the order of 1 GeV or more. Such an interpretation also is consistent with the weak quark mass dependence of the critical energy density which one finds from the analysis of the QCD equation of state, as we will discuss in the next section. In Fig. 7 we have included results from calculations with 2 and 3 degenerate quark flavors. So far such calculations have mainly been performed with staggered fermions. In this case also a simulation with non-degenerate quarks (a pair of light u,d quarks and a heavier strange quark) has been performed. Unfortunately, the light quarks in this calculation are still too heavy to represent the physical ratio of light u,d quark masses and a heavier strange quark mass. Nonetheless, the results obtained so far suggest that the transition temperature in (2+1)-flavor QCD is close to that of 2-flavor QCD. The 3-flavor theory, on the other hand, leads to consistently smaller values of the critical temperature. Extrapolations of the transition temperatures to the chiral limit gave

2-flavor QCD:
$$T_c=(171\pm 4)\,{\rm MeV}\quad \text{(clover-improved Wilson fermions [Ali01])}$$
$$T_c=(173\pm 8)\,{\rm MeV}\quad \text{(improved staggered fermions [Kar01])}$$

3-flavor QCD:
$$T_c=(154\pm 8)\,{\rm MeV}\quad \text{(improved staggered fermions [Kar01])}$$

Here a zero-temperature observable calculated at the pseudo-critical coupling has been used to set the scale for $T_c$.
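Two small numerical checks on the quark mass dependence discussed above (a sketch: the O(4) exponents are approximate literature values and the fit intercept is a rough placeholder; neither is quoted as a number in this review):

```python
# Check 1: the O(4) scaling exponent of Eq. 17 is itself close to 1,
# i.e. near-linear (exponent values are standard literature estimates).
beta, delta = 0.38, 4.82           # 3d O(4) critical exponents (approximate)
exponent = 2.0 / (beta * delta)    # Eq. 17: T_c(m_pi) - T_c(0) ~ m_pi**exponent
print(exponent)                    # close to 1

# Check 2: the linear fit of Eq. 18 implies only a weak mass dependence
# (the intercept below is an arbitrary placeholder, only the slope 0.04
# is taken from the text).
def tc_fit(m_ps_over_sqrt_sigma, intercept=0.40, slope=0.04):
    return intercept + slope * m_ps_over_sqrt_sigma

print(tc_fit(0.0), tc_fit(5.0))    # modest change over a wide mass range
```

With a slope of only 0.04, even a large change in $m_{PS}/\sqrt{\sigma}$ moves $T_c/\sqrt{\sigma}$ by a small fraction, which is the quantitative content of "does not react strongly".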
Although the agreement between results obtained with Wilson and staggered fermions is striking, one should bear in mind that all these results have been obtained on lattices with small temporal extent, i.e. at rather large lattice spacing. Moreover, there are uncertainties involved in the ansatz used to extrapolate to the chiral limit. We thus estimate that the systematic error on the value of $T_c$ still is of similar magnitude as the purely statistical error quoted above. As mentioned already in the previous section, first studies of the dependence of the transition temperature on the chemical potential have been performed recently, using either a statistical reweighting technique [19, 20, 40] to extrapolate from numerical simulations performed at $\mu=0$ to $\mu>0$, or performing simulations with an imaginary chemical potential [23, 24], the results of which are then analytically continued to real $\mu$. To leading order in $\mu_B/T$ one finds

$$\frac{T_c(\mu)}{T_c(0)}=\begin{cases}1-0.0056(4)\,(\mu_B/T)^2 & \text{(imaginary $\mu$, de Forcrand et al.)}\\ 1-0.0078(38)\,(\mu_B/T)^2 & \text{($\mathcal{O}(\mu^2)$ reweighting, Ejiri et al.)}\end{cases} \qquad (19)$$

These results are consistent with the (2+1)-flavor calculation performed with an exact reweighting algorithm [19, 20]. The result obtained for $T_c(\mu_B)$ in this latter approach is shown in Fig. 8. The dependence of $T_c$ on the chemical potential is rather weak. We stress, however, that these calculations have not yet been performed with sufficiently light up and down quark masses, and a detailed analysis of the quark mass dependence has not yet been performed. The $\mu$-dependence of $T_c$ is expected to become stronger with decreasing quark masses (and, of course, vanishes in the limit of infinite quark masses).

## 4 The equation of state

One of the central goals in studies of the thermodynamics of QCD is, of course, the calculation of basic thermodynamic quantities and their temperature dependence.
In particular, one wants to know the pressure and energy density, which are of fundamental importance when discussing experimental studies of dense matter. Besides, they allow a detailed comparison of different computational schemes, e.g. numerical lattice calculations and analytic approaches in the continuum. At high temperature one generally expects that due to asymptotic freedom these observables show ideal gas behavior and thus are directly proportional to the basic degrees of freedom contributing to thermal properties of the plasma; e.g. the asymptotic behavior of the pressure will be given by the Stefan-Boltzmann law

$$\lim_{T\to\infty}\frac{p}{T^4}=\left(16+10.5\,n_f\right)\frac{\pi^2}{90}\ . \qquad (20)$$

Perturbative calculations [41] of corrections to this asymptotic behavior are, however, badly convergent and suggest that a purely perturbative treatment of bulk thermodynamics is trustworthy only at extremely high temperatures, i.e. several orders of magnitude above the transition temperature to the plasma phase. In analytic approaches one thus has to go beyond perturbation theory, which currently is being attempted by either using hard thermal loop resummation techniques in combination with a variational ansatz [42, 43] or perturbative dimensional reduction combined with numerical simulations of the resulting effective 3-dimensional theory [44, 45]. The lattice calculation of pressure and energy density is based on the standard thermodynamic relations given in Eq. 6. For vanishing chemical potential the free energy density is directly given by the pressure, $p=-f$. As the partition function itself is not directly accessible in a Monte Carlo calculation, one first takes a suitable derivative of the partition function, which yields a calculable expectation value, e.g. the gauge action. After renormalizing this observable by subtracting the zero temperature contribution, it can be integrated again to obtain the difference of free energy densities at two temperatures,

$$\frac{p}{T^4}\bigg|_{T_0}^{T}=\frac{1}{V}\int_{T_0}^{T}{\rm d}t\; \frac{\partial\, t^{-3}\ln Z(t,V)}{\partial t}\ . \qquad (21)$$
The lower integration limit $T_0$ is chosen at low temperatures, so that $p(T_0)$ is small and may be ignored. This easily can be achieved in an SU(3) gauge theory, where the only relevant degrees of freedom at low temperature are glueballs. Even the lightest ones calculated on the lattice have large masses, well above 1 GeV. The free energy density thus is exponentially suppressed up to temperatures close to $T_c$. In QCD with light quarks, however, the dominant contribution to the free energy density comes from pions. In the small quark mass limit $T_0$ also has to be shifted to rather small temperatures. At present, however, numerical calculations are performed with rather heavy quarks, and the pion contribution thus is strongly suppressed below $T_c$. In Fig. 9 we show results for the pressure obtained in calculations with different numbers of flavors [46]. At high temperature the magnitude of $p/T^4$ clearly reflects the change in the number of light degrees of freedom present in the ideal gas limit. When we rescale the pressure by the corresponding ideal gas values, it becomes apparent, however, that the overall pattern of the temperature dependence of $p/p_{SB}$ is quite similar in all cases. The figure also shows that the transition region shifts to smaller temperatures as the number of degrees of freedom is increased. As pointed out in the previous section, such a conclusion, of course, requires the determination of a temperature scale that is common to all QCD-like theories which have a particle content different from that realized in nature. We have determined this temperature scale by assuming that the string tension is flavor and quark mass independent. Other thermodynamic observables can be obtained from the pressure using suitable derivatives. In particular one finds for the energy density,

$$\frac{\epsilon-3p}{T^4}=T\,\frac{{\rm d}}{{\rm d}T}\left(\frac{p}{T^4}\right)\ . \qquad (22)$$

In Fig. 10 we show results for the energy density (this figure is based on data from Ref. [46], obtained for fixed bare quark masses).
The energy density shown does not contain the contribution which is proportional to the quark mass and thus vanishes in the chiral limit. It was obtained from calculations with staggered fermions and different numbers of flavors. Unlike the pressure, the energy density rises rapidly at the transition temperature. Although the results shown in this figure correspond to quark mass values in the crossover region of the QCD phase diagram, the transition clearly proceeds rather rapidly. This has, for instance, also consequences for the velocity of sound, $c_s$, which becomes rather small close to $T_c$. The velocity of sound is shown in Fig. 11. The comparison of results obtained from calculations in the pure SU(3) gauge theory [28] with results obtained in simulations of 2-flavor QCD using Wilson fermions [47] with different values of the quark mass shows that the temperature dependence of $c_s$ is almost independent of the value of the quark mass. Also shown in Fig. 10 is an estimate of the critical energy density at which the transition to the plasma phase sets in. In units of $T_c^4$ the transition takes place at a value of $\epsilon_c/T_c^4$ which should be compared with the corresponding value in the pure SU(3) gauge theory [28]. Although these numbers differ by an order of magnitude, it is rather remarkable that the transition densities expressed in physical units are quite similar in both cases; when moving from large to small quark masses the increase in $\epsilon_c/T_c^4$ is compensated by the decrease in $T_c$. This result thus suggests that the transition to the QGP is controlled by the energy density, i.e. the transition seems to occur when the thermal system reaches a certain “critical” energy density. In fact, this assumption has been used in the past to construct the phase boundary of the QCD phase transition in the $T$–$\mu$ plane. Also at non-vanishing baryon number density the pressure as well as the energy density can be calculated along the same lines outlined above, using the basic thermodynamic relations given in Eq. 6.
Although the statistical errors are still large, a first calculation of the $\mu$-dependence of the transition line indeed suggests that $T_c$ varies only little with increasing $\mu$ [40]. First calculations of the $\mu$-dependence of the pressure in a wider temperature range have recently been performed using the reweighting approach for the standard staggered fermion formulation [49] as well as a Taylor expansion in $\mu/T$ for an improved staggered fermion action [50]. These show that the bulk thermodynamic observables follow a pattern similar to the case of vanishing chemical potential. For instance, the additional contribution to the pressure, $\Delta p\equiv p(T,\mu)-p(T,0)$, rises rapidly at $T_c$ and shows only little temperature variation above the transition. In this temperature regime the dominant contribution to the pressure arises from the term proportional to $(\mu/T)^2$, which also is the dominant contribution in the ideal gas limit as long as $\mu/T$ is small,

$$\left(\frac{p}{T^4}\right)_{\mu}-\left(\frac{p}{T^4}\right)_{\mu=0}=\frac{n_f}{2}\left(\frac{\mu}{T}\right)^2+\frac{n_f}{4\pi^2}\left(\frac{\mu}{T}\right)^4. \qquad (23)$$

Here $(p/T^4)_{\mu=0}$ is given by Eq. 20. It also turns out that at non-zero values of the chemical potential the cut-off effects in bulk thermodynamic observables are of the same size as at $\mu=0$. Further detailed studies of the behavior of the pressure and the energy density thus will require a careful extrapolation to the continuum limit and/or the use of improved gauge and fermion actions.

## 5 Heavy quark free energies

Heavy quark free energies play a central role in our understanding of the QCD phase transition, the properties of the plasma phase and the temperature dependence of the heavy quark potential. The heavy quark free energies [1] are defined through the QCD partition functions of thermal systems containing static quark (anti-quark) sources located at positions $\vec{x}_i$ and $\vec{\bar{x}}_i$,

$$Z^{(n,\bar{n})}(V,T,x,\bar{x}) \equiv \exp\left(-F^{(n,\bar{n})}(V,T,x,\bar{x})/T\right) = \int \mathcal{D}A_\nu \mathcal{D}\bar\psi \mathcal{D}\psi\; e^{-S_E(V,T,\mu_f)}\, \prod_{i=1}^{n}\mathrm{Tr}\,L(\vec{x}_i)\, \prod_{i=1}^{\bar{n}}\mathrm{Tr}\,L^\dagger(\vec{\bar{x}}_i)\;, \qquad (24)$$

where the Polyakov loop, $L(\vec{x})$, has been defined in Eq. 9.
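The ideal-gas $\mu$-dependence of Eq. (23) is easy to evaluate; the sketch below also checks that the quadratic term dominates for small $\mu/T$ (function names are ours):

```python
import math

def delta_p_ideal(mu_over_T, nf):
    """mu-dependent part of the ideal-gas pressure, Eq. (23):
    (p/T^4)_mu - (p/T^4)_0 = nf/2 (mu/T)^2 + nf/(4 pi^2) (mu/T)^4."""
    x = mu_over_T
    return 0.5 * nf * x**2 + nf / (4 * math.pi**2) * x**4

# even at mu/T = 1 the quartic term is only ~5% of the quadratic one,
# illustrating why the (mu/T)^2 contribution dominates at small mu/T
quad = 0.5 * 2 * 1.0**2
quart = 2 / (4 * math.pi**2) * 1.0**4
```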
The expectation value of the product of Polyakov loops gives the difference in free energy due to the presence of the static $q\bar{q}$-sources in a thermal heat bath of quarks and gluons,

$$\left\langle \prod_{i=1}^{n}\mathrm{Tr}\,L(\vec{x}_i) \prod_{i=1}^{\bar{n}}\mathrm{Tr}\,L^\dagger(\vec{\bar{x}}_i)\right\rangle = \frac{Z^{(n,\bar{n})}(V,T,x,\bar{x})}{Z(V,T)} = \exp\left(-\Delta F^{(n,\bar{n})}(V,T,x,\bar{x})/T\right) \equiv \exp\left(-\left(F^{(n,\bar{n})}(V,T,x,\bar{x})-F(V,T)\right)/T\right)\;, \qquad (25)$$

where $Z(V,T)$ is the QCD partition function defined in Eq. 1. In particular, one considers the two point correlation function ($n=\bar{n}=1$),

$$G_L(|\vec{x}-\vec{y}|,T)=\langle \mathrm{Tr}\,L(\vec{x})\,\mathrm{Tr}\,L^\dagger(\vec{y})\rangle, \qquad (26)$$

and the Polyakov loop expectation value, which can be defined through the large distance behavior of $G_L$,

$$\langle L\rangle=\lim_{r\to\infty}\sqrt{G_L(r,T)},\qquad r=|\vec{x}-\vec{y}|. \qquad (27)$$

These observables elucidate the deconfining features of the transition to the high temperature phase of QCD.

### 5.1 Deconfinement order parameter

The Polyakov loop expectation value has been introduced in Section 1 as an order parameter for deconfinement in the heavy quark mass limit of QCD, i.e. the pure SU(3) gauge theory. As in statistical models, e.g. the Ising model, it is sensitive to the spontaneous breaking of a global symmetry of the theory under consideration. In the case of the SU(3) gauge theory this is the global Z(3) centre symmetry [51]. In the presence of light dynamical quarks this symmetry is explicitly broken and, in a strict sense, the Polyakov loop loses its property as an order parameter. Through its relation to the two point correlation function, Eq. 27, it however still determines the free energy of a static quark placed in a thermal heat bath,

$$F_q(T)=-T\ln\langle L\rangle. \qquad (28)$$

In the low temperature, confined regime $\langle L\rangle$ is small and the free energy thus is large. It is infinite only in a pure gauge theory, i.e. for QCD in the limit of infinitely heavy quarks. In the high temperature regime, however, $\langle L\rangle$ becomes large and the free energy decreases rapidly when crossing the transition region. The static quark sources introduced in the QCD partition function through the line integral defined in Eq. 9 also introduce additional ultraviolet divergences, which require a proper renormalization.
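Equations (27) and (28) translate directly into code; the sketch below extracts $\langle L\rangle$ from the large-distance plateau of $G_L$ and converts it into a static quark free energy (the toy plateau values and temperatures are assumptions, not lattice data):

```python
import math

def polyakov_loop(gl_plateau):
    """Eq. (27): <L> = sqrt of the large-distance plateau of G_L(r, T)."""
    return math.sqrt(gl_plateau)

def static_quark_free_energy(L, T):
    """Eq. (28): F_q(T) = -T ln<L>.  Small <L> (confined phase) means a
    large free energy; <L> -> 1 (deconfined) means F_q -> 0."""
    return -T * math.log(L)

F_conf = static_quark_free_energy(polyakov_loop(1e-6), T=0.9)    # large
F_deconf = static_quark_free_energy(polyakov_loop(0.81), T=1.5)  # small
```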
For the lattice regularized Polyakov loop this can be achieved through a renormalization of the temporal gauge link variables $U_0$, i.e.

$$L_{\vec{n}} \equiv \prod_{i=1}^{N_\tau} Z_L(g^2)\, U_0(i,\vec{n})\;. \qquad (29)$$

The renormalization constant $Z_L(g^2)$ can, for instance, be determined by normalizing the two point correlation functions such that the resulting free energy of a color singlet quark anti-quark pair at short distances coincides with the zero temperature heavy quark potential. This also ensures that divergent self energy contributions to the Polyakov loop expectation value, defined by Eq. 27, are removed, and that the heavy quark free energy can be defined unambiguously also in the continuum limit [52]. We stress that it is conceptually appealing to define the renormalization constant for the Polyakov loop in terms of color singlet free energies. Nonetheless, this can also be achieved through the gauge invariant two point correlation functions $G_L(r,T)$, which define so-called color averaged free energies. A quark anti-quark pair placed in a thermal heat bath cannot maintain its relative color orientation. The entire thermal system ($q\bar{q}$-pair plus heat bath) will be colorless, and the $q\bar{q}$-pair can change its orientation in color space when interacting with gluons of the thermal bath. The Polyakov loop correlation function thus has to be considered as a superposition of contributions arising from color singlet ($F_1$) and color octet ($F_8$) contributions to the free energy [1],

$$e^{-F^{(1,1)}(r,T)/T}=\frac{1}{9}\,e^{-F_1(r,T)/T}+\frac{8}{9}\,e^{-F_8(r,T)/T}. \qquad (30)$$

At short distances the repulsive octet term is exponentially suppressed and the contribution from the attractive singlet channel will dominate the heavy quark free energy,

$$\frac{F^{(1,1)}(r,T)}{T} = \frac{F_1(r,T)}{T}-\ln 9 = -\frac{g^2}{3\pi}\,\frac{1}{rT}-\ln 9 \quad\text{for } rT\ll 1, \qquad (31)$$

where the last equality gives the perturbative result obtained from 1-gluon exchange at short distances.
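The singlet–octet decomposition of Eq. (30) and the short-distance limit leading to Eq. (31) can be checked with a few lines (a sketch with made-up free energy values, not lattice results):

```python
import math

def color_averaged_F(F1, F8, T):
    """Eq. (30): exp(-F/T) = (1/9) exp(-F1/T) + (8/9) exp(-F8/T),
    solved for the color averaged free energy F."""
    return -T * math.log(math.exp(-F1 / T) / 9 + 8 * math.exp(-F8 / T) / 9)

# when the octet term is strongly suppressed (F8 >> F1), the color
# averaged free energy approaches F1 + T ln 9, as in Eqs. (31)/(32)
F_avg = color_averaged_F(F1=1.0, F8=50.0, T=1.0)  # ~1 + ln 9 ~ 3.197
```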
When the color singlet free energy is normalized at short distances such that it coincides with the zero temperature heavy quark potential, the corresponding color averaged free energy thus will differ from it by an additive constant,

$$\lim_{r\to 0}\left(F^{(1,1)}(r,T)-F_1(r,T)\right)=T\ln 9 \quad\text{for all } T. \qquad (32)$$

Using this normalization condition, the renormalized Polyakov loop order parameter has been determined for the SU(3) gauge theory. It is shown in Fig. 12. As the deconfinement phase transition is first order in the SU(3) gauge theory, the order parameter is discontinuous at $T_c$. From the discontinuity of $\langle L\rangle$ one finds the corresponding change in the static quark free energy $F_q$. In QCD with light quarks the renormalization program outlined above has not yet been carried out in such detail, as practically all studies of the heavy quark free energy have been performed on rather coarse lattices with a small temporal extent $N_\tau$. Nonetheless, normalizing the free energies obtained in such calculations at the shortest distance presently available to the zero temperature Cornell potential does seem to be a reasonable approximation [53] (see Fig. 13). Also in this case the free energy at $T_c$ takes on a value similar to that in the pure gauge theory.

### 5.2 Heavy quark potential

The change in free energy due to the presence of a static quark anti-quark pair is given by the two point correlation function defined in Eq. 26. In the zero temperature limit the free energies $F(r,T)$ determined from $G_L$ define the heavy quark potential. Also at non-zero temperature the free energies exhibit properties expected from phenomenological discussions of thermal modifications of the heavy quark potential. In the pure SU(3) gauge theory the free energies diverge linearly at large distances in the low temperature, confinement phase. The coefficient of the linear term, the string tension at finite temperature, decreases with increasing temperature and vanishes above $T_c$. In the deconfined phase the free energies exhibit the behavior of a screened potential.
At large distances they approach a constant value at an exponential rate. This exponential approach defines a thermal screening mass. For finite quark masses the free energies show the expected string breaking behavior: at large distances they approach a finite value at all temperatures, i.e. also below $T_c$. This asymptotic value rapidly decreases with decreasing quark mass and increasing temperature [35], as can be seen in Fig. 13, where we show the heavy quark free energy in QCD with three light quark degrees of freedom [35]. In the lower part of Fig. 13 we show the change in free energy needed to separate a quark anti-quark pair from a distance typical for the radius of a heavy quark bound state to infinity. The change in free energy induced by a heavy quark anti-quark pair often is taken to be the heavy quark potential at finite temperature. Of course, this is not quite correct, and care has to be taken when using the free energies in phenomenological discussions of thermal properties of heavy quark bound states. In this case we would like to know the energy needed to break up a color singlet state formed by a $q\bar{q}$-pair. As pointed out in the previous subsection, $F^{(1,1)}(r,T)$ represents a color averaged free energy. It may be decomposed in terms of the corresponding singlet and octet contributions, which has indeed been done for pure gauge theories [54, 55]. However, even then one has to bear in mind that the free energy is the difference between an energy and an entropy contribution, $F=E-TS$. Of course, from a detailed knowledge of the temperature dependence of the free energy at fixed separation of the $q\bar{q}$-pair we can in principle determine the entropy and energy contributions separately,

$$S=-\frac{\partial F(r,T)}{\partial T},\qquad E=-T^2\,\frac{\partial\, F(r,T)/T}{\partial T}. \qquad (33)$$

Such an analysis does, however, not yet exist.
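Equation (33) can be applied to any parametrization of $F(r,T)$ by finite differences; the sketch below uses an arbitrary model free energy (an assumption, not lattice data) and verifies the thermodynamic identity $F=E-TS$:

```python
def entropy_and_energy(F, r, T, dT=1e-5):
    """Eq. (33): S = -dF/dT and E = -T^2 d(F/T)/dT at fixed separation r,
    evaluated by central finite differences."""
    S = -(F(r, T + dT) - F(r, T - dT)) / (2 * dT)
    E = -T**2 * (F(r, T + dT) / (T + dT) - F(r, T - dT) / (T - dT)) / (2 * dT)
    return S, E

# toy model: a "string tension" that melts linearly with temperature
F_model = lambda r, T: r * (1.0 - T)

S, E = entropy_and_energy(F_model, r=2.0, T=0.4)
# consistency check: F = E - T S should hold for any model F(r, T)
```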
## 6 Thermal modifications of hadron properties

### 6.1 QCD phase transition and the hadron spectrum

The non-perturbative structure of the QCD vacuum, in particular confinement and chiral symmetry breaking, determines many qualitative aspects of the hadron mass spectrum. Also the actual values of light and heavy quark bound state masses depend on the values of the chiral condensate and the string tension, respectively. As these quantities change with temperature, and will change drastically close to the QCD transition temperature, it is expected that the properties of hadrons, e.g. their masses and widths, also undergo drastic changes at finite temperature. Investigations into the nature of hadronic excitations are interesting for various reasons in the different temperature regimes. Below $T_c$, temperature dependent modifications of hadron masses and widths may lead to observable consequences in heavy ion collision experiments, e.g. a pre-deconfinement dilepton enhancement due to a broadening of the $\rho$-resonance or a shift of its mass [56]. At temperatures around the transition temperature the (approach to the) restoration of chiral symmetry should reflect itself in degeneracies of the hadron spectrum. First evidence for this has, indeed, been found early on in lattice calculations of hadronic correlation functions [57]. In the plasma phase the very nature of hadronic excitations is a question of interest. Asymptotic freedom leads one to expect that the plasma consists of a gas of almost free quarks and gluons. The lattice results on the equation of state, however, have already shown that this is not yet the case in the interesting temperature region. While quasi-particle models and HTL resummed perturbation theory are able to reproduce the deviations from ideal gas behaviour observed in the equation of state at temperatures a few times $T_c$, it remains to be seen whether they also account properly for hadronic excitations.
This question arises in particular as the generally assumed separation of scales does not hold for temperatures up to quite a few times $T_c$. In the previous section we have discussed modifications of the heavy quark free energy which indicate drastic changes of the heavy quark potential in the QCD plasma phase. As a consequence, depending on the quark mass, heavy quark bound states cannot form above certain “critical” temperatures [58]. Similarly it is expected that the QCD plasma cannot support the formation of light quark bound states. In the pseudo-scalar sector the disappearance of the light pions clearly is related to the vanishing of the chiral condensate at $T_c$. For $T>T_c$ the pions would no longer be (nearly massless) Goldstone bosons. In the plasma phase one thus may expect to find only massive quasi-particle excitations in the pseudo-scalar quantum number channel. However, also below $T_c$ it is expected that the gradual disappearance of the spontaneous breaking of the chiral flavor symmetry, as well as the gradual effective restoration of the axial $U_A(1)$ symmetry, may lead to thermal modifications of hadron properties. While the breaking of the flavor symmetry leads, for instance, to the splitting of scalar and pseudo-scalar particle masses, the $U_A(1)$ symmetry breaking is visible in the splitting of the pion and the $\delta$ meson. More generally, thermal modifications of the hadron spectrum should be discussed in terms of modifications of hadronic spectral functions, which describe the thermal average over transition matrix elements between energy eigenstates ($|n\rangle$, $|m\rangle$) with fixed quantum numbers ($H$) [59],

$$\sigma_H(\omega,\vec{p},T)=\frac{1}{Z(T)}\sum_{n,m}e^{-E_n(\vec{p})/T}\left(1-e^{-\omega/T}\right)\delta\left(\omega+E_n(\vec{p})-E_m(\vec{p})\right)\, \left|\langle n|\hat{J}_H(0)|m\rangle\right|^2. \qquad (34)$$

These spectral functions in turn determine the structure of the Euclidean correlation functions $G_H$. Numerical studies of $G_H$ at finite temperature thus will allow one to learn about thermal modifications of the hadron spectrum, although in practice it is difficult to reconstruct the spectral functions themselves.
Here, recent progress has been achieved through the application of the maximum entropy method (MEM). Before describing these developments and presenting recent lattice results, we start in the next subsection with some basic field theoretic background on hadronic correlation functions and their spectral representation.

### 6.2 Spatial and temporal correlation functions, hadronic susceptibilities

#### 6.2.1 Basic field theoretic background

Numerical calculations of hadronic correlation functions are carried out on Euclidean lattices, i.e. one uses the imaginary time formalism. This holds also for zero temperature computations, in which case the limit $T\to 0$ has to be taken. The formalism has been worked out in detail in textbooks [7, 59, 60]; for the readers’ convenience we have collected some formulae in the Appendix. Hadronic correlation functions in coordinate space, $G_H(\tau,\vec{x})$, are defined as

$$G_H(\tau,\vec{x}) = \langle J_H(\tau,\vec{x})\, J_H^\dagger(0,\vec{0})\rangle, \qquad (35)$$

where the hadronic, e.g. mesonic, currents contain an appropriate combination of $\gamma$-matrices, $\Gamma_H$, which fixes the quantum numbers of a meson channel; i.e. $\Gamma_H=1,\ \gamma_5,\ \gamma_\mu,\ \gamma_5\gamma_\mu$ for scalar, pseudo-scalar, vector and pseudo-vector channels, respectively. On the lattice, at zero temperature, one usually studies the temporal correlator at fixed momentum $\vec{p}$. Up to possible subtractions, the correlator is related to the spectral function $\sigma_H$ (see Appendix) by means of

$$G_H^T(\tau,\vec{p})=\int_{0}^{+\infty} dp_0\, \sigma_H(p_0,\vec{p})\, K(p_0,\tau), \qquad (36)$$

where the kernel

$$K(p_0,\tau)=\frac{\cosh\left[p_0\left(\tau-1/2T\right)\right]}{\sinh\left(p_0/2T\right)} \qquad (37)$$

describes the propagation of a single free boson of energy $p_0$. At zero temperature, for large temporal separations, the correlation function is dominated by the exponential decay due to the lightest contribution to the spectral function in a given channel. At finite temperature, studies of the temporal correlator are hampered by the limited extent of the system in the time direction. Therefore most (lattice) analyses have concentrated on spatial correlation functions.
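The kernel of Eq. (37) is nothing but the (rescaled) free bosonic propagator summed over Matsubara frequencies; this standard identity makes a convenient numerical cross-check (a sketch; the function names are ours):

```python
import math

def kernel_closed(p0, tau, T):
    """Eq. (37): K(p0, tau) = cosh[p0 (tau - 1/2T)] / sinh(p0/2T)."""
    return math.cosh(p0 * (tau - 0.5 / T)) / math.sinh(0.5 * p0 / T)

def kernel_matsubara(p0, tau, T, nmax=100000):
    """Same kernel via the bosonic Matsubara sum
    K = 2 p0 T sum_n cos(omega_n tau)/(omega_n^2 + p0^2), omega_n = 2 pi n T
    (a standard identity, used here only as a consistency check)."""
    s = 1.0 / p0**2  # n = 0 term
    for n in range(1, nmax + 1):
        w = 2.0 * math.pi * n * T
        s += 2.0 * math.cos(w * tau) / (w * w + p0 * p0)
    return 2.0 * p0 * T * s
```

The closed form is also manifestly periodic about the midpoint $\tau=1/2T$, which is the periodicity referred to below Eq. (43).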
These depend of course on the same (temperature dependent) spectral density but are different Fourier transforms of it. Projecting onto vanishing transverse momentum and vanishing Matsubara frequency one obtains

$$G_H^S(z)=\int_{-\infty}^{+\infty}\frac{dp_z}{2\pi}\, e^{ip_z z}\int_{-\infty}^{+\infty} dp_0\, \frac{\sigma_H(p_0,\vec{0}_\perp,p_z)}{p_0}\;. \qquad (38)$$

In addition it is quite common in lattice calculations to analyze hadronic susceptibilities, which are given by the space-time integral over the Euclidean correlation functions,

$$\chi_H = \int_0^{1/T} d\tau\, G_H^T(\tau,\vec{0}). \qquad (39)$$

The susceptibilities have a particularly simple relation to the spectral function,

$$\chi_H=2\int_0^{\infty} dp_0\, \frac{\sigma_H(p_0,\vec{0})}{p_0}\;. \qquad (40)$$

Unfortunately, these susceptibilities are generally ultraviolet divergent and the integrals defined above should be cut off at some short distance scale. Rather than doing this one can consider a closely related quantity which provides a smooth, exponential cut-off for the ultraviolet part of the spectral function and is given by the thermal correlation function at $\tau=1/2T$,

$$G_H^T(1/2T,\vec{0})=\int_0^{\infty} dp_0\, \frac{\sigma_H(p_0,\vec{0})}{\sinh(p_0/2T)}. \qquad (41)$$

In the case of a free stable boson of mass $M_H$ the spectral function is a pole,

$$\sigma_H(p_0,\vec{p})=|\langle 0|J_H|H(\vec{p})\rangle|^2\, \epsilon(p_0)\, \delta(p_0^2-\vec{p}^{\,2}-M_H^2). \qquad (42)$$

Correspondingly, the imaginary time correlator, projected to vanishing momentum, decreases with the mass (modulo periodicity) as

$$G_H^T(\tau,\vec{0})\sim \frac{1}{2M_H}\,\frac{\cosh\left[M_H\left(\tau-1/2T\right)\right]}{\sinh\left(M_H/2T\right)}. \qquad (43)$$

Likewise, in this case the spatial correlator also decays with the mass,

$$G_H^S(z)\sim \frac{1}{2M_H}\exp(-M_H z). \qquad (44)$$

In this simple case the exponential fall-off of the spatial and temporal correlation functions thus carries the same information on the particle mass. Interactions in a thermal medium, however, are likely to alter the dispersion relation to

$$\omega^2(\vec{p},T)=M_H^2+\vec{p}^{\,2}+\Pi(\vec{p},T) \qquad (45)$$

with the temperature dependent vacuum polarization $\Pi(\vec{p},T)$.
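For the free-boson case the relations above close nicely: integrating the pole correlator of Eq. (43) over $\tau$ as in Eq. (39) gives exactly $1/M_H^2$, which is the $\chi_H\sim 1/M_H^2(T)$ behavior quoted later in Eq. (49). A numerical sketch:

```python
import math

def pole_correlator(tau, M, T):
    """Eq. (43): temporal correlator of a free stable boson of mass M."""
    return math.cosh(M * (tau - 0.5 / T)) / (2.0 * M * math.sinh(0.5 * M / T))

def susceptibility(M, T, n=20000):
    """Eq. (39): chi = integral of G(tau) over 0..1/T, evaluated by the
    trapezoidal rule; for the correlator of Eq. (43) this equals 1/M^2."""
    h = 1.0 / (T * n)
    s = 0.5 * (pole_correlator(0.0, M, T) + pole_correlator(1.0 / T, M, T))
    for i in range(1, n):
        s += pole_correlator(i * h, M, T)
    return s * h

chi = susceptibility(M=3.0, T=1.0)  # ~1/9
```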
In the simplest case one can perhaps assume [61] that the temperature effects can be absorbed into a temperature dependent mass and a coefficient $A(T)$, which might also be temperature dependent and different from 1,

$$\omega^2(\vec{p},T)\simeq M_H^2(T)+A^2(T)\,\vec{p}^{\,2}. \qquad (46)$$

In this case, at zero momentum the temporal correlator will decay with the so-called pole mass $M_H(T)$,

$$G_H^T(\tau,\vec{0})\sim \exp\left(-M_H(T)\,\tau\right), \qquad (47)$$

whereas the spatial correlation function has an exponential fall-off,

$$G_H^S(z)\sim \exp\left(-M_H^{sc}(T)\,z\right), \qquad (48)$$

determined by the screening mass $M_H^{sc}(T)$, which differs from the pole mass if $A(T)\neq 1$. In this simple case also the susceptibility defined in Eq. 39, as well as the central value of the temporal correlation function defined in Eq. 41, are closely related to the pole mass,

$$\chi_H \sim \frac{1}{M_H^2(T)},\qquad G_H^T(\tau=1/2T,\vec{0}) \sim \frac{1}{M_H(T)\sinh\left(M_H(T)/2T\right)}, \qquad (49)$$

respectively. (Note that for the spatial correlation functions the relevant symmetry used to classify states no longer is the SO(3) group of rotations, but a reduced symmetry group, due to the asymmetry between spatial and temporal directions. This leads to non-degeneracies between excitations that would be degenerate at zero temperature [62].) The opposite limit to the case of a free stable boson is reached for two freely propagating quarks contributing to the spectral density. Here, to leading order perturbation theory, the evaluation of the meson correlation function amounts to the evaluation of the self-energy diagram shown in Fig. 14a, in which the internal quark lines represent bare quark propagators [63]. For massless quarks the spectral density in the mesonic channel is then computed from the free quark propagator (see Appendix) as

$$\sigma_H(p_0,\vec{p})=\frac{N_c}{8\pi^2}\left(p_0^2-\vec{p}^{\,2}\right)a_H\left\{\Theta(p_0^2-\vec{p}^{\,2})\,\frac{2T}{p}\ln\frac{\cosh\left(\frac{p_0+p}{4T}\right)}{\cosh\left(\frac{p_0-p}{4T}\right)} +\Theta(\vec{p}^{\,2}-p_0^2)\left[\frac{2T}{p}\ln\frac{\cosh\left(\frac{p+p_0}{4T}\right)}{\cosh\left(\frac{p-p_0}{4T}\right)}-\frac{p_0}{p}\right]\right\}, \qquad (50)$$

where $p=|\vec{p}|$ and $a_H$ depends on the channel analyzed (see Table 1 for some selected values). In the limit of vanishing momentum the spectral density is also known [64] for quarks with non-vanishing masses $m$,

$$\sigma_H(p_0,\vec{0})=\frac{N_c}{8\pi^2}\,p_0^2\,\Theta(p_0^2-4m^2)\tanh\left(\frac{p_0}{4T}\right)\sqrt{1-\left(\frac{2m}{p_0}\right)^2}\,\left[a_H+\left(\frac{2m}{p_0}\right)^2 b_H\right]. \qquad (51)$$
The coefficients $a_H$ and $b_H$ are also given in Table 1. For massless quarks, closed analytic expressions can be given for both the temporal and the spatial correlator, e.g. for the pion [63]

$$G_\pi^{T,free}(\tau,\vec{0})/T^3=\pi N_c\,(1-2\tau T)\,\frac{1+\cos^2(2\pi T\tau)}{\sin^3(2\pi T\tau)}+2N_c\,\frac{\cos(2\pi T\tau)}{\sin^2(2\pi T\tau)}, \qquad (52)$$

$$G_\pi^{S,free}(z) = \frac{N_c T}{4\pi z^2 \sinh(2\pi Tz)}\left[1+2\pi Tz\coth(2\pi Tz)\right] \sim e^{-M_{sc}^{free}\,z}\quad\text{with } M_{sc}^{free}=2\pi T. \qquad (53)$$

In this free field limit the susceptibility $\chi_H$, defined in Eq. 39, is divergent, while the temporal correlator at $\tau=1/2T$, of course, stays finite,

$$G_H^{T,free}(1/2T,\vec{0})/T^3=a_H N_c/3. \qquad (54)$$

It is, however, neither related to the screening mass, nor does this finite value have anything to do with the existence of a pole mass. The above discussion can be extended to the leading order hard thermal loop (HTL) approximation [64], using dressed quark propagators and vertices for the calculation of the self-energy diagrams, as indicated in Fig. 14b. The corresponding quark spectral functions are given in the Appendix. In the interacting case it has been argued [65], from dimensionally reduced QCD, that the simple relation between the screening mass and the lowest Matsubara frequency will be changed, to leading order, into

$$M_{sc}(T)=2\pi T+\frac{C_F}{4\pi}\,\cdots$$
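The free pion correlator of Eq. (52) looks singular at $\tau=1/2T$, but the two divergent pieces cancel there, and the midpoint value reproduces Eq. (54) with $a_\pi=1$, i.e. $N_c/3$. A numerical sketch (the value $a_\pi=1$ for the pseudo-scalar channel is inferred from consistency of Eqs. (52) and (54), since Table 1 is not reproduced here):

```python
import math

def free_pion_temporal(tau, T, Nc=3):
    """Eq. (52): massless free-quark pion temporal correlator, G/T^3."""
    x = 2.0 * math.pi * T * tau
    return (math.pi * Nc * (1.0 - 2.0 * tau * T)
            * (1.0 + math.cos(x) ** 2) / math.sin(x) ** 3
            + 2.0 * Nc * math.cos(x) / math.sin(x) ** 2)

# approaching the midpoint tau -> 1/(2T): the individual terms blow up
# like 1/(tau - 1/2T)^2, but their sum tends to Nc/3 = 1 for Nc = 3
mid = free_pion_temporal(0.499, 1.0)  # ~1.0
```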
http://clay6.com/qa/9747/if-f-x-large-frac-g-x-large-frac-and-h-x-sqrt-3-then-fogoh-8-
# If $f(x)=\frac{x}{x+1}$, $g(x)=\frac{x}{1-x}$ and $h(x)=\sqrt[3]{x}$, then $(f\circ g\circ h)^{-1}(8)= ?$

(A) 1 (B) 2 (C) 8 (D) 512

$g\circ h(x)=\dfrac{\sqrt[3]{x}}{1-\sqrt[3]{x}}$

$f\circ g\circ h(x)=\dfrac{\dfrac{\sqrt[3]{x}}{1-\sqrt[3]{x}}}{\dfrac{\sqrt[3]{x}}{1-\sqrt[3]{x}}+1}=\sqrt[3]{x}$

Let $f\circ g\circ h(x)=y$, so $x=(f\circ g\circ h)^{-1}(y)$. Since $y=\sqrt[3]{x}$, we have $x=y^3$.

$\Rightarrow\:(f\circ g\circ h)^{-1}(8)=8^3=512$, i.e. option (D).

answered May 21, 2013
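The simplification $(f\circ g\circ h)(x)=\sqrt[3]{x}$ can be sanity-checked numerically (a quick sketch, not part of the original solution):

```python
def f(x): return x / (x + 1)
def g(x): return x / (1 - x)
def h(x): return x ** (1.0 / 3.0)

# f(g(h(x))) collapses to the cube root of x, so the inverse of the
# composition is y -> y**3, and (f o g o h)^{-1}(8) = 8**3 = 512
value = f(g(h(512)))  # ~8
```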
https://nigerianscholars.com/past-questions/economics/question/416363/
### Question

If wage rate is less than the average revenue product, the firms would be earning ________

### Options

A) loss
B) super normal profit
C) normal profit
D) higher revenue
http://umj.uran.ru/index.php/umj/announcement/view/7
## VAK

The journal Ural Mathematical Journal is included in the List of peer-reviewed scientific journals in which the main scientific results of dissertations for scientific degrees should be published (as of March 26, 2013).
https://cstheory.stackexchange.com/tags/type-systems/hot?filter=year
# Tag Info

## Hot answers tagged type-systems

6 The discussion in the section surrounding that paragraph in Pierce's book explains why this is so. In particular, consider the definition of "type system" given on the page before: A type system is a tractable syntactic method for proving the absence of certain program behaviors by classifying phrases according to the kinds of values they compute. ...

5 This isn't an excessively deep answer, but you can express a type system based on STLC with prenex polymorphism as a Pure Type System in a quite simple way, using sorts $*_{\mathrm{mono}}$, $*_{\mathrm{poly}}$ and $\square$ along with the axioms $$*_{\mathrm{mono}}, *_{\mathrm{poly}}\ :\ \square$$ and the rules $(*_{\mathrm{mono}},*_{\mathrm{mono}},\ldots)$

5 Apart from what's already written in the slides you linked to, let me describe one possible approach. For studying type inference semantically we need a model in which a term can have many types, or none. This naturally leads to Curry-style typing, i.e., we think of t : A as a relation where both the term t and the type A are meaningful by themselves. (...)

4 Use an auxiliary type of positive natural numbers.

    data positive : Set where
      one : positive
      s0 : positive → positive -- multiply by 2
      s1 : positive → positive -- multiply by 2 and add 1

    data N : Set where
      zero : N
      pos : positive → N

Supplemental: Another option, which I found on my whiteboard today (probably put there by Egbert Rijke months ago)...

4 I don't think this is an actual counter-example. Parametricity implies: $$∀(α : \mathsf{Alg}\ \mathsf{NatF}\ t)\ (g : r → t)\ (x : \mathsf{NatF}\ r).\ α\ [r]\ g\ x = α\ [t]\ (λx.x)\ \ldots$$

3 You should $\alpha$-rename to avoid conflict with the variable names.
That is, you should prove weakening of the form: $\Gamma \vdash (\upsilon y) P$ implies $\Gamma, x : T \vdash (\upsilon y) P$. $\alpha$-equivalence and capture-avoiding substitution are important concepts to understand in type theory: I would recommend studying this concept for the ...

2 It is false that the only well-typed occurrence of Prf has to be of the form Prf(all ...). For example, in the context with a variable p : Prop we can form the type Prf(p), which is not of the stated form. Another possibility is that we have a Prf(t) for some closed term t : Prop which is not of the form all ... but normalizes to it. The purpose of Prf is ...

1 Ali Asaf worked out a hierarchy of universes with explicit coercions (lifting) in A calculus of constructions with explicit subtyping and established a relationship with cumulative universes.
http://bicep.rc.fas.harvard.edu/CMB-S4/analysis_logbook/20191101_dc06_specsmaps/
## DSR Sim Map Set 06 - some spectra and map check plots

### Summary

The first 10 realizations of set 06 (DSR style) have been generated and are transferring to NERSC. The next step is for someone to run foreground cleaning on the high res bands and produce cleaned maps for input to lensing reconstruction. Delensers are then invited to run their best algorithms on the result and provide lensing templates similar to those which were provided here. These can be taken through to power spectra and ML r values similar to what was done here and here.

### Spectra plots for "r bands"

In 20191016_dc06_dsr the setup for sim set 06 was described. Here are some check plots of the first realization. 06b is "Pole deep", 06c is "Pole wide", 06d is "Chile deep". We are using two foreground models: 07 amplitude modulated Gaussian dust+sync, and 09 Vansyngel model. Below are component spectra of the "r bands". A few notes:

• Color order is red-yellow-green-blue, lowest to highest frequency.
• In all the columns we see that $$\ell<30$$ has been cut out of all components to simulate loss of these modes to timestream filtering.
• Noise is plotted as $$C_\ell$$ so we can see the white + $$1/f$$ form which is being assumed. Higher ell knee for 20GHz since this is assumed to be on LAT.
• In the lLCDM column we can see the beam size getting smaller from 30 to 270GHz, with the 20 GHz (LAT) beam size similar to 270 GHz (SAT). Since these are simple anafast spectra we see E to B mixing causing an apparent excess over the lensing B at low $$\ell$$.
• In the 07 amplitude modulated Gaussian dust+sync column we can see the foreground getting lower with increasing frequency and then higher again.
• In the 09 Vansyngel model column we see a big bump at low $$\ell$$. This is an artifact caused by applying the $$\ell<30$$ cut to a galactic map which has a very bright galactic plane.
Even cutting out the plane with an $$l<10$$ deg cut before going to harmonic space there is still aliased power at the cutoff which spreads over the whole sky. At high latitude it dominates over the actual high latitude structure which is in the model. We can see this in the maps below. Suggestions for how to reduce or elliminate this problem are welcome. • In all cases each hit pattern has spectra taken using its "natural" mask - just the hit pattern itself. For "Chile deep" one would throw out the high foreground parts in reality. Fig 1: "Pole deep" Fig 2: "Pole wide" Fig 3: "Chile deep" ### Spectra Plots for Delensing Bands There are six high res delensing bands: Fig 1: "Pole deep" Fig 2: "Pole wide" Fig 3: "Chile deep" ### Map Plots For what they are worth here are some map plots - plotting such small maps leads to heavy pixel aliasing. All have $$\pm$$100$$\mu$$K for T and $$\pm$$5$$\mu$$K for Q/U. Fig 4: Common full sky components: LCDM and the two foreground models Fig 5: Noise for the three hit patterns Fig 6: Combined for the three hit patterns
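The "white + $$1/f$$" noise form mentioned in the notes is commonly parameterized as $$N_\ell = N_{\rm white}\,[1 + (\ell_{\rm knee}/\ell)^\gamma]$$; a minimal Python sketch under that assumption (the parameterization and all numbers here are illustrative, not the actual set 06 values):

```python
def noise_cl(ell, n_white, ell_knee, gamma):
    # White + 1/f noise spectrum: flat at high ell, rising as
    # (ell_knee/ell)^gamma below the knee.
    return n_white * (1.0 + (ell_knee / ell) ** gamma)

# Illustrative comparison only: a higher knee (as assumed here for the
# 20 GHz LAT band) means more low-ell noise for the same white level.
sat_like = [noise_cl(l, n_white=1e-5, ell_knee=50.0, gamma=2.0) for l in range(30, 301)]
lat_like = [noise_cl(l, n_white=1e-5, ell_knee=200.0, gamma=2.0) for l in range(30, 301)]
```

At the knee the spectrum is exactly twice the white level, which is a quick sanity check on any chosen parameterization.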
http://mathhelpforum.com/business-math/141671-differentiating-portfolio-help.html
## Differentiating Portfolio Help!!!

I have the portfolio: $\Pi = V(S_1,S_2,\ldots,t) - \sum_{i=1}^{N}\Delta_i S_i$. I need to differentiate this so I have $d\Pi$, but I am unsure how to do this because of the summation in the equation. Any help is much appreciated. Thanks
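For reference, the standard delta-hedging treatment (this is a sketch of the usual Black-Scholes argument, assuming the holdings $\Delta_i$ are held constant over the time step and the $S_i$ follow Itô processes; it is not from the original thread) takes the differential term by term:

```latex
d\Pi = dV - \sum_{i=1}^{N} \Delta_i \, dS_i,
\qquad
dV = \frac{\partial V}{\partial t}\,dt
   + \sum_{i=1}^{N} \frac{\partial V}{\partial S_i}\,dS_i
   + \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}
     \frac{\partial^2 V}{\partial S_i \partial S_j}\,dS_i\,dS_j
```

Choosing $\Delta_i = \partial V/\partial S_i$ then cancels the $dS_i$ terms, which is the usual hedging step.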
http://www.maa.org/press/periodicals/loci/joma/design-of-a-thrilling-roller-coaster-module-c-design-of-one-drop-polynomial-function
Design of a Thrilling Roller Coaster - Module C. Design of One Drop: Polynomial Function

Author(s): Patricia W. Hammer, Jessica A. King, and Steve Hammer

In this module, you will model one drop of a coaster by marking the peak and valley of the drop and then fitting (in height and slope) a cubic polynomial to the marked points. Once you have determined the function, you will then calculate the thrill of the single drop. We provide a downloadable Maple worksheet with commands and explanations.

I. Getting Started

Click the button at the right to open the MAPLE worksheet cubiccoaster1hill.mws. If you are given a choice, you should save the file to your hard drive, then navigate to your hard drive and open the file from there.

In the MAPLE worksheet, position your cursor anywhere in the line [ > restart: and press Enter. Pressing the Enter key executes the MAPLE code on the current line. The MAPLE restart command will clear all MAPLE variables. It is important to do this whenever you start a new MAPLE project.

Now resize your MAPLE and Internet Explorer windows so that you can see them both, side-by-side. Click in either window to make it the active window. Your screen should look something like the figure at the right.

II. Data Points

First, carefully work through this module using the sample peak and valley points already entered in the Maple worksheet. Then, use your recorded peak and valley data points collected from Colossus (Module A, page 2).

- Enter the x coordinates of your peak point and valley point using the list syntax ( [x1,x2] ) for the xdata variable.
- Enter the y coordinates of your peak point and valley point using the list syntax ( [y1,y2] ) for the ydata variable.
- Enter the slope conditions for your peak point and for your valley point using the list syntax ( [s1,s2] ) for the slopes variable.

III. Connecting Cubic Polynomial

Now that you have entered the x coordinates, y coordinates, and slope conditions, you can work through the Maple worksheet by pressing the Enter key on your computer to execute the Maple commands. In this section, the Maple commands will determine a cubic polynomial that fits the given peak and valley points. A close examination of the commands shows that Maple determines the unknown coefficients by solving a system of 4 equations [two conditions at each of the two (peak and valley) points] in 4 unknowns. Maple shows a plot of the cubic polynomial function. Does this match your coaster hill?

IV. Calculation of the Angle of Steepest Descent

Now we must determine the steepest point on the curve (i.e., the coaster drop). In other words, we must determine the minimum value of the derivative on the x interval (determined by the peak and valley points). In order to work with a positive-valued function, we rephrase this as determining the maximum value of the absolute value of the derivative on the x interval. How do we maximize |f'| on a closed interval? We determine critical points of f' and then compare function values of f' at critical points and endpoints. The Maple commands calculate and then graph f'(x). Then, the critical points of f' are found by solving f''(x) = 0 on the restricted x interval. Finally, we evaluate f'(x) at all critical points and endpoints and choose the maximum absolute value.

Questions:

- What is the x coordinate of the point of steepest descent?
- What is the relation between the point of steepest descent and the peak and valley points?
- Is this relation true for all functions?
- How does the slope at the steepest point compare to your previous work from module A?

To determine the angle of steepest descent, we must convert slope measurement into angle measurement. Using a right triangle, we see that the radian measure of the angle of steepest descent is given by the arctangent of the slope.

V. Safety Restrictions and Thrill Factor

In this section, we determine safety of the coaster based on the radian measure of the angle of steepest descent. We also calculate the thrill of the drop based on the definition (see page 1 or page 2).

VI. Observation and Generalization

Repeat using collected data points (from Module A, page 2) for the single drop of the Steel Dragon. Keeping in mind the coaster restrictions, experiment with several different peak and valley combinations. Keep a record of your results.

Patricia W. Hammer, Jessica A. King, and Steve Hammer, "Design of a Thrilling Roller Coaster - Module C. Design of One Drop: Polynomial Function," Convergence (February 2005)

JOMA: Journal of Online Mathematics and its Applications
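The worksheet's steps (solve a 4-by-4 system for the cubic's coefficients, find where f''(x) = 0, then take the arctangent of the steepest slope) can be sketched outside Maple too. This is a rough Python equivalent using made-up peak/valley numbers, not the Colossus data; the thrill formula itself is defined on page 1 of the module, so only the angle of steepest descent is computed here:

```python
import math

def solve_linear(A, b):
    # Gaussian elimination with partial pivoting for a small dense system.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_cubic(x1, y1, s1, x2, y2, s2):
    # f(x) = a x^3 + b x^2 + c x + d, matching height and slope at both points.
    A = [
        [x1**3, x1**2, x1, 1.0],    # f(x1) = y1
        [x2**3, x2**2, x2, 1.0],    # f(x2) = y2
        [3*x1**2, 2*x1, 1.0, 0.0],  # f'(x1) = s1
        [3*x2**2, 2*x2, 1.0, 0.0],  # f'(x2) = s2
    ]
    return solve_linear(A, [y1, y2, s1, s2])

def steepest_descent(a, b, c, x1, x2):
    # f''(x) = 6 a x + 2 b = 0 gives the lone critical point of f'.
    xs = -b / (3 * a)
    candidates = [x for x in (x1, x2, xs) if min(x1, x2) <= x <= max(x1, x2)]
    slopes = [3*a*x**2 + 2*b*x + c for x in candidates]
    i = max(range(len(slopes)), key=lambda k: abs(slopes[k]))
    return candidates[i], slopes[i]

# Hypothetical drop: peak at (0 m, 30 m), valley at (50 m, 0 m), level track at both.
a, b, c, d = fit_cubic(0.0, 30.0, 0.0, 50.0, 0.0, 0.0)
x_steep, slope = steepest_descent(a, b, c, 0.0, 50.0)
angle = math.atan(abs(slope))  # radian measure of the angle of steepest descent
```

With zero slopes at both endpoints the steepest point lands exactly halfway between peak and valley, which answers the module's second question for this case.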
http://physicsfromtheedge.blogspot.com/2015/10/explaining-mihsc-with-schematics.html?showComment=1446068726637
I've suggested (& published in 21 journal papers) a new theory called quantised inertia (or MiHsC) that assumes that inertia is caused by relativistic horizons damping quantum fields. It predicts galaxy rotation, cosmic acceleration & the emdrive without any dark stuff or adjustment. My Plymouth University webpage is here, I've written a book called Physics from the Edge and I'm on twitter as @memcculloch

## Tuesday, 27 October 2015

### MiHsC with horizons, no waves.

Here are some schematics to show how MiHsC explains inertia, for the first time, in a mechanistic way, and also the observed cosmic acceleration. This explanation is equivalent to my previous explanations using Unruh waves fitting into the cosmic horizon, but uses Rindler and cosmic horizons only, no waves, for simplicity.

Imagine a spaceship in deep space (the black central object, below). It sees a spherical cosmic horizon, where objects are diverging away from it at the speed of light (the dot-dashed line). This line is an information horizon, so the people on the spaceship can know nothing about what lies behind it. This horizon produces Unruh radiation (orange arrows) that hits the ship from all directions.

The spaceship is firing its engines (red flames) and accelerates to the right (black arrow), so information from a certain distance to the left can never catch up with the spacecraft, and a Rindler information horizon appears behind it to the left. MiHsC says that the Rindler horizon damps the Unruh radiation from the left (the orange arrows disappear), so more radiation hits from the right and a leftwards force appears that opposes the rightwards acceleration. This predicts inertial mass (McCulloch, 2013), which has never before been explained, only assumed.

Now imagine the spaceship starts to run out of fuel, so that its acceleration rightwards decreases (smaller red jet, smaller black arrow, see below).
Now the Rindler horizon moves further away (the distance to it is c^2/a, where c is the speed of light and a is the acceleration). Now a bit more Unruh radiation can arrive from the left, so the net Unruh radiation imbalance and the inertial force are weaker, to mirror the lower acceleration.

Now the engine dies completely, and you would expect there to be no acceleration at all. The Rindler horizon is just about to retreat behind the cosmic horizon, but before it does the ship feels Unruh radiation pressure almost equally from all directions, so its inertial mass starts to collapse. As its inertia collapses the spacecraft becomes suddenly very sensitive to any external force, including from the gravitating black dot in the bottom right of the picture, so it is now accelerated towards that (the lower inertia makes the gravitational attraction seem stronger than expected; as an aside, this fixes the galaxy rotation problem without the need for dark matter) and a new Rindler horizon appears near the top of the picture to produce an Unruh field that opposes the acceleration.

It turns out that in order for the Rindler horizon to be disallowed from retreating behind the cosmic horizon, there is a minimum acceleration allowed in MiHsC, which is 2c^2/Theta, where Theta is the Hubble diameter (the width of the observable cosmos). This acceleration is similar in size to the recently observed cosmic acceleration, and explains it without the need for dark energy.

References

McCulloch, M.E., 2013. Inertia from an asymmetric Casimir effect, EPL, 101, 59001. http://arxiv.org/abs/1302.2775

Czeko said...
Best explanation so far.

Alain Coetmeur said...
Interesting. I am thinking about the way matter interacts with Unruh radiation so as to behave like mass... If I understand well, if radiation interacts with matter, is it because of the Higgs boson? And is the sensitivity to the Higgs what creates the various particle masses?
Does Unruh radiation interacting with the Higgs boson and the particle maybe change the pattern, creating a Casimir effect? Is the Higgs mechanism compatible with MiHsC inertia, or do they compete?

Mike McCulloch said...
Czeko: Thanks. I'm working on a sheltering model & I can get to Newton, but oddly I seem to lose a dimension on the way!.. Can't say more yet..

Mike McCulloch said...
Alain: The Higgs mechanism is only responsible for electron & quark mass, only 0.1% of known mass, so it is negligible.

Ryan Pavlick said...
Losing a dimension makes sense in light of the holographic principle: https://en.wikipedia.org/wiki/Holographic_principle#Energy.2C_matter.2C_and_information_equivalence

ZeroIsEverything said...
On the street. Police officer: "Sir, is this your dimension?!" Mike: "Why, thank you very much. I didn't really miss it. You can keep it, and have a nice day."

Analytic D said...
I went on a 16 mile hike in September and something about the solitude and the motion led me to thinking about an MiHsC related sheltering model for hours as I wore out my legs. I'm convinced that gravity is due to particle pairwise sheltering. Two massive particles at distance r will block respective waves from the other's horizon that have a node at its partner, causing an equal pressure pushing them together. Obviously, r/2 will have more nodes and thus more pressure. But this is only a linear relationship. The inverse square law comes into play with gravity because QM implies that the number of long waves increases with the surface area at a given radial distance; however, linearly fewer wavelengths will have nodes as that wavelength increases, generating an inverse square relationship for sheltering. And the empirical m*m term is due to the round robin of sheltering, since particle A1 shelters B1 and B2, as does A2 shelter B1 and B2, for four pairs of sheltering to produce the force, which matches the Newtonian description. B1 sheltering B2 produces no net force towards A and vice versa.
This has some implications for high acceleration away from large bodies, as well as providing an explicit mechanism for mutual acceleration (I didn't have a mental image of that until now). For example, like you've said, a ring with mass rotating fast enough would experience no gravity from bodies above/below the axis of rotation. I have a feeling, though, that Mike's derivation is a bit off from my incredibly rough picture here.

Mike McCulloch said...
Ryan: Indeed, you're right and I'm hoping to explain the lost dimension w/ the holographic principle. The alternative is that gravity just can't be captured this way.. We'll see!

Mike McCulloch said...
ZeroIsEverything: :) Your joke didn't fall flat.

ZeroIsEverything said...
Mike: If you lose a dimension and spacetime degenerates into e.g. a 2D holographic surface, I guess it's flatlander time for everyone then? ;)

Mike McCulloch said...
Analytic D: Thanks for sharing this. I could do with going on a long hike as well. Fresh air, open moors, lots of time to think: 'Zen and the Art of Motorcycle Maintenance', 'Not all who wander are lost', etc. I've been trying to get gravity into the MiHsC framework three ways: 1) using sheltering, 2) from the uncertainty principle involving interactions between Planck masses (published) and 3) from changes in horizon areas. You're suggesting a mix of 1 and the interaction part of option 2, which is the way to go: IMO the answer will use all three, but the maths seems to be suggesting something further than my intuition can go at the moment. I can get G in F=GMm/r^2 to within half a percent, but as I said it falls flat because I lose a dimension. I need that long walk..

qraal said...
Maybe spacetime is 5-D and we're on a 3-sphere 'surface' of a 4-space?

Czeko said...
Odd question: what generates Unruh radiation?

Mike McCulloch said...
Czeko: The standard explanation of Unruh radiation is that a Rindler horizon splits up ever-forming zpf virtual particle pairs so the unpaired particles become real, but I'm moving towards an informational understanding and I can derive the usual formula like this: when a Rindler horizon forms it reduces the uncertainty in position Delta_x, so by Heisenberg Delta_p must increase. Since E=pc, energy is produced. Energy from information loss. See Appendix C of my book (the appendices are available for free as a pdf on the World Scientific webpage for my book).

qraal said...
Interesting preprint from Matt Walker & Abraham Loeb, which might add clues to the nature of "dark mass/energy" and support the Inertial Horizon view: http://arxiv.org/abs/1401.1146

Is the Universe Simpler than LCDM?
Matthew G. Walker, Abraham Loeb (Submitted on 6 Jan 2014 (v1), last revised 28 May 2014 (this version, v2))

In the standard cosmological model, the Universe consists mainly of two invisible substances: vacuum energy with constant mass-density rho_v = Lambda/(8 pi G) (where Lambda is a 'cosmological constant' originally proposed by Einstein and G is Newton's gravitational constant) and cold dark matter (CDM) with mass density that is currently rho_{DM,0} ~ 0.3 rho_v. This 'LCDM' model has the virtue of simplicity, enabling straightforward calculation of the formation and evolution of cosmic structure against the backdrop of cosmic expansion. Here we review apparent discrepancies with observations on small galactic scales, which LCDM must attribute to complexity in the baryon physics of galaxy formation. Yet galaxies exhibit structural scaling relations that evoke simplicity, presenting a clear challenge for formation models.
In particular, tracers of gravitational potentials dominated by dark matter show a correlation between orbital size, R, and velocity, V, that can be expressed most simply as a characteristic acceleration, a_DM ~ 1 km^2 s^-2 pc^-1 ≈ 3 x 10^-9 cm s^-2 ≈ 0.2 c sqrt(G rho_v), perhaps motivating efforts to find a link between localized and global manifestations of the Universe's dark components.

Czeko said...
@mike Is the principle of locality broken in your example?

Mike McCulloch said...
Czeko: The example does need non-locality, but this doesn't bother relativity because the selection or deselection of Unruh waves by the horizon can be achieved at the phase velocity. Unruh waves, infinite in extent and each of constant frequency, cannot carry information, so their phase velocities can be greater than c.

Czeko said...
Someone was very concerned by this very point and threw your theory in the bin because of that. Is there a formal formulation explaining why the locality principle is broken and in which way it doesn't matter at all?

Zephir said...
The explanation of inertia with radiation pressure of Unruh waves is somewhat recursive by itself, as it depends on the inertia of the Unruh waves themselves. And how large/distant is the Rindler horizon in comparison to the cosmic horizon? The Unruh radiation propagates at the speed of light, so no inertia induced by its shielding can be a momentary effect.
http://physicsfromtheedge.blogspot.cz/2015/10/explaining-mihsc-with-schematics.html
https://i.imgur.com/OKTzJma.jpg

Mike McCulloch said...
Zephir: There is no relativistic light speed limit for monochromatic waves (as Unruh waves are) since such waves carry no information.

Zephir said...
/* There is no relativistic light speed limit for monochromatic waves (as Unruh waves are) since such waves carry no information. */
In relativity the Unruh radiation is essentially black body radiation: it is neither monochromatic nor superluminal.
Filtering white light down to monochromatic doesn't make it superluminal.
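As a quick numerical check of the two horizon scales discussed in the post, here is a small sketch; the Hubble constant (~70 km/s/Mpc) and hence the Hubble diameter Theta = 2c/H0 are assumed values used only for illustration:

```python
import math

C = 2.998e8              # speed of light, m/s
H0 = 70e3 / 3.0857e22    # assumed Hubble constant ~70 km/s/Mpc, converted to 1/s

def rindler_distance(a):
    # Distance to the Rindler horizon behind a body with acceleration a: c^2/a.
    return C**2 / a

# Hubble diameter Theta ~ 2c/H0, and the MiHsC minimum acceleration 2c^2/Theta,
# which algebraically reduces to c*H0.
theta = 2 * C / H0
a_min = 2 * C**2 / theta

d_1g = rindler_distance(9.81)  # roughly one light-year for a 1 g acceleration
```

The resulting a_min is a few times 10^-10 m/s^2, the order of magnitude the post compares to the observed cosmic acceleration.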
https://www.physicsforums.com/threads/range-calculations.894734/
# Range Calculations

### 1. baba_944 (Nov 25, 2016)

1. The problem statement, all variables and given/known data

Here's a similar problem to the one in my book (I don't want to post the one in my book as I don't want to cheat): The gravitational acceleration on Mars is 3.719 [m/s^2]. The gravitational acceleration on Earth (excluding air friction and other factors) is 9.81. The initial velocity is given in polar coordinates: 30<45°. How far will a ball go on Earth compared to on Mars? Use range calculations.

2. Relevant equations

I want to say this is it: R = vi^2/g * sin2(theta)

3. The attempt at a solution

I did the following: convert polar coordinates to rectangular coordinates:

x = 22cos(90), y = 22sin(90)
x = 0, y = 22
R = (0,22)^2/9.81 * sin2(theta)
R = (0,22)^2/3.71 * sin2(theta)

That's all I got and that doesn't seem right.

Last edited: Nov 25, 2016

### 2. Staff: Mentor (Nov 25, 2016)

What direction is implied if the polar angle is 90°? What range can you expect for such a projectile?

Edit: Ah. I see that you've changed the angle and magnitude.

### 3. PeroK (Nov 25, 2016)

Are you sure you mean polar coordinates? That doesn't make much sense. Do you mean you are given an initial velocity and an initial angle?

### 4. baba_944 (Nov 25, 2016)

I'm an idiot. Hold up, let me put the polar coordinates I have in my book here. 90 degrees doesn't make sense as it's horizontal/vertical. EDIT: Yes, I mean polar coordinates. Apologies for the polar coordinate format. If you want, I can take a screenshot of my book. Now I'm unsure. It looks like polar coordinates to me.

### 5. Staff: Mentor (Nov 25, 2016)

Do whatever you need to do to show/describe the scenario.

### 6. baba_944 (Nov 25, 2016)

I'm uploading a screenshot from my book. Sorry I didn't do this before; I thought you guys could understand from my own example. Plus I want to solve it on my own after getting help.

### 7. haruspex (Nov 25, 2016)

I think baba means that if you view the launch in the vertical plane and express the initial velocity vector in polars then you get magnitude 30 (m/s?) and angle 45 degrees.

### 8. baba_944 (Nov 25, 2016)

Here's the photo from the book: https://i.sli.mg/I2aK7f.jpg

I'm teaching myself physics, so is it OK to ask you guys some questions every now and again?

### 9. Staff: Mentor (Nov 25, 2016)

Certainly. As long as you follow the posting rules you can ask homework or homework-like questions here. If you just need to discuss something conceptually you might start a discussion in one of the technical forums.

Regarding the problem posted in this thread, you should make sure that you always attach units to any values that are specified or results that you present. A bare number is often meaningless without the associated units.

So we now understand that you are launching a projectile at an angle of 45° with a launch speed of 30 m/s, and doing so in two scenarios: one where the acceleration due to gravity is 9.83 m/s^2 and one where it is 9.76 m/s^2. What results does your range equation give you for those two scenarios?

Edit: Fixed launch speed. I thought I saw 40 m/s, now it seems to be 30 m/s. I await developments.

### 10. haruspex (Nov 25, 2016)

Sure. You only quoted an equation for range. That equation involves doubling the angle. You did not quote equations for converting from polar to Cartesian. Those do not double the angle. You can either work from first principles, using the correct initial horizontal and vertical velocities, or just apply the range equation. Since you are trying to teach yourself, I suggest you would find it most fruitful to work from first principles, with unknown angle and initial velocity, and derive the range equation for yourself.

Where did you get 22 from? In your question statement you mentioned 30. In the image from the book it is not legible, but it looks like either 10 or 40.

### 11. PeroK (Nov 25, 2016)

Your question about Mars is better. That one from your book is very silly. Huascaran is the highest mountain in Peru and is not very flat on top!

### 12. baba_944 (Nov 25, 2016)

OK, so just for clarification's sake (dgs = degrees): 30<45dgs, where 30 = initial velocity (to be squared) and 45 = projectile's angle (plug in for theta).

NH: R = (30)^2 [m/s]/9.76 * sin2(45)
NP: R = (30)^2 [m/s]/9.83 * sin2(45)

sin2(theta) means two things, I think:
1: times by two, or squared
2: Trigonometric identity: sin(theta) * cos(theta)

Going with the former:
NH: 130.41 [m/s] (I rounded to the nearest tenth).
NP: 129.48 [m/s]

So NH in Peru has the greater distance compared to the North Pole.

### 13. Staff: Mentor (Nov 25, 2016)

For a bit more clarity, the range equation is: $R = \frac{v^2}{g} \sin(2\theta)$. The "2" multiplies the angle, and is part of the sine function argument.

Note that if you click on the $\Sigma$ icon in the edit panel header, a menu of special symbols and Greek letters is made available. You can select special characters from that menu to use in your post. Items such as θ and the degree symbol ° are there.

### 14. baba_944 (Nov 25, 2016)

Thank you. So are my calculations accurate?

### 15. Staff: Mentor (Nov 25, 2016)

No, you took the sine of 45° and then multiplied the result by 2. The angle itself needs to be doubled before you take the sine.
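The corrected computation for the original Earth-vs-Mars version of the problem (v = 30 m/s at 45°, with the 2 multiplying the angle inside the sine) can be sketched as:

```python
import math

def projectile_range(v, theta_deg, g):
    # Level-ground range R = v^2/g * sin(2*theta); note the angle is
    # doubled before taking the sine.
    return v**2 / g * math.sin(math.radians(2 * theta_deg))

r_earth = projectile_range(30.0, 45.0, 9.81)   # ~91.7 m
r_mars = projectile_range(30.0, 45.0, 3.719)   # ~242.0 m
```

With sin(90°) = 1 at a 45° launch, the ranges differ only by the ratio of the two gravitational accelerations, so the ball goes about 2.6 times farther on Mars.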
http://www.mathworks.com/help/pde/ug/initial-conditions.html?requestedDomain=www.mathworks.com&nocookie=true
## Solve PDEs with Initial Conditions

Note: THIS PAGE DESCRIBES THE LEGACY WORKFLOW. New features might not be compatible with the legacy workflow. For the corresponding step in the recommended workflow, see Set Initial Conditions.

### What Are Initial Conditions?

The term initial conditions has two meanings:

- For the `parabolic` and `hyperbolic` solvers, the initial condition `u0` is the solution u at the initial time. You must specify the initial condition for these solvers. Pass the initial condition in the first argument or arguments: `u = parabolic(u0,...` or `u = hyperbolic(u0,ut0,...`. For the `hyperbolic` solver, you must also specify `ut0`, which is the value of the derivative of u with respect to time at the initial time. `ut0` has the same form as `u0`.
- For nonlinear elliptic problems, the initial condition `u0` is a guess or approximation of the solution u at the initial iteration of the `pdenonlin` nonlinear solver. You pass `u0` in the `'U0'` name-value pair: `u = pdenonlin(b,p,e,t,c,a,f,'U0',u0)`. If you do not specify initial conditions, `pdenonlin` uses the zero function for the initial iteration.

### Constant Initial Conditions

You can specify initial conditions as a constant by passing a scalar or character vector.

- For scalar problems or systems of equations, give a scalar as the initial condition. For example, set `u0` to `5` for an initial condition of 5 in every component.
- For systems of N equations, give a character vector initial condition with N rows. For example, if there are N = 3 equations, you can give initial conditions `u0` = `char('3','-3','0')`.

### Initial Conditions in Character Form

You can specify text expressions for the initial conditions. The initial conditions are functions of x and y alone, and, for 3-D problems, z.
The text expressions represent vectors at nodal points, so use `.*` for multiplication, `./` for division, and `.^` for exponentiation. For example, if you have an initial condition

$u(x,y) = \frac{x y \cos(x)}{1 + x^2 + y^2}$

then you can use this expression for the initial condition:

`'x.*y.*cos(x)./(1 + x.^2 + y.^2)'`

For a system of N > 1 equations, use a text array with one row for each component, such as

```
char('x.^2 + 5*cos(x.*y)',...
     'tanh(x.*y)./(1 + z.^2)')
```

### Initial Conditions at Mesh Nodes

Pass `u0` as a column vector of values at the mesh nodes. The nodes are either `model.Mesh.Nodes`, or the `p` data from `initmesh` or `meshToPet`. See Mesh Data.

Tip: For reliability, the initial conditions and boundary conditions should be consistent.

The size of the column vector `u0` depends on the number of equations, N, and on the number of nodes in the mesh, `Np`.

- For scalar u, specify a column vector of length `Np`. The value of element `k` corresponds to the node `p(k)`.
- For a system of N equations, specify a column vector of N*`Np` elements. The first `Np` elements contain the values of component 1, where the value of element `k` corresponds to node `p(k)`. The next `Np` points contain the values of component 2, etc.

It can be convenient to first represent the initial conditions `u0` as an `Np`-by-`N` matrix, where the first column contains entries for component 1, the second column contains entries for component 2, etc. The final representation of the initial conditions is `u0(:)`.

For example, suppose you have a function `myfun(x,y)` that calculates the value of the initial condition `u0(x,y)` as a row vector of length N for a 2-D problem. Suppose that `p` is the usual mesh node data (see Mesh Data). Compute the initial conditions for all mesh nodes `p`.
```% Assume N and p exist; N = 1 for a scalar problem np = size(p,2); % Number of mesh points u0 = zeros(np,N); % Allocate initial matrix for k = 1:np x = p(1,k); y = p(2,k); u0(k,:) = myfun(x,y); % Fill in row k end u0 = u0(:); % Convert to column form``` Specify `u0` as the initial condition.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9640933871269226, "perplexity": 631.0089613393708}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719079.39/warc/CC-MAIN-20161020183839-00556-ip-10-171-6-4.ec2.internal.warc.gz"}
https://nuit-blanche.blogspot.com/2015/10/1-regularized-neural-networks-are.html
## Monday, October 19, 2015

### ℓ1-regularized Neural Networks are Improperly Learnable in Polynomial Time

Here is a very interesting find:

Theorem 3 presents a more general result, showing that any activation function that is sigmoid-like or ReLU-like leads to computational hardness, even if the loss function ℓ is convex.

and from the conclusion:

Although the recursive kernel method doesn't outperform the LeNet5 model, the experiment demonstrates that it does learn better predictors than fully connected neural networks such as the multi-layer perceptron. The LeNet5 architecture encodes prior knowledge about digit recognition via the convolution and pooling operations; thus its performance is better than the generic architectures.

ℓ1-regularized Neural Networks are Improperly Learnable in Polynomial Time
by Yuchen Zhang, Jason D. Lee, Michael I. Jordan

We study the improper learning of multi-layer neural networks. Suppose that the neural network to be learned has k hidden layers and that the ℓ1-norm of the incoming weights of any neuron is bounded by L. We present a kernel-based method, such that with probability at least 1 − δ, it learns a predictor whose generalization error is at most ϵ worse than that of the neural network. The sample complexity and the time complexity of the presented method are polynomial in the input dimension and in (1/ϵ, log(1/δ), F(k,L)), where F(k,L) is a function depending on (k,L) and on the activation function, independent of the number of neurons. The algorithm applies to both sigmoid-like activation functions and ReLU-like activation functions. It implies that any sufficiently sparse neural network is learnable in polynomial time.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8173485994338989, "perplexity": 1486.8392801042276}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187826283.88/warc/CC-MAIN-20171023183146-20171023203146-00591.warc.gz"}
http://appliedclassicalanalysis.net/category/integration/page/9/
## Integrate $$\int_{-1}^{1}\sqrt{\frac{1+x}{1-x}} \,\mathrm{d}x$$

This integral appeared in Inside Interesting Integrals by Paul Nahin in the problem set of chapter 3. Using Wolfram Alpha, we get

$$\int\limits_{-1}^{1}\sqrt{\frac{1+x}{1-x}} \,\mathrm{d}x = \pi \tag{1}$$

Nahin suggests the trig substitution $$x = \cos(2y)$$. While the form of the integrand certainly does suggest that some type of trig substitution will work, let us do it with another method. If we write the integral as

$$\int\limits_{-1}^{1} (1+x)^{\frac{1}{2}}(1-x)^{-\frac{1}{2}} \,\mathrm{d}x$$

this looks like a beta function. From Higher Transcendental Functions (Bateman Manuscript), Volume 1, Section 1.5.1, equation 10, we see

$$\mathrm{B}(x,y) = 2^{1-x-y} \int\limits_{0}^{1} \left[(1+t)^{x-1}(1-t)^{y-1} + (1+t)^{y-1}(1-t)^{x-1}\right] \mathrm{d}t \tag{2}$$

Let us begin with the original integral over the right half of the interval of integration:

$$\int\limits_{0}^{1} (1+x)^{\frac{1}{2}}(1-x)^{-\frac{1}{2}} \,\mathrm{d}x = \int\limits_{0}^{1}\sqrt{\frac{1+x}{1-x}} \,\mathrm{d}x \tag{3}$$

Now, let us consider

$$\int\limits_{0}^{1} (1+x)^{-\frac{1}{2}}(1-x)^{\frac{1}{2}} \,\mathrm{d}x = \int\limits_{0}^{1}\sqrt{\frac{1-x}{1+x}} \,\mathrm{d}x \tag{4}$$

We let $$x=-y$$ to obtain

$$-\int\limits_{0}^{-1} \sqrt{\frac{1+y}{1-y}} \,\mathrm{d}y, \tag{5}$$

which we can rewrite as

$$\int\limits_{-1}^{0}\sqrt{\frac{1+x}{1-x}} \,\mathrm{d}x \tag{6}$$

Adding the right hand side of equation (3) and equation (6) yields our original integral:

$$\int\limits_{-1}^{0}\sqrt{\frac{1+x}{1-x}} \,\mathrm{d}x + \int\limits_{0}^{1}\sqrt{\frac{1+x}{1-x}} \,\mathrm{d}x = \int\limits_{-1}^{1}\sqrt{\frac{1+x}{1-x}} \,\mathrm{d}x \tag{7}$$

Likewise, adding the left hand sides of equations (4) and (3) yields

$$\int\limits_{-1}^{0}\sqrt{\frac{1+x}{1-x}} \,\mathrm{d}x + \int\limits_{0}^{1}\sqrt{\frac{1+x}{1-x}} \,\mathrm{d}x = \int\limits_{0}^{1} (1+x)^{-\frac{1}{2}}(1-x)^{\frac{1}{2}} \,\mathrm{d}x + \int\limits_{0}^{1} (1+x)^{\frac{1}{2}}(1-x)^{-\frac{1}{2}} \,\mathrm{d}x$$

If we combine this result into one integral and rearrange the integrand, we see that it is the same as the integral in (2) with

$$x=\frac{3}{2} \quad \mathrm{and} \quad y=\frac{1}{2}$$

Putting it all together, we have

$$\int\limits_{-1}^{1}\sqrt{\frac{1+x}{1-x}} \,\mathrm{d}x = 2\,\mathrm{B}\left(\frac{3}{2},\frac{1}{2}\right) = \pi$$

## Integrate $$\int^{\infty}_{0}\frac{e^{-px^{2}} - e^{-qx^{2}}}{x^{2}} \,\mathrm{d}x$$

This integral appeared in Paul Nahin's very interesting book Inside Interesting Integrals. Nahin begins with a completely different integral and derives this one. Let us evaluate the integral directly and then redo it with Nahin's method. We begin by breaking up the integral and looking at each piece. So we have

$$\mathrm{I} = \int\limits^{\infty}_{0} x^{-2}\mathrm{e}^{-px^{2}} \,\mathrm{d}x.$$

This looks very similar to a definition of the gamma function:

$$\Gamma(z) = \int\limits^{\infty}_{0} x^{z-1}\mathrm{e}^{-x} \,\mathrm{d}x.$$

We make the substitution $$y = px^{2}$$:

$$\mathrm{I} = \frac{\sqrt{p}}{2} \int\limits^{\infty}_{0} \mathrm{e}^{-y} y^{-\frac{3}{2}} \,\mathrm{d}y.$$

Invoking the gamma function yields

$$\mathrm{I} = \frac{\sqrt{p}}{2} \Gamma\Big(-\frac{1}{2}\Big) = -\sqrt{p}\sqrt{\pi}.$$

Treating the other part of the original integral involving $$q$$ yields our final result

$$\int\limits^{\infty}_{0}\frac{\mathrm{e}^{-px^{2}} - \mathrm{e}^{-qx^{2}}}{x^{2}} \,\mathrm{d}x = \sqrt{\pi}(\sqrt{q}-\sqrt{p}).$$

As I mentioned earlier, Nahin derived this result beginning with an entirely different integral. A casual glance at the original integral should make us suspect that this is the case, as it is clear that both parts of the integrand are identical; in other words, why solve the original integral rather than the simpler integral that I used at the beginning of the analysis? Such is the case with many of the results in Inside Interesting Integrals.
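As a quick numerical sanity check on both closed forms above (a sketch using SciPy's `quad` and `beta`, not part of Nahin's text; the values of p and q below are arbitrary choices):

```python
import numpy as np
from scipy import integrate, special

# First integral: int_{-1}^{1} sqrt((1+x)/(1-x)) dx should equal pi,
# and also 2*B(3/2, 1/2) by the beta-function argument above.
val1, _ = integrate.quad(lambda x: np.sqrt((1 + x) / (1 - x)), -1, 1)
beta_val = 2 * special.beta(1.5, 0.5)

# Second integral: int_0^inf (e^{-p x^2} - e^{-q x^2})/x^2 dx
# should equal sqrt(pi)*(sqrt(q) - sqrt(p)); p = 2, q = 3 are arbitrary.
p, q = 2.0, 3.0
val2, _ = integrate.quad(
    lambda x: (np.exp(-p * x**2) - np.exp(-q * x**2)) / x**2, 0, np.inf
)
closed2 = np.sqrt(np.pi) * (np.sqrt(q) - np.sqrt(p))

print(val1, beta_val)  # both close to pi
print(val2, closed2)
```

The integrable endpoint singularity at x = 1 is handled by `quad`'s adaptive scheme, so no special weighting is needed.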
This is the result of working backward: some method yields an evaluated integral, as opposed to starting from an integral that one wants to evaluate. I am not criticizing this approach, as it has resulted in an enormous number of useful integral evaluations; indeed, it can create an unlimited number of evaluated integrals. Also, such "accidental" integrals can result from contour integration even when directly attacking a given integral. It often happens that upon the last step in evaluating an integral via contour integration, one equates real and imaginary parts, in which one is the solution to the original integral while the other is a bonus.

Let us now see how Nahin achieved his result. He begins with

$$\int\limits_{0}^{\infty} \mathrm{e}^{-x^{2}} \,\mathrm{d}x$$

for which Nahin derived the answer of $$\frac{1}{2} \sqrt{\pi}$$ earlier in the book. What is interesting here is that this integral can be done easily with the gamma function by letting $$x^{2} = y$$. This quickly results in

$$\int\limits_{0}^{\infty} \mathrm{e}^{-x^{2}} \,\mathrm{d}x = \frac{1}{2} \int\limits_{0}^{\infty} \mathrm{e}^{-y} y^{-1/2} \,\mathrm{d}y = \frac{1}{2} \Gamma\Big(\frac{1}{2}\Big) = \frac{1}{2} \sqrt{\pi}.$$

If someone saw this, they would immediately recognize that the integral sought can be evaluated via the gamma function, as I did above. Nevertheless, let us continue with Nahin's analysis. Nahin makes a change of variable, $$x = t\sqrt{a}$$, to introduce the parameter $$a$$, and thus obtains

$$\int\limits_{0}^{\infty} \mathrm{e}^{-at^{2}} \,\mathrm{d}t = \frac{1}{2}\frac{\sqrt{\pi}}{\sqrt{a}}$$

Then he invokes a useful and interesting trick. He integrates the equation with respect to $$a$$ between two arbitrary end points and changes the order of integration. Changing the order of integration requires some care, as it is only valid if the integral converges uniformly. Here, the integral is just a gamma function, which we know converges uniformly.
This is usually the case for "well behaved", "non-crazy" integrals. So, Nahin has for the left hand side

$$\int\limits_{p}^{q}\left\{\int\limits_{0}^{\infty} \mathrm{e}^{-at^{2}} \,\mathrm{d}t\right\}\mathrm{d}a = \int\limits_{0}^{\infty}\left\{\int\limits_{p}^{q}\mathrm{e}^{-at^{2}} \,\mathrm{d}a\right\} \mathrm{d}t = \int\limits_{0}^{\infty}\frac{\mathrm{e}^{-pt^{2}} - \mathrm{e}^{-qt^{2}}}{t^{2}} \,\mathrm{d}t.$$

The right hand side yields

$$\int\limits_{p}^{q}\frac{1}{2}\frac{\sqrt{\pi}}{\sqrt{a}} \,\mathrm{d}a = \sqrt{\pi}(\sqrt{q}-\sqrt{p}).$$

And thus we have our result.
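The two steps of Nahin's trick can be sketched symbolically (a SymPy sketch, not from Nahin's book: it checks the parameterized Gaussian, then integrates its closed form in a from p to q):

```python
import sympy as sp

a, t, p, q = sp.symbols('a t p q', positive=True)

# Step 1: the parameterized Gaussian, int_0^inf e^{-a t^2} dt = sqrt(pi)/(2 sqrt(a)).
inner = sp.integrate(sp.exp(-a * t**2), (t, 0, sp.oo))

# Step 2: integrate the closed form in a from p to q; this should give
# sqrt(pi)*(sqrt(q) - sqrt(p)), the value of the original integral.
rhs = sp.integrate(sp.sqrt(sp.pi) / (2 * sp.sqrt(a)), (a, p, q))

print(sp.simplify(inner - sp.sqrt(sp.pi) / (2 * sp.sqrt(a))))
print(sp.simplify(rhs - sp.sqrt(sp.pi) * (sp.sqrt(q) - sp.sqrt(p))))
```

Both printed differences simplify to zero, mirroring the two displayed equations above.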
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9949098229408264, "perplexity": 892.2214368757952}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583662863.53/warc/CC-MAIN-20190119074836-20190119100836-00541.warc.gz"}
http://www.ams.org/bookstore?fn=20&arg1=chelsealist&ikey=CHEL-273
The Collected Mathematical Papers of Leonard Eugene Dickson

AMS Chelsea Publishing 1975; 4022 pp; hardcover
Volume: 273
ISBN-10: 0-8284-0273-6
ISBN-13: 978-0-8284-0273-6
List Price: US$158
Member Price: US$142.20
Order Code: CHEL/273

The set of Collected Mathematical Papers of Leonard Eugene Dickson contains nearly all of his research papers and two of his books, the second being the famous Algebren und ihre Zahlentheorie. The work includes an annotated Bibliography and Index.

Part 1

• Determination of the structure of all linear homogeneous groups in a Galois field which are defined by a quadratic invariant • Certain subgroups of the Betti-Mathieu group • Canonical form of a linear homogeneous substitution in a Galois field • Subgroups of order a power of $$p$$ in the general and special $$m$$-ary linear homogeneous groups in the $$GF[p^n]$$ • Distribution of the ternary linear homogeneous substitutions in a Galois field into complete sets of conjugate substitutions • Representation of linear groups as transitive substitution groups • Analytic functions suitable to represent substitutions • Cyclic subgroups of the simple ternary linear fractional group in a Galois field • Canonical form of a linear homogeneous transformation in an arbitrary realm of rationality • Determination of the ternary modular groups • On the quaternary linear homogeneous groups modulo $$p$$ of order a multiple of $$p$$ • On the canonical forms and automorphs of ternary cubic forms • Rational reduction of a pair of binary quadratic forms; their modular invariants • A theory of invariants • Binary modular groups and their
invariants • Concerning the cyclic subgroups of the simple group $$G$$ of all linear fractional substitutions of determinant unity in two nonhomogeneous variables with coefficients in an arbitrary Galois field • Invariantive reduction of quadratic forms in the $$GF[2^n]$$ • Finiteness of the odd perfect and primitive abundant numbers with $$n$$ distinct prime factors • Even abundant numbers • Invariantive theory of plane cubic curves modulo 2 • Determination of all groups of binary linear substitutions with integral coefficients taken modulo 3 and of determinant unity • On the real elements of certain classes of geometrical configurations • On the rank of a symmetrical matrix • Ternary orthogonal group in a general field • Groups defined for a general field by the rotation groups • Lower limit for the number of sets of solutions of $$x^e +y^e +z^e \equiv 0\, (\text{mod } p)$$ • On the congruence $$x^n +y^n +z^n \equiv 0\, (\text{mod } p)$$ • Universal Waring theorems with cubic summands • Universal forms $$a_{i}x^{n}_{i}$$ and Waring's problem • Quaternary quadratic forms representing all integers • Sur plusieurs groupes linéaires isomorphes au groupe simple d'ordre 25920 • Sur une généralisation du théorème de Fermat • Some fallacies of an angle trisector • A new solution of the cubic equation • On systems of isothermal curves • A matrix defined by the quaternion group • A generalization of symmetric and skew-symmetric determinants • A property of the group $$G_{2}^{2n}$$ all of whose operators except identity are of period 2 • On the factorization of large numbers • On the representation of numbers as the sum of two squares [with M. 
Kaba] • Amicable number triples • Rational edged cuboids with equal volumes and equal surfaces • Extensions of Waring's theorem on nine cubes • A new method for universal Waring theorems with details for seventh powers • Quadratic functions or forms, sums of whose values give all positive integers • Projective classification of cubic surfaces, modulo 2 • The invariants, seminvariants and linear covariants of the binary quartic form modulo 2 • Fermat's last theorem and the origin and nature of the theory of algebraic numbers • On quaternions and their generalization and the history of the eight square theorem • Ternary quadratic forms and congruences • A new definition of the general Abelian linear group • Determination of an abstract simple group of order $$2^7 \cdot 3^6 \cdot 5\cdot 7$$ holoedrically isomorphic with a certain orthogonal group and with a certain hyperabelian group • Index Part 2 • Canonical forms of quaternary Abelian substitutions in an arbitrary Galois field • Theory of linear groups in an arbitrary field • On the group defined for any given field by the multiplication table of any given finite group • The groups of Steiner in problems of contact • On the reducibility of linear groups • The groups of Steiner in problems of contact (second paper) • Definitions of a field by independent postulates • Definitions of a linear associative algebra by independent postulates • On the subgroups of order a power of $$p$$ in the quaternary Abelian group in the Galois field of order $$p^n$$ • The subgroups of order a power of 2 of the simple quinary orthogonal group in the Galois field of order $$p^n =8l\pm 3$$ • On hypercomplex number systems • The minimum degree $$\tau$$ of resolvents for the $$p$$-section of the periods of hyperelliptic functions of four periods • On commutative linear algebras in which division is always uniquely possible • Linear algebras in which division is always uniquely possible • Invariants of binary forms under modular 
transformations • Modular theory of group-matrices • Representations of the general symmetric group as linear groups in finite and infinite fields • Definite forms in a finite field • General theory of modular invariants • Linear algebras • Proof of the finiteness of modular covariants • Linear associative algebras and Abelian equations • Quartic curves modulo 2 • Determination of all general homogeneous polynomials expressible as determinants with linear elements • New division algebras • Singular case of pairs of bilinear, quadratic, or Hermitian forms • Simpler proofs of Waring's theorem on cubes, with various generalizations • Construction of division algebras • Waring's problem for cubic functions • A new method for Waring theorems with polynomial summands • Cyclotomy when $$e$$ is composite • Cyclotomy and trinomial congruences • A new method for Waring theorems with polynomial summands, II • Some relations between the theory of numbers and other branches of mathematics • Homogeneous polynomials with a multiplication theorem • Linear algebras with associativity not assumed • The ideal Waring theorem for twelfth powers • On the inscription of regular polygons • The analytic representation of substitutions on a power of a prime number of letters with a discussion of the linear group • A generalization of Fermat's theorem • An elementary exposition of Frobenius's theory of group-characters and group-determinants • Index Part 3 • Differential equations from the group standpoint • On Waring's problem and its generalization • The known systems of simple groups and their inter-isomorphisms • The simplest model for illustrating the conic sections • On the trisection of an angle and the construction of regular polygons of 7 and 9 sides • Notes on the theory of numbers • On the cyclotomic function • Problems • Generalizations of Waring's theorem on fourth, sixth, and eighth powers • Minimum decompositions into $$n$$-th powers • Two-fold generalizations of Cauchy's 
lemma • Cyclotomy, higher congruences, and Waring's problem • Cyclotomy, higher congruences, and Waring's problem, II • Waring theorems of new type • Proof of the ideal Waring theorem for exponents 7-180 • Solution of Waring's problem • Universal Waring theorems • The structure of the linear homogeneous groups defined by the invariant • The alternating group on eight letters and the quaternary linear congruence group modulo two • The hyperorthogonal groups • Universal Waring theorem for eleventh powers • Arithmetic of quaternions • The rational linear algebras of maximum and minimum ranks • The straight lines on modular cubic surfaces • Recent progress in the theories of modular and formal invariants and in modular geometry • Quaternions and their generalizations • Quadratic forms which represent all integers • Polygonal numbers and related Waring problems • New Waring theorems for polygonal numbers • Congruences involving only $$e$$-th powers • Outline of the theory to date of the arithmetics of algebras • Further development of the theory of arithmetics of algebras • A new theory of linear transformations and pairs of bilinear forms • On the minimum degree of resolvents for the $$p$$-section of the periods of hyperelliptic functions of four periods • On the theory of numbers and generalized quaternions • Algebraic theory of the expressibility of cubic forms as determinants, with application to Diophantine analysis • On finite algebras • Index Part 4 • Algebren und ihre Zahlentheorie • The structure of certain linear groups with quadratic invariants • The group of linear homogeneous substitutions on $$mq$$ variables which is defined by the invariant • Concerning the four known simple linear groups of order 25920, with an introduction to the hyper-abelian linear groups • The abstract group isomorphic with the symmetric group on $$k$$ letters • An abstract simple group of order 25920 • Linear substitutions commutative with a given substitution • Concerning the 
Abelian and related linear groups • Generational relations for the abstract group simply isomorphic with the linear fractional group in the $$GF[2^n]$$ • Linear groups in an infinite field • A quadratic Cremona transformation defined by a conic • Two systems of subgroups of the quaternary Abelian group in a general Galois field • On the subgroups of order a power of $$p$$ in the linear homogeneous and fractional groups in the $$GF[p^n]$$ • On the class of the substitutions of various linear groups • The group of a tactical configuration • A general theorem on algebraic numbers • Proof of the existence of the Galois field of order $$p^r$$ for every integer $$r$$ and prime number $$p$$ • The symmetric group on eight letters and the senary first hypoabelian group • On quadratic forms in a general field • On triple algebras and ternary cubic forms • Criteria for the irreducibility of a reciprocal equation • Modular theory of group characters • On higher congruences and modular invariants • On the factorization of integral functions with $$p$$-adic coefficients • On the representation of numbers by modular forms • On the negative discriminants for which there is a single class of positive primitive binary quadratic forms • Invariants, seminvariants, and covariants of the ternary and quaternary quadratic form modulo 2 • On the relation between linear algebras and continuous groups • Mathematics in war perspective. [Presidential address delivered before the American Mathematical Society, December 27, 1918.] 
• Fallacies and misconceptions in Diophantine analysis • A new method in Diophantine analysis • Index Part 5 • Recent progress on Waring's theorem and its generalizations • Proof of a Waring theorem on fifth powers • Waring's problem for ninth powers • The converse of Waring's problem • The Waring problem and its generalizations • A generalization of Waring's problem • All integers except 23 and 239 are sums of eight cubes • The theory of numbers: its principal branches • Recent light on classic problems in the theory of numbers • Systems of simple groups derived from the orthogonal group • Systems of simple groups derived from the orthogonal group • Why it is impossible to trisect an angle or to construct a regular polygon of seven or nine sides by ruler and compasses • Systems of continuous and discontinuous simple groups • Higher irreducible congruences • The structure of the hypoabelian groups • Report on the recent progress in the theory of linear groups • Orthogonal group in a Galois field • Concerning a linear homogeneous group in $$C_{m, q}$$ variables isomorphic to the general linear homogeneous group in $$m$$ variables • Systems of simple groups derived from the orthogonal group • Concerning real and complex continuous groups • The configurations of the 27 lines on a cubic surface and the 28 bitangents to a quartic curve • An extension of the theory of numbers by means of correspondences between fields • Integral solutions of $$x^2-my^2=zw$$ • All integral solutions of $$ax^2 + bxy + cy^2 =w_{1}w_{2}\cdots w_{n}$$ • A generalization of Waring's theorem on nine cubes • Integers represented by positive ternary quadratic forms • Extensions of Waring's theorem on fourth powers • All positive integers are sums of values of a quadratic function of $$x$$ • Generalizations of the theorem of Fermat and Cauchy on polygonal numbers • Extended polygonal numbers • New division algebras • The forms $$ax^2 + by^2 + cz^2$$ which represent all integers • Perfect and 
amicable numbers • Cyclic numbers • Proof of the non-isomorphism of the simple Abelian group on $$2m$$ indices and the simple orthogonal group on $$2m +1$$ indices for $$m>2$$ • A triply infinite system of simple groups • The first hypoabelian group generalized • A class of groups in an arbitrary realm connected with the configuration of the 27 lines on a cubic surface • Theorems on the residues of multinomial coefficients with respect to a prime modulus • Determination of all the subgroups of the three highest powers of $$p$$ in the group $$G$$ of all $$m$$-ary linear homogeneous transformations modulo $$p$$ • On the last theorem of Fermat • On the last theorem of Fermat (second paper) • On families of quadratic forms in a general field • On non-vanishing forms • Theorems and tables on the sum of the divisors of a number • On commutative linear groups • Modular invariants of the system of a binary cubic, quadratic, and linear form • Gergonne's pile problem • Isomorphism between certain systems of simple linear groups • Impossibility of restoring unique factorization in a hypercomplex arithmetic • Algebras and their arithmetics • Quadratic fields in which factorization is always unique • Resolvent sextics of quintic equations • Obituary: Hans Frederik Blichfeldt 1873-1945 • Bibliography • Index Part 6 • Algebraic invariants • Simplicity of the Abelian group on two pairs of indices in the Galois field of order $$2^{n}, n>1$$ • A class of linear groups including the Abelian group • The abstract form of the special linear homogeneous group in an arbitrary field • The abstract form of the Abelian linear groups • A class of groups in an arbitrary realm connected with the configuration of the 27 lines on a cubic surface [Second paper] • Note on modular invariants • Les polynomes égaux à des déterminants • La composition des polynomes • The largest linear homogeneous groups with an invariant Pfaffian • The known finite simple groups • On the groups defined for an 
arbitrary field by the multiplication tables of certain finite groups • The abstract group simply isomorphic with the group of linear fractional transformations in a Galois field • Generational relations of an abstract simple group of order 4080 • Invariants of the general quadratic form modulo 2 • Modular invariants of a general system of linear forms • Invariantive classification of pairs of conics modulo 2 • Definition of the Abelian, the two hypoabelian, and related linear groups of the groups of isomorphism of certain elementary groups • Determination of all the subgroups of the known simple group of order 25920 • Definition of a group and a field by independent postulates • On semi-groups and the general isomorphism between infinite groups • On quadratic, Hermitian and bilinear forms • Equivalence of pairs of bilinear or quadratic forms under rational transformation • An invariantive investigation of irreducible binary modular forms • A fundamental system of invariants of the general modular linear group with a solution of the form problem • Invariants in the theory of numbers • On the number of inscriptible regular polygons • Three sets of generational relations defining the abstract simple group of order 504 • Generational relations defining the abstract simple group, of order 660 • The abstract group $$G$$ simply isomorphic with the alternating group on six letters • Criteria for the irreducibility of functions in a finite field • On the theory of equations in a modular field • On binary modular groups and their invariants • Geometrical and invariantive theory of quartic curves modulo 2 • A quadratic Cremona transformation defined by a conic • Factors of a certain determinant of order six • The order of a certain senary linear group • Three algebraic notes • Note on the volume of a tetrahedron in terms of the coordinates of the vertices
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8000882863998413, "perplexity": 1722.9276468573505}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802770400.105/warc/CC-MAIN-20141217075250-00129-ip-10-231-17-201.ec2.internal.warc.gz"}
https://homework.cpm.org/category/CCI_CT/textbook/pc3/chapter/12/lesson/12.1.4/problem/12-61
### PC3 Chapter 12, Lesson 12.1.4, Problem 12-61

12-61. Plot each of the given complex numbers in parts (a) and (b) and state the modulus and argument of the number. Then use your answer from part (a) to complete part (c).

Once you plot each of the numbers, create a right triangle with the $x$-axis. Then use the Pythagorean Theorem and the tangent ratio to determine the modulus and argument. If you need more help, review the Math Notes box in Lesson 11.2.2.

1. $z = 5 − 4i$

2. $w = −3 + 7i$

3. $\text{Evaluate } z^{2/3}.$

   $z^{2/3} = (z^2)^{1/3}$

   First compute $z^{2}$, then compute the third root of the result.
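The right-triangle picture can be checked numerically (a sketch using Python's `cmath`, not part of the CPM materials):

```python
import cmath
import math

z = 5 - 4j
w = -3 + 7j

# polar() returns (modulus, argument): r = sqrt(x^2 + y^2), theta = atan2(y, x),
# which is exactly the Pythagorean-Theorem-plus-tangent-ratio construction.
r_z, theta_z = cmath.polar(z)
r_w, theta_w = cmath.polar(w)
print(r_z, theta_z)  # sqrt(41), with the argument in the fourth quadrant
print(r_w, theta_w)  # sqrt(58), with the argument in the second quadrant

# Part (c): z^(2/3) computed as (z^2)^(1/3); ** gives the principal root.
val = (z * z) ** (1 / 3)
print(val)
```

Cubing `val` recovers $z^2 = 9 - 40i$ up to floating-point error, confirming the hint's two-step approach.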
{"extraction_info": {"found_math": true, "script_math_tex": 6, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8500324487686157, "perplexity": 540.5836848860152}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347391923.3/warc/CC-MAIN-20200526222359-20200527012359-00295.warc.gz"}
http://math.stackexchange.com/questions/73520/divisibility-with-sums-going-to-infinity?answertab=oldest
# Divisibility with sums going to infinity

I can't quite wrap my head around this. Given the formula

$(1-x)(1+x+x^2+\cdots) = 1$

it seems clear to me why this is true: all the $x$ terms cancel out and we are left with one. And this is clearly true for all values of $x$. However, what I can't figure out is

$\displaystyle\sum_{i=0}^\infty x^i = \frac{1}{1-x}$

If $x$ is something like $2$, then $\displaystyle\frac{1}{1-2} = -1$. But $\displaystyle\sum_{i=0}^\infty 2^i$ is the sum of infinitely many positive numbers. How is it possible that they are equal?

**Comments:**

- That be true 2-adically! – jspecter, Oct 18 '11
- @Olives: The sum $a_1+a_2+\cdots+a_n$ of a finite set of reals has a reasonably clear meaning. However, before working with sums $a_1+a_2+\cdots+a_n+\cdots$ (adding forever), we must assign meaning to such "sums," and make sure that under our definition, such "sums" behave well under the usual operations of arithmetic. That is why it is necessary to define what one means by a convergent series. If we are going to behave purely "formally," there is no problem. But when we substitute a real number for the formal symbol $x$, the issue of convergence arises. – André Nicolas, Oct 18 '11
- You'll want to see this... – J. M., Oct 18 '11

**Answer:**

The series $\sum_{i=0}^\infty x^i$ converges if $-1<x<1$ and otherwise diverges. If it converges, then it is $1/(1-x)$.

- One might add that OP's first formula is not true for all values of $x$. – Gerry Myerson, Oct 18 '11
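The convergence condition in the answer is easy to see numerically: a quick sketch comparing partial sums of $\sum x^i$ with $1/(1-x)$ inside and outside the interval $(-1,1)$.

```python
# Partial sums of sum_{i=0}^{n} x**i, compared against 1/(1 - x).
def partial_sum(x, n):
    return sum(x ** i for i in range(n + 1))

# Inside the interval of convergence, partial sums approach 1/(1-x):
inside = partial_sum(0.5, 60)   # x = 1/2: tends to 1/(1 - 0.5) = 2

# Outside it, the partial sums just keep growing; -1 is never approached:
outside = partial_sum(2, 20)    # equals 2**21 - 1, nowhere near 1/(1-2) = -1
```

The formal cancellation $(1-x)(1+x+\cdots+x^n) = 1 - x^{n+1}$ is valid for every $x$; the identity with $1/(1-x)$ only follows when $x^{n+1}\to 0$, i.e. when $|x|<1$.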
http://www.ipam.ucla.edu/abstract/?tid=14058&pcode=DMC2017
## Vlasov simulations in 6D phase space based on compressed data formats

#### Eric Sonnendruecker, Max-Planck-Institut

The Vlasov equation describes the motion of charged particles in phase space. It is nonlinearly coupled to a field solver; in this talk we consider the Poisson equation. Many simulations can be performed on a cartesian grid or a mapped cartesian grid. The particle distribution function, which is the solution of the Vlasov equation, can then be represented on a 6D logical grid. This yields a huge amount of data that must be computed and stored. We present two ideas that considerably reduce the amount of data actually needed, at the price of a small loss of accuracy compared to a full-grid simulation on the same grid. Our simulations were performed with the semi-Lagrangian method, which consists of solving particle trajectories and performing interpolations; both steps need to be carried out in our reduced data format. The first approach is a sparse grid method, specially adapted to our problem by using a multiplicative delta-f idea; the second is based on a low-rank decomposition of the 6D tensor representing the distribution function, for which we tried a hierarchical Tucker decomposition and the Tensor Train decomposition. These ideas are quite general and can be used either for data compression alone (for storage or message passing) or directly in the computations. (Joint work with Katharina Kormann)

Back to Big Data Meets Computation
https://www.physicsforums.com/threads/stacking-dominos.319371/
# Stacking Dominos

1. Jun 11, 2009

### Chewy0087

**The problem statement, all variables and given/known data**

When dominos are stacked one on top of the other on the edge of a table, it turns out that the overhangs are related to the harmonic numbers $H_n$, defined as $1 + 1/2 + 1/3 + \cdots + 1/n$: the maximum overhang (over the edge of the table) possible for $n$ dominos is $H_n/2$. Using the concept of centre of gravity, prove this; feel free to use diagrams to aid your explanation.

**The attempt at a solution**

I'm struggling with this one, and even having looked it up it's not at all clear to me. From what I thought, the centre of mass of all of the books together must be on the pivot point (at least), over the edge of the table, with 1/2 of the first domino over the side. However, I really can't justify to myself that placing another domino on top will keep it balanced. I read somewhere that "the center of gravity of the stack of two books is at the midpoint of the books' overlap"; what exactly does this mean? I've tried drawing diagrams but don't understand it.

Thanks in advance for any help, and I'll try to scan in a picture, although I don't think my diagram will help.

Edit: Hmm, looking over it again I get the feeling that for any pile the centre of gravity of the ENTIRE pile must be on the pivot point, but how would you go about working this out mathematically for, say, 10 books?

2. Jun 11, 2009

### LowlyPion

Consider the sum of the centres of mass of each of the dominoes. So long as that combined centre of mass is not past the pivot point (the edge of the table), they should be stable, even if diminishingly so with each added domino.

3. Jun 11, 2009

### Chewy0087

That's what I thought would be the intuitive solution. However, I saw a link and can't really make heads or tails of it. I understand the case of two blocks, in that the centre of where they overlap is the centre of mass, and I'm happy doing that mathematically, but when it gets to 3 you have to find the centre of three and four, and I get stuck.

4. Jun 11, 2009

### LowlyPion

Consider then the center of mass of the $x(n)$ system as

$x(n) \cdot n = L(n-1) + \tfrac{L}{2}$

with the overhang then equal to $L - x(n)$. That simplifies to

$\text{Overhang}(n) = L - \left(L - \tfrac{L}{2n}\right) = \tfrac{L}{2n}$

Doesn't that look like the harmonic number over 2, since any element $n$ has overhang $L/(2n)$?

5. Jun 12, 2009

### Chewy0087

Sorry, I'm having difficulty understanding how you can find the centre of mass mathematically the way you have... could you go through it a bit slower? Sorry, but I really need to understand this problem.

6. Jun 12, 2009

### LowlyPion

You recognize that the center of mass of the bottom one must be right at the corner. That is maximum overhang.

For the second one, you have the center of mass of the $n = 2$ stack as the sum of the distances of the centers of mass from the far end (away from the edge). The CM of #2 is at $L/2$ from the end and the CM of #1 is acting at $L$, which is right at the protruding edge, a distance of $L$ away.

The formula for the CM is $x = \frac{1}{M}\sum m_i d_i$, the sum of each mass times its distance, divided by total mass. (For convenience use unit mass for the dominoes.)

For $n = 2$:

$x = \tfrac{1}{2}(\tfrac{1}{2}L + 1 \cdot L) = \tfrac{1}{2} \cdot \tfrac{3}{2}L = \tfrac{3}{4}L$

That's how far the CM of 1 and 2 is from the end furthest from the table edge. The overhang of 2 from the edge is then $L - \tfrac{3}{4}L = \tfrac{1}{4}L$.

Now you iterate by placing the 1-and-2 system on 3. It will sit at the overhanging edge of 3. That means a mass of 2 will now act at a distance of $L$, and the CM of 3 is again at the $L/2$ distance from the far end.

This CM calculation then looks like:

$x = \tfrac{1}{3}(\tfrac{1}{2}L + 2L) = \tfrac{1}{3} \cdot \tfrac{5}{2}L = \tfrac{5}{6}L$

The overhang of 3 is then $1 - \tfrac{5}{6} = \tfrac{1}{6}$.

Generalizing for any domino element $n$:

$x = \tfrac{1}{n}\left(\tfrac{L}{2} + (n-1)L\right)$

All the mass of the dominoes above the bottom one acts at the overhanging edge, the distance $L$ away. Then you add the CM of the bottom domino and divide by the total mass $n$. The rest follows as before.

7. Jun 13, 2009

### Chewy0087

This is the bit I've been having trouble with. I can follow your reasoning from here (ish), but what is this formula and where has it come from? I'm sorry if I'm being stupid, but I can't find it or really make sense of it.

x = 1/Mass * Sum of (Centre of Mass * distance)? Or...?

8. Jun 13, 2009

### LowlyPion

9. Jun 13, 2009

### LowlyPion

So yes, basically. It's the sum of each mass times the distance of its center of mass from the one end, all divided by the total mass. By employing the previous center of mass in each successive iteration...

10. Jun 13, 2009

### Chewy0087

Grrrr, really struggling to understand this. So if the formula for the centre of mass is

$$x = \frac{\sum M \cdot \text{Total Overhang}}{\sum M}$$

isn't that redundant? Also, why is it 1/Mass * Sum of (Centre of Mass * Distance) for this example? I know I'm being a pain :<

Edit: I see now that $x$ is the distance to the overall centre of mass, but I don't see how that helps me...

11. Jun 13, 2009

### LowlyPion

No, it's not the mass times the overhang. The overhang is found afterwards, once you find the center of mass $x$: since the center of mass is to sit at the corner/edge just in balance, you subtract that point, the $x$ of the center of mass, from the length $L$, because that's how much is sticking out for that element.

12. Jun 13, 2009

### Chewy0087

Okay, I think I'm starting to get to grips with it a bit, thanks again for all of the help! So the distance from the start of the first domino to the centre of mass is given by (for standard unit mass of 1, where $x_1$ means the distance from the start of the first domino to the centre of mass of that block)

$$\frac{x_1 + x_2 + x_3 + \cdots + x_n}{n}$$

for $n$ blocks, from the formula

$$\frac{\sum m(N)\, x(N)}{\text{Total Mass}}$$

Meaning that to work out $x_1$ would be the distance from the start of the first block to the end of the given block, divided by 2? Have I interpreted this correctly? And if so, I don't see how, given this, you can find the overhang to be related to harmonic numbers.

Can I just say the help is great, and you don't know how much I appreciate it :P

13. Jun 13, 2009

### LowlyPion

I think it is better if you think of it as one block, the bottom block, carrying the remainder of the blocks: the $(n-1)$ blocks whose combined center of mass you have previously determined to be balanced at the overhanging edge. This avoids having to recalculate with all the masses individually, because the center of mass of the $(n-1)$ blocks acts at the already-determined overhanging edge.

That makes the general form of the center of mass of the $n$th block the natural $L/2$ of that block alone, acting at its center as you would ordinarily suppose for just that block, with the remainder of the mass, the $(n-1)$, acting at the overhanging edge of the block, which is of course a distance of $L$ away from the edge of the block that lies on the table.

With $x_n$ in hand, you know the overhang of the $n$th block by subtracting $x_n$ from $L$, the rest of it lying on the table. As shown before, each block contributes $L - x_n = L/(2n)$, and summing these overhangs gives $H_n/2$.

Last edited: Jun 13, 2009

14. Jun 13, 2009

### Chewy0087

Hmmm, I think I see what you're saying... thanks a lot for the help! I'll try to work through it a couple of times tonight and if I'm still having trouble I'll post. Thanks again for the help!
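LowlyPion's center-of-mass recurrence can be checked numerically. This is a sketch (assuming unit-length, unit-mass dominoes) using exact rational arithmetic: each iteration places the previous stack at the overhanging edge, recomputes the center of mass, and records the resulting overhang $1/(2k)$; the overhangs sum to $H_n/2$.

```python
from fractions import Fraction

def overhangs(n):
    """Overhang contributed by each block, from the CM recurrence
    x_k = (1/k) * (1/2 + (k-1)*1): the (k-1) upper blocks act at the
    protruding edge (distance 1), the bottom block's own CM at 1/2."""
    result = []
    for k in range(1, n + 1):
        x = Fraction(1, k) * (Fraction(1, 2) + (k - 1))
        result.append(1 - x)  # overhang of this block = 1/(2k)
    return result

def H(n):
    """n-th harmonic number 1 + 1/2 + ... + 1/n, exactly."""
    return sum(Fraction(1, k) for k in range(1, n + 1))

total = sum(overhangs(10))  # should equal H(10)/2
```

For 10 dominoes the total overhang is $H_{10}/2 = 7381/5040 \approx 1.46$ domino lengths, so the top domino can protrude well past the table edge.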
http://nrich.maths.org/7161/solution
# Weekly Problem 13 - 2011

##### Stage: 2 and 3, Short Challenge Level

It is clear that each of $a$, $b$ and $c$ must be less than or equal to $10$. A brief inspection will show that the only combination of different square numbers which total $121$ is $81+36+4$.

More formally, the problem can be analysed by considering the remainders after dividing the square numbers less than $121$ $(1, 4, 9, 16, 25, 36, 49, 64, 81$ and $100)$ by three. The remainders are $(1, 1, 0, 1, 1, 0, 1, 1, 0$ and $1)$. When $121$ is divided by $3$, the remainder is $1$. Therefore $a^2+b^2+c^2$ must also leave a remainder of $1$. Now we can deduce that two of the three squares must leave a remainder of $0$ and so be multiples of $3$. There are three square numbers below $121$ which are multiples of three: $9$, $36$ and $81$. Checking these, we see that $81$ and $36$ are the only pair whose sum differs from $121$ by a perfect square, namely $4$. So $a+b+c=9+6+2=17$.

This problem is taken from the UKMT Mathematical Challenges.
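The "brief inspection" step can be verified by a short exhaustive search. This is a sketch; the underlying problem ($a^2+b^2+c^2=121$ with $a$, $b$, $c$ distinct positive integers) is inferred from the solution text, since the statement itself is not reproduced here.

```python
from itertools import combinations

# All triples a < b < c with a, b, c <= 10 whose squares sum to 121.
solutions = [
    (a, b, c)
    for a, b, c in combinations(range(1, 11), 3)
    if a * a + b * b + c * c == 121
]
```

The search confirms that $(2, 6, 9)$, i.e. $4 + 36 + 81$, is the unique solution, giving $a+b+c=17$.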
https://crozier.engineering.asu.edu/research/modeling-and-simulation/dielectric-theory/
# Dielectric Theory

Fast electrons from the incident electron beam are inelastically scattered by atomic electrons, by collective oscillations of free electrons, and by vibrational excitation modes between atomic groups. Classical dielectric theory accurately predicts the total inelastic scattering probability, i.e. the ratio of the number of electrons that scatter inelastically to the total number of electrons in the beam, for these mechanisms. It therefore helps predict the EELS spectrum when an electron transmits through the bulk or passes "aloof" through vacuum near a surface of the material of interest. In this theory, the interaction of a transmitted electron with the entire solid is described in terms of a dielectric response function ε(q, ω).

In 1957, Ritchie derived an expression for the electron scattering power of an infinite medium. The stopping power (dE/dz) is equal to the backward force on the transmitted electron in the direction of motion, i.e. the electronic charge multiplied by the potential gradient in the z-direction. Using Fourier transforms, Ritchie expressed the stopping power as an integral over Im[−1/ε(q, ω)], where the angular frequency ω is equivalent to E/ℏ, a0 is the first Bohr radius, m0 is the mass of the electron, and qy is the component of the scattering vector perpendicular to the velocity v. The imaginary part of [−1/ε(q, ω)] is known as the energy-loss function and provides a complete description of the response of the medium through which the fast electron travels.

The stopping power can be related to the double-differential cross section (per atom) for inelastic scattering, in which na represents the number of atoms per unit volume of the medium. For small scattering angles, dqy ≈ k0θ and dΩ ≈ 2πθdθ, where θE = E/(γm0v²), with γ = (1 − v²/c²)^(−1/2), is the characteristic angle for a particular energy loss E.
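The small-angle cross-section formula referred to in the passage was evidently an equation image that did not survive extraction. For reference, the standard small-angle (dipole) result quoted in Egerton (1996), written in terms of the quantities defined above, has the Lorentzian angular form below; this is a reconstruction from the literature, not the lost original, and should be checked against Egerton's text:

```latex
\frac{d^2\sigma}{d\Omega\, dE} \;\approx\;
\frac{1}{\pi^2 a_0 m_0 v^2\, n_a}\,
\operatorname{Im}\!\left[\frac{-1}{\varepsilon(q,E)}\right]
\frac{1}{\theta^2 + \theta_E^2},
\qquad
\theta_E = \frac{E}{\gamma m_0 v^2}
```

The $1/(\theta^2+\theta_E^2)$ factor is why small collection angles dominate the recorded spectrum, motivating the dipole-region approximation discussed next.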
In the small-angle dipole region, ε(q, E) varies little with q and can be replaced by the optical value ε(0, E), which is the relative permittivity of the specimen at an angular frequency ω = E/ℏ. An energy-loss spectrum recorded with a reasonably small collection angle can therefore be compared directly with optical data. Such a comparison involves a Kramers–Kronig transformation to obtain Re[1/ε(0, E)], leading to the energy dependence of the real and imaginary parts (ε₁ and ε₂) of ε(0, E). At large energy loss, ε₂ is small and ε₁ is close to 1, so that Im(−1/ε) = ε₂/(ε₁² + ε₂²) becomes proportional to ε₂ and (apart from a factor of E⁻³) the energy-loss spectrum is proportional to the x-ray absorption spectrum.

The optical permittivity is a transverse property of the medium, in the sense that the electric field of an electromagnetic wave displaces electrons in a direction perpendicular to the direction of propagation, the electron density remaining unchanged. An incident electron, on the other hand, produces a longitudinal displacement and a local variation of electron density. The transverse and longitudinal dielectric functions are precisely equal only in the random-phase approximation at sufficiently small q (Nozieres and Pines, 1959); nevertheless, there is no evidence for a significant difference between them, as indicated by the close similarity of Im(−1/ε) obtained from both optical and energy-loss measurements on a variety of materials (Daniels et al., 1970).

– R.F. Egerton, Electron Energy-Loss Spectroscopy in the Electron Microscope, 2nd ed. (New York: Plenum/Springer, 1996).
http://math.stackexchange.com/questions/25475/natural-set-to-express-any-natural-number-as-sum-of-two-in-the-set?answertab=votes
# Natural set to express any natural number as sum of two in the set

Any natural number can be expressed as the sum of three triangular numbers, or as four square numbers. The natural analog for expressing numbers as the sum of two others would apparently be the sum of two "linear" numbers, but all natural numbers are "linear," so this is rather unsatisfying. Is there a well-known sparser set of integers (or half-integers, for that matter) that has this property?

**Comments:**

- Well, you could always take all the even numbers and $1$; or more generally, $k\mathbb{N}\cup\{1,2,\ldots,k-1\}$. Don't think they will be any more "satisfying," though. – Arturo Magidin, Mar 7 '11
- It's a start, but unfortunately I don't find that too satisfying either. The triangular numbers have density $1/n$, rather than constant density. – wnoise, Mar 7 '11

**Answer:**

Assuming Goldbach's conjecture, you can take the set of all primes and successors of primes (plus some small numbers). These have density $2/\log n$. Not assuming the conjecture, you can take primes, almost-primes and successors of primes (plus some small numbers); this is the famous Chen's theorem. The resulting density is $\log\log n/\log n$.

Another suggestion is to take the set of all numbers whose binary expansion is "supported" on only odd powers of $2$ or only even powers of $2$. The resulting density is roughly $2/\sqrt{n}$, so this is close to optimal (you need at least $1/\sqrt{n}$).
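The binary-expansion suggestion at the end is easy to verify: splitting the bits of $n$ by position parity writes every natural number as a sum of one "even-power-supported" and one "odd-power-supported" number. A quick sketch:

```python
def split_by_bit_parity(n):
    """Write n as e + o, where e uses only even powers of 2
    (1, 4, 16, ...) and o uses only odd powers (2, 8, 32, ...)."""
    e = o = 0
    for i in range(n.bit_length()):
        if n >> i & 1:
            if i % 2 == 0:
                e += 1 << i
            else:
                o += 1 << i
    return e, o

# Check that the decomposition works for every n up to 1000.
ok = all(sum(split_by_bit_parity(n)) == n for n in range(1000))
```

For example $11 = 1011_2$ splits as $1 + 10$ (bit 0 is even-positioned; bits 1 and 3 are odd-positioned).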
http://hal.in2p3.fr/view_by_stamp.php?label=IPNO&langue=fr&action_todo=view&id=in2p3-00675218&version=1
HAL: in2p3-00675218, version 1

Nuclear Instruments and Methods in Physics Research Section B: Beam Interactions with Materials and Atoms 269 (2012) 3251-3257

# Neutron measurements for advanced nuclear systems: The n_TOF project at CERN

n_TOF Collaboration (2012)

A few years ago, the neutron time-of-flight facility n_TOF was built at CERN to address some of the urgent needs for high-accuracy nuclear data for Accelerator Driven Systems and other advanced nuclear energy systems, as well as for nuclear astrophysics and fundamental nuclear physics. Thanks to the characteristics of the neutron beam, and to state-of-the-art detection and acquisition systems, high-quality neutron cross-section data have been obtained for a variety of isotopes, many of them radioactive. Following an important upgrade of the spallation target and of the experimental area, a new measurement campaign started last year. After a brief review of the most important results obtained so far at n_TOF, the new features of the facility are presented, together with the first results from the commissioning of the neutron beam. Plans for future measurements, in particular those related to nuclear technology, are discussed.

Subject(s): Physics / Experimental Nuclear Physics
https://cs.stackexchange.com/questions/56288/maximize-function-over-a-set-with-a-transitive-and-antisymmetric-relation
# Maximize function over a set with a transitive and antisymmetric relation Let $\mathcal{R}$ be a transitive and antisymmetric relation defined over a finite set $X$. For any set $S\subseteq X$ define $\Gamma(S)=\left\{y\in S \mid \not \exists x\in S . (x,y)\in\mathcal{R}\right\}$. (Thus, $y \in \Gamma(S)$ if it belongs to $S$ and no other element in $S$ "dominates" it.) Suppose that each element is assigned a weight. This is represented by the function $w:X\to \mathbb{R}^+$. The problem is to find a subset $S \subseteq X$ to maximize $\sum_{z \in \Gamma(S)}w(z)$. Is this problem polynomial-time solvable? • Equivalent formulation: Given a dag with non-negative weights on the vertices, find a subset $S$ of the vertices whose total sum is as large as possible, subject to the constraint that no vertex in $S$ is reachable from any other. Or: given a partial order on $X$ and a non-negative weight for each element of $X$, find an antichain whose total weight is as large as possible. – D.W. Apr 22 '16 at 1:42 You can compute the maximum weight of an antichain, or more generally the maximum weight of a union of $k$ antichains, by reducing to the maximum flow problem. See for example a technical report by Cong.
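For small instances the objective can be explored directly. The sketch below is an exponential-time brute force over subsets, meant only to illustrate the definition of $\Gamma(S)$ and the observation in the comment that the optimum equals the maximum weight of an antichain (any antichain $A$ has $\Gamma(A)=A$, and $\Gamma(S)$ is always an antichain); the polynomial-time solution goes through the flow reduction mentioned in the answer.

```python
from itertools import combinations

def gamma(S, R):
    """Gamma(S): elements of S not dominated by any other element of S."""
    return {y for y in S if not any((x, y) in R for x in S)}

def best_weight(X, R, w):
    """Max over all nonempty S of sum of w over Gamma(S), by brute force."""
    best = 0
    for r in range(1, len(X) + 1):
        for S in combinations(X, r):
            best = max(best, sum(w[z] for z in gamma(set(S), R)))
    return best

# Tiny example poset (hypothetical): a < b and a < c,
# which is transitive and antisymmetric as required.
X = {'a', 'b', 'c'}
R = {('a', 'b'), ('a', 'c')}
w = {'a': 5, 'b': 2, 'c': 2}
```

With these weights the optimum is 5 (take $S=\{a\}$); lowering $w(a)$ to 1 makes the antichain $\{b,c\}$ optimal with weight 4.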
http://mathhelpforum.com/geometry/147827-vectors-question.html
# Thread: Vectors question

1. ## Vectors question

Relative to a fixed origin, O, the points A and B have position vectors $(i + 5j + k)$ and $(6i + 3j + 6k)$ respectively. Find, in exact, simplified form,

(i) the cosine of angle AOB, [4]
(ii) the area of triangle OAB, [3]
(iii) the shortest distance from A to the line OB. [2]

I successfully found the answer to part (i) to be $\frac{\sqrt{3}}{3}$. I have the mark scheme: the answer to part (ii) is $\frac{27}{2}\sqrt{2}$ and the answer to part (iii) is $3\sqrt{2}$.

Please explain to me how to do parts (ii) and (iii). Thanks

2. Find the lengths AB, OA and OB. Then Area = $\sqrt{s(s-a)(s-b)(s-c)}$, where $s = (a+b+c)/2$ and $a$, $b$, $c$ are the sides of triangle AOB.

If the shortest distance from A to the line OB is $d$, then Area AOB = $\frac{1}{2} \cdot d \cdot OB$. Find OB. You have already calculated area AOB; now find $d$.
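The mark-scheme answers can be checked numerically. This sketch uses the cross product for the area, an alternative to the Heron's-formula route suggested in the reply, and then recovers the distance from Area $= \frac{1}{2} d \cdot OB$:

```python
import math

A = (1, 5, 1)
B = (6, 3, 6)

# (i) cos(AOB) = (A . B) / (|A| |B|)
dot = sum(a * b for a, b in zip(A, B))                      # 27
OA = math.sqrt(sum(a * a for a in A))                       # 3*sqrt(3)
OB = math.sqrt(sum(b * b for b in B))                       # 9
cos_AOB = dot / (OA * OB)                                   # sqrt(3)/3

# (ii) area via the cross product: (1/2) |A x B|
cx = A[1] * B[2] - A[2] * B[1]
cy = A[2] * B[0] - A[0] * B[2]
cz = A[0] * B[1] - A[1] * B[0]
area = 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)         # 27*sqrt(2)/2

# (iii) area = (1/2) * d * |OB|  =>  d = 2 * area / |OB|
d = 2 * area / OB                                           # 3*sqrt(2)
```

Here $A \times B = (27, 0, -27)$, so the area is $\frac{1}{2}\cdot 27\sqrt{2}$ and $d = 27\sqrt{2}/9 = 3\sqrt{2}$, matching the mark scheme.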
https://www.physicsforums.com/threads/free-energy-increasing-speed-of-a-fluid-in-a-nozzle.163110/
# Free energy? increasing speed of a fluid in a nozzle

• #1
I have a tube 1 m long and 1 m wide with an ideal fluid inside at a pressure of 1 bar. I connect that 1×1 m tube to another tube 1 m long and 0.5 m wide, and that one to another tube 1 m long and 0.25 m wide, and so on. So I have a tube in which the width halves every 1 m of length: 1 m, 0.5, 0.25, 0.125... and so on.

The fluid starts with a speed of 1 m/s and a pressure of 1 bar. When it goes into the next part of the tube the speed doubles and the pressure halves, because the section is half. So as the ideal fluid goes along the narrowing tube, every metre the speed doubles and the pressure halves: the pressure will go 1 bar, 0.5, 0.25, 0.125... and the speed of the fluid will go 1 m/s, 2, 4, 8, 16, 32...

So with a pressure of 1 bar and an initial speed of 1 m/s I can get an ideal fluid to go as fast as I want. So can someone explain to me what's wrong with this assumption, because kinetic energy can't be created? This is a thought experiment with an ideal fluid, so please don't tell me it wouldn't be possible because of friction; it's an ideal fluid.

• #2 ZapperZ
First of all, you need an external force to maintain the pressure. So this isn't "free" energy (and from now on, please try to avoid using that term, because it would only raise warning flags all over). Secondly, why would you even need to demonstrate this with all those different diameters? Just putting a finger partially over a water hose is a sufficient demonstration.

What you need to consider is the TOTAL ENERGY per second that crosses the cross-sectional surface of the pipe. You are moving the SAME volume per unit time. But since the pipe with a smaller diameter has a smaller cross-section, the unit volume of water has to move faster. So: same volume, same mass per unit time, being moved by the same force, so the work done is the same no matter what the pipe diameter is. No free energy.

Zz.

• #3 russ_watters
Just something else to point out here: this is a relatively simple fluid dynamics concept. When you start thinking about such things, you may want to take a step back and consider the possibility that others have thought about the same issues, and simply research what they found. We don't mean to sound harsh here, but the words "free energy" do set off loud warning bells for us. So...

What you are musing about is Bernoulli's Principle:

Bernoulli's Principle states that in an ideal fluid (low-speed air is a good approximation), with no work being performed on the fluid, an increase in velocity occurs simultaneously with a decrease in pressure or gravitational energy.
This principle is a simplification of Bernoulli's equation, which states that the sum of all forms of energy in a fluid flowing along an enclosed path (a streamline) is the same at any two points in that path.

http://en.wikipedia.org/wiki/Bernoulli's_principle

• #4
Yes, that was exactly what my teacher answered when I asked him about converging nozzles: pressure, which is potential energy, decreases, transforming into kinetic energy, the speed of the fluid. But I still don't understand it, so I have another question.

I have a tube of constant width, and to get the ideal fluid to a speed of 1 m/s I need a pump that gives a pressure of 1 bar. What would happen if at the end of the 1 m section tube I put a converging nozzle with a narrower section of 0.25 m? Obviously the fluid will come out at a speed of 4 m/s. But if the mass of ideal fluid per unit of time is the same whatever the section, how is it possible that it increases its speed 4 times if the pump works at the same rate? Same energy spent by the pump, same amount of mass, but 4 times more speed just by putting on the nozzle. And that is with a nozzle just 4 times smaller: if I made the nozzle 1 million times smaller, the speed at which the water came out would be awesome, all with the pump working at the same rate.

I don't believe in creation of energy, so please explain how, with the pump working at the same rate, I can get an ideal fluid as fast as I want just by making the nozzle tiny, when I can always make the nozzle even smaller.

• #5 ZapperZ
You have 2 cylinders. One has a cross-sectional diameter 1/2 of the other one. How much longer is that cylinder to have the SAME volume? This is the same length that has to flow through per unit time to preserve the same volume flowing through in that time.

Zz.

• #6 russ_watters
The simple answer is that you can't do that without increasing the pressure at the pump. When you speed up the flow, you convert static pressure to velocity pressure (as the equation shows), so if you want to end up with the same volume flow rate, you need more pressure at the pump. The wiki link provides all the derivations and even has a picture of what you are describing (though reversed). Please read it (or google for other explanations).
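The pressure bookkeeping in the replies can be made concrete by combining the continuity equation with Bernoulli's equation. The following is a minimal sketch, not code from the thread; the function name and the use of water's density are my own choices, assuming ideal, horizontal, incompressible flow:

```python
RHO = 1000.0  # density in kg/m^3 (water, for concreteness)

def downstream_state(p1, v1, area_ratio):
    """Downstream speed and static pressure through a nozzle.
    area_ratio = A2/A1 (< 1 for a converging nozzle)."""
    v2 = v1 / area_ratio                   # continuity: A1*v1 = A2*v2
    p2 = p1 + 0.5 * RHO * (v1**2 - v2**2)  # Bernoulli along a streamline
    return v2, p2

# Halving the area doubles the speed, but the static pressure does NOT
# halve; it drops by 1.5 * (rho * v1^2 / 2). Shrink the nozzle enough and
# p2 goes negative, i.e. a fixed 1 bar inlet cannot drive the flow any
# faster; a faster jet at the same flow rate needs more pump pressure.
v2, p2 = downstream_state(1e5, 1.0, 0.5)
```

Running `downstream_state` with ever smaller area ratios shows exactly where the "as fast as I want" reasoning breaks: the required pressure drop grows with the square of the exit speed.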
http://math.stackexchange.com/questions/192294/how-to-prove-this-combinatorial-identity
# How to prove this combinatorial identity?

I am wondering how to prove the following identity: $$\sum_{i=0}^{n-r} \frac{2^i (r+i) \binom{n-r}{i}}{(i+1) \binom{2n-r}{i+1}}=1?$$ It seems this might be related to the hypergeometric distribution, but I could not convert that form back into hypergeometric distribution form.

- The first term has a zero denominator. – Brian M. Scott Sep 7 '12 at 8:18
- Where did you get that "identity" from? At least people should have some idea why one could have a reason to believe the identity (after correction) to hold. – Marc van Leeuwen Sep 7 '12 at 8:32
- It fails when $r=0$ (so that the $i$ in the denominator can be cancelled) and $n=1$: $$\frac{1\binom10}{\binom21}+\frac{2\binom11}{\binom22}=\frac12+2=\frac32$$ – Brian M. Scott Sep 7 '12 at 9:01
- @Rob: Even when dropping the $i=0$ term, I have yet to find a case $(n,r)$ where a useful result comes out. Please check the original problem statement carefully. – Hagen von Eitzen Sep 7 '12 at 13:11
- Sorry, that $i$ should be $i+1$. – Rob Sep 7 '12 at 18:54

Note that \begin{align} \frac{2^i(r+i)\binom{n-r}{i}}{(i+1) \binom{2n-r}{i+1}} &=\frac{2^i(r+i)}{n}\frac{\binom{2n-r-i-1}{n-1}}{\binom{2n-r}{n}}\tag{1}\\ &=\frac{2^i}{n}\frac{2n\binom{2n-r-i-1}{n-1}-n\binom{2n-r-i}{n}}{\binom{2n-r}{n}}\tag{2}\\ &=\frac{2^{i+1}\binom{2n-r-i-1}{n-1}-2^i\binom{2n-r-i}{n}}{\binom{2n-r}{n}}\tag{3} \end{align}

$(1)\quad$ $\frac1{i+1}\frac{\color{#C00000}{(n-r)!}}{i!\color{#C00000}{(n-r-i)!}}\frac{(i+1)!\color{#00A000}{(2n-r-i-1)!}}{\color{#00A000}{(2n-r)!}}=\frac1n\frac{n!\color{#C00000}{(n-r)!}}{\color{#00A000}{(2n-r)!}}\frac{\color{#00A000}{(2n-r-i-1)!}}{(n-1)!\color{#C00000}{(n-r-i)!}}$

$(2)\quad$ $\begin{array}{l}(2n-r-i)\binom{2n-r-i-1}{n-1}=n\binom{2n-r-i}{n}\\ \Rightarrow(r+i)\binom{2n-r-i-1}{n-1}=2n\binom{2n-r-i-1}{n-1}-n\binom{2n-r-i}{n}\end{array}$

$(3)\quad$ distribute $\frac{2^i}{n}$

Next, we have \begin{align} \sum_{k=0}^{n-m}\binom{n-k}{m}2^k
&=\sum_{k=0}^{n-m}\sum_{j=0}^k\binom{n-k}{m}\binom{k}{j}\\ &=\sum_{j=0}^{n-m}\binom{n+1}{m+j+1}\tag{4} \end{align} Applying $(4)$ to the sum of $(3)$, we get to a nicely telescoping sum: \begin{align} {\large\sum_{i=0}^{n-r}}\;\frac{2^i(r+i)\binom{n-r}{i}}{(i+1) \binom{2n-r}{i+1}} &={\large\sum_{i=0}^{n-r}}\;\frac{2^{i+1}\binom{2n-r-i-1}{n-1}-2^i\binom{2n-r-i}{n}}{\binom{2n-r}{n}}\\ &=\frac1{\binom{2n-r}{n}}\sum_{i=0}^{n-r}\left(2\binom{2n-r}{n+i}-\binom{2n-r+1}{n+i+1}\right)\\ &=\frac1{\binom{2n-r}{n}}\sum_{i=0}^{n-r}\left(\binom{2n-r}{n+i}-\binom{2n-r}{n+i+1}\right)\\ &=\frac1{\binom{2n-r}{n}}\binom{2n-r}{n}\\[6pt] &=1\tag{5} \end{align}
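The corrected identity (with the $i+1$ in the denominator) is also easy to sanity-check with exact rational arithmetic before attempting a proof. A quick sketch in Python; the helper name is my own:

```python
from fractions import Fraction
from math import comb

def identity_sum(n, r):
    """Evaluate sum_{i=0}^{n-r} 2^i (r+i) C(n-r,i) / ((i+1) C(2n-r,i+1)) exactly."""
    return sum(Fraction(2**i * (r + i) * comb(n - r, i),
                        (i + 1) * comb(2 * n - r, i + 1))
               for i in range(n - r + 1))

# Exact check over a range of small cases with 1 <= r <= n.
checks = [(n, r) for n in range(1, 10) for r in range(1, n + 1)]
```

Every pair in `checks` evaluates to exactly 1, consistent with the telescoping proof above.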
https://www.physicsforums.com/threads/the-angle-which-the-particle-hits-the-wall.201232/
# The angle @ which the particle hits the wall

1. Nov 28, 2007

### ranaroy

Dear all, I am working on a problem but have not found the solution yet. I have a wall section (in a 2D x-y plane) which spherical balls hit and get reflected from. I know the x-velocity and y-velocity of the particle when it hits the wall, and I need to calculate the impact (incident) angle of the ball on the wall. It is easy to calculate when the plane is along either the x-axis or the y-axis, but my plane is curvilinear (I know the radius of curvature). Can I calculate the incident angle for this curvilinear plane? Please help. Thank you.

2. Nov 28, 2007

### ASAPLZ

I don't know this very well, but here's a shot. Take the angle at which it hits the wall, copy that same angle on the wall, and add the curvature to the other side.

3. Nov 28, 2007

### ranaroy

Dear ASAPLZ, can you explain it once more in detail? I didn't understand it well. You mean to say, first I will do theta = atan(V_y/V_x) and calculate the angle. Then what should I do? To make it clearer for me, can you do one example case: say V_y = 3 m/s, V_y = 4 m/s, so theta = 37 degrees. Please help.

4. Nov 28, 2007

### ranaroy

Sorry, the above velocity components should read V_y = 3 m/s, V_x = 4 m/s. Sorry for the mistake.
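For a circular wall, the surface normal at the impact point lies along the line to the centre of curvature, so the incidence angle is simply the angle between the velocity vector and that normal. A small sketch; the function and the sample geometry are mine, with the sample chosen so the wall's tangent is the x-axis, reproducing the thread's flat-wall atan(V_y/V_x) ≈ 37° case:

```python
import math

def incidence_angle_deg(vx, vy, px, py, cx, cy):
    """Angle (degrees) between the incoming velocity (vx, vy) and the
    inward surface normal at impact point (px, py) on a circular wall
    whose centre of curvature is at (cx, cy)."""
    nx, ny = cx - px, cy - py  # inward normal direction
    cos_t = (vx * nx + vy * ny) / (math.hypot(vx, vy) * math.hypot(nx, ny))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

# Sanity check: a ball with V_x = 4, V_y = -3 hits the circle centred at
# (0, -1) at the point (0, 0), where the tangent is the x-axis. The angle
# from the normal is ~53.13 degrees, i.e. ~36.87 degrees from the wall,
# matching the flat-wall atan(3/4) calculation in the thread.
angle = incidence_angle_deg(4.0, -3.0, 0.0, 0.0, 0.0, -1.0)
```

The same function works for any point on the curved wall, since only the local normal direction matters.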
http://mathoverflow.net/questions/41939/a-balls-and-colours-problem?sort=newest
# A balls-and-colours problem A box contains n balls coloured 1 to n. Each time you pick two balls from the bin - the first ball and the second ball, both uniformly at random and you paint the second ball with the colour of the first. Then, you put both balls back into the box. What is the expected number of times this needs to be done so that all balls in the box have the same colour? Answer (Spoiler put through rot13.com): Gur fdhner bs gur dhnagvgl gung vf bar yrff guna a. Someone asked me this puzzle some four years back. I thought about it on and off but I have not been able to solve it. I was told the answer though and I suspect there may be an elegant solution. Thanks. - I think this question is borderline, but I would like to keep it open. It's borderline because you already know the answer, and that makes this more of a puzzle than a research question. On the other hand, I find "is there an elegant proof of this pretty combinatorial formula" to be a reasonable type of question. –  David Speyer Oct 12 '10 at 23:54 I first heard this problem at Mathcamp 2001. I believe the problem was invented there by Dave Savitt. I recall verifying it up to n=10 using linear relations among variables, one for each partition of n. Eventually, Dave and John Conway found a proof for all n. Their proof gave an explicit formula for the expected number of steps from any starting position, not just all distinct, and had a trivial inductive proof. IIRC, the formula involved harmonic numbers. While Ori Gurel-Gurevich's solution is very nice, I wonder if anyone can find the formula for all starting positions, which I have lost. –  aorq Oct 13 '10 at 9:38 I certainly didn't invent the problem. IIRC I heard about it at a colloquium tea in the MIT common room (not 100% sure from whom, but maybe Greg Warrington?), put it aside at the time, and brought it out as a challenge problem at Mathcamp later that summer. The rest of Rex's story is accurate, I think. –  D. 
Savitt Oct 14 '10 at 2:16

Of course, the mathematical solution is simple (though not elegant). I would like to see a more elegant solution. – Pratik Poddar Oct 14 '10 at 6:22

It can probably be done by looking at the sum of squares of the sizes of the color clusters and then constructing an appropriate martingale. But here's a somewhat elegant solution: reverse the time!

Let's formulate the question like this. Let $F$ be the set of functions from $\{1,\ldots,n\}$ to $\{1,\ldots,n\}$ that are almost the identity, i.e., $f(i)=i$ except for a single value $j$. Then if $f_t$ is a sequence of i.i.d. functions drawn uniformly from $F$, and $$g_t=f_1 \circ f_2 \circ \ldots \circ f_t$$ then you can define $\tau= \min \{ t | g_t \verb"is constant"\}$. The question is then to calculate $\mathbb{E}(\tau)$.

Now, one can also define the sequence $$h_t=f_t \circ f_{t-1} \circ \ldots \circ f_1$$ That is, the difference is that while $g_{t+1}=g_t \circ f_{t+1}$, here we have $h_{t+1}=f_{t+1} \circ h_t$. This is the time reversal of the original process. Obviously, $h_t$ and $g_t$ have the same distribution, so $$\mathbb{P}(h_t \verb"is constant")=\mathbb{P}(g_t \verb"is constant")$$ and so if we define $\sigma=\min \{ t | h_t \verb"is constant"\}$ then $\sigma$ and $\tau$ have the same distribution and in particular the same expectation.

Now calculating the expectation of $\sigma$ is straightforward: if the range of $h_t$ has $k$ distinct values, then with probability $k(k-1)/n(n-1)$ this number decreases by 1, and otherwise it stays the same. Hence $\sigma$ is the sum of geometric distributions with parameters $k(k-1)/n(n-1)$ and its expectation is $$\mathbb{E}(\sigma)=\sum_{k=2}^n \frac{n(n-1)}{k(k-1)}= n(n-1)\sum_{k=2}^n \left(\frac1{k-1} - \frac1{k}\right) = n(n-1)\left(1-\frac1{n}\right) = (n-1)^2 .$$

- I don't quite get the point. Why do $\sigma$ and $\tau$ have the same distribution? Does equality of distributions at each fixed time (not of the joint distributions!) imply equality of the distributions of the first stopping times? I do not think so.
But hopefully we may fix it: the expectation of $\sigma$ equals $\sum_{n=1}^{\infty} \mathbb{P}(\sigma\geq n)=\sum \mathbb{P}(h_{n-1}\ne \mathrm{const})$, and in this last expression we may replace $h$ by $g$. – Fedor Petrov Oct 13 '10 at 8:33

Aha! I now understand Ori's answer. At time $t$, considering all steps from step $t$ to the end, there will be $k$ balls whose colors are mapped to all the other balls at the end. Considering time step $t-1$, the only way to reduce $k$ is to choose two of these $k$ influential balls, and have the color of one mapped to that of another. This gives the recursion in his answer. Very nice, although it could be explained better. – Peter Shor Oct 13 '10 at 13:13

$P(\tau > t)=P(g_t \verb"is not constant")=P(h_t \verb"is not constant")=P(\sigma > t)$ – Ori Gurel-Gurevich Oct 13 '10 at 17:08

Sorry, I am a bit confused: 1. What is the domain and codomain of each $f$? Are we talking about a map between balls and colors, or just a transition between colors? Without this being clear I don't see why introducing the time-reversal process is really magic. 2. In Peter's comment above, what are the "influential balls"? At time $t$, if we define the influential balls to be the balls whose current color eventually wins in the end, then at time $t+1$ this number could increase, decrease or stay the same. Why do we have a geometric distribution? – Ying Zhang Jun 27 '14 at 16:45

3. The choice of $f$ being uniform from $F$ at each step? Again this has to do with the definition of $f$. But this probably doesn't matter as long as the other steps make sense. – Ying Zhang Jun 27 '14 at 16:47

Consider just those sequences of selections that result in the final colour being $c$. If at some point during a sequence we have $k$ of the balls being this colour, we can define $E_k$ as the expected number of selections from here before all the balls are coloured $c$.
Doing this, we need to take account of the fact that not all selections are equally probable: each selection must be multiplied by the probability that it results in $c$ being the eventual colour. Happily, this probability is simply $k'/n$, where $k'$ is the number of balls coloured $c$ after this selection. That gives us: $E_k = 1 + \frac{(k+1)(n-k)E_{k+1} + (k-1)(n-k)E_{k-1} + (n(n-1)-2k(n-k))E_k}{n(n-1)}$. This simplifies to $2kE_k = \frac{n(n-1)}{n-k} + (k+1)E_{k+1} + (k-1)E_{k-1}$. We find from this that $E_1 = n/2 + E_2$, and generally if $E_{k-1} = w_{k-1}(n) + E_k$ then $E_k = w_k(n) + E_{k+1}$ with $w_k(n) = \frac{n(n-1)}{(n-k)(k+1)} + \frac{k-1}{k+1}w_{k-1}(n)$. The required expectation, $E_1$, now resolves to: $E_1 = \sum_{i=1}^{n-1}{w_i(n)}$ $= n(n-1) \sum_{i=1}^{n-1}{ \frac{1}{(n-i)(i+1)}(1 + \sum_{j=i+1}^{n-1}{ \prod_{k=i+1}^j{ \frac{k-1}{k+1} } }) }$ $= n(n-1) \sum_{i=1}^{n-1}{ \frac{1}{(n-i)(i+1)}(1 + \sum_{j=i+1}^{n-1}{ \frac{i(i+1)}{j(j+1)} }) }$ $= n(n-1) \sum_{i=1}^{n-1}{ \frac{1}{(n-i)(i+1)}(1 + i(i+1)(\frac{n-1}{n} - \frac{i}{i+1})) }$ $= n(n-1) \sum_{i=1}^{n-1}{ \frac{1}{n} }$ $= (n-1)^2$. (Sorry, couldn't find how to get working multiline equations under jsMath, so I split them up.) - Fix $n$, so that I don't have to include it in my notation. In all other respects, copy Aaron's notation. JBL observes that there appear to be numbers $f(1)$, $f(2)$, ..., $f(n-1)$ such that $$p_{\lambda} = \sum_{k \in \lambda} f(k). \quad \quad (*)$$ We will show that such $f$'s exist, and give formulas for them. In particular, it will be clear that $f(1) = (n-1)^2/n$, proving the result. For convenience, we set $f(0)=f(n)=0$. It seems best to describe the $f$'s by the following relation $$\frac{k}{n} = \frac{k(n-k)}{n(n-1)} \left( - f(k-1) + 2 f(k) - f(k+1) \right) \ \mbox{for} \ 1 \leq k \leq n-1 \quad \quad (**)$$ Our proof breaks into two parts: showing that there is a unique solution to $(**)$ and showing that the resulting $f$'s obey $(*)$. 
We do the second part first. We must establish the Markov relation: $$\sum_{k \in \lambda} f(k) = 1 + \sum_{\mu} p(\lambda \to \mu) \sum_{k \in \mu} f(k).$$ For any $k$ in $\lambda$, the modified partition $\mu$ contains either $k-1$, $k+1$ or $k$, depending on whether we lost a ball of the corresponding color, gained one, or kept the same number. The probabilities of these events are $k(n-k)/n(n-1)$, $k(n-k)/n(n-1)$, and $1-2 k(n-k)/n(n-1)$, respectively. So we must show that $$\sum_{k \in \lambda} f(k) = 1 + \sum_{k \in \lambda} \left( \frac{k(n-k)}{n(n-1)} f(k-1) + \left( 1- \frac{2k(n-k)}{n(n-1)} \right) f(k) + \frac{k(n-k)}{n(n-1)} f(k+1)\right).$$ Canceling $\sum_{k \in \lambda} f(k)$ from both sides, we must show that $$0 = 1 - \sum_{k \in \lambda} \frac{k(n-k)}{n(n-1)} \left( - f(k-1) + 2 f(k) - f(k+1) \right).$$ By $(**)$, the defining equation of the $f$'s, this is $$1- \sum_{k \in \lambda} (k/n) = 1 - |\lambda|/n=0$$ as desired.

So, if we can find $f$'s obeying $(**)$, we will have $(*)$. Let $g_j$ be the length $(n-1)$ vector $$( n-j, 2(n-j), 3(n-j), \ldots, j(n-j), \ldots, 3j, 2j, j)$$ The key feature of $g_j$ is that $$- g_j(k-1) + 2 g_j(k) - g_j(k+1) = \begin{cases} n \quad k=j \\ 0 \quad k \neq j \end{cases}$$ where we set $g_j(0)=g_j(n)=0$. Let $f$ be the vector $(f(1), f(2), \ldots, f(n-1))$. Rewrite $(**)$ as $$\frac{n-1}{n-k} = - f(k-1) + 2 f(k) - f(k+1).$$ So we see that $$f = \sum_{k=1}^{n-1} \frac{n-1}{n(n-k)} g_k$$ In particular, $$f(1) = \sum_{k=1}^{n-1} \frac{n-1}{n(n-k)} (n-k) = \frac{(n-1)^2}{n}$$ as desired. More generally, $$f(j) = \sum_{k=1}^{j-1} \frac{n-1}{n(n-k)} k(n-j) + \sum_{k=j}^{n-1} \frac{n-1}{n(n-k)} j(n-k) = \frac{(n-1)(n-j)}{n} \sum_{k=1}^{j-1} \frac{k}{n-k} + \frac{j(n-j)(n-1)}{n}.$$

- Barring arithmetic errors, we can also rewrite your final expression as $\frac{(2j - 1)(n - 1)(n - j)}{n} - (n - 1)(n - j) \sum_{k = 1}^{j - 1} \frac{1}{n - k}$, which does indeed look fairly simple in terms of harmonic numbers, as A.
Rex remembered. – JBL Oct 13 '10 at 21:41

For n=3, one turn takes you to 2 of one color and 1 of another. To get to a single color you need to pick one of the 2 (prob=2/3), then pick the odd one (prob=1/2). So the expected number of turns is 1+3=4.

Just wish to add some sense to $f(k)$... Let: $X_{i}$ - the number of balls of the color $i$ at time $t=0$; $A_{i}$ - the event that in the end all the balls are of the color $i$. Using this notation: $E(T|X_{1}=\lambda_{1},\ldots ,X_{n}=\lambda_{n}) = \sum E(1_{A_{i}}T|X_{1}=\lambda_{1},\ldots,X_{n}=\lambda_{n})$. But $E(1_{A_{i}}T|X_{1}=\lambda_{1},\ldots,X_{n}=\lambda_{n})$ depends only on $\lambda_{i}$ and is denoted by $f(\lambda_{i})$. Using "first step" analysis we get: $E(1_{A_{i}}T|X_{i}=k)=dE(1_{A_{i}}(T+1)|X_{i}=k+1)+ dE(1_{A_{i}}(T+1)|X_{i}=k-1)+(1-2d)E(1_{A_{i}}(T+1)|X_{i}=k)$ where $d=\frac{k(n-k)}{n(n-1)}$. One may compute (using Doob's theorem or otherwise) that $E(1_{A_{i}}|X_{i}=k)=\frac{k}{n}$. Using that, we obtain David Speyer's equation (**) for $f(k)$.

Edit: I was indeed mistaken. The suggested answer is correct as far as I checked (n=10). It is easy enough to set up a system of equations for the expectations and compute. Here are the results up to n=6; the notation should be clear enough: $$p_{12}=3,p_{111}=4$$ $$p_{13}=11/2,p_{22}=7,p_{112}=8,p_{1111}=9$$ $$p_{14}={\frac {25}{3}},p_{23}={\frac {35}{3}},p_{1^23}={\frac {38}{3}},p_{12^2}=14,p_{1^32}=15,p_{1^5}=16$$ $p_{15}={\frac {137}{12}},p_{24}={\frac {101}{6}},p_{3^2}={\frac {37}{2}},p_{1^24}={\frac {107}{6}},p_{123}={\frac {83}{4}}, p_{1^33}={\frac {87}{4}},p_{2^3}=22,p_{1^22^2}=23,p_{1^52}=24,p_{1^6}=25$

At least up to n=10, if two ones are replaced by a 2, the expected number of moves goes down by 1.

Later, in reply to a comment, here are the results for n=10. If anyone wants more, ask me. They are just kind of bulky, at least the way I have them now. I think the notation should be clear.
[[1, 10], 81], [[1, 8], [2, 1], 80], [[1, 6], [2, 2], 79], [[1, 4], [2, 3], 78], [[1, 2], [2, 4], 77], [[2, 5], 76], [[1, 7], [3, 1], 623/8], [[1, 5], [2, 1], [3, 1], 615/8], [[1, 3], [2, 2], [3, 1], 607/8], [[1, 1], [2, 3], [3, 1], 599/8], [[1, 4], [3, 2], 299/4], [[1, 2], [2, 1], [3, 2], 295/4], [[2, 2], [3, 2], 291/4], [[1, 1], [3, 3], 573/8], [[1, 6], [4, 1], 2085/28], [[1, 4], [2, 1], [4, 1], 2057/28], [[1, 2], [2, 2], [4, 1], 2029/28], [[2, 3], [4, 1], 2001/28], [[1, 3], [3, 1], [4, 1], 3995/56], [[1, 1], [2, 1], [3, 1], [4, 1], 3939/56], [[3, 2], [4, 1], 955/14], [[1, 2], [4, 2], 951/14], [[2, 1], [4, 2], 937/14], [[1, 5], [5, 1], 3895/56], [[1, 3], [2, 1], [5, 1], 3839/56], [[1, 1], [2, 2], [5, 1], 3783/56], [[1, 2], [3, 1], [5, 1], 465/7], [[2, 1], [3, 1], [5, 1], 458/7], [[1, 1], [4, 1], [5, 1], 3529/56], [[5, 2], 1627/28], [[1, 4], [6, 1], 4399/70], [[1, 2], [2, 1], [6, 1], 4329/70], [[2, 2], [6, 1], 4259/70], [[1, 1], [3, 1], [6, 1], 16721/280], [[4, 1], [6, 1], 7883/140], [[1, 3], [7, 1], 15087/280], [[1, 1], [2, 1], [7, 1], 14807/280], [[3, 1], [7, 1], 3553/70], [[1, 2], [8, 1], 5869/140], [[2, 1], [8, 1], 5729/140], [[1, 1], [9, 1], 7129/280], [[10, 1], 0] - I agree with Ross Millikan's comment below. I have verified the claimed formula upto n=4. The only approach I can imagine for this problem is to draw out the Markov chain explicitly and find the expected time it takes for the chain to hit the one non-transient state. –  Hedonist Oct 12 '10 at 23:48 I also confirm 4 for n=3. It's 1+1+2/3 +(2/3)^2+(2/3)^3 + .... I think you might have taken the ratio in the geometric series to be 1/3 be mistake. –  David Speyer Oct 12 '10 at 23:54 It seems very likely that there exist functions $f(n, k)$ such that $p_{\lambda} = \sum_{i} f(|\lambda|, \lambda_i)$. For example, it appears that $f(n, 1) = \frac{(n - 1)^2}{n}$ and $f(n, 2) = \frac{(2n - 1)(n - 2)}{n}$. Probably a little more data would be enough to guess the general form and proceed as in A. 
Rex's comment on the question. –  JBL Oct 13 '10 at 13:20
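Both Ori's time-reversal formula and David Speyer's per-part function can be checked exactly against the small-case data in Ross Millikan's answer. A sketch with exact rationals; the function names are mine:

```python
from fractions import Fraction

def expected_steps(n):
    """Time-reversal formula: sum over k = 2..n of n(n-1)/(k(k-1))."""
    return sum(Fraction(n * (n - 1), k * (k - 1)) for k in range(2, n + 1))

def f(n, j):
    """David Speyer's per-part contribution f(j) for a colour class of size j."""
    s = sum(Fraction(k, n - k) for k in range(1, j))
    return Fraction((n - 1) * (n - j), n) * s + Fraction(j * (n - j) * (n - 1), n)

def expected_from_partition(n, parts):
    """Expected time to monochromatic from a starting colour partition."""
    return sum(f(n, j) for j in parts)
```

`expected_steps(n)` comes out to exactly `(n-1)**2`, and `expected_from_partition` reproduces the tabulated values, e.g. $p_{13}=11/2$ and $p_{22}=7$ for $n=4$.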
https://math.stackexchange.com/questions/3775873/can-incentre-lie-on-the-euler-line-for-an-obtuse-triangle
# Can incentre lie on the Euler line for an obtuse triangle?

I know that the incentre lies on the Euler line for equilateral and isosceles triangles, but I found a claim that the incentre can lie on the Euler line for an obtuse triangle. So, is this claim true? Also, does there exist any scalene, acute triangle (so neither equilateral nor isosceles) for which the incentre lies on the Euler line? Finally, if the incentre is on the Euler line, is it at a unique location with respect to the other centres (orthocentre, circumcentre, centroid)?

Case 1. Acute triangle. Let $$ABC$$ be an acute triangle and let $$I$$, $$O$$ and $$H$$ be its incenter, circumcenter and orthocenter, respectively. Note that the points $$I$$, $$O$$ and $$H$$ are inside triangle $$ABC$$. We will prove that $$I$$, $$O$$ and $$H$$ are collinear iff $$ABC$$ is isosceles or equilateral.

Indeed, suppose that $$O$$, $$I$$ and $$H$$ are collinear but $$\triangle ABC$$ is scalene. Recall that rays $$AO$$ and $$AH$$ are symmetric with respect to the angle bisector of $$\angle BAC$$. Hence, the angle bisectors of angles $$OAH$$ and $$BAC$$ coincide, so $$AI$$ bisects angle $$OAH$$. Since $$I\in OH$$, we have $$\frac{AO}{AH}=\frac{IO}{IH}$$ by the angle bisector theorem for $$\triangle AOH$$. Similarly, we obtain that $$\frac{AO}{AH}=\frac{BO}{BH}=\frac{CO}{CH}=\frac{IO}{IH}.$$ Finally, note that $$AO=BO=CO$$, so the last equality implies $$AH=BH=CH$$. Thus, $$O$$ and $$H$$ are distinct circumcenters of triangle $$ABC$$, which is impossible. Therefore, in an acute triangle, $$O$$, $$I$$ and $$H$$ are collinear iff $$\triangle ABC$$ has two equal sides.

Case 2. Obtuse (or right) triangle. Suppose that in $$\triangle ABC$$ we have $$\angle C\geq 90^{\circ}$$. In this case we can still apply the previous argument to triangles $$AOH$$ and $$BOH$$ (because rays $$AO$$ and $$AH$$ are still symmetric with respect to $$AI$$; the same for rays $$BO$$, $$BH$$ and $$BI$$).
Thus, $$\frac{AO}{AH}=\frac{BO}{BH}=\frac{IO}{IH}.$$ Combined with $$AO=BO$$, this means that $$AH=BH$$, so $$AC=BC$$ and triangle $$ABC$$ is isosceles, as desired.

• That's perfect, but what about obtuse triangles? @richrow Aug 1 '20 at 3:10
• Actually, an obtuse triangle has two acute angles, so we can apply this argument to $AOH$ and $BOH$ and obtain $AH=BH$ if $\angle C>90^{\circ}$. I will edit my answer later. Aug 1 '20 at 5:08
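The case analysis can be cross-checked numerically: compute $$O$$, $$I$$ and $$H$$ from coordinates and test collinearity with a cross product. A sketch of mine, using the standard circumcenter formula, the identity $$H=A+B+C-2O$$, and the incenter as the side-length-weighted vertex average:

```python
import math

def circumcenter(A, B, C):
    ax, ay = A; bx, by = B; cx, cy = C
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
          + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
          + (cx*cx + cy*cy) * (bx - ax)) / d
    return (ux, uy)

def euler_line_cross(A, B, C):
    """Cross product of O->H and O->I; zero iff O, H, I are collinear."""
    ox, oy = circumcenter(A, B, C)
    hx = A[0] + B[0] + C[0] - 2 * ox  # H = A + B + C - 2O
    hy = A[1] + B[1] + C[1] - 2 * oy
    a = math.dist(B, C); b = math.dist(A, C); c = math.dist(A, B)
    ix = (a * A[0] + b * B[0] + c * C[0]) / (a + b + c)
    iy = (a * A[1] + b * B[1] + c * C[1]) / (a + b + c)
    return (hx - ox) * (iy - oy) - (hy - oy) * (ix - ox)
```

For the isosceles triangle (0,0), (4,0), (2,3) the cross product vanishes (all three centres sit on x = 2), while for a scalene obtuse triangle such as (0,0), (6,0), (1,1) it is far from zero, consistent with the conclusion above.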
https://math.stackexchange.com/questions/1626579/sets-of-positive-integers-closed-under-lcm-gcd
# Sets of positive integers closed under lcm/gcd? Is there an exact, workable description of sets of positive integers closed under the lcm or gcd operations? In other words, a set of ideals of Z which is closed under intersections or sums. My motivation for asking this is the fact that the order of the product of two elements in an abelian group is the lcm of their orders: thus, given a subset of integers A which is closed under lcm and contains 1, the set of all elements in a group whose orders are in A is a subgroup which may have some interesting properties. P-subgroups are an example of this. In the absence of a general criterion, examples are also helpful. • Do you want sets to be closed under gcd and sets to be closed under lcm, or do you want sets that are closed under gcd and lcm? – Martin Brandenburg Jan 25 '16 at 17:39 • Any one of those would be helpful. – Vik78 Jan 25 '16 at 17:41 Is there an exact, workable description of sets of positive integers closed under the lcm or gcd operations? In short, I don't think so. Here's a description of uncountably many such sets: for every prime $p$, pick a subset $S_p \subseteq \mathbb{Z}_{\ge 0}$ of the nonnegative integers. Now consider the set of positive integers $n$ such that, if $\nu_p(n)$ denotes the power of $p$ dividing $n$, $$\nu_p(n) \in S_p \forall p.$$ This set uniquely determines each $S_p$, and since there are uncountably many choices for each $S_p$, there are uncountably many such sets. But this isn't even all of them! Right now there's no interaction between the different $\nu_p$, but we could also require, for example, that $\nu_2 = \nu_3$. More generally, for any equivalence relation $\sim$ on the natural numbers (and there are uncountably many of these too), we could require that if $p \sim q$, then $\nu_p = \nu_q$. And this isn't even all of them. For simplicity, at this point I'm going to pretend that there are only two primes, say $2$ and $3$. 
Assign to every natural number $n$ the coordinates $(\nu_2(n), \nu_3(n))$ in $\mathbb{R}^2$, which is a certain lattice point in the first quadrant. Geometrically, taking the gcd of two natural numbers corresponds to taking the componentwise minimum of the two corresponding lattice points (and the lcm, the componentwise maximum): the two points become two vertices of an axis-aligned right triangle, and the gcd sits at the third vertex, where the right angle is. So there are lots more possibilities that aren't covered by the above construction, which you can visualize as sets of points closed under this operation. For more look up the notion of order dimension. • Pretty sure there is more stuff we can do. Every finite lattice embeds into the lattice of natural numbers under divisibility, and not all of those have the nice structure as a product of linear orders that the ones described here do. – Henning Makholm Jan 25 '16 at 17:34 • Right, I was about to edit to give some indication of this. – Qiaochu Yuan Jan 25 '16 at 17:37 • This is a fascinating construction that you've come up with. Thanks for the answer! – Vik78 Jan 25 '16 at 17:51 • One additional question: this construction is pretty general. Do you know whether or not this describes all the sets closed under lcm? – Vik78 Jan 25 '16 at 18:12 • @Vik78: which construction? The second one? Definitely not. That's what the third construction (which isn't really a construction, just an indication of the possibilities) was supposed to indicate. – Qiaochu Yuan Jan 25 '16 at 18:32 The powers of any number of the form $a^k$, with $a,k$ natural, satisfy this. You can stop at any power you want. You can also take the union of powers of $a^k$ and $b^m$ as long as $\gcd(a,b)=1$, and their products, stopping each at any power you want. This extends to as many pairwise coprime numbers as you want. You can include any subset of the primes, each to any power you like, so there are uncountably many such sets. • @QiaochuYuan: I have made it explicit that you can use an infinite set of primes, which makes the collection uncountable. I see you show there are more than this.
– Ross Millikan Jan 25 '16 at 18:37 • My bad, I misread the last bit. – Qiaochu Yuan Jan 25 '16 at 18:38
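The prime-valuation construction in the first answer can be checked mechanically. Below is a small Python sketch; the choice $S_2=\{0,2\}$, $S_3=\{0,1\}$ (with $S_p=\{0\}$ for every other prime) is an arbitrary illustration, not from the thread. Closure holds because gcd and lcm act as componentwise min and max on prime exponents, and the min or max of two elements of a set is again one of those elements.

```python
from math import gcd
from itertools import product

# Hypothetical choice: S_2 = {0, 2}, S_3 = {0, 1}, S_p = {0} for all other primes.
S2, S3 = {0, 2}, {0, 1}
A = sorted(2**a * 3**b for a, b in product(S2, S3))  # the resulting set

def lcm(x, y):
    return x * y // gcd(x, y)

# gcd/lcm take the min/max of each prime exponent; min/max of two members
# of S_p lies in S_p, so A is closed under both operations.
closed = all(gcd(x, y) in A and lcm(x, y) in A for x in A for y in A)
print(A, closed)  # [1, 3, 4, 12] True
```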
https://www.mathworks.com/examples/control/mw/control-ex98378292-kalman-filtering
MATLAB Examples

# Kalman Filtering

This case study illustrates Kalman filter design and simulation. Both steady-state and time-varying Kalman filters are considered.

## Overview of the Case Study

Consider a discrete plant with additive Gaussian noise w[n] on the input u[n]:

x[n+1] = Ax[n] + B(u[n] + w[n])
y[n] = Cx[n]

The following matrices represent the dynamics of this plant.

A = [1.1269 -0.4940 0.1129;
     1.0000 0 0;
     0 1.0000 0];
B = [-0.3832; 0.5919; 0.5191];
C = [1 0 0];

## Discrete Kalman Filter

The equations of the steady-state Kalman filter for this problem are given as follows.

• Measurement update:

x̂[n|n] = x̂[n|n-1] + M(yv[n] - Cx̂[n|n-1])

• Time update:

x̂[n+1|n] = Ax̂[n|n] + Bu[n]

In these equations:

• x̂[n|n-1] is the estimate of x[n], given past measurements up to yv[n-1].
• x̂[n|n] is the updated estimate based on the last measurement yv[n].

Given the current estimate x̂[n|n], the time update predicts the state value at the next sample n + 1 (one-step-ahead predictor). The measurement update then adjusts this prediction based on the new measurement yv[n+1]. The correction term is a function of the innovation, that is, the discrepancy between the measured and predicted values of y[n+1]. This discrepancy is given by:

yv[n+1] - Cx̂[n+1|n]

The innovation gain M is chosen to minimize the steady-state covariance of the estimation error, given the noise covariances E(w[n]w[n]') = Q and E(v[n]v[n]') = R.

You can combine the time and measurement update equations into one state-space model, the Kalman filter:

x̂[n+1|n] = A(I - MC)x̂[n|n-1] + [B AM][u[n]; yv[n]]
ŷ[n|n] = C(I - MC)x̂[n|n-1] + CMyv[n]

This filter generates an optimal estimate ŷ[n|n] of y[n]. Note that the filter state is x̂[n|n-1].

You can design the steady-state Kalman filter described above with the function kalman. First specify the plant model with the process noise:

x[n+1] = Ax[n] + Bu[n] + Bw[n] (state equation)
y[n] = Cx[n] (measurement equation)

Here, the first expression is the state equation, and the second is the measurement equation. The following command specifies this plant model. The sample time is set to -1, to mark the model as discrete without specifying a sample time.
Plant = ss(A,[B B],C,0,-1,'inputname',{'u' 'w'},'outputname','y');

Assuming that Q = R = 1, design the discrete Kalman filter.

Q = 1; R = 1;
[kalmf,L,P,M] = kalman(Plant,Q,R);

This command returns a state-space model kalmf of the filter, as well as the innovation gain M.

M

M = 0.3798 0.0817 -0.2570

The inputs of kalmf are u and yv, and the outputs are the plant output estimate y_e = ŷ[n|n] and the state estimates x̂[n|n]. Because you are interested in the output estimate y_e, select the first output of kalmf and discard the rest.

kalmf = kalmf(1,:);

To see how the filter works, generate some input data and random noise and compare the filtered response with the true response y. You can either generate each response separately, or generate both together. To simulate each response separately, use lsim with the plant alone first, and then with the plant and filter hooked up together. The joint simulation alternative is detailed next. The block diagram below shows how to generate both true and filtered outputs.

You can construct a state-space model of this block diagram with the functions parallel and feedback. First build a complete plant model with u, w, v as inputs, and y and yv (measurements) as outputs.

a = A; b = [B B 0*B]; c = [C;C]; d = [0 0 0;0 0 1];
P = ss(a,b,c,d,-1,'inputname',{'u' 'w' 'v'},'outputname',{'y' 'yv'});

Then use parallel to form the parallel connection of the following illustration.

sys = parallel(P,kalmf,1,1,[],[]);

Finally, close the sensor loop by connecting the plant output yv to the filter input yv with positive feedback.

SimModel = feedback(sys,1,4,2,1); % Close loop around input #4 and output #2
SimModel = SimModel([1 3],[1 2 3]); % Delete yv from I/O list

The resulting simulation model has w, v, u as inputs, and y and y_e as outputs. View the InputName and OutputName properties to verify.

SimModel.InputName

ans = 3x1 cell array {'w'} {'v'} {'u'}

SimModel.OutputName

ans = 2x1 cell array {'y' } {'y_e'}

You are now ready to simulate the filter behavior.
Generate a sinusoidal input u and process and measurement noise vectors w and v.

t = [0:100]';
u = sin(t/5);
n = length(t);
rng default
w = sqrt(Q)*randn(n,1);
v = sqrt(R)*randn(n,1);

Simulate the responses.

[out,x] = lsim(SimModel,[w,v,u]);
y = out(:,1); % true response
ye = out(:,2); % filtered response
yv = y + v; % measured response

Compare the true and filtered responses graphically.

subplot(211), plot(t,y,'--',t,ye,'-'), xlabel('No. of samples'), ylabel('Output')
title('Kalman filter response')
subplot(212), plot(t,y-yv,'-.',t,y-ye,'-'), xlabel('No. of samples'), ylabel('Error')

The first plot shows the true response y (dashed line) and the filtered output (solid line). The second plot compares the measurement error (dash-dot) with the estimation error (solid). This plot shows that the noise level has been significantly reduced. This is confirmed by calculating covariance errors. The error covariance before filtering (measurement error) is:

MeasErr = y-yv;
MeasErrCov = sum(MeasErr.*MeasErr)/length(MeasErr)

MeasErrCov = 0.9992

The error covariance after filtering (estimation error) is reduced:

EstErr = y-ye;
EstErrCov = sum(EstErr.*EstErr)/length(EstErr)

EstErrCov = 0.4944

## Time-Varying Kalman Filter

The time-varying Kalman filter is a generalization of the steady-state filter for time-varying systems or LTI systems with nonstationary noise covariance. Consider the following plant state and measurement equations:

x[n+1] = A[n]x[n] + B[n]u[n] + G[n]w[n]
yv[n] = C[n]x[n] + v[n]

The time-varying Kalman filter is given by the following recursions:

• Measurement update:

M[n] = P[n|n-1]C'(CP[n|n-1]C' + R)^(-1)
x̂[n|n] = x̂[n|n-1] + M[n](yv[n] - Cx̂[n|n-1])
P[n|n] = (I - M[n]C)P[n|n-1]

• Time update:

x̂[n+1|n] = Ax̂[n|n] + Bu[n]
P[n+1|n] = AP[n|n]A' + GQG'

Here, x̂[n|n-1] and x̂[n|n] are as described previously. Additionally, P[n|n-1] and P[n|n] are the covariance matrices of the corresponding estimation errors:

P[n|n-1] = E((x[n] - x̂[n|n-1])(x[n] - x̂[n|n-1])')
P[n|n] = E((x[n] - x̂[n|n])(x[n] - x̂[n|n])')

For simplicity, the subscripts indicating the time dependence of the state-space matrices have been dropped. Given initial conditions x̂[1|0] and P[1|0], you can iterate these equations to perform the filtering. You must update both the state estimates and error covariance matrices at each time sample.

## Time-Varying Design

To implement these filter recursions, first generate noisy output measurements.
Use the process noise w and measurement noise v generated previously.

sys = ss(A,B,C,0,-1);
y = lsim(sys,u+w);
yv = y + v;

Assume the following initial conditions:

x̂[1|0] = 0, P[1|0] = BQB'

Implement the time-varying filter with a for loop.

P = B*Q*B'; % Initial error covariance
x = zeros(3,1); % Initial condition on the state
ye = zeros(length(t),1);
errcov = zeros(length(t),1);
for i = 1:length(t)
  % Measurement update
  Mn = P*C'/(C*P*C'+R);
  x = x + Mn*(yv(i)-C*x); % x[n|n]
  P = (eye(3)-Mn*C)*P; % P[n|n]
  ye(i) = C*x;
  errcov(i) = C*P*C';
  % Time update
  x = A*x + B*u(i); % x[n+1|n]
  P = A*P*A' + B*Q*B'; % P[n+1|n]
end

Compare the true and estimated output graphically.

subplot(211), plot(t,y,'--',t,ye,'-')
title('Time-varying Kalman filter response')
xlabel('No. of samples'), ylabel('Output')
subplot(212), plot(t,y-yv,'-.',t,y-ye,'-')
xlabel('No. of samples'), ylabel('Output')

The first plot shows the true response y (dashed line) and the filtered response (solid line). The second plot compares the measurement error (dash-dot) with the estimation error (solid). The time-varying filter also estimates the covariance errcov of the estimation error at each sample. Plot it to see if your filter reached steady state (as you expect with stationary input noise).

subplot(211)
plot(t,errcov), ylabel('Error covar')

From this covariance plot, you can see that the output covariance did indeed reach a steady state in about five samples. From then on, your time-varying filter has the same performance as the steady-state version. Compare with the estimation error covariance derived from the experimental data:

EstErr = y - ye;
EstErrCov = sum(EstErr.*EstErr)/length(EstErr)

EstErrCov = 0.4934

This value is smaller than the theoretical value errcov and close to the value obtained for the steady-state design. Finally, note that the final value Mn and the steady-state value M of the innovation gain matrix coincide.
Mn Mn = 0.3798 0.0817 -0.2570 M M = 0.3798 0.0817 -0.2570 ## Bibliography [1] Grimble, M.J., Robust Industrial Control: Optimal Design Approach for Polynomial Systems, Prentice Hall, 1994, p. 261 and pp. 443-456.
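The measurement/time-update recursions above are not MATLAB-specific. Here is a language-neutral sketch of the same pair of updates for a hypothetical scalar plant (the values of a, b, c, Q, R below are made up for illustration, not the case-study matrices): iterating the two updates drives the error covariance P to the steady-state value, i.e. to the fixed point of the scalar Riccati recursion, which is what the kalman command computes directly.

```python
# Scalar sketch of the measurement/time updates (illustrative values,
# not the case-study A, B, C matrices).
a, b, c = 0.9, 1.0, 1.0    # plant: x[n+1] = a x[n] + b(u[n] + w[n]), yv = c x + v
Q, R = 1.0, 1.0            # process and measurement noise covariances

P = b * Q * b              # initial error covariance P[1|0] = B Q B'
for _ in range(100):
    M = P * c / (c * P * c + R)    # innovation gain M[n]
    P = (1 - M * c) * P            # measurement update: P[n|n]
    P = a * P * a + b * Q * b      # time update: P[n+1|n]

# At steady state, P solves the scalar Riccati fixed-point equation
# P = a^2 P R / (c^2 P + R) + b^2 Q.
residual = P * (c * c * P + R) - a * a * P * R - b * b * Q * (c * c * P + R)
print(abs(residual) < 1e-9)  # True
```

The convergence is geometric, which matches the observation in the case study that the time-varying filter's error covariance settles to the steady-state value within a few samples.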
https://www.physicsforums.com/threads/relate-wavelength-and-energy-scale.885442/
Homework Help: Relate wavelength and energy scale Tags: 1. Sep 15, 2016 senobim 1. The problem statement, all variables and given/known data Light intensity is measured by a monochromator and is given by $$I(\lambda ) = I_{0}\lambda ^{3}$$ How to change it to the energy scale $$I(h\nu ) = ?$$ 2. Relevant equations Photon energy $$E = h\nu, E =\frac{hc}{\lambda }$$ 3. The attempt at a solution It's kind of strange to relate energy with wavelength 2. Sep 15, 2016 Staff: Mentor How much power is there between the frequencies $\nu$ and $\nu+\delta\nu$? What happens if you let $\delta\nu$ go to zero? 3. Sep 16, 2016 senobim Power could be calculated from intensity and area, P = I * A, and it could be related in terms of photon energy: P = N * E --> P = N * $h\nu$, N - number of photons Last edited: Sep 16, 2016 4. Sep 16, 2016 Staff: Mentor Yes, but that is not the point. Can you calculate it with the given intensity profile? Approximations for small $\delta\nu$ are fine. 5. Sep 16, 2016 senobim can't think of how to do that.. any hints? ;] Last edited: Sep 16, 2016 6. Sep 16, 2016 Staff: Mentor How much intensity is there between two wavelengths $\lambda$ and $\lambda + \delta \lambda$? How is that related to my previous question? 7. Sep 16, 2016 senobim $$I = I_{0}(\lambda + (\lambda + \delta \lambda ))^{3}$$ and $$\nu = \frac{c}{\lambda }$$ Last edited: Sep 16, 2016 8. Sep 16, 2016 Staff: Mentor That is not right. You'll need that formula once you found the intensity in the wavelength range. 9. Sep 16, 2016 senobim emm.. Intensity at some wavelength is defined by $$I(\lambda)=I_{0}\lambda^{3}$$ and at some different lambda $$I(\lambda_{2})=I_{0}(\lambda_{2})^{3}$$ where am I going wrong? 10. Sep 16, 2016 Staff: Mentor Right, and how is this related to what you wrote in #7? Can you draw a sketch, $I(\lambda)$? How would you get the integrated intensity between $\lambda$ and $\lambda + \delta \lambda$? How can you approximate this for very small intervals of $\lambda$? 11.
Sep 16, 2016 senobim you mean something like this? $$I = I_{0}\int \lambda ^{3}d\lambda$$ 12. Sep 16, 2016 Staff: Mentor Yes. There are two approaches: keep the integrals everywhere, or go via the integrands. The former is easier if you know how to do substitutions in integrals, otherwise the latter is easier. 13. Sep 16, 2016 senobim so my answer is $$I=I_{0}\int \lambda ^{3}d\lambda =I_{0}\frac{\lambda ^{4}}{4}$$ and $$I(h\nu )=I_{0}\frac{h}{4}\left ( \frac{c}{\nu } \right )^4$$ am I right? 14. Sep 16, 2016 Staff: Mentor No, that does not work. The second "=" in the first line is wrong, and the transition between the first line and the second line does not make sense. What is $\lambda$ at the very right of the first line? Which wavelength is that? 15. Sep 16, 2016 senobim It seems that I don't get the concept at all.. I just integrated the expression; what could be wrong with that? 16. Sep 16, 2016 Staff: Mentor Okay, let me ask differently: what did you use as range for the integral, and why? 17. Sep 17, 2016 senobim I did an indefinite integral and that's not the case here, maybe I should try something like this $$I = I_{0}\int_{\lambda }^{\lambda +\delta \lambda }\lambda ^{3}d\lambda$$ 18. Sep 17, 2016 Staff: Mentor That's what I suggested in post #6. The integrand should use a different symbol (like $\lambda'$) to avoid mixing two different things. 19. Sep 19, 2016 senobim good, so now we have this $$I = I_{0}\int_{\lambda }^{\lambda +\delta \lambda }\lambda'^{3}\,d\lambda' = I_{0}\frac{\lambda'^4}{4}\Big|_{\lambda}^{\lambda + \delta \lambda} = I_{0}\left(\frac{(\lambda +\delta \lambda)^4 }{4} - \frac{\lambda^4}{4}\right)$$ and how do I relate this to energy? 20. Sep 19, 2016 Staff: Mentor You can find an approximation of this for small $\delta \lambda$. What is the approximate value of an integral if the function is (roughly) constant over the integration range?
This is also the integrated intensity between two specific frequencies, which you can get with the formula relating wavelengths and frequencies. 21. Sep 19, 2016 senobim Could I let $$\delta\lambda \rightarrow 0$$? 22. Sep 19, 2016 Staff: Mentor Well, if you set it exactly to zero then the integrated intensity is zero, so you want to keep the first order in $\delta \lambda$. 23. Sep 20, 2016 senobim still struggling with the approximation, what do you mean by that? Do I need to use the trapezoidal rule for the definite integral approximation? Or something else? 24. Sep 20, 2016 Staff: Mentor Even easier. Approximate it as a rectangle. 25. Sep 22, 2016 senobim In order to perform the approximation, how do I find the y-coordinate for this function? Last edited: Sep 22, 2016
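One way to read the mentor's hints: the intensity in a band must be the same whether the band is parametrized by $\lambda$ or by $\nu$, so $I(\nu)=I(\lambda)\,|d\lambda/d\nu|$. With $\lambda=c/\nu$ and $d\lambda/d\nu=-c/\nu^2$ this gives $I(\nu)=I_0 c^4/\nu^5$ for the profile in the thread. The numeric sketch below checks the band-intensity equality with illustrative values (the band edges and constants are not from the problem).

```python
# Check that the band intensity is parametrization-independent:
# the integral of I0*lam^3 d(lam) over [lam1, lam2] equals the integral
# of I0*c^4/nu^5 d(nu) over the matching frequency band [c/lam2, c/lam1].
# All numerical values are illustrative.
c = 3.0e8                       # speed of light, m/s
I0 = 1.0
lam1, lam2 = 400e-9, 401e-9     # a narrow wavelength band

def integrate(f, a, b, n=10_000):
    # composite midpoint rule
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

band_lam = integrate(lambda lam: I0 * lam**3, lam1, lam2)
band_nu = integrate(lambda nu: I0 * c**4 / nu**5, c / lam2, c / lam1)
match = abs(band_lam - band_nu) / band_lam < 1e-6
print(match)  # True
```

Rewriting $I(\nu)$ in terms of the photon energy $E=h\nu$ is then a further change of variables by the constant factor $h$.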
http://math.stackexchange.com/questions/119350/monotone-class-theorem?answertab=active
# Monotone class theorem I have some questions about the Monotone Class Theorem and its application. First I state the theorem: Let $\mathcal{M}:=\{f_\alpha; \alpha \in J\}$ be a set of bounded functions $f_\alpha:N\to \mathbb{R}$, where $N$ is a set. Further suppose that $\mathcal{M}$ is closed under multiplication, and define $\mathcal{C}:=\sigma(\mathcal{M})$. Let $\mathcal{H}$ be a real vector space of bounded real-valued functions on $N$ and assume: 1. $\mathcal{H}$ contains $\mathcal{M}$. 2. $\mathcal{H}$ contains the constant function $1$. 3. If $0\le f_{\alpha_1}\le f_{\alpha_2}\le \dots$ is a sequence in $\mathcal{H}$ and $f=\lim_n f_{\alpha_n}$ is bounded, then $f\in \mathcal{H}$. Then $\mathcal{H}$ contains all bounded $\mathcal{C}$-measurable functions. My first question: I know the Dynkin lemma, which deals with $\sigma$-algebras and $\pi$-systems. Which is the stronger one, i.e. does the Monotone Class Theorem imply Dynkin or the other way around? (Or are they equivalent?) A reference for a proof would also be great! My second question is about an application. Let $X=(X_t)$ be a right continuous stochastic process with $X_0=0$ a.s., and denote by $F=(F_t)$ the filtration $F_t:=\sigma(X_s;s\le t)$. I want to show: if for all $0\le t_1<\dots<t_n<\infty$ the increments $X_{t_{i}}-X_{t_{i-1}}$ are independent, then $X_t-X_s$ is independent of $F_s$ for $t>s$. The hint in the book is to use the Monotone Class Theorem. So $$\mathcal{H}:=\{Y:\Omega\to \mathbb{R} \mbox{ bounded};\ E[h(X_t-X_s)Y]=E[h(X_t-X_s)]E[Y]\ \forall h:\mathbb{R}\to\mathbb{R} \mbox{ bounded and Borel-measurable}\}$$ This choice is clear. Now they say $$\mathcal{M}:=\{\prod_{i=1}^n f_i(X_{s_i});0\le s_1\le \dots\le s_n\le s,n\in \mathbb{N},f_i:\mathbb{R}\to\mathbb{R} \mbox{ bounded and Borel-measurable}\}$$ 1. Why is $\sigma(\mathcal{M})=\mathcal{F}_s$? 2. Why do they define $\mathcal{M}$ like this? (As a family of products?)
$F$ is a filtration and $\sigma(\mathcal M)$ is a $\sigma$-algebra/family of measurable maps, right? What do you mean then by $F = \sigma(\mathcal M)$? – Ilya Mar 13 '12 at 9:24 In the definition of $\mathcal{M}$ is it not supposed to be $0\leq s_1<\cdots < s_n\leq t$? – Stefan Hansen Mar 14 '12 at 8:38
https://raymondkww.github.io/publications/liu-mao-wong20/
## Median Matrix Completion: from Embarrassment to Optimality International Conference on Machine Learning (ICML) W. Liu, X. Mao and R. K. W. Wong ### Abstract In this paper, we consider matrix completion with absolute deviation loss and obtain an estimator of the median matrix. Despite several appealing properties of median, the non-smooth absolute deviation loss leads to computational challenge for large-scale data sets which are increasingly common among matrix completion problems. A simple solution to large-scale problems is parallel computing. However, embarrassingly parallel fashion often leads to inefficient estimators. Based on the idea of pseudo data, we propose a novel refinement step, which turns such inefficient estimators into a rate (near-)optimal matrix completion procedure. The refined estimator is an approximation of a regularized least median estimator, and therefore not an ordinary regularized empirical risk estimator. This leads to a non-standard analysis of asymptotic behaviors. Empirical results are also provided to confirm the effectiveness of the proposed method.
https://repo.scoap3.org/record/31513
# Inclusive production of charged pion pairs in proton-antiproton collisions Ahmadov, A.I. (Department of Theoretical Physics, Baku State University, Z. Khalilov st. 23, AZ-1148, Baku, Azerbaijan) ; Aydin, C. (Department of Physics, Karadeniz Technical University, 61080, Trabzon, Turkey) ; Uzun, O. (Department of Physics, Karadeniz Technical University, 61080, Trabzon, Turkey) 11 March 2019 Abstract: In this study, we have considered the contribution of the higher-twist (HT) effects of the subprocesses to the inclusive pion pair production cross section in high energy proton-antiproton collisions by using various pion distribution amplitudes (DAs) within the frozen coupling constant approach and compared them with the leading-twist contributions. The feature of the HT effects may help the theoretical interpretation of the future PANDA experiment. The dependencies of the HT contribution on the transverse momentum $p_T$, the center of mass energy $\sqrt{s}$, and the variable $x_T$ are discussed numerically with special emphasis put on DAs. Moreover, the obtained analytical and numerical results for the differential cross section of the pion pair production are compared with the elastic backward scattering of the pion on the proton. We show that the main contribution to the inclusive cross section comes from the HT direct production process via gluon-gluon fusion. Also, it is strongly dependent on the pion DAs, the momentum cut-off parameter $\Delta p$, and $\langle q_T^2\rangle$, which is the mean square of the intrinsic momentum of either initial parton. Published in: Physical Review C 99 (2019)
http://mathhelpforum.com/trigonometry/50429-trigonometry.html
1. ## Trigonometry Greetings, I have the problem below. I've tried dividing everything by sin a or cos a; I get one or several tg a's, but eventually I end up with a single sin a or cos a with some other numbers, and from there I loop... Thank you. 2. Hello, If tg a means tan(a), use the formula $1+\tan^2 a=\frac{1}{\cos^2 a}$ and $\cos^2 a+\sin^2 a=1$. Bye 3. I do not understand. I am left with an extra cos a again. Indeed, tg is tan; I am taught to abbreviate it as tg. 4. Hello, Since tan a = -2 and $1+\tan^2 a=\frac{1}{\cos^2 a}$, $\cos a=\pm\frac{1}{\sqrt{5}}$. Use $\tan a=\frac{\sin a}{\cos a}$ to get $\sin a=\mp\frac{2}{\sqrt{5}}$. Plug in everything. Bye. 5. Originally Posted by Logic Greetings, I have the problem below. I've tried dividing everything by sin a or cos a; I get one or several tg a's, but eventually I end up with a single sin a or cos a with some other numbers and from there I loop... Thank you. If the trigonometric functions don't want to cooperate, bypass them and use the reference triangle of the angle instead. tan(a) = -2 That means "a" is in the 2nd or 4th quadrant. Taking the 2nd quadrant: opposite side = 2, adjacent side = -1, hypotenuse = sqrt(5). sin(a) = 2/sqrt(5), cos(a) = -1/sqrt(5). So, sin(a) / [sin^3(a) + 3cos^2(a)] = [2/sqrt(5)] / [(2/sqrt(5))^3 + 3(-1/sqrt(5))^2]
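The reference-triangle answer can be sanity-checked numerically. The sketch below takes the 2nd-quadrant angle with tan a = -2 (the quadrant chosen in the last post; the 4th-quadrant angle would flip the sign of the result) and compares the direct evaluation with the rationalized closed form 10/(8 + 3*sqrt(5)), which follows by simplifying the last expression above.

```python
import math

# 2nd-quadrant angle with tan(a) = -2 (the choice made in the forum answer;
# the 4th-quadrant angle would give the negated value).
a = math.pi + math.atan(-2)
assert math.isclose(math.tan(a), -2)

# Reference-triangle values: opposite 2, adjacent -1, hypotenuse sqrt(5)
sin_a, cos_a = 2 / math.sqrt(5), -1 / math.sqrt(5)
assert math.isclose(math.sin(a), sin_a) and math.isclose(math.cos(a), cos_a)

value = math.sin(a) / (math.sin(a)**3 + 3 * math.cos(a)**2)
exact = 10 / (8 + 3 * math.sqrt(5))   # rationalized form of the same quantity
print(math.isclose(value, exact))     # True
```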
http://paperity.org/search/?q=authors%3A%22Alessandro+Broggio%22&f_country=CN
# Search: authors:"Alessandro Broggio" 3 papers found. Use AND, OR, NOT, +word, -word, "long phrase", (parentheses) to fine-tune your search. #### NNLL resummation for the associated production of a top pair and a Higgs boson at the LHC We study the resummation of soft gluon emission corrections to the production of a top-antitop pair in association with a Higgs boson at the Large Hadron Collider. Starting from a soft-gluon resummation formula derived in previous work, we develop a bespoke parton-level Monte Carlo program which can be used to calculate the total cross section along with differential... #### Associated production of a top pair and a Higgs boson beyond NLO We consider soft gluon emission corrections to the production of a top-antitop pair in association with a Higgs boson at hadron colliders. In particular, we present a soft-gluon resummation formula for this production process and gather all elements needed to evaluate it at next-to-next-to-leading logarithmic order. We employ these results to obtain approximate next-to-next-to... #### NNLL momentum-space resummation for stop-pair production at the LHC If supersymmetry near the TeV scale is realized in Nature, the pair production of scalar top squarks is expected to be observable at the Large Hadron Collider. Recently, effective field-theory methods were employed to obtain approximate predictions for the cross section for this process, which include soft-gluon emission effects up to next-to-next-to-leading order (NNLO) in...
http://eprints.maths.ox.ac.uk/605/
# Asymptotic analysis of a secondary bifurcation of the one-dimensional Ginzburg-Landau equations of superconductivity

Aftalion, Amandine and Chapman, S. J. (2000) Asymptotic analysis of a secondary bifurcation of the one-dimensional Ginzburg-Landau equations of superconductivity. SIAM Journal on Applied Mathematics, 60 (4). pp. 1157-1176. ISSN 1095-712X

Official URL: http://www.siam.org/journals/siap/60-4/34479.html

## Abstract

The bifurcation of asymmetric superconducting solutions from the normal solution is considered for the one-dimensional Ginzburg--Landau equations by the methods of formal asymptotics. The behavior of the bifurcating branch depends on the parameters $d$, the size of the superconducting slab, and $\kappa$, the Ginzburg--Landau parameter. The secondary bifurcation in which the asymmetric solution branches reconnect with the symmetric solution branch is studied for values of $\kappa$ for which it is close to the primary bifurcation from the normal state. These values of $\kappa$ form a curve in the $(\kappa,d)$-plane, which is determined. At one point on this curve, called the quintuple point, the primary bifurcations switch from being subcritical to supercritical, requiring a separate analysis. The results answer some of the conjectures of [A. Aftalion and W. C. Troy, Phys. D, 132 (1999), pp. 214--232].

Item Type: Article
Keywords: superconducting; bifurcation; symmetric; asymmetric
Subjects: O - Z > Optics, electromagnetic theory; O - Z > Ordinary differential equations
Research Group: Oxford Centre for Industrial and Applied Mathematics
Deposited by Jon Chapman, 24 May 2007; last modified 29 May 2015
http://v8doc.sas.com/sashtml/ets/chap18/sect17.htm
The STATESPACE Procedure

## Preliminary Autoregressive Models

After computing the sample autocovariance matrices, PROC STATESPACE fits a sequence of vector autoregressive models. These preliminary autoregressive models are used to estimate the autoregressive order of the process and limit the order of the autocovariances considered in the state vector selection process.

### Yule-Walker Equations for Forward and Backward Models

Unlike a univariate autoregressive model, a multivariate autoregressive model has different forms, depending on whether the present observation is being predicted from the past observations or from the future observations.

Let $x_t$ be the $r$-component stationary time series given by the VAR statement after differencing and subtracting the vector of sample means. (If the NOCENTER option is specified, the mean is not subtracted.) Let $n$ be the number of observations of $x_t$ from the input data set.

Let $e_t$ be a vector white noise sequence with mean vector 0 and variance matrix $\Sigma_p$, and let $n_t$ be a vector white noise sequence with mean vector 0 and variance matrix $\Omega_p$. Let $p$ be the order of the vector autoregressive model for $x_t$.

The forward autoregressive form based on the past observations is written as follows:

$$x_t = \sum_{i=1}^{p} \Phi_i^p x_{t-i} + e_t$$

The backward autoregressive form based on the future observations is written as follows:

$$x_t = \sum_{i=1}^{p} \Psi_i^p x_{t+i} + n_t$$

Letting $E$ denote the expected value operator, the autocovariance sequence for the $x_t$ series, $\Gamma_i$, is

$$\Gamma_i = E\,x_t x_{t-i}'$$

The Yule-Walker equations for the autoregressive model that matches the first $p$ elements of the autocovariance sequence are

$$\Gamma_j = \sum_{i=1}^{p} \Phi_i^p\,\Gamma_{j-i}, \qquad j = 1,\dots,p$$

and

$$\Gamma_j' = \sum_{i=1}^{p} \Psi_i^p\,\Gamma_{j-i}', \qquad j = 1,\dots,p$$

Here $\Phi_i^p$ are the coefficient matrices for the past observation form of the vector autoregressive model, and $\Psi_i^p$ are the coefficient matrices for the future observation form. More information on the Yule-Walker equations in the multivariate setting can be found in Whittle (1963) and Ansley and Newbold (1979).
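As a concrete scalar ($r = 1$) illustration of the Yule-Walker equations above, the sketch below solves the order-2 system for a process whose autocovariances are known exactly — those of the AR(1) model $x_t = 0.5\,x_{t-1} + e_t$ with unit innovation variance. This is a hand-rolled example, not PROC STATESPACE code; with exact (rather than sample) autocovariances the fit recovers the true coefficients.

```python
# Exact autocovariances of the scalar AR(1) process x_t = 0.5*x_{t-1} + e_t,
# Var(e_t) = 1:  gamma_0 = 1/(1 - 0.25) and gamma_k = 0.5**|k| * gamma_0.
phi_true = 0.5
gamma = lambda k: phi_true ** abs(k) / (1.0 - phi_true ** 2)

# Order-2 Yule-Walker system (scalar case of Gamma_j = sum_i Phi_i Gamma_{j-i}):
#   [g0 g1] [phi1]   [g1]
#   [g1 g0] [phi2] = [g2]
g0, g1, g2 = gamma(0), gamma(1), gamma(2)
det = g0 * g0 - g1 * g1
phi1 = (g1 * g0 - g1 * g2) / det      # Cramer's rule
phi2 = (g0 * g2 - g1 * g1) / det

# Innovation variance: sigma2_p = gamma_0 - sum_i phi_i * gamma_i
sigma2 = g0 - phi1 * g1 - phi2 * g2

print(phi1, phi2, sigma2)   # approximately 0.5, 0.0, 1.0 - the AR(1) truth
```

Because the underlying process is AR(1), the second coefficient comes out zero and the fitted innovation variance equals the true value of 1.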
The innovation variance matrices for the two forms can be written as follows:

$$\Sigma_p = \Gamma_0 - \sum_{i=1}^{p} \Phi_i^p\,\Gamma_i' \qquad\qquad \Omega_p = \Gamma_0 - \sum_{i=1}^{p} \Psi_i^p\,\Gamma_i$$

The autoregressive models are fit to the data using the preceding Yule-Walker equations with $\Gamma_i$ replaced by the sample covariance sequence $C_i$. The covariance matrices are calculated as

$$C_i = \frac{1}{n} \sum_{t=i+1}^{n} x_t x_{t-i}'$$

Let $\hat\Phi_i^p$, $\hat\Psi_i^p$, $\hat\Sigma_p$, and $\hat\Omega_p$ represent the Yule-Walker estimates of $\Phi_i^p$, $\Psi_i^p$, $\Sigma_p$, and $\Omega_p$, respectively. These matrices are written to an output data set when the OUTAR= option is specified.

When the PRINTOUT=LONG option is specified, the sequence of matrices $\hat\Sigma_p$ and the corresponding correlation matrices are printed. The sequence of matrices $\hat\Sigma_p$ is used to compute Akaike information criteria for selection of the autoregressive order of the process.

### Akaike Information Criterion

The Akaike information criterion, or AIC, is defined as -2(maximum of log likelihood) + 2(number of parameters). Since the vector autoregressive models are estimated from the Yule-Walker equations, not by maximum likelihood, the exact likelihood values are not available for computing the AIC. However, for the vector autoregressive model the maximum of the log likelihood can be approximated as

$$\ln L \approx -\frac{n}{2}\,\ln\bigl(|\hat\Sigma_p|\bigr)$$

Thus, the AIC for the order $p$ model is computed as

$$\mathrm{AIC}_p = n\,\ln\bigl(|\hat\Sigma_p|\bigr) + 2pr^2$$

You can use the printed AIC array to compute a likelihood ratio test of the autoregressive order. The log-likelihood ratio test statistic for testing the order $p$ model against the order $p-1$ model is

$$n\,\ln\bigl(|\hat\Sigma_{p-1}|\bigr) - n\,\ln\bigl(|\hat\Sigma_p|\bigr)$$

This quantity is asymptotically distributed as a $\chi^2$ with $r^2$ degrees of freedom if the series is autoregressive of order $p-1$. It can be computed from the AIC array as

$$\mathrm{AIC}_{p-1} - \mathrm{AIC}_p + 2r^2$$

You can evaluate the significance of these test statistics with the PROBCHI function in a SAS DATA step, or with a $\chi^2$ table.

### Determining the Autoregressive Order

Although the autoregressive models can be used for prediction, their primary value is to aid in the selection of a suitable portion of the sample covariance matrix for use in computing canonical correlations.
If the multivariate time series $x_t$ is of autoregressive order $p$, then the vector of past values to lag $p$ is considered to contain essentially all the information relevant for prediction of future values of the time series.

By default, PROC STATESPACE selects the order, $p$, producing the autoregressive model with the smallest $\mathrm{AIC}_p$. If the value $p$ for the minimum $\mathrm{AIC}_p$ is less than the value of the PASTMIN= option, then $p$ is set to the PASTMIN= value. Alternatively, you can use the ARMAX= and PASTMIN= options to force PROC STATESPACE to use an order you select.

### Significance Limits for Partial Autocorrelations

The STATESPACE procedure prints a schematic representation of the partial autocorrelation matrices indicating which partial autocorrelations are significantly greater or significantly less than 0. Figure 18.10 shows an example of this table.

```
The STATESPACE Procedure
Schematic Representation of Partial Autocorrelations

Name/Lag   1   2   3   4   5   6   7   8   9  10
x          ++  +.  ..  ..  ..  ..  ..  ..  ..  ..
y          ++  ..  ..  ..  ..  ..  ..  ..  ..  ..

+ is > 2*std error,  - is < -2*std error,  . is between
```

Figure 18.10: Significant Partial Autocorrelations

The partial autocorrelations are from the sample partial autoregressive matrices $\hat\Phi_p^p$. The standard errors used for the significance limits of the partial autocorrelations are computed from the sequence of matrices $\hat\Sigma_p$ and $\hat\Omega_p$.

Under the assumption that the observed series arises from an autoregressive process of order $p-1$, the $p$th sample partial autoregressive matrix $\hat\Phi_p^p$ has an asymptotic variance matrix of order $1/n$. The significance limits for $\hat\Phi_p^p$ used in the schematic plot of the sample partial autoregressive sequence are derived by replacing the unknown population matrices with their sample estimators to produce the variance estimate.
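The claim in the AIC section that the likelihood-ratio statistic can be read off the AIC array is a one-line algebraic identity: with $\mathrm{AIC}_p = n\ln|\hat\Sigma_p| + 2pr^2$, the difference $\mathrm{AIC}_{p-1} - \mathrm{AIC}_p + 2r^2$ equals $n(\ln|\hat\Sigma_{p-1}| - \ln|\hat\Sigma_p|)$. A quick numeric check with made-up innovation variances (illustrative values, not SAS output):

```python
import math

# Scalar (r = 1) check of: LR = AIC_{p-1} - AIC_p + 2*r^2,
# where AIC_p = n*ln|Sigma_p| + 2*p*r^2 for a Yule-Walker fit of order p.
n, r = 100, 1
sigma2 = {1: 2.5, 2: 2.0}            # hypothetical innovation variances

def aic(p):
    return n * math.log(sigma2[p]) + 2 * p * r ** 2

# Log-likelihood ratio statistic for testing order 2 against order 1
lr = n * (math.log(sigma2[1]) - math.log(sigma2[2]))

print(lr, aic(1) - aic(2) + 2 * r ** 2)   # the two numbers agree
```

The $2(p-1)r^2$ and $2pr^2$ penalty terms differ by exactly $2r^2$, which is why that constant reappears in the identity.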
https://www.symmetrymagazine.org/article/starting-from-the-bottom?language=en&page=1
A joint Fermilab/SLAC publication

Artwork by Sandbox Studio, Chicago, with Corinne Mucha

# Starting from the bottom

03/20/18

The bottom quark may lead physicists on a path to new discoveries.

The Standard Model of particle physics has been developed over several decades to describe the properties and interactions of elementary particles. The model has been extended and modified with new information, but time and again, experiments have bolstered physicists’ confidence in it. And yet, scientists know that the model is incomplete. It cannot predict the masses of certain particles, nor can it explain what most of the universe is made of. To discover what lies beyond the Standard Model, scientists are searching for its flaws—untenable assumptions and phenomena that it does not predict. A growing set of results from the study of bottom quarks may offer physicists a welcome chance to do just that.

“The Standard Model is very rigid,” says Marco Nardecchia, a theorist from Italy, “so the best way to break it is by precisely testing its predictions.”

The Standard Model makes many detailed predictions about how particles should interact or decay. Some subatomic processes are so complicated that even theorists aren’t quite sure exactly how they are supposed to work. For one: quarks—the constituents that make up elementary particles—should interact in the same way with the electron as with its heavier cousins, the muon or tau lepton.

There are six types of quarks. The lightest and most common are the up and down quarks, which together make up protons and neutrons. Particles carrying a bottom quark—which is much heavier—are short-lived. In its decay, the bottom quark transitions into a lighter quark, preferentially a charm quark and rarely an up quark, forming another known particle. The remaining energy is carried by a charged lepton: an electron, a muon or a tau, each accompanied by its associated neutrino.
According to the Standard Model, the rates of producing electrons, muons and taus differ only due to the very different masses of these three charged leptons. (The tau mass, for example, exceeds the electron mass by a factor of about 3500.)

“These predictions are straightforward and precise,” says Vera Lüth, a scientist on the BaBar experiment, “which is why we decided to pursue these measurements in the first place.”

Scientists working on three different experiments are testing these predictions by examining specific decays of particles that carry a bottom quark.

The first hint of an unexpected tau enhancement appeared in 2012 at the BaBar experiment at SLAC National Accelerator Laboratory, which studied close to 500 million events produced in electron-positron collisions, and reconstructed less than 2000 decays involving taus. In 2015, the Belle experiment in Japan reported a similar enhancement in the tau rate in data collected from electron-positron collisions at the same energy.

“A friend working on another experiment was sure that we had done something wrong,” Lüth says. “Then they observed the same effect.”

In 2015, scientists working on the LHCb experiment operating at CERN saw signs of the same phenomenon in very large samples of proton-proton collisions at much higher energy and collision rates.

“All these results point in the same direction,” says Hassan Jawahery, a professor at the University of Maryland working on LHCb. “That’s what puzzles everyone.”

On their own, these individual results have a significance below the level that would raise an eyebrow. But together, they are “intriguing,” according to Tom Browder, the spokesperson of the Belle experiment and its successor, Belle II. “We are pretty sure that something new is out there.
Proving even a tiny deviation from the Standard Model could lead to a revolution in our field.”

The results accumulated so far have already inspired theorists to speculate about what kind of new physics processes might cause these enhancements. Some theories suggest that perhaps there is a yet undiscovered charged Higgs boson that favors the heavy tau over the much lighter muon and electron. Other models predict the existence of at least one new particle outside the Standard Model. “We may need something which interacts with quarks and leptons simultaneously,” Nardecchia says.

Scientists won’t know what’s happening without further study, and gathering enough data to allow more detailed and precise studies will be a crucial step toward finding out.

Scientists at the LHCb experiment are only at the beginning of this study. They plan to analyze about four times as many events in the next few years. They hope to complete new and updated measurements by this summer. The LHC accelerator complex program foresees major upgrades that will enlarge the experiments’ datasets over the next decade. In parallel, Belle II is scheduled to start collecting data in 2019 and is expected to record enough to shed light on this query in a few years.

Physicists around the globe are eagerly waiting to compare notes.
https://quantumcomputing.stackexchange.com/questions/5000/measuring-qubits-in-qutip
# Measuring qubits in QuTiP

How can you measure qubits in QuTiP? As far as I have seen, you can define a Hamiltonian and let it evolve in time. It is also possible to define a quantum circuit; however, measuring and running it is not possible. Does anyone know how to do this? A simple circuit could be

```
H q[0]
CNOT q[0],q[1]
Measure q[0], q[1]
```

QuTiP is not really meant for this, I think. As said on the home page: "QuTiP is open-source software for simulating the dynamics of open quantum systems." Simulating the dynamics of open quantum systems by definition means you are interested in the quantum state as a result of your algorithm. I tried looking at the notebook examples provided in this Github but could not find measurement examples anywhere. You do have the possibility to get expectation values, though (see this notebook). The main purpose of QuTiP is to explore the dynamics of quantum systems, and therefore density matrices are the tool to use.

According to this answer on Quantum Computing, we can model a measurement operator $P_i$ acting on a density matrix. In the case of the measurement of a single qubit in the computational basis, you have $$P_0=|0\rangle\langle 0|\qquad P_1=|1\rangle\langle 1|$$ If you want to talk about $n$ qubits where you measure just the first one, then you use the measurement operators $$P_0=|0\rangle\langle 0|\otimes\mathbb{I}^{\otimes(n-1)}\qquad P_1=|1\rangle\langle 1|\otimes\mathbb{I}^{\otimes(n-1)}$$

Implementation with the QuTiP `dag` method: first we set up a two-level quantum system with the `basis` method, using a vector `v0` for the zero vector and `v1` for the one vector.

```python
v0 = qp.basis(2, 0)   # |0>
v1 = qp.basis(2, 1)   # |1>

# Outer product via the dag method; this gives a density operator
P0 = v0 * v0.dag()

# Expand to an operator on the full multi-qubit register
M0 = qp.gate_expand_1toN(P0, self.activeQubits, qubitNum)
```

You can find a basic qubit quantum simulator running on QuTiP in the SimulaQron software; see its `crudeSimulator` here.
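The projector algebra described above can be sketched with plain NumPy as a stand-in (the `qp.basis`/`dag` calls are the QuTiP route when the package is installed). The script below builds $P_0 \otimes \mathbb{I}$ and $P_1 \otimes \mathbb{I}$, applies them to the Bell state produced by the H-then-CNOT circuit from the question, and computes the outcome probabilities and post-measurement state:

```python
import numpy as np

# Computational-basis projectors for one qubit
ket0 = np.array([[1.0], [0.0]])
ket1 = np.array([[0.0], [1.0]])
P0 = ket0 @ ket0.T          # |0><0|
P1 = ket1 @ ket1.T          # |1><1|
I2 = np.eye(2)

# Measure only the first of two qubits: P0 (x) I and P1 (x) I
M0 = np.kron(P0, I2)
M1 = np.kron(P1, I2)

# Bell state (|00> + |11>)/sqrt(2), as produced by H then CNOT
psi = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
rho = psi @ psi.T           # density matrix of the pure state

# Born rule: p_i = Tr(M_i rho)
p0 = np.trace(M0 @ rho)
p1 = np.trace(M1 @ rho)
print(p0, p1)               # 0.5 and 0.5 for the Bell state

# Post-measurement state for outcome 0: M0 rho M0 / p0, which is |00><00|
rho0 = M0 @ rho @ M0 / p0
```

The same arithmetic goes through with QuTiP `Qobj` operators; the NumPy version just makes the linear algebra explicit.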
Scroll down to the stochastic solver, and you'll find an attribute for storing measurements. It's certainly not the emphasis of the package, as cnada pointed out, but it's there.
http://physics.stackexchange.com/questions/47584/what-is-the-current-radius-of-cosmological-event-horizon/47585
What is the current radius of the cosmological event horizon?

Doing some crude calculations (using the value of $H_0$ at this point of time only, since it is time dependent but not distance dependent, thanks to Johannes' answer), what is the radius of the cosmological event horizon at this point of time? (Not looking for the changes of the CEH through time.) From here we have for $H_0$: $$H_0 = 73.8 \pm 2.4 (\frac{km}{s})\frac{1}{Mpc}\tag{I}$$ We are seeking the distance $L$ s.t. $H_0L = c = 3\times 10^6 \frac{km}{s}$ $$L=\frac {c}{H_0} = \frac {3\times 10^6}{73.8 \pm 2.4} Mpc \tag{II}$$ Where 1 pc = 3.26 light years ($ly$), $$L=\frac {c}{H_0} = \frac {3\times 10^6\times10^6\times3.26}{73.8 \pm 2.4} ly \tag{III}$$ $$L= \frac {9.78\times 10^{12}}{73.8 \pm 2.4} ly \tag{IV}$$ $$L=1.3\pm0.1\times 10^{11} ly \tag{V}$$ Is this calculation correct? Would the correct calculation make sense? (By making sense I mean: would it seem in accordance with some observations and not in contradiction to others? Or are results like this unconfirmable, mere flights of fancy where they do not relate to anything physical?) The only thing I could use to see it is not invalid (yes, double negative; I cannot say it was valid) is the fact that the observable universe is $45.7×10^9 ly$, but then again by that account $L=10^{123}ly$ would seem just as valid.

The Hubble length $c/H_0$ does not coincide with the radius of the observable universe. Your calculation assumes a Hubble parameter that doesn't change over time. This is not correct: the Hubble parameter $H$ changes over time, and $H_0$ (the Hubble constant) indicates the current value of $H$. To refer to $H_0$ as a 'constant' is a bit of a misnomer; it is effectively a constant in space, not in time. Also note that if $H$ had been constant over time, the Hubble time $1/H$ would be the time taken for the universe to increase in size by a factor of $e$.
It is a coincidence that the current value for $H$ leads to a Hubble time very close to the current age of the universe.

- Is the Hubble length supposed to coincide with the radius of the observable universe, or not? Sorry, I am confused; I don't know why the Hubble length and the radius of the observable universe relate to each other. – Arjang Dec 26 '12 at 3:35
- No, there is no reason to expect the Hubble length to coincide with the radius of the observable universe. – Johannes Dec 26 '12 at 3:40
- Dear Arjang, Johannes' (right) answer was all about these two things' not being equal, so why did you ask again whether they were equal? Do you know derivatives? The Hubble constant is $(da/dt)/a=d\ln a / dt$ where $a$ is the distance between two galaxies or other two points. We're interested in how this distance increases right now. But only if $a=Kt$ for all $t$, a linear function, would $(da/dt)/a$ be the same thing as $1/t$. In our Universe, the dependence of $a$ on $t$ wasn't linear/proportional. – Luboš Motl Dec 26 '12 at 6:41
- The current radius of the visible Universe is 46 billion light years (wolframalpha.com/input/?i=radius+of+visible+universe). It's much greater than 13.7 billion light years because the places close to the cosmic horizon were recently expanding to huge distances even though they correspond to very short periods of time right after the Big Bang - all the metric distances/times have expanded since the Big Bang. – Luboš Motl Dec 26 '12 at 6:43
- @LubošMotl: because of the first line of the answer, I wasn't sure if my question was implying the contrary; I wasn't questioning Johannes but confirming that my question is not implying the contrary. The result I ended up with was $13\times10^{10}$, which is 3 times the answer from Alpha. My question is: at this point of time, how far is the cosmological event horizon? If it was possible to take a snapshot of the universe, with the given $H_0$ at this point of time, since it is not dependent on time.
– Arjang Dec 26 '12 at 8:20

The answer by Johannes is correct - the proper horizon distance in the concordance cosmology is ~46 billion light years. The reason that the answer in (V) was three times larger than that, when it should have been three times smaller, is that the value of c used was incorrect: the speed of light is $3 \times 10^5\ \mathrm{km}/\mathrm{s}$.
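Redoing the question's arithmetic with the corrected value of $c$ gives a Hubble length of roughly $1.3\times10^{10}$ light years rather than $1.3\times10^{11}$; a quick sketch (variable names are ad hoc):

```python
# Hubble length L = c / H0, using the correct speed of light
H0 = 73.8            # km/s per Mpc (value quoted in the question)
c = 3.0e5            # km/s -- the question mistakenly used 3e6
L_Mpc = c / H0       # Hubble length in Mpc, about 4065 Mpc
L_ly = L_Mpc * 1e6 * 3.26    # 1 pc = 3.26 light years
print(L_ly)          # roughly 1.33e10 ly, i.e. ~13.3 billion light years
```

This is about 3.5 times smaller than the ~46-billion-light-year radius of the observable universe, consistent with the answer above.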
http://mathoverflow.net/questions/78381/do-infinitesimal-neighbourhoods-help-to-compute-the-inverse-images-of-coherent-sh
# Do infinitesimal neighbourhoods help to compute the inverse images of coherent sheaves?

Let $i:Z\to X$ be a closed embedding of (projective) varieties; $S$ is a coherent sheaf on $X$. How could one compute $H^*(Z,i^\ast S)$? (I don't know whether I should write $H^\ast (Z,i^{-1}S)$ instead; if there is a substantial distinction, I could be interested in both.) Could one do this using some infinitesimal neighbourhoods of $Z$ in $X$?

- Take $Z=x$ to be a point and $S=\mathcal{O}_X$; then $H^0(i^*S)=k$ and $H^0(i^{-1}S)=\mathcal{O}_x$. So they are certainly different. Computing $H^*(Z,i^*S)$ is not so easy in general, and usually it is more useful to resolve the ideal sheaf than to play with infinitesimal neighbourhoods. I'm not quite sure what else to say. – Donu Arapura Oct 17 '11 at 22:12
- Could you say more about resolutions? For which $S$ is the computation of $H^\ast(Z,i^*S)$ easy? Thank you! – Mikhail Bondarko Oct 18 '11 at 3:22
- For example, if $X$ is projective space then Hilbert's syzygy theorem would allow you to resolve the ideal of $Z$ by graded free modules. This means that the ideal sheaf could be resolved by sums of line bundles. You could compute $H^*(Z,\mathcal{O}_Z)$ using this. – Donu Arapura Oct 18 '11 at 11:46
https://www.studypug.com/algebra-2/binomial-theorem
# Binomial theorem

The Binomial Theorem is a convenient way to expand a power of a binomial without multiplying it out term by term. It can be applied to the power of any binomial:

$$(a+b)^n = \sum_{k=0}^{n} \binom{n}{k}\, a^{n-k}\, b^{k}$$
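For instance, the expansion coefficients $\binom{n}{k}$ can be generated directly with Python's `math.comb` (a small illustrative script, not part of the original lesson):

```python
from math import comb

# Coefficients of (a + b)^n from the Binomial Theorem:
# (a + b)^n = sum over k of C(n, k) * a^(n-k) * b^k
def binomial_coefficients(n):
    return [comb(n, k) for k in range(n + 1)]

print(binomial_coefficients(4))        # [1, 4, 6, 4, 1]

# Sanity check: evaluating the expansion at a = b = 1 gives 2^n
assert sum(binomial_coefficients(10)) == 2 ** 10

# Expand (x + 2)^3 term by term: coefficient of x^(3-k) is C(3, k) * 2^k,
# giving x^3 + 6x^2 + 12x + 8
coeffs = [comb(3, k) * 2 ** k for k in range(4)]
print(coeffs)                          # [1, 6, 12, 8]
```

So for $(x+2)^3$ the theorem hands you all four terms at once, with no repeated multiplication.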
https://www.physicsforums.com/threads/e-mc-2-defines-mass-energy.567534/
E=mc^2, defines mass energy

1. Jan 14, 2012 Algren

E=mc^2 defines mass-energy "Equivalence", but after some surfing, I don't think that mass can be directly converted into pure energy. Is there any place which defines mass in terms of energy?

2. Jan 14, 2012 Antiphon

Matter + antimatter = photon.

3. Jan 14, 2012 maverick_starstrider

You can't simply "make" mass convert to energy; it's more the other way around. If you have enough energy (say 2m_e c^2 worth, where m_e is the mass of an electron) the system will spontaneously convert that energy into the mass of a particle (or rather two particles, since the amount of charge must also be conserved, so you get a positively charged positron and a negatively charged electron). No "mass energy" power stations are allowed, I'm afraid.

4. Jan 14, 2012 atyy

Last edited by a moderator: May 5, 2017

5. Jan 14, 2012 DrStupid

Yes, mass and energy are equivalent, and yes, mass cannot be converted into energy and vice versa. There is no "but".

6. Jan 14, 2012 DrStupid

That would be a violation of energy conservation.

7. Jan 14, 2012 atyy

OK, to be more precise: rest mass can be converted to energy. Relativistic mass is, by definition, the same as energy, and is conserved.

8. Jan 14, 2012 DrGreg

Energy comes in many forms: kinetic, potential, heat, sound, etc. I wouldn't say mass can be converted into energy; I would say mass is one form of energy. And, yes, you can convert mass-energy into other forms of energy. atyy gave some examples in post #4. The total energy, including mass-energy, is conserved. (In special relativity: always; in general relativity: always locally and sometimes globally.)

9. Jan 14, 2012 Antiphon

Mass can (and is routinely) converted directly into energy. Electron-positron annihilation. Energy can and is routinely converted into mass. Gamma rays of the right energy can turn into electron-positron pairs.
This is not confined to rest mass. Nuclear binding energy manifests as increased mass. Or nuclear mass is released as energy in fission. It's all the same thing. Mass and energy are directly transformable into one another.

10. Jan 14, 2012 DrStupid

No, they aren't. Mass and energy are equivalent. It is similar to the radius and circumference of a circle.

11. Jan 14, 2012 Passionflower

I think that rest mass at the quantum level cannot possibly exist, as nothing can be at rest; this would violate the uncertainty principle. Perhaps elementary particles have (theoretical) rest mass, but even that I doubt, even charged particles for that matter. Furthermore, it would be hard to reconcile the existence of uncharged point particles with mass, as I think one cannot have a stress-energy at a point.

12. Jan 14, 2012 Antiphon

How do you explain positron-electron annihilation?

13. Jan 14, 2012 Staff: Mentor

To keep things simple, suppose the positron and electron have zero kinetic energy. They're simply sitting right next to each other, about to annihilate. The mass of this system is 2 x 511 keV/c^2 = 1022 keV/c^2. The total energy is 1022 keV, consisting of the rest-energies of the two particles. The total momentum is zero. They annihilate, and you have two photons going off in opposite directions, each with energy 511 keV, so the total energy is still 1022 keV (energy is conserved). They each have momentum with magnitude 511 keV/c, but in opposite directions, so the total momentum is still zero (momentum is conserved). Each photon has mass zero (using the "invariant mass" as physicists normally do), so the sum of the masses is also zero. Does that mean that the mass has been converted to energy? No, because the total energy before is the same as the total energy afterwards!
If mass had been converted to energy, then the total energy afterwards would be greater than the total energy before, because new energy would have been created, right? One way to look at this is to say that energy is conserved, but mass isn't. What we often call "conversion of mass to energy" is actually conversion of energy from one form to another (from rest-energy to kinetic energy). The masses of the electron and positron simply disappear. There's another way to look at this. The energy, mass and momentum of a single particle are related by $$E^2 = (pc)^2 + (mc^2)^2$$ For a system of particles, we can define the "mass of the system" using $$E_{total}^2 = (p_{total}c)^2 + (m_{system}c^2)^2$$ The mass of the system must be conserved, because both the total momentum and total energy are conserved. In our example, the mass of the system of two photons is 1022 keV/c^2, equal to the total mass of the electron and positron. In this view, the mass of a system of particles does not generally equal the sum of the masses of the component particles. Either way of looking at it, you can't say (strictly speaking) that "mass is converted to energy" because this implies that "new" energy is created, which contradicts the principle that total energy is always conserved. 14. Jan 15, 2012 Antiphon Re: Mass=Energy? Sorry, but you're torturing the terminology. Though your math is spot on. Here's the basic convention you're laboring to reject. Electrons and positrons are particles of matter. Photons are quanta of energy. When the matter disappears and is replaced by the energy, it can be called "conversion." This is what the terms have meant for over a hundred years. 15. Jan 15, 2012 maverick_starstrider Re: Mass=Energy? You cannot talk about quantum mechanics and rest mass, quantum mechanics is, by design, non-relativistic. If you want to merge the two you get quantum field theory, in which case mass is due to the Higgs mechanism. 16. 
Jan 15, 2012 maverick_starstrider Re: Mass=Energy? Let's be clear here, protons and electrons are BY FAR the most common type of matter we find in everyday life. Neither decays and cannot simply be "directly transformed into energy". Yes you can annihilate them with their anti-matter particle but you would first have to create that anti-matter. Pure mass energy is not a directly accessible form of energy. "Mass" held in bonding can of course be accessed (this is nuclear fission/fusion) but all decay pathways end at the proton or electron. 17. Jan 15, 2012 ghwellsjr Re: Mass=Energy? A neutron will decay into an electron and a proton giving off an energy equivalent to the difference in its rest mass and the sum of the rest masses of a proton and an electron. 18. Jan 15, 2012 Passionflower Re: Mass=Energy? I cannot what? You either agree or disagree with what I say. If you disagree please use physics not "you can't say because you are wearing your quantum hat while talking relativity". 19. Jan 15, 2012 Passionflower Re: Mass=Energy? A system of two photons going in opposite direction does have mass. 20. Jan 15, 2012 DrStupid Re: Mass=Energy? Parities and electric charges of both particles cancel each other out. Mass and energy remain unchanged (see jtbell's post for details). There is no replacement of matter by energy. Matter has energy. If the matter disappears the energy remains (e.g. in the form of radiation).
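The mentor's bookkeeping in post #13 is easy to check numerically. This short Python sketch (not part of the thread) computes the invariant "system mass" m = sqrt(E² − p²), with c = 1, before and after the annihilation:

```python
# Sketch: the system-mass relation E_total^2 = (p_total c)^2 + (m_system c^2)^2
# applied to electron-positron annihilation at rest. Units: keV, with c = 1.
import math

m_e = 511.0  # electron (and positron) rest energy in keV

# Before: e+ and e- at rest next to each other.
E_before = 2 * m_e   # 1022 keV
p_before = 0.0

# After: two back-to-back 511 keV photons (massless, so E = |p| for each).
E_after = 511.0 + 511.0
p_after = 511.0 + (-511.0)  # opposite directions along one axis

def system_mass(E_total, p_total):
    """Invariant mass of the system: m = sqrt(E^2 - p^2), c = 1."""
    return math.sqrt(E_total**2 - p_total**2)

print(system_mass(E_before, p_before))  # 1022.0
print(system_mass(E_after, p_after))    # 1022.0 -- the system mass is unchanged
```

The sum of the individual photon masses is zero, yet the mass *of the system* stays at 1022 keV/c², exactly as the post argues.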
http://mathhelpforum.com/statistics/143070-conditional-probability.html
1. Conditional probability

I've been stuck on this and cannot for the life of me figure it out; any help would be appreciated. A test is 97% accurate that a person who tests positive actually has the disease. If 2% of the population has the disease, what is the probability that a person selected at random actually has the disease if they test positive?

2. Originally Posted by nightrider456

This seems to me a trick question. Usually for this type of question the wording is different, and there's a calculation you need to do. But here we are given the probability that's being asked for. That is, the answer is simply 97%. The information about choosing a person at random and 2% of the population having the disease is irrelevant. Unless maybe you copied it down wrong?

3. Originally Posted by undefined

I double-checked it and it's word for word. The correct answer is 0.3975.

4. Originally Posted by nightrider456

I see now that I didn't think carefully enough. What convinced me was considering the scenario of a population that is 100% diseased: if a random subject is chosen and the test comes out positive, the answer in that case would obviously not be 97%. Sorry for being too hasty and posting an incorrect response. My new answer still may not be helpful to you, but I think it's correct. Now I believe there's not enough information. The reasoning is as follows. Let $p_1$ be the probability that a diseased person tests positive. Let $p_2$ be the probability that a healthy person tests positive. The number 97% isn't as clearly defined as I would like, but it seems to mean: choose a subject A, with 50% probability that A is diseased and 50% probability that A is healthy. Then 97% represents the probability that A is diseased if the test comes out positive. In terms of $p_1$ and $p_2$, we can write

$\frac{0.5p_1}{0.5p_1+0.5p_2}=0.97$

which I got using a decision tree, but with the same reasoning described in this example on Wikipedia. So we don't know $p_1$ and $p_2$, only how they relate to each other. When we solve the problem being asked, we find the desired probability equal to

$p=\frac{0.02p_1}{0.02p_1+0.98p_2}$

So there are two equations and three unknowns, and there is not enough information to solve.

5. Originally Posted by nightrider456

97% of those who test positive have the disease. 3% of those who test positive do not have the disease. 2% of the population test positive at a 97% accuracy. 98% of the population test positive at a 3% inaccuracy (incorrectly). Hence the probability that a positive test indicates disease is

$\frac{(0.02)(0.97)}{(0.02)(0.97)+(0.03)(0.98)}=\frac{0.0194}{0.0488}=\frac{194}{488}=0.39754$

6. Originally Posted by Archie Meade

In an attempt to prove that this method involves a hidden assumption, I have verified your result. I proceeded as follows: Suppose the probability of a false negative is 0.5, and the probability of a false positive is $3/194 \approx 0.0154639$. Then suppose we have a room with two people in it: one diseased and one healthy. We choose a person at random and perform the test. The test is positive. We can verify that the probability that the person is the diseased one is exactly 0.97. I expected the final answer to be different, but it was the same answer you gave. So then I worked it out:

$\frac{0.5p_1}{0.5p_1+0.5p_2}=0.97$

$\left(\frac{p_1}{2}+\frac{p_2}{2}\right)\cdot 0.97=\frac{p_1}{2}$

$(p_1+p_2)\cdot 0.97 = p_1$

$p_2 = \left(\frac{3}{97}\right)p_1$

And then the desired probability:

$p=\frac{0.02p_1}{0.02p_1+0.98p_2}$

$p=\frac{0.02p_1}{0.02p_1+0.98\cdot(3/97)\cdot p_1}$

$p=\frac{0.02}{0.02+0.98\cdot(3/97)}\approx 0.39754$

So I stopped too soon when I concluded that there was not enough information. Ah well, I know for next time. Sorry again for an incorrect response.
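Archie Meade's calculation in post #5 is standard Bayes' theorem and is easy to reproduce. A short Python sketch (not part of the thread), reading 97% as the true-positive rate among the diseased and 3% as the false-positive rate among the healthy:

```python
# Bayes' theorem: P(disease | positive test)
# under the reading used in post #5 of the thread.
p_disease = 0.02            # prevalence: 2% of the population
p_pos_given_disease = 0.97  # true-positive rate (assumed reading of "97% accurate")
p_pos_given_healthy = 0.03  # false-positive rate (assumed complement)

numerator = p_disease * p_pos_given_disease                        # 0.0194
denominator = numerator + (1 - p_disease) * p_pos_given_healthy    # 0.0194 + 0.0294 = 0.0488
p_disease_given_pos = numerator / denominator

print(round(p_disease_given_pos, 5))  # 0.39754
```

The striking drop from 97% to about 40% is the usual base-rate effect: with only 2% prevalence, false positives from the large healthy population swamp the true positives.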
https://www.physicsforums.com/threads/finding-the-time-and-maximum-height-at-highest-point.835301/
# Finding the time and maximum height at highest point

1. Sep 30, 2015

### NotFrankieMuniz

1. The problem statement, all variables and given/known data:
A projectile is given initial velocity 80 m/s ($V_0$) at angle 60° above the horizontal. Find the time it takes to reach the highest point and find the maximum height. (g = -10 m/s²)

2. Relevant equations:
1. $y_{final} = y_{initial} + v_{y_{initial}}t + \frac {1}{2}gt^2$
2. $v_{final}^2 = v_{y_{initial}}^2 + 2g(y_{final} - y_{initial})$

3. The attempt at a solution:
The first thing I did was to find the velocity of the y-component ($v_{y_{initial}}$):

$v_{y_{initial}} = (80~\text{m/s})(\sin 60^{\circ})$
$v_{y_{initial}} = 69~\text{m/s}$

Since I know that $v_{final}$ is 0 m/s at the highest point, I can use Equation 2 to figure out the max height. With $v_{y_{initial}}^2$ known and $t$ unknown:

$v_{final}^2 = v_{y_{initial}}^2 + 2g(y_{final} - y_{initial})$
$0 = v_{y_{initial}}^2 + 2gy$
$y = \frac{-v_{y_{initial}}^2}{2g}$
$y = \frac{-(69~\text{m/s})^2}{2(-10~\text{m/s}^2)}$
$y = 238~\text{m}$

With the max height known, I can use Equation 1 to get the time to reach the max height:

$y_{final} = y_{initial} + v_{y_{initial}}t + \frac {1}{2}gt^2$
$238~\text{m} = (0~\text{m}) + (69~\text{m/s})t + \frac {1}{2}(-10~\text{m/s}^2)t^2$
$(5~\text{m/s}^2)t^2 - (69~\text{m/s})t + (238~\text{m}) = 0$

From there, I used the quadratic formula where a = 5 m/s², b = -69 m/s, and c = 238 m. The results that I got were t = 175 s and t = 170 s, which doesn't seem right. Anyone care to point me in the right direction?

Last edited: Sep 30, 2015

2. Sep 30, 2015

### Staff: Mentor

A couple of things. First, keep some extra decimal places in your intermediate values that you will be using for further calculations. This will prevent rounding and truncation errors from creeping into later calculations. Rounding for presentation purposes is fine, but use full precision in your calculations. So, for example, your initial vertical velocity should be something like 69.282 m/s. Speaking of initial vertical velocity, you've shown the cosine function rather than the sine function in its calculation. I presume this is a typo, since you've shown a valid value for the result. Something appears to have gone awry in your evaluation of the quadratic at the end. Try the calculation again. Note that if you happen to have some differential calculus under your belt, you can get at the time of the maximum height much more easily by just maximizing the vertical trajectory equation...

3. Sep 30, 2015

### NotFrankieMuniz

I redid the quadratic calculation and got t = 69.1 s and t = 68.9 s, which still doesn't seem right.

4. Sep 30, 2015

### Staff: Mentor

The answer should be less than 10 seconds. I guess you'll have to show your work in some detail so we can see what's happening.

5. Sep 30, 2015

### RUber

Don't forget the 2a in the denominator for the quadratic formula.

6. Sep 30, 2015

### NotFrankieMuniz

Ah, looks like I forgot to punch in a set of parentheses in my calculator. t = 6.8 s

I also just realized that I can easily calculate t without having to use the quadratic formula, with:

$v_{y_{final}} = v_{y_{initial}} - gt$

7. Sep 30, 2015

### haruspex

Better still, never plug in numbers until the end. In the specific case of finding the maximum height, you would have found $s=\frac{v_0^2\sin^2(\theta)}{2g}$. Plugging in the angle, $\sin^2(\frac{\pi}{3})=\frac 34$, avoiding any approximations.
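Haruspex's point in post #7 is easy to check numerically: carrying full precision (or symbols) to the end gives t = v₀ sin θ / g and h = v₀² sin²θ / (2g). A short Python sketch (not part of the thread):

```python
# Full-precision answer to the thread's projectile problem, with g taken
# as a positive magnitude (10 m/s^2) so signs are handled explicitly.
import math

v0 = 80.0                 # initial speed, m/s
theta = math.radians(60)  # launch angle
g = 10.0                  # magnitude of gravitational acceleration, m/s^2

vy0 = v0 * math.sin(theta)      # ~69.282 m/s (the rounded 69 caused the 238 m result)
t_peak = vy0 / g                # time to highest point
h_max = vy0**2 / (2 * g)        # = v0^2 * sin^2(theta) / (2g) = 6400 * (3/4) / 20

print(round(t_peak, 3), round(h_max, 1))  # 6.928 240.0
```

Note the exact maximum height is 240 m, not the 238 m obtained in the thread from rounding the vertical velocity to 69 m/s, which is exactly the rounding-error point the mentor makes in post #2.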
https://www.zora.uzh.ch/id/eprint/51520/
# Cournot games with biconcave demand

Ewerhart, Christian (2014). Cournot games with biconcave demand. Working paper series / Department of Economics 16, University of Zurich.

## Abstract

Biconcavity is a simple condition on inverse demand that corresponds to the ordinary concept of concavity after simultaneous parameterized transformations of price and quantity. The notion is employed here in the framework of the homogeneous-good Cournot model with potentially heterogeneous firms. The analysis leads to unified conditions, respectively, for the existence of a pure-strategy equilibrium via nonincreasing best-response selections, for existence via quasiconcavity, and for uniqueness of the equilibrium. The usefulness of the generalizations is illustrated in cases where inverse demand is either "nearly linear" or isoelastic. It is also shown that commonly made assumptions regarding large outputs are often redundant.
https://www.chemeurope.com/en/encyclopedia/Lattice_gauge_theory.html
# Lattice gauge theory

In physics, lattice gauge theory is the study of gauge theories on a spacetime that has been discretized onto a lattice. Although most lattice gauge theories are not exactly solvable, they are of tremendous appeal because they can be studied by simulation on a computer. One hopes that, by performing simulations on larger and larger lattices while making the lattice spacing smaller and smaller, one will be able to recover the behaviour of the continuum theory.

In lattice gauge theory, the spacetime is Wick rotated into Euclidean space, discretized and replaced by a lattice with lattice spacing equal to a. The quark fields are only defined at the elements of the lattice. There are problems with fermion doubling, though. See Wilson-Ginsparg action. Instead of a vector potential as in the continuum case, the gauge field variables are defined on the links of the lattice and correspond to the parallel transport along the edge, which takes on values in the Lie group. Hence to simulate QCD, for which the Lie group is SU(3), there is a 3 by 3 special unitary matrix defined on each link. The faces of the lattice are called plaquettes. The Yang-Mills action is rewritten using Wilson loops over plaquettes (it's simply a character evaluated over the composition of link variables around the plaquette) in such a way that the limit $a\to 0$ formally gives the original continuous action.

More precisely, we have a lattice with vertices, edges and faces. In lattice theory, the alternative terminology sites, links and plaquettes for vertices, edges and faces is often used. This reflects the origin of the field in solid state physics. While each edge happens to have no intrinsic orientation, to define the gauge variables we assign to each edge, given an orientation for it, an element U of a compact Lie group G.
Basically, the assignment for an edge in a given orientation is the group inverse of the assignment to the same edge in the opposite orientation. Likewise, the plaquettes have no intrinsic orientations, but have to be temporarily given an orientation for computational purposes. Given a faithful irreducible representation ρ of G, the lattice Yang-Mills action is

$S=\sum_F -\Re\{\chi^{(\rho)}(U(e_1)\cdots U(e_n))\}$

(the sum over all plaquettes of the real component of the Wilson loop). Here, χ is the character (trace) and the real component is redundant if ρ happens to be a real or pseudoreal representation. e1, ..., en are the n edges of the Wilson loop in sequence. The nice thing about being real is that even if the orientation of a Wilson loop is flipped, its contribution to the action remains unchanged.

There are many possible lattice Yang-Mills actions, depending on which Wilson loop is used in the above formula. The simplest is the Wilson action, in which the Wilson loop is just a plaquette. A disadvantage of the Wilson action is that the difference between it and the continuous action is proportional to the lattice spacing a. It is possible to use more complicated Wilson loops to form actions where this difference is proportional to $a^2$, thus making computations more accurate. These are known as improved actions.

To calculate a quantity (such as the mass of a particle) in lattice gauge theory, it should be calculated for every possible value of the gauge field on each link, and then averaged. In practice this is impossible. Instead the Monte Carlo method is used to estimate the quantity. Random configurations (values of the gauge fields) are generated with probabilities proportional to $e^{-\beta S}$, where S is the lattice action for that configuration and β is related to the lattice spacing a. The quantity is calculated for each configuration. The true value of the quantity is then found by taking the average of the value from a large number of configurations.
To find the value of the quantity in the continuous theory this is repeated for various values of a and extrapolated to a = 0. Lattice gauge theory is a particularly important tool for quantum chromodynamics (QCD). The discretized version of QCD is called Lattice QCD. QCD confinement has been shown in Monte Carlo simulations. Deconfinement at high temperature leads to the formation of a quark-gluon plasma. Lattice gauge theory has been shown to be exactly dual to spin foam models provided that the only Wilson loops appearing in the action are over plaquettes.
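As a toy illustration of the Monte Carlo procedure described above (not from the article), here is a minimal Metropolis sampler for 2D Z₂ lattice gauge theory: the gauge group is {+1, −1}, the "character" of a plaquette is just the product of its four link variables, and the lattice size and β are arbitrary choices for the sketch.

```python
# Metropolis sampling of 2D Z2 lattice gauge theory with the Wilson
# plaquette action S = -beta * sum(P), P = product of four links.
import math
import random

L, beta = 8, 0.6
random.seed(1)
# U[x][y][mu]: link leaving site (x,y) in direction mu (0 = +x, 1 = +y)
U = [[[1, 1] for _ in range(L)] for _ in range(L)]

def plaquette(x, y):
    """Product of the four links around the plaquette anchored at (x,y)."""
    return (U[x][y][0] * U[(x + 1) % L][y][1]
            * U[x][(y + 1) % L][0] * U[x][y][1])

def local_action(x, y, mu):
    """-beta times the sum of the two plaquettes containing link (x,y,mu)."""
    if mu == 0:
        plaqs = plaquette(x, y) + plaquette(x, (y - 1) % L)
    else:
        plaqs = plaquette(x, y) + plaquette((x - 1) % L, y)
    return -beta * plaqs

def metropolis_sweep():
    for x in range(L):
        for y in range(L):
            for mu in range(2):
                old = local_action(x, y, mu)
                U[x][y][mu] *= -1                 # propose flipping the link
                dS = local_action(x, y, mu) - old
                if dS > 0 and random.random() >= math.exp(-dS):
                    U[x][y][mu] *= -1             # reject: flip back

for _ in range(200):
    metropolis_sweep()
avg_plaq = sum(plaquette(x, y) for x in range(L) for y in range(L)) / L**2
print(avg_plaq)  # average plaquette, somewhere in (-1, 1) once equilibrated
```

The local update touches only the two plaquettes sharing the flipped link, which is what makes Metropolis updates cheap; the same structure carries over to SU(3), where the link flip is replaced by multiplication with a random group element near the identity.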
http://www.nag.com/numeric/cl/nagdoc_cl23/html/F12/f12abc.html
NAG C Library Manual

NAG Library Function Document: nag_real_sparse_eigensystem_iter (f12abc)

Note: this function uses optional arguments to define choices in the problem specification. If you wish to use default settings for all of the optional arguments, then the option setting routine nag_real_sparse_eigensystem_option (f12adc) need not be called. If, however, you wish to reset some or all of the settings, please refer to Section 10 in nag_real_sparse_eigensystem_option (f12adc) for a detailed description of the specification of the optional arguments.

1  Purpose

nag_real_sparse_eigensystem_iter (f12abc) is an iterative solver used to find some of the eigenvalues (and optionally the corresponding eigenvectors) of a standard or generalized eigenvalue problem defined by real nonsymmetric matrices. This is part of a suite of functions that also includes nag_real_sparse_eigensystem_init (f12aac), nag_real_sparse_eigensystem_sol (f12acc), nag_real_sparse_eigensystem_option (f12adc) and nag_real_sparse_eigensystem_monit (f12aec).

2  Specification

#include #include

void nag_real_sparse_eigensystem_iter (Integer *irevcm, double resid[], double v[], double **x, double **y, double **mx, Integer *nshift, double comm[], Integer icomm[], NagError *fail)

3  Description

The suite of functions is designed to calculate some of the eigenvalues, λ, (and optionally the corresponding eigenvectors, x) of a standard eigenvalue problem Ax = λx, or of a generalized eigenvalue problem Ax = λBx of order n, where n is large and the coefficient matrices A and B are sparse, real and nonsymmetric. The suite can also be used to find selected eigenvalues/eigenvectors of smaller scale dense, real and nonsymmetric problems. nag_real_sparse_eigensystem_iter (f12abc) is a reverse communication function, based on the ARPACK routine dnaupd, using the Implicitly Restarted Arnoldi iteration method.
The method is described in Lehoucq and Sorensen (1996) and Lehoucq (2001), while its use within the ARPACK software is described in great detail in Lehoucq et al. (1998). An evaluation of software for computing eigenvalues of sparse nonsymmetric matrices is provided in Lehoucq and Scott (1996). This suite of functions offers the same functionality as the ARPACK software for real nonsymmetric problems, but the interface design is quite different in order to make the option setting clearer and to simplify the interface of nag_real_sparse_eigensystem_iter (f12abc).

The setup function nag_real_sparse_eigensystem_init (f12aac) must be called before nag_real_sparse_eigensystem_iter (f12abc), the reverse communication iterative solver. Options may be set for nag_real_sparse_eigensystem_iter (f12abc) by prior calls to the option setting function nag_real_sparse_eigensystem_option (f12adc), and a post-processing function nag_real_sparse_eigensystem_sol (f12acc) must be called following a successful final exit from nag_real_sparse_eigensystem_iter (f12abc). nag_real_sparse_eigensystem_monit (f12aec) may be called following certain flagged, intermediate exits from nag_real_sparse_eigensystem_iter (f12abc) to provide additional monitoring information about the computation.

nag_real_sparse_eigensystem_iter (f12abc) uses reverse communication, i.e., it returns repeatedly to the calling program with the argument irevcm (see Section 5) set to specified values which require the calling program to carry out one of the following tasks:

– compute the matrix-vector product y = OPx, where OP is defined by the computational mode;
– compute the matrix-vector product y = Bx;
– notify the completion of the computation;
– allow the calling program to monitor the solution.
The problem type to be solved (standard or generalized), the spectrum of eigenvalues of interest, the mode used (regular, regular inverse, shifted inverse, shifted real or shifted imaginary) and other options can all be set using the option setting function nag_real_sparse_eigensystem_option (f12adc) (see Section 10.1 in nag_real_sparse_eigensystem_option (f12adc) for details on setting options and on the default settings).

4  References

Lehoucq R B (2001) Implicitly restarted Arnoldi methods and subspace iteration SIAM Journal on Matrix Analysis and Applications 23 551–562

Lehoucq R B and Scott J A (1996) An evaluation of software for computing eigenvalues of sparse nonsymmetric matrices Preprint MCS-P547-1195 Argonne National Laboratory

Lehoucq R B and Sorensen D C (1996) Deflation techniques for an implicitly restarted Arnoldi iteration SIAM Journal on Matrix Analysis and Applications 17 789–821

Lehoucq R B, Sorensen D C and Yang C (1998) ARPACK Users' Guide: Solution of Large-scale Eigenvalue Problems with Implicitly Restarted Arnoldi Methods SIAM, Philadelphia

5  Arguments

Note: this function uses reverse communication. Its use involves an initial entry, intermediate exits and re-entries, and a final exit, as indicated by the argument irevcm. Between intermediate exits and re-entries, all arguments other than x and y must remain unchanged.

1: irevcm – Integer * (Input/Output)

On initial entry: irevcm = 0, otherwise an error condition will be raised.

On intermediate re-entry: must be unchanged from its previous exit value. Changing irevcm to any other value between calls will result in an error.

On intermediate exit: has the following meanings.

irevcm = -1
The calling program must compute the matrix-vector product y = OPx, where x is stored in x and the result y is placed in y.

irevcm = 1
The calling program must compute the matrix-vector product y = OPx.
This is similar to the case irevcm = -1, except that the result of the matrix-vector product Bx (as required in some computational modes) has already been computed and is available in mx.

irevcm = 2
The calling program must compute the matrix-vector product y = Bx, where x is stored as described in the case irevcm = -1 and y is placed in y.

irevcm = 3
Compute the nshift real and imaginary parts of the shifts, where the real parts are to be placed in the first nshift locations of the array y and the imaginary parts are to be placed in the first nshift locations of the array mx. Only complex conjugate pairs of shifts may be applied, and the pairs must be placed in consecutive locations. This value of irevcm will only arise if the optional argument Supplied Shifts is set in a prior call to nag_real_sparse_eigensystem_option (f12adc), which is intended for experienced users only; the default and recommended option is to use exact shifts (see Lehoucq et al. (1998) for details).

irevcm = 4
Monitoring step: a call to nag_real_sparse_eigensystem_monit (f12aec) can now be made to return the number of Arnoldi iterations, the number of converged Ritz values, their real and imaginary parts, and the corresponding Ritz estimates.

On final exit: irevcm = 5: nag_real_sparse_eigensystem_iter (f12abc) has completed its tasks. The value of fail determines whether the iteration has been successfully completed, or whether errors have been detected. On successful completion nag_real_sparse_eigensystem_sol (f12acc) must be called to return the requested eigenvalues and eigenvectors (and/or Schur vectors).

Constraint: on initial entry, irevcm = 0; on re-entry irevcm must remain unchanged.

2: resid[dim] – double (Input/Output)

Note: the dimension, dim, of the array resid must be at least n (see nag_real_sparse_eigensystem_init (f12aac)).
On initial entry: need not be set unless the option Initial Residual has been set in a prior call to nag_real_sparse_eigensystem_option (f12adc), in which case resid should contain an initial residual vector, possibly from a previous run.

On intermediate re-entry: must be unchanged from its previous exit. Changing resid to any other value between calls may result in an error exit.

On intermediate exit: contains the current residual vector.

On final exit: contains the final residual vector.

3: v[dim] – double (Input/Output)

Note: the dimension, dim, of the array v must be at least max(1, n × ncv) (see nag_real_sparse_eigensystem_init (f12aac)). The i-th element of the j-th basis vector is stored in location v[n × (i − 1) + j − 1], for i = 1, 2, ..., n and j = 1, 2, ..., ncv.

On initial entry: need not be set.

On intermediate re-entry: must be unchanged from its previous exit.

On intermediate exit: contains the current set of Arnoldi basis vectors.

On final exit: contains the final set of Arnoldi basis vectors.

4: x – double ** (Input/Output)

On initial entry: need not be set; it is used as a convenient mechanism for accessing elements of comm.

On intermediate re-entry: is not normally changed.

On intermediate exit: contains the vector x when irevcm returns the value -1, +1 or 2.

On final exit: does not contain useful data.

5: y – double ** (Input/Output)

On initial entry: need not be set; it is used as a convenient mechanism for accessing elements of comm.

On intermediate re-entry: must contain the result of y = OPx when irevcm returns the value -1 or +1. It must contain the real parts of the computed shifts when irevcm returns the value 3.

On intermediate exit: does not contain useful data.

On final exit: does not contain useful data.
6: mx – double ** (Input/Output)

On initial entry: need not be set; it is used as a convenient mechanism for accessing elements of comm.

On intermediate re-entry: must contain the result of y = Bx when irevcm returns the value 2. It must contain the imaginary parts of the computed shifts when irevcm returns the value 3.

On intermediate exit: contains the vector Bx when irevcm returns the value +1.

On final exit: does not contain any useful data.

7: nshift – Integer * (Output)

On intermediate exit: if the option Supplied Shifts is set and irevcm returns a value of 3, nshift returns the number of complex shifts required.

8: comm[dim] – double (Communication Array)

Note: the dimension, dim, of the array comm must be at least max(1, lcomm) (see nag_real_sparse_eigensystem_init (f12aac)).

On initial entry: must remain unchanged following a call to the setup function nag_real_sparse_eigensystem_init (f12aac).

On exit: contains data defining the current state of the iterative process.

9: icomm[dim] – Integer (Communication Array)

Note: the dimension, dim, of the array icomm must be at least max(1, licomm) (see nag_real_sparse_eigensystem_init (f12aac)).

On initial entry: must remain unchanged following a call to the setup function nag_real_sparse_eigensystem_init (f12aac).

On exit: contains data defining the current state of the iterative process.

10: fail – NagError * (Input/Output)

The NAG error argument (see Section 3.6 in the Essential Introduction).

6  Error Indicators and Warnings

NE_ALLOC_FAIL
Dynamic memory allocation failed.

On entry, argument ⟨value⟩ had an illegal value.

NE_INITIALIZATION
Either the initialization function has not been called prior to the first call of this function or a communication array has become corrupted.
NE_INT
The maximum number of iterations $\le 0$; the option Iteration Limit has been set to ⟨value⟩.

NE_INTERNAL_EIGVAL_FAIL
Error in internal call to compute eigenvalues and corresponding error bounds of the current upper Hessenberg matrix. Please contact NAG.

NE_INTERNAL_ERROR
An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.

NE_MAX_ITER
The maximum number of iterations has been reached. The maximum number of iterations = ⟨value⟩. The number of converged eigenvalues = ⟨value⟩. See the function document for further details.

NE_NO_ARNOLDI_FAC
Could not build an Arnoldi factorization. The size of the current Arnoldi factorization = ⟨value⟩.

NE_NO_SHIFTS_APPLIED
No shifts could be applied during a cycle of the implicitly restarted Arnoldi iteration.

NE_OPT_INCOMPAT
The options Generalized and Regular are incompatible.

NE_ZERO_INIT_RESID
The option Initial Residual was selected but the starting vector held in resid is zero.

7  Accuracy

The relative accuracy of a Ritz value, $\lambda$, is considered acceptable if its Ritz estimate is $\le {\mathbf{Tolerance}}\times\left|\lambda \right|$. The default Tolerance used is the machine precision given by nag_machine_precision (X02AJC).

8  Further Comments

None.

9  Example

This example solves $Ax=\lambda x$ in shift-invert mode, where $A$ is obtained from the standard central difference discretization of the convection-diffusion operator $\frac{{\partial }^{2}u}{\partial {x}^{2}}+\frac{{\partial }^{2}u}{\partial {y}^{2}}+\rho \frac{\partial u}{\partial x}$ on the unit square, with zero Dirichlet boundary conditions. The shift used is a real number.

9.1  Program Text: Program Text (f12abce.c)
9.2  Program Data: Program Data (f12abce.d)
9.3  Program Results: Program Results (f12abce.r)
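The NAG function itself is driven by reverse communication, which is hard to show compactly; as a rough analogue of this example, here is the same kind of shift-invert eigenvalue computation done with SciPy's `eigs`, which wraps ARPACK, the implicitly restarted Arnoldi code that also underlies the f12 functions. The grid size `m` and the convection coefficient `rho` are illustrative choices, not the values used in the NAG example program.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigs

# 5-point central-difference discretisation of u_xx + u_yy + rho * u_x on the
# unit square with zero Dirichlet boundary conditions; m interior points per axis.
m, rho = 20, 10.0
h = 1.0 / (m + 1)
T = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(m, m))            # second difference
C = sp.diags([-1.0, 1.0], [-1, 1], shape=(m, m)) * (rho * h / 2.0)  # central first difference
I = sp.identity(m)
A = (sp.kron(I, T + C) + sp.kron(T, I)) / h**2

# Shift-invert about a real shift sigma: ARPACK returns the eigenvalues of A
# closest to sigma, found as the largest eigenvalues of (A - sigma*I)^{-1}.
vals, vecs = eigs(A.tocsc(), k=4, sigma=0.0, which='LM')
print(np.sort(vals.real))
```

For this grid the discretised operator has real, negative eigenvalues, so the four values returned are the ones of smallest magnitude.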
https://www.semanticscholar.org/paper/Phase-Field-Models-for-Thin-Elastic-Structures-with-Dondl-Lemenant/9d871aaaf18ace17c960bb0ee1d2aa10d39482cf
# Phase Field Models for Thin Elastic Structures with Topological Constraint

@article{Dondl2015PhaseFM, title={Phase Field Models for Thin Elastic Structures with Topological Constraint}, author={P. Dondl and A. Lemenant and Stephan Wojtowytsch}, journal={Archive for Rational Mechanics and Analysis}, year={2015}, volume={223}, pages={693-736} }

• Published 2015 • Mathematics, Physics • Archive for Rational Mechanics and Analysis

This article is concerned with the problem of minimising the Willmore energy in the class of connected surfaces with prescribed area which are confined to a small container. We propose a phase field approximation based on De Giorgi’s diffuse Willmore functional to this variational problem. Our main contribution is a penalisation term which ensures connectedness in the sharp interface limit. The penalisation of disconnectedness is based on a geodesic distance chosen to be small between two…

Uniform regularity and convergence of phase-fields for Willmore’s energy • Mathematics • 2015. We investigate the convergence of phase fields for the Willmore problem away from the support of a limiting measure $\mu$. For this purpose, we introduce a suitable notion of essentially uniform…

Connected Coulomb columns: analysis and numerics • Physics, Mathematics • 2020. We consider a version of Gamow’s liquid drop model with a short range attractive perimeter-penalizing potential and a long-range Coulomb interaction of a uniformly charged mass in $R^3$. Here we…

Keeping it together: a phase field version of path-connectedness and its implementation • Mathematics • 2018. We describe the implementation of a topological constraint in finite element simulations of phase field models which ensures path-connectedness of preimages of intervals in the phase field variable…

Elastic curves and phase transitions. This paper is devoted to a classical variational problem for planar elastic curves of clamped endpoints, so-called Euler’s elastica problem. We investigate a straightening limit that means enlarging…

Confined elasticae and the buckling of cylindrical shells. For curves of prescribed length embedded into the unit disc in two dimensions, we obtain scaling results for the minimal elastic energy as the length just exceeds $2\pi$ and in the large length…

A phase-field approach to variational hierarchical surface segmentation • Computer Science • Comput. Aided Geom. Des. • 2021. In this paper, we propose a phase-field model to partition a curved surface into path-connected segments with minimal boundary length. Phase-fields offer a powerful tool to represent diffuse…

A new diffuse-interface approximation of the Willmore flow • Mathematics, Physics • 2019. Standard diffuse approximations of the Willmore flow often lead to intersecting phase boundaries that in many cases do not correspond to the intended sharp interface evolution. Here we introduce a…

Phase Field Topology Constraints • Computer Science • 2018. A morphological approach to extract topologically critical regions in phase field models by adapting a non-simple point concept from digital topology to local regions using structuring masks that can be used to constrain the evolution locally.

Approximation of the relaxed perimeter functional under a connectedness constraint by phase-fields • Mathematics • 2018. We develop a phase-field approximation of the relaxation of the perimeter functional in the plane under a connectedness constraint based on the classical Modica-Mortola functional and the…

On a phase field approximation of the planar Steiner problem: existence, regularity, and asymptotic of minimizers • Mathematics • 2016. In this article, we consider and analyse a small variant of a functional originally introduced in \cite{BLS,LS} to approximate the (geometric) planar Steiner problem. This functional depends on a…

#### References (showing 1-10 of 98 references)

A Phase Field Model for the Optimization of the Willmore Energy in the Class of Connected Surfaces • Mathematics, Computer Science • SIAM J. Math. Anal. • 2014. This article provides a proof of Gamma-convergence of the model to the sharp interface limit of the Willmore energy of connected surfaces with prescribed surface area by a nested minimization of two phase fields.

Confined structures of least bending energy • Mathematics • 2013. In this paper we study a constrained minimization problem for the Willmore functional. For prescribed surface area we consider smooth embeddings of the sphere into the unit ball. We evaluate the…

Minimising a relaxed Willmore functional for graphs subject to boundary conditions • Mathematics • 2015. For a bounded smooth domain in the plane and smooth boundary data we consider the minimisation of the Willmore functional for graphs subject to Dirichlet or Navier boundary conditions. For…

Variational Principles for immersed Surfaces with $L^2$-bounded Second Fundamental Form. In this work we present new fundamental tools for studying the variations of the Willmore functional of immersed surfaces into $R^m$. This approach gives for instance a new proof of the existence of…

Confined Elastic Curves • Mathematics, Computer Science • SIAM J. Appl. Math. • 2011. A solution based on a diffuse approximation of the winding number of Euler's elastica energy is proposed, a proof that one can approximate a given sharp interface using a sequence of phase fields is presented, and some numerical results using finite elements based on subdivision surfaces are shown.

A phase field formulation of the Willmore problem • Mathematics • 2005. In this paper, we demonstrate, through asymptotic expansions, the convergence of a phase field formulation to model surfaces minimizing the mean curvature energy with volume and surface area…

Approximation of Length Minimization Problems Among Compact Connected Sets • Mathematics, Computer Science • SIAM J. Math. Anal. • 2015. This paper provides an approximation of some classical minimization problems involving the length of an unknown one-dimensional set, with an additional connectedness constraint, in dimension two, and introduces a term of new type relying on a weighted geodesic distance that forces the minimizers to be connected at the limit.

Finite elements on evolving surfaces • Mathematics • 2007. In this article, we define a new evolving surface finite-element method for numerically approximating partial differential equations on hypersurfaces $\Gamma(t)$ in $\mathbb{R}^{n+1}$ which evolve with time. The key idea…

A phase field based PDE constrained optimization approach to time discrete Willmore flow • 2010. A novel phase field model for Willmore flow is proposed based on a nested variational time discretization. Thereby, the mean curvature in the Willmore functional is replaced by an approximate speed…

Gradient Flow for the Willmore Functional in Riemannian Manifolds of bounded Geometry. We consider the $L^2$ gradient flow for the Willmore functional in Riemannian manifolds of bounded geometry. In the euclidean case E. Kuwert and R. Schätzle [Gradient flow for the…
http://physics.stackexchange.com/questions/70155/time-to-immerse-in-a-fluid/70172
# Time to immerse in a fluid

Whenever you drop an object in water, it takes some time to get fully immersed. I was wondering if this depends upon the buoyant force of the water on the object, slowing it down. However, I was not able to come up with a general formula for this. I wanted to know, what would be the time taken for an object of density $d_1$, dropped from a height $h$ to be fully immersed in a liquid of density $d_2$. Take gravity as $g$.

Part of the reason I was not able to do this was because I thought it might involve calculus (since the volume of water displaced changes over time and so does the buoyant force). I'm not competent enough with calculus to be able to attempt this sort of problem. I know friction will affect the time, but what if there were *no* friction? Can this be further generalized to gases as well?

- It's pretty tough to derive it without using calculus. And you should also specify the geometry of the object because the time will greatly depend on that. – udiboy1209 Jul 6 '13 at 12:12

As you say yourself, there will be a dependence on time and you'll need a differential equation to solve the problem.

Let's make some simplifying assumptions. We'll assume that our body is spherical and has a radius $R$. We'll assume that its density is homogeneous and that it is denser than the fluid in which it will be immersed. We'll assume it drops from a height $h_0$ from the surface of the liquid, i.e. its center of mass is at height $h_0$. We'll neglect friction and splashing and any phenomenon that will complicate our analysis. We'll only take gravity and the Archimedean force into account.

The variable component here is the Archimedean force, which depends on the immersed volume, which will change in time. The formula for this force is $$F_A=\rho_{f} g V(h)$$ where $\rho_{f}$ is the fluid density, $g$ is the gravitational acceleration in the vicinity of the Earth's surface and $V(h)$ is the immersed volume.
The formula for the immersed volume (the middle case is the volume of a spherical cap of height $R-h$) is $$V(h)=\begin{cases}0 \; \text{ if } h\geq R \\ \frac{\pi}{3}(R-h)^2(2R+h) \; \text{ if } -R<h<R \\ \frac{4}{3}\pi R^3 \; \text{ if } h\leq -R \end{cases}$$ We therefore have the following equation of motion $$-g\left[\rho_0 \frac{4}{3}\pi R^3-\rho_f V(h)\right] = \rho_0 \frac{4}{3}\pi R^3 \frac{d^2 h}{dt^2}$$ Where $\rho_0>\rho_f$ is the density of our sphere. This is a second order nonlinear ODE because $V(h)$ is of third order in $h$ for part of its range. Introducing the new variables $x=h/R$ and $\tau=\sqrt{g/R}\,t$ and a new function $$P(x) = \begin{cases}0 \; \text{ if } x\geq 1 \\ \frac{2-3x+x^3}{4} \; \text{ if } -1<x<1 \\ 1 \; \text{ if } x\leq -1 \end{cases}$$ we can rewrite our equation as $$\frac{d^2x}{d\tau^2}=\frac{\rho_f}{\rho_0}P(x)-1$$ We can integrate this equation w.r.t. $x$ once to reduce the order of our equation. This gives us $$\frac{1}{2}\left(\frac{dx}{d\tau}\right)^2=(h_0/R-x)+\frac{\rho_f}{\rho_0}Q(x)$$ in which I introduced $Q(x)$, the antiderivative of $P(x)$ chosen to be continuous and to vanish for $x\geq 1$, $$Q(x) = \begin{cases}0 \; \text{ if } x\geq 1 \\ \frac{-3+8x-6x^2+x^4}{16} \; \text{ if } -1<x<1 \\ x \; \text{ if } x\leq -1 \end{cases}$$ We can therefore solve for the time $\tau$ as a function of $x$ $$\tau=\frac{1}{\sqrt{2}}\int_{-1}^{h_0/R}\frac{dx}{\sqrt{h_0/R-x+\frac{\rho_f}{\rho_0}Q(x)}}$$ I've computed a first order approximation in $\rho_f/\rho_0$ for the case of the sphere dropping from just above the surface until the moment it gets fully immersed: $$\tau = 2+\frac{9}{70}\frac{\rho_f}{\rho_0}+O((\rho_f/\rho_0)^2)$$ or $$t=2\sqrt{\frac{R}{g}} +\frac{9}{70}\frac{\rho_f}{\rho_0}\sqrt{\frac{R}{g}}+O((\rho_f/\rho_0)^2)$$ The zeroth order term is what you'd expect if there was no fluid; the next term gives a correction for a very dense sphere compared to the density of the fluid. - doesn't the OP mention that s/he doesn't want calculus to be involved? 
– udiboy1209 Jul 6 '13 at 12:56 And the general method would have been easier to understand with a simpler geometrical object, maybe a cylinder or cuboid. – udiboy1209 Jul 6 '13 at 13:01 @udiboy: Yes, but as you can see, calculus is unavoidable. The final formula, whatever the method you use to arrive at it will contain an integral. Of course, you could maybe partly guess a simplified formula by some dimensional analysis I suppose. – Raskolnikov Jul 6 '13 at 13:03 @udiboy: I think the sphere is somewhat harder but more realistic. With the other forms, I would have to make an additional assumption of how the piece drops into the fluid to be able to compute things. With the sphere, it obviously doesn't matter. – Raskolnikov Jul 6 '13 at 13:05
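The analysis above can be cross-checked by integrating the nondimensional equation of motion directly. A sketch with SciPy, computing the immersed-volume fraction from the spherical-cap volume formula:

```python
import numpy as np
from scipy.integrate import solve_ivp

def immersed_fraction(x):
    """Immersed volume / total volume for a sphere whose centre is at height x (in radii)."""
    if x >= 1.0:
        return 0.0
    if x <= -1.0:
        return 1.0
    # spherical cap of height (1 - x): V_cap / V_sphere = (1 - x)^2 (2 + x) / 4
    return (1.0 - x) ** 2 * (2.0 + x) / 4.0

def immersion_time(ratio):
    """Nondimensional time tau from x = 1 (at rest, just touching) to x = -1 (fully immersed)."""
    def rhs(tau, y):
        x, v = y
        return [v, ratio * immersed_fraction(x) - 1.0]
    fully_in = lambda tau, y: y[0] + 1.0   # event fires when the sphere is fully immersed
    fully_in.terminal, fully_in.direction = True, -1
    sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 0.0], events=fully_in,
                    rtol=1e-10, atol=1e-12)
    return sol.t_events[0][0]

print(immersion_time(0.0))   # no buoyancy: free fall over 2R, so tau = 2 exactly
print(immersion_time(0.2))   # a denser fluid slows the sphere down slightly
```

In the limit of negligible fluid density the immersion time reduces to the free-fall value, and buoyancy only ever lengthens it.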
http://mathhelpforum.com/differential-geometry/121786-cauchy-sequence.html
1. ## cauchy sequence

Prove that the sequence : $x_n = n^2 + \frac{(-1)^n}{n}$ is not cauchy

2. Originally Posted by flower3 Prove that the sequence : $x_n = n^2 + \frac{(-1)^n}{n}$ is not cauchy

If it were Cauchy then for any $\epsilon >0\,\,\,\exists M\in\mathbb{N}\,\,\,s.t.\,\,\,|x_n-x_m|<\epsilon\,\,\,\forall\,n,m>M$ . In particular, this must be true for $m=n+1$ , but: $|x_n-x_{n+1}|=\left|-(2n+1)+(-1)^n\frac{2n+1}{n(n+1)}\right|=(2n+1)\left|1-\frac{(-1)^n}{n(n+1)}\right|\ge \frac{2n+1}{2}\ge\frac{3}{2}$ (since $\frac{1}{n(n+1)}\le\frac{1}{2}$), so it is enough to take $\epsilon \le 1$ and the above defining property of Cauchy sequences won't be true for it.

Tonio
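A quick numerical illustration of Tonio's argument: the gap between consecutive terms never becomes small, so no tail of the sequence can be Cauchy.

```python
# x_n = n^2 + (-1)^n / n; the consecutive gaps |x_n - x_{n+1}| grow roughly
# like 2n + 1, so they can never be forced below a fixed epsilon.
def x(n):
    return n**2 + (-1)**n / n

gaps = [abs(x(n) - x(n + 1)) for n in range(1, 9)]
print(gaps)
```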
https://www.varsitytutors.com/high_school_math-help/finding-domain-and-range
# High School Math : Finding Domain and Range

## Example Questions

### Example Question #1 : Relations And Functions

What is the domain of the function below:

Explanation: The domain is defined as the set of possible values for the x variable. In order to find the impossible values of x, we should:

a) Set the expression under the radical less than zero and look for x values that would make it negative:

There is no real value of x that will fit this equation, because the square of any real number is non-negative, so the expression under the radical cannot be negative.

b) Set the denominator of the fractional function equal to zero and look for x values:

Now we can solve the equation for x:

There is no real value of x that will fit this equation. The radical is always positive and the denominator is never equal to zero, so f(x) is defined for all real values of x. That means the set of all real numbers is the domain of f(x), and the correct answer is .

Alternative solution for the second part: after figuring out that the expression under the radical is always positive (part a), we can minimise the radical, and therefore the denominator. Setting x equal to zero gives the minimum possible value of the denominator. That means the denominator is always a positive value greater than 1/2; thus it cannot equal zero for any real value of x. Therefore the set of all real numbers is the domain of f(x).

### Example Question #1 : Pre Calculus

What is the domain of the function below?

Explanation: The domain is defined as the set of all values of x for which the function is defined, i.e. has a real result. The square root of a negative number isn't defined, so we should find the intervals where that occurs:

The square of any number is non-negative, so we can't eliminate any x-values yet. If the denominator is zero, the expression will also be undefined. 
Find the x-values which would make the denominator 0: Therefore, the domain is . ### Example Question #2 : Functions What is the domain of the function below:
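The reasoning in these explanations (radicand never negative, denominator never zero) can be illustrated with a made-up function of the same shape; `f` below is a hypothetical stand-in, not the function from the original questions, which were images lost in extraction:

```python
import math

def f(x):
    # hypothetical example: a radical in the denominator, radicand x^2 + 4
    return 1.0 / math.sqrt(x**2 + 4)

# The radicand is at least 4 for every real x, so the root is always defined
# and the denominator is at least 2 -- never zero.  Domain: all real numbers.
samples = [k / 10.0 for k in range(-100, 101)]
values = [f(x) for x in samples]      # no ValueError or ZeroDivisionError anywhere
print(min(values), max(values))       # the maximum 0.5 occurs at x = 0
```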
https://www.physicsforums.com/threads/what-is-the-difference-between-van-der-waals-interaction-and-casimir-interaction.207993/
# What is the difference between van der waals interaction and casimir interaction? 1. Jan 9, 2008 ### wdlang It seems to me that the physics is the same they are all due to the exchange of virtual photons between two particles 2. Jan 9, 2008 ### Sojourner01 I'm grappling with the same problem myself. From what I can gather, the casimir force is due to the interaction of the vacuum itself with its boundaries, and the van der waals force is a property of the materials that make up the boundary. You have to delve deeper into it to see the difference - and there are those who think that the casimir force can be adequately explained by van der waals-type interactions between the atoms in the 'plates'. Similar Discussions: What is the difference between van der waals interaction and casimir interaction?
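One quantitative handle on the difference is the distance scaling: the non-retarded van der Waals (Hamaker) pressure between two half-spaces falls off as $1/d^3$, while the ideal Casimir pressure between perfect conductors falls off as $1/d^4$. A rough comparison; the Hamaker constant below is a typical order-of-magnitude assumption, not a measured value for any particular material:

```python
import math

hbar = 1.054571817e-34    # reduced Planck constant, J s
c = 2.99792458e8          # speed of light, m/s
A_hamaker = 1.0e-19       # assumed Hamaker constant, J (typical order of magnitude)

def casimir_pressure(d):
    """Ideal Casimir pressure between perfectly conducting plates at separation d (m), in Pa."""
    return math.pi**2 * hbar * c / (240.0 * d**4)

def vdw_pressure(d):
    """Non-retarded van der Waals (Hamaker) pressure between half-spaces at separation d (m), in Pa."""
    return A_hamaker / (6.0 * math.pi * d**3)

for d in (1e-8, 1e-7, 1e-6):
    print(f"d = {d:.0e} m:  vdW ~ {vdw_pressure(d):.2e} Pa,  Casimir ~ {casimir_pressure(d):.2e} Pa")
```

The extra power of $d$ in the Casimir formula reflects retardation: at large separations the finite speed of light weakens the correlated fluctuations, which is one way to see that the two regimes are limits of a single fluctuation-induced interaction rather than unrelated effects.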
http://mathhelpforum.com/advanced-algebra/124791-highest-common-factor-polynomial-division-print.html
# Highest common factor, Polynomial division

• January 21st 2010, 11:35 AM craig
Highest common factor, Polynomial division
Hi, think this is the correct forum, apologies if not.

Find the highest common factor of the polynomials $x^5 - 2x^4 - 3x^3 + 13x^2 -16x + 6$ and $x^4 - 2x^3 - 2x^2 + 8x - 8$. Hence factorise $x^4 - 2x^3 - 2x^2 + 8x - 8$ into irreducible polynomials in $R[x]$.

First I started off by factorising, by long division I got the following: $x^5 - 2x^4 - 3x^3 + 13x^2 -16x + 6 = x(x^4 - 2x^3 - 2x^2 + 8x - 8) + (-x^3 +5x^2-8x+6)$

Not sure where to go from here. I'm ok with the Euclidean algorithm but not sure how to apply it here?

• January 21st 2010, 12:27 PM craig
Edit: just seen that I've not fully completed the algorithm yet! Think I can manage this..

• January 22nd 2010, 04:26 AM HallsofIvy
I wondered about that - because what you wrote involved neither factoring nor long division!

• January 22nd 2010, 07:26 AM craig
Yeh sorry about that. Posted the question and then realised that I'd only done the first bit... oops.

• January 22nd 2010, 08:03 AM Dinkydoe
It's often also wise before you start to do a rational root test: let $f(x)=x^4-2x^3-2x^2+8x-8$. Suppose $f(x)=0$ has a rational root; then it has to be one of $x=\pm 1, \pm 2,\pm 4,\pm 8$. I checked for you and noticed that $f(2)=f(-2) = 0$ ! This means you can already factor: $f(x)/\big((x-2)(x+2)\big) = x^2-2x+2$. For the other function $g$ we have $g(\pm 2)\neq 0$. This means that you only have to check whether the roots of $x^2-2x+2$ are roots of $g(x)$ and you're done.

Edit: I can better put it like this. 
It now follows that $\gcd(g(x),f(x)) = \gcd(g(x), x^2-2x+2)$

• January 22nd 2010, 08:45 AM qmech
Your first step was the first step in the Euclidean algorithm: $x^5 - 2x^4 - 3x^3 + 13x^2 -16x + 6 = x(x^4 - 2x^3 - 2x^2 + 8x - 8) + (-x^3 +5x^2-8x+6)$ Now you have to do this again, but with the polynomials: $x^4 - 2x^3 - 2x^2 + 8x - 8 = (-x)(-x^3 +5x^2-8x+6) + \text{remainder}$ Keep going until the remainder is 0.

• January 22nd 2010, 08:58 AM Dinkydoe
Quote: Your first step was the first step in the Euclidean algorithm: $x^5 - 2x^4 - 3x^3 + 13x^2 -16x + 6 = x(x^4 - 2x^3 - 2x^2 + 8x - 8) + (-x^3 +5x^2-8x+6)$ Now you have to do this again, but with the polynomials: $x^4 - 2x^3 - 2x^2 + 8x - 8 = (-x)(-x^3 +5x^2-8x+6) + \text{remainder}$ Keep going until the remainder is 0.

That is clearly impossible: only possible when $g(x) = f(x)\cdot p(x)$ for some linear factor $p(x)$. This is not the case here; there will always be a remainder.
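The thread's computation can be checked mechanically; a sketch with SymPy rather than hand-run Euclid:

```python
from sympy import symbols, gcd, factor, expand

x = symbols('x')
g = x**5 - 2*x**4 - 3*x**3 + 13*x**2 - 16*x + 6
f = x**4 - 2*x**3 - 2*x**2 + 8*x - 8

d = gcd(f, g)
print(d)          # the highest common factor of f and g
print(factor(f))  # f factored into irreducible real polynomials
```

This confirms that the highest common factor is $x^2-2x+2$ and that $x^4 - 2x^3 - 2x^2 + 8x - 8 = (x-2)(x+2)(x^2-2x+2)$, with the quadratic factor irreducible over $\mathbb{R}$ since its discriminant $4-8$ is negative.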
https://danmackinlay.name/notebook/wiener_khintchine.html
# Wiener-Khintchine representations

## Spectral representations of stochastic processes

$\renewcommand{\var}{\operatorname{Var}} \renewcommand{\Ex}{\mathbb{E}} \renewcommand{\Pr}{\mathbb{P}} \renewcommand{\dd}{\mathrm{d}} \renewcommand{\pd}{\partial} \renewcommand{\bb}[1]{\mathbb{#1}} \renewcommand{\vv}[1]{\boldsymbol{#1}} \renewcommand{\mmm}[1]{\mathrm{#1}} \renewcommand{\cc}[1]{\mathcal{#1}} \renewcommand{\ff}[1]{\mathfrak{#1}} \renewcommand{\oo}[1]{\operatorname{#1}} \renewcommand{\gvn}{\mid} \renewcommand{\rv}[1]{\mathsf{#1}}$

Consider a real-valued stochastic process $$\{\rv{f}_{\vv{t}}\}_{\vv{t}\in\mathcal{T}}$$ such that a realisation of such a process is a function $$\mathcal{T}\to\mathbb{R}$$ where $$\mathcal{T}\subseteq \mathbb{R}^d$$ is some compact set of non-zero Lebesgue volume, like a hypercube, or all of $$\mathbb{R}^{d}$$.1 We call $$\mathcal{T}$$ the index. Suppose the process is described by a probability measure $$\mu_{\vv{t}}, \vv{t}\in\mathcal{T}$$ such that for $$\vv{t},\vv{s}\in\mathcal{T}$$, the process has expectation function $m(\vv{t})=\Ex[\rv{f}_{\vv{t}}]=\int_{\mathbb{R}} x \,\mu_{\vv{t}}(\dd x)$ and covariance \begin{aligned} K(\vv{t}, \vv{s}) &=\operatorname{Cov}\left\{\rv{f}_{\vv{t}}, \rv{f}_{\vv{s}}\right\}\\ &=\Ex[\rv{f}_{\vv{t}} \rv{f}_{\vv{s}}]-\Ex[\rv{f}_{\vv{t}}]\Ex[ \rv{f}_{\vv{s}}] \\ &=\iint_{\mathbb{R}^{2}} x y \,\mu_{\vv{t}, \vv{s}}(\dd x \times \dd y)-\Ex[\rv{f}_{\vv{t}}]\Ex[ \rv{f}_{\vv{s}}] \end{aligned} where $\mu_{\vv{t},\vv{s}}$ is the joint law of $(\rv{f}_{\vv{t}},\rv{f}_{\vv{s}})$. We are concerned with ways to represent this covariance function $$K$$. I do not want to arse about with this mean function overmuch since it only clutters things up, so hereafter we will assume $$\Ex[\rv{f}_{\vv{t}}]=0$$ unless stated otherwise.

## Wiener theorem: Deterministic case

This is also interesting and I wrote it up for a different project: See Wiener theorem.

## Wiener-Khintchine theorem: Spectral density of covariance kernels

I found the Wikipedia introduction unusually confusing.
I recommend a well-written article, e.g. Abrahamsen (1997) or Robert J. Adler, Taylor, and Worsley (2016). Anyway, this theorem governs wide-sense-stationary random processes. Here wide-sense-stationary, a.k.a. weakly stationary or sometimes homogeneous, requires that

1. the process mean function is constant, $$\Ex[\rv{f}_{\vv{t}}]=0$$ w.l.o.g., and
2. correlation depends only on $$\vv{t}-\vv{s}$$, i.e. $$K(\vv{t}, \vv{s})=K(\vv{t}-\vv{s}).$$

That is, the first two moments of the process are stationary, but other moments might do something weird. For the wildly popular case of Gaussian processes, since the first two moments uniquely determine the process, weak and strict stationarity end up being the same.

In this context, the Wiener-Khintchine theorem tells us that there exists a finite positive measure $$\nu$$ on the Borel subsets of $$\mathbb{R}^d$$ such that the covariance kernel is given by $K(\vv{\tau} )=\int \exp(2\pi i\vv{\omega}^{\top}\vv{\tau} )\nu(\dd \vv{\omega}).$ If $$\nu$$ has a density $$\psi(\vv{\omega})$$ with respect to the dominating Lebesgue measure, then $\psi(\vv{\omega})=\int K(\vv{\tau} )\exp(-2\pi i \vv{\omega}^{\top} \vv{\tau} )\,\dd\vv{\tau}.$ That is, the power spectral density and the covariance kernel are Fourier dual. Nifty.

What does this mean? Why do I care? Turns out this is useful for many reasons. It relates the power spectral density to the correlation function, and also to continuity/differentiability.

## Bochner’s Theorem: stationary spectral kernels

Everyone seems to like the exposition in Yaglom (1987), which I brusquely summarise here.
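Before that summary, a quick numerical check of the Wiener-Khintchine duality just stated, for the squared-exponential kernel $K(\tau)=e^{-\tau^2/2}$, whose spectral density in the $e^{2\pi i \omega\tau}$ convention above is the standard Gaussian Fourier pair $\psi(\omega)=\sqrt{2\pi}\,e^{-2\pi^2\omega^2}$:

```python
import numpy as np

# Squared-exponential kernel K(tau) = exp(-tau^2 / 2); its Fourier transform is
# the (positive) spectral density sqrt(2*pi) * exp(-2*pi^2*omega^2).
n, dt = 4096, 0.01
tau = (np.arange(n) - n // 2) * dt
K = np.exp(-tau**2 / 2)

# Continuous Fourier transform approximated by a DFT (ifftshift puts tau = 0
# at index 0; the factor dt turns the sum into an integral approximation).
psi = np.fft.fftshift(np.real(np.fft.fft(np.fft.ifftshift(K)))) * dt
omega = np.fft.fftshift(np.fft.fftfreq(n, d=dt))

psi_exact = np.sqrt(2 * np.pi) * np.exp(-2 * np.pi**2 * omega**2)
print(np.max(np.abs(psi - psi_exact)))  # tiny discretisation error
```

The computed spectrum is non-negative, as the theorem demands of a valid covariance kernel.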
Bochner’s theorem tells us that $$K:\mathbb{R}^d\to\mathbb{C}$$ is the covariance function of a weakly stationary, mean-square-continuous, complex-valued random process on $$\mathbb{R}^{d}$$ if and only if it can be represented as $K(\vv{\tau})=\int_{\mathbb{R}^d} \exp \left(2 \pi i \vv{\omega}^{\top} \vv{\tau}\right) \nu(\mathrm{d} \vv{\omega})$ where $$\nu$$ is a positive and finite measure on (the Borel subsets of) $$\mathbb{R}^d.$$ If $$\nu$$ has a density $$\psi(\vv{\omega})$$ with respect to the dominating Lebesgue measure, then $$\psi$$ is called the spectral density of $$K,$$ and $$\psi$$ and $$K$$ are Fourier duals. This is what Robert J. Adler, Taylor, and Worsley (2016) calls the spectral distribution theorem.

This looks similar to the Wiener-Khintchine theorem, no? This one is telling us that the power spectrum represents all possible stationary kernels, i.e. we are not missing out on any by using a spectral representation. Note also that we needed to allow complex-valued fields for this statement to be clean; real-valued fields arise as the special case of a symmetric spectral measure.

## Yaglom’s theorem

Some of the kernel design literature cites a generalised Bochner-type theorem, Yaglom’s Theorem, which does not presume stationarity: A complex-valued, bounded, continuous function $$K$$ on $$\mathbb{R}^{d}$$ is the covariance function of a mean-square-continuous, complex-valued, random process on $$\mathbb{R}^{d}$$ if and only if it can be represented as $K(\vv{s}, \vv{t})=\int_{\mathbb{R}^d \times\mathbb{R}^d} e^{2 \pi i\left(\vv{\omega}_{1}^{\top} \vv{s}-\vv{\omega}_{2}^{\top} \vv{t}\right)} \nu\left(\dd \vv{\omega}_{1}\times \dd \vv{\omega}_{2}\right).$
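One constructive reading of Bochner's theorem: pick any probability measure on frequency space, draw samples from it, and average cosines; the result is automatically a valid stationary covariance function. A Monte Carlo sketch, where the Gaussian mixture used as the spectral measure is an arbitrary illustrative choice:

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)
# Spectral measure: an arbitrary two-component Gaussian mixture on the frequency axis.
omegas = np.concatenate([rng.normal(0.0, 0.2, 5000), rng.normal(1.0, 0.05, 5000)])

tau = np.linspace(0.0, 10.0, 200)
# K(tau) = E[cos(2 pi omega tau)]: a finite average of positive-definite
# functions, hence itself a valid stationary covariance (Bochner, applied
# with the symmetrised version of the sampled measure).
K = np.cos(2.0 * np.pi * np.outer(tau, omegas)).mean(axis=1)

print(K[0])                                   # K(0) = 1: total mass of the measure
print(np.linalg.eigvalsh(toeplitz(K)).min())  # covariance matrices built from K stay PSD
```

Averaging over sampled frequencies like this is essentially the random-Fourier-features construction that the kernel design literature builds on.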
## Spectral representation

This is nearly simple, but has the minor complication that the most intuitive route (to my mind at least) requires us to admit complex stochastic processes. TBD; mention power spectrum, spectral moments.

## References

- Abrahamsen, Petter. 1997.
- Adler, Robert J. 2010. The Geometry of Random Fields. SIAM ed. Philadelphia: Society for Industrial and Applied Mathematics.
- Adler, Robert J., and Jonathan E. Taylor. 2007. Random Fields and Geometry. Springer Monographs in Mathematics 115. New York: Springer.
- Adler, Robert J., Jonathan E. Taylor, and Keith J. Worsley. 2016. Applications of Random Fields and Geometry Draft.
- Bochner, Salomon. 1959. Lectures on Fourier Integrals. Princeton University Press.
- Broersen, Petrus M. T. 2006. Automatic Autocorrelation and Spectral Analysis. Secaucus, NJ, USA: Springer.
- Hartikainen, J., and S. Särkkä. 2010. In 2010 IEEE International Workshop on Machine Learning for Signal Processing, 379–84. Kittila, Finland: IEEE.
- Higdon, Dave. 2002. In Quantitative Methods for Current Environmental Issues, edited by Clive W. Anderson, Vic Barnett, Philip C. Chatwin, and Abdel H. El-Shaarawi, 37–56. London: Springer.
- Khintchine, A. 1934. Mathematische Annalen 109 (1): 604–15.
- Kom Samo, Yves-Laurent, and Stephen Roberts. 2015. arXiv:1506.02236 [stat], June.
- Krapf, Diego, Enzo Marinari, Ralf Metzler, Gleb Oshanin, Xinran Xu, and Alessio Squarcini. 2018. New Journal of Physics 20 (2): 023029.
- Loynes, R. M. 1968. Journal of the Royal Statistical Society. Series B (Methodological) 30 (1): 1–30.
- Marple, S. Lawrence, Jr. 1987. Digital Spectral Analysis with Applications.
- Priestley, M. B. 2004. Spectral Analysis and Time Series. Repr. Probability and Mathematical Statistics. London: Elsevier.
- Remes, Sami, Markus Heinonen, and Samuel Kaski. 2017. In Advances in Neural Information Processing Systems 30, edited by I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, 4642–51. Curran Associates, Inc.
- Rust, Henning. 2007. Lecture Notes for the E2C2/CIACS Summer School, Comorova, Romania, University of Potsdam, 1–76.
- Särkkä, S., and J. Hartikainen. 2013. In 2013 IEEE International Workshop on Machine Learning for Signal Processing (MLSP), 1–6.
- Särkkä, Simo, and Jouni Hartikainen. 2012. In Artificial Intelligence and Statistics.
- Stoica, Petre, and Randolph L. Moses. 2005. Spectral Analysis of Signals. 1st edition. Upper Saddle River, N.J.: Prentice Hall.
- Sun, Shengyang, Guodong Zhang, Chaoqi Wang, Wenyuan Zeng, Jiaman Li, and Roger Grosse. 2018. “Differentiable Compositional Kernel Learning for Gaussian Processes.” arXiv Preprint arXiv:1806.04326.
- Wiener, Norbert. 1930. Acta Mathematica 55: 117–258.
- Yaglom, A. M. 1987. Correlation Theory of Stationary and Related Random Functions. Volume II: Supplementary Notes and References. Springer Series in Statistics. New York, NY: Springer Science & Business Media.

1. We can take it to be a submanifold but things get more subtle and complex.↩︎
https://ccrma.stanford.edu/~jos/mdft/Frequency_Response.html
### Frequency Response

Definition: The frequency response of an LTI filter may be defined as the Fourier transform of its impulse response. In particular, for finite, discrete-time signals $h(n)$, $n = 0,1,\ldots,N-1$, the sampled frequency response may be defined as

$$H(\omega_k) \triangleq \sum_{n=0}^{N-1} h(n) e^{-j \omega_k n}, \qquad \omega_k \triangleq \frac{2\pi k}{N}, \quad k = 0,1,\ldots,N-1.$$

The complete (continuous) frequency response is defined using the DTFT (see §B.1), i.e.,

$$H(\omega) \triangleq \sum_{n=-\infty}^{\infty} h(n) e^{-j \omega n} = \sum_{n=0}^{N-1} h(n) e^{-j \omega n},$$

where the summation limits are truncated to $[0, N-1]$ because $h(n)$ is zero for $n<0$ and $n \ge N$. Thus, the DTFT can be obtained from the DFT by simply replacing $\omega_k$ by $\omega$, which corresponds to infinite zero-padding in the time domain. Recall from §7.2.10 that zero-padding in the time domain gives ideal interpolation of the frequency-domain samples (assuming the original DFT included all nonzero samples of $h$).

Definition: The amplitude response of a filter is defined as the magnitude of the frequency response,

$$G(\omega_k) \triangleq \left| H(\omega_k) \right|.$$

From the convolution theorem, we can see that the amplitude response is the gain of the filter at frequency $\omega_k$, since

$$\left| Y(\omega_k) \right| = \left| H(\omega_k) X(\omega_k) \right| = G(\omega_k) \left| X(\omega_k) \right|,$$

where $X(\omega_k)$ is the $k$th sample of the DFT of the input signal $x$, and $Y(\omega_k)$ is the DFT of the output signal $y$.
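As a quick illustration (not part of the original page), the relationships above can be checked with NumPy's FFT: the DFT of the impulse response samples the DTFT at the frequencies $\omega_k = 2\pi k/N$, and zero-padding the impulse response samples the same DTFT more densely.

```python
import numpy as np

# Sampled frequency response of a short FIR filter via the DFT.
h = np.array([1.0, 2.0, 1.0])       # impulse response
N = len(h)
H = np.fft.fft(h)                   # H(omega_k), k = 0..N-1

# The DFT samples the DTFT at omega_k = 2*pi*k/N.
for k in range(N):
    w = 2 * np.pi * k / N
    H_dtft = sum(h[n] * np.exp(-1j * w * n) for n in range(N))
    assert abs(H_dtft - H[k]) < 1e-12

# Zero-padding in time samples the same DTFT more densely; bin 0 of both
# transforms is the DC gain, |H(0)| = 1 + 2 + 1 = 4.
H_dense = np.fft.fft(h, 64)
assert abs(H_dense[0] - H[0]) < 1e-12
assert abs(np.abs(H[0]) - 4.0) < 1e-12
```

The amplitude response is then just `np.abs(H)`, evaluated on whichever frequency grid the zero-padding provides.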
http://math.stackexchange.com/questions/117210/summing-series-and-taking-limits
# Summing series and taking limits

Given that the series $$\sum_{n=-\infty}^\infty {1\over n^2+k^2}={\pi\over k \tanh (\pi k)}$$ and $$\sum_{n=-\infty}^\infty {1\over (n+k)^2}={\pi^2\over \sin^2 (\pi k)}$$

How might we take the limit of $k\to 0$ so that we can get $\sum_{n=1}^\infty {1\over n^2}={\pi^2\over 6}$?

From your first result, $$\sum_{n=1}^\infty \frac{1}{n^2+k^2} = \frac{1}{2} \left(\frac{\pi}{k \tanh(\pi k)} - \frac{1}{k^2} \right)$$ Now use $$\frac{1}{\tanh(\pi k)} = \frac{\cosh(\pi k)}{\sinh(\pi k)} = \frac{1 + \pi^2 k^2/2 + \ldots}{\pi k + \pi^3 k^3/6 + \ldots}$$
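As a quick numerical sanity check (mine, not part of the original thread), the closed form above does approach $\pi^2/6$ as $k \to 0$, with an error of order $k^2$:

```python
import math

# Left side: sum_{n>=1} 1/(n^2 + k^2), via the closed form derived above.
def closed_form(k):
    return 0.5 * (math.pi / (k * math.tanh(math.pi * k)) - 1.0 / k**2)

# As k -> 0 this should approach sum_{n>=1} 1/n^2 = pi^2/6; the next term
# in the expansion is -pi^4 k^2 / 90, so the error is O(k^2).
target = math.pi**2 / 6
for k in [0.1, 0.01, 0.001]:
    assert abs(closed_form(k) - target) < 2.0 * k**2

# Cross-check against a direct partial sum (the tail of the series is ~ 1/N).
direct = sum(1.0 / (n * n) for n in range(1, 200001))
assert abs(direct - target) < 1e-5
```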
https://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/1512/2/s/k/
# Properties Label 1512.2.s.k Level $1512$ Weight $2$ Character orbit 1512.s Analytic conductor $12.073$ Analytic rank $0$ Dimension $6$ CM no Inner twists $2$ # Learn more about ## Newspace parameters Level: $$N$$ $$=$$ $$1512 = 2^{3} \cdot 3^{3} \cdot 7$$ Weight: $$k$$ $$=$$ $$2$$ Character orbit: $$[\chi]$$ $$=$$ 1512.s (of order $$3$$, degree $$2$$, minimal) ## Newform invariants Self dual: no Analytic conductor: $$12.0733807856$$ Analytic rank: $$0$$ Dimension: $$6$$ Relative dimension: $$3$$ over $$\Q(\zeta_{3})$$ Coefficient field: 6.0.309123.1 Defining polynomial: $$x^{6} - 3 x^{5} + 10 x^{4} - 15 x^{3} + 19 x^{2} - 12 x + 3$$ Coefficient ring: $$\Z[a_1, \ldots, a_{7}]$$ Coefficient ring index: $$3$$ Twist minimal: yes Sato-Tate group: $\mathrm{SU}(2)[C_{3}]$ ## $q$-expansion Coefficients of the $$q$$-expansion are expressed in terms of a basis $$1,\beta_1,\ldots,\beta_{5}$$ for the coefficient ring described below. We also show the integral $$q$$-expansion of the trace form. $$f(q)$$ $$=$$ $$q + ( \beta_{1} - \beta_{5} ) q^{5} + ( -1 - \beta_{1} + \beta_{2} + \beta_{5} ) q^{7} +O(q^{10})$$ $$q + ( \beta_{1} - \beta_{5} ) q^{5} + ( -1 - \beta_{1} + \beta_{2} + \beta_{5} ) q^{7} + ( 1 - \beta_{2} - \beta_{4} + \beta_{5} ) q^{11} + ( 1 + \beta_{1} ) q^{13} + ( \beta_{2} - \beta_{5} ) q^{17} + ( \beta_{2} + \beta_{3} + 2 \beta_{4} ) q^{19} + ( -\beta_{2} - \beta_{3} - \beta_{4} ) q^{23} + ( 2 + \beta_{2} - 2 \beta_{4} + \beta_{5} ) q^{25} + ( 1 + 3 \beta_{1} + \beta_{3} ) q^{29} + ( -1 + 2 \beta_{2} + \beta_{4} + 3 \beta_{5} ) q^{31} + ( 2 - 2 \beta_{1} - \beta_{2} - 3 \beta_{4} ) q^{35} + ( \beta_{1} + 3 \beta_{2} + 3 \beta_{3} + 4 \beta_{4} - \beta_{5} ) q^{37} + ( -1 - 3 \beta_{3} ) q^{41} + ( 1 + 5 \beta_{1} + \beta_{3} ) q^{43} + ( -5 \beta_{1} + 2 \beta_{2} + 2 \beta_{3} + 5 \beta_{5} ) q^{47} + ( 3 \beta_{1} + \beta_{2} + 2 \beta_{3} - \beta_{4} ) q^{49} + ( -3 + 2 \beta_{2} + 3 \beta_{4} + \beta_{5} ) q^{53} + ( 4 + \beta_{1} + \beta_{3} ) q^{55} + ( 
-4 - \beta_{2} + 4 \beta_{4} - 4 \beta_{5} ) q^{59} + ( 2 \beta_{2} + 2 \beta_{3} + 5 \beta_{4} ) q^{61} + ( \beta_{2} + \beta_{3} + 3 \beta_{4} ) q^{65} + ( 5 + 3 \beta_{2} - 5 \beta_{4} + 2 \beta_{5} ) q^{67} + ( 2 + 2 \beta_{1} + 3 \beta_{3} ) q^{71} + ( -1 + 3 \beta_{2} + \beta_{4} + 3 \beta_{5} ) q^{73} + ( -5 + \beta_{1} - 2 \beta_{3} + 6 \beta_{4} - 3 \beta_{5} ) q^{77} + ( 3 \beta_{2} + 3 \beta_{3} + 5 \beta_{4} ) q^{79} + ( -3 \beta_{1} + 4 \beta_{3} ) q^{83} + ( -4 - \beta_{3} ) q^{85} + ( -2 \beta_{1} + 5 \beta_{4} + 2 \beta_{5} ) q^{89} + ( -2 - \beta_{1} - \beta_{3} - 2 \beta_{4} - \beta_{5} ) q^{91} + ( -1 + \beta_{4} - 3 \beta_{5} ) q^{95} + ( 4 - 2 \beta_{3} ) q^{97} +O(q^{100})$$ $$\operatorname{Tr}(f)(q)$$ $$=$$ $$6q - q^{5} - 4q^{7} + O(q^{10})$$ $$6q - q^{5} - 4q^{7} + q^{11} + 4q^{13} + 2q^{17} + 5q^{19} - 2q^{23} + 6q^{25} - 2q^{29} - 4q^{31} + 6q^{35} + 8q^{37} - 6q^{43} + 3q^{47} - 12q^{49} - 8q^{53} + 20q^{55} - 9q^{59} + 13q^{61} + 8q^{65} + 16q^{67} + 2q^{71} - 3q^{73} - 7q^{77} + 12q^{79} - 2q^{83} - 22q^{85} + 17q^{89} - 13q^{91} + 28q^{97} + O(q^{100})$$ Basis of coefficient ring in terms of a root $$\nu$$ of $$x^{6} - 3 x^{5} + 10 x^{4} - 15 x^{3} + 19 x^{2} - 12 x + 3$$: $$\beta_{0}$$ $$=$$ $$1$$ $$\beta_{1}$$ $$=$$ $$\nu^{2} - \nu + 2$$ $$\beta_{2}$$ $$=$$ $$($$$$-\nu^{5} + \nu^{4} - 8 \nu^{3} + 5 \nu^{2} - 18 \nu + 6$$$$)/3$$ $$\beta_{3}$$ $$=$$ $$\nu^{4} - 2 \nu^{3} + 6 \nu^{2} - 5 \nu + 3$$ $$\beta_{4}$$ $$=$$ $$($$$$-2 \nu^{5} + 5 \nu^{4} - 16 \nu^{3} + 19 \nu^{2} - 21 \nu + 9$$$$)/3$$ $$\beta_{5}$$ $$=$$ $$($$$$2 \nu^{5} - 5 \nu^{4} + 19 \nu^{3} - 22 \nu^{2} + 30 \nu - 9$$$$)/3$$ $$1$$ $$=$$ $$\beta_0$$ $$\nu$$ $$=$$ $$($$$$-2 \beta_{5} - \beta_{4} - \beta_{3} - 2 \beta_{2} + \beta_{1} + 2$$$$)/3$$ $$\nu^{2}$$ $$=$$ $$($$$$-2 \beta_{5} - \beta_{4} - \beta_{3} - 2 \beta_{2} + 4 \beta_{1} - 4$$$$)/3$$ $$\nu^{3}$$ $$=$$ $$($$$$7 \beta_{5} + 5 \beta_{4} + 2 \beta_{3} + 4 \beta_{2} + \beta_{1} - 10$$$$)/3$$ $$\nu^{4}$$ $$=$$ 
$$($$$$16 \beta_{5} + 11 \beta_{4} + 8 \beta_{3} + 10 \beta_{2} - 17 \beta_{1} + 5$$$$)/3$$ $$\nu^{5}$$ $$=$$ $$($$$$-14 \beta_{5} - 16 \beta_{4} + 5 \beta_{3} - 5 \beta_{2} - 23 \beta_{1} + 47$$$$)/3$$

## Character values

We give the values of $$\chi$$ on generators for $$\left(\mathbb{Z}/1512\mathbb{Z}\right)^\times$$. $$n$$ $$757$$ $$785$$ $$1081$$ $$1135$$ $$\chi(n)$$ $$1$$ $$1$$ $$-\beta_{4}$$ $$1$$

## Embeddings

For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below. Label $$\iota_m(\nu)$$ $$a_{2}$$ $$a_{3}$$ $$a_{4}$$ $$a_{5}$$ $$a_{6}$$ $$a_{7}$$ $$a_{8}$$ $$a_{9}$$ $$a_{10}$$ 865.1 0.5 − 2.05195i 0.5 + 1.41036i 0.5 − 0.224437i 0.5 + 2.05195i 0.5 − 1.41036i 0.5 + 0.224437i 0 0 0 −1.23025 − 2.13086i 0 −0.0665372 + 2.64491i 0 0 0 865.2 0 0 0 −0.119562 − 0.207087i 0 0.710533 − 2.54856i 0 0 0 865.3 0 0 0 0.849814 + 1.47192i 0 −2.64400 − 0.0963576i 0 0 0 1297.1 0 0 0 −1.23025 + 2.13086i 0 −0.0665372 − 2.64491i 0 0 0 1297.2 0 0 0 −0.119562 + 0.207087i 0 0.710533 + 2.54856i 0 0 0 1297.3 0 0 0 0.849814 − 1.47192i 0 −2.64400 + 0.0963576i 0 0 0

## Inner twists

Char Parity Ord Mult Type 1.a even 1 1 trivial 7.c even 3 1 inner

## Twists

By twisting character orbit Char Parity Ord Mult Type Twist Min Dim 1.a even 1 1 trivial 1512.2.s.k 6 3.b odd 2 1 1512.2.s.l yes 6 7.c even 3 1 inner 1512.2.s.k 6 21.h odd 6 1 1512.2.s.l yes 6 By twisted newform orbit Twist Min Dim Char Parity Ord Mult Type 1512.2.s.k 6 1.a even 1 1 trivial 1512.2.s.k 6 7.c even 3 1 inner 1512.2.s.l yes 6 3.b odd 2 1 1512.2.s.l yes 6 21.h odd 6 1

## Hecke kernels

This newform subspace can be constructed as the intersection of the kernels of the following linear operators acting on $$S_{2}^{\mathrm{new}}(1512, [\chi])$$: $$T_{5}^{6} + T_{5}^{5} + 5 T_{5}^{4} - 2 T_{5}^{3} + 17 T_{5}^{2} + 4 T_{5} + 1$$ $$T_{11}^{6} - T_{11}^{5} + 13 T_{11}^{4} + 30 T_{11}^{3} + 135 T_{11}^{2} + 108 T_{11} + 81$$ $$T_{13}^{3} - 2 T_{13}^{2} - 3 T_{13} + 3$$

## Hecke characteristic polynomials

$p$ $F_p(T)$ $2$ 1 $3$ 1 $5$ $$1 + T - 10 T^{2} - 7 T^{3} + 57 T^{4} + 14 T^{5} - 299 T^{6} + 70 T^{7} + 1425 T^{8} - 875 T^{9} - 6250 T^{10} + 3125 T^{11} + 15625 T^{12}$$ $7$ $$1 + 4 T + 14 T^{2} + 55 T^{3} + 98 T^{4} + 196 T^{5} + 343 T^{6}$$ $11$ $$1 - T - 20 T^{2} + 41 T^{3} + 179 T^{4} - 310 T^{5} - 1349 T^{6} - 3410 T^{7} + 21659 T^{8} + 54571 T^{9} - 292820 T^{10} - 161051 T^{11} + 1771561 T^{12}$$ $13$ $$( 1 - 2 T + 36 T^{2} - 49 T^{3} + 468 T^{4} - 338 T^{5} + 2197 T^{6} )^{2}$$ $17$ $$1 - 2 T - 36 T^{2} + 14 T^{3} + 826 T^{4} + 262 T^{5} - 16321 T^{6} + 4454 T^{7} + 238714 T^{8} + 68782 T^{9} - 3006756 T^{10} - 2839714 T^{11} + 24137569 T^{12}$$ $19$ $$1 - 5 T - 34 T^{2} + 63 T^{3} + 1465 T^{4} - 1156 T^{5} - 29405 T^{6} - 21964 T^{7} + 528865 T^{8} + 432117 T^{9} - 4430914 T^{10} - 12380495 T^{11} + 47045881 T^{12}$$ $23$ $$1 + 2 T - 60 T^{2} - 38 T^{3} + 2458 T^{4} + 482 T^{5} - 65101 T^{6} + 11086 T^{7} + 1300282 T^{8} - 462346 T^{9} - 16790460
T^{10} + 12872686 T^{11} + 148035889 T^{12}$$ $29$ $$( 1 + T + 37 T^{2} - 71 T^{3} + 1073 T^{4} + 841 T^{5} + 24389 T^{6} )^{2}$$ $31$ $$1 + 4 T - 28 T^{2} - 402 T^{3} - 584 T^{4} + 5648 T^{5} + 67711 T^{6} + 175088 T^{7} - 561224 T^{8} - 11975982 T^{9} - 25858588 T^{10} + 114516604 T^{11} + 887503681 T^{12}$$ $37$ $$1 - 8 T - 2 T^{2} + 254 T^{3} - 1214 T^{4} + 2314 T^{5} + 5399 T^{6} + 85618 T^{7} - 1661966 T^{8} + 12865862 T^{9} - 3748322 T^{10} - 554751656 T^{11} + 2565726409 T^{12}$$ $41$ $$( 1 + 66 T^{2} - 137 T^{3} + 2706 T^{4} + 68921 T^{6} )^{2}$$ $43$ $$( 1 + 3 T + 9 T^{2} - 143 T^{3} + 387 T^{4} + 5547 T^{5} + 79507 T^{6} )^{2}$$ $47$ $$1 - 3 T - 18 T^{2} + 1225 T^{3} - 2499 T^{4} - 16644 T^{5} + 579911 T^{6} - 782268 T^{7} - 5520291 T^{8} + 127183175 T^{9} - 87834258 T^{10} - 688035021 T^{11} + 10779215329 T^{12}$$ $53$ $$1 + 8 T - 90 T^{2} - 278 T^{3} + 9514 T^{4} + 8150 T^{5} - 568945 T^{6} + 431950 T^{7} + 26724826 T^{8} - 41387806 T^{9} - 710143290 T^{10} + 3345563944 T^{11} + 22164361129 T^{12}$$ $59$ $$1 + 9 T - 54 T^{2} - 171 T^{3} + 4023 T^{4} - 18486 T^{5} - 466229 T^{6} - 1090674 T^{7} + 14004063 T^{8} - 35119809 T^{9} - 654337494 T^{10} + 6434318691 T^{11} + 42180533641 T^{12}$$ $61$ $$1 - 13 T - 45 T^{2} + 252 T^{3} + 13021 T^{4} - 33607 T^{5} - 667154 T^{6} - 2050027 T^{7} + 48451141 T^{8} + 57199212 T^{9} - 623062845 T^{10} - 10979751913 T^{11} + 51520374361 T^{12}$$ $67$ $$1 - 16 T + 34 T^{2} + 562 T^{3} + 1498 T^{4} - 52510 T^{5} + 381563 T^{6} - 3518170 T^{7} + 6724522 T^{8} + 169028806 T^{9} + 685138114 T^{10} - 21602001712 T^{11} + 90458382169 T^{12}$$ $71$ $$( 1 - T + 129 T^{2} - 235 T^{3} + 9159 T^{4} - 5041 T^{5} + 357911 T^{6} )^{2}$$ $73$ $$1 + 3 T - 132 T^{2} - 347 T^{3} + 8433 T^{4} + 8514 T^{5} - 573015 T^{6} + 621522 T^{7} + 44939457 T^{8} - 134988899 T^{9} - 3748567812 T^{10} + 6219214779 T^{11} + 151334226289 T^{12}$$ $79$ $$1 - 12 T - 84 T^{2} + 454 T^{3} + 14832 T^{4} - 6264 T^{5} - 1475337 T^{6} - 494856 T^{7} + 92566512 
T^{8} + 223839706 T^{9} - 3271806804 T^{10} - 36924676788 T^{11} + 243087455521 T^{12}$$ $83$ $$( 1 + T + 129 T^{2} + 313 T^{3} + 10707 T^{4} + 6889 T^{5} + 571787 T^{6} )^{2}$$ $89$ $$1 - 17 T - 57 T^{2} + 344 T^{3} + 36001 T^{4} - 164759 T^{5} - 1843186 T^{6} - 14663551 T^{7} + 285163921 T^{8} + 242509336 T^{9} - 3576307737 T^{10} - 94929010633 T^{11} + 496981290961 T^{12}$$ $97$ $$( 1 - 14 T + 331 T^{2} - 2740 T^{3} + 32107 T^{4} - 131726 T^{5} + 912673 T^{6} )^{2}$$
http://clay6.com/qa/7974/find-the-cartesian-form-of-following-planes-overrightarrow-1-s-t-overrighta
# Find the cartesian form of the following plane: $\overrightarrow{r}=(1+s+t) \overrightarrow{i}+(2-s+t)\overrightarrow{j}+(3-2s+2t)\overrightarrow{k}$

Step 1
Let $\overrightarrow r=x\overrightarrow i+y\overrightarrow j+z\overrightarrow k$. We have $x\overrightarrow i+y\overrightarrow j+z\overrightarrow k=(1+s+t)\overrightarrow i+(2-s+t)\overrightarrow j+(3-2s+2t)\overrightarrow k$. Equating the components in the $x$-, $y$-, $z$- directions:
$x = 1+s+t\: \: \Rightarrow x-1=s+t$
$y = 2-s+t\: \: \Rightarrow y-2=-s+t$
$z = 3-2s+2t\: \: \Rightarrow z-3=-2s+2t$

Step 2
Eliminating $s, t$ from the above, we have $\begin{vmatrix} x-1 & 1 & 1 \\ y-2 & -1 & 1 \\ z-3 & -2 & 2 \end{vmatrix}=0 \Rightarrow (x-1)(-2+2)-(y-2)(2+2)+(z-3)(1+1)=0$ $\therefore -4y+8+2z-6=0\: or \: 4y-2z=2$
$\Rightarrow 2y-z=1$ is the equation of the plane.
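The result is easy to verify numerically (this check is mine, not part of the original solution): the normal to the plane is the cross product of the two direction vectors, and every point of the parametric form must satisfy the Cartesian equation.

```python
import numpy as np

# Point on the plane and the two direction vectors read off from
# r = (1+s+t, 2-s+t, 3-2s+2t): a is the s=t=0 point, u and v are the
# coefficients of s and t respectively.
a = np.array([1.0, 2.0, 3.0])
u = np.array([1.0, -1.0, -2.0])
v = np.array([1.0, 1.0, 2.0])

n = np.cross(u, v)                     # normal vector to the plane
assert np.allclose(n, [0.0, -4.0, 2.0])

# Every point a + s*u + t*v must satisfy 2y - z = 1.
for s, t in [(0, 0), (1, 0), (0, 1), (2, -3)]:
    x, y, z = a + s * u + t * v
    assert abs(2 * y - z - 1.0) < 1e-12
```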
http://www.mathjournals.org/mmj/2015-015-002/2015-015-002-004.html
# Moscow Mathematical Journal

Volume 15, Issue 2, April–June 2015, pp. 257–267.

Quasi-Coherent Hecke Category and Demazure Descent

Authors: Sergey Arkhipov (1) and Tina Kanstrup (2)

Author institutions: (1) Matematisk Institut, Aarhus Universitet, Ny Munkegade, DK-8000, Århus C, Denmark; (2) Centre for Quantum Geometry of Moduli Spaces, Aarhus Universitet, Ny Munkegade, DK-8000, Århus C, Denmark

Summary: Let G be a reductive algebraic group with a Borel subgroup B. We define the quasi-coherent Hecke category for the pair (G, B). For any regular Noetherian G-scheme X we construct a monoidal action of the Hecke category on the derived category of B-equivariant quasi-coherent sheaves on X. Using the action we define the Demazure Descent Data on the latter category and prove that the Descent category is equivalent to the derived category of G-equivariant sheaves on X.

2010 Math. Subj. Class. Primary: 14M15; Secondary: 20F55, 18E30.

Keywords: Equivariant coherent sheaves, Demazure functors, Bott–Samelson varieties
https://stats.stackexchange.com/questions/105337/asymptotic-distribution-of-sample-variance-of-non-normal-sample
# Asymptotic distribution of sample variance of non-normal sample

This is a more general treatment of the issue posed by this question. After deriving the asymptotic distribution of the sample variance, we can apply the Delta method to arrive at the corresponding distribution for the standard deviation.

Consider a sample of size $n$ of i.i.d. non-normal random variables $\{X_i\},\;\; i=1,...,n$, with mean $\mu$ and variance $\sigma^2$. Set the sample mean and the sample variance as $$\bar x = \frac 1n \sum_{i=1}^nX_i,\;\;\; s^2 = \frac 1{n-1} \sum_{i=1}^n(X_i-\bar x)^2$$ We know that $$E(s^2) = \sigma^2, \;\;\; \operatorname {Var}(s^2) = \frac{1}{n} \left(\mu_4 - \frac{n-3}{n-1}\sigma^4\right)$$ where $\mu_4 = E(X_i -\mu)^4$, and we restrict our attention to distributions for which the moments that need to exist and be finite do exist and are finite. Does it hold that $$\sqrt n(s^2 - \sigma^2) \rightarrow_d N\left(0,\mu_4 - \sigma^4\right)\;\; ?$$

• Heh. I just posted on the other thread, not realizing you'd posted this. There's a number of things to be found on the CLT applied to the variance (such as p3-4 here for example). Nice answer btw. – Glen_b Jul 1 '14 at 1:38
• Thanks. Yes I have found this. But they miss the case @whuber pointed out. They even provide a Bernoulli example with general $p$! (base of p. 4). I am extending my answer to cover the $p=1/2$ case also. – Alecos Papadopoulos Jul 1 '14 at 1:53
• Yes, I saw that they considered the Bernoulli yet didn't consider that special case. I think the mention of the distinction for the scaled Bernoulli (equal prob. dichotomous case) is one reason (among a couple of others) why it's valuable to have it discussed in answer here (rather than just in a comment) - not least that it's searchable for.
– Glen_b Jul 1 '14 at 1:56

To side-step dependencies arising when we consider the sample variance, we write

$$(n-1)s^2 = \sum_{i=1}^n\Big((X_i-\mu) -(\bar x-\mu)\Big)^2$$ $$=\sum_{i=1}^n\Big(X_i-\mu\Big)^2-2\sum_{i=1}^n\Big((X_i-\mu)(\bar x-\mu)\Big)+\sum_{i=1}^n\Big(\bar x-\mu\Big)^2$$

and after a little manipulation,

$$=\sum_{i=1}^n\Big(X_i-\mu\Big)^2 - n\Big(\bar x-\mu\Big)^2$$

Therefore

$$\sqrt n(s^2 - \sigma^2) = \frac {\sqrt n}{n-1}\sum_{i=1}^n\Big(X_i-\mu\Big)^2 -\sqrt n \sigma^2- \frac {\sqrt n}{n-1}n\Big(\bar x-\mu\Big)^2$$

Manipulating,

$$\sqrt n(s^2 - \sigma^2) = \frac {\sqrt n}{n-1}\sum_{i=1}^n\Big(X_i-\mu\Big)^2 -\sqrt n \frac {n-1}{n-1}\sigma^2- \frac {n}{n-1}\sqrt n\Big(\bar x-\mu\Big)^2$$ $$=\frac {n\sqrt n}{n-1}\frac 1n\sum_{i=1}^n\Big(X_i-\mu\Big)^2 -\sqrt n \frac {n-1}{n-1}\sigma^2- \frac {n}{n-1}\sqrt n\Big(\bar x-\mu\Big)^2$$ $$=\frac {n}{n-1}\left[\sqrt n\left(\frac 1n\sum_{i=1}^n\Big(X_i-\mu\Big)^2 -\sigma^2\right)\right] + \frac {\sqrt n}{n-1}\sigma^2 -\frac {n}{n-1}\sqrt n\Big(\bar x-\mu\Big)^2$$

The term $n/(n-1)$ becomes unity asymptotically. The term $\frac {\sqrt n}{n-1}\sigma^2$ is deterministic and goes to zero as $n \rightarrow \infty$. We also have $\sqrt n\Big(\bar x-\mu\Big)^2 = \left[\sqrt n\Big(\bar x-\mu\Big)\right]\cdot \Big(\bar x-\mu\Big)$. The first component converges in distribution to a Normal, the second converges in probability to zero. Then by Slutsky's theorem the product converges in probability to zero,

$$\sqrt n\Big(\bar x-\mu\Big)^2\xrightarrow{p} 0$$

We are left with the term

$$\left[\sqrt n\left(\frac 1n\sum_{i=1}^n\Big(X_i-\mu\Big)^2 -\sigma^2\right)\right]$$

Alerted by a lethal example offered by @whuber in a comment to this answer, we want to make certain that $(X_i-\mu)^2$ is not constant. Whuber pointed out that if $X_i$ is a Bernoulli $(1/2)$ then this quantity is a constant.
So excluding variables for which this happens (perhaps other dichotomous, not just $0/1$ binary?), for the rest we have

$$\mathrm{E}\Big(X_i-\mu\Big)^2 = \sigma^2,\;\; \operatorname {Var}\left[\Big(X_i-\mu\Big)^2\right] = \mu_4 - \sigma^4$$

and so the term under investigation is a usual subject matter of the classical Central Limit Theorem, and

$$\sqrt n(s^2 - \sigma^2) \xrightarrow{d} N\left(0,\mu_4 - \sigma^4\right)$$

Note: the above result of course holds also for normally distributed samples, but in this last case we have also available a finite-sample chi-square distributional result.

• +1 There's no reason to check general dichotomous distributions because they are all scale and location versions of the Bernoulli: the analysis for the Bernoulli suffices. My simulations (out to sample sizes of $10^{1000}$) confirm the $\chi^2_1$ result. – whuber Jul 1 '14 at 15:38
• @whuber Thanks for checking. You're right of course about the Bernoulli being the mother of them all. – Alecos Papadopoulos Jul 1 '14 at 17:48

You already have a detailed answer to your question but let me offer another one to go with it. Actually, a shorter proof is possible, based on the fact that the distribution of

$$S^2 = \frac{1}{n-1} \sum_{i=1}^n \left(X_i - \bar{X} \right)^2$$

does not depend on $E(X) = \xi$, say. Asymptotically, it also does not matter whether we change the factor $\frac{1}{n-1}$ to $\frac{1}{n}$, which I will do for convenience. We then have

$$\sqrt{n} \left(S^2 - \sigma^2 \right) = \sqrt{n} \left[ \frac{1}{n} \sum_{i=1}^n X_i^2 - \bar{X}^2 - \sigma^2 \right]$$

And now we assume without loss of generality that $\xi = 0$ and we notice that

$$\sqrt{n} \bar{X}^2 = \frac{1}{\sqrt{n}} \left( \sqrt{n} \bar{X} \right)^2$$

has probability limit zero, since the second term is bounded in probability (by the CLT and the continuous mapping theorem), i.e. it is $O_p(1)$.
The asymptotic result now follows from Slutsky's theorem and the CLT, since

$$\sqrt{n} \left[ \frac{1}{n} \sum X_i^2 - \sigma^2 \right] \xrightarrow{D} \mathcal{N} \left(0, \tau^2 \right)$$

where $\tau^2 = Var \left\{ X^2\right\} = \mathbb{E} \left(X^4 \right) - \left( \mathbb{E} \left(X^2\right) \right)^2$. And that will do it.

• This is certainly more economical. But please reconsider how innocuous the $E(X) = 0$ assumption is. For example, it excludes the case of a Bernoulli ($p=1/2$) sample, and as I mention at the end of my answer, for such a sample, this asymptotic result does not hold. – Alecos Papadopoulos Mar 25 '16 at 21:41
• @AlecosPapadopoulos Indeed but the data can always be centered, right? I mean $$\sum_{i=1}^n \left(X_i - \mu - ( \bar{X}-\mu) \right)^2 = \sum_{i=1}^n \left(X_i - \bar{X} \right)^2$$ and we can work with these variables. For the Bernoulli case, is there something stopping us from doing so? – JohnK Mar 25 '16 at 21:44
• @AlecosPapadopoulos Oh yeah, I see the problem. – JohnK Mar 25 '16 at 21:47
• I have written a small piece on the matter, I think it is time to upload it in my blog. I will notify you in case you are interested to read it. The asymptotic distribution of the sample variance in this case is interesting, and even more the asymptotic distribution of the sample standard deviation. These results hold for any $p=1/2$ dichotomous random variable. – Alecos Papadopoulos Mar 25 '16 at 21:53
• Dumb question, but how can we assume that $S^2$ is ancillary if the $X_i$ are not normal? Or is $S^2$ always ancillary (w.r.t. mean parametrization I guess) but only independent of the sample mean when the sample mean is a complete sufficient statistic (i.e. normally distributed) by Basu's theorem? – Chill2Macht Nov 2 '17 at 17:42

The excellent answers by Alecos and JohnK already derive the result you are after, but I would like to note something else about the asymptotic distribution of the sample variance.
It is common to see asymptotic results presented using the normal distribution, and this is useful for stating the theorems. However, practically speaking, the purpose of an asymptotic distribution for a sample statistic is that it allows you to obtain an approximate distribution when $$n$$ is large. There are lots of choices you could make for your large-sample approximation, since many distributions have the same asymptotic form. In the case of the sample variance, it is my view that an excellent approximating distribution for large $$n$$ is given by: $$\frac{S_n^2}{\sigma^2} \sim \frac{\text{Chi-Sq}(\text{df} = DF_n)}{DF_n},$$ where $$DF_n \equiv 2 / \mathbb{V}(S_n^2 / \sigma^2) = 2n / ( \kappa - (n-3)/(n-1))$$ and $$\kappa = \mu_4 / \sigma^4$$ is the kurtosis parameter. This distribution is asymptotically equivalent to the normal approximation derived from the theorem (the chi-squared distribution converges to normal as the degrees-of-freedom tends to infinity). Despite this equivalence, this approximation has various other properties you would like your approximating distribution to have: • Unlike the normal approximation derived directly from the theorem, this distribution has the correct support for the statistic of interest. The sample variance is non-negative, and this distribution has non-negative support. • In the case where the underlying values are normally distributed, this approximation is actually the exact sampling distribution. (In this case we have $$\kappa = 3$$ which gives $$DF_n = n-1$$, which is the standard form used in most texts.) It therefore constitutes a result that is exact in an important special case, while still being a reasonable approximation in more general cases. Derivation of the above result: Approximate distributional results for the sample mean and variance are discussed at length in O'Neill (2014), and this paper provides derivations of many results, including the present approximating distribution. 
This derivation starts from the limiting result in the question: $$\sqrt{n} (S_n^2 - \sigma^2) \sim \text{N}(0, \sigma^4 (\kappa - 1)).$$ Re-arranging this result we obtain the approximation: $$\frac{S_n^2}{\sigma^2} \sim \text{N} \Big( 1, \frac{\kappa - 1}{n} \Big).$$ Since the chi-squared distribution is asymptotically normal, as $$DF \rightarrow \infty$$ we have: $$\frac{\text{Chi-Sq}(DF)}{DF} \rightarrow \frac{1}{DF} \text{N} ( DF, 2DF ) = \text{N} \Big( 1, \frac{2}{DF} \Big).$$ Taking $$DF_n \equiv 2 / \mathbb{V}(S_n^2 / \sigma^2)$$ (which yields the above formula) gives $$DF_n \rightarrow 2n / (\kappa - 1)$$, which ensures that the chi-squared distribution is asymptotically equivalent to the normal approximation from the limiting theorem. • One empirically interesting question is which of these two asymptotic results works better in finite-sample cases under various underlying data distributions. – lzstat Jun 19 '18 at 20:45 • Yes, I think that would be a very interesting (and publishable) simulation study. Since the present formula is based on kurtosis-correction of the variance of the sample variance, I would expect that the present result would work best when you have an underlying distribution with a kurtosis parameter that is far from mesokurtic (i.e., when the kurtosis-correction matters most). Since the kurtosis would need to be estimated from the sample, it is an open question as to when there would be a substantial improvement in overall performance. – Ben Jun 19 '18 at 23:51
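A quick numerical check of the $DF_n$ formula above, sketched in Python with NumPy (the uniform population, sample size, and seed are arbitrary illustration choices, not from the original answer):

```python
import numpy as np

def df_n(n, kappa):
    """Degrees of freedom DF_n = 2n / (kappa - (n-3)/(n-1))."""
    return 2 * n / (kappa - (n - 3) / (n - 1))

# For normal data (kappa = 3) the formula reduces to n - 1,
# recovering the exact chi-squared sampling distribution of S^2.
n = 50
print(df_n(n, kappa=3.0))  # -> 49.0

# Monte Carlo check with a uniform population: kappa = 9/5, sigma^2 = 1/12.
rng = np.random.default_rng(0)
sigma2 = 1.0 / 12.0
kappa = 9.0 / 5.0
s2 = rng.uniform(size=(50_000, n)).var(axis=1, ddof=1)  # 50,000 sample variances
# The approximation S^2/sigma^2 ~ ChiSq(DF_n)/DF_n implies V(S^2/sigma^2) = 2/DF_n.
print(np.var(s2 / sigma2), 2 / df_n(n, kappa))          # the two should be close
```

The Monte Carlo variance of $S_n^2/\sigma^2$ matches $2/DF_n$ because the variance formula used to define $DF_n$ is exact for i.i.d. data, not merely asymptotic.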
https://mathprelims.wordpress.com/2008/07/18/compact-closed-unit-ball-implies-finite-dimension/
Mathematics Prelims July 18, 2008 Compact Closed Unit Ball Implies Finite Dimension Filed under: Analysis, Functional Analysis — cjohnson @ 2:32 pm If $X$ is a normed space and the closed unit ball centered at zero is compact, then $X$ is finite dimensional. Proof: Suppose $X$ is an infinite dimensional normed space, let $x_1$ be any point in $X$ with $\|x_1\| = 1$, and let $Y_1$ be the one-dimensional subspace of $X$ generated by $x_1$. Recall that a finite dimensional subspace is always closed, and since $X$ is infinite dimensional, $Y_1$ is a proper subspace of $X$. By Riesz's Lemma, there exists an $x_2 \in X \setminus Y_1$ with $\|x_2\| = 1$ and $\|x_2 - y\| \geq \frac{1}{2}$ for all $y \in Y_1$. Let $Y_2$ be the two-dimensional subspace generated by $x_1, x_2$. There exists an $x_3 \in X \setminus Y_2$ such that $\|x_3\| = 1$ and $\|x_3 - y\| \geq \frac{1}{2}$ for all $y \in Y_2$. Since $X$ is infinite dimensional, we can keep applying this procedure, generating a sequence $(x_n)$ such that $\|x_n\| = 1$ but $\|x_m - x_n\| \geq \frac{1}{2}$ for all $m \neq n$. Since all points in the sequence are at least distance $1/2$ from one another, no subsequence can be Cauchy, and so no subsequence can be convergent; hence the closed unit ball cannot be compact. Therefore, if the closed unit ball is compact, then the space is finite dimensional.
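For reference, a standard statement of the lemma invoked above (the proof applies it with $\theta = \frac{1}{2}$):

```latex
% Riesz's Lemma (standard statement)
\textbf{Riesz's Lemma.} Let $Y$ be a closed, proper subspace of a normed
space $X$, and let $\theta \in (0,1)$. Then there exists $x_\theta \in X$
with $\|x_\theta\| = 1$ such that
\[
  \|x_\theta - y\| \ \geq\ \theta \qquad \text{for all } y \in Y.
\]
```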
http://mathhelpforum.com/trigonometry/198046-needing-help-angle-evaluation.html
# Thread: Needing help with angle of elevation

1. ## Needing help with angle of elevation (a) When the angle of elevation of the sun is 62 degrees, the shadow cast by a vertical pole is 10m long. Find the height of the pole correct to one decimal place. (b) If a=8, c=5 and A=72 degrees, find the size of the angle C? (c) A ladder is 7m long and reaches 5m up a vertical wall. Find the angle between the ladder and the horizontal. (d) Find the area of the triangle shown in the diagram: Thanks a million!

2. ## Re: Needing help with angle of elevation Originally Posted by Molly1313 (a) When the angle of elevation of the sun is 62 degrees, the shadow cast by a vertical pole is 10m long. Find the height of the pole correct to one decimal place. make a sketch ... use the tangent ratio (b) If a=8, c=5 and A=72 degrees, find the size of the angle C? make a sketch ... use the law of sines (c) A ladder is 7m long and reaches 5m up a vertical wall. Find the angle between the ladder and the horizontal. make a sketch ... use the sine ratio. (d) Find the area of the triangle shown in the diagram: if 8 is the base of the triangle, then the height of the triangle is 6sin(52) ...

3. ## Re: Needing help with angle of elevation Originally Posted by Molly1313 (a) When the angle of elevation of the sun is 62 degrees, the shadow cast by a vertical pole is 10m long. Find the height of the pole correct to one decimal place. Draw a sketch! You are dealing with a right triangle. Use the tan function: $\tan(62^\circ) = \frac{length\ of\ pole}{10\ m}$ (b) If a=8, c=5 and A=72 degrees, find the size of the angle C? Draw a sketch! Use the Sine rule. (c) A ladder is 7m long and reaches 5m up a vertical wall. Find the angle between the ladder and the horizontal. Draw a sketch! You are dealing with a right triangle whose hypotenuse is 7 m long. Use the Sine function. (d) Find the area of the triangle shown in the diagram: Thanks a million!
The area of a triangle is calculated by: $a = \frac12 \cdot 8\ cm \cdot h$, where $h$ is the height on that base. Draw the height of the base into your sketch. You are now dealing with a right triangle. Use the Sine function: $\sin(62^\circ)=\frac h6$ Solve for h. Plug this term into the equation for the area.
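For checking the answers to parts (a)–(c), here is a quick numerical sketch in Python (part (d) depends on the diagram, so it is left out):

```python
import math

# (a) Pole height: tan(62 degrees) = height / (10 m shadow).
height = 10 * math.tan(math.radians(62))
print(round(height, 1))   # about 18.8 m

# (b) Law of sines: sin(C)/c = sin(A)/a with a = 8, c = 5, A = 72 degrees.
C = math.degrees(math.asin(5 * math.sin(math.radians(72)) / 8))
print(round(C, 1))        # about 36.5 degrees

# (c) Ladder: sin(angle) = opposite/hypotenuse = 5/7.
angle = math.degrees(math.asin(5 / 7))
print(round(angle, 1))    # about 45.6 degrees
```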
http://cruncheconometrix.blogspot.com/2018/02/time-series-analysis-lecture-3-how-to.html
# How to Perform Unit Root Test in EViews

## What is Stationarity in Time Series Analysis?

In econometrics, time series data are frequently used and they often pose distinct problems for econometricians. As will be discussed with examples, most empirical work based on time series data assumes that the underlying series is stationary. Stationarity of a series (that is, a variable) implies that its mean, variance and covariance are constant over time. That is, these do not vary systematically over time; in other words, they are time invariant. However, if that is not the case, then the series is nonstationary.

We will discuss some possible scenarios where two series, Y and X, are nonstationary and the error term, u, is also nonstationary. In that case, the error term will exhibit autocorrelation. Another likely scenario is where Y and X are nonstationary but u is stationary. The implications of this will also be explored.

In time series analysis, the words nonstationary, unit root and random walk are used synonymously. In essence, if a series is considered to be nonstationary, it implies that the series exhibits a unit root and exemplifies a random walk. Likewise, regressing two nonstationary series on each other yields a spurious (or nonsense) regression: a regression whose outcome cannot be used for inference or forecasting. In short, such results should not be taken seriously and must be discarded.

A stationary series will tend to return to its mean (called mean reversion) and fluctuations around this mean (measured by its variance) will have a broadly constant breadth. But if a time series is not stationary in the sense just explained, it is called a nonstationary time series; such a series will have a time-varying mean or a time-varying variance or both.

In summary, a stationary time series is important because if a series is nonstationary, its behaviour can be studied only for the time period under consideration.
That is, each set of time series data will therefore be for a particular episode. As a result, it is not possible to generalise its relevance to other time periods. Therefore, for the purpose of forecasting, such (nonstationary) time series may be of little practical value.

## How to detect unit root in a series?

In a bivariate (two-variable) model or one involving multiple variables (called a multiple regression model), it is assumed that all the variables are stationary at level (that is, the order of integration of each variable is zero, I(0)). It is important to state at this point that the order of integration of a series in a regression model is determined by the outcome of a unit root test (or stationarity test). If the series is stationary at level after performing a unit root test, then it is I(0); otherwise it is I(d), where d represents the number of times the series is differenced before it becomes stationary.

But what if the assumption of stationarity at level of the series in a bivariate or multiple regression model is relaxed and we consequently allow for a unit root in each of the variables in the model; how can this be handled? In general, this requires a different treatment from a conventional regression with stationary I(0) variables. In particular, we focus on a class of linear combinations of unit root processes known as cointegrated processes.

The generic representation for the order of integration of a series is I(d), where d is the number of times the series must be differenced to render it stationary. Hence, a series that is stationary at level (d = 0) is a series with an I(0) process. Although 'd' can assume any value greater than zero for a nonstationary series, in applied research only the I(1) unit root process is usually allowed; series with a higher order of integration (d > 1) should be excluded from the model, as no meaningful policy implications or relevance can be drawn from such series.
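To make the I(0)/I(1) distinction concrete, here is a small simulation sketch in Python (my own illustration, not part of the original tutorial): a stationary AR(1) series is I(0), a random walk is I(1), and differencing the random walk once recovers a stationary white noise series:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000
e = rng.standard_normal(n)          # white noise innovations

# Stationary AR(1): y_t = 0.5 * y_{t-1} + e_t  -> I(0), mean-reverting
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + e[t]

# Random walk: x_t = x_{t-1} + e_t  -> I(1), no mean reversion
x = np.cumsum(e)

# First difference of the random walk is just the white noise again -> I(0),
# so the random walk is integrated of order d = 1.
dx = np.diff(x)
print(np.var(y), np.var(dx))        # both settle near their theoretical values
```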
Here is an example of a bivariate linear regression model:

Yt = α₀ + bXt + ut        [1]

Assume Yt and Xt are two random walk models that are I(1) processes and are independently distributed as:

Yt = ρYt−1 + vt,    −1 ≤ ρ ≤ 1        [2]

Xt = ηXt−1 + et,    −1 ≤ η ≤ 1        [3]

and vt and et have zero mean, a constant variance and are orthogonal (these are white noise error terms). We also assume that vt and et are serially uncorrelated as well as mutually uncorrelated. As stated in [2] and [3], both these time series are nonstationary; that is, they are I(1) or exhibit stochastic trends.

Suppose we regress Yt on Xt. Since Yt and Xt are uncorrelated I(1) processes, the R² from the regression of Y on X should tend to zero; that is, there should not be any relationship between the two variables. Equations [2] and [3] resemble the Markov first-order autoregressive model. If ρ = η = 1, the equations become random walk models without drift, and a unit root problem surfaces, that is, a situation of nonstationarity, because we already know that in this case the variance of Yt is not stationary. The name unit root is due to the fact that ρ = 1. Again, the terms nonstationary, random walk, and unit root can be treated as synonymous. If, however, |ρ| < 1 and |η| < 1, that is, if their absolute values are less than one, then it can be shown that both series Yt and Xt are stationary.

In practice, then, it is important to find out if a time series possesses a unit root. Given equations [2] and [3], there should be no systematic relationship between Yt and Xt as they both drift away from equilibrium (i.e. they do not converge), and therefore we should expect an ordinary least squares (OLS) estimate of b to be close to zero, or insignificantly different from zero, at least as the sample size increases.
But this is not usually the case. The fitted coefficients may be statistically significant even when there is no true relationship between the dependent variable and the regressors. This is regarded as a spurious regression or correlation where, in the case of our example, b takes some value randomly, and its t-statistic indicates significance of the estimate.

But how can a unit root be detected? There are some clues that tell you if a series is nonstationary and if a bivariate or multivariate regression is spurious. Some of these are:

1. Do a graphical plot of the series to visualise its nature. Is it trending upwards or downwards? Does it exhibit mean reversion or not? Or are there fluctuations around its mean?

2. Carry out a regression analysis on two series and observe the R². If it is above 0.9, it may suggest that the variables are nonstationary.

3. The rule of thumb: if the R² obtained from the regression is higher than the Durbin-Watson (DW) statistic, suspect a spurious regression. A low DW statistic evidences positive first-order autocorrelation of the error terms.

Using Gujarati and Porter Table 21.1 quarterly data of 1970q1 to 1991q4, examples of nonstationary series and spurious regression can be seen from the pce, pdi and gdp relationship. Since the series are measured in billions of US dollars, the natural logarithms of the variables will be used in analysing their essential features.

Nonstationary series: the graphical plot of the three variables shows an upward trend and none of the variables reverts to its mean. That is, all three variables do not exhibit mean reversion. That clearly tells us that the series are nonstationary.

EViews - Example of nonstationary series Source: CrunchEconometrix

## What is a spurious regression?

Sometimes we expect to find no relationship between two variables, yet a regression of one on the other often shows a significant relationship.
This situation exemplifies the problem of spurious, or nonsense, regression. The regression of lnpce on lnpdi shows how spurious regressions can arise if time series are not stationary. As expected, because both variables are nonstationary, the result evidences that a spurious regression has been undertaken. But how do we know this? Take a look at the R²: the value of 0.9944 is higher than the Durbin-Watson statistic of 0.57. So, whenever R² > DW, a spurious regression has likely occurred because the variables are nonstationary.

EViews - Example of a spurious regression Source: CrunchEconometrix

As you can see, the coefficient of lnpdi is highly statistically significant, and the R² value is statistically significantly different from zero. From these results, you may be tempted to conclude that there is a significant statistical relationship between both variables, whereas a priori there may be none. This is simply the phenomenon of spurious or nonsense regression, first discovered by Yule (1926). He showed that (spurious) correlation could persist in nonstationary time series even if the sample is very large. That there is something wrong in the preceding regression is suggested by the extremely low Durbin-Watson value, which suggests very strong first-order autocorrelation. According to Granger and Newbold, R² > DW is a good rule of thumb to suspect that the estimated regression is spurious, as in the given example.

## Why is it important to test for stationarity?

We usually test a series for stationarity for the following reasons:

1. To evaluate the behaviour of the series over time. Is the series trending upward or downward? This can be verified by performing a stationarity test. In other words, the test can be used to evaluate the stability or predictability of a time series. If a series is nonstationary, it is unstable or unpredictable and therefore may not be valid for inference, prediction or forecasting.

2.
To know how a series responds to shocks requires carrying out a stationarity test. If a series is nonstationary, the impact of shocks to the series is more likely to be permanent. Conversely, if a series is stationary, the impact of shocks will be temporary or brief.

## How to correct for nonstationarity?

What can be done with nonstationarity in a time series, knowing that performing OLS on such a model yields a spurious regression?

## The Unit Root Test

We begin with equations [2] and [3], which are unit root (stochastic) processes with white noise error terms. If the parameters of the models are equal to 1, that is, in the case of a unit root, both equations become random walk models without drift, which we know is a nonstationary stochastic process. So, what can be done to correct this? For instance, for equation [2], simply regress Yt on its (one-period) lagged value Yt−1 and find out if the estimated ρ is statistically equal to 1. If it is, then Yt is nonstationary. Repeat the same for the Xt series. This is the general idea behind the unit root test of stationarity.

For theoretical reasons, equation [2] is manipulated as follows. Subtract Yt−1 from both sides of [2] to obtain:

Yt − Yt−1 = ρYt−1 − Yt−1 + vt        [4]
          = (ρ − 1)Yt−1 + vt

and this can be stated alternatively as:

ΔYt = δYt−1 + vt        [5]

where δ = (ρ − 1) and Δ, as usual, is the first-difference operator. In practice, therefore, instead of estimating [2], we estimate [5] and test the null hypothesis that δ = 0. If δ = 0, then ρ = 1; that is, we have a unit root, meaning the time series under consideration is nonstationary. Before we proceed to estimate [5], it may be noted that if δ = 0, [5] will become:

ΔYt = Yt − Yt−1 = vt        [6]

(Remember to do the same for the Xt series.) Since vt is a white noise error term, it is stationary, which means that the first difference of a random walk time series is stationary.
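The test regression in [5] can be run directly with ordinary least squares. The sketch below (my own illustration in NumPy, not EViews output) simulates a stationary AR(1) with ρ = 0.5, so that the true δ = ρ − 1 = −0.5, and estimates δ together with its t-type (tau) statistic; the decision itself would still compare tau against the DF/MacKinnon critical values rather than the usual t tables:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Simulate Y_t = 0.5 * Y_{t-1} + v_t, so the true delta = rho - 1 = -0.5.
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + rng.standard_normal()

# DF test regression (no constant, no trend): dY_t = delta * Y_{t-1} + v_t
dy = np.diff(y)
y_lag = y[:-1]
delta_hat = (y_lag @ dy) / (y_lag @ y_lag)        # OLS slope, no intercept

# tau statistic for H0: delta = 0 (compare with DF critical values, not t tables)
resid = dy - delta_hat * y_lag
se = np.sqrt(resid @ resid / (len(dy) - 1) / (y_lag @ y_lag))
tau = delta_hat / se
print(delta_hat, tau)   # delta_hat near -0.5; tau strongly negative
```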
EViews - Example of stationary series (in first difference) Source: CrunchEconometrix

Visual observation of the differenced series shows that the three variables are stationary around the mean. They all exhibit constant mean reversion; that is, they fluctuate around 0. If we were to draw a trend line, such a line would be horizontal at about 0.01.

Okay, having said all that, let us return to estimating equation [5]. This is quite simple: all that is required is to take the first differences of Yt, regress them on Yt−1, and see whether the estimated slope coefficient in this regression is statistically different from zero or not. If it is zero, we conclude that Yt is nonstationary. But if it is negative, we conclude that Yt is stationary. Note: since δ = (ρ − 1), for stationarity ρ must be less than one. For this to happen, δ must be negative!

The only question is which test we use to find out if the estimated coefficient of Yt−1 in [5] is zero or not. You might be tempted to ask, why not use the usual t test? Unfortunately, under the null hypothesis that δ = 0 (i.e., ρ = 1), the t value of the estimated coefficient of Yt−1 does not follow the t distribution even in large samples; that is, it does not have an asymptotic normal distribution.

What is the alternative? Dickey and Fuller (DF) have shown that under the null hypothesis that δ = 0, the estimated t value of the coefficient of Yt−1 in [5] follows the τ (tau) statistic. These authors have computed the critical values of the tau statistic on the basis of Monte Carlo simulations. Note: interestingly, if the hypothesis that δ = 0 is rejected (i.e., the time series is stationary), we can use the usual (Student's) t test.

The unit root test can be computed under three (3) different model specifications, according to whether the series is modelled as a:

1. random walk (that is, the model has no constant and no trend)
2. random walk with drift (that is, the model has a constant)
3.
random walk with drift and a trend (that is, the model has a constant and a trend)

In all cases, the null hypothesis is that δ = 0, that is, there is a unit root; the alternative hypothesis is that δ is less than zero, that is, the time series is stationary. If the null hypothesis is rejected, it means that Yt is a stationary time series with zero mean in the case of [5], that Yt is stationary with a nonzero mean in the case of the random walk with drift model, and that Yt is stationary around a deterministic trend in the case of the random walk with drift around a trend.

It is extremely important to note that the critical values of the tau test for the hypothesis that δ = 0 are different for each of the preceding three specifications of the DF test, which are now computed by all econometric packages. In each case, if the computed absolute value of the tau statistic (|τ|) exceeds the DF or MacKinnon critical tau value, the null hypothesis of a unit root is rejected; in other words, the time series is stationary. On the other hand, if the computed |τ| does not exceed the critical tau value, we fail to reject the null hypothesis, in which case the time series is nonstationary.

Note: students often get confused in interpreting the outcome of a unit root test. For instance, if the calculated tau statistic is -2.0872 and the MacKinnon critical value is -3.672, you cannot reject the null hypothesis; hence, the conclusion is that the series is nonstationary. But if the calculated tau statistic is -5.278 and the MacKinnon critical value is -3.482, you reject the null hypothesis in favour of the alternative; hence, the conclusion is that the series is stationary. *Always use the appropriate critical τ values for the indicated model specification.

## How to Perform Unit Root Test in EViews (See here for Stata)

The example dataset is from Gujarati and Porter T21.1. Several tests have been developed in the literature to test for unit root.
Prominent among these tests are the Augmented Dickey-Fuller, Phillips-Perron and Dickey-Fuller Generalised Least Squares (DFGLS) tests, among others. But this tutorial limits testing to the use of the ADF and PP tests. Once the reader has good basic knowledge of these two techniques, they can progress to conducting other stationarity tests on their time series variables.

## How to Perform the Augmented Dickey-Fuller (ADF) Test

An important assumption of the DF test is that the error terms are independently and identically distributed. The ADF test adjusts the DF test to take care of possible serial correlation in the error terms by adding the lagged difference terms of the outcome (dependent) variable. For the Yt series, in conducting the DF test, it is assumed that the error term vt is uncorrelated. For cases where it is correlated, Dickey and Fuller developed the augmented Dickey-Fuller (ADF) test, which is conducted by "augmenting" the three model specifications stated above with lagged difference terms of the dependent variable. As mentioned earlier, approaches will be limited to using the ADF and PP tests. Either of these tests can be used, and when both are used, the reader can compare the outcomes to see if there are similarities or differences in the results.

We are considering only lnpce and lnpdi in natural logarithms. Unit root test for lnpce:

· Double click the lnpce series to open it.

· Go to View >> Unit root test >> dialog box opens >> Under Test Type, select Augmented Dickey-Fuller Test.

· Decide whether to test for a unit root in the level, 1st difference, or 2nd difference of the series. Ideally, always start with the level, and if we fail to reject the null of a unit root in levels, then continue with testing the first difference. Hence, we first click on 'Level' in the dialog box to see what happens in the levels of the series and then continue, if appropriate, with the first and second differences.
· Also, the choice of model is very important, since the distribution of the test statistic under the null hypothesis differs across these three cases. Therefore, specify whether to include an intercept, a trend and intercept, or none in the regression. It is more appropriate to consider all three possible test regressions when testing for stationarity. Thus, our demonstration will involve these three options: "none", "intercept", and "trend and intercept".

· We also have to specify the number of lagged difference terms to be included in the model in order to correct for the presence of serial correlation. The number of lags to be included in the model can be determined either automatically or manually. I prefer to allow the AIC to decide the lag length automatically. Because I have quarterly data, the AIC automatically chose 11 lags, which I modified to 8. EViews then reports the test statistic together with the estimated test regression.

The null hypothesis of a unit root is rejected against the one-sided alternative if the computed absolute value of the tau statistic exceeds the DF or MacKinnon critical tau value, and we conclude that the series is stationary; otherwise (that is, if it is lower), the series is nonstationary. Another way of stating this is that to reject the null hypothesis of a unit root, the computed τ value should be more negative than the critical τ value. Since in general δ is expected to be negative, the estimated τ statistic will have a negative sign; therefore, a large negative τ value is generally an indication of stationarity. Alternatively, using the probability value, we reject the null hypothesis of a unit root if the computed probability value is less than the chosen level of statistical significance.
· Having specified the "none" option, where both the intercept and trend are excluded from the test regression, the unit root test dialog box is shown thus:

EViews - Unit root test dialog box Source: CrunchEconometrix

· The ADF unit root test result for the selected option, "none", appears as follows:

EViews - Augmented Dickey-Fuller test ("none") option Source: CrunchEconometrix

Following similar procedures, selecting "intercept" for the ADF unit root test yields:

EViews - Augmented Dickey-Fuller test ("intercept") option Source: CrunchEconometrix

The result for the "trend and intercept" option of the ADF unit root test is shown below:

EViews - Augmented Dickey-Fuller test ("trend and intercept") option Source: CrunchEconometrix

· Do the same for the lnpdi series.

The three ADF specifications all confirm that lnpce is nonstationary with a trend, which also confirms the graphical plot. The next thing to do is to run the specifications with the "1st difference" option and, if the series is still nonstationary, the "2nd difference" option.

· 1st difference with the "none" option:

EViews - Augmented Dickey-Fuller test 1st difference ("none") option Source: CrunchEconometrix

· 1st difference with the "intercept" option:

EViews - Augmented Dickey-Fuller test 1st difference ("intercept") option Source: CrunchEconometrix

· 1st difference with the "trend and intercept" option:

EViews - Augmented Dickey-Fuller test 1st difference ("trend and intercept") option Source: CrunchEconometrix

The three ADF specifications all confirm that lnpce is stationary at 1st difference, but at varying significance levels. Given that I am willing to reject the null hypothesis at the 5% level, the conclusion is that lnpce is stationary at 1st difference with a constant, because it is only at that specification that the null hypothesis of a unit root is rejected. Hence, carrying out a "2nd difference" test is unnecessary.
· Again, do the same for the lnpdi series.

## How to Perform the Phillips-Perron (PP) Test

Phillips and Perron use nonparametric statistical methods to take care of the serial correlation in the error terms without adding lagged difference terms. The procedure for testing for a unit root using the PP test is similar to that of the ADF test discussed earlier, except for the Test Type option. Note: the asymptotic distribution of the PP test statistic is the same as that of the ADF test statistic.

## After unit root testing, what next?

The outcome of unit root testing matters for the empirical model to be estimated. The following scenarios explain the implications of unit root testing for further analysis.

Scenario 1: The series under scrutiny are stationary in levels.

If pce and pdi are stationary in levels, that is, they are I(0) series (integrated of order zero), then performing a cointegration test is not necessary. This is because any shock to the system in the short run quickly adjusts to the long run. Consequently, only the long run model should be estimated. That is, the model should be specified as:

pcet = α₀ + b·pdit + ut

In essence, the estimation of a short run model is not necessary if the series are I(0).

Scenario 2: The series are stationary in first differences.

· Under this scenario, the series are assumed to be nonstationary.

· One special feature of these series is that they are of the same order of integration.

· Under this scenario, the model in question is not entirely useless, although the variables are unpredictable. To verify further the relevance of the model, there is a need to test for cointegration. That is, can we assume a long run relationship in the model despite the fact that the series are drifting apart or trending either upward or downward?

· If there is cointegration, it means the series in question are related and therefore can be combined in a linear fashion.
This implies that, even if there are shocks in the short run, which may affect movement in the individual series, they would converge with time (in the long run).

·   However, there is no long run relationship if the series are not cointegrated. This implies that, if there are shocks to the system, the model is not likely to converge in the long run.
·   Note that both long run and short run models must be estimated when there is cointegration.
·   The estimation will require the use of vector autoregressive (VAR) and vector error correction (VECM) models.
·   If there is no cointegration, there is no long run relationship and therefore only the short run model will be estimated. That is, run only the VAR, not the VECM analysis!
·   There are, however, two prominent cointegration tests for I(1) series in the literature: the Engle-Granger cointegration test and the Johansen cointegration test.
·   The Engle-Granger test is meant for single-equation models, while the Johansen test is considered when dealing with multiple equations.

Scenario 3: The series are integrated of different orders

·   If lnpce and lnpdi are integrated of different orders then, as in the second scenario, a cointegration test is also required, but neither the Engle-Granger nor the Johansen cointegration test is valid.
·   The appropriate cointegration test to apply is the Bounds test for cointegration, and the estimation technique is the autoregressive distributed lag (ARDL) model.
·   Similar to case 2, if the series are not cointegrated based on the Bounds test, we are expected to estimate only the short run model. That is, run only the ARDL model.
·   However, both the long run and short run models are valid if there is cointegration. That is, run both the ARDL and ECM models.

In addition, there are formal tests that can be carried out to see whether, despite the behaviour of the series, there can still be a linear combination, that is, a long run relationship or equilibrium, among the series. The existence of such a linear combination is what is known as cointegration.
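As a rough illustration of the Engle-Granger idea for I(1) series, the sketch below builds two simulated series that share a common stochastic trend and runs the first step, the static OLS regression; step two would test the residuals of that regression for a unit root (using Engle-Granger critical values, not the ordinary Dickey-Fuller ones). This is only a sketch of the logic, not a substitute for running the test in EViews:

```python
import random

def ols(y, x):
    """OLS of y on a constant and x; returns (intercept, slope, residuals)."""
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    resid = [yi - a - b * xi for xi, yi in zip(x, y)]
    return a, b, resid

random.seed(1)
trend = [0.0]                    # one common stochastic trend (a random walk)
for _ in range(400):
    trend.append(trend[-1] + random.gauss(0, 1))
x = [t + random.gauss(0, 0.5) for t in trend]              # I(1)
y = [2.0 + 1.5 * t + random.gauss(0, 0.5) for t in trend]  # I(1), cointegrated with x

a, b, resid = ols(y, x)
print(round(b, 2))  # close to the true cointegrating slope of 1.5
# Step two (not shown): test `resid` for a unit root; stationary residuals
# indicate cointegration between y and x.
```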
Thus, a regression with I(1) series can either be spurious or cointegrated. The basic cointegration tests are the Johansen, Engle-Granger and Bounds cointegration tests. These will be discussed in detail in subsequent tutorials.

[Watch video clip on performing ADF stationarity test in EViews]

In conclusion, I have discussed what is meant by a nonstationary series, how a series with a unit root can be detected, and how such series can be made useful for empirical research. You are encouraged to use your own data or the sample datasets uploaded to this blog to practise, in order to get more hands-on knowledge.
http://deborahrfowler.com/MathForVSFX/DotProductFollow.html
## Math for VSFX

Updated on Nov 25 2017 • Examples

# Deborah R. Fowler

## Dot Product Follow

Posted March 12 2016

"Dot Product is the product of the magnitudes of the two vectors and the cosine of the angle between them. The name "dot product" is derived from the centered dot "$\cdot$" that is often used to designate this operation." See wiki entry.

See the hip file for a demonstration. Here is an excellent example using the dot product, the Pythagorean theorem and a little bit of vector math tossed in to solve an animation problem. Please see the corresponding hip file. There are two similar problems in this section, two point constraint and train wheels.
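The identity quoted above is easy to verify numerically. The sketch below uses plain Python rather than Houdini/VEX so it stands alone; in a follow setup, the sign of the dot product between a character's forward vector and the vector to a target tells you whether the target is in front (positive) or behind (negative):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

a = (3.0, 4.0)
b = (4.0, 3.0)

# a . b = |a| |b| cos(theta), so the angle can be recovered from the dot product
cos_theta = dot(a, b) / (norm(a) * norm(b))
theta_deg = math.degrees(math.acos(cos_theta))

print(dot(a, b))    # 24.0
print(cos_theta)    # 0.96, since |a| = |b| = 5
print(round(theta_deg, 2))
```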
https://www.physicsforums.com/threads/toroid-shape-electro-mag-question.102235/
# Toroid shape Electro-Mag question

1. Dec 1, 2005

### square_imp

I have a difficult problem to solve: There is a doughnut shape (toroidal coil) with N turns of wire wrapped around it. Current I flows in the wire. The cross section of the 'doughnut' is square with height h. I am meant to use Ampere's law to prove that the magnetic field at a radius r from the centre (half way through the coil) is given by: B = μNI/(2πr) (μ = permeability). So far I have that Ampere's law says that the integral around a closed path of the magnetic field is equal to the permeability times the current enclosed. How does this translate into the above formula? I am probably just missing something really obvious.

2. Dec 1, 2005

### Tide

All you have to do is create a closed path within the toroid that loops around the axis at a fixed distance. You can see that the current passing through the loop you just created is NI.
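Following Tide's hint: the circular Amperian loop of radius r inside the toroid encloses the current N·I, so Ampere's law gives B·2πr = μ₀NI, i.e. B = μ₀NI/(2πr), with μ₀ the permeability of free space for an air-cored toroid. A quick numerical sketch with made-up values:

```python
import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space, T*m/A

def toroid_field(N, I, r):
    """B inside an air-cored toroid at radius r from the axis: B = mu0*N*I/(2*pi*r)."""
    return MU0 * N * I / (2 * math.pi * r)

# Illustrative numbers: 1000 turns, 2 A, field evaluated at r = 10 cm
B = toroid_field(1000, 2.0, 0.10)
print(B)  # 4e-3 T, i.e. 4 mT (the factors of pi cancel)
```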
http://stackoverflow.com/questions/2952980/quick-way-to-make-26-macros-one-for-each-letter/2957035
# Quick way to make 26 macros (one for each letter) Instead of making a macro for each letter, as in \def\bA{\mathbf{A}} ... \def\bZ{\mathbf{Z}} Is there a way to loop over a character class (like capital letters) and generate macros for each? I'd also like to do the same for Greek letters (using bm instead of mathbf). - By the way, I'm looking for a (La)Tex answer. Currently I have a bash script, but I want a (La)TeX only solution. –  Geoff Jun 1 '10 at 20:48 I don't know Latex, so I'll just add this as a comment - is there a scripting construct inside of whatever you're using to generate the macros? and if so, does it have the capability to call something to the effect of echo -e -n '\x41'. By incrementing the number '41' you'd go through all the ascii letters of the alphabet (starting with capital 'A'). –  aronchick Jun 1 '10 at 21:09 @aronchick - That's the idea. Andrews's answer below does this (but in TeX we get to refer to alphabetic characters directly, avoiding ascii). –  Geoff Jun 2 '10 at 14:10 \def\mydefb#1{\expandafter\def\csname b#1\endcsname{\mathbf{#1}}} \def\mydefallb#1{\ifx#1\mydefallb\else\mydefb#1\expandafter\mydefallb\fi} \mydefallb ABCDEFGHIJKLMNOPQRSTUVWXYZ\mydefallb New for Greek \def\mydefgreek#1{\expandafter\def\csname b#1\endcsname{\text{\boldmath$\mathbf{\csname #1\endcsname}$}}} \def\mydefallgreek#1{\ifx\mydefallgreek#1\else\mydefgreek{#1}% \lowercase{\mydefgreek{#1}}\expandafter\mydefallgreek\fi} \mydefallgreek {beta}{Gamma}{Delta}{epsilon}{etaex}{Theta}{Iota}{Lambda}{kappa}{mu}{nu}{Xi}{Pi}{rho}\mydefallgreek $\bGamma\bDelta \bTheta \bLambda \bXi \bPi$ $\bbeta \bgamma\bdelta \bepsilon \betaex \btheta \biota \blambda \bkappa \bmu \bnu \bxi \bpi \brho$ - This is great. So line 2 is essentially a loop because it inserts itself before the next character, right? Line one could be inserted into line 2, but is separate only for readability. A two line solution! The only down side is having to type the alphabet. 
–  Geoff Jun 2 '10 at 13:56

- How about for a list of Greek letters? ({Gamma}{mu} or \Gamma\mu or some other reasonable list.) –  Geoff Jun 2 '10 at 15:09
- I do not understand how you want to write Greek letters. You write \bA, \bB, etc. for A, B, .... What about Greek? \bGamma? \bmmu? I do not understand. –  Alexey Malistov Jun 2 '10 at 15:25
- Yes, \bGamma or \bmu or \bLambda or \blambda. Currently I just have a list of these commands made manually. –  Geoff Jun 2 '10 at 15:29
- I added new info. –  Alexey Malistov Jun 3 '10 at 8:33

Wouldn't it be better to define one command

\newcommand\bm[1]{\ensuremath{${\boldmath$#1$}}$}

which can be used both in text mode and math mode. Usage:

\[\bm{F(x)}=\int\bm\delta(x)\ dx\]

where \bm F is blah blah blah and \bm\delta is halb halb halb...

Result: F(x) = 'integral delta(x)' dx, where F is blah blah blah and 'delta' is halb halb halb...

Outer dollars are there to leave math (roman) mode because the \boldmath command has no effect in math mode. Inner ones switch back to math (bold). Additional braces (${\boldmath) ensure that the \boldmath command will work only on #1. Another advantage of this code is that \newcommand tests whether the command already exists, so you can't clash with existing LaTeX macros easily. I hope this is what you're looking for.

- If you want to have a macro specially designed for Greek letters and another for Latin letters, you can add \newcommand\bl[1]{\ensuremath{\mathbf{#1}}} into the preamble and use it.
- It will work only for Latin letters. –  Crowley Jun 2 '10 at 11:11
- +1 I guess my question is more academic than practical. Your suggestion is totally valid. I guess writing \b X instead of \bX isn't a very big distinction. Still, my original goal is to avoid the space. Your point regarding trashing existing macros is especially important. It would make very confusing code to redefine \bf, for example. –  Geoff Jun 2 '10 at 14:07

I would recommend doing:

\newcommand{\b}[1]{\mathbf{#1}}
However, if you really want to do it using LaTeX code, here's one that seems to work:

\documentclass{article}
\usepackage{amssymb}
\newcounter{char}
\setcounter{char}{1}
\loop\ifnum\value{char}<27
\edef\c{\Alph{char}}
\expandafter\expandafter\expandafter\expandafter\expandafter\expandafter\expandafter\def\expandafter\expandafter\expandafter\csname\expandafter\expandafter\expandafter b\expandafter\c\expandafter\endcsname\expandafter{\expandafter\mathbb\expandafter{\c}}
\stepcounter{char}
\repeat
\begin{document}
$$\bZ$$
\end{document}

I lost count of how many 'expandafter's there are in that! To get lowercase letters, replace the Alph by alph.

- Would it affect greek letters too? And why use \bH ello \bW orld, when you can use \b Hello \b World? –  Crowley Jun 2 '10 at 11:36
- +1 Cool solution. I never knew about \expandafter or csname. I think using the latter would allow you to trim your macro quite a bit. –  Geoff Jun 2 '10 at 14:08
- @Crowley: I missed the bit about greek letters. One could define a version of \alph and \Alph for greek letters. Overkill for one application, but could be useful elsewhere. With regard to your second point: I agree completely and I said so! In my documents, I use (a variant of) your solution. Nonetheless, the question of being able to loop over the alphabet was sufficiently intriguing that I thought I'd have a go. I tried to strike a tone that showed that I wouldn't actually do this in practice. After all, 18 expandafters could be considered just a tad excessive! –  Andrew Stacey Jun 3 '13 at 13:44
- @Andrew: no doubt you know this by now, but your bunch of \expandafter can be reduced a lot. \expandafter\edef\csname b\Alph{char}\endcsname{\noexpand\mathbb{\Alph{char}}} should do.
–  Bruno Le Floch Oct 3 '11 at 13:21 Expanding on Andrew's answer, here is a solution without \expandafter: \makeatletter \@tempcnta=\@ne \def\@nameedef#1{\expandafter\edef\csname #1\endcsname} \loop\ifnum\@tempcnta<27 \@nameedef{b\@Alph\@tempcnta}{\noexpand\mathbb{\@Alph\@tempcnta}}
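Since the question mentions starting from a bash script, here is what that generate-then-\input route might look like in Python (a sketch: the \bA…\bZ and \bGamma-style names follow the question, and \boldsymbol assumes the bm or amsmath package is loaded, so adjust to taste):

```python
import string

# One \def line per uppercase Latin letter, e.g. \def\bA{\mathbf{A}}
latin = [r"\def\b%s{\mathbf{%s}}" % (c, c) for c in string.ascii_uppercase]

# Same idea for (a subset of) Greek letter names; extend the list as needed
greek = ["Gamma", "Delta", "Theta", "Lambda", "Xi", "Pi"]
greek_defs = [r"\def\b%s{\boldsymbol{\%s}}" % (g, g) for g in greek]

# Redirect this output to a .tex file and \input it from the preamble
for line in latin + greek_defs:
    print(line)
```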
http://math.stackexchange.com/questions/292828/random-matching-in-a-group
# Random Matching in a Group

Recently I was asked such a question: N people joined a party. In this party, there is a present exchange game where each one prepares a present; these presents will be randomly shuffled and re-distributed. If two people receive the presents prepared by each other, they become a pair of "lovers" (regardless of sex, and one can be his own "lover"). What is the probability of generating at least one pair of lovers in this party? E.g. if there are two people in this party, the probability would be 1; and for three people, it would be 1-2/6 = 2/3.

- I don't understand how you're dealing with the case of being one's own lover. You ask for the probability of generating at least one pair of lovers, and then you say this is $1$ for a party of two people. Does that mean that you count two people as a pair of lovers either if they receive each other's presents or if they both receive their own presents (and thus become their own lovers)? That would contradict how you introduced the term "a pair of lovers" further up. – joriki Feb 2 '13 at 16:31
- Consider the $n$ guests as numbering the presents; then a "pair of lovers" is a cycle of length 2 in the permutation of the $n$ presents. It's probably easiest to count the permutations without 2-cycles (like derangements are permutations without 1-cycles), but there I'm stuck. – vonbrand Feb 2 '13 at 16:48
- Using the word pair is a bad idea; I think you should say "set of lovers" to make it clear that the set can have size 1 or 2. – Byron Schmuland Feb 2 '13 at 17:41
- Yes, I was trying to say that the 1-length cycle also counts; thank you, Byron, for clearing this up for me. – user1206899 Feb 3 '13 at 2:43

It looks like you want the probability that there is a cycle of length 1 or 2 in a random permutation on $\{1,\dots,N\}$. Considering the complementary event, we'd need to count the number of permutations without such small cycles.
These values can be found here (including the party of size $N=1$), which give us the following probabilities for $1\leq N\leq 10$: $$1, 1, {2\over 3}, {3\over 4}, {4\over 5}, {7\over 9}, {65\over 84}, {373\over 480}, {1259\over 1620}, {2447\over 3150}$$ or $$1., 1., .66667, .75000, .80000, .77778, .77381, .77708, .77716, .77683$$ These converge pretty rapidly to $1-\exp(-1.5)=.77686984$, but at the moment I'm not sure why.

Added: The number $C_1$ of one-cycles is approximately Poisson(1), while the number $C_2$ of two-cycles is approximately Poisson(1/2). Also, they are asymptotically independent, so $C_1+C_2$ is approximately Poisson(3/2), and hence $\mathbb{P}(C_1+C_2=0)\approx \exp(-3/2)$.

Reference: Example 10.5.2 (Short cycles in random permutations) from the book "Poisson Approximation" by A.D. Barbour, Lars Holst, and Svante Janson. The authors show that the number of cycles of length less than or equal to $f$ is approximately a Poisson random variable with mean $\sum_{r=1}^f 1/r$, and also give bounds on the error.

- Not really the same: what is needed is the number of permutations without 2-cycles to subtract from the total. – vonbrand Feb 2 '13 at 17:35
- I interpret the OP's comment about loving yourself to include all permutations with 1-cycles. But it is not perfectly clear, it's true. – Byron Schmuland Feb 2 '13 at 17:38
- You are both very right and the misunderstanding is on me. I was not very clear about the 1-cycle part. I guess I'll just randomly choose an answer. Thanks for all the explanations. – user1206899 Feb 3 '13 at 4:25

The number of ways not to have a pair of lovers is the number of permutations of the $n$ presents that don't have 2-cycles.
Using the symbolic method (see for example Flajolet and Sedgewick's "Analytic Combinatorics"), this class is described by the (hacky) symbolic equation for labelled classes: $$\mathcal{C} = \mathop{MSet}(\mathop{Cyc}(\mathcal{Z}) - {\mathop{Cyc}}_{= 2}(\mathcal{Z}))$$ The respective exponential generating function is: \begin{align*} C(z) &= \exp \left(- \ln (1 - z) - \frac{z^2}{2} \right) \\ &= \frac{\exp \left(- \frac{z^2}{2} \right)}{1 - z} \end{align*} Dividing by $1 - z$ gives partial sums, so what we are looking at is: $$c_n = n! \left. \exp \right|_{\lfloor n / 2 \rfloor} ( - 1 / 2 )$$ Here $\left. \exp \right|_k (z)$ is the truncated exponential function, i.e., the exponential's series taken only up to the $k$-th term. The requested value is $n! - c_n \approx n! (1 - e^{-1/2})$, so some 40% of parties have at least a pair of lovers. This explains @ByronSchmuland's mystery, by the way.

- Yes, I computed the first 32 values by brute force and it is about 38%. Thanks, vonbrand, for this in-depth explanation. – user1206899 Feb 3 '13 at 2:46
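For small parties the answers above can be verified by brute force. The sketch below counts permutations containing a 1-cycle or a 2-cycle (a "set of lovers" in the question's inclusive sense) and reproduces the exact probabilities listed in the accepted answer:

```python
import math
from itertools import permutations

def has_short_cycle(p):
    """True if the permutation p (a 0-indexed tuple) has a 1-cycle or a 2-cycle."""
    # p[p[i]] == i catches 2-cycles and, since p[i] == i implies it, fixed points too
    return any(p[p[i]] == i for i in range(len(p)))

def prob_lovers(n):
    perms = list(permutations(range(n)))
    return sum(map(has_short_cycle, perms)) / len(perms)

for n in range(2, 8):
    print(n, prob_lovers(n))   # 1, 2/3, 3/4, 4/5, 7/9, 65/84
print(1 - math.exp(-1.5))      # limiting value, about 0.7769
```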
http://mathhelpforum.com/advanced-statistics/119624-z-scores-standard-deviation-help.html
# Thread: Z-Scores and Standard Deviation Help 1. ## Z-Scores and Standard Deviation Help Hey, I'm new to this forum. I just started studying psychology and I haven't quite grasped the workings of z-scores. In an exercise, I have to answer the following question: A score that is 20 points above the mean corresponds to a z-score of z = +2. What is the standard deviation? It's probably a very simple question, but I've blanked out. Could someone please explain to me how the answer is achieved? Another one I'm confused about... For a population with σ = 12, a score of X = 87 corresponds to z = -0.25. What is the mean for this distribution? 2. Originally Posted by Gwame Hey, I'm new to this forum. I just started studying psychology and I haven't quite grasped the workings of z-scores. In an exercise, I have to answer the following question: A score that is 20 points above the mean corresponds to a z-score of z = +2. What is the standard deviation? It's probably a very simple question, but I've blanked out. Could someone please explain to me how the answer is achieved? Another one I'm confused about... For a population with σ = 12, a score of X = 87 corresponds to z = -0.25. What is the mean for this distribution? You should know $Z = \frac{X - \mu}{\sigma}$. Therefore: 1) $2 = \frac{(\mu + 20) - \mu}{\sigma}$. Solve for $\sigma$. 2) $-0.25 = \frac{87 - \mu}{12}$. Solve for $\mu$. 3. Originally Posted by Gwame Hey, I'm new to this forum. I just started studying psychology and I haven't quite grasped the workings of z-scores. In an exercise, I have to answer the following question: A score that is 20 points above the mean corresponds to a z-score of z = +2. What is the standard deviation? It's probably a very simple question, but I've blanked out. Could someone please explain to me how the answer is achieved? Another one I'm confused about... For a population with σ = 12, a score of X = 87 corresponds to z = -0.25. What is the mean for this distribution? 
Different sets of samples have different mean values. The mean value is the center point of all numbers in the set. Different sets of data have different mean values, and different means give different center points, which means that every time you have a set of samples you would need to plot a different graph for the probability distribution; but all this extra work is unnecessary if you standardize. Once you standardize, you can use the same graph repeatedly. The standardized mean is always zero, and the standard deviation is measured as a distance from zero. The distance from zero is the standard deviation of the set of data. Say the average height of all your chairs is 3 feet, plus or minus 3 inches. The plus or minus is the deviation. This can be 1 standard deviation or 0.5 standard deviation. 1 is further from the center than 0.5. The z-score tells you the distance from the center point of your data; it's directly related to the sample mean and standard deviation.

4. Originally Posted by mr fantastic You should know $Z = \frac{X - \mu}{\sigma}$. Therefore: 1) $2 = \frac{(\mu + 20) - \mu}{\sigma}$. Solve for $\sigma$. 2) $-0.25 = \frac{87 - \mu}{12}$. Solve for $\mu$. Thanks for your reply. So in the first question, can I cancel out the two $\mu$s, leaving 2 = 20/ $\sigma$, therefore $\sigma$ = 10? And the second question, answer = 90?

5. Originally Posted by Gwame Thanks for your reply. So in the first question, can I cancel out the two $\mu$s, leaving 2 = 20/ $\sigma$, therefore $\sigma$ = 10? And the second question, answer = 90? Yes. CB Many thanks.
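Both exercises in this thread reduce to one rearrangement each of z = (X - mu) / sigma; a trivial sketch with the thread's numbers:

```python
def sigma_from(z, points_above_mean):
    """Standard deviation when a score `points_above_mean` above the mean has z-score z."""
    return points_above_mean / z

def mean_from(z, x, sigma):
    """Population mean, given a score x with z-score z and standard deviation sigma."""
    return x - z * sigma

print(sigma_from(2, 20))         # 10.0
print(mean_from(-0.25, 87, 12))  # 90.0
```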
https://arxiv.org/abs/1610.06167
astro-ph.HE

Title: Deep Chandra Observations of the Pulsar Wind Nebula Created by PSR B0355+54

Abstract: We report on Chandra X-ray Observatory (CXO) observations of the pulsar wind nebula (PWN) associated with PSR B0355+54 (eight observations with a 395 ks total exposure, performed over an 8 month period). We investigated the spatial and spectral properties of the emission coincident with the pulsar, compact nebula (CN), and extended tail. We find that the CN morphology can be interpreted in a way that suggests a small angle between the pulsar spin axis and our line-of-sight, as inferred from the radio data. On larger scales, emission from the 7' (2 pc) tail is clearly seen. We also found hints of two faint extensions nearly orthogonal to the direction of the pulsar's proper motion. The spectrum extracted at the pulsar position can be described with an absorbed power-law + blackbody model. The nonthermal component can be attributed to magnetospheric emission, while the thermal component can be attributed to emission from either a hot spot (e.g., a polar cap) or the entire neutron star surface. Surprisingly, the spectrum of the tail shows only a slight hint of cooling with increasing distance from the pulsar. This implies either a low magnetic field with fast flow speed, or particle re-acceleration within the tail. We estimate physical properties of the PWN and compare the morphologies of the CN and the extended tail with those of other bow shock PWNe observed with long CXO exposures.

Comments: 11 pages, 8 figures
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE)
DOI: 10.3847/1538-4357/833/2/253
Cite as: arXiv:1610.06167 [astro-ph.HE] (or arXiv:1610.06167v1 [astro-ph.HE] for this version)
Submission history
From: Noel Klingler
[v1] Wed, 19 Oct 2016 19:59:48 GMT (953kb,D)
http://concepts-of-physics.com/thermodynamics/stefans-law.php
# Stefan's Law

## Problems from IIT JEE

Problem (IIT JEE 2005): A body with area $A$ and emissivity $e=0.6$ is kept inside a spherical black body. Total heat radiated by the body at temperature $T$ is,

1. $0.6\sigma eAT^4$
2. $0.8\sigma eAT^4$
3. $1.0\sigma eAT^4$
4. $0.4\sigma eAT^4$

Solution: By the Stefan-Boltzmann law, the energy radiated per unit time by a body of surface area $A$, emissivity $e$, and absolute temperature $T$ is $e\sigma A T^4$, i.e., $1.0\,\sigma e A T^4$ (option 3).

Problem (IIT JEE 1994): Two bodies A and B have thermal emissivities of $0.01$ and $0.81$, respectively. The outer surface areas of the two bodies are the same. The two bodies emit total radiant power at the same rate. The wavelength $\lambda_B$ corresponding to maximum spectral radiance in the radiation from B is shifted from the wavelength corresponding to maximum spectral radiance in the radiation from A by $1.00\;\mathrm{\mu m}$. If the temperature of A is 5802 K,

1. the temperature of B is 1934 K.
2. $\lambda_B=1.5\;\mathrm{\mu m}$.
3. the temperature of B is 11604 K.
4. the temperature of B is 2901 K.

Solution: Let $e_A=0.01$, $e_B=0.81$, and $T_A=5802\;\mathrm{K}$. Stefan's law gives the radiant power of the two bodies as, \begin{align} {\mathrm{d}Q_A}/{\mathrm{d}t}=\sigma A e_A {T_A}^{\!4},\nonumber\\ {\mathrm{d}Q_B}/{\mathrm{d}t}=\sigma A e_B {T_B}^{\!4}.\nonumber \end{align} Equate ${\mathrm{d}Q_A}/{\mathrm{d}t}={\mathrm{d}Q_B}/{\mathrm{d}t}$ to get, \begin{align} T_B=\left({e_A}/{e_B}\right)^{\!1/4}T_A=\left({0.01}/{0.81}\right)^{\! 1/4}\times 5802=1934\;\mathrm{K}.\nonumber \end{align} Wien's displacement law, $\lambda_m T=b$, gives, \begin{align} \lambda_B=(T_A/T_B)\lambda_A=(5802/1934)\lambda_A=3\lambda_A. \end{align} Also, since $\lambda_B > \lambda_A$ and $|\lambda_B-\lambda_A|=1\;\mathrm{\mu m}$, we get, \begin{align} \lambda_B-\lambda_A=1\;\mathrm{\mu m}. \end{align} Solve the above equations to get $\lambda_A=0.5\;\mathrm{\mu m}$ and $\lambda_B=1.5\;\mathrm{\mu m}$. Thus, the correct options are 1 and 2.
Problem (IIT JEE 2010): Two spherical bodies $A$ (radius 6 cm) and $B$ (radius 18 cm) are at temperatures $T_1$ and $T_2$, respectively. The maximum intensity in the emission spectrum of $A$ is at 500 nm and in that of $B$ is at 1500 nm. Considering them to be black bodies, what will be the ratio of the rate of total energy radiated by $A$ to that of $B$? Solution: Stefan-Boltzmann law gives the energy radiated per unit time by a spherical black body of area $A=4\pi r^2$ and temperature $T$ as, \begin{align} E=\sigma A T^4=4\pi\sigma r^2 T^4. \end{align} The Wien's displacement law relates temperature of the black body to the wavelength at maximum intensity by \begin{align} \lambda_m T=b. \end{align} Eliminate $T$ from above equations to get, \begin{align} E=4\pi\sigma b^4 (r^2/\lambda^4),\nonumber \end{align} which gives, \begin{align} E_1/E_2=(r_1/r_2)^2\, (\lambda_2/\lambda_1)^4=(6/18)^2\, (1500/500)^4=9.\nonumber \end{align}
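The last ratio is quick to check numerically, since the constant $4\pi\sigma b^4$ cancels and only the radii and peak wavelengths survive (any consistent units will do, so the problem's raw numbers can be used directly):

```python
def power_ratio(r1, lam1, r2, lam2):
    """E1/E2 for two black bodies, from E = 4*pi*sigma*b**4 * r**2 / lam**4."""
    return (r1 / r2) ** 2 * (lam2 / lam1) ** 4

print(power_ratio(6, 500, 18, 1500))  # 9, up to float rounding
```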
https://explainingmaths.wordpress.com/2008/12/12/quantifier-packaging-when-teaching-convergence-of-sequences/
Quantifier packaging when teaching convergence of sequences It is well known that maths students find statements with multiple quantifiers difficult to break down into digestible portions in order to understand the whole statement. One example of this is the definition of convergence for a sequence of real numbers (and, later, sequences in metric spaces). The definition of the statement xn tends to x as n tends to infinity has three quantifiers: For all ε > 0 there exists a natural number N such that for all natural numbers n≥N we have |xn-x| < ε I am developing my own approach to breaking down this statement into digestible pieces. First I have introduced into my teaching the notion of absorption of a sequence by a set. For a natural number N, I say that a set A absorbs the sequence (xn) by stage N if, for all n≥N, we have that xn is an element of A. This is a single-quantifier statement which can readily be checked by students for specific sets and sequences. The set A absorbs the sequence (xn) if there exists a natural number N such that the set A absorbs the sequence (xn) by stage N. There are several standard terms equivalent to this notion: for example, this is what is meant by saying that the sequence (xn) is eventually in the set A, or that all but finitely many of the terms of the sequence are in A, etc. As we will see, one advantage of absorption is grammatical: it makes the set the subject of the sentence, and the sequence the object. In terms of absorption, the definition of the statement xn tends to x as n tends to infinity can now be expressed as follows: Every open interval centred on x absorbs the sequence (xn). Compare this with the equivalent ‘eventually in’ formulation: The sequence (xn) is eventually in every open interval centred on x. This latter formulation appears ambiguous, and can cause problems for the students. 
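The "absorbs by stage N" condition contains a single quantifier over the tail of the sequence, so it can be checked mechanically against a concrete (finite prefix of a) sequence. A small illustrative sketch, with hypothetical function names, using x_n = 1/n and the interval (-0.1, 0.1):

```python
def absorbs_by_stage(interval, seq, N):
    """True if every term x_n with n >= N lies in the open interval (a, b)."""
    a, b = interval
    return all(a < x < b for n, x in enumerate(seq, start=1) if n >= N)

# x_n = 1/n tends to 0: the interval (-0.1, 0.1) absorbs the sequence
# by stage 11 (x_10 = 0.1 is not inside the OPEN interval), but not by stage 5.
seq = [1 / n for n in range(1, 101)]
print(absorbs_by_stage((-0.1, 0.1), seq, 11))  # True
print(absorbs_by_stage((-0.1, 0.1), seq, 5))   # False (x_5 = 0.2 is outside)
```

Convergence to x then says: every open interval centred on x absorbs the sequence, i.e. for every such interval some stage N witnesses the one-quantifier statement above.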
It can clearly be made unambiguous, but only at the expense of making it somewhat unwieldy: For all ε>0, the sequence (xn) is eventually in the open interval (x-ε,x+ε).

I have not yet had the opportunity to teach convergence of sequences to first year undergraduates. However, I have used the notion of absorption in teaching second-year mathematical analysis. In particular, I have used this method to teach the difference between uniform convergence and pointwise convergence for sequences of functions. This material (pdf file + audio podcast) is available from u-Now, or directly from http://unow.nottingham.ac.uk/resources/resource.aspx?hid=e29ada63-e1d3-7898-9afd-42692accd0be

7 responses to “Quantifier packaging when teaching convergence of sequences”

1. Tim Gowers was kind enough to leave some comments on the blogger edition of this post, which I will now reproduce here. gowers said… Dear Joel, I like your “absorbs” idea. In particular, it hadn’t occurred to me that one could get a grammatical advantage by focusing on the set rather than on the sequence, and I think it could be very useful indeed. Like all teaching mathematicians, I have faced the problem of dealing with quantifiers. One recent example was when I gave a talk to a non-mathematical (or rather, only partly mathematical) audience and wanted to explain Szemerédi’s theorem. For that I used a different trick. Obviously I couldn’t say $\forall \delta>0\ \forall k\in\mathbb{N}\ \exists N\in\mathbb{N}\ \forall A\subset\{1,2,\dots,N\}\ |A|\geq\delta N \Rightarrow A$ contains an arithmetic progression of length $k$. So instead I dropped down a couple of levels of quantification by saying, “If $N$ is large enough, then every subset of $\{1,2,\dots,N\}$ of size at least $N/120$ contains an arithmetic progression of length $34$.” I then followed that up by saying that I could have chosen any other pair of numbers instead of $120$ and $34$.
I suppose I was using two tricks: choose specific numbers instead of arbitrary ones, and say “if $N$ is large enough” instead of “there exists an $N$ such that” . The second trick isn’t removing a quantifier but it sort of disguises it somehow. Best wishes, Tim Like 2. Tim Gowers’s second comment (working within the limitations of blogger) was the following gowers said… Actually, I did once have a similar pedagogical idea myself but never got round to trying it out. It was to have a sequence of definitions with ever-increasing numbers of quantifiers. But the trick would be that you’d only actually see one quantifier at a time, the remaining ones being hidden in a definition that you had become used to. For example, suppose you wanted to define the notion of a Cauchy sequence. You would present a sequence of definitions as follows. 1. A sequence has diameter at most c if no two of its terms differ by more than c. 2. A sequence has diameter at most c after N if |a_p-a_q| is at most c whenever p and q are greater than N. 3. A sequence has essential diameter at most c if there is some N for which it has diameter at most c after N. (One could perhaps say “if it eventually has diameter at most c”.) 4. A sequence is Cauchy if for every positive c it has essential diameter at most c. The advantage of doing this is that one could ask students to do exercises on the intermediate definitions and not progress to the full definition until they were comfortable with them. For instance, one could ask about the essential diameter of the sequence 0, 3, 0, 2, 0, 1.5, 0, 1.25, 0, 1.125,… and get them to see that if c is greater than 1 then it has essential diameter at most c, but not otherwise. Like 3. I agree with Tim’s second comment 100%, especially for students in the early/middle stages of the course. Doing things this way would give the students a much better understanding of the concepts involved. 
Unfortunately, we usually find ourselves with limited time and resources to cover a significant syllabus. If we spend too much extra time on one part of the syllabus, the rest of the syllabus will suffer. So, like everything in life, it comes down to finding an acceptable balance in an imperfect world where there are not enough hours in the day! In the latter stages of the course, I think that students need to develop the ability to package quantifiers for themselves. If we do too much of this for them, it might not be good for them! On the specific issue of Cauchy sequences, it may be possible to argue that there is a ‘redundant’ quantifier anyway. Why do we insist on looking at all $p$ and $q$ after $N$ and consider $|a_p - a_q|$? Why not simply look at all $p$ after $N$ and consider $|a_p-a_N|$? This would lead to an equivalent definition with one less quantifier, but would perhaps disguise the true nature of the Cauchy condition? Like 4. A tiny further comment — it occurred to me that it was pointless to use the phrase “essential diameter” when “eventual diameter” sounds almost the same and would be much more intuitively tied to what it is supposed to mean. Like 5. JamesCrook These are excellent ideas, but how about tackling the issue of familiarity with quantifiers head on too? How about asking or showing the students how to express the idea of ‘checkmate’ using $\forall$ and $\exists$ notation? And then what is checkmate in 1, in 2, in 3 moves? We get alternating sequences of $\forall$ and $\exists$ as long as you like from this, but in a familiar territory. You can show De Morgan’s laws for quantifiers, show that applying them is looking at the game from the other player’s point of view. You can get across clearly that $\forall$ and $\exists$ do not commute. It is, or should be, familiar in this context. Ask them to prove or disprove $\forall x \in \mathbb{N}~ \exists y \in \mathbb{N}$ s.t. $y> x$. $\exists y \in \mathbb{N}$ s.t. 
$\forall x \in \mathbb{N}~ y>x\,?$ Then prove or disprove $\forall v \in \mathbb{N}~\exists z \in \mathbb{N}$ with $z>v+3$ s.t. $\forall w \in \mathbb{N}~ \exists y \in \mathbb{N}$ with $y>w+1$ s.t. $\forall x \in \mathbb{N}~ v>w>x>y>z \Rightarrow x-v=z-x$. Possibly the big hurdle is for students to see that formal statements using universal and existential quantifiers can be manipulated as equations – and that like all equations there are rules to what you can and can’t do. There’s a tendency for students to see the quantifiers as somehow outside of that and to treat them informally, having never thought about the rules. That is where I think their common mistakes come from. I really like the way you both are reducing the depth of the expressions by making the inner expressions familiar ideas first. Like 6. You can now see a screencast of me discussing sequence convergence and absorption in a workshop for my second-year mathematical analysis students. This screencast is available at http://wirksworthii.nottingham.ac.uk/webcast/maths/G12MAN-09-10/EC4b/ Like
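Gowers's Cauchy-sequence example 0, 3, 0, 2, 0, 1.5, 0, 1.25, … from comment 2 above can be explored numerically. A sketch (the closed-form term formula is my own reconstruction of the pattern, and a finite horizon stands in for the infinite tail):

```python
# Diameter of the tail of the example sequence 0, 3, 0, 2, 0, 1.5, ...
# after stage N. Its eventual ("essential") diameter is at most c exactly
# when c > 1: every tail contains 0 and a term slightly above 1.
def term(n):  # n = 1, 2, 3, ...; even-indexed terms are 1 + 2^(2 - n/2)
    return 0.0 if n % 2 == 1 else 1.0 + 2.0 ** (2 - n // 2)

def diameter_after(N, horizon=200):
    tail = [term(n) for n in range(N + 1, horizon)]
    return max(tail) - min(tail)

print(diameter_after(0))    # 3.0 (the whole sequence)
print(diameter_after(4))    # 1.5
print(diameter_after(100))  # just above 1, approaching 1 as N grows
```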
http://tex.stackexchange.com/questions/41775/get-combine-a-sidewaystable-with-a-subfloat-from-subfig
# How to combine a sidewaystable with a subfloat from subfig

I use the subtables environment from the subfloat package to combine table numbering (i.e., any tables I wrap in the subtables environment get labeled 1a, 1b, and so on). But I just learned on the LaTeX wiki how to use the subfloat environment from the subfig package for figures, and it seems that I should use the subfloat environment from the subfig package for tables, too. Is this correct?

The rub is that I frequently use the sidewaystable environment from the rotating package. Can I rotate tables using the subfloat environment from the subfig package? (I can't figure out how from the manual.) This would allow side-by-side sideways tables. Or should I stick with my more naive approach using the subfloat package?

Here is an example of how I combine the subtables and sidewaystable environments.

```latex
\documentclass{article}
\usepackage{subfloat}
\usepackage{rotating}
\begin{document}
Check out tables \ref{tab:first} and \ref{tab:second}.
\begin{subtables}
\begin{sidewaystable}
  \centering
  \begin{tabular}{ccc}
    \hline
    a&b&c\\
    d&e&f\\
    \hline
  \end{tabular}
  \caption{my first table}
  \label{tab:first}
\end{sidewaystable}
\begin{sidewaystable}
  \centering
  \begin{tabular}{ccc}
    \hline
    a&b&c\\
    d&e&f\\
    \hline
  \end{tabular}
  \caption{my second table, which is the same as the first}
  \label{tab:second}
\end{sidewaystable}
\end{subtables}
\end{document}
```

- did you mean to have each subfloat on a separate page? -- i would expect to have the subfloat environments inside a single sidewaystable environment. – wasteofspace Jan 21 '12 at 14:44
- @anon -- Thanks. I didn't know that I could wrap multiple tabulars in one sidewaystable. Is there a good book on LaTeX? I learn most of this stuff in a vacuum, but the other programs I use (R, Stata) have consoles and help commands. Thanks! – Richard Herron Jan 21 '12 at 18:30

It is possible to obtain a sidewaystable look with multiple tables turned sideways without using subfig or even rotating.
Here is a minimal example (which uses subfloat for numbering):

```latex
\documentclass{article}
\usepackage{subfloat}% http://ctan.org/pkg/subfloat
\usepackage{graphicx}% http://ctan.org/pkg/graphicx
\begin{document}
Check out Tables~\ref{tab:first} and~\ref{tab:second}.
\begin{subtables}
\begin{table}[ht]
  \centering
  \rotatebox{90}{% Rotate table 90 degrees CCW
    \begin{minipage}{0.5\linewidth}
      \centering
      \begin{tabular}{ccc}
        \hline
        a&b&c\\
        d&e&f\\
        \hline
      \end{tabular}
      \caption{my first table}\label{tab:first}
    \end{minipage}
  }
  \qquad% separation between sideways tables
  \rotatebox{90}{% Rotate table 90 degrees CCW
    \begin{minipage}{0.5\linewidth}
      \centering
      \begin{tabular}{ccc}
        \hline
        a&b&c\\
        d&e&f\\
        \hline
      \end{tabular}
      \caption{my second table, which is the same as the first}\label{tab:second}
    \end{minipage}
  }
\end{table}
\end{subtables}
\end{document}
```

Rotation of the tables is obtained via graphicx's \rotatebox{<angle>}{<stuff>}. However, for tables (like a tabular with a \caption), you need to box the contents you're rotating. I've done so using a minipage of width 0.5\linewidth. This length is required, but you can also use the varwidth package, which provides a similarly-named environment as an analogue to minipage that shrinks (if needed) to the natural width of the box. The space between "subtables" is given by \qquad, although a \hspace{<len>} would also work (where <len> is any recognized TeX length). Finally, the table has been set as a float with specification [ht], which is different from sidewaystable's necessary [p] (page of floats) usage. However, you can modify this to suit.

For the moment, this does not incorporate rotating's on-the-fly +/-90 degree rotation of floats, which is page-dependent (the tables are rotated 90 degrees CCW on odd-numbered pages and 90 degrees CW on even-numbered pages). However, it would be possible to implement such a feature using some additional packages like chngpage, for example.
https://chemistry.stackexchange.com/questions/67231/how-to-use-c1v1-c2v2-how-do-the-units-work
# How to use C1V1=C2V2: how do the units work?

I am trying to do a dilution where I know I have 0.749 g/mL and I want to use this concentration to calculate the grams of aspirin in each 100 mL of solution. I am specifically worried about the units. I want grams out. Is it valid to do the following?

One: $$C_1V_1=C_2V_2$$ $$(0.749~\mathrm{g})(1~\mathrm{mL})=(C_2~\mathrm{g})(100~\mathrm{mL})$$ $$C_2=0.00749~\mathrm{g}$$

or must I do this?

Two: $$C_1V_1=C_2V_2$$ $$(0.749~\mathrm{g/mL})(1~\mathrm{mL})=(C_2~\mathrm{g/mL})(100~\mathrm{mL})$$ $$C_2=0.00749~\mathrm{g/mL}$$

The difference being the final units: g vs. g/mL.

• Concentration is g/mL, so the latter is correct. – DHMO Jan 28 '17 at 5:24
• So there is no way to get rid of the mL part? Part of me says to multiply by 1 mL to remove it... It seems like that wouldn't be valid though.... Jan 28 '17 at 5:33
• What do you mean "to get rid of"? That's the way it is; you can't just up and change that, much like you can't get rid of the letter "c" in the word "concentration". Unless, of course, you want to know how much of your solute is in 1 mL (or 2 mL, or any other volume) of your solution; then you multiply by that volume and get some grams. Jan 28 '17 at 6:28
• You have to realize that if you have $$x_1 \cdot y_1 = x_2 \cdot y_2$$ then whatever the units of $x$ and $y$, the units will cancel appropriately. That doesn't mean that the closing price of the NY stock exchange times the number of Yankee's tickets sold means anything. – MaxW Jan 28 '17 at 6:40

It's fine to use $\mathrm{g/mL}$ for concentration in a basic dilution equation. $C$ is never a mass. If you want the mass of aspirin in a given solution with concentration $C$, the mass is given by $m = C\times V$.
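A quick numerical sketch of the accepted reasoning (variable names are mine): the dilution equation yields a concentration in g/mL, and multiplying that concentration by a volume yields grams.

```python
# Dilution via C1 * V1 = C2 * V2, keeping concentration in g/mL throughout.
c1 = 0.749   # g/mL, stock concentration
v1 = 1.0     # mL of stock taken
v2 = 100.0   # mL, final volume after dilution

c2 = c1 * v1 / v2          # g/mL -- still a concentration, not a mass
mass_in_final = c2 * v2    # g -- multiply by a volume to get grams
print(c2)             # ~0.00749 g/mL
print(mass_in_final)  # ~0.749 g: all the solute ends up in the diluted solution
```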
https://ecommons.cornell.edu/handle/1813/57017/browse?type=subject&value=altmetrics
#### Evaluating the Impact of Altmetrics (2012-05-20)

Objectives: Librarians, publishers, and researchers have long placed significant emphasis on journal metrics such as the impact factor. However, these tools do not take into account the impact of research outside of citations ...
https://www.physicsforums.com/threads/sum-of-the-first-50-terms-of-1-n-n-1.93243/
# Sum of the first 50 terms of 1/(n(n+1))

1. Oct 9, 2005

### nate808

How would you solve for the sum of the first 50 terms of 1/(n(n+1))? I know how to do it if there is a common denominator, but I can't seem to find one here. Can someone please help? (BTW, this was a question on a test I just had that I couldn't figure out, not homework.)

2. Oct 10, 2005

### Tide

Are you familiar with partial fractions? If so then you can do the sum very easily! :)

3. Oct 10, 2005

### Moonbear Staff Emeritus

Any question related to your coursework should go in homework help, even questions about old exams. Hang on, I'm going to send the thread for a ride over!

4. Oct 10, 2005

### nate808

I don't exactly understand what finding a partial fraction would do to help find the sum. Could you please explain? (I believe the partial fractions are 1/n - 1/(n+1).)

5. Oct 10, 2005

### nate808

nvm--I figured it out--thanks for the help
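For anyone following along: the partial-fraction decomposition 1/(n(n+1)) = 1/n - 1/(n+1) makes the sum telescope, leaving 1 - 1/51 = 50/51. A quick sketch verifying this with exact rational arithmetic:

```python
from fractions import Fraction

# Direct sum of the first 50 terms of 1/(n(n+1)).
direct = sum(Fraction(1, n * (n + 1)) for n in range(1, 51))

# Telescoping: the partial fractions (1/n - 1/(n+1)) cancel in pairs,
# leaving only the first and last terms.
telescoped = Fraction(1, 1) - Fraction(1, 51)

print(direct)      # 50/51
print(telescoped)  # 50/51
```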
http://math.stackexchange.com/questions/174677/possible-distance-b-w-points/174681
# Possible distance between points

I am stumped on the following question (at least part of it). The distance from town A to town B is five miles. Town C is six miles from B. Which of the following could be the distance from A to C? A) 11 B) 7 C) 1. The answer is all of them. I could only figure out 11. How did they get 7 and 1?

Draw a picture. Say $A$ and $B$ live on the $x$-axis, with $B$ to the right of $A$. You noticed that if $C$ also lives on the $x$-axis, $6$ miles to the right of $B$, then $C$ will be $11$ miles from $A$. If $C$ lives on the $x$-axis, $6$ miles to the left of $B$, then $C$ will be $1$ mile from $A$. As for $7$, there certainly is a triangle $ABC$ with $AB=5$, $BC=6$, and $CA=7$. In general, if we are given three positive real numbers $a$, $b$, and $c$, and the sum of any two of them is greater than the third, then there is a triangle with sides $a$, $b$, and $c$. To think about it another way, draw a circle with centre $B$ and radius $6$. Draw a circle with centre $A$ and radius $7$. These two circles meet (in fact in two places). So there are two points $C$ which are at distance $6$ from $B$ and distance $7$ from $A$.

Getting 1 is easy: say B is 5 miles directly east of A, and C is 6 miles directly west of B. This makes C 1 mile directly west of A. Getting 7 is a bit trickier and requires some thought: we know that A is 5 miles from B and that B is 6 miles from C. If we were to make a right triangle with 5 on the bottom and 6 on the side, we would get a hypotenuse of length sqrt(61), which is greater than 7. Therefore, to make AC equal to 7, the angle at B must be less than 90 degrees. We also know that there exists a triangle with sides 5, 6, and 7, and so we have our answer.

The triangle inequality states that $AB\leq BC+AC$, $BC\leq AB+AC$, and $AC\leq BC+AB$. If $AC=7$, all three inequalities are strict, so $A$, $B$, and $C$ form a proper triangle. If $AC=11$, then $AB+BC=AC$, which means $B$ lies on the road between $A$ and $C$. If $AC=1$, then $AB+AC=BC$, which means $A$ lies on the road between $B$ and $C$. Two of the answers make the three towns collinear, while the other makes a proper triangle with sides 5, 6, and 7.

You know two things: the line connecting $A$ and $B$ is five miles long, and the line between $B$ and $C$ is six miles long. You do not know where $C$ is relative to $B$. That means $C$ must lie on a circle of radius $6$ miles centred at $B$. If $A$ lies directly between $B$ and $C$, then what is the distance from $A$ to $C$?
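The triangle-inequality test described above can be written as a one-line check (non-strict inequalities, so degenerate, collinear configurations count as possible):

```python
# A distance AC is achievable given AB and BC iff the three lengths
# satisfy the (non-strict) triangle inequality in all three ways.
def possible(ab, bc, ac):
    return ab + bc >= ac and bc + ac >= ab and ac + ab >= bc

for ac in (11, 7, 1):
    print(ac, possible(5, 6, ac))  # all three candidates are possible
```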
https://brilliant.org/problems/a-physicist-need-help/
# A physicist needs help

Geometry Level pending

A physicist was studying the dependence of a physical quantity $$y$$ on another quantity $$x$$. After performing a large number of experiments, he discovered that the value of $$y$$ varies linearly with $$x$$; in other words, $$y$$ is a linear function of $$x$$, and in the language of mathematics we can express this as $\boxed{y = f(x)}$ where $$f(x)$$ is a linear function. In one of his experiments he needs to plot the graph of $$f(x)$$. If it is known that $\boxed{f(0) = -5}$ and $\boxed{f(1) = -4}$, then what is the angle in degrees that the graph of the function will make with the positive direction of the $$x$$-axis?
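For reference, the slope of the line is (f(1) - f(0))/(1 - 0) = 1, so the requested angle is arctan(1) = 45 degrees. A sketch of the computation:

```python
import math

# f is linear with f(0) = -5 and f(1) = -4, so the slope is the rise over run.
slope = (-4 - (-5)) / (1 - 0)           # = 1.0
angle = math.degrees(math.atan(slope))  # angle with the positive x-axis
print(angle)  # ~45.0
```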
https://www.physicsforums.com/threads/kinematics-question-help.954540/
# Kinematics question help • Start date • Tags • #1 88 4 ## Homework Statement A ball is traveling at 3m/s then bounces off of a tennis racket and returns at 2m/s. If the ball is in contact with the racket for 12ms, what is the average acceleration? ## Homework Equations average acceleration and conversion a=Vf-Vi/Tf-Ti 1s= 1000ms ## The Attempt at a Solution Before I started, I converted 12ms to s and got 0.012s. I know the equation for avg acceleration. but I feel like I'm missing a step. I plug in the numbers and get an unrealistic solution. The answer comes out negative. I feel like im missing something. Or im making it too complicated for myself. • #2 Orodruin Staff Emeritus Homework Helper Gold Member 2021 Award 18,533 8,433 Please show your actual steps. Dont just say ”I put in the numbers and think I got it wrong”. How on Earth are we to see if you got it wrong or not if you dont show us what you did? • #3 tnich Homework Helper 1,048 336 ## Homework Statement A ball is traveling at 3m/s then bounces off of a tennis racket and returns at 2m/s. If the ball is in contact with the racket for 12ms, what is the average acceleration? ## Homework Equations average acceleration and conversion a=Vf-Vi/Tf-Ti 1s= 1000ms ## The Attempt at a Solution Before I started, I converted 12ms to s and got 0.012s. I know the equation for avg acceleration. but I feel like I'm missing a step. I plug in the numbers and get an unrealistic solution. The answer comes out negative. I feel like im missing something. Or im making it too complicated for myself. Everything you have showed us so far looks correct. A negative acceleration is not necessarily wrong. The magnitude of the acceleration should be pretty large to cause that change in velocity in such a short time. Why don't you show us your calculations so we can see if they are correct? • #4 88 4 Please show your actual steps. Dont just say ”I put in the numbers and think I got it wrong”. 
How on Earth are we to see if you got it wrong or not if you dont show us what you did? I subtracted 2m/s from 3m/s and then divided by the time 0.012s. • #5 88 4 Everything you have showed us so far looks correct. A negative acceleration is not necessarily wrong. The magnitude of the acceleration should be pretty large to cause that change in velocity in such a short time. Why don't you show us your calculations so we can see if they are correct? I subtracted 2m/s from 3m/s and then divided by the time 0.012s and got -83.3m/s^2 • #6 haruspex Homework Helper Gold Member 37,982 7,700 I subtracted 2m/s from 3m/s and then divided by the time 0.012s and got -83.3m/s^2 Velocity has direction. The change in speed 1m/s, but what is the change in velocity? • #7 tnich Homework Helper 1,048 336 I subtracted 2m/s from 3m/s and then divided by the time 0.012s and got -83.3m/s^2 You need to think about which direction the ball is traveling in before and after hitting it with the racket. Does the sign of the velocity change? • #8 88 4 You need to think about which direction the ball is traveling in before and after hitting it with the racket. Does the sign of the velocity change? I'm guessing that after it hits the racket it goes in the opposite direction so should 2m/s be negative? • #9 haruspex Homework Helper Gold Member 37,982 7,700 I'm guessing that after it hits the racket it goes in the opposite direction so should 2m/s be negative? Yes. • #10 88 4 Yes. Okay. I changed that. So i subtracted 3m/s from -2m/s and got -5. Then I divided by 0.012s and got -417m/s^2. but the question is multiple choice. The answers are: A) 0.417m/s^2 B) -56m/s^2 C) 0m/s^2 D) 83.3m/s^2 E) 417m/s^2 • #11 tnich Homework Helper 1,048 336 Okay. I changed that. So i subtracted 3m/s from -2m/s and got -5. Then I divided by 0.012s and got -417m/s^2. but the question is multiple choice. 
The answers are: A) 0.417m/s^2 B) -56m/s^2 C) 0m/s^2 D) 83.3m/s^2 E) 417m/s^2 • #12 88 4 3m/s is the positive direction. 2m/s is the negative direction I'm assuming since its returning. B is the only negative answer though • #13 Orodruin Staff Emeritus Homework Helper Gold Member 2021 Award 18,533 8,433 Does the sign depend on which direction you define as negative and which you define as positive? Is it specified in the problem which is the positive direction? • #14 88 4 Does the sign depend on which direction you define as negative and which you define as positive? Is it specified in the problem which is the positive direction? Now that I really think about it... I see the ball traveling FROM the wall or whatever is projecting it to the racket and back in the other direction.... but the problem doesn't state which is the negative direction. • #15 Orodruin Staff Emeritus Homework Helper Gold Member 2021 Award 18,533 8,433 Now that I really think about it... I see the ball traveling FROM the wall or whatever is projecting it to the racket and back in the other direction.... but the problem doesn't state which is the negative direction. Then which answer can be correct knowing that the people who constructed the problem may have defined the positive direction in either direction? • #16 88 4 Then which answer can be correct knowing that the people who constructed the problem may have defined the positive direction in either direction? 417m/s^2? E? I first thought D) 83.3m/s^2 but that was incorrect. • #17 haruspex Homework Helper Gold Member 37,982 7,700 417m/s^2? E? Yes.
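The thread's final computation can be summarized in a few lines, taking the incoming direction as positive (as discussed above, the sign of the result depends on that choice; only the magnitude is fixed):

```python
# Average acceleration of the ball during contact with the racket.
v_i = 3.0     # m/s, incoming (chosen as the positive direction)
v_f = -2.0    # m/s, rebounding in the opposite direction
dt = 12e-3    # contact time: 12 ms = 0.012 s

a = (v_f - v_i) / dt
print(a)  # ~ -416.7 m/s^2; magnitude ~417 m/s^2, matching answer E
```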
https://en.wikipedia.org/wiki/Elementary_symmetric_polynomial
# Elementary symmetric polynomial

In mathematics, specifically in commutative algebra, the elementary symmetric polynomials are one type of basic building block for symmetric polynomials, in the sense that any symmetric polynomial can be expressed as a polynomial in elementary symmetric polynomials. That is, any symmetric polynomial P is given by an expression involving only additions and multiplication of constants and elementary symmetric polynomials. There is one elementary symmetric polynomial of degree d in n variables for each nonnegative integer d ≤ n, and it is formed by adding together all distinct products of d distinct variables. ## Definition The elementary symmetric polynomials in n variables X1, …, Xn, written ek(X1, …, Xn) for k = 0, 1, …, n, are defined by {\displaystyle {\begin{aligned}e_{0}(X_{1},X_{2},\dots ,X_{n})&=1,\\[10px]e_{1}(X_{1},X_{2},\dots ,X_{n})&=\sum _{1\leq j\leq n}X_{j},\\e_{2}(X_{1},X_{2},\dots ,X_{n})&=\sum _{1\leq j<k\leq n}X_{j}X_{k},\end{aligned}}} and so forth, ending with ${\displaystyle e_{n}(X_{1},X_{2},\dots ,X_{n})=X_{1}X_{2}\cdots X_{n}.}$ In general, for k ≥ 0 we define ${\displaystyle e_{k}(X_{1},\ldots ,X_{n})=\sum _{1\leq j_{1}<j_{2}<\cdots <j_{k}\leq n}X_{j_{1}}X_{j_{2}}\cdots X_{j_{k}},}$ so that ek(X1, …, Xn) = 0 if k > n. Thus, for each positive integer k less than or equal to n there exists exactly one elementary symmetric polynomial of degree k in n variables. To form the one that has degree k, we take the sum of all products of k-subsets of the n variables. (By contrast, if one performs the same operation using multisets of variables, that is, taking variables with repetition, one arrives at the complete homogeneous symmetric polynomials.)
Given an integer partition (that is, a finite decreasing sequence of positive integers) λ = (λ1, …, λm), one defines the symmetric polynomial eλ(X1, …, Xn), also called an elementary symmetric polynomial, by ${\displaystyle e_{\lambda }(X_{1},\dots ,X_{n})=e_{\lambda _{1}}(X_{1},\dots ,X_{n})\cdot e_{\lambda _{2}}(X_{1},\dots ,X_{n})\cdots e_{\lambda _{m}}(X_{1},\dots ,X_{n})}$. Sometimes the notation σk is used instead of ek. ## Examples The following lists the n elementary symmetric polynomials for the first four positive values of n. (In every case, e0 = 1 is also one of the polynomials.) For n = 1: ${\displaystyle e_{1}(X_{1})=X_{1}.}$ For n = 2: {\displaystyle {\begin{aligned}e_{1}(X_{1},X_{2})&=X_{1}+X_{2},\\e_{2}(X_{1},X_{2})&=X_{1}X_{2}.\,\\\end{aligned}}} For n = 3: {\displaystyle {\begin{aligned}e_{1}(X_{1},X_{2},X_{3})&=X_{1}+X_{2}+X_{3},\\e_{2}(X_{1},X_{2},X_{3})&=X_{1}X_{2}+X_{1}X_{3}+X_{2}X_{3},\\e_{3}(X_{1},X_{2},X_{3})&=X_{1}X_{2}X_{3}.\,\\\end{aligned}}} For n = 4: {\displaystyle {\begin{aligned}e_{1}(X_{1},X_{2},X_{3},X_{4})&=X_{1}+X_{2}+X_{3}+X_{4},\\e_{2}(X_{1},X_{2},X_{3},X_{4})&=X_{1}X_{2}+X_{1}X_{3}+X_{1}X_{4}+X_{2}X_{3}+X_{2}X_{4}+X_{3}X_{4},\\e_{3}(X_{1},X_{2},X_{3},X_{4})&=X_{1}X_{2}X_{3}+X_{1}X_{2}X_{4}+X_{1}X_{3}X_{4}+X_{2}X_{3}X_{4},\\e_{4}(X_{1},X_{2},X_{3},X_{4})&=X_{1}X_{2}X_{3}X_{4}.\,\\\end{aligned}}} ## Properties The elementary symmetric polynomials appear when we expand a linear factorization of a monic polynomial: we have the identity ${\displaystyle \prod _{j=1}^{n}(\lambda -X_{j})=\lambda ^{n}-e_{1}(X_{1},\ldots ,X_{n})\lambda ^{n-1}+e_{2}(X_{1},\ldots ,X_{n})\lambda ^{n-2}+\cdots +(-1)^{n}e_{n}(X_{1},\ldots ,X_{n}).}$ That is, when we substitute numerical values for the variables X1, X2, …, Xn, we obtain the monic univariate polynomial (with variable λ) whose roots are the values substituted for X1, X2, …, Xn and whose coefficients are up to their sign the elementary symmetric polynomials. 
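The defining sum over k-element subsets translates directly into code; a sketch (the function name is mine) that reproduces the n = 4 pattern above at a sample point:

```python
from itertools import combinations
from math import prod

def elementary_symmetric(k, xs):
    """e_k evaluated at the point xs: sum of products over all k-subsets."""
    return sum(prod(c) for c in combinations(xs, k))

xs = [1, 2, 3, 4]
print(elementary_symmetric(1, xs))  # 10 = 1+2+3+4
print(elementary_symmetric(2, xs))  # 35, the six pairwise products summed
print(elementary_symmetric(4, xs))  # 24 = 1*2*3*4
print(elementary_symmetric(5, xs))  # 0, since e_k = 0 for k > n
```

Note that `combinations(xs, 0)` yields one empty tuple whose product is 1, so the same code also gives e_0 = 1.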
These relations between the roots and the coefficients of a polynomial are called Vieta's formulas. The characteristic polynomial of a square matrix is an example of an application of Vieta's formulas. The roots of this polynomial are the eigenvalues of the matrix. When we substitute these eigenvalues into the elementary symmetric polynomials, we obtain, up to their sign, the coefficients of the characteristic polynomial, which are invariants of the matrix. In particular, the trace (the sum of the elements of the diagonal) is the value of e1, and thus the sum of the eigenvalues. Similarly, the determinant is, up to sign, the constant term of the characteristic polynomial; more precisely, the determinant is the value of en. Thus the determinant of a square matrix is the product of the eigenvalues. The set of elementary symmetric polynomials in n variables generates the ring of symmetric polynomials in n variables. More specifically, the ring of symmetric polynomials with integer coefficients equals the integral polynomial ring ℤ[e1(X1, …, Xn), …, en(X1, …, Xn)]. (See below for a more general statement and proof.) This fact is one of the foundations of invariant theory. For other systems of symmetric polynomials with a similar property see power sum symmetric polynomials and complete homogeneous symmetric polynomials. ## The fundamental theorem of symmetric polynomials For any commutative ring A, denote the ring of symmetric polynomials in the variables X1, …, Xn with coefficients in A by A[X1, …, Xn]Sn. This is a polynomial ring in the n elementary symmetric polynomials ek(X1, …, Xn) for k = 1, …, n. (Note that e0 is not among these polynomials; since e0 = 1, it cannot be a member of any set of algebraically independent elements.)
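The trace and determinant statements can be illustrated for a 2×2 matrix, where the eigenvalues come straight from the quadratic formula applied to the characteristic polynomial λ² − e1·λ + e2; a sketch (the sample matrix is my own choice):

```python
import math

# A sample matrix [[2, 1], [1, 3]]; its characteristic polynomial is
# λ^2 - trace·λ + det, i.e. e1 of the eigenvalues is the trace and
# e2 (their product) is the determinant.
a, b, c, d = 2.0, 1.0, 1.0, 3.0
trace = a + d
det = a * d - b * c

disc = math.sqrt(trace ** 2 - 4.0 * det)
lam1, lam2 = (trace + disc) / 2.0, (trace - disc) / 2.0

print(abs((lam1 + lam2) - trace) < 1e-9)  # True: sum of eigenvalues = trace
print(abs(lam1 * lam2 - det) < 1e-9)      # True: product of eigenvalues = det
```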
This means that every symmetric polynomial P(X1, …, Xn) ∈ A[X1, …, Xn]Sn has a unique representation ${\displaystyle P(X_{1},\ldots ,X_{n})=Q{\big (}e_{1}(X_{1},\ldots ,X_{n}),\ldots ,e_{n}(X_{1},\ldots ,X_{n}){\big )}}$ for some polynomial Q ∈ A[Y1, …, Yn]. Another way of saying the same thing is that A[X1, …, Xn]Sn is isomorphic to the polynomial ring A[Y1, …, Yn] through an isomorphism that sends Yk to ek(X1, …, Xn) for k = 1, …, n. ### Proof sketch The theorem may be proved for symmetric homogeneous polynomials by a double mathematical induction with respect to the number of variables n and, for fixed n, with respect to the degree of the homogeneous polynomial. The general case then follows by splitting an arbitrary symmetric polynomial into its homogeneous components (which are again symmetric). In the case n = 1 the result is obvious because every polynomial in one variable is automatically symmetric. Assume now that the theorem has been proved for all polynomials in m < n variables and all symmetric polynomials in n variables with degree < d. Every homogeneous symmetric polynomial P in A[X1, …, Xn]Sn can be decomposed as a sum of homogeneous symmetric polynomials ${\displaystyle P(X_{1},\ldots ,X_{n})=P_{\text{lacunary}}(X_{1},\ldots ,X_{n})+X_{1}\cdots X_{n}\cdot Q(X_{1},\ldots ,X_{n}).}$ Here the "lacunary part" Placunary is defined as the sum of all monomials in P which contain only a proper subset of the n variables X1, …, Xn, i.e., where at least one variable Xj is missing. Because P is symmetric, the lacunary part is determined by its terms containing only the variables X1, …, Xn − 1, i.e., which do not contain Xn. More precisely: If A and B are two homogeneous symmetric polynomials in X1, …, Xn having the same degree, and if the coefficient of A before each monomial which contains only the variables X1, …, Xn − 1 equals the corresponding coefficient of B, then A and B have equal lacunary parts.
(This is because every monomial which can appear in a lacunary part must lack at least one variable, and thus can be transformed by a permutation of the variables into a monomial which contains only the variables X1, …, Xn − 1.) But the terms of P which contain only the variables X1, …, Xn − 1 are precisely the terms that survive the operation of setting Xn to 0, so their sum equals P(X1, …, Xn − 1, 0), which is a symmetric polynomial in the variables X1, …, Xn − 1 that we shall denote by P̃(X1, …, Xn − 1). By the inductive assumption, this polynomial can be written as ${\displaystyle {\tilde {P}}(X_{1},\ldots ,X_{n-1})={\tilde {Q}}(\sigma _{1,n-1},\ldots ,\sigma _{n-1,n-1})}$ for some ${\tilde {Q}}\in A[Y_{1},\ldots ,Y_{n-1}]$. Here the doubly indexed σj,n − 1 denote the elementary symmetric polynomials in n − 1 variables. Consider now the polynomial ${\displaystyle R(X_{1},\ldots ,X_{n}):={\tilde {Q}}(\sigma _{1,n},\ldots ,\sigma _{n-1,n})\ .}$ Then R(X1, …, Xn) is a symmetric polynomial in X1, …, Xn, of the same degree as Placunary, which satisfies ${\displaystyle R(X_{1},\ldots ,X_{n-1},0)={\tilde {Q}}(\sigma _{1,n-1},\ldots ,\sigma _{n-1,n-1})=P(X_{1},\ldots ,X_{n-1},0)}$ (the first equality holds because setting Xn to 0 in σj,n gives σj,n − 1, for all j < n). In other words, the coefficient of R before each monomial which contains only the variables X1, …, Xn − 1 equals the corresponding coefficient of P. As we know, this shows that the lacunary part of R coincides with that of the original polynomial P. Therefore the difference P − R has no lacunary part, and is thus divisible by the product X1···Xn of all variables, which equals the elementary symmetric polynomial σn,n. Then writing P − R = σn,nQ, the quotient Q is a homogeneous symmetric polynomial of degree less than d (in fact degree at most d − n) which by the inductive assumption can be expressed as a polynomial in the elementary symmetric functions. Combining the representations for P − R and R one finds a polynomial representation for P.
The uniqueness of the representation can be proved inductively in a similar way. (It is equivalent to the fact that the n polynomials e1, …, en are algebraically independent over the ring A.) The fact that the polynomial representation is unique implies that A[X1, …, Xn]Sn is isomorphic to A[Y1, …, Yn]. ### An alternative proof The following proof is also inductive, but does not involve other polynomials than those symmetric in X1, …, Xn, and also leads to a fairly direct procedure to effectively write a symmetric polynomial as a polynomial in the elementary symmetric ones. Assume the symmetric polynomial to be homogeneous of degree d; different homogeneous components can be decomposed separately. Order the monomials in the variables Xi lexicographically, where the individual variables are ordered X1 > … > Xn, in other words the dominant term of a polynomial is one with the highest occurring power of X1, and among those the one with the highest power of X2, etc. Furthermore parametrize all products of elementary symmetric polynomials that have degree d (they are in fact homogeneous) as follows by partitions of d. Order the individual elementary symmetric polynomials ei(X1, …, Xn) in the product so that those with larger indices i come first, then build for each such factor a column of i boxes, and arrange those columns from left to right to form a Young diagram containing d boxes in all. The shape of this diagram is a partition of d, and each partition λ of d arises for exactly one product of elementary symmetric polynomials, which we shall denote by eλt (X1, …, Xn) (the t is present only because traditionally this product is associated to the transpose partition of λ). The essential ingredient of the proof is the following simple property, which uses multi-index notation for monomials in the variables Xi. Lemma. The leading term of eλt (X1, …, Xn) is X λ. Proof. 
The leading term of the product is the product of the leading terms of each factor (this is true whenever one uses a monomial order, like the lexicographic order used here), and the leading term of the factor ei(X1, …, Xn) is clearly X1X2···Xi. To count the occurrences of the individual variables in the resulting monomial, fill the column of the Young diagram corresponding to the factor concerned with the numbers 1, …, i of the variables, then all boxes in the first row contain 1, those in the second row 2, and so forth, which means the leading term is X λ. Now one proves by induction on the leading monomial in lexicographic order, that any nonzero homogeneous symmetric polynomial P of degree d can be written as polynomial in the elementary symmetric polynomials. Since P is symmetric, its leading monomial has weakly decreasing exponents, so it is some X λ with λ a partition of d. Let the coefficient of this term be c, then Pceλt (X1, …, Xn) is either zero or a symmetric polynomial with a strictly smaller leading monomial. Writing this difference inductively as a polynomial in the elementary symmetric polynomials, and adding back ceλt (X1, …, Xn) to it, one obtains the sought for polynomial expression for P. The fact that this expression is unique, or equivalently that all the products (monomials) eλt (X1, …, Xn) of elementary symmetric polynomials are linearly independent, is also easily proved. The lemma shows that all these products have different leading monomials, and this suffices: if a nontrivial linear combination of the eλt (X1, …, Xn) were zero, one focuses on the contribution in the linear combination with nonzero coefficient and with (as polynomial in the variables Xi) the largest leading monomial; the leading term of this contribution cannot be cancelled by any other contribution of the linear combination, which gives a contradiction.
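A single step of the greedy rewriting can be checked at a sample point: the power sum x² + y² + z² has leading monomial X₁², i.e. λ = (2) with λᵗ = (1, 1), whose associated product is e₁·e₁ = e₁², and subtracting it leaves −2e₂. A sketch (the helper name and sample point are mine):

```python
from itertools import combinations
from math import prod

def e(k, xs):
    """Elementary symmetric polynomial e_k evaluated at the point xs."""
    return sum(prod(c) for c in combinations(xs, k))

xs = [2, 3, 5]  # any sample point suffices to spot-check a polynomial identity

p2 = sum(x ** 2 for x in xs)  # x^2 + y^2 + z^2, leading monomial X1^2
# Greedy step: subtract e1^2 (leading term X1^2, coefficient 1); the
# remainder is -2*e2, giving p2 = e1^2 - 2*e2.
print(p2 == e(1, xs) ** 2 - 2 * e(2, xs))  # True
```

Evaluating at one point of course only falsifies, not proves, an identity; here it just illustrates the algorithm's bookkeeping.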
http://math.stackexchange.com/questions/140162/completeness-and-fourier-series-convergence
# Completeness and Fourier series convergence

Consider the question: In an inner product space $V$, when does the Fourier series of $x$, $\sum\limits_{n=1}^k\langle e_n,x\rangle e_n$, converge to $x$ as $k\to\infty$? Well, it certainly converges for all $x$ if $V$ is a Hilbert space. But what if $V$ is not a Hilbert space? Is the completeness property that a Hilbert space possesses necessary to ensure the Fourier series converges in $V$ for all $x\in V$? Might there be an incomplete inner product space $V$, e.g. maybe $c_{00}$, the sequences in $\mathbb{F}$ with finitely many non-zero entries, with the associated inner product $\langle(a_n),(b_n)\rangle = \sum\limits_{n=1}^\infty a_n \overline{b_n}$? Then we choose the obvious countable orthonormal basis. Isn't it true that for each $x$ the Fourier series converges? If this is the case, then what can we say about an inner product space in which the Fourier series of $x$ converges to any given $x$ in the space, assuming the space is not a Hilbert space, i.e. not complete? Is the point that these spaces are arbitrary and the completeness property will guarantee us Fourier convergence? Perhaps Hilbert spaces are, in turn, easier to deal with in general because they guarantee us nice properties. Also (a more basic question), can we always find an ONB of a well-defined inner product space? Edit: I just realized Gram-Schmidt does this for us in all Hilbert spaces. I guess the question still remains for incomplete inner product spaces. Apologies for my lack of LaTeX skills.... So yeah, lots of questions. Hopefully someone can tell me if I am on the right lines of thought. Thanks - What differentiates complete from incomplete inner product spaces is that if $V$ is complete, then for all sequences $(a_n)$ such that $\sum |a_n|^2 <\infty$, $\sum a_ne_n$ converges to an element of $V$.
If $V$ is not complete, then there exists a sequence $(a_n)$ such that $\sum |a_n|^2<\infty$ but $\sum a_n e_n$ does not converge (although the partial sums form a Cauchy sequence). –  Jonas Meyer May 3 '12 at 5:44 First of all, I think it's confusing to refer to an arbitrary series of the form $\sum_n \langle e_n, x \rangle e_n$ as a Fourier series. I'd reserve that name for the special case when $e_n$ are trigonometric functions. To answer your second question first, any separable inner product space, complete or not, has an orthonormal basis (i.e. an orthonormal set whose linear span is dense). Just pick up a countable dense subset and apply the Gram-Schmidt algorithm to it. (A non-separable inner product space can fail to have an orthonormal basis, as Willie Wong's comment below points out. I got this wrong in my original answer. Thanks, Willie, for the correction.) For your first question, yes, in any inner product space, complete or not, if $\{e_n\}$ is a (countable) orthonormal basis in the above sense (so that the space is necessarily separable), then $\sum_{n=1}^k \langle e_n, x \rangle e_n \to x$ in norm. Since the span of $\{e_n\}$ is dense, for any $\epsilon$ there exists an integer $k$ and $a_1, \dots, a_k$ such that $\lVert x - \sum_{n=1}^k a_n e_n\rVert^2 < \epsilon$. On the other hand, a simple computation will show that $$\lVert x - \sum_{n=1}^k a_n e_n\rVert^2 - \lVert x - \sum_{n=1}^k \langle x, e_n \rangle e_n\rVert^2 = \sum_{n=1}^k |a_n - \langle x, e_n\rangle|^2 \ge 0$$ hence we have $\lVert x - \sum_{n=1}^k \langle x, e_n \rangle e_n\rVert^2 < \epsilon$. A similar argument will also show that $\lVert x - \sum_{n=1}^k \langle x, e_n \rangle e_n\rVert^2$ is non-increasing with $k$, and so we get the desired convergence. - There's no guarantee that the maximal orthonormal set is countable, though, right? Or are you building that into your definition of inner product space (like how some people only allow separable Hilbert spaces)? 
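The displayed identity behind the best-approximation step can be sanity-checked numerically in $\mathbb{R}^3$, taking the first two standard basis vectors as the orthonormal set; a sketch (all names and sample values are mine):

```python
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Orthonormal set {e_1, e_2} in R^3 and an arbitrary vector x;
# a holds arbitrary trial coefficients, c the coefficients <x, e_n>.
e = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
x = (0.3, -1.2, 2.5)
a = [random.uniform(-2.0, 2.0) for _ in e]
c = [dot(x, en) for en in e]

def residual_norm2(coeffs):
    """||x - sum_n coeffs[n] * e_n||^2."""
    r = [xi - sum(coeffs[n] * e[n][i] for n in range(len(e)))
         for i, xi in enumerate(x)]
    return dot(r, r)

# ||x - sum a_n e_n||^2 - ||x - sum <x,e_n> e_n||^2 = sum |a_n - <x,e_n>|^2
lhs = residual_norm2(a) - residual_norm2(c)
rhs = sum((a[n] - c[n]) ** 2 for n in range(len(e)))
print(abs(lhs - rhs) < 1e-9)  # True
```

In particular the left-hand side is nonnegative, which is exactly why the coefficients $\langle x, e_n\rangle$ minimize the residual.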
–  Willie Wong May 3 '12 at 10:47 In fact, this Wikipedia entry seems to contradict your assertion that any inner product space, complete or not, has an orthonormal basis. In particular, a maximal orthonormal system need not be an orthonormal basis in the absence of completeness and separability (just one of the two is enough). –  Willie Wong May 3 '12 at 10:57 Ah ok, so is it true that: An IPS V has a countable dense subset (is separable) iff it has an ONB? I think so. [-->]: definition of density. [<--] Just take the linear span of the ONB, and this is a countable dense subset of V. Hence you can say something about non-separable IPS. But that is not of interest. –  Adam Rubinson May 3 '12 at 22:19 @AdamRubinson: A non-separable inner product space can certainly have an orthonormal basis, i.e. an orthonormal set whose span is dense. Such a set would necessarily be uncountable in this case. Every non-separable Hilbert space does in fact have an ONB, but as Willie points out a non-separable non-complete inner product space can fail to have one. However, it is true that an inner product space is separable iff it has a countable orthonormal basis. (If: take the $\mathbb{Q}$-span of the ONB. Only if: use Gram-Schmidt as I suggest in my answer.) –  Nate Eldredge May 3 '12 at 22:58 I meant to write countable ONB in place of ONB, but thanks for making things clearer anyhow. –  Adam Rubinson May 3 '12 at 23:22 I'll just use the old-fashioned notation. To begin with, a Fourier series expands a periodic function; a non-periodic function on all of $\mathbb{R}$ calls for the continuous Fourier transform instead. Your function $f(x)=x$, defined for all of $\mathbb{R}$, is not periodic. You can make a periodic function out of it, $g(x)$, by assuming a period, say $2L$. Then you get a piecewise-smooth function that can be expanded as a Fourier series $$g(x)=-\frac{2L}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^n}{n} \sin \frac{n \pi x}{L}, \quad -L < x < L,$$ which converges, but not absolutely.
The Fourier series will converge pointwise everywhere for a continuous function. It will even converge for a discontinuous function with an at most countable number of discontinuities, everywhere except at the points of discontinuity, at which it converges to the average of the left and right limits of the function. About your next question: you can think of sequences as maps from $\mathbb{N}$ to some space, say $\mathbb{S}$. A Cauchy sequence might converge to a point in $\mathbb{S}$ or to some point not in $\mathbb{S}$. If $\mathbb{S}$ is complete, that guarantees that all Cauchy sequences have a limit $l \in \mathbb{S}$. If your space is incomplete, you don't have that. - "Fourier series is about expanding a periodic function." What do you mean by this exactly? –  Adam Rubinson Jun 30 '14 at 0:02 I mean that your function has to be periodic to Fourier expand it. If the support of the function is bounded you can always consider a periodic repetition of the function outside the support and Fourier expand that. However, if the support is infinite and the function is not periodic, you need to do a continuous Fourier transform. –  Georgy Jul 7 '14 at 7:51
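Georgy's sine series can be checked numerically at an interior point; a sketch with $L = \pi$, where the partial sums approach $g(x) = x$ at a rate of roughly $1/N$ (the function name is mine):

```python
import math

def sawtooth_partial_sum(x, L, N):
    """Partial sum of the Fourier sine series for g(x) = x on (-L, L)."""
    return (-2.0 * L / math.pi) * sum(
        ((-1) ** n / n) * math.sin(n * math.pi * x / L) for n in range(1, N + 1)
    )

# At an interior point the partial sums converge to x, though slowly
# (the coefficients decay only like 1/n, and the convergence is not absolute).
x, L = 1.0, math.pi
approx = sawtooth_partial_sum(x, L, 50000)
print(abs(approx - x) < 1e-3)  # True
```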
https://www.physicsforums.com/threads/example-proof-using-complex-numbers.523248/
# Homework Help: Example Proof using Complex Numbers

1. Aug 21, 2011

### Chantry

1. The problem statement, all variables and given/known data

http://www-thphys.physics.ox.ac.uk/people/JamesBinney/complex.pdf Example 1.2 (Page 6)

2. Relevant equations

De Moivre's Theorem, Euler's Formula, and other simple complex number theory formulas

3. The attempt at a solution

I'm having trouble understanding the format, which makes me think the author is assuming prior knowledge in another area of math. What I don't understand is where he gets the ℑm notation from. I don't know what that symbol is, so I couldn't google it. I get all of the simplifying, except for when the conversion happens to and from the ℑm expression. It looks like he's simply converting the sin((2n + 1)θ) to the complex exponential function, but how can you do that without i? I know sin(n) = (1/(2i))(e^(in) - e^(-in)), but that's not even close to the result they got. If that's the case, then my question is, how is this transformation happening? Again, I understand the simplifying of the series, just not the transformation to and from the complex exponential. Hopefully I explained that well enough. Any help would be appreciated.

Last edited: Aug 21, 2011

2. Aug 21, 2011

### rock.freak667

That symbol just means the imaginary part. For example, for Euler's formula e^(iθ) = cos θ + i sin θ, the imaginary part of e^(iθ), written as Im(e^(iθ)), is sin θ. So the imaginary part of e^(i(2n+1)θ), written as Im(e^(i(2n+1)θ)), is sin((2n+1)θ).

3. Aug 21, 2011

### Chantry

Thanks for the help :). I now understand where they get the sin(x) + r sin(x) in the numerator, and where the 1 + r^2 comes from in the denominator. However, how do they get the 2r cos(2x) in the denominator? EDIT: Never mind, I figured it out. I forgot about cos(x) = (1/2)(e^(ix) + e^(-ix)). Thanks again.

Last edited: Aug 21, 2011
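The identity in post #2 is easy to verify numerically with Python's cmath; a small sketch (the sample values are mine):

```python
import cmath
import math

theta, n = 0.7, 3  # arbitrary sample values
z = cmath.exp(1j * (2 * n + 1) * theta)

# Im(e^{i(2n+1)θ}) equals sin((2n+1)θ), as stated in post #2.
print(abs(z.imag - math.sin((2 * n + 1) * theta)) < 1e-12)  # True
# And the real part is cos((2n+1)θ), the other half of Euler's formula.
print(abs(z.real - math.cos((2 * n + 1) * theta)) < 1e-12)  # True
```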
https://groupprops.subwiki.org/w/index.php?title=Conjugacy-separable_with_only_finitely_many_prime_divisors_of_orders_of_elements_implies_every_extensible_automorphism_is_class-preserving&amp;action=history
# Conjugacy-separable with only finitely many prime divisors of orders of elements implies every extensible automorphism is class-preserving

## Statement

Suppose $G$ is a Conjugacy-separable group (?): in other words, given any two elements $x,y$ of $G$ that are not conjugate, there exists a normal subgroup of finite index $N$ in $G$ such that the images of $x,y$ in $G/N$ are not conjugate in $G/N$. Suppose, further, that the set of primes $p$ that divide the order of some non-identity element of $G$ is finite. Then, if $\sigma$ is an Extensible automorphism (?) of $G$, $\sigma$ is a Class-preserving automorphism (?) of $G$.

## Proof

Given: A conjugacy-separable group $G$ with only finitely many primes $p$ dividing the orders of elements of $G$. An extensible automorphism $\sigma$ of $G$. Two elements $x,y$ of $G$ that are not conjugate.

To prove: $\sigma$ cannot send $x$ to $y$.

Proof:

1. There exists a normal subgroup $N$ of finite index in $G$ such that the images of $x$ and $y$ are not conjugate in $G/N$: This follows from the definition of conjugacy-separable.

2. Let $\overline{G} = G/N$. Then, there exists a prime $p$ such that the field of $p$ elements is sufficiently large for $\overline{G}$, and such that $p$ does not divide the order of any element of $G$: By fact (2), there are infinitely many sufficiently large prime fields for $\overline{G}$, i.e., there are infinitely many primes $p$ for which the corresponding prime field is sufficiently large for $\overline{G}$. Since there are only finitely many prime divisors of orders of elements, we can find a prime $p$ not among any of these divisors such that the corresponding prime field is sufficiently large.

3. The field of $p$ elements is a class-separating field for $\overline{G}$. In particular, there is a finite-dimensional linear representation $\rho_1:\overline{G} \to GL(V)$ of $\overline{G}$ over this field such that $\rho_1(\overline{x})$ and $\rho_1(\overline{y})$ are not conjugate: This follows from fact (3).
4. $\sigma$ is linearly pushforwardable over the prime field with $p$ elements, for the $p$ chosen above. In particular, if $\sigma(x) = y$, then $\rho(x)$ and $\rho(y)$ are conjugate for any representation $\rho$ over this field: Let $\rho$ be a representation of $G$ over this field. Let $V$ be the corresponding vector space and $H = V \rtimes G$ the semidirect product for the action. Since $p$ does not divide the order of any element of $G$, $V$ is the set of elements of $H$ of order dividing $p$. In particular, $V$ is characteristic in $H$, and thus, if $\sigma$ extends to an automorphism $\sigma'$ of $H$, then $\sigma'$ also restricts to an automorphism $\alpha$ of $V$. Fact (1) thus yields that $\rho \circ \sigma = c_\alpha \circ \rho$, so $\sigma$ is linearly pushforwardable over the field of $p$ elements. In particular, if $\sigma(x) = y$, then $\rho(\sigma(x)) = c_\alpha(\rho(x))$, so $\rho(y)$ is conjugate to $\rho(x)$ by $\alpha$.

5. Let $\rho_1$ be the linear representation chosen in step (3), and let $\rho = \rho_1 \circ \pi$, where $\pi:G \to G/N = \overline{G}$ is the quotient map. Then, $\rho(x)$ and $\rho(y)$ are not conjugate in the general linear group $GL(V)$. However, by step (4), we have that $\sigma$ is linearly pushforwardable, so if $\sigma(x) = y$, then $\rho(x)$ and $\rho(y)$ are conjugate. This gives a contradiction, so we cannot have $\sigma(x) = y$, and we are done.
http://mathoverflow.net/questions/7732/diameter-of-m-fold-cover/8357
# Diameter of m-fold cover

Let $M$ be a closed Riemannian manifold. Assume $\tilde M$ is a connected Riemannian $m$-fold cover of $M$. Is it true that $$\mathop{diam}\tilde M\le m\cdot \mathop{diam} M\ ?\ \ \ \ \ \ \ (*)$$ • This is a modification of a problem of A. Nabutovsky. Here is a related question about universal covers. • You can reformulate it for compact length metric spaces --- no difference. • The answer is YES if the cover is regular (but that is not as easy as one might think). • The estimate $\mathop{diam}\tilde M\le 2{\cdot}(m-1){\cdot} \mathop{diam} M$ for $m>1$ is trivial. • We have equality in $(*)$ for covers of $S^1$ and for some covers of figure-eight. - In your inequality, the manifold on the right hand side should be M, not $\tilde{M}$, I think. – Jason DeVito Dec 4 '09 at 2:41 What do you mean by "for many covers of figure-eight"? It seems clear that it's true for all finite covers of the wedge of two circles. (The number of edges in the maximal tree is m-1.) – HJRW Dec 4 '09 at 2:53 An equilateral triangle with a loop at every vertex is a 3-fold cover. Right? – Andrey Gogolev Dec 4 '09 at 3:45 Oh, you're talking about when you get equality. Sorry, I misread the previous version of the question. – HJRW Dec 4 '09 at 4:05 To clarify, a regular cover is one you get from a free (group action), not a (free group) action. Also known as a Galois cover. See en.wikipedia.org/wiki/… – David Speyer Dec 11 '09 at 17:46 I think I can prove that $diam(\tilde M)\le m\cdot diam(M)$ for any covering. Let $\tilde p,\tilde q\in\tilde M$ and $\tilde\gamma$ be a shortest path from $\tilde p$ to $\tilde q$. Denote by $p,q,\gamma$ their projections to $M$. I want to prove that $L(\gamma)\le m\cdot diam(M)$. Suppose the contrary. Split $\gamma$ into $m$ arcs $a_1,\dots,a_m$ of equal length: $\gamma=a_1a_2\dots a_m$, $L(a_i)=L(\gamma)/m>diam(M)$. Let $b_i$ be a shortest path in $M$ connecting the endpoints of $a_i$. Note that $L(b_i)\le diam(M)< L(a_i)$.
I want to replace some of the components $a_i$ of the path $\gamma$ by their "shortcuts" $b_i$ so that the lift of the resulting path starting at $\tilde p$ still ends at $\tilde q$. This will show that $\tilde\gamma$ is not a shortest path from $\tilde p$ to $\tilde q$, a contradiction.

To switch from $a_i$ to $b_i$, you left-multiply $\gamma$ by the loop $l_i:=a_1a_2\dots a_{i-1}b_i(a_1a_2\dots a_i)^{-1}$. More precisely, if you replace the arcs $a_{i_1},a_{i_2},\dots,a_{i_k}$, where $i_1< i_2<\dots< i_k$, by their shortcuts, the resulting path is homotopic to the product $l_{i_1}l_{i_2}\dots l_{i_k}\gamma$. So it suffices to find a product $l_{i_1}l_{i_2}\dots l_{i_k}$ whose lift starting from $\tilde p$ closes up in $\tilde M$.

Let $H$ denote the subgroup of $\pi_1(M,p)$ consisting of loops whose lifts starting at $\tilde p$ close up. The index of this subgroup is $m$, since its right cosets are in 1-to-1 correspondence with the pre-images of $p$. While left cosets may differ from right cosets, the number of left cosets is the same $m$. Now consider the following $m+1$ elements of $\pi_1(M,p)$: $s_0=e$, $s_1=l_1$, $s_2=l_1l_2$, $s_3=l_1l_2l_3$, ..., $s_m=l_1l_2\dots l_m$. By pigeonhole, two of them, say $s_i$ and $s_j$ where $i< j$, lie in the same left coset. Then $s_i^{-1}s_j=l_{i+1}l_{i+2}\dots l_j\in H$ and we are done.

Comments:

- I must be missing something --- why do you say that $L(b_i)< L(a_i)$ in the second paragraph? Why is that a strict inequality? – Alon Amit Mar 3 '10 at 0:36
- @Alon: otherwise the distance is $\le m\cdot \mathrm{diam}$. – Anton Petrunin Mar 3 '10 at 1:01
- @Alon: I've edited the second paragraph to make it more clear. – Sergei Ivanov Mar 3 '10 at 9:56
- Nice! – Włodzimierz Holsztyński Dec 19 '14 at 5:28

---

**Answer (Alon Amit).** Here's a proposed sketch of an approach. I hope it actually works... [EDIT: it doesn't, as it stands. I guess the main take-away from the rough outline below is that whatever the answer is for graphs should carry over to manifolds.]
First, we can prove an appropriate analog in the category of graphs. Let $G$ be a base graph and $\tilde{G}$ a connected $m$-cover of $G$ in the combinatorial sense (the mapping takes vertices to vertices and edges to edges, and preserves local neighborhoods). It's useful to visualize $\tilde{G}$ as a set of discrete fibers over the vertices of $G$, the vertices of which can be arbitrarily numbered $\{1,\ldots, m\}$. The edge-fibers then correspond to permutations in $S_m$. Also notice that we may relabel the vertex fibers in order to make certain edge fibers "flat", meaning the corresponding permutation is the identity. This can be done simultaneously for any set of edges of $G$ containing no cycle, such as a path (or a tree).

Given two vertices $\tilde{x}, \tilde{y}$ in $\tilde{G}$, there's a path $P$ of length at most $d$ between their projections $x,y$ in $G$. We may assume that the permutations over the edges in $P$ are trivial. A path from $\tilde{x}$ to $\tilde{y}$ can now be formed by navigating across the floors (at most $d$ steps in each trip [EDIT: could be worse, since as you move to a new floor you're not guaranteed to land on the path]) and among the floors (at most $m$ steps overall), yielding $md+m$ steps in total. Sorry this is so vague, but it's really quite simple if you draw a picture.

Now $m(d+1)$ is a bit too large (we want $md$), but this can't be helped in the category of graphs: for example, the hexagon (diameter 3) is a 2-cover of the triangle (diameter 1). But this is just because the triangle misrepresents the true diameter of the underlying geometry, which is really $3/2$. To resolve this nuisance, apply the procedure above to a fine subdivision of $G$ (and $\tilde{G}$), which makes $d \to \infty$, and the ratio is brought back to the desired $m$.

Next, consider simplicial complexes of higher dimension.
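As an aside, the graph-level picture in the preceding paragraphs is easy to check by computer. The sketch below (a hypothetical BFS-based diameter routine, not from the thread) verifies the hexagon-over-triangle example and the claim that one subdivision brings the ratio back down to $m=2$:

```python
from collections import deque

def diameter(adj):
    """Diameter of a finite connected graph given as an adjacency dict,
    computed by breadth-first search from every vertex."""
    best = 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        best = max(best, max(dist.values()))
    return best

# The hexagon C6 is a 2-cover of the triangle C3 (project i -> i mod 3):
triangle = {i: [(i - 1) % 3, (i + 1) % 3] for i in range(3)}
hexagon  = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(diameter(triangle), diameter(hexagon))    # 1 and 3: ratio 3 > m = 2

# Subdividing every edge once (C3 -> C6, C6 -> C12) restores the ratio m = 2,
# matching the "true diameter 3/2" remark above:
sub_base  = {i: [(i - 1) % 6,  (i + 1) % 6]  for i in range(6)}
sub_cover = {i: [(i - 1) % 12, (i + 1) % 12] for i in range(12)}
print(diameter(sub_base), diameter(sub_cover))  # 3 and 6
```

Finer subdivisions push the ratio for this pair arbitrarily close to $m$ from below, consistent with the answer's argument.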
It seems to me that if $X$ is a sufficiently nice topological space triangulated by a simplicial complex $K$, then the diameter of $X$ can be well approximated by the diameter of the 1-skeleton of a sufficiently fine subdivision of $K$. Is this true? Given two points in $X$ and a long path between them, if the path is close to a PL one then this should be the case. I hope that if $X$ is not too pathological, its diameter is represented by a tame path.

Finally, I would hope that a general Riemannian manifold (or some other kind of space for which we need to prove this) can be effectively triangulated, although this extends beyond my off-the-top-of-my-head knowledge.

Can something like this work?

Comments:

- I guess that "floor" means your tree? Then it might be up to $2d$ in each floor --- not $d$... – Anton Petrunin Dec 5 '09 at 21:05
- That's right, and I modified my post to reflect that. However, I guess the point is that whatever the correct ratio is for graphs (up to subdivision, as outlined) should be the correct ratio for manifolds, right? If we can actually find a graph of diameter $d$ (say, a bipartite one, to avoid the odd-cycle diameter issue) and a connected $m$-cover with diameter $\alpha m d$ for $\alpha>1$, does this not also mean that the manifold case cannot be better? – Alon Amit Dec 5 '09 at 22:21
- That is right --- it suffices to do it for graphs. In fact any metric space (in particular a Riemannian manifold) can be approximated by a graph, say in the Gromov--Hausdorff sense. – Anton Petrunin Dec 5 '09 at 22:42
- It seems that on this way the best you can get is $2(m-1)d$... – Anton Petrunin Dec 6 '09 at 3:11

---

**Answer.** I can show that $diam(\widetilde{M})\leq (m+1)\,diam(M)$. It follows from the fact that the fundamental group of $M$ is generated by "short" loops of length at most $2\,diam(M)$ (this is proved in Gromov's book "Metric structures ...").
Let's show that if $p$ and $q$ in $\widetilde{M}$ have the same projection $x$ in $M$, then $dist(p,q)\leq [m/2]\cdot diam(M)$. Consider the following graph: its vertices are the $m$ preimages of $x$, and its edges correspond to short loops in $M$. This graph is 2-connected, therefore its diameter is at most $[m/2]$.

To improve the estimate it is sufficient to show that for every two points $x,y\in M$, the short loops based at $x$ and passing through $y$ generate the fundamental group.

Comments:

- Well, it works only for regular coverings. Look at the coverings of the figure-eight: its graph can be a tree... BTW, for regular coverings you can do $m\cdot\mathop{diam}M$. – Anton Petrunin Dec 9 '09 at 16:54
- You forgot to multiply by 2 in $dist(p,q)\leq [m/2]\cdot diam(M)$. – Anton Petrunin Dec 9 '09 at 20:11
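The pigeonhole step in Sergei Ivanov's argument above (among the $m+1$ prefix products $s_0,\dots,s_m$, two lie in the same left coset of the index-$m$ subgroup $H$) can be sanity-checked numerically. The sketch below is an illustrative model, not from the thread: it encodes a hypothetical $m$-sheeted cover by its monodromy, representing each loop $l_i$ as a permutation of the sheets, and takes $H$ to be the stabilizer of sheet $0$ (loops whose lift based at sheet $0$ closes up).

```python
import random

# A hypothetical m-sheeted cover, modelled by its monodromy: each loop l_i
# acts on the sheets {0, ..., m-1} by a permutation (a tuple used as a map).
m = 5
random.seed(1)
loops = [tuple(random.sample(range(m), m)) for _ in range(m)]

def compose(p, q):
    """Permutation product p after q: (p*q)(x) = p(q(x))."""
    return tuple(p[q[x]] for x in range(m))

# Prefix products s_0 = e, s_1 = l_1, s_2 = l_1 l_2, ..., s_m = l_1 ... l_m.
prefixes = [tuple(range(m))]
for l in loops:
    prefixes.append(compose(prefixes[-1], l))

# In this model H is the stabilizer of sheet 0 (index m). The m+1 prefixes
# send sheet 0 to only m possible sheets, so two of them must agree:
images = [s[0] for s in prefixes]
i, j = next((i, j) for i in range(m + 1)
            for j in range(i + 1, m + 1) if images[i] == images[j])

# Then s_i^{-1} s_j = l_{i+1} ... l_j fixes sheet 0, i.e. lies in H.
inv = [0] * m
for x in range(m):
    inv[prefixes[i][x]] = x
product = compose(tuple(inv), prefixes[j])
assert product[0] == 0  # the lift of l_{i+1} ... l_j closes up
print(i, j, product)
```

The pigeonhole conclusion holds for any choice of the permutations (the random ones here are just a stand-in for the actual shortcut loops), which mirrors why the proof needs no assumption of regularity on the cover.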