http://www.eg.bucknell.edu/~phys211/fa2017/questions/hw.html
Sun, Dec 10, 6:41 p.m. - For question 8 in Chapter 9 in Wolfson, I said that because momentum has to be conserved, there is always at least a very small velocity maintaining the momentum in the after picture, which preserves part of the kinetic energy. So, my answer was NO. Is that correct?

No, that isn't correct. It is possible to have an inelastic collision where all the kinetic energy is lost. A 1-kg ball of ground beef is moving to the right with a speed of 2 m/s. Another 1-kg ball of ground beef is moving to the left with a speed of 2 m/s. They collide and stick together. If you conserve momentum you will find that total momentum is zero in the before picture, so the ground beef must be motionless in the after picture since it is now a 2-kg, stuck-together mass of meat. So, all of the initial kinetic energy is gone.

Mon, Dec 4, 2:18 a.m. - Ch. 11 Wolfson #58. Potential energy of the spring equals the kinetic energy of the mass... and the rotational energy of the turntable or no? How does the turntable have any rotational energy? Let's think about this. There's no friction. The mass can't exert a force on the table, then. How, then, is the table moving? Isn't the mass basically not in contact with the table? Please explain!

No, it is not correct that the potential energy of the spring equals the kinetic energy of the mass. Please, please, please, please ... don't ever start a problem with "potential energy of (something) equals kinetic energy of (whatever)" -- I can promise you that we will ALWAYS deduct at least 2 or 3 points if you start a problem like that. The starting principle is that since $W_{NC} = 0$, mechanical energy is conserved, i.e., $E_{before} = E_{after}$. It is true that $E_{before} = K_b+U_b = U_{spring}$ since everything is motionless in the before picture (which I know that you have drawn because it is the first step in the step-by-step approach and I trust that you now know never to leave out any steps). $E_{after} = K_a+U_a = K_{trans} + K_{rot} + 0$, so, yes, the turntable does have rotational energy. By the way, don't forget conservation of angular momentum as well -- that plays an important role. Answering your question -- it is correct that the mass is not in contact with the table. It is flying off in one direction, and the table is then rotating. Total angular momentum is clearly zero in the before picture that you have sketched, so with the mass flying off and clearly carrying angular momentum (calculate it using $\vec{L} = \vec{r} \times \vec{p}$), the turntable must be carrying the opposite angular momentum (it must be spinning) to keep the total at zero. By the way, conceptually, when the spring is pushing the mass, it is also pushing back on the table (by Newton's 3rd Law), so the table gets a torque while the spring is pushing that causes it to rotate.

Sun, Dec 3, 4:23 p.m. - For Question A44c, why in the world does the yo-yo move towards the string, let alone accelerate so quickly? My brain is literally on the verge of melting; this is truly the first time in this course where I cannot fathom a reason why something is happening. But I'm sure it will be a simple answer.

Yeah, it's a great puzzle, isn't it? I love this particular question, because almost no one ever expects it to roll the way that it does. The answer, of course, can be found in the standard step-by-step approach that we have discussed.
Draw a free-body diagram for the yo-yo, and write down Newton's Second Law in component form (you only need the component in the direction of acceleration). Draw a torque diagram and apply $\tau_{net} = I\alpha$ (warning: you'll need two different radii, one -- let's call it $r_1$ -- for the small radius of the spool around which the string is wrapped, and a second -- let's call it $r_2$ -- for the outer radius of the yo-yo). And then $a = r_1\alpha$. Using those three equations and solving for $a$, you'll find that it will accelerate in the direction of the tension in any case where $r_2 > r_1$. By the way, the result is quite interesting, because if you use the center of the yo-yo as the origin for torque calculations, the torque from the tension causes the yo-yo to twist one way, and the torque from the friction causes the yo-yo to twist the other way. The friction torque wins this battle, resulting in the yo-yo rotating the way you found; however, the tension force has to be stronger than the friction force, resulting in the yo-yo accelerating in the direction of the tension. These statements are consistent because the radius for the friction force is larger than the radius for the tension force, so the friction torque can be (and is) larger than the tension torque, even though the friction force is smaller than the tension force. By the way, there is a quicker way to argue that the yo-yo has to accelerate in the direction that you found: draw the torque diagram, but use the contact point with the table as the observation point when determining the net torque. When you do that, there will only be one force that causes a torque, and it should be easy to figure out the direction of that torque. And then consider that your observation point is the contact point, and then you'll be able to see which way the yo-yo will rotate. This is a quick approach, but a bit subtle.

Sun, Nov 5, 11:41 a.m. - For SUPP CH 9 #16b, I thought that delta-S total should be conserved. Why, then, is it 0.6 J/K if no energy is being added to the system as a whole?

Ugh. Here again is the fallacy that "conservation" has anything to do with entropy. It doesn't. Something is conserved if its value doesn't change. That's not the case with entropy. Entropy can and often does increase during a process. In 9-16, you are taking a high temperature object and putting it in contact with a low temperature object, in which case heat will flow from the hot to the cold object, resulting in a decrease in the entropy of the hot object, an even larger increase in the entropy of the cold object, and an overall increase in entropy of the entire system. An increase in entropy means that the system is going to a more likely state. THAT'S THE WHOLE POINT!!! Heat flows from hot to cold because it is overwhelmingly, mind-bogglingly more likely that it will do so, rather than having no energy flow or (even less likely) a flow from cold to hot.

Sun, Nov 5, 11:34 a.m. - For SUPP CH 9 #14, the relationship is given to us in 9.10. Where might we want to look to begin deriving that relationship? Does it have something to do with how thermal energy is conserved, perhaps?

Please don't answer 9-14 by saying, "$S_{AB} = S_A + S_B$ because equation 9.10 says so." You can prove this by recognizing that the total multiplicity is $\Omega_{AB} = \Omega_A \Omega_B$, using Boltzmann's relation for entropy $S = k_B\ln\Omega$, and using the mathematics of natural logs. Specifically, remember that $\ln(ab) = \ln(a)+\ln(b)$.
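Since 9-14 asks for a derivation rather than numbers, the following is only a numerical sanity check, with illustrative solid sizes that are not from the problem: it computes Einstein-solid multiplicities with $\Omega = \binom{q+N-1}{q}$ (the same formula quoted below for 9-3c) and confirms that entropies add because logs turn products into sums.

```python
import math

def omega(q, N):
    """Multiplicity of an Einstein solid with N oscillators and q energy units:
    Omega = (q+N-1)! / (q! (N-1)!) = C(q+N-1, q)."""
    return math.comb(q + N - 1, q)

# Illustrative sizes (not from the problem): solid A and solid B.
qA, NA = 10, 4
qB, NB = 7, 3

S_A = math.log(omega(qA, NA))                     # entropy in units of k_B
S_B = math.log(omega(qB, NB))
S_AB = math.log(omega(qA, NA) * omega(qB, NB))    # Omega_AB = Omega_A * Omega_B

print(S_A + S_B, S_AB)   # equal up to floating-point rounding: ln(ab) = ln(a) + ln(b)
```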
Sun, Nov 5, 11:27 a.m. - For SUPP CH 9 #7c, should the explanation be something along the lines of the fact that 7b has more energy units than 7a, and an incremental unit of energy added to 7b yields a lower increase in entropy than an incremental unit of energy added to 7a? Do we have to explain anything on a molecular level? Even if we don't, though, how could one explain this phenomenon on a molecular and probabilistic level (rather than just a computational one)?

Yes to your first question. As for your second question, any explanation should involve probabilities/multiplicities, and that is what you are doing in parts a and b. So, no -- once you have completely explained something, you do not have to come up with another explanation. As for your third question: you are explaining this on a probabilistic level. That's what the computations in (a) and (b) are doing. Remember: entropy is a measure of probability (actually, likelihood, and the natural log of that).

Sun, Nov 5, 11:13 a.m. - For SUPP CH 9 #5, would the approach to answering this question be that the temperature of both can't go up (one has to go up and one has to go down per dS/dE = 1/T)? How do we prove entropy is conserved, though?

Answer to your first question: no. Answer to your second question: why would you want to prove the entropy is conserved? There is no "conservation of entropy" principle, and if you ever try to prove that there is one, you will be wasting your time. In class last week, we said that for dice, there is only one microstate that gives a total of 2 for a roll of two dice, but there are 6 microstates for a total of 7. So, rolling a 7 is more likely than rolling a 2, which means that "7" has a higher entropy. Does that mean that you will never roll a 2? Why is it that you can end up with smaller entropy states when rolling 2 dice, but you never end up with lower entropy states in thermodynamic systems? Hint number 1: if you were working with an Einstein solid that had only two particles in it, you would often see the system entropy decreasing. Why don't you ever see an entropy decrease for a solid with, say, a mole of atoms? Hint number 2: you might want to use the words "really", "unbelievably" and "mind-bogglingly" in your answer when discussing probabilities/likelihoods for systems with large numbers of molecules in the Einstein solid.

Sun, Nov 5, 10:56 a.m. - For SUPP CH 9 #3c, why can we not simply use the multiplicity formula (subbing q=6 and N=3)? What may be a better approach to solving this problem instead (to get 10/216 instead of the answer that the multiplicity formula gives with the approach I outlined above: 28)?

The multiplicity formula that we use, $\Omega = \frac{(q+N-1)!}{q!(N-1)!}$, is only valid for the Einstein solid. Please make sure that you have that on your 3x5 card and that you only use this formula if the problem specifically states that you are dealing with an Einstein solid. For this problem, by far the easiest way to do it is to simply write down every combination of the three dice that will give you a sum of 6. I'll give you the first one: 1 on the first die, 1 on the second and 4 on the third. Keep going.

Sun, Nov 5, 10:55 a.m. - For SUPP CH 7 #15e, why do you have to use force = (NkT)/L instead of density * volume * average force per molecule (derived in part 15d)?

You aren't using NkT/L -- I'm not sure where that came from. I think that you are overthinking part (e).
Once you have figured out in part (d) how much force a single molecule is exerting on the wall, simply multiply by the number of molecules in the box that are hitting the wall. Note the second assumption at the top of the problem that says to pretend that a third of the particles are moving in each direction. And the good news is that you will know if you have done this correctly because when you take the force determined in part (e) and divide by the area of the wall (in part f), you should get a pressure of 1 atm (i.e., 101 kPa).

Sun, Nov 5, 10:53 a.m. - For SUPP CH 7 #12, is the answer that you are adding thermal energy by dragging the molecule? What I thought is that if you add enough thermal energy to a few atoms/molecules, the thermal energy is dissipated across all neighboring molecules.

This is something that you should be noticing when you actually do the simulation. (By the way, if your response to this question doesn't explicitly discuss what you did and what you saw in the simulation, then don't expect to get full credit -- you actually have to do the simulation.) Yes, the energy that you put into dragging a single molecule gets transferred to other molecules (not just the neighboring ones). Can you explain why? And why does that result in melting of the solid? For that, refer back to 7-1(c).

Sun, Nov 5, 10:52 a.m. - For SUPP CH 7 #1c, is the answer that if the thermal energy is high enough that the molecules are too far from the minimum of their potential well, the pairs of molecules are stretched far enough apart (at least d/10, per the Lindemann criterion) that the interaction of the molecules becomes weaker (the spring model goes away)?

It isn't that the interaction between the molecules goes away -- that's what happens when the stuff turns into a gas. It's only in the gas state that the molecules are far enough apart that the interaction is negligible. See Fig. 6.1 for more about that. A solid doesn't melt when the interactions become negligible. Rather, a solid melts when the atoms in the solid can move around (due to thermal jiggling) so that they can start exchanging places, rather than basically jiggling in place. That's the key difference between a solid and a liquid -- in both of them, the atoms are still bound together, but in a liquid, the atoms are able to move around. And that is why comparing $x_{therm}$ to the equilibrium spacing $d$ for a solid is the key. I'll let you figure out the rest of the argument.

Sat, Nov 4, 4:57 p.m. - Silly question, but seriously not sure about this: are we allowed to use blank paper instead of lined paper for the hand-in set?

I don't see any harm in that. Go for it.

Sat, Nov 4, 3:59 p.m. - For Supp CH 7 #12, where can we find the molecular dynamics applet?

It can be found on the calendar page for Lecture 16 or 17. You can also get to it by clicking here: http://physics.weber.edu/schroeder/md/InteractiveMD.html

Sun, Oct 29, 2:36 p.m. - I noticed that there are no answers to CH8 in the back of the supplementary reading. Would you please post them on the website?

Thanks for catching this. We have posted the answers for Ch 8 on the calendar page for the October 31 lecture.

Sun, Oct 8, 4:32 p.m. - How do you do A38 and A39? I can't figure out a method even after reviewing the lecture example and the strategy. The video examples also don't work on my computer. Thank you in advance.
The method discussed in lecture and in the "lecture materials" for this past Thursday's lecture works perfectly for both of these problems, but you can't skip any steps. For each problem: 1. Draw "before" and "after" sketches. For A38, the Before sketch should show the motionless particle and the photon that is about to strike it. The After sketch should show just a single particle that is moving. For A39, the Before sketch should show a moving particle and a stationary particle, and the After sketch should show a motionless particle and a photon. 2. Write down p and E for each particle in your sketches. Some of this information is given in the statement of the problems. For some of it, you will need to use $E^2 = p^2c^2+m^2c^4$. Count how many unknowns you have. If more than two, then use $E^2 = p^2c^2+m^2c^4$ to write some of the unknowns in terms of other unknowns. 3. Once down to 2 unknowns, use $\Sigma p_{x,before} = \Sigma p_{x,after}$ and $\Sigma E_{before} = \Sigma E_{after}$. 4. You now have two equations and two unknowns. Solve.

Sun, Sep 24, 2:10 p.m. - To add on, the problem for A74 is that if you try to draw the FBD of the ball, there is mg acting down and no normal force. I guess you'll have to assume that the ball is rolled on the ice instead...

Everything for the hand-in set due tomorrow deals with conservation of momentum. You don't need free-body diagrams. Before and after sketches, momentum before -- in components if necessary -- momentum after, set before equal to after, solve. If elastic, also use speed of approach equals speed of recession. (By the way, the fact that there is acceleration in the y-direction doesn't have any effect on its x-motion. And besides, the problem specifically says that the ball is moving horizontally when the child catches it. So, you don't need to worry about what has been happening with its vertical motion before she catches it.)

Sun, Sep 24, 2:06 p.m. - For A74 on Hand-In Set #5, the problem states that the girl "catches a 1.1 kg ball moving horizontally at 9.5 m/s." Can we assume that from the time the ball is thrown horizontally to the time the girl catches the ball, the ball gains no vertical velocity/momentum?

You don't have to worry about vertical momentum. Horizontal is all you need.

Mon, Sep 18, 3:28 p.m. - For number A22, Extreme Skiing, how do you begin to solve part f? Is it asking about the maximum magnitude of the normal force at point B?

(1) Draw a sketch of the skier at point B. (Optional.) (2) Draw a free-body diagram for her at point B. (3) Write down $\vec{F_{net}} = m\vec{a}$. (4) Choose coordinate axes. (5) Write Newton's law in component form. You'll need only one direction in this case (the direction of the acceleration, which is toward the center of the circle). (6) Solve for the normal force. Now, you will need the speed of the skier at point B to finish this off. To get her speed at point B, use the standard approach for mechanical energy problems: (1) Draw before and after sketches (with before when she is starting up the hill and after at point B). (2) Write down an expression for $E_{before}$. (3) Write down an expression for $E_{after}$. (4) $W_{nc} = 0$, so $E_{before} = E_{after}$. Solve for the speed (which should be in $E_{after}$), pop it into your expression for $N$, solve, and pat yourself on the back.

Sun, Sep 17, 10:24 p.m. - Can you explain the clock problem from one of the homework problems? I am confused about how to get the acceleration.
On one hand, you have displacement over time, which is the average velocity. Then, you also have 2*pi*R/time, which gives the magnitude of the velocity at any point, which is constant. Please explain in full if possible -- thanks!

The problem asks for the average acceleration, which is defined as $\Delta \vec{v}/\Delta t$, where $\Delta \vec{v} = \vec{v_f} - \vec{v_i}$. The velocity $\vec{v_f}$ is the velocity of the hand when it is at the bottom (i.e., 6:00) and $\vec{v_i}$ is the velocity of the hand when it is at the top (i.e., 12:00). In both cases, the velocity has a magnitude $v = (circumference)/(rotation\ period) = 2\pi r/(12\ \mathrm{hours})$. Once you have this numerical value for the speed, then $\vec{v_i} =$ (this numerical value)$\times \hat{i}$ since the hand is moving in the positive x-direction at the top, and $\vec{v_f} =$ (this numerical value)$\times (-\hat{i})$ since the hand is moving in the negative x-direction at 6:00. Do the subtraction $\Delta \vec{v} = -(something)\hat{i} - (something)\hat{i} = -2(something)\hat{i}$, divide by 6 hours (which is $\Delta t$ between 12:00 and 6:00) and convert units to meters and seconds. (A short numerical version of this calculation appears at the end of this page.)

Sat, Sep 16, 1:29 p.m. - Are there Assigned Problems for Wednesday (9/20)'s problem session?

Nope. But, as I mentioned in lecture Thursday, there IS a problem session on Wednesday and you will be receiving a 5-point grade based on the activities in that problem session. But no assigned problems for Wednesday.

Thu, Sep 14, 5:33 p.m. - In Hand-In Set #4, can we assume the drag force to be negligible for A25 and A78?

Well, if you assume that the drag force (friction) is negligible for A25, that would certainly make the problem easier. "Determine the magnitude of the friction force ..." Answer: "We are going to assume that friction is negligible. Therefore, the answer is zero!" In other words, no, you can't assume the drag force to be negligible for A25, since that is what the problem is asking you to measure. And it clearly isn't negligible because the block is stopping. (Did you mean to ask, "Can we assume that air resistance is negligible?" If so, the answer is yes -- don't worry about air resistance, which really is negligible in this case.) As for A78: yes, neglect air resistance. You can't do the problem unless you neglect air resistance here, which is actually a reasonable approximation -- air resistance really wouldn't have a huge effect here.

Thu, Sep 14, 5:19 p.m. - For A24 on Hand-In Set #4, can we assume that drag force is negligible since the question asks us to 'energy' potential energy?

That is an assumption that you will have to make, yes, but say explicitly that you are, in fact, neglecting drag forces here.

Wed, Sep 13, 8:04 p.m. - For A25, I understand that you must first find the potential energy of the spring using $U = \frac{1}{2}kx^2$ and plug in, then use $f_k = \mu mg$ to find that $\mu = U/(mgd)$, but there is no mass value given for the block. How would you go about solving this problem without a mass value given?

A few comments here: first, it took me quite a while to figure out how you are going about this problem. If you are doing this on an exam, please make sure that you start with some general principle or a sentence explaining how you are going about this. And don't make statements like $f_k = \mu mg$ or $\mu = \frac{U}{mgd}$ without explaining where they came from. Actually, now that I look at this more ... where did $\mu = \frac{U}{mgd}$ come from? And why are you looking for $\mu$ when the problem is asking for the magnitude $f$ of the friction force?
There is a very straightforward way of doing this problem: use either $W_{net} = \Delta K$ or $W_{nc} = \Delta E$. Either approach will work fine. The key is that $W_{net} = W_{spring} + W_{friction}$ and $\Delta K = 0$ since it is motionless at the beginning and at the end. $W_{spring} = \frac{1}{2}kx^2$ and $W_{friction} = f\,\Delta r\cos(\mathrm{angle})$. Throw these into the work-kinetic energy theorem and solve for $f$. OR, if you use $W_{nc} = \Delta E$, then $W_{nc}$ is simply the work done by friction and $\Delta E$ is the difference between the mechanical energy at the end (rhymes with "hero") and at the beginning (just spring potential energy). Note that you don't need the mass of the object for either of these approaches.

Sun, Sep 10, 7:36 p.m. - For A18, if I were to find the initial velocity of my dart, how would I do that? Or are there any other ways to find the friction? Thank you!

You already have found the initial speed of your blow dart. Back in the first hand-in set of the semester, for problem A4, you fired a blow dart straight up into the air, timed how long it was in the air, and used that timing to determine how fast the blow dart is moving when it leaves the blow dart gun. You can use the same speed for A18 -- the fact that you are firing the blow dart horizontally doesn't really affect the speed when it leaves the blow dart gun. And, yes, you are correct that you do need this information to find the friction.

Sun, Sep 10, 12:00 a.m. - For Chapter 1, question 2 in the supplementary reading, it asks for a comparison between the results for question #1 and question #2. Does this mean that we have to do question #1 as well or not?

Of course you have to do question 1 as well -- that is one of the assigned homework problems from this past Wednesday. Every assigned problem is required in this course. The good news is that you have already done it. You don't have to show all the work for question 1 -- you can refer to the answers that you got for number 1 and write a couple of sentences about the similarities and/or differences between the results for number 2, including a brief discussion explaining why there are any differences and whether they are what you would expect them to be.

Thu, Sep 7, 7:08 p.m. - For question 21, Chap 6, do you have to prove that acceleration is zero before using the $W = Fr\cos\theta$ equation? Thanks friendo

There isn't anything in this problem to indicate that the acceleration is zero. It could be zero, or it might not be zero. It actually doesn't matter at all what the acceleration is. The relation $W = Fr\cos(\theta)$ only requires that the force be constant during the process, which is the case here.

Sun, Aug 27, 1:10 p.m. - What is the expectation for hand-in sets in terms of formatting?

At the top, put your name, your PROBLEM SESSION instructor's name, and the time of your problem session (NOT the lecture section). For each problem, the issue of "formatting" is answered by the "Show All Work" policy that I discussed at the end of this past Thursday's lecture. This is spelled out point-by-point on the "Lecture materials" PDF that is posted on Thursday's calendar page, but the short summary: (1) Every problem/question must start with either a generally-applicable equation or definition (i.e., something that is always valid); a special-case formula WITH A STATEMENT explaining why that formula is valid (e.g., "a is constant, therefore ..."); or a sentence explaining what you are doing.
(2) There should be sufficient steps to show how you get from your starting point to the answer; and (3) the answer must have correct units.

Sat, Aug 26, 5:39 p.m. - I am sorry to disturb you on the weekend, but I wanted a clarification on one of the homework problems. Problem A83 says that the balls in the second pit are half the diameter of the balls in the first pit. Would you please clarify whether that statement is referring to the second ball's radius or diameter? This difference is turning out to be humongous in my final answer.

The diameter $D_2$ of the balls in the second pit is one-half the diameter $D_1$ of the balls in the first pit. I.e., $D_2 = \frac{1}{2}D_1$. If you were considering the radii, then the radius of the second is 1/2 the radius of the first: $r_2 = \frac{1}{2}r_1$.
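For the clock-hand average-acceleration recipe spelled out above (Sun, Sep 17), here is the promised minimal numerical sketch. The hand length is an assumed value for illustration, since the problem's actual radius isn't restated on this page:

```python
import math

r = 0.10                      # hand length in meters (assumed value)
T = 12 * 3600.0               # rotation period: 12 hours, in seconds

v = 2 * math.pi * r / T       # tip speed, constant in magnitude

# At 12:00 the tip moves in +x; at 6:00 it moves in -x.
v_i = +v                      # x-component of velocity at the top
v_f = -v                      # x-component of velocity at the bottom

dt = 6 * 3600.0               # 6 hours between 12:00 and 6:00, in seconds
a_avg = (v_f - v_i) / dt      # x-component of the average acceleration

print(f"speed = {v:.3e} m/s, average acceleration = {a_avg:.3e} m/s^2")
```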
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8653125762939453, "perplexity": 370.48955038660625}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948514113.3/warc/CC-MAIN-20171211222541-20171212002541-00628.warc.gz"}
http://mathhelpforum.com/calculus/1425-green-s-theorem.html
# Math Help - Green's theorem

1. ## Green's theorem

The final answer I am getting is 8, but I don't know if it's correct or not. Attached Thumbnails

2. Originally Posted by bobby77
The final answer I am getting is 8, but I don't know if it's correct or not.

Show us what you have done so far. Tell us what Green's theorem is, and how it applies to the function you are given. RonL

3. ## Green's theorem

I have included Green's theorem as it is given in my book. I have also included my solution. Thank you. Attached Files

4. Originally Posted by bobby77
I have included Green's theorem as it is given in my book. I have also included my solution. Thank you.

The region you are integrating over is not the interior of the triangle. You define the region by $-2 < x < 2$ and $0 < y < 2$, which is a rectangle. I think (and you will have to check this) that what you need is $-2 < x < 2$ and $y < -|x| + 2$. RonL
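For readers who want to check the setup numerically: a sketch using sympy with an assumed field $P = 0$, $Q = xy$ (the thread's actual integrand is only in the attached thumbnails, so this is not bobby77's problem), integrating $\partial Q/\partial x - \partial P/\partial y$ over the triangular region RonL describes:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Assumed example field (the thread's actual F is in an attachment):
P = sp.Integer(0)
Q = x * y

integrand = sp.diff(Q, x) - sp.diff(P, y)   # dQ/dx - dP/dy for Green's theorem

# Triangle with vertices (-2,0), (2,0), (0,2): 0 < y < 2 - |x|.
# Split at x = 0 so the upper y-limit is a single expression on each piece.
left  = sp.integrate(integrand, (y, 0, 2 + x), (x, -2, 0))
right = sp.integrate(integrand, (y, 0, 2 - x), (x, 0, 2))
print(left + right)   # 8/3 for this assumed field
```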
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9028441905975342, "perplexity": 1528.7347675101398}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375095632.21/warc/CC-MAIN-20150627031815-00264-ip-10-179-60-89.ec2.internal.warc.gz"}
https://atrium.lib.uoguelph.ca/xmlui/handle/10214/14298
# Effects of the social environment on the behaviour and fitness of a territorial squirrel

Title: Effects of the social environment on the behaviour and fitness of a territorial squirrel
Author: Siracusa, Erin
Department: Integrative Biology
Advisor: McAdam, Andrew

Most organisms interact with conspecifics and therefore have a social environment. While it is understood that variation in the composition of this social environment can have important consequences for gregarious species, solitary, territorial species may also live in socially complex environments where the composition of neighbouring conspecifics can directly influence time and energy spent on territory defence. The importance of the social environment for solitary species is, however, poorly understood. In this thesis, I combined behavioural observations, field experiments, and long-term data analysis from a population of North American red squirrels (Tamiasciurus hudsonicus) to explore the effects of the social environment on the territorial dynamics, behaviour and fitness of an ‘asocial’ species. In particular, I looked at two aspects of the social environment that were likely to have important effects on territorial species: familiarity and relatedness with neighbours. In my first chapter, I established that squirrels living in social neighbourhoods with high average familiarity faced reduced risk of intrusion and cache pilferage from conspecifics, providing evidence of the ‘dear-enemy’ phenomenon in red squirrels. In Chapter 2, I experimentally demonstrated that red squirrel ‘rattle’ vocalizations serve an important territorial defence function by deterring conspecifics from intruding. In Chapter 3, I found that red squirrels respond to changes in their social environment by adjusting their behaviour in a manner that reduces the costs of territoriality: in familiar social neighbourhoods red squirrels reduced their rattling rates and increased the proportion of time spent in nest. Finally, in Chapter 4, I used 21 years of data to show that living near familiar neighbours has substantial fitness benefits, increasing annual reproductive success and survival in both male and female squirrels. Collectively my thesis contributes to our broader understanding of the importance of the social environment for ‘asocial’ species and provides evidence that interactions between territorial individuals are not strictly competitive but can also be cooperative in nature. In particular, mutualistic interactions, rather than kin-selection, were important in mitigating conflict and enhancing fitness. Studying social interactions in asocial animals may, therefore, provide important insights into the mechanisms driving the evolution of social systems.

http://hdl.handle.net/10214/14298
2018-09
Attribution-NonCommercial-NoDerivs 2.5 Canada
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8128123879432678, "perplexity": 3992.7816360135002}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256778.29/warc/CC-MAIN-20190522083227-20190522105227-00193.warc.gz"}
http://www.mrpt.org/tutorials/programming/statistics-and-bayes-filtering/particle_filter_algorithms/
# Particle Filter Algorithms

This page describes the theory behind the particle filter algorithms implemented in the C++ libraries of MRPT. See also the different resampling schemes. For the list of corresponding C++ classes see Particle Filters.

## 1. Sequential Importance Resampling – SIR (pfStandardProposal)

Standard proposal distribution + weights according to the likelihood function.

## 2. Auxiliary Particle Filter – APF (pfAuxiliaryPFStandard)

This method was introduced by Pitt and Shephard in 1999 [1]. Let's assume the filtered posterior is described by the following $M$ weighted samples:

$$p(x_t|z_{1:t}) \approx \sum_{i=1}^M \omega^{(i)}_t \delta \left( x_t - x^{(i)}_t \right)$$

Then, each step in the algorithm consists of first drawing a sample of the particle index $k$ which will be propagated from $t-1$ into the new step $t$. These indexes are auxiliary variables only used as an intermediary step, hence the name of the algorithm. The indexes are drawn according to the likelihood of some reference point $\mu^{(i)}_t$ which in some way is related to the transition model $x_t|x_{t-1}$ (for example, the mean, a sample, etc.):

$$k^{(i)} \sim P(i=k|z_t) \propto \omega^{(i)}_t p( z_t | \mu^{(i)}_t )$$

This is repeated for $i = 1, 2, \ldots, M$, and using these indexes we can now draw the conditional samples:

$$x_t^{(i)} \sim p( x_t | x^{k^{(i)}}_{t-1})$$

Finally, the weights are updated to account for the mismatch between the likelihood at the actual sample and the predicted point $\mu_t^{k^{(i)}}$:

$$\omega_t^{(i)} \propto \frac{p( z_t | x^{(i)}_t) } { p( z_t | \mu^{k^{(i)}}_t) }$$

(A minimal numerical sketch of this update appears at the end of this page.)

## 3. Optimal Sampling (pfOptimalProposal)

Use the exact optimal proposal distribution (where available; in practice this usually involves approximations). In the case of the RBPF-SLAM implementation, this method follows [2].

## 4. Approximate Optimal Sampling (pfAuxiliaryPFOptimal)

Use the optimal proposal and an auxiliary particle filter (see paper [3]).

## References

- [1] Pitt, M.K. and Shephard, N., "Filtering Via Simulation: Auxiliary Particle Filters" (http://www.nuff.ox.ac.uk/economics/papers/1997/w13/sir.pdf), Journal of the American Statistical Association, vol. 94, pp. 590-591, 1999.
- [2] Grisetti, G., Stachniss, C. and Burgard, W., "Improving grid-based SLAM with Rao-Blackwellized particle filters by adaptive proposals and selective resampling", ICRA 2005.
- [3] J.-L. Blanco, J. González, and J.-A. Fernández-Madrigal, "An Optimal Filtering Algorithm for Non-Parametric Observation Models in Robot Localization", IEEE International Conference on Robotics and Automation (ICRA), Pasadena (California, USA), May 19-23, 2008, pp. 461–466.
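The sketch referenced in section 2: one APF update, assuming a 1-D random-walk transition model and a Gaussian likelihood, and taking the transition mean as the reference point $\mu^{(i)}_t$. This is illustrative only, not the MRPT C++ API:

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss(z, mean, sigma):
    """Gaussian likelihood p(z | mean)."""
    return np.exp(-0.5 * ((z - mean) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def apf_step(particles, weights, z, sigma_trans=0.5, sigma_obs=0.3):
    """One auxiliary-particle-filter update (Pitt & Shephard style).

    For the random walk x_t = x_{t-1} + noise, the transition mean of each
    particle is the particle itself, so mu_i = particles."""
    mu = particles
    first_stage = weights * gauss(z, mu, sigma_obs)   # P(i=k|z_t) ∝ w_i p(z|mu_i)
    first_stage /= first_stage.sum()

    M = len(particles)
    k = rng.choice(M, size=M, p=first_stage)          # draw auxiliary indices

    new_particles = particles[k] + sigma_trans * rng.standard_normal(M)

    # Correct for the mismatch between each sample and its reference point.
    new_weights = gauss(z, new_particles, sigma_obs) / gauss(z, mu[k], sigma_obs)
    new_weights /= new_weights.sum()
    return new_particles, new_weights

particles = rng.standard_normal(1000)
weights = np.full(1000, 1e-3)
particles, weights = apf_step(particles, weights, z=0.7)
print(particles @ weights)   # weighted posterior mean estimate
```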
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 4, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.82600998878479, "perplexity": 2625.584021120919}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105195.16/warc/CC-MAIN-20170818233221-20170819013221-00347.warc.gz"}
https://cs.stackexchange.com/questions/88624/what-is-the-complexity-of-checking-whether-an-integer-n-geq-2-is-expressible
# What is the complexity of checking whether an integer $n \geq 2$ is expressible in the form $a^b$ where $a, b \in \mathbb{N}$?

I am currently studying the paper "PRIMES is in P" and have a question regarding Section 5 of this paper. Line 1 of the algorithm (on page 3) requires the following operation to be performed: if (n = a^b for some natural numbers a and b), output 'composite'.

I have attempted to deduce the complexity of this line and have deduced the following. Assuming $n \geq 2$, $b$ must be bounded above by $\lceil \log_2 (n) \rceil$. Thus, if there exist $a, b \in \mathbb{N}$ such that $a^b = n$, then $n^\frac{1}{b}$ must be an integer. Thus, to test whether such values of $a$ and $b$ exist, we must check whether $n^\frac{1}{x}$ is an integer for each $2 \leq x \leq \lceil \log_2 (n) \rceil$. This requires a maximum of $\lfloor \log_2 (n) \rfloor$ operations to be performed, so the complexity of this step (according to my likely incorrect analysis) would be $O(\log_2 (n))$. However, on page 6 of the paper it says that the complexity of this step is $O^\sim (\log^3(n))$. Why is this?

Note that this question regards the specific complexity of this operation, rather than simply verifying that it is doable in polynomial time (which was done here).

• We expect references to fulfill the minimal scholarly requirements and be as robust over time as possible. Please take some time to improve your post in this regard. We have collected some advice here. – Raphael Feb 26 '18 at 12:43
• @M Smith Most probably they will be doing a binary search on the numbers between 2 and $\log n$. This will give the time $O(\log^2 n)$. – Complexity Feb 26 '18 at 12:58
• I think Ariel has already posted the answer. You should take a look at the Wikipedia link he gave you. – Willard Zhan Feb 26 '18 at 16:48
• The operation to be performed is "is the $b$-th root of $a$ an integer". Is that a constant time operation for $n$-bit integers? – gnasher729 Feb 26 '18 at 19:35

## 2 Answers

The complexity very much depends on whether you are looking at the worst case, or at the average case when you choose $n$ at random. To find whether $n^{1/b}$ is an integer, you could pre-calculate the set of possible values $a^b \bmod k$ for various $k$, and check whether $n \bmod k$ is among those values -- if not, then $n^{1/b}$ is not an integer. If $n$ is chosen at random, then this will eliminate many cases very quickly.

You can also numerically calculate $n^{1/b}$ and check if it is an integer. If you calculate this value with $(\log n)/b + 20$ bits of precision, and $n$ was chosen at random, then your chance is 99.9999% that you can prove it is not an integer -- this fails only if the result is an integer or close to one. There is a clever method doing the calculation using quadratically convergent Newton iteration involving only multiplications and additions/subtractions. Since only the last step requires the full precision, and the calculation involves raising numbers to the power $b$, which is done in $\log b$ multiplications, this can be done in $O\left(\frac{(\log n)^2}{b^2}\log b\right)$ steps using naive multiplication, and $O\left(\frac{\log n}{b}\log\left(\frac{\log n}{b}\right)\log b\right)$ using FFT. For random $n$, most of the time this is all that is needed. It is possible that we find $n^{1/b}$ rounded to the nearest integer and cannot prove that $n^{1/b}$ is not an integer. In these cases we may have to calculate $a^b$.
Again, only the final product needs full precision, so this is done in $O(\log^2 n)$ using naive multiplication, or $O(\log n \log\log n)$ using FFT.

• There are also some fast pre-tests that can very quickly reject large percentages of arbitrary inputs. Bloom filters (fastest) or some modular methods as shown in Cohen and elsewhere (Cook has a blog about it also, though he doesn't discuss Bloom filters). This is a big speedup for the many algorithms that do millions of tests on native-size (e.g. 64-bit) inputs. I like your description of the partial precision method, which I think GMP uses. – DanaJ Apr 3 '18 at 18:44

It is true that the search for $b$ leads to a loop with $O(\log n)$ iterations. But for a given $b$ you also have to find the corresponding $a$. So, how much time do we need for that? I would try binary search. With $k := \frac{\log n}{b}$, observe that the number of digits of $a$ is $O(k)$. So is the number of iterations of the binary search. Raising a candidate $a$ to the $b$-th power requires a loop of length $O(\log b)$. In the $i$-th iteration of said loop, we are multiplying $2^i k$-bit numbers, which can be done in $O(2^i k\log(2^i k)\log\log(2^i k))$ time. Let's bound that as $O(2^i k(i+\log k)\log\log\log n)$. The full loop of length $O(\log b)$ then takes $O(bk(\log b+\log k)\log\log\log n)$ time. The binary search thus takes time $O(bk^2(\log b+\log k)\log\log\log n)$, which we bound by $O((\log n)^2\frac1b\log\log n\log\log\log n)$. The loop over $b$ means summing $\frac1b$ up to $\log n$, resulting in $O(\log\log n)$. So the overall running time is $O((\log n)^2(\log\log n)^2\log\log\log n)$, which gives a better upper bound than $O((\log n)^3)$ but is also messier. Maybe the paper assumed a less sophisticated multiplication algorithm. Or maybe they deliberately traded bound tightness for simplicity. (Which, by the way, I may also have done above with regard to sums of lesser terms. For example, for summing I bounded $\log\log(2^ik)$ by $\log\log(2^{\log b}k)$.)
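To make the loop structure in these answers concrete, here is a sketch of the whole check in exact integer arithmetic, finding the integer $b$-th root by binary search as the comments suggest (it omits the floating-point/Newton and modular pre-test speedups the answers describe):

```python
def is_perfect_power(n: int) -> bool:
    """Return True iff n = a**b for some integers a >= 2, b >= 2."""
    if n < 4:
        return False
    max_b = n.bit_length()              # b is bounded by log2(n)
    for b in range(2, max_b + 1):
        # Binary search for the integer b-th root of n.
        lo, hi = 2, 1 << (max_b // b + 1)
        while lo <= hi:
            mid = (lo + hi) // 2
            p = mid ** b
            if p == n:
                return True
            if p < n:
                lo = mid + 1
            else:
                hi = mid - 1
    return False

assert is_perfect_power(2 ** 10)          # 1024 = 2^10
assert is_perfect_power(3 ** 7)
assert not is_perfect_power(2 ** 61 - 1)  # a Mersenne prime
```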
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8525649905204773, "perplexity": 372.9270632402035}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669795.59/warc/CC-MAIN-20191118131311-20191118155311-00284.warc.gz"}
https://www.electricalexams.co/basic-characteristics-of-dc-motor-mcq/
# Basic Characteristics of DC Motor MCQ [Free PDF] – Objective Question Answer for Basic Characteristics of DC Motor Quiz

1. Swinburne's test can be conducted on ___________

A. Series motor B. Shunt motor C. Compound motor D. Shunt and compound motor

The test is applicable in practice only to machines whose flux is practically constant, like shunt and compound machines, since this is a no-load test; a DC series motor should not be run at no-load because of dangerously high speed.

2. The generated e.m.f. from a 20-pole armature having 800 conductors, driven at 30 rev/sec, with a flux per pole of 60 mWb and 16 parallel paths, is ___________

A. 1900 V B. 1840 V C. 1700 V D. 1800 V

The generated e.m.f. can be calculated using the formula $E_b = \frac{\Phi Z N P}{60 A}$, where $\Phi$ is the flux per pole, $Z$ the total number of conductors, $P$ the number of poles, $A$ the number of parallel paths, and $N$ the speed in rpm (30 rev/sec = 1800 rpm). $E_b = \frac{0.06 \times 800 \times 1800 \times 20}{60 \times 16} = 1800$ V.

3. The unit of active power is Watt.

A. True B. False

The active power in electrical circuits is the useful power. It determines the power factor of the system. It is expressed in Watts: $P = VI\cos\Phi$.

4. Calculate the mass of a ball having a moment of inertia of 4.5 kg·m² and a radius of 14 cm.

A. 229.59 kg B. 228.56 kg C. 228.54 kg D. 227.52 kg

The moment of inertia of the ball can be calculated using the formula $I = \sum_i m_i r_i^2$. The moment of inertia and the radius are given, so $m = 4.5 / (0.14)^2 = 229.59$ kg. It depends upon the orientation of the rotational axis.

5. The field control method is suitable for constant torque drives.

A. True B. False

The field control method is generally used for obtaining speeds greater than the base speed. It is also known as the flux-weakening method. It is suitable for constant power drives.

6. What is the unit of intensity?

A. Watt/m² B. Watt/m C. Watt/m⁴ D. Watt/m³

Intensity is defined as the amount of power incident on a given area. It is mathematically expressed as $I$ = incident power (Watt) ÷ area (m²).

7. Calculate the frequency of a signal that completes half of a cycle in 70 sec. Assume the signal is periodic.

A. 0.00714 Hz B. 0.00456 Hz C. 0.00845 Hz D. 0.00145 Hz

The frequency is defined as the number of oscillations per second. It is the reciprocal of the time period and is expressed in Hz. The given signal completes half of a cycle in 70 seconds, so it will complete a full cycle in 140 seconds: $f = 1/T = 1/140 = 0.00714$ Hz.

8. The slope of the V-I curve is 26°. Calculate the value of the resistance. Assume the relationship between voltage and current is a straight line.

A. .487 Ω B. .482 Ω C. .483 Ω D. .448 Ω

The slope of the V-I curve is the resistance, so $R = \tan(26°) = 0.487\ \Omega$.

9. For large DC machines, the yoke is usually made of which material?

A. Cast steel B. Cast iron C. Iron D. Cast steel or cast iron

The yoke in large DC machines is made of cast steel. The yoke provides structural support and mechanical strength to the machine, and it helps carry the flux from the North pole to the South pole.

10. Calculate the terminal voltage of a permanent magnet DC motor having a resistance of 2 Ω and a full-load current of 5 A with 20 V back e.m.f.

A. 30 V B. 25 V C. 20 V D. 31 V

A permanent magnet DC motor is a special type of motor in which the flux remains constant. The terminal voltage can be calculated using the relation $V_t = E_b + I_a R_a = 20 + 5 \times 2 = 30$ V.

11. Armature reaction is demagnetizing in nature due to a purely lagging load.

A. True B. False

Due to a purely lagging load, the armature current is in the opposite phase to the field magneto-motive force. The armature magneto-motive force produced by this current will be in the opposite phase to the field flux, so it will try to reduce the net magnetic field.

12. The unit of magneto-motive force is Ampere-turns.

A. True B. False

The magneto-motive force is defined as the product of current and turns. It is mathematically expressed as $F = NI$.

13. Calculate the velocity of a wheel if the angular speed is 25 rad/s and the radius is 10 m.

A. 250 m/s B. 260 m/s C. 270 m/s D. 240 m/s

The velocity of the wheel can be calculated using the relation $v = \omega \times r$, the vector product of angular speed and radius: $v = 25 \times 10 = 250$ m/s.

14. Displacement is a ____________ quantity.

A. Scalar B. Vector C. Scalar and Vector D. Tensor

Displacement is a vector quantity. It depends on the initial and final positions of the body and has both direction and magnitude. Distance is a scalar quantity.

15. When a UJT is used for triggering an SCR, then the wave shape of the voltage obtained from the UJT circuit will be ____________

A. Square B. Pulse C. Trapezoidal D. Saw-tooth
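A small sketch of the e.m.f. calculation from question 2, wrapping the formula quoted there; the numbers are the ones given in the question:

```python
def dc_emf(flux_per_pole, conductors, rpm, poles, parallel_paths):
    """Generated e.m.f. of a DC machine: E = phi * Z * N * P / (60 * A)."""
    return flux_per_pole * conductors * rpm * poles / (60 * parallel_paths)

# Q2: 20 poles, 800 conductors, 30 rev/s (= 1800 rpm), 60 mWb/pole, 16 paths.
E = dc_emf(flux_per_pole=0.060, conductors=800, rpm=30 * 60, poles=20,
           parallel_paths=16)
print(E)   # 1800.0 V
```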
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8753189444541931, "perplexity": 2377.5255750214974}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103639050.36/warc/CC-MAIN-20220629115352-20220629145352-00018.warc.gz"}
https://www.physicsforums.com/threads/power-math-question.94361/
# Homework Help: Power math question

1. Oct 12, 2005

### Tcat

Suppose that a person has a resistance of 13.0 kiloohms as part of a circuit which passes through his hands. This person accidentally grasps the terminals of a power supply with a potential difference of 16.0 V.

PART A: If the internal resistance of the power supply is 2100 ohms, what is the current through the person's body? I calculated the current to be $1.06\times10^{-3}$ A by using the equation $I = V/(R_p + R_i)$.

PART B: To find the power dissipated in his body I thought you use the equation $P = IV$, so $P = (1.06\times10^{-3}\ \mathrm{A})\times(16.0\ \mathrm{V})$, which gave me $1.70\times10^{-2}$, which is wrong. What am I doing wrong??

By using $P = IV$, you have essentially calculated the power DELIVERED by the power supply instead of the power being absorbed by the person. If you want to use $P = IV$, then the $V$ must be the potential difference across the person, which can be calculated using voltage division. The easier equation would be $$P = I^2R$$ where you use the $R$ of the hands.
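A short sketch of the corrected calculation described in the reply, using the numbers from the problem statement:

```python
V = 16.0           # power-supply EMF, volts
R_person = 13.0e3  # resistance of the person, ohms
R_int = 2100.0     # internal resistance of the supply, ohms

I = V / (R_person + R_int)    # Part A: current through the body
P_delivered = I * V           # what P = IV gives: power delivered by the supply
P_body = I**2 * R_person      # Part B: power dissipated in the person

print(f"I           = {I:.3e} A")            # ~1.06e-3 A
print(f"P delivered = {P_delivered:.3e} W")  # ~1.70e-2 W (the OP's number)
print(f"P in body   = {P_body:.3e} W")       # smaller: the rest heats R_int
```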
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8386951088905334, "perplexity": 808.0186636368037}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945668.34/warc/CC-MAIN-20180422232447-20180423012447-00439.warc.gz"}
http://math.stackexchange.com/questions/49617/gradient-nonzero-extensions-of-a-vector-field-on-the-circle/70370
# Gradient nonzero extensions of a vector field on the circle

Let $\mathbf{v}=(a,b)$ be a smooth vector field on the unit circle $\mathbb{S}^{1}$ such that $a^{2}+b^{2}\neq0$ everywhere in $\mathbb{S}^{1}$, with degree $\deg\mathbf{v}=0$. Suppose also that $\int\limits_{\mathbb{S}^{1}}a\,\mathtt{dx}+b\,\mathtt{dy}=0$. My question is whether the field $\mathbf{v}$ may be extended to a nonzero gradient vector field $\overline{\mathbf{v}}=(A,B)$ on the unit disk $\mathbb{D}$, i.e. whether there exist smooth functions $A=A(x,y)$, $B=B(x,y)$, $(x,y)\in\mathbb{D}$, such that $A|_{\mathbb{S}^{1}}=a$, $B|_{\mathbb{S}^{1}}=b$, $A^{2}+B^{2}\neq0$ everywhere in $\mathbb{D}$ and finally $\frac{\partial B}{\partial x}=\frac{\partial A}{\partial y}$ in $\mathbb{D}$.

Let me make some remarks.

1. The condition $\deg\mathbf{v}=0$ is necessary for the field $\mathbf{v}$ to have an everywhere-nonzero extension to the unit disk. The degree is defined as usual as the degree of $\mathbf{v}/\left\Vert \mathbf{v}\right\Vert$ considered as a map $\mathbb{S}^{1}\rightarrow\mathbb{S}^{1}$.

2. The condition $\int\limits_{\mathbb{S}^{1}}a\,\mathtt{dx}+b\,\mathtt{dy}=0$ is also necessary for $\mathbf{v}$ to have a gradient extension $\overline{\mathbf{v}}$, as follows from Green's Theorem.

3. I suppose that this proposition should have some elegant proof (if true :)) and may well be something well-known, but I have only a few examples, not a proof. So, any references are welcome as well. Note also that this is somehow a "global" proposition, not a "local" one. Thanks in advance.

• @Leonid Kovalev: thanks for reopening this discussion... I had forgotten about it, and see that one of the points I didn't discuss in enough detail is actually false. – Sam Lisi Jun 16 '12 at 8:56

No, not every vector field of this form can be extended in the way you desire. In particular, it's sometimes possible to see from the vector field that there must be an interior critical point. For example, consider the vector field $\mathbf{v}$ on $\mathbb{S}^1$ defined by $$\mathbf{v}(\theta) \;=\; \bigl(\sin \theta + 2\sin 2\theta \bigr)\,\mathbf{t}(\theta) \,+\, \bigl(1+2\cos\theta\bigr)\,\mathbf{n}(\theta),$$ where $\mathbf{t}(\theta) = (-\sin\theta,\cos\theta)$ is the counterclockwise unit tangent vector, and $\mathbf{n}(\theta) = (\cos\theta,\sin\theta)$ is the outward-pointing normal vector. Here is a picture of this vector field:

It is easy to check that the degree of this vector field is zero, and it is clear from the vertical symmetry that $\displaystyle\oint_{\mathbb{S}^1} \!a\,dx+b\, dy = 0$.

Now suppose that $\mathbf{v}$ is the restriction to the unit circle of the gradient of some function $f\colon\mathbb{D}\to\mathbb{R}$. Using the Gradient Theorem, we can compute the restriction of $f$ to the unit circle: $$f(\cos\theta,\sin\theta) \;=\; \int \bigl(\sin\theta + 2\sin 2\theta\bigr)\,d\theta \;=\; -\cos\theta - \cos 2\theta + C.$$ Here is a plot of $f$ on the unit circle, assuming $C=0$:

Now, observe that:

1. The absolute minimum of $f$ on $\mathbb{S}^1$ occurs when $\theta=0$, i.e. at the point $(1,0)$.

2. At the point $(1,0)$, the gradient vector $\mathbf{v}$ points directly to the right.

Thus, if $p = (1-\epsilon,0)$ is a point slightly to the left of $(1,0)$, then the value of $f$ at $p$ is less than the value of $f$ anywhere on the unit circle. Therefore, $f$ attains its minimum somewhere in the interior of the unit disk, so $f$ must have a critical point.
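The two conditions this answer says are "easy to check" can also be verified numerically. A sketch (not from the original thread) that approximates the circulation and the winding number of $\mathbf{v}$:

```python
import numpy as np

N = 100_000
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
dtheta = theta[1] - theta[0]

t = np.stack([-np.sin(theta), np.cos(theta)])   # counterclockwise unit tangent
n = np.stack([np.cos(theta), np.sin(theta)])    # outward normal

v = (np.sin(theta) + 2 * np.sin(2 * theta)) * t + (1 + 2 * np.cos(theta)) * n

# Circulation: on the unit circle, dr = t dtheta, so the line integral of
# a dx + b dy is the integral of (v . t) dtheta.
circulation = np.sum(v[0] * t[0] + v[1] * t[1]) * dtheta

# Degree: total winding of the direction of v over one loop, divided by 2*pi.
angles = np.unwrap(np.arctan2(v[1], v[0]))
degree = (angles[-1] - angles[0]) / (2 * np.pi)

print(f"circulation ~ {circulation:.6f}, degree ~ {degree:.3f}")  # both ~ 0
```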
- Thanks for giving a nice counterexample to what I claimed (in step 1). I'll edit my answer to point out where the problem shows up. – Sam Lisi Jun 16 '12 at 8:57 Hi Jim! :) Thanks for settling this question. – user31373 Jun 16 '12 at 16:11 I'm happy to help. This was a neat question! Thanks also to Sam Lisi for his answer. – Jim Belk Jun 16 '12 at 18:57 EDIT: My original answer is wrong -- I addressed a slightly different question. I originally wrote, "If you are still interested in the question, I would suggest reformulating it slightly as two different questions." That's not actually correct. Here are the two questions I originally suggested: (1) Suppose you have a gradient vector field of a function $f$ on the disk, $\nabla f$ non-vanishing on the boundary and of degree $0$. Then, you can deform the function $f$, keeping it constant near the boundary of the disk, so that it has no critical points. (2) Suppose you have a 1-form $\alpha = adx + b dy$, defined on the boundary of the disk, satisfying $\int_{S^1} \alpha = 0$. Then, you can extend $\alpha$ to a closed 1-form on the disk. This is therefore exact since we are on the disk. Then, of course, by taking the dual vector field, we are in the setting to apply (1). The problem is that point (1) does not actually address the question of keeping the gradient of $f$ fixed on the boundary. That's not actually possible, as Jim Belk's example shows -- his gradient defined along the boundary points inwards at the boundary maximum, showing that there must be an interior max that we cannot deform away. The argument I sketch below allows us to keep the function values fixed along the boundary, but requires us to let the gradient change. With these caveats now, the rest of what I wrote before is correct, so I will leave it at least until I have time to illustrate the difference between the two problems. I would prove (1) by using a Morse critical point cancellation. First, perturb $f$ so that it has non-degenerate critical points. Then, because the degree of $\nabla f$ is zero, the sum of the degrees calculated on a small circle around each zero of $\nabla f$ in $D$ is also zero. Now note that at a local minimum or a local maximum, the corresponding degree is $+1$ and at a saddle, the degree is $-1$. This means we can pair each of {min or max} with a saddle and then cancel them pairwise. There may be an easier proof, but I can't think of one right now. I would prove (2) by first extending $\alpha$ to any 1-form on the disk. To make our life easier later, extend $\alpha$ to a closed $1$-form in a neighbourhood of the circle. This can be done easily in polar coordinates near the circle. Now extend to the disk by an arbitrary 1-form. Call this extension $\beta$. (Note that we can do this without any condition on $\alpha$.) Now, we want to show that $d\beta = d \nu$ for a $1$-form $\nu$ with compact support. We may write $d\beta = g dx \wedge dy$. The function $g$ then has compact support in the interior of the disk. There are now a couple of ways of showing that there exists a compactly supported 1-form $\nu$ with the desired property. (One is by showing the pairing between de Rham cohomology with compact support on the disk and homology of $D^2$ rel its boundary is non-degenerate... thus $d\beta$ represents the $0$ class in compactly supported de Rham cohomology, and thus is exact. Another is by solving $\Delta u = g$ in the disk with the boundary condition $u=0$. 
Then (modulo a sign question) take $\nu = du \circ i$, where $i$ is the standard complex structure on the disk. I'm sure there's a simpler way, by hand. I unfortunately don't have time to think about finding it now.) -
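As a complement to that last step, here is a crude finite-difference sketch of "solve $\Delta u = g$ in the disk with boundary condition $u=0$" by Jacobi iteration. Everything here (grid size, the sample $g$, the iteration count) is an illustrative choice of mine, not something from the answer; it assumes Python with NumPy.

```python
import numpy as np

N = 101
xs = np.linspace(-1.0, 1.0, N)
h = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs)
inside = X**2 + Y**2 < 1.0

# A sample right-hand side g with compact support in the interior of the disk.
r2 = X**2 + Y**2
g = np.where(r2 < 0.25, (0.25 - r2) ** 2, 0.0)

u = np.zeros((N, N))
for _ in range(5000):                        # Jacobi sweeps
    unew = np.zeros_like(u)
    # Discrete Laplacian: (sum of 4 neighbors - 4u)/h^2 = g  =>  update below.
    unew[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1]
                               + u[1:-1, 2:] + u[1:-1, :-2]
                               - h**2 * g[1:-1, 1:-1])
    u = np.where(inside, unew, 0.0)          # enforce u = 0 outside / on boundary

# Residual of the discrete Laplacian against g (shrinks as sweeps increase).
lap = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
       - 4 * u[1:-1, 1:-1]) / h**2
print("max interior residual:",
      np.abs((lap - g[1:-1, 1:-1])[inside[1:-1, 1:-1]]).max())
```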
http://math.stackexchange.com/questions/508104/all-bases-for-a-finitely-generated-abelian-group-have-the-same-cardinality
# All bases for a finitely-generated abelian group have the same cardinality.

Let $B$ be a subgroup of a free abelian group $A$ with basis $(x_i)_{i=1...n}$. It has already been shown that $B$ has a basis of cardinality $\leq n$. ... We also observe that our proof shows that there exists at least one basis of $B$ whose cardinality is $\leq n$. We shall therefore be finished when we prove the last statement, that any two bases of $B$ have the same cardinality. Let $S$ be one basis, with a finite number of elements $m$. Let $T$ be another basis, and suppose that $T$ has at least $r$ elements. It will suffice to prove that $r \leq m$ (one can then use symmetry). Let $p$ be a prime number. Then $B/pB$ is a direct sum of cyclic groups of order $p$, with $m$ terms in the sum. Hence its order is $p^m$. Using the basis $T$ instead of $S$, we conclude that $B/pB$ contains an $r$-fold product of cyclic groups of order $p$, whence $p^r \leq p^m$ and $r \leq m$, as was to be shown. (Note that we did not assume a priori that $T$ was finite.)

I've bolded the parts I'm having trouble with. How do I show the first part, and what's an $r$-fold product? Alternative proofs to the problem are welcome. I know that $pB = \{ \sum_{i} p k_i x_i \mid \sum_{i} k_i x_i \in B\}$ and that it forms a normal subgroup, $A$ being abelian.

That $S = \{s_1,\,\dotsc,\, s_m\}$ is a basis of $B$ means that every $b \in B$ can be written in a unique way as $b = \sum\limits_{i=1}^m k_i\cdot s_i$ with all $k_i \in \mathbb{Z}$. Thus $B$ is the direct sum of $m$ copies of $\mathbb{Z}$, $$B = \bigoplus_{i=1}^m \mathbb{Z}\cdot s_i.$$ Then we have $$pB = \bigoplus_{i=1}^m p\mathbb{Z}\cdot s_i,$$ and that yields $$B/pB \cong \bigoplus_{i=1}^m (\mathbb{Z}/p\mathbb{Z})\cdot s_i,$$ so that $B/pB$ is the direct sum of $m$ cyclic groups of order $p$. Now $T$ is by assumption also a basis of $B$, so we also have $$B = \bigoplus_{\tau \in T} \mathbb{Z}\cdot \tau,$$ and $$B/pB \cong \bigoplus_{\tau \in T} (\mathbb{Z}/p\mathbb{Z})\cdot\tau.$$ If $T$ contains at least $r$ elements, say we have $t_1,\,\dotsc,\,t_r \in T$, then $$\bigoplus_{j=1}^r (\mathbb{Z}/p\mathbb{Z})\cdot t_j$$ is a subgroup of $B/pB$, and thus $B/pB$ contains the direct sum of $r$ cyclic groups of order $p$. Since for finitely many summands/factors the direct sum and direct product of (abelian) groups are isomorphic, it contains a product of $r$ cyclic groups of order $p$, an $r$-fold product of $\mathbb{Z}/p\mathbb{Z}$.

- How can the last sum be a subgroup of $B/pB$ if we're talking about isomorphisms? – Enjoys Math Sep 28 '13 at 20:10
Okay, I understand that the direct product and sum coincide in this case. – Enjoys Math Sep 28 '13 at 20:13
Okay, the only part I don't get now is how $r \leq m$ follows. But since that wasn't part of my question, I've accepted your answer. – Enjoys Math Sep 28 '13 at 20:20
I should have stayed with $=$ also for the quotients ;) But saying "contains an isomorphic image of $\bigoplus\limits_{j=1}^r (\mathbb{Z}/p\mathbb{Z})\cdot t_j$" would achieve the same, showing that $r \leqslant m$. – Daniel Fischer Sep 28 '13 at 20:20
What lemma are you using to show that $r\leq m$? – Enjoys Math Sep 28 '13 at 20:21

1) "$r$-fold" means a direct sum of $r$ copies of a group.
2) Every nonzero element of $B/pB$ has order $p$. It is known that such an abelian group is a direct sum of cyclic groups of order $p$.
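To see the invariant $p^m$ concretely, one can compute the index $[B:pB]$ for two different bases of the same lattice: for any basis matrix $M$ of a full-rank lattice $B \subset \mathbb{Z}^m$, the index equals $|\det(pM)|/|\det M| = p^m$, independent of the basis. A small sketch of my own (assuming Python with NumPy, and using the standard index-determinant fact for full-rank sublattices):

```python
import numpy as np

p = 5
M1 = np.array([[2, 1], [0, 3]])   # one basis S of a rank-2 lattice B in Z^2
U = np.array([[1, 1], [1, 0]])    # unimodular change of basis (det = -1)
M2 = U @ M1                       # another basis T of the *same* lattice B

for M in (M1, M2):
    # [B : pB] = |det(p M)| / |det M| = p^2, whichever basis we use.
    index = round(abs(np.linalg.det(p * M)) / abs(np.linalg.det(M)))
    print(index)                  # prints 25 both times
```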
http://mathoverflow.net/questions/143406/determining-the-asymptotic-behavior-of-some-function-of-random-matrix
# Determining the asymptotic behavior of some function of random matrix

Consider a series of random matrices $X_n\in\mathbb{R}^{n\times m}$ consisting of i.i.d. entries, each with zero mean and variance $1/m$, and let $a_n,b_n\in\mathbb{R}^{n\times1}$ be two deterministic (or random and independent of $X$) vectors, say with bounded norm. I want to find some "nice"/"simple" "limit" function $f_n$ such that $$a_n^T\left(X_nX_n^T+I_n\right)^{-1}b_n-f_n\to0$$ almost surely, as $n,m\to\infty$ with fixed ratio.

EDIT: Due to Ofer's answer and comments I will consider a specific choice of $a_n$: $$\frac{1}{n}w_n^TX_n^T\left(X_nX_n^T+I_n\right)^{-1}b_n-f_n\to0$$ Since $w_n^TX_n^T = \sum_{i=1}^mw_{i}x_i^T$, where $x_i$ is the $i$th row of $X_n$, we can write \begin{align} \frac{1}{n}w_n^TX_n^T\left(X_nX_n^T+I_n\right)^{-1}b_n = \frac{1}{n}\sum_{i=1}^mw_{i}x_i^T\left(X_nX_n^T+I_n\right)^{-1}b_n \end{align} We know that $X_nX_n^T = \sum_{i=1}^mx_ix_i^T$. Let $\left[X_nX_n^T\right]_i = X_nX_n^T-x_ix_i^T$. Thus, \begin{align} \frac{1}{n}w_n^TX_n^T\left(X_nX_n^T+I_n\right)^{-1}b_n &= \frac{1}{n}\sum_{i=1}^mw_{i}x_i^T\left(X_nX_n^T+I_n\right)^{-1}b_n\\ &=\frac{1}{n}\sum_{i=1}^mw_{i}\frac{x_i^T\left(\left[X_nX_n^T\right]_i+I_n\right)^{-1}b_n}{1+x_i^T\left(\left[X_nX_n^T\right]_i+I_n\right)^{-1}x_i} \end{align} Now, since $x_i$ is independent of $\left(\left[X_nX_n^T\right]_i+I_n\right)^{-1}$, we know that (a.s.) $$x_i^T\left(\left[X_nX_n^T\right]_i+I_n\right)^{-1}x_i-\int (1+x)^{-1} \rho(dx)\to0$$ where $\rho$ is the limit density of eigenvalues of $XX^T$. The same is true for the numerator. So, the speculation is that $f_n$ behaves like \begin{align} f_n &= \frac{1}{n}\sum_{i=1}^mw_{i}\frac{x_i^Tb_n\int (1+x)^{-1} \rho(dx)}{1+\int (1+x)^{-1} \rho(dx)}\\ &=\frac{\int (1+x)^{-1} \rho(dx)}{1+\int (1+x)^{-1} \rho(dx)}\frac{1}{n}w_n^TX_n^Tb_n \end{align} Update: Numerical calculations suggest that the above "limit" is not true, although I can't really say that I completely understand where the rub is.

- For $a_n=b_n$ with $\|a_n\|=\|b_n\|$, the result is standard (and can be found for example in papers of Bai and Silverstein, usually as a technical lemma in the appendix...): let $\rho$ be the limit density of eigenvalues of $XX^T$ (the Pastur-Marchenko law). Then the limit you seek is asymptotically the normalized trace of $(XX^T+I)^{-1}$, i.e., $A:=\int (1+x)^{-1} \rho(dx)$. If $a_n\neq b_n$ it requires a bit more work, but the answer should be $A\cdot \langle a_n,b_n\rangle$. The reasoning is similar: the expression you have is $\sum \lambda_i \alpha_i \beta_i$ where $\alpha_i$ are the coefficients of $a_n$ in the eigenbase of $XX^T$. This will concentrate near its mean (to prove that, use results on the eigenvectors as in Silverstein's paper from the mid 90's on eigenvectors of covariance matrices; everything is much simpler in the Gaussian case), which will give the expression I wrote. In general, of course, you do not have a decomposition. However, let's look at your case. Let's take $b_n=\alpha a_n+b_n'$ where $b_n'$ is orthogonal to $a_n$. To make notation simple, assume the norms of $a_n$ and $b_n'$ to be $1$. You are trying to compute $a_n^T V b_n'$ where $V$ is your matrix of the form $UDU^T$. At least in the Gaussian case, $U$ is Haar distributed and independent of $D$. Let $W$ be the unitary matrix such that $Wa_n=e_1$ and $Wb_n'=e_2$. Then you are trying to compute the (1,2) element of $WUD(WU)^T$.
But $WU$ is again Haar distributed, so its law is the same as – ofer zeitouni Sep 28 '13 at 11:09 the (1,2) element of $UDU^T$, which is $\sum_j U_{1j} U_{2j} D_j$. But this has mean $0$ and is of order $1/\sqrt{n}$. Are you sure about your MATLAB simulation? – ofer zeitouni Sep 28 '13 at 11:14 The version with $w^T XX^T$ is trivial: write $w^TXX^T=w^T (XX^T-I)+ w^T$. Now use the previous answer for the second term (the first term is trivial). – ofer zeitouni Sep 28 '13 at 18:47
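The concentration claim for $a_n = b_n$ is easy to probe by simulation. Below is a minimal Monte Carlo sketch of my own (assuming Python with NumPy; the sizes $n$, $m$ and the trial count are arbitrary choices), comparing the quadratic form with the normalized trace of $(X X^T + I)^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 400, 800                 # fixed ratio n/m = 1/2
a = rng.standard_normal(n)
a /= np.linalg.norm(a)          # a deterministic unit vector, fixed once

vals = []
for _ in range(20):
    X = rng.standard_normal((n, m)) / np.sqrt(m)   # entries: mean 0, var 1/m
    R = np.linalg.inv(X @ X.T + np.eye(n))
    vals.append(a @ R @ a)

# The normalized trace (1/n) tr (XX^T + I)^{-1} tends to
# A = \int (1+x)^{-1} rho(dx) for the Marchenko-Pastur law rho.
print("quadratic form:", np.mean(vals), "+/-", np.std(vals))
print("normalized trace (last trial):", np.trace(R) / n)
```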
https://math.stackexchange.com/questions/1203397/universal-properties-and-isomorphisms
# Universal Properties and Isomorphisms

If two objects satisfy the same universal property, we know that they are isomorphic in that category. Is the converse true? That is, if two objects are isomorphic in some category, can we construct an appropriate category in which there exists at least one universal property that they both satisfy? If this is always possible, why can we always construct an appropriate category? If not, what are some examples of isomorphic objects that can't ever be made to satisfy the same universal property?

An Example

From universal to isomorphic: In the category of sets, in which the objects are sets and the arrows are functions, singleton sets are final objects. This means they are isomorphic.

From isomorphic to universal: Any two sets with the same number of elements are isomorphic in the category of sets, since there is a bijection between them. Can we construct a category in which sets with two elements are final or initial objects?

Definition of Universal Property, for Reference (Paraphrased from Algebra Chapter 0, by Paolo Aluffi)

An object satisfies a universal property when it is a terminal object of a category. A terminal object is an object that is final, or initial, or both. Let $C$ be a category. An object $I$ of $C$ is initial in $C$ if for every object $A$ of $C$ there exists exactly one morphism from $I$ to $A$ in $C$. An object $F$ of $C$ is final in $C$ if for every object $A$ of $C$ there exists exactly one morphism from $A$ to $F$.

• The quoted definition seems to be more general than the one given by Mac Lane (and others). Mac Lane defines a universal property when you have a terminal object in a SLICE category of the original category where the object was living. The def. given by Aluffi seems to me a bit weak. Check wikipedia for similar defs. – magma Mar 24 '15 at 20:03
• You're right - it does appear that the definition I provided is less specific than the one on Wikipedia, for example. Would using Wikipedia's definition (based on terminal objects in the corresponding slice category) change the answer to this question? – millsmess Mar 24 '15 at 22:25

Can you give a rigorous definition of "universal property" for which your question does not trivially admit an answer of the form "$X$ has this universal property and hence $Y$ does because it's isomorphic to $X$"? For the definition I know ("blah blah blah implies there's a unique map to/from $X$ from/to blah") it seems that your question trivially admits a positive answer.

Edit: the OP gave a definition of universal property, so the question is answered by the following

Lemma. If $\mathcal{C}$ is a category, if $X$ and $Y$ are objects of $\mathcal{C}$, and if $X$ is a terminal object and $Y$ is isomorphic to $X$, then $Y$ is a terminal object too.

Proof: Say $X$ is initial (the other case is just as easy). Fix isomorphisms $a:X\to Y$ and $b:Y\to X$. By standard nonsense $a$ and $b$ induce bijections $\mathrm{Hom}(X,Z)\cong\mathrm{Hom}(Y,Z)$ for all objects $Z$ of $\mathcal{C}$ (because they induce maps whose composite in either direction is the identity). Hence if $\mathrm{Hom}(X,Z)$ has size 1 for all $Z$ then so does $\mathrm{Hom}(Y,Z)$.

EDIT: the OP changed the question. The answer to the new question is still yes: just look at the category of objects over $X$ to see a new category where $X$ is terminal (as is anything isomorphic to it). In their example, if $X$ has two elements, then consider the category of sets equipped with a map to $X$.

• Could you expand on your answer?
Why should objects that are isomorphic necessarily satisfy the same universal property? – millsmess Mar 23 '15 at 22:47 • Can you define what you mean by "universal property" and then we can talk about the question properly. Take for example the tensor product of two vector spaces. This satisfies a universal property, of the form "if there are some maps then there is some other map satisfying some properties", and if I change my tensor product to something isomorphic to it then clearly the new guy still satisfies exactly the same properties. – slider Mar 23 '15 at 22:49 • Please see my definition added to the question above for reference. You may need to refresh the page. – millsmess Mar 23 '15 at 22:50 • Thanks for the definition. OK so here's a lemma: if $X$ is a terminal object of a category, and $Y$ is isomorphic to $X$ in that category, then $Y$ is also a terminal object. So done, right? – slider Mar 23 '15 at 22:51
http://bthierry.pages.math.cnrs.fr/tags/multiple-scattering/
# Multiple scattering

## Single Scattering Preconditioner Applied to Boundary Integral Equations

In a homogeneous medium, when illuminated by an incident time-harmonic acoustic wave $u^{inc}$, the $M > 1$ obstacles $\Omega_p, p=1,\ldots,M$, generate a scattered wave $u$, a solution of the Helmholtz equation: $$\left\{ \begin{array}{r c l l} \Delta u + k^2u & = &0 & \mathbb{R}^3\setminus\overline{\cup_{p=1}^M\Omega_p}\\ u & = & -u^{inc} & \cup_{p=1}^M\Gamma_p\\ u & \text{ is } & \text{radiating.} \end{array} \right.$$ The quantity $k$ is the positive wavenumber, the radiation condition is the Sommerfeld condition, and $\Gamma_p$ are the boundaries of $\Omega_p$.
http://www.cs.bris.ac.uk/Publications/pub_master.jsp?id=2000029&top=Cryptography
We give estimates for the running-time of the function field sieve (FFS) to compute discrete logarithms in $\mathbb{F}_{p^n}^{\times}$ for small $p$. Specifically, we obtain sharp probability estimates that allow us to select optimal parameters in cases of cryptographic interest, without appealing to the heuristics commonly relied upon in an asymptotic analysis. We also give evidence that for any fixed field size, some fields may be weaker than others of a different characteristic or field representation, and we compare the relative difficulty of computing discrete logarithms via the FFS in such cases.
http://math.stackexchange.com/questions/31485/constructive-proof-of-kronecker-weber/31600
# Constructive Proof of Kronecker-Weber?

This question is motivated by my attempt at solving Proving $2 ( \cos \frac{4\pi}{19} + \cos \frac{6\pi}{19}+\cos \frac{10\pi}{19} )$ is a root of $\sqrt{ 4+ \sqrt{ 4 + \sqrt{ 4-x}}}=x$

Consider numbers expressible as exponential sums $$\sum_k a_k \exp(2 i \pi \theta_k),$$ with $a_k$, $\theta_k$ a finite list of rationals. These numbers are algebraic and satisfy some polynomial whose Galois group is abelian. The Kronecker-Weber theorem says the converse also holds. Given an abelian polynomial (especially quadratic or cubic), how can we solve it in terms of one of these sums? Basically I am looking for a proof of the Kronecker-Weber theorem that is constructive enough that I can compute with it.

- for quadratic polynomials, Gauss sum is enough - see en.wikipedia.org/wiki/Quadratic_Gauss_sum – user8268 Apr 7 '11 at 7:38
Some formatting/notation stuff: using $i$ as an index and the imaginary unit can be confusing. If you do, you might want to set the imaginary unit in Roman to show it's not a variable. Also, $2\pi\mathrm{i}$ makes more sense (to me) than $2\mathrm{i}\pi$. – joriki Apr 7 '11 at 8:12
@joriki, yeah that was silly using the same variable for different things. fixed it, thanks. – quanta Apr 7 '11 at 9:13
You are looking for an algorithm? My guess is that the proof of Kronecker-Weber already has one... – Aryabhata Apr 7 '11 at 14:07

$\def\QQ{\mathbb{Q}}$ As user8268 says, for quadratics, Gauss sums are enough. Let me make sure you understand that comment: Let $K = \QQ(\sqrt{D})$. For simplicity, I'll do the case that $D$ is a prime $p$ which is $1 \mod 4$ and leave you to make the necessary adjustments in the general case. Let $\zeta$ be a primitive $p$-th root of unity and let $\left( \frac{k}{p} \right)$ be the Legendre symbol. Set $g = \sum_{k=0}^{p-1} \left( \frac{k}{p} \right) \zeta^k$. Then $g^2 = p$. So this shows that $K$ is a subfield of $\mathbb{Q}(\zeta)$. If $D$ is not prime, or not $1 \mod 4$, you can either multiply together formulas for its prime factors, or you can use the Kronecker symbol, which will do it all for you in one swoop.

Now, let's look at cubics. Let the roots of your cubic be $\theta_1$, $\theta_2$, $\theta_3$, with the (abelian) Galois group acting cyclically. Let $K$ be $\mathbb{Q}(\theta_1)$; because of the presumed Galois structure, $\theta_2$ and $\theta_3$ are also in $K$. Let $L = K(\omega)$, where $\omega$ is a primitive third root of unity. Then $L/\QQ(\omega)$ is a Kummer extension: $L = \QQ(\omega)(\beta^{1/3})$. There is an almost explicit formula for $\beta$: We have $\beta = (\theta_1 + \omega \theta_2 + \omega^2 \theta_3)^3$. (Exercise: Check that $\beta$ is fixed by cyclically permuting the $\theta$'s, and thus lies in $\QQ(\omega)$.) Note that, if you expand this out, you get an expression for $\beta$ as various cyclically symmetric polynomials in the $\theta$'s times powers of $\omega$. The ring of cyclically symmetric polynomials in three variables is generated by the elementary symmetric functions, which are the coefficients of the minimal polynomial of $\theta_1$, and by $(\theta_1-\theta_2)(\theta_2-\theta_3)(\theta_3-\theta_1)$ (a square root of the discriminant), so, if you know these quantities, you can compute $\beta$. I say "almost" because this quantity could be $0$ if you are unlucky. In that case, replace $\theta_1$ by a different primitive element and try again. From now on, I'll assume you have found a $\beta$ that works.
Moreover, we can multiply $\beta$ by cubes of elements in $\mathbb{Q}(\omega)$ without changing the fact that $L = \QQ(\omega)(\beta^{1/3})$. We use this freedom to assume that every prime divides $\beta$ with multiplicity between $0$ and $2$. (Recall that $\mathbb{Z}[\omega]$ is a UFD.) So $$\beta = \epsilon \sqrt{-3}^k \prod_{p_i \equiv 1 \mod 3} \pi_i^{a_i} \overline{\pi_i}^{\overline{a}_i} \prod_{q_i \equiv 2 \mod 3} q_i^{b_i}$$ where $\epsilon$ is a unit, $p_i$ and $q_i$ are primes of $\mathbb{Q}$, the prime factorization of $p_i$ in $\mathbb{Z}[\omega]$ is $\pi_i \overline{\pi_i}$, and we have $0 \leq k, \ a_i,\ \overline{a_i}, \ b_i \leq 2$. Let bar denote the symmetry which exchanges $\omega$ and $\omega^{-1}$ while preserving the $\theta_i$'s. (This is consistent with the notation $(\pi, \overline{\pi})$ introduced above.) Let $N = \beta \overline{\beta}$. Observe that $N = M^3$, where $M = \sum \theta_i^2 - \sum_{i<j} \theta_i \theta_j$. So $M$ is rational, and you could extract $M$ directly from the minimal polynomial of $\theta$. So we see that $\beta \overline{\beta} = M^3$ and so $$(3)^{2k} \prod_{p_i \equiv 1 \mod 3} p_i^{a_i + \overline{a_i}} \prod_{q_i \equiv 2 \mod 3} q_i^{2 b_i}$$ is a cube. Together with the fact that the exponents are supposed to be between $0$ and $2$, we see that $k$ and $b_i$ are zero and that $(a_i, \overline{a_i})$ are $(0,0)$, $(1,2)$ or $(2,1)$. So $\beta$ must be of the form $$\epsilon \prod \pi_i \overline{\pi_i}^2$$ where $\epsilon$ is a unit and we may have switched the names of $\pi_i$ and $\overline{\pi}_i$ in some places. We can extract the primes $p_i$ occurring above by factoring $M$; that's quite a reasonable computation. To figure out exactly which unit we get and to figure out which of the two factors of $p_i$ gets squared, I think you honestly need to work in the extension fields. Theoretically, though, everything I have said is constructive.

This is a good place to pause. Our temporary goal is to show that $\beta^{1/3}$ is in a cyclotomic extension. (Just like before we wanted to show that $\sqrt{D}$ was in a cyclotomic extension, but this time we will have to do more work after that.) I will restrict to the case that $\beta = \pi \overline{\pi}^2$, just like I restricted myself before to the case that $D$ was a prime which was $1$ modulo $4$. I'll also want to assume that $\pi \equiv 1 \mod 3$; this tells you which of the six generators of the ideal $(\pi)$ I should focus on. (Analogously, I used the prime $p$, not $-p$, back in the quadratic case.) Removing these restrictions is further work, but it doesn't require too much new insight. Let $\zeta$ be a $p$-th root of unity. Let $\chi: (\mathbb{Z}/p)^* \to \{ 1, \omega, \omega^2 \}$ be a multiplicative character, and extend it to $\mathbb{Z}/p$ by $\chi(0)=0$. Define $$\gamma = \sum_{k=0}^{p-1} \chi(k) \zeta^k.$$ It is not too hard to show that $\gamma^3$ is fixed by $\mathrm{Gal}(\QQ(\omega, \zeta)/\QQ(\omega))$, so $\gamma^3$ is in $\QQ(\omega)$. Stickelberger's relation states that the ideal $(\gamma^3)$ is equal to the ideal $(\pi \overline{\pi}^2)$, after possibly switching $\pi$ and $\overline{\pi}$. I believe the right statement, if I choose $\pi$ to be $1 \mod 3$, should be that $\gamma^3 = \pi \overline{\pi}^2$, again up to the above switching, but I can't find a reference for this so be forewarned. Unfortunately, the Wikipedia article on Stickelberger's theorem is very abstract. Try these notes from my colleague Kartik for a more down to earth presentation.
Note that his $(m, l)$ are my $(3,p)$. So, up to getting the details right here, $\gamma^3 = \pi \overline{\pi}^2 = \beta$. So $\beta^{1/3}$ is in $\QQ(\omega, \zeta)$. That was the hard part; now we have to clean up the details. For simplicity, let's assume that $\beta = (\theta_1 + \omega \theta_2 + \omega^2 \theta_3)^3$ on the nose, without us having had to multiply or divide by any cubes. So $$\gamma = \beta^{1/3} = \theta_1+ \omega \theta_2 + \omega^2 \theta_3.$$ Let bar act on $\QQ(\omega, \zeta)$ by switching $\omega$ and $\omega^{-1}$, while fixing $\zeta$. Then we have: $$\overline{\gamma} = \theta_1 + \omega^2 \theta_2 + \omega \theta_3.$$ And, of course, $$\mathrm{Tr}(\theta_1) = \theta_1 + \theta_2 + \theta_3,$$ and this is a rational number which can be computed from the minimal polynomial of $\theta_1$. Solve these linear equations, and you get an expression for $\theta_1$ as an explicit element of $\QQ(\omega, \zeta)$, a cyclotomic field.

In summary, you need to perform a bunch of operations with symmetric polynomials to find out what $\beta$ is; you need to do factorization in $\mathbb{Z}[\omega]$ in order to find the $\pi_i$ (although you can turn it into a smaller factorization problem in $\mathbb{Z}$ by looking at $M$ instead); you need to write down a bunch of Gauss sums; and you need to do some final linear algebra. Stickelberger's relation appears to save you at exactly the crucial point.

This same outline will get you through $\mathbb{Z}/4$ extensions, and is a good exercise to make sure you understand the above. Once you get past there, life becomes much harder. There are two problems: (1) you may not have unique factorization but, before that, (2) the unit groups of your cyclotomic fields become infinite! I could leave dealing with units as an exercise as long as it was just a finite list to check off, but once the unit group gets infinite, the issue seems very hard to me. I tried to come up with a "brute force" proof of Kronecker-Weber a few years ago, and that is where I got stuck. I'd be curious to hear any ideas for how to get past this.

- Actually, writing this all out made me realize an idea for how to deal with the unit group. Roughly, my thinking is to mimic the above to get an element $\gamma$ such that $\gamma^m = \epsilon \beta$ and work out how the Galois group acts on $\epsilon$, then show that the only units where the action is of that form are roots of unity times $m$-th powers of units. $m$-th powers of units can be removed by changing $\beta$, and the remaining case is easy... Grrr, this is frustrating. It would probably make a good Monthly article, but I don't have time to write it up at that quality. – David Speyer Apr 7 '11 at 19:17 Update: This works out OK for $m$ odd, but seems to be trickier for powers of $2$. – David Speyer Apr 14 '11 at 12:02
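The quadratic step at the start, $g^2 = p$ for $p \equiv 1 \pmod 4$, is easy to check numerically. A minimal sketch of mine (not the answerer's; it assumes Python and uses floating-point roots of unity, so the identity holds up to roundoff):

```python
import cmath

def legendre(k, p):
    """Legendre symbol (k/p) via Euler's criterion, for odd prime p."""
    if k % p == 0:
        return 0
    return 1 if pow(k, (p - 1) // 2, p) == 1 else -1

p = 13                                    # 13 = 1 (mod 4)
zeta = cmath.exp(2j * cmath.pi / p)       # primitive p-th root of unity
g = sum(legendre(k, p) * zeta**k for k in range(p))
print(g * g)                              # ~ 13 + 0j, i.e. g^2 = p
```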
https://math.stackexchange.com/questions/882651/manifold-has-uncountable-many-smooth-stuctures-if-it-has-one
# Manifold has uncountably many smooth structures if it has one

This is Problem 1-6 of John Lee's Introduction to Smooth Manifolds:

Let $M$ be a nonempty topological manifold of dimension $n\geq1$. If $M$ has a smooth structure, show that it has uncountably many distinct ones. [Hint: first show that for any $s>0$, $F_s(x)=|x|^{s-1}x$ defines a homeomorphism from $\mathbb{B}^n$ to itself, which is a diffeomorphism if and only if $s=1$.]

What I tried: It can be proved that there is an atlas $\mathcal{A}$ (not maximal) which is compatible with the original smooth structure of $M$ and has the following property: $\forall(U,\psi)\in\mathcal{A}$, $\psi(U)=\mathbb{B}^n$. I tried to define $\psi'=F_s\circ\psi$ and hoped that $\{(U, \psi')\}$ would form a new atlas for $M$. But $$\varphi'\circ(\psi')^{-1}=F_s\circ\varphi\circ\psi^{-1}\circ F_s^{-1}$$ may not be a diffeomorphism. Any help, thanks.

• I think the idea is to modify only a single chart - find an atlas which contains a ball whose centre is not in any other chart, then compose the corresponding chart with $F_s$. Since $F_s$ is a diffeomorphism when restricted away from $0$ this should work out. – Anthony Carapetis Jul 31 '14 at 6:17
• @AnthonyCarapetis I think you are right. Would you please copy your comment as an answer so I can accept it. Thank you! – Danielsen Jul 31 '14 at 10:10

It took me a while to understand the great idea proposed by Anthony Carapetis. Since I think that other people may have the same doubt that I had, I decided to write a more detailed answer using his idea.

First of all, recall Proposition 1.17 of John Lee's Introduction to Smooth Manifolds (straightforward to prove): two smooth atlases for $M$ determine the same smooth structure if and only if their union is a smooth atlas. Note also that $$F_s: B(0,1) \rightarrow B(0,1)$$ is a homeomorphism that is not a diffeomorphism for every $s>0$ with $s\neq 1$, while $$F_s: B(0,1)\setminus \{0\} \rightarrow B(0,1)\setminus \{0\}$$ is a smooth diffeomorphism for every $s>0$ (where $B(y,r) = \{x\in \mathbb{R}^n : |x-y| < r\}$).

Let $\mathcal{A} = (\varphi_i,U_i)_{i \in I}$ be a smooth atlas of $M$. We will construct a smooth atlas $\mathcal{B}$ such that $\mathcal{A} \cup \mathcal{B}$ is not a smooth atlas. Note that for every $x \in M$, there exists a chart $(\varphi_x,U_x) \in \mathcal{A}$ satisfying $x \in U_x$. Using that $\varphi_x(U_x) \subset \mathbb{R}^n$ is open, there exists $\delta_x >0$ such that $B(\varphi_x(x), \delta_x) \subset \varphi_x (U_x)$. So, we can define a smooth atlas $$\mathcal{C} = \left\{(\varphi_x, \varphi^{-1}_x \left( B(\varphi_x(x),\delta_x) \right) )\right\}_{x \in M},$$ which determines the same smooth structure as $\mathcal{A}$. Moreover, note that for every $x\in M$, there is a function $\xi_x : B(\varphi_x(x), \delta_x) \rightarrow B(0,1)$ such that $\xi_x$ is a smooth diffeomorphism between $B(\varphi_x(x), \delta_x)$ and $B(0,1)$ (in fact we can define $\xi_x(y) = \frac{1}{\delta_x}(y-\varphi_x(x))$). Consequently, we are able to define a new smooth atlas $$\mathcal{D} = \left\{\left(\xi_x \circ \varphi_x, \varphi_x^{-1}\left(B(\varphi_x(x), \delta_x) \right)\right)\right\}_{x \in M},$$ which determines the same smooth structure as $\mathcal{A}$, and $B(0,1) = \xi_x \circ \varphi_x\left(\varphi_x^{-1}(B(\varphi_x(x),\delta_x ))\right)$ for every $x \in M$. Now, fix $x_0 \in M$; using that $M$ is Hausdorff, for every $y \in M$ with $y \neq x_0$ there exists a neighborhood $V_y$ of $y$ such that $x_0 \notin V_y$.
Therefore, $$\mathcal{E} = \left\{(\xi_{x_0} \circ \varphi_{x_0}, \varphi_{x_0}^{-1}(B(\varphi_{x_0}(x_0), \delta_{x_0})) )\right\} \cup \left\{(\xi_y \circ \varphi_y, \varphi_y^{-1}(B(\varphi_y(y), \delta_y)) \cap V_y )\right\}_{y \in M\setminus \{x_0\}}$$ is a smooth atlas which determines the same smooth structure as $\mathcal{A}$. So, we can finally use Carapetis' idea. Define $$\mathcal{B} = \left\{(F_s \circ \xi_{x_0} \circ \varphi_{x_0}, \varphi_{x_0}^{-1}(B(\varphi_{x_0}(x_0), \delta_{x_0})) )\right\} \cup \left\{ (\xi_y \circ \varphi_y, \varphi_y^{-1}(B(\varphi_y(y), \delta_y)) \cap V_y )\right\}_{y \in M\setminus \{x_0\}}.$$ Now we need to prove that $\mathcal{B}$ is a smooth atlas; the only nontrivial property that needs to be checked is that, for all $y \in M\setminus \{x_0\}$, $$F_s \circ \xi_{x_0} \circ \varphi_{x_0} \circ (\xi_{y} \circ \varphi_y)^{-1}: \xi_y \circ \varphi_y (Z_y) \rightarrow F_s \circ \xi_{x_0}\circ \varphi_{x_0} (Z_y)$$ (where $Z_y = \left( V_y \cap \varphi_y^{-1}(B(\varphi_y(y), \delta_{y})) \right) \cap \varphi_{x_0}^{-1}(B(\varphi_{x_0}(x_0), \delta_{x_0}))$) is a smooth diffeomorphism. This follows directly from the fact that $\xi_{x_0} \circ \varphi_{x_0} \circ (\xi_{y} \circ \varphi_y)^{-1}$ and $F_s\vert_{\xi_{x_0} \circ \varphi_{x_0}(Z_y)}$ are diffeomorphisms, because $0 \notin \xi_{x_0} \circ \varphi_{x_0}(Z_y)$, since $x_0 \notin V_y \supset Z_y$. Then $\mathcal{B}$ is a smooth atlas, but $\mathcal{B} \cup \mathcal{D}$ isn't a smooth atlas, because $$F_s = F_s \circ \xi_{x_0} \circ \varphi_{x_0} \circ ( \xi_{x_0} \circ \varphi_{x_0})^{-1} : B(0,1) \rightarrow B(0,1)$$ is not a smooth diffeomorphism. Using Proposition 1.17, we conclude that the smooth structure determined by $\mathcal{B}$ is different from the smooth structure determined by $\mathcal{A}$ (because the smooth structure determined by $\mathcal{A}$ equals the smooth structure determined by $\mathcal{D}$, which is different from the smooth structure determined by $\mathcal{B}$). Since this holds for every $s>0$ with $s\neq 1$, we can construct uncountably many different smooth atlases $\mathcal{B}$, all determining distinct smooth structures, which completes the proof.

Given a smooth structure, we would like to find a coordinate ball $(U_0,\phi_0)$ such that the center $p$ of $U_0$ is covered by only this chart. It's not hard to see that we need only find a point $p$ covered by one chart. Then we can replace this chart with $(U_0,F_s \circ \phi_0)$ and get a new smooth structure, which is not smoothly compatible with the original one. By Thm 1.15 and Lemma 1.10, we can find a countable, locally finite open refinement of the smooth structure consisting of precompact coordinate balls. This refinement is also a smooth structure; let's work with it. Then choose an arbitrary point $q$ on the manifold; it has a neighborhood that intersects only finitely many smooth charts, denote them $U_1,\dots,U_k$.

1) If $k=1$, then $q$ is only covered by $U_1$. We can replace $(U_1,\phi_1)$ with $(U_1,F_s\circ\frac{\phi_1 - \phi_1(q)}{r+|\phi_1(q)|})$, where $r$ is the radius of $\phi_1(U_1)$.

2) If $k>1$, then repeat the following procedure starting from $i=1$: if $U_i$ is covered by the remaining charts, then remove it from the refinement and get a new smooth structure; otherwise stop the procedure. Eventually, there is going to be a point $q'$ covered by only one precompact coordinate ball. If we stop before $i=k$, then $q' \neq q$; otherwise $q'=q$. Apply 1).
By 1) and 2), we find a smooth structure distinct from the original one. Since there are uncountably many $F_s$, we have proved that, given any smooth structure on a topological manifold, there exist uncountably many distinct smooth structures on the manifold.

• You are welcome to post an Answer to this two+ year-old Question, but you've structured your post as a Comment. If you wish, revise your post to benefit future Readers as a self-contained Answer. Review How do I write a good Answer? for guidelines. – hardmath Feb 6 '17 at 19:27
• $F_s$ is not a diffeomorphism, I don't understand how it yields a smooth structure. – ciceksiz kakarot Mar 3 '17 at 5:51
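The two facts from the hint, namely that $F_s$ is a homeomorphism with inverse $F_{1/s}$ but fails to be a diffeomorphism at $0$ when $s \neq 1$, can be illustrated numerically. A small sketch of my own (assuming Python with NumPy):

```python
import numpy as np

def F(s, x):
    """F_s(x) = |x|^(s-1) x on the ball, with F_s(0) = 0."""
    r = np.linalg.norm(x)
    return x if r == 0 else r**(s - 1) * x

s = 0.5
x = np.array([0.3, -0.2])
print(F(1 / s, F(s, x)), "vs", x)     # F_{1/s} inverts F_s: same vector

# Difference quotient at 0 along e1: |F_s(h e1)| / h = h^(s-1),
# which blows up as h -> 0 when s < 1 (and tends to 0 when s > 1).
for h in [1e-2, 1e-4, 1e-6]:
    e = np.array([h, 0.0])
    print(h, np.linalg.norm(F(s, e)) / h)
```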
https://www.physicsforums.com/threads/calculating-magnetic-flux-of-a-rod.909265/
# Homework Help: Calculating magnetic flux of a rod

Tags:

1. Mar 27, 2017

### yahelra

1. The problem statement, all variables and given/known data

B = 0.02 T, L = 40 cm, ω = 10 rad/s

a: Electromotive force between O and C = ?
b: If the electromotive force is the same, then what is the velocity?

I found two ways of solving it. See "3. The attempt at a solution".

2. Relevant equations

φ = BA
ε = -dφ(t)/dt

3. The attempt at a solution

So I found two ways:
1. What everyone would do - by the electric force.
2. By the magnetic flux. But wait - there is no area.

So I found that if, in problem a, I take the area the rod has swept out (part of a disc), I get the exact solution. For problem b, if I take the area of a triangle, I get the solution too.

My question: why is the second way also valid? How can it be explained (for both problems a and b)?

Thanks a lot!

Last edited: Mar 27, 2017

2. Mar 27, 2017

### TJGilb

Is that a line or a rod? The way the picture is presented, it doesn't look like there is any flux, let alone a change in flux. Did you give us the entire problem statement?

Last edited: Mar 27, 2017

3. Mar 28, 2017

### yahelra

I gave you the entire statement. And you are right: as I wrote, there is no magnetic flux because there is no area. I meant that I've found that we can take an imaginary area and it will give us the same solution. (See picture: -0.016 V.) My questions are: why is it true, and how can it be explained?
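One way to see why the "imaginary area" bookkeeping works for part a: for a rod rotating about one end, the rate of change of flux through the swept sector equals the line integral of the motional field $\vec{v}\times\vec{B}$ along the rod, so the two routes must agree. A quick numeric check with the given values (my own sketch, in Python; signs and orientation are ignored, and it assumes the standard result for a rod rotating about one end in a uniform field):

```python
B, L, w = 0.02, 0.40, 10.0   # tesla, meters, rad/s

# Route 1 ("electric"): integrate the motional EMF along the rod,
# EMF = integral_0^L B * w * r dr = (1/2) B w L^2.
emf_motional = 0.5 * B * w * L**2

# Route 2 (flux of the swept area): in time t the rod sweeps a sector of
# area A(t) = (1/2) L^2 (w t), so dPhi/dt = B * (1/2) L^2 * w.
emf_swept_area = B * 0.5 * L**2 * w

print(emf_motional, emf_swept_area)   # both 0.016 V, matching the thread
```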
http://www.koreascience.or.kr/article/ArticleFullRecord.jsp?cn=DBSHBB_2009_v46n3_621
COMMUTATIVITY AND HYPONORMALITY OF TOEPLITZ OPERATORS ON THE WEIGHTED BERGMAN SPACE

Lu, Yufeng; Liu, Chaomei

Abstract: In this paper we give necessary and sufficient conditions that two Toeplitz operators with monomial symbols acting on the weighted Bergman space commute. We also present necessary and sufficient conditions for the hyponormality of Toeplitz operators with some special symbols on the weighted Bergman space. All the results are stated in terms of the Mellin transform of the symbol.

Keywords: weighted Bergman space; Toeplitz operator; Mellin transform; commutativity; hyponormality

Language: English
http://mathoverflow.net/questions/106905/deciding-equivalence-of-regular-languages?sort=newest
# Deciding equivalence of regular languages

Given two regular expressions $R$ and $S$ on an alphabet $\Sigma$ it is possible to decide their equivalence as follows:

1. build two finite automata $M_R$ and $M_S$ such that $L(R) = L(M_R)$ and $L(S) = L(M_S)$
2. build an automaton $M$ such that $L(M) = (L(M_R) - L(M_S)) \cup (L(M_S) - L(M_R))$
3. test emptiness of $L(M)$ using a reachability algorithm on $M$

I was wondering if there is another way to decide equivalence. Suppose $M_R$ and $M_S$ are the minimal DFA (without epsilon-moves) such that $L(R) = L(M_R)$ and $L(S) = L(M_S)$. If they have a different number of states, then $R$ and $S$ are not equivalent. Otherwise let $m$ be the number of states of the two automata. Is it true that $L(M_R) = L(M_S)$ iff $\{x \in L(M_R) : |x| \leq m+1 \} = \{x \in L(M_S) : |x| \leq m+1 \}$? How can one prove that with the Myhill-Nerode theorem?

- The minimal automaton of a given language is unique, so if you can compute it, you can decide equivalence more easily just by checking whether the two automata are isomorphic. – Emil Jeřábek Sep 11 '12 at 12:49
Actually I think the computational complexity of isomorphism checking is slower than the polynomial time algorithm above. – Benjamin Steinberg Sep 11 '12 at 13:21
? Isomorphism checking for DFA can be done in more or less linear time (depending on the computational model), whereas checking all words whose length is bounded by the size of the automata needs exponential time. – Emil Jeřábek Sep 11 '12 at 13:37
Yes, I agree that isomorphism checking is fast. I think the algorithm above, which is the standard one, is usually used because one doesn't have to minimize first (although minimizing is pretty fast). Emptiness testing is essentially linear; step 2 above is essentially quadratic. I think minimizing is slightly above quadratic but I don't remember. – Benjamin Steinberg Sep 11 '12 at 13:48
Thank you very much... But actually I wasn't bothering about computational complexity, but just about the correctness of the last statement, and how to prove it if it is true. ;) – Alberto Sep 11 '12 at 13:53

If $M_R$ and $M_S$ have $m,n$ states respectively, then one has that $L(M_R)=L(M_S)$ iff they have the same words of length at most $mn-1$. This is essentially the content of your algorithm. Suppose that they are different and let $w$ be a minimal length word accepted by one of the machines and not the other. If $w$ has length greater than $mn-1$, then when you run $w$ from the initial state of $M_R\times M_S$, you will get a loop. This loop will give you a factorization $w=xuy$ where $u$ reads a loop in both $M_R$ and $M_S$. So then $xy$ will be accepted in one of the machines and not the other and have smaller length.

Added. I believe this is a counterexample. Let $m$ be an integer. Consider over a unary alphabet the languages $R=\lbrace a^n\mid n\not\equiv m-2 \pmod m\rbrace$ and $S=\lbrace 1,a,\dots,a^{m-3}\rbrace\cup \lbrace a^n\mid n\geq m-1\rbrace$. Then both of these are recognized by an $m$-state automaton (I believe both are minimal) and the shortest word in one, but not the other, is $a^{2m-2}$, which has length $2m-2$. I hope this works. My original $R$ is not what I meant to write. I fixed it. – Benjamin Steinberg Sep 12 '12 at 0:20
That is indeed correct. Restating a little bit, we can also say what follows. The index $\mathsf{ind}(L)$ of a regular language is the size of the smallest DFA that accepts $L$.
Let $m >1$ be a natural number and define $L_1 = \{a^n : n \text{ not multiple of } m\}$ and $L_2 = \{a^n : n \geq 1,\ n \neq m\}$. Then $L_1$ and $L_2$ are regular languages over the unary alphabet $\{a\}$, $\mathsf{ind}(L_1) = \mathsf{ind}(L_2) = m+2$, $\{x \in L_1 : |x| < 2m \} = \{x \in L_2 : |x| < 2m \}$ but $L_1 \neq L_2$. –  Alberto Sep 12 '12 at 13:25
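Benjamin Steinberg's counterexample is easy to check by brute force, since over a unary alphabet a word is determined by its length. A minimal sketch of my own (in Python; $m$ is an arbitrary choice):

```python
m = 6

# R = { a^n : n != m-2 (mod m) } and S = everything except a^{m-2}.
in_R = lambda n: n % m != (m - 2) % m
in_S = lambda n: n != m - 2

# Lengths where exactly one of the two languages contains a^n.
diffs = [n for n in range(5 * m) if in_R(n) != in_S(n)]
print(diffs[:3])                 # [10, 16, 22] for m = 6
assert diffs[0] == 2 * m - 2     # shortest distinguishing length is 2m - 2
```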
https://math.stackexchange.com/questions/2999055/show-that-riemann-integrable-function-f-on-a-b-must-be-a-bounded-function?noredirect=1
Show that Riemann integrable function $f$ on $[a,b]$ must be a bounded function. [duplicate]

I see these two: The first uses a very different definition of Riemann integrable functions. The second post offers casual intuition, not a formal proof. The definitions I'm working with:

A Riemann Sum is defined for a partition $\mathcal{P}$ of $[a,b]$ as: \begin{align*} \mathcal{R}(f, \mathcal{P}) &= \sum\limits_{j=1}^k f(s_j) \Delta_j \\ \end{align*} The function is Riemann integrable if Riemann sums converge to a number $\ell$ as the mesh sizes of the partitions approach zero. A function $f$ is Riemann integrable if for any $\epsilon > 0$, there must exist some $\delta > 0$ such that every partition $\mathcal{P}$ satisfies: \begin{align*} m(\mathcal{P}) < \delta &\implies |\mathcal{R}(f, \mathcal{P}) - \ell| < \epsilon \\ \end{align*} Intuitively, if $f$ is unbounded, it looks like that Riemann sum will not converge, but I can't see how to formally demonstrate that.

(Tagged real-analysis; marked as duplicate by Paramanand Singh, Nov 15 '18 at 3:09.)

How are you choosing the $s_j$? The definition in Stephen Abbott's book allows us to choose the $s_j$ freely so long as $\Delta_j<\delta$. Let $P=\{x_i\}_1^n$ with $x_i < x_{i+1}$ and $\Delta_i<\delta$ for all $i$. Then if $f$ is bounded on each $[x_i,x_{i+1}]$, it is bounded on $[a,b]$; hence if it is unbounded, then there exists $[x_i,x_{i+1}]$ such that $f$ is unbounded on $[x_i,x_{i+1}]$. Suppose $f$ is unbounded and choose an $i$ such that $f$ is unbounded on $[x_i,x_{i+1}]$. For simplicity we assume it is unbounded above (the case for unbounded below is similar). Now fix $s_j\in[x_j,x_{j+1}]$ for all $j\not=i$. Then we can make $\sum_{j=1}^nf(s_j)\Delta_j$ as large as we wish by varying $s_i$. For if $Q>0$, then choose $s_i$ such that $$f(s_i)>\dfrac{Q-\sum_{\substack{j=1\\j\not=i}}^nf(s_j)\Delta_j}{\Delta_i}.$$ Then $$\sum_{j=1}^nf(s_j)\Delta_j>\sum_{\substack{j=1\\j\not=i}}^nf(s_j)\Delta_j+\dfrac{Q-\sum_{\substack{j=1\\j\not=i}}^nf(s_j)\Delta_j}{\Delta_i}\Delta_i=Q.$$ This shows for any $\delta$ that there exists no real number $A$ such that for every tagged partition $(P,\{s_k\})$ with $\Delta_k<\delta$ we find $\mathcal{R}(f,P)\in(A-\epsilon,A+\epsilon)$. Contrapositively, if $f$ is Riemann integrable, then it is bounded.

Assuming $f$ is not bounded, we get that for every $K>0$ and every partition $\mathcal{P}$ of $[a,b]$ we can find $x\in [a,b]$ such that for every $\Delta_j$ of this partition we have $f(x)\cdot \Delta_j>K$, so $f$ is not Riemann integrable. So we conclude that if $f$ is Riemann integrable, it is bounded.
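The tag-dependence at the heart of both answers is easy to see numerically: with an unbounded integrand, the same partition produces wildly different Riemann sums depending on which tags are chosen, so no single limit $\ell$ can work. A small demonstration of my own (in Python; using $f(x)=1/\sqrt{x}$ on $(0,1]$ with $f(0)=0$):

```python
import numpy as np

def riemann_sum(f, xs, tags):
    # Tagged Riemann sum: sum of f(tag) * (subinterval length).
    return sum(f(t) * (b - a) for a, b, t in zip(xs[:-1], xs[1:], tags))

f = lambda x: 0.0 if x == 0 else x ** -0.5   # unbounded near 0
n = 1000
xs = np.linspace(0.0, 1.0, n + 1)            # uniform partition of [0, 1]

right = riemann_sum(f, xs, xs[1:])           # right-endpoint tags: ~ 2
# Same partition, but a tag very close to 0 in the first subinterval:
greedy = riemann_sum(f, xs, [xs[1] * 1e-12] + list(xs[1:-1]))
print(right, greedy)                         # second sum is enormous
```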
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 54, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9943879246711731, "perplexity": 769.7412051603931}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987829507.97/warc/CC-MAIN-20191023071040-20191023094540-00363.warc.gz"}
http://math.stackexchange.com/questions/270297/difference-between-tensor-and-tensor-field
# Difference between tensor and tensor field?

I couldn't get the difference between a tensor and a tensor field. I'm learning from Barrett O'Neill's Semi-Riemannian Geometry, and here are the definitions: if the map $A:(V^*)^r \times V^s\to K$ is $K$-multilinear, then $A$ is a tensor on $V$. $M$ is a manifold and $\mathfrak{X}(M)$ is the set of vector fields on $M$, which is an $F(M)$-module. (Here is the point that I didn't understand:) If $A$ is a tensor on $\mathfrak{X}(M)$, then we say $A$ is a tensor field on $M$. What is the difference between a tensor and a tensor field?

Usually, a tensor field on a manifold $M$ is an assignment of a tensor to each point of $M$, just like a vector field on $M$ gives you a vector at each particular point of $M$. – Lemon Jan 4 '13 at 14:23

I got it. I have another question. We said $A$ is a tensor on $V$, but how did we say $A$ is a tensor on $M$? Shouldn't it be a tensor on $\mathfrak{X}(M)$? – Serkan Yaray Jan 4 '13 at 14:29

Where is it said that $A$ is a tensor on $M$? O'Neill's book says precisely that $A$ is a tensor on $\mathfrak{X}(M)$ and, equivalently, that $A$ is a tensor field on $M$. – Willie Wong Jan 4 '13 at 14:31

Oh, pardon, you are right. I wrote it wrong. – Serkan Yaray Jan 4 '13 at 14:37

The difference in calling the same object $A$ a "tensor over $\mathfrak{X}(M)$" as opposed to "a tensor field over $M$" is that the former emphasizes the fact that we have an algebraic object: a tensor over some module, while the latter emphasizes the fact that underlying the module there is some manifold and geometry is going on there. Calling something a tensor field instead of a tensor forces you to remember that $\mathfrak{X}(M)$ is not just some arbitrary module, but that its elements can be identified with smooth sections of the tangent bundle of some manifold. These additional structures are occasionally useful.
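As a concrete illustration (my own, not from the thread): a $(0,2)$-tensor field on a manifold assigns a bilinear form to each point. The polar-coordinates metric on $\mathbb{R}^2\setminus\{0\}$ is a standard example; the sketch below represents it as a function from points to matrices, which is exactly the "a tensor at every point" picture from the comments:

```python
import numpy as np

def g(p):
    """Metric tensor field in polar coordinates (r, theta):
    at the point p = (r, theta) it returns the matrix of a bilinear form,
    encoding ds^2 = dr^2 + r^2 dtheta^2."""
    r, theta = p
    return np.array([[1.0, 0.0],
                     [0.0, r**2]])

# A single tensor: the value of the field at one chosen point.
A = g((2.0, np.pi / 4))

# Feed it two tangent vectors at that point -> a number (multilinearity).
u, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(u @ A @ v)   # 0.0: u and v are g-orthogonal at this point
```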
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9737249612808228, "perplexity": 275.39481303191025}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398462709.88/warc/CC-MAIN-20151124205422-00142-ip-10-71-132-137.ec2.internal.warc.gz"}
https://hungrybeagle.com/index.php/ca12/ca12-3-derivatives/231-ca12-3-1-inverse-derivative
## 3.1 Derivative of the Inverse of a Function

There is a relationship between the derivative of a function and the derivative of its inverse. You will explore this as part of today's lesson. You will also begin looking at the inverses of trigonometric functions. In order to make the inverse a function, we need to restrict the domain of the trigonometric function.

Resources

• Notes

Assignment

• p162 #20
• p162 #1-5, 7, 10

Attachments: 3.1.notes.pdf (4025 kB)
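The relationship in question is the standard inverse-function rule, $(f^{-1})'(x) = 1/f'(f^{-1}(x))$. Here is a quick symbolic check of it with sympy (my own sketch, not part of the lesson notes), using $f(x)=\tan x$ on its restricted domain $(-\pi/2, \pi/2)$ so that $f^{-1}=\arctan$:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.tan(x)        # restricted to (-pi/2, pi/2) so it is invertible
finv = sp.atan(x)    # its inverse on that domain

lhs = sp.diff(finv, x)                   # d/dx arctan(x) = 1/(1 + x^2)
rhs = 1 / sp.diff(f, x).subs(x, finv)    # 1 / f'(f^{-1}(x))
print(sp.simplify(lhs - rhs))            # 0 -> the two expressions agree
```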
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9475820660591125, "perplexity": 1465.05851778115}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867085.95/warc/CC-MAIN-20180525102302-20180525122302-00576.warc.gz"}
http://www.computer.org/csdl/trans/td/2003/03/l0203-abs.html
IEEE Transactions on Parallel & Distributed Systems, Issue No. 03 - March 2003 (vol. 14), pp. 203-212

ABSTRACT

In a two- or three-dimensional image array, the computation of the Euclidean distance transform (EDT) is an important task. With the increasing application of 3D voxel images, it is useful to consider the distance transform of a 3D digital image array. Because the EDT computation is a global operation, it is prohibitively time consuming when performing the EDT for image processing. In order to provide efficient transform computations, parallelism is employed. In this paper, we first derive several important geometric relations and properties among parallel planes. We then develop a parallel algorithm for the three-dimensional Euclidean distance transform (3D-EDT) on the EREW PRAM computation model. The time complexity of our parallel algorithm is $O(\log^2 N)$ for an $N \times N \times N$ image array, and this is currently the best known result. A generalized parallel algorithm for the 3D-EDT is also proposed. We implement the proposed algorithms sequentially, the performance of which exceeds that of the existing algorithms (proposed by Yamada and Toriwaki). Finally, we develop the corresponding parallel programs on both the emulated EREW PRAM model computer and the IBM SP2 to verify the speed-up properties of the proposed algorithms.

INDEX TERMS: Computer vision, Euclidean distance, distance transform, image processing, parallel algorithm, three-dimension, EREW PRAM model.

CITATION: Yu-Hua Lee, Shi-Jinn Horng, Jennifer Seitzer, "Parallel Computation of the Euclidean Distance Transform on a Three-Dimensional Image Array", IEEE Transactions on Parallel & Distributed Systems, vol. 14, no. 3, pp. 203-212, March 2003, doi:10.1109/TPDS.2003.1189579
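For readers unfamiliar with the transform itself: the EDT replaces every background voxel with its Euclidean distance to the nearest feature voxel. A minimal sequential illustration using SciPy follows (my own sketch; this shows what the transform computes, not the paper's parallel algorithm):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# A tiny 3D "image": nonzero = background, a single zero feature voxel
# at the center of a 5x5x5 array.
img = np.ones((5, 5, 5), dtype=bool)
img[2, 2, 2] = False

# Each nonzero entry becomes the Euclidean distance to the nearest zero.
edt = distance_transform_edt(img)
print(edt[2, 2, 2])   # 0.0 at the feature itself
print(edt[2, 2, 4])   # 2.0, two voxels away along one axis
print(edt[0, 0, 0])   # sqrt(12) ~ 3.464 at the corner
```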
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9496577382087708, "perplexity": 2553.221280405816}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064869.18/warc/CC-MAIN-20150827025424-00101-ip-10-171-96-226.ec2.internal.warc.gz"}
http://mathonline.wikidot.com/injective-and-surjective-linear-maps-examples-1
# Injective and Surjective Linear Maps Examples 1

Recall from the Injective and Surjective Linear Maps page that a linear map $T : V \to W$ is said to be injective if:

• $T(u) = T(v)$ implies that $u = v$; equivalently,
• $\mathrm{null} (T) = \{ 0 \}$.

Furthermore, the linear map $T : V \to W$ is said to be surjective if:

• For every $w \in W$ there exists a $v \in V$ such that $T(v) = w$; equivalently,
• $\mathrm{range} (T) = W$.

We will now look at some examples regarding injective/surjective linear maps.

## Example 1

Let $T \in \mathcal L ( \wp (\mathbb{R}), \mathbb{R})$ be defined by $T(p(x)) = \int_0^1 2p'(x) \: dx$. Prove whether or not $T$ is injective, surjective, or both.

We will first determine whether $T$ is injective. Suppose that $p(x) \in \wp (\mathbb{R})$ and $T(p(x)) = 0$. Then we have that:

(1)
\begin{align} \quad \int_0^1 2p'(x) \: dx = 0 \\ \quad 2 \int_0^1 p'(x) \: dx = 0 \end{align}

Note that if $p(x) = C$ where $C \in \mathbb{R}$ is a nonzero constant, then $p'(x) = 0$ and hence $2 \int_0^1 p'(x) \: dx = 0$. Hence $\mathrm{null} (T) \neq \{ 0 \}$, and so $T$ is not injective.

We will now determine whether $T$ is surjective. Suppose that $C \in \mathbb{R}$. We want to determine whether or not there exists a $p(x) \in \wp (\mathbb{R})$ such that:

(2)
\begin{align} \quad \int_0^1 2p'(x) \: dx = C \end{align}

Take the polynomial $p(x) = \frac{C}{2}x$. Then $p'(x) = \frac{C}{2}$, and hence:

(3)
\begin{align} \quad \int_0^1 2p'(x) \: dx = \int_0^1 C \: dx = Cx \biggr \rvert_0^1 = C \end{align}

Therefore $T$ is surjective.

## Example 2

Suppose that $S_1, S_2, ..., S_n$ are injective linear maps for which the composition $S_1 \circ S_2 \circ ... \circ S_n$ makes sense. Prove that $S_1 \circ S_2 \circ ... \circ S_n$ is injective.

Let $u$ and $v$ be vectors in the domain of $S_n$, and suppose that:

(4)
\begin{align} \quad S_1 \circ S_2 \circ ... \circ S_n (u) = S_1 \circ S_2 \circ ... \circ S_n (v) \\ \quad (S_1 \circ S_2 \circ ... \circ S_{n-1})(S_n(u)) = (S_1 \circ S_2 \circ ... \circ S_{n-1})(S_n(v)) \end{align}

Since $S_1, S_2, ..., S_{n-1}$ are injective, applying injectivity to each map in turn (peeling off $S_1$, then $S_2$, and so on) gives $S_n (u) = S_n(v)$. Since $S_n$ is also injective, this implies that $u = v$, so $S_1 \circ S_2 \circ ... \circ S_n$ is injective.

## Example 3

Let $T$ be a linear map from $V$ to $W$, and suppose that $T$ is injective and that $\{ v_1, v_2, ..., v_n \}$ is a linearly independent set of vectors in $V$. Show that $\{ T(v_1), ..., T(v_n) \}$ is a linearly independent set of vectors in $W$.

Consider the following equation (noting that $T(0) = 0$):

(5)
\begin{align} a_1T(v_1) + a_2T(v_2) + ... + a_nT(v_n) = 0 \\ T(a_1v_1 + a_2v_2 + ... + a_nv_n) = T(0) \end{align}

Now since $T$ is injective, this implies that $a_1v_1 + a_2v_2 + ... + a_nv_n = 0$. However, $\{ v_1, v_2, ..., v_n \}$ is a linearly independent set in $V$, which implies that $a_1 = a_2 = ... = a_n = 0$. Therefore $\{ T(v_1), T(v_2), ..., T(v_n) \}$ is a linearly independent set in $W$.

## Example 4

Let $T$ be a linear map from $V$ to $W$, and suppose that $T$ is surjective and that the set of vectors $\{ v_1, v_2, ..., v_n \}$ spans $V$. Show that $\{ T(v_1), T(v_2), ..., T(v_n) \}$ spans $W$.

Let $w \in W$.
Since $T$ is surjective, there exists a vector $v \in V$ such that $T(v) = w$. Since $\{ v_1, v_2, ..., v_n \}$ spans $V$, the vector $v$ can be written as a linear combination of this set, so for some $a_1, a_2, ..., a_n \in \mathbb{F}$ we have $v = a_1v_1 + a_2v_2 + ... + a_nv_n$, and so:

(6)
\begin{align} \quad T(a_1v_1 + a_2v_2 + ... + a_nv_n) = w \\ \quad a_1T(v_1) + a_2T(v_2) + ... + a_nT(v_n) = w \end{align}

Therefore any $w \in W$ can be written as a linear combination of $\{ T(v_1), T(v_2), ..., T(v_n) \}$, and so $\{ T(v_1), T(v_2), ..., T(v_n) \}$ spans $W$.
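For finite-dimensional maps these properties reduce to rank conditions, which are easy to check numerically. A small sketch (my own, using numpy): a linear map given by a matrix $A : \mathbb{R}^n \to \mathbb{R}^m$ is injective iff $\mathrm{rank}(A) = n$ (trivial null space) and surjective iff $\mathrm{rank}(A) = m$ (range is all of $\mathbb{R}^m$).

```python
import numpy as np

def classify(A):
    """Report injectivity/surjectivity of the map x -> A @ x via rank."""
    m, n = A.shape
    r = np.linalg.matrix_rank(A)
    return {"injective": r == n, "surjective": r == m}

# R^2 -> R^3: can be injective, never surjective (rank <= 2 < 3).
print(classify(np.array([[1., 0.], [0., 1.], [1., 1.]])))
# R^3 -> R^2: can be surjective, never injective.
print(classify(np.array([[1., 0., 2.], [0., 1., 3.]])))
```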
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 6, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9997307658195496, "perplexity": 146.06542400041164}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144722.77/warc/CC-MAIN-20200220100914-20200220130914-00411.warc.gz"}
https://resonaances.blogspot.com/2014/10/weekend-plot-stealth-stops-exposed.html?showComment=1412693098005
## Saturday, 4 October 2014

### Weekend Plot: Stealth stops exposed

This weekend we admire the new ATLAS limits on stops - hypothetical supersymmetric partners of the top quark. For a stop promptly decaying to a top quark and an invisible neutralino, the new search excludes the mass range between m_top and 191 GeV. These numbers do not seem impressive at first sight, but let me explain why it's interesting.

No sign of SUSY at the LHC could mean that she is dead, or that she is merely hiding. Indeed, the current experimental coverage has several blind spots where supersymmetric particles, in spite of being produced in large numbers, induce signals in a detector too subtle to be easily spotted. For example, based on the observed distribution of events with a top-antitop quark pair accompanied by large missing momentum, ATLAS and CMS put the lower limit on the stop mass at around 750 GeV. However, these searches are inefficient if the stop mass is close to that of the top quark, 175-200 GeV (more generally, for m_top+m_neutralino ≈ m_stop). In this so-called stealth stop region, the momentum carried away by the neutralino is too small to distinguish stop production from the standard model process of top quark production. We need another trick to smoke out light stops.

The ATLAS collaboration followed a theorist's suggestion to use spin correlations. In the standard model, gluons couple either to 2 left-handed or to 2 right-handed quarks. This leads to a certain amount of correlation between the spins of the top and the antitop quark, which can be seen by looking at angular distributions of the decay products of the top quarks. If, on the other hand, a pair of top quarks originates from a decay of spin-0 stops, the spins of the pair are not correlated. ATLAS measured spin correlation in top pair production; in practice, they measured the distribution of the azimuthal angle between the two charged leptons in the events where both top quarks decay leptonically. As usual, they found it in good agreement with the standard model prediction. This allows them to deduce that there cannot be too many stops polluting the top quark sample, and to place the limit of 20 picobarns on the stop production cross section at the LHC, see the black line on the plot. Given the theoretical uncertainties, that cross section corresponds to the stop mass somewhere between 191 GeV and 202 GeV. So, the stealth stop window is not completely closed yet, but we're getting there.

Anonymous said...
There is a related theoretical study here: http://arXiv.org/abs/arXiv:1407.1043

Robert L. Oldershaw said...
How about a new "confinement" gambit? There are zillions of sparticles, but they are all confined to unobservable extra dimensions so you cannot see them? Hey, it works for theologians!

Alex Small said...
Jester, since you keep reporting on negative results for different BSM theories, I am curious what sorts of BSM ideas you think are worth exploring?

Jester said...
Currently, there's no strong motivation for any particular BSM framework. Various theories of dark matter are worth exploring, because dark matter is one thing from beyond the SM that we're sure exists (though of course we don't know if dark matter is accessible at the LHC). Pragmatically, we should explore as wide a range of theories and signatures as possible... we may always find something unexpected.

Ervin Goldfain said...
Jester, your answer to Alex's question is well formulated and makes a lot of sense.
I would only add that, in my opinion, one must continue to explore the inner structure of the Standard Model (SM) - as we know it today - to better understand its dynamics and properties. Both the SM and the Renormalization Group are non-trivial frameworks leading to complex systems of non-linear equations. Little is known about the behavior of strongly coupled gauge theory (aside from lattice simulations) and non-perturbative phenomena. Many questions remain open on the Higgs sector... and so on.

Sven said...
Hi, indeed the question *which* BSM theory should be pursued is an important and complicated one (DM and possibly neutrino masses alone are sufficient justification that we must look for BSM physics). In the absence of any clear BSM hint from the LHC, one should look at the various pros and cons of the BSM alternatives, and also at which measurements they predicted (or not). In this sense the prediction of the Higgs boson mass by SUSY theories remains very interesting. :-)

Anonymous said...
Jester, are we all 100% completely fully sure that dark matter is BSM? Would anybody bet his life on it?

Jester said...
Yes, I would. In the worst case dark matter is right-handed neutrinos, but formally that's still BSM.

Alex Small said...
The astronomers seem pretty sure DM isn't made out of standard model particles. They seem pretty sure that GR is correct on astronomical and cosmological length scales. If the data and methods for those suppositions are sound, BSM particles are pretty much what it has to be.

Chris said...
Dark matter is not baryonic - the ratio of the 2nd to 3rd acoustic peak of the CMB tells us that.

Carlos said...
Let me stress something that perhaps needs to be said regarding the LHC limits on direct stop production. You mention that the current limit is of the order of 700 GeV, unless the stop mass is close to the top mass. I would say that, assuming that the neutralino is lighter than the stop and provided the neutralino mass is larger than 250 GeV, there is essentially no limit on the stop mass coming from direct stop production. This means that the real bound on the stop mass is 250 GeV, unless the mass is close to the top mass, in which case it can be even smaller. Let me also emphasize that 250 GeV is far away from 700 GeV (actually it is far closer to the top mass, 173 GeV), so it is wrong to say that the limit is 700 GeV unless the spectrum is compressed or the stop mass is close to the top mass.

cb said...
Talking about BSM, understanding the Higgs sector better, and the issue of the stability of the electroweak vacuum: I hope Jester will find some time to comment on the experimental search for a more conservative U(1)B-L gauge symmetry that could extend the SM. @Sven, there are also interesting Higgs mass post-dictions* from Solid Theoretical Research In Natural Geometric Structures** without SUSY that could be interesting... (*arxiv.org/abs/1208.1030 and arxiv.org/abs/1408.5367; **STRINGS definition according to Juan Maldacena in physics.princeton.edu/strings2014/slides/Maldacena.pdf ;-)

Cormac said...
Lovely post - I hope the title didn't raise a few flags at the NSA!

Anonymous said...
Jester, Chris, yes, the CMB peak height ratios point towards dark matter. But could there not be another explanation for the peak heights? I am astonished and amazed that you would bet your life on such an indirect argument, with all the pitfalls due to measurement errors and possibly forgotten terms in the oscillation equations. I hope that you are right!

Jester said...
There are numerous independent arguments for dark matter: galactic rotation curves, dynamics of galaxy clusters, large scale structure, CMB peaks, the bullet cluster, baryon acoustic oscillations. Any one of these would not be completely convincing on its own, but taken together they are just enough to bet a life on :)

Alex Small said...
The most convincing evidence is that the independent signals don't just point to missing mass. They also point to roughly the same amount of missing mass.

Theo Nieuwenhuizen said...
@ Jester: "dark matter is one thing from beyond the SM that we're sure it exists" - I strongly doubt the existence of WIMPs or axions. They would mess up the Galaxy; LCDM heavily fails at galactic scales. The problem is not "dirty gastrophysics" but a failed theory poorly rescued by more and more epicycles.

@ Jester: "There are numerous independent arguments for dark matter: galactic rotation curves, dynamics of galaxy clusters, large scale structure, CMB peaks, bullet cluster, baryon acoustic oscillations." True, there must be something out there. We still have a candidate in active neutrinos (a fit of lensing data for the galaxy cluster A1689 predicts eV-range masses) and sterile ones of either eV range or perhaps 7 keV. At the galactic scale, there may be an important role for dark baryons, related to nonlinear structure formation at those "small" scales.

Chris said...
In my book sterile neutrinos are BSM. Also, don't forget that the acoustic oscillation peak ratio was a *prediction* of non-baryonic DM. It is most certainly not a measurement error (if I am not mistaken, at least 4 collaborations have seen it) and anything you might have to add to the oscillation equations will be BSM stuff, too. I am singling out this one measurement because even MOND people acknowledge that nonbaryonic DM is the only way to explain it.

Filippo said...
Primordial black holes are not excluded as DM candidates, and I wouldn't say they are BSM.

andrew said...
Dark matter phenomena - absolutely. DM v. inaccuracies in the conventional equations of gravity or how we approximate them? Not so sure. There are solid cases to be made for each.

I also think that there is a great deal to be said for probing the whys of the SM. And, outside DM, if you are looking for the highest percentage of papers where experiment doesn't produce the expected result, the answer has to be in QCD. Maybe we're just operationalizing the equations wrong. But there are more than enough anomalous QCD results out there for us to be missing something - probably something subtle - and I wouldn't be at all surprised to find out that there are some corners that we've missed.

There's lots of just plain brute-force work to be done pinning down SM (or maybe even BSM) neutrino physics - the CP violation parameter, Majorana v. Dirac, normal v. inverted hierarchy, absolute neutrino masses, the quadrants of one of the theta angles, and neutrinoless double beta decay exclusions, just to start with. Once you have those data points, you have a complete set of reasonably accurately known SM parameters and you can really get serious about exploring deeper inner workings.

Another tempting experimental target in terms of SUSY and BSM physics is to experimentally measure the running of the three gauge coupling constants in order to see if it more closely matches the SM or SUSY. Some of the distinctions should be accessible at LHC energies.
My intuition is also that we will probably find out that once the gauge coupling beta functions are modified to incorporate a running of the quantum-gravity Newton's constant term, voila, we will find out that there is SM gauge unification at a renormalized Planck-scale value. A little 1% tweak here and a 1% tweak there, over all of the orders of magnitude from the TeV to the GUT scale, is really all it takes.

vmarko said...
Andrew, just a small note --- gravity is not renormalizable, so it would be really hard to make sense of a "running of the quantum gravity Newton's constant".

Anonymous said...
vmarko, gravity is perturbatively non-renormalizable, but it may be renormalizable in a "would-be" non-perturbative theory. One may speculate that Newton's constant runs with the energy scale in such a non-perturbative model.

vmarko said...
Anonymous, AFAIK, the concept of renormalizability is always connected to the perturbation expansion. Given a full nonperturbative theory, you can perform a perturbation expansion at two different energy scales, and compare the corresponding coupling constants, thereby deriving their flow. The theory is called renormalizable if the comparison between the couplings at two different scales can be made at all; otherwise the theory is nonrenormalizable.

The fact that general relativity is (perturbatively) nonrenormalizable means that one cannot compare couplings at two different energy scales (the number of coupling constants is different at different energies, and the RGEs cannot be formulated). Given that, I don't understand what it means to say that Newton's constant runs with the energy scale in a nonperturbative theory. In order to even recognize what Newton's constant is, one needs to perform the perturbation expansion of this full theory, and one cannot construct an RGE for it, due to (perturbative) nonrenormalizability. Anyway, we are getting off-topic here... :-)

Anonymous said...
vmarko, asymptotic safety scenarios in quantum gravity are related to my comment above; see for example: http://www.scholarpedia.org/article/Asymptotic_Safety_in_quantum_gravity and http://www.physics.utoronto.ca/~manber/On%20the%20running%20of%20the%20coupling%20constants%209.pdf

vmarko said...
Anonymous, OK, I agree, in the asymptotic safety approach to QG one can indeed make sense of the running of G. But I'd say that AS is probably the only model with this property --- i.e., it's not a generic thing for QG, but specific to AS only. Best, :-) Marko

Ervin Goldfain said...
"But I'd say that AS is probably the only model with this property --- i.e. it's not a generic thing for QG, but specific to AS only." There are a few quantum gravity scenarios with fancy names that may directly (or indirectly) relate to Asymptotic Safety: Causal Dynamical Triangulation (CDT), Horava-Lifshitz gravity, Quantum Einstein Gravity (QEG), and models based on multifractional spacetimes. But the "jury is still out" and it is presently unclear whether any of these scenarios will stand the test of time.

vmarko said...
It wasn't my intention to get dragged into a discussion about this, but... CDT can have AS only as an effective field theory approximation, which breaks down at small enough scales (since CDT is fundamentally piecewise-linear). So the flow of G is not defined beyond the approximation scale. Horava gravity has big problems with the scalar mode. QEG and "multifractional spacetimes" are just frameworks and buzzwords, rather than well-defined precise models of QG.

Ervin Goldfain said...
vmarko, please read my last sentence; I am not disagreeing with you.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8580235838890076, "perplexity": 1126.408222454982}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571536.89/warc/CC-MAIN-20220811224716-20220812014716-00383.warc.gz"}
https://preparmy.com/chemistry/chemical-reaction/
# Chemical Reaction

## Chemical Reaction

The process in which substances (reactants) react to form new compounds (products) is known as a chemical reaction. This process involves the breaking of old bonds and the formation of new bonds. If the bond energies of the reactants are greater than the bond energies of the products, the reaction occurs with the evolution of energy in the form of heat. In the opposite condition, absorption of energy takes place.

### Properties of a Chemical Reaction

A chemical reaction can be recognized with the help of any of the following observations.

1. Change in state
2. Change in color
3. Evolution of a gas
4. Change in temperature
5. Formation of a precipitate

### Chemical Equation

The short representation of a chemical reaction with the help of symbols of elements or formulas of compounds is called a chemical equation.

1. The substances or compounds which take part in a reaction are called reactants. These are written on the left hand side (LHS) with a plus sign (+) in between them.
2. The substances or compounds formed in the course of the reaction are called products. These are written on the right hand side (RHS) with a plus sign (+) in between them.
3. The arrow head (→) points towards the products, which shows the direction of the reaction.

e.g., zinc reacts with sulphuric acid to form zinc sulphate and hydrogen gas.

Zinc + Sulphuric Acid → Zinc Sulphate + Hydrogen

### Rules for Writing a Balanced Chemical Equation

i) The number of atoms of each element in the reactants should be equal to the number of atoms of that element in the products (according to the law of conservation of mass).

Fe + H2O → Fe3O4 + H2

As per this rule, the above equation is unbalanced and is correctly written as

3Fe + 4H2O → Fe3O4 + 4H2

(A small programmatic illustration of this balancing rule appears at the end of this article.)

ii) The physical states of reactants and products should be mentioned along with their chemical formulas in parentheses. The above equation can be written in accordance with rule ii) as

3Fe (s) + 4H2O (g) → Fe3O4 (s) + 4H2 (g)

## Types of Chemical Reactions

### Combination Reaction

A reaction in which a single new product is formed from two or more reactants is called a combination reaction. Such reactions may occur between elements or compounds. For example, the formation of slaked lime by the reaction of calcium oxide with water:

CaO (s) + H2O (l) → Ca(OH)2 (aq)

Other examples of combination reactions are

1. Burning of coal: C (s) + O2 (g) → CO2 (g)
2. Formation of water from H2 (g) and O2 (g): 2H2 (g) + O2 (g) → 2H2O (l)

### Decomposition Reaction

A chemical reaction in which a single reactant (compound) breaks down to give simpler products is called a decomposition reaction. Decomposition reactions require energy in the form of heat, light or electricity. Therefore, decomposition reactions are of three types.

#### Thermal Decomposition

When a decomposition is carried out by heating, it is called thermal decomposition. For example, the decomposition of calcium carbonate to calcium oxide and carbon dioxide upon heating:

CaCO3 (s) → CaO (s) + CO2 (g)

Another example of thermal decomposition is the decomposition of lead nitrate to lead oxide, nitrogen dioxide (brown fumes) and oxygen:

2Pb(NO3)2 (s) → 2PbO (s) + 4NO2 (g) + O2 (g)

#### Photolysis

When a decomposition reaction is brought about by sunlight, it is called photolysis. For example,

2AgCl (s) → 2Ag (s) + Cl2 (g)

• The above reaction is used in black and white photography, since silver chloride or silver bromide turns grey in sunlight.
• When metal salts are heated, their ions emit various colors of light.
• A decomposition reaction is the reverse of a combination reaction.
• The decomposition reaction of calcium carbonate is used in various industries, e.g., in the manufacturing of cement.

#### Electrolysis

When a decomposition reaction is brought about by electricity, it is called electrolysis:

2H2O (l) → 2H2 (g) + O2 (g)

### Displacement Reaction

A reaction in which a more reactive element displaces a less reactive element from its compound present in the dissolved state is called a displacement reaction. For example, when an iron nail is suspended in an aqueous solution of copper sulphate for 20 minutes, it becomes brownish and the blue color of the solution fades slightly. This indicates that iron has displaced copper from the copper sulphate solution:

Fe (s) + CuSO4 (aq) → FeSO4 (aq) + Cu (s)

Zinc and lead are more reactive elements than copper, so they also displace Cu from aqueous solutions of its compounds.

### Double Displacement Reaction

A chemical reaction in which there is an exchange of ions between the reactants to give new substances is called a double displacement reaction:

Na2SO4 (aq) + BaCl2 (aq) → BaSO4 (s)↓ + 2NaCl (aq)

In the above reaction, a precipitate is formed. So, this reaction is also known as a precipitation reaction.

### Neutralisation Reaction

Acids and bases neutralize each other to form the corresponding salts and water. This reaction is called a neutralisation reaction. If the acid and base are both strong, 57.1 kJ of heat is released per mole during the process:

HCl + NaOH → NaCl + H2O

### Isomerisation or Rearrangement Reaction

A chemical reaction in which the atoms of the molecule of a compound undergo rearrangement is called an isomerisation or rearrangement reaction. It is generally seen in the case of organic compounds. For example, the isomerisation of ammonium cyanate into urea:

NH4CNO → NH2CONH2

### Reversible and Irreversible Reactions

A chemical reaction which proceeds in both directions is called a reversible reaction. For example, the formation of ammonia from nitrogen and hydrogen by the Haber process:

N2 + 3H2 ⇌ 2NH3

A chemical reaction which proceeds only in one direction is called an irreversible reaction:

2NaOH + H2SO4 → Na2SO4 + 2H2O

### Hydrolysis Reaction

This is the reaction between salts of a weak acid or a weak base and water. Due to its high dielectric constant, water has a very strong hydrating tendency. It dissolves many ionic compounds. However, certain covalent and some ionic compounds are hydrolysed in water:

CH3COONa + H2O → CH3COOH + NaOH

### Photochemical Reaction

These chemical reactions take place in the presence of sunlight:

6CO2 + 12H2O → C6H12O6 + 6H2O + 6O2

The rate of a photochemical reaction is affected by the intensity of light. A photosensitizer is a substance which brings about a reaction without undergoing any chemical change itself. In the process of photosynthesis, chlorophyll acts as a photosensitizer.

### Exothermic and Endothermic Reactions

Reactions occurring with the evolution of energy are called exothermic reactions, e.g., respiration, decomposition, burning of natural gas; reactions which absorb energy in order to occur are called endothermic reactions, e.g., digestion.

A + B → C + Δ (exothermic)
A + B → C − Δ (endothermic)

### Oxidation and Reduction

Oxidation: It is defined as a chemical reaction in which a substance gains oxygen or any other electronegative element, or loses hydrogen or electrons, and shows an increase in oxidation number.
2Cu + O2 → 2CuO (copper is oxidized to CuO)

CuO + H2 → Cu + H2O (hydrogen is oxidized to H2O)

Reduction: It is defined as a chemical reaction in which a substance gains hydrogen, an electropositive element, or electrons, or loses oxygen or an electronegative element, and shows a decrease in oxidation number.

Oxidising Agent and Reducing Agent: The acceptor of electrons is the oxidising agent (oxidant). The donor of electrons is the reducing agent (reductant). In short, a substance which is oxidized, or whose oxidation number increases, acts as a reducing agent, while a substance which is reduced, or whose oxidation number decreases, acts as an oxidising agent.
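As flagged under rule i) earlier, balancing can be automated: atom conservation is a homogeneous linear system, and the balanced coefficients span its null space. A minimal sketch with sympy follows (my own illustration; the element-count matrix is hand-built for the Fe + H2O → Fe3O4 + H2 example, with reactant columns positive and product columns negative):

```python
from sympy import Matrix, lcm

# Columns: Fe, H2O, Fe3O4, H2 ; rows: Fe, H, O atom counts.
# A balanced equation is a positive vector in the null space.
A = Matrix([
    [1, 0, -3,  0],   # Fe
    [0, 2,  0, -2],   # H
    [0, 1, -4,  0],   # O
])
v = A.nullspace()[0]
coeffs = v * lcm([term.q for term in v])   # clear the denominators
print(coeffs.T)  # Matrix([[3, 4, 1, 4]]) -> 3 Fe + 4 H2O -> Fe3O4 + 4 H2
```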
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8696519732475281, "perplexity": 3405.1073247930744}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571982.99/warc/CC-MAIN-20220813172349-20220813202349-00344.warc.gz"}
https://docs.plasmapy.org/en/latest/api/plasmapy.formulary.parameters.magnetic_energy_density.html
# magnetic_energy_density

plasmapy.formulary.parameters.magnetic_energy_density(B: Unit('T'))

Calculate the magnetic energy density.

Aliases: ub_

Parameters: B (Quantity) – The magnetic field in units convertible to tesla.

Returns: E_B – The magnetic energy density in units of joules per cubic meter.

Return type: Quantity

Warns: UnitsWarning – If units are not provided, SI units are assumed.

Notes

The magnetic energy density is given by:

$E_B = \frac{B^2}{2 \mu_0}$

The motivation behind having two separate functions for magnetic pressure and magnetic energy density is that it allows greater insight into the physics being considered by the user, and thus more readable code.

See also: magnetic_pressure – returns an equivalent Quantity, except in units of pascals.

>>> from astropy import units as u
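The doctest above is cut off in this copy; here is a hedged continuation (my own sketch) that evaluates $E_B = B^2/(2\mu_0)$ directly with astropy units, which is the formula the documented function computes. The printed value comes from my own arithmetic, not from running plasmapy:

```python
from astropy import units as u
from astropy.constants import mu0

B = 0.1 * u.T
E_B = (B**2 / (2 * mu0)).to(u.J / u.m**3)
print(E_B)   # ~3978.87 J / m3, i.e. B^2 / (2 mu_0)

# The documented function should agree (assuming plasmapy is installed):
# from plasmapy.formulary.parameters import magnetic_energy_density
# magnetic_energy_density(0.1 * u.T)
```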
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9357684254646301, "perplexity": 1218.356294199167}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039603582.93/warc/CC-MAIN-20210422100106-20210422130106-00015.warc.gz"}
http://clay6.com/qa/15314/in-an-n-type-semiconductor-the-fermi-energy-level-lies
In an n-type semiconductor, the Fermi energy level lies

(a) in the forbidden energy gap, nearer to the conduction band
(b) in the forbidden energy gap, nearer to the valence band
(c) in the middle of the forbidden energy gap
(d) outside the forbidden energy gap

Answer: (a) in the forbidden energy gap, nearer to the conduction band. The donor levels in an n-type semiconductor lie just below the conduction band, which pulls the Fermi level up from mid-gap towards the conduction band.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9050884246826172, "perplexity": 4143.849039619762}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698540928.63/warc/CC-MAIN-20161202170900-00083-ip-10-31-129-80.ec2.internal.warc.gz"}
https://plainmath.net/5640/factor-the-given-polynomial-15x-2-plus-7x-4
# Factor the given polynomial: $15x^2+7x-4$

Polynomial factorization

Step 1

For $ax^2+bx+c$:

i) Multiply the leading coefficient and the constant term, i.e., $a$ and $c$.

ii) Find two numbers whose product equals the product from step i) and whose sum is the middle coefficient $b$.

iii) Split the middle term accordingly and write the factorization by taking the common factor out of the polynomial.

Step 2

Given polynomial: $15x^2+7x-4$

$15x^2+12x-5x-4 \qquad [\,12\cdot(-5) = -60 = 15\cdot(-4) \text{ and } 12+(-5)=7\,]$

$3x(5x+4) - 1(5x+4)$

$(3x-1)(5x+4)$
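The result is easy to check symbolically; a quick sketch with sympy (my own, not part of the original solution):

```python
from sympy import symbols, factor, expand

x = symbols('x')
print(factor(15*x**2 + 7*x - 4))      # (3*x - 1)*(5*x + 4)
print(expand((3*x - 1)*(5*x + 4)))    # 15*x**2 + 7*x - 4, confirming the answer
```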
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8837369084358215, "perplexity": 1063.663915261056}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989616.38/warc/CC-MAIN-20210513234920-20210514024920-00356.warc.gz"}
http://smartservo.org/en/wind-turbine-efficiency-cal-en/
# Calculation of wind energy and wind turbine efficiency

The function of a wind turbine is to extract energy from the wind. The wind is flowing air, which has speed and therefore kinetic energy. When the wind flows through the wind turbine blades, part of the kinetic energy is transferred to the wind turbine to make it rotate. The wind turbine gains energy and the wind loses energy, so the wind speed after flowing through the wind turbine will slow down. This article explains the calculation of wind energy and of wind turbine power and efficiency, and provides a video to make it easier to understand.

As the wind turbine rotates through a full circle, the area swept by the blades is called the "swept area" (denoted $A$). When the wind speed is $V$ and the air density is $\rho$, the air passing through the swept area in $T$ seconds is a column of volume $AVT$ and mass $m = \rho AVT$, so its kinetic energy $E$ is:

Total wind energy: \begin{aligned} E&=\frac{1}{2}mV^2 \\ &=\frac{1}{2}\rho AVT \cdot V^2\\ &=\frac{1}{2}\rho AV^3 \cdot T \end{aligned}

But what we care about is power, so divide by the time $T$ to get the wind power $P$:

Wind power: $P=\frac{E}{T} =\frac{1}{2}\rho AV^3 ............(1)$

The function of a wind turbine is to convert wind energy into mechanical energy of the rotating main shaft. Its power $P'$ can be expressed as the rotation speed of the main shaft multiplied by the torque:

Wind turbine power: $P' =$ angular speed (rad/sec) × torque (N·m)    ……………..(2)

The efficiency of a wind turbine, $C_p$, is the ratio of the total wind energy converted to mechanical energy of the main shaft of the wind turbine. The higher the efficiency, the more mechanical energy can be obtained. It is defined as follows:

Wind turbine efficiency: $C_p = \frac{P'}{P} ............(3)$

If the wind turbine efficiency is known, formula (1) can be substituted into formula (3) to obtain the wind turbine power $P'$:

Wind turbine power: $P' = C_p \times \frac{1}{2}\rho AV^3 ............(4)$

Therefore, the power of the wind turbine is proportional to the efficiency $C_p$, to the swept area, and to the third power of the wind speed $V$. So, in order to obtain more energy, in addition to improving efficiency, wind turbines are also made larger and larger to increase the swept area, and are set up where the wind is stronger, for example at higher altitude or offshore.
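Equation (4) is straightforward to evaluate; here is a small Python sketch (my own, with illustrative numbers: sea-level air density 1.225 kg/m³, a 50 m blade radius, a 10 m/s wind, and an assumed efficiency Cp = 0.4):

```python
import math

def wind_turbine_power(cp, rho, radius, v):
    """Equation (4): P' = Cp * 0.5 * rho * A * V^3, with swept area A = pi r^2."""
    area = math.pi * radius**2
    return cp * 0.5 * rho * area * v**3

p = wind_turbine_power(cp=0.4, rho=1.225, radius=50.0, v=10.0)
print(f"{p/1e6:.2f} MW")   # ~1.92 MW; doubling V to 20 m/s gives 8x this
```

The cubic dependence on V is why siting matters so much: a modest increase in wind speed dwarfs any realistic gain in Cp.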
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 6, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9902172684669495, "perplexity": 596.6365023387716}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487640324.35/warc/CC-MAIN-20210618165643-20210618195643-00248.warc.gz"}
https://calculator.tutorvista.com/simplifying-fractions-calculator.html
Simplifying Fractions Calculator

Fractions are ratios of whole numbers, and they can be proper, improper or mixed. Simplifying a fraction means looking for an equivalent fraction that has the smallest numbers we can get for the numerator and the denominator. Simplifying fractions is useful when we want to make an answer look simpler or make the numerator and the denominator small. Sometimes simplifying fractions before adding, subtracting, dividing and multiplying can make the calculation less complicated and less confusing.

## Steps for the Fractions Calculator

Step 1: Read the problem and list the given values.

Step 2: Enter the values in the particular tab to get the simplified values.

## Problems on the Fractions Calculator

1. Simplify the fractions given below.

$\frac{2}{6} + \frac{6}{12}$, $\frac{2}{6} - \frac{6}{12}$, $\frac{2}{6} \times \frac{6}{12}$, $\frac{2}{6} \div \frac{6}{12}$

Step 1: List the given expressions (above).

Step 2: Enter the values and simplify.

$\frac{2}{6} + \frac{6}{12} = \frac{5}{6}$
$\frac{2}{6} - \frac{6}{12} = \frac{-1}{6}$
$\frac{2}{6} \times \frac{6}{12} = \frac{1}{6}$
$\frac{2}{6} \div \frac{6}{12} = \frac{2}{3}$

2. Simplify the fractions given below.

$\frac{3}{12} + \frac{6}{18}$, $\frac{3}{12} - \frac{6}{18}$, $\frac{3}{12} \times \frac{6}{18}$, $\frac{3}{12} \div \frac{6}{18}$

Step 1: List the given expressions (above).

Step 2: Enter the values and simplify.

$\frac{3}{12} + \frac{6}{18} = \frac{7}{12}$
$\frac{3}{12} - \frac{6}{18} = \frac{-1}{12}$
$\frac{3}{12} \times \frac{6}{18} = \frac{1}{12}$
$\frac{3}{12} \div \frac{6}{18} = \frac{3}{4}$
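Python's standard library does exactly this reduction; a quick sketch (my own) reproducing problem 1 with the fractions module, which keeps every result in lowest terms automatically:

```python
from fractions import Fraction

a, b = Fraction(2, 6), Fraction(6, 12)   # auto-reduced to 1/3 and 1/2
print(a + b)   # 5/6
print(a - b)   # -1/6
print(a * b)   # 1/6
print(a / b)   # 2/3
```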
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9900366067886353, "perplexity": 1216.3202966330168}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945669.54/warc/CC-MAIN-20180423011954-20180423031954-00606.warc.gz"}
http://www.math-only-math.com/worksheet-on-reflection-in-different-axis.html
# Worksheet on Reflection in Different Axis

Practice the questions given in the worksheet on reflection in different axes. The questions are based on how to find the co-ordinates of the image of a point under reflection in different axes.

1. The point M (2, -9) is reflected in the x-axis; find the co-ordinates of the image M'. This M' is further reflected in the y-axis. Find the co-ordinates of the image M''. Also, find the image of M (2, -9) in the origin.

2. If point P (-2, 7) is reflected in the origin, then find the co-ordinates of the image P'. It is again reflected in the y-axis. Find the image of P', i.e., P''.

3. Write the reflection axis when the co-ordinates of the object and image are known.

Object               Image
(i)               P (5, -2)            P' (-5, -2)
(ii)              Q (-1, 3)            Q' (-1, -3)
(iii)             R (-2, -4)           R' (2, 4)
(iv)              S (0, -3)            S' (0, 3)

4. Point P (h, k) is reflected in the x-axis and point P' is obtained as the image of P. Now P' is reflected in the y-axis and the image of P' is obtained as P''. Find the co-ordinates of P''.

5. If point M (-3, 8) is reflected in the origin, then find the co-ordinates of the image M'. It is again reflected in the y-axis. Find the image of M', i.e., M''.

Answers for the worksheet on reflection in different axes are given below to check the exact answers of the above questions on reflection.

1. M' (2, 9), M'' (-2, 9); in origin (-2, 9)

2. P' (2, -7), P'' (-2, -7)

3. (i) Y (ii) X (iii) origin (iv) X

4. P'' (-h, -k)

5. M' (3, -8), M'' (-3, -8)
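These rules are mechanical enough to verify in code; a small sketch (my own, not part of the worksheet) encoding the three reflections and checking question 1:

```python
def reflect_x(p):       # reflection in the x-axis: (x, y) -> (x, -y)
    return (p[0], -p[1])

def reflect_y(p):       # reflection in the y-axis: (x, y) -> (-x, y)
    return (-p[0], p[1])

def reflect_origin(p):  # reflection in the origin: (x, y) -> (-x, -y)
    return (-p[0], -p[1])

M = (2, -9)
M1 = reflect_x(M)                   # M'  = (2, 9)
M2 = reflect_y(M1)                  # M'' = (-2, 9)
print(M1, M2, reflect_origin(M))    # (2, 9) (-2, 9) (-2, 9), matching answer 1
```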
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8253331184387207, "perplexity": 2671.6145698738615}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794863570.21/warc/CC-MAIN-20180520131723-20180520151723-00287.warc.gz"}
https://mpl.bibliocommons.com/item/show/2254030075
A New Approach to the Local Embedding Theorem of CR-Structures for $n \geq 4$ (The Local Solvability for the Operator $\bar\partial_b$ in the Abstract Sense)

Book - 1987

This book is aimed at researchers in complex analysis, several complex variables, or partial differential equations. Kuranishi proved that any abstract strongly pseudoconvex CR-structure of real dimension $\geq 9$ can be locally embedded in a complex euclidean space. For the case of real dimension $=3$, there is the famous Nirenberg counterexample, but the cases of real dimension $=5$ or $7$ were left open. The author of this book establishes the result for real dimension $=7$ and, at the same time, presents a new approach to Kuranishi's result.

Publisher: Providence, R.I. : American Mathematical Society, 1987
ISBN: 9780821824283, 0821824287
Branch Call Number: QA331 .A475 1987
Characteristics: xv, 257 p. ; 26 cm
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8376771211624146, "perplexity": 2411.0597755385434}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948623785.97/warc/CC-MAIN-20171218200208-20171218222208-00406.warc.gz"}
https://rjlipton.wordpress.com/2019/10/27/quantum-supremacy-at-last/
What it takes to understand and verify the claim

[Photo cropped from a 2014 Wired source]

John Martinis of U.C. Santa Barbara and Google is the last author of a paper published Wednesday in Nature that claims to have demonstrated a task executed with minimum effort by a quantum computer that no classical computer can emulate without expending Herculean—or Sisyphean—effort. Today we present a lay understanding of the claim and discuss degrees of establishing it.

There are 76 other authors of the paper. The first 75 are alphabetical, then comes Hartmut Neven before Martinis. Usually pride of place goes to the first author, but that depends on size. Martinis is also the corresponding author. The cox in a rowing race rides at the rear. We have discussed aspects of papers with a huge number of authors here.

Three planks of a quantum supremacy claim are:

1. Build a physical device capable of a nontrivial sampling task.
2. Prove that it gains advantage over known classical approaches.
3. Prove that comparable classical hardware cannot gain such advantage.

Scott Aaronson not only has made two great posts on these and many other aspects of the claim, he independently proposed in 2015 the sampling task that was programmed, and he analyzed it in a foundational paper with Lijie Chen of MIT. Researchers at Google had already been thinking along those lines, and they anchored the team composed from numerous other institutions as well. As if on cue—just a couple days before Wednesday's announcement—a group from IBM put out a post and paper taking issue with the argument for the third plank.

Any ${n}$-qubit quantum circuit ${C}$ together with an input ${x}$ to ${C}$ induces a probability distribution ${D_C}$ on ${\{0,1\}^n}$. Because it will not matter if we prepend up to ${n}$ NOT gates to ${C}$, we may suppose ${x = 0^n}$. Then ${C(0^n)}$ is a unit complex vector of length ${N = 2^n}$ with entries ${a_z}$ corresponding to possible outputs ${z \in \{0,1\}^n}$. Then the probability of getting ${z}$ by a final measurement of all qubits is

$\displaystyle p_z = D_C(z) = |a_z|^2.$

Next we consider probability distributions ${D_1}$ that are generated uniformly at random by the following process, for some ${r \geq n}$ and taking ${R = 2^r}$:

    for ${i = 1}$ to ${R}$:
        choose a ${z \in \{0,1\}^n}$ uniformly at random;
        increment its probability ${D_1(z)}$ by ${\frac{1}{R}}$.

Here we intend ${r}$ to be the number of binary nondeterministic gates in the circuit. In place of Hadamard gates the experimental circuits get their nondeterminism from these three single-qubit gates (ignoring global phase for ${\mathbf{Y}^{1/2}}$ in particular):

$\displaystyle \mathbf{X}^{1/2} = \frac{1}{2}\begin{bmatrix} 1 + i & 1 - i \\ 1 - i & 1 + i \end{bmatrix},~ \mathbf{Y}^{1/2} = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix},~ \mathbf{W}^{1/2} = \frac{1}{2}\begin{bmatrix} 1 + i & - i\sqrt{2} \\ \sqrt{2} & 1 + i \end{bmatrix}.$

Here ${\mathbf{W} = \frac{1}{\sqrt{2}}(\mathbf{X} + \mathbf{Y})}$, where ${\mathbf{Y} = \begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix}}$ and ${\mathbf{X}}$ is another name for NOT. The difference from using Hadamard gates matters to technical analysis of the distributions ${D_C}$, but the interplay between quantum nondeterministic gates and classical random coins remains in force.
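These matrices can be checked mechanically. A small numpy sketch (my own, not from the post) verifies that each is unitary, that ${\mathbf{X}^{1/2}}$ and ${\mathbf{W}^{1/2}}$ square exactly to ${\mathbf{X}}$ and ${\mathbf{W}}$, and that ${\mathbf{Y}^{1/2}}$ squares to ${\mathbf{Y}}$ up to the global phase ${-i}$ mentioned above:

```python
import numpy as np

s2 = np.sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
W = (X + Y) / s2

Xh = np.array([[1 + 1j, 1 - 1j], [1 - 1j, 1 + 1j]]) / 2
Yh = np.array([[1, -1], [1, 1]]) / s2
Wh = np.array([[1 + 1j, -1j * s2], [s2, 1 + 1j]]) / 2

for G in (Xh, Yh, Wh):                   # all three are unitary
    assert np.allclose(G @ G.conj().T, np.eye(2))
assert np.allclose(Xh @ Xh, X)           # (X^{1/2})^2 = X
assert np.allclose(Wh @ Wh, W)           # (W^{1/2})^2 = W
assert np.allclose(Yh @ Yh, -1j * Y)     # (Y^{1/2})^2 = Y up to phase -i
print("all gate identities check out")
```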
The choice of ${\mathbf{X}^{1/2}}$, ${\mathbf{Y}^{1/2}}$, or ${\mathbf{W}^{1/2}}$ is itself uniformly random at each point where a single-qubit gate is used, except for not repeating the same gate on the same qubit, and those choices determine ${C}$. Now we can give an initial statement of the task tailored to what the paper achieves:

Given randomly-generated quantum circuits ${C}$ as inputs, distinguish ${D_C}$ with high probability from any ${D_1}$.

In more detail, the object is to take a number ${\delta > 0}$ and a moderately large integer ${k}$, both dictated by practical elements of the experiment, and fulfill this task statement:

Given randomly-generated ${C}$, generate samples ${z_1,\dots,z_k \in \{0,1\}^n}$ such that ${\frac{N}{k}(D_C(z_1) + \cdots + D_C(z_k)) \geq 1 + \delta}$.

It's important to note that there are two stages of randomness: one over ${C}$ which chooses ${D_C}$, and then the stage of measuring after (perhaps imperfectly) executing ${C(0^n)}$. The latter can be repeated to get a large sample of strings ${z}$ for a given ${C}$. The nature of the former stage matters most to justifying how to interpret tests of the samples and to closing loopholes. Our ${D_1}$ does not signify having the uniform distribution in the latter sampling, but rather covers classical alternatives in the former stage that (with overwhelming probability) belong to a class we call ${\mathcal{D}_1}$. The ${D_C}$ for random ${C}$ will (again w.o.p.) belong to a class ${\mathcal{D}_2}$ which we explain next.

## The World Series of Quantum Computing

In honor of the baseball World Series, we offer a baseball analogy. To make differences sharper to see, we take ${r = n}$, so ${R = N = 2^n}$. This is not what the experiment does: their biggest instance has 20 layers totaling ${r = 1,\!113}$ nondeterministic single-qubit gates (plus ${430}$ two-qubit gates) on the ${n = 53}$ qubits. But let us continue.

We are distributing ${N}$ units of probability among ${N}$ “batters” ${z \in \{0,1\}^n}$. A batter who gets two units hits a double, three units makes a triple, and so on. The key distinction is between the familiar batting average and the slugging average, which averages all the bases scored with hits:

• The chance of making an out—that is, getting no units—is ${(\frac{N-1}{N})^N}$, which is approximately ${\frac{1}{e} = 0.367879\dots}$
• The chance of hitting a single is also about ${\frac{1}{e}}$, leaving ${1 - \frac{2}{e}}$ as the frequency of getting an extra-base hit—which makes ${z}$ a “heavy hitter.”
• From ${k}$ batters chosen uniformly at random, their expected batting average will be ${1 - \frac{1}{e} = 0.632\dots}$
• Their expected slugging average, however, will just be ${1}$: they expect ${k}$ units to be distributed among them.

Thus with respect to a random ${D_1}$, and without any knowledge of ${D_1}$, a chosen team of ${k}$ hitters cannot expect to have a joint slugging average higher than ${1}$. Moreover, for any fixed ${\delta > 0}$, the chance of getting a slugging average higher than ${1 + \delta}$ tails away exponentially in ${k}$ (provided ${N}$ also grows).

With respect to ${D_C}$, however, a quantum device can do better. Google's device programs itself given ${C}$ as the blueprint. So it just executes ${C}$ and measures all qubits to sample the output. Finding its own heavy hitters is what a quantum circuit is good at: the probability of getting a hitter who hits a triple is magnified by ${3}$ compared to a uniform choice.
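Here is a Monte Carlo sketch of the analogy (our illustration; the toy size and team size are arbitrary). It runs the ${D_1}$ process with ${R = N}$, then compares a uniformly chosen team against a team drafted by sampling from the distribution itself, the analogue of a device sampling its own ${D_C}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 14                     # small toy size (the experiment has n = 53)
N = 2 ** n
R = N                      # take r = n, so R = N, as in the analogy

# Generate D_1: drop R units of probability 1/R on N batters uniformly.
units = rng.multinomial(R, np.ones(N) / N)   # units[z] = N * D_1(z)

print("out rate (no units):", np.mean(units == 0), "~ 1/e =", 1 / np.e)

k = 100_000
# Team chosen uniformly at random, knowing nothing about D_1:
blind = rng.integers(0, N, size=k)
print("blind batting avg  :", np.mean(units[blind] > 0))   # ~ 1 - 1/e
print("blind slugging avg :", np.mean(units[blind]))       # ~ 1

# Team drafted by sampling from the distribution itself, the stand-in
# for a quantum device sampling its own D_C:
drafted = rng.choice(N, size=k, p=units / R)
print("drafted slugging avg:", np.mean(units[drafted]))    # ~ 2
```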
Moreover, ${C}$ will never output a string with zero units—a “can't miss” property denied to a classical reader of ${C}$. For large ${N}$ the probability distribution of units approaches the density ${xe^{-x}}$ and the slugging expectation is approximately

$\displaystyle \int_0^\infty x^2 e^{-x}\,dx = 2.$

That is, a team ${z_1,\dots,z_k}$ drafted by sampling from random quantum circuits ${C}$ expects to have a slugging average near ${2}$. This defines the class ${\mathcal{D}_2}$. If ${C}$ works perfectly, the average will surpass ${1 + \delta}$ whenever ${0 < \delta < 1}$ with near certainty as ${N}$ grows.

Google's circuits have up to ${r = 20n}$, so ${R \gg N}$. Then the “can't miss” aspect of the quantum advantage is less sharp but the ${xe^{-x}}$ approximation is closer and the idea of ${\mathcal{D}_2}$ is the same. The nature of ${\mathcal{D}_2}$ can actually be seen from point intensities in speckle patterns of laser light.

## Real-World Execution

The practical challenge is that the implementation of ${C}$ is not perfect. The consequence of an error in the final output is severe. The heavy-hitter outputs ${z}$ of a random ${C}$ are generally not bit-wise similar, so sampling their neighbors is like sampling the uniform distribution. As the paper says, “A single bit or phase flip over the course of the algorithm will completely shuffle the speckle pattern and result in close to zero fidelity.” Their circuits are sufficiently random that effects of sporadic errors over millions of samples can be modeled by a simple equation using quantum mixed states.

We shortcut the paper's physical analysis by drawing on John Preskill's illustration of a de-polarizing channel in chapter 3 of his wonderful online notes on quantum computation to reach the same equation (1). The modeling has informative symmetry when the errors of a bit flip, phase flip, or both are considered equally likely with probability ${\frac{p}{3}}$. The action on the entangled pair ${|\Phi^+\rangle}$ in the Bell basis is given by the density matrix evolution ${\rho = |\Phi^+\rangle\langle\Phi^+| \mapsto \rho'}$ where

$\displaystyle \begin{array}{rcl} \rho' &=& (1 - p)|\Phi^+\rangle\langle\Phi^+| \;+\; \frac{p}{3}\left(|\Psi^+\rangle\langle\Psi^+| \;+\; |\Phi^-\rangle\langle\Phi^-| \;+\; |\Psi^-\rangle\langle\Psi^-|\right)\\ ~~~\\ &=& (1 - p') |\Phi^+\rangle\langle\Phi^+| \;+\; p'\frac{I}{4}, \end{array}$

where ${p' = \frac{4}{3} p}$ and ${\frac{I}{4}}$ is the density matrix of the completely mixed two-qubit state, which is just a classical distribution. This presumes ${p \leq \frac{3}{4}}$; note that ${p = \frac{3}{4}}$ completely mixes the Bell basis already. The fidelity parameter of ${\rho'}$, meaning the weight on the original state in this decomposition, is then

$\displaystyle F = 1 - p'.$

(The quantum state fidelity ${\langle\Phi^+|\rho'|\Phi^+\rangle = 1 - \frac{3}{4}p' = 1 - p}$ is slightly higher, because the completely mixed state retains overlap ${\frac{1}{4}}$ with ${|\Phi^+\rangle}$; it is the depolarizing weight ${1 - p'}$ that enters the formulas below.)

This modeling already indicates that with ${m}$ serial opportunities for error the fidelity will decay as ${(1 - p')^m}$. The Google team found low ‘crosstalk' between qubits and they used exactly this expression in the form ${(1 - \frac{e_1}{1 - 1/D^2})^m}$, evidently with ${p}$ being the native gate error rate they call ${e_1}$ and ${D = 2^k}$, where having ${k=1}$ for single-qubit gates supplies the factor ${\frac{2^{2k}}{2^{2k} - 1} = \frac{4}{3}}$. The error ${e_2}$ for the two-qubit gates is similarly represented. (The full modeling in the supplement, section V, is more refined.)
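The Bell-basis algebra above is easy to verify numerically; a minimal sketch (ours):

```python
import numpy as np

def proj(v):
    v = v / np.linalg.norm(v)
    return np.outer(v, v.conj())

# The four Bell states on two qubits, in the basis |00>,|01>,|10>,|11>.
phi_p = proj(np.array([1, 0, 0,  1], dtype=complex))   # |Phi+>
phi_m = proj(np.array([1, 0, 0, -1], dtype=complex))   # |Phi->
psi_p = proj(np.array([0, 1, 1,  0], dtype=complex))   # |Psi+>
psi_m = proj(np.array([0, 1, -1, 0], dtype=complex))   # |Psi->

p = 0.3                                   # any p <= 3/4 works
rho1 = (1 - p) * phi_p + (p / 3) * (psi_p + phi_m + psi_m)

p_prime = 4 * p / 3
rho2 = (1 - p_prime) * phi_p + p_prime * np.eye(4) / 4

assert np.allclose(rho1, rho2)            # the two decompositions agree
# State fidelity <Phi+|rho'|Phi+> equals 1 - p, while the depolarizing
# weight on |Phi+><Phi+| is F = 1 - p' = 1 - 4p/3.
print("state fidelity     :", np.real(np.trace(phi_p @ rho1)), "= 1 - p =", 1 - p)
print("depolarizing weight:", 1 - p_prime)
```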
By observing their benchmarks (discussed below) for varying small ${m}$ they could calculate the decay concretely and hence estimate values of ${F}$ for the vast majority of runs with larger ${m}$. The random nature of the circuits ${C}$ evidently makes covariance of errors that could systematically upset this modeling negligible. Thus they can conclude that their device effectively samples from the distribution

$\displaystyle F|\langle z \;|\; C \;|\; 0^n\rangle|^2 \;\;+\;\; (1 - F)\frac{1}{N}. \ \ \ \ \ (1)$

Such distributions can be said to belong to the class ${\mathcal{D}_{1 + F}}$. The paper reports that their ${F}$ is driven below ${0.01}$ but stays above ${0.001}$ in trials. This bounds the range of the ${\delta}$ they can separate by. That ${\delta}$ is separated from zero achieves the first plank and starts on the second. The third needs attention first, however.

## The Third Plank

Both concrete and asymptotic complexity evidence matter for the third plank, the former for now and the latter for how ${n}$ and everything else may scale up in the future. In asymptotic complexity, we still don't know that ${\mathsf{P}}$ and ${\mathsf{PSPACE}}$, which sandwich the quantum feasible class ${\mathsf{BQP}}$, are different. Thus asymptotic evidence about polynomial bounds must be conditional. Asymptotic evidence about linear time bounds can be sharper but then tends to be conditioned on forms of SETH in ways we still find puzzling.

Lower bounds in concrete complexity are less known and have a self-defeating aspect: We are trying to say that any program ${P}$ run for less than an infeasible time ${T}$ must fail. But we can't run ${P}$ for time ${T-1}$ to show that it fails, because time ${T-1}$ is just as infeasible as time ${T}$. The best we can do is run ${P}$ for a feasible ${T_0 \ll T}$, either (i) on a smaller task size, or (ii) on the original task but argue it doesn't show progress. Neither is the same; we made some attempts on (ii).

What the paper does instead is argue that a particular classical approach ${P}$ (also from the Aaronson-Chen paper) would take 10,000 years on today's hardware. This reminds us of a famous 1977 “Mathematical Games” column by Martin Gardner, which quotes an estimate by Ron Rivest that for factoring a 126-digit number on then-current hardware, “the running time required would be about 40 quadrillion years!” It took only until 1994 for this to be broken. Sure enough, IBM calculated that a more-clever implementation of ${P}$ on the Summit supercomputer would take under 3 days. The point is not so much that the Summit hardware is comparable as that estimates based on what are currently thought to be the best possible (classical) methods need asterisks.

On the asymptotic side, the last section (XI) of the paper's 66-page supplement proves a theorem toward showing that a classical simulation from ${\mathcal{D}_{1 + \delta}}$ that scales polynomially with ${n}$ would collapse ${\mathsf{\#P}}$ to ${\mathsf{AM}}$, and similarly for sub-exponential running times. It does not get all the way there, however: improvements would need to be made in upper bounds for approximation and for worst-case to average-case equivalence. [Added 10/31/19: see this new paper by Scott A. and Sam Gunn.] Moreover, there is a difference from what their statistical testing achieves, which we try to explain next.

## The Statistical Tests

We can cast the second plank in the general context of predictive modeling.
Consider a forecaster who places estimates ${\{q_i\}}$ on the true probabilities ${\{p_i\}}$ of various events. Here we need to compute the probabilities ${\Pr(z)}$ of output strings ${z}$ observed from the physical device, using the given circuit ${C}$ and the estimate of ${F}$. This must be done classically, and incurs the “${T_0}$-versus-${T}$” issue discussed above. But before we get to that issue, let's say more from the viewpoint of predictive modeling.

We measure how well the forecasts ${q_i}$ conform to the true ${p_i}$ by applying a prediction scoring rule. If outcome ${i}$ happens, then the log-likelihood rule assesses a penalty of

$\displaystyle L_i = \log(\frac{1}{q_i}).$

This is zero if the outcome was predicted with certainty but goes to infinity if the individual ${q_i}$ is very low—which is an issue in the quantum case. The expected score based on the true probabilities is

$\displaystyle E[L_i] = \sum_i p_i \log(\frac{1}{q_i}). \ \ \ \ \ (2)$

The log-likelihood rule is strictly proper insofar as the unique way to minimize ${E[L_i]}$ is to set ${q_i = p_i}$ for each ${i}$. In human contexts this means the model has an incentive to be as accurate as possible. The formula (2) is the cross-entropy from the ${\vec{q}}$ distribution to the ${\vec{p}}$ distribution. Before we can use it, we need to ask a question: What is forecasting what?

Is the device the imperfect model, and do the “true ${p_i}$” come from the analysis of ${C}$ giving ${\Pr(z)}$? This is how it appeared to us and seems from other writing, but we can argue the opposite from first principles: The physical device is the “ground truth” however it works. The assertion that it is executing a blueprint ${C}$ with some estimated loss in fidelity ${F}$ is really the model. Then it follows that ${\Pr(z)}$ is analogous to ${q_i}$ not ${p_i}$, and we can call it ${q_z}$. Since we can compute it, we can calculate ${\log(\frac{1}{q_z})}$ in (2).

Saying this leaves “${p_i}$” in (2) as denoting the device's true probabilities ${p_z}$ of giving the output strings ${z}$. These are not directly observable: it is infeasible to sample the device often enough to expect a sufficiently large number of repeated occurrences based on the “birthday” threshold. Thus there appears to be no way to estimate individual values ${p_i}$ in (2), but this doesn't matter: the very act of sampling the device carries out the “${\sum_i p_i}$” part of (2). Summing ${\log(\frac{1}{q_z})}$ for the ${z}$ that occur over a large but feasible number ${T}$ of trials gives estimates of (2) that are close enough to make the needed distinctions with high confidence.

We can then match the estimate of ${E[L_i]}$ against the theoretical estimate, which we may call ${E_{1+F}}$, assuming accurate knowledge of ${F}$. Because the ${L_i}$ scoring rule is strictly proper, this match entails achieving ${p_z = q_z}$ with sufficient approximation. This property of the goal mitigates some of the modeling issue. This issue was clarified by reading Ryan O'Donnell's 2018 notes on quantum supremacy, which preview this same experiment. The above view on which is “forecaster”/“forecastee” might defend the team against his opinion that it “kind of gets its stats backwards”—but the inability to compute cross-entropy from the blueprints' distribution to that of the device remains an issue.
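The mechanism whereby sampling performs the “${\sum_i p_i}$” averaging is easy to see in a toy model. A sketch (ours, with made-up stand-in distributions; the exponential weights only mimic the heavy-hitter shape, and the imperfect model ${q}$ is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2 ** 12

# Stand-ins: p = the device's true output distribution, q = the model's
# computed probabilities.
p = rng.exponential(size=N); p /= p.sum()
q = 0.9 * p + 0.1 / N          # an imperfect model of p, for illustration
q /= q.sum()

# Exact cross-entropy  E[L] = sum_i p_i log(1/q_i) ...
exact = np.sum(p * np.log(1 / q))

# ... versus the sampled estimate: draw z ~ p, average log(1/q_z).
T = 200_000
z = rng.choice(N, size=T, p=p)
estimate = np.mean(np.log(1 / q[z]))
print("cross-entropy exact vs sampled:", exact, estimate)

# The "linear cross-entropy" statistic discussed next: the average of q_z
# over the samples, against the 1/N baseline of a uniform sampler.
print("linear statistic (x N):", N * np.mean(q[z]), " (uniform sampler ~1)")
```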
What the team did instead, however, is shift to something simpler they call “linear cross-entropy.” They simply show that the ${q_z}$ from their samples collectively beat the “${E_1}$” that applies to ${\mathcal{D}_1}$—more simply put, that when summed over ${T}$-many trials ${z_t}$,

$\displaystyle \frac{1}{T} \sum_{t = 1}^T q_{z_t} > \frac{1}{N} + \delta. \ \ \ \ \ (3)$

This just boils down to giving a z-score based on the modeling for ${\mathcal{D}_1}$. It is analogous to how I (Ken writing this) test for cheating at chess. We are blowing a whistle to say the physical device is getting surreptitious input from quantum mechanics to achieve a strength of ${1 + \delta}$ compared to a “classical player” who is “rated” as having strength ${1}$.

The difference from showing that the device's score from (2) is within a hair of ${E_{1+F}}$ is that this is based on ${E_1}$. To be sure, the paper shows that their z-scores conform to those one would expect an “${E_{1+F}}$-rated” device to achieve. But this is still not the same as (2). Whether it is tantamount for enough purposes—including the theorem about ${\mathsf{AM}}$—is where we're most unsure, and we note distinctions between fully (classically) sampling and “spoofing” the statistical test(s) raised by Scott (including directly in reply to me here) and others.

The authors say that using “linear cross-entropy” gave sharper results and that they tried other (unspecified) measures. We wonder how much of the space of scoring rules familiar in predictive modeling has been tried, and whether rules having more gentle tail behavior for tiny ${q_i}$ than ${L_i}$ might do better.

Finally, there is the issue that the team were able to verify ${q_z}$ exactly only for circuits up to ${43}$ qubits and/or with ${14}$ levels, not ${53}$ with ${20}$ levels. This creates a dilemma in that IBM's paper may push them toward ${n = 60}$ or ${70}$, but that increases the gap from instance sizes they can verify. This also pushes away from the possibility of observing the ${\mathcal{D}_{1+F}}$ nature of ${D_C}$ more directly by finding repeated strings ${z}$ in the second-stage sampling of a fixed ${C}$. The “birthday paradox” threshold for repeats is roughly ${2^{n/2}}$ samples, which might be feasible for ${n}$ around ${50}$ (given the classical work needed for each ${z}$, which IBM's cleverness might speed) but not above ${60}$. The distinguishing power of repeats drops further with ${F}$.

We intend to say more about these last few points, and we are sure there are many chapters still to write about supremacy experiments.

## Open Problems

Is the evidence so far convincing to you? Is enough being done on the third plank to exclude possible clever classical use of the fact that the circuits ${C}$ are given as “white boxes”? Are there possible loopholes? We would also be grateful to know where we may have oversimplified our characterization of the task and our analysis of the issues.

[Added more error-modeling details to the real-world section; some minor word changes; clarified how X,Y,W are chosen; addendum to clarify modeling issues; 10/31/19: removed addendum after blending it into a revision of the last main section—original version preserved here—and linked new Aaronson-Gunn paper.]

October 27, 2019 10:31 am

I am a quantum computer skeptic who is amazed that it appears to have worked for a 53 qubit machine.
However, I am disappointed that there does not appear to be a way to directly test it for, say, a 100 qubit machine, since no digital computer can simulate such a machine. So how long will we have to wait until Shor's algorithm can be tested? How many qubits will be required?

October 27, 2019 5:54 pm

I'm between skeptic and believer because I think quantum supremacy is undecidable. Whenever people manage to build a quantum machine with tremendous effort, a possibility appears to solve the same task with huge classical resources. So we may never see quantum computers leave the laboratory.

October 28, 2019 11:27 am

It's unfortunate that the Google authors didn't just announce their device and the results of running it, and let the academic community engage in a tasteful discussion of whether this was an example of Quantum Supremacy, any outcome of which would not have reflected negatively on their impressive advance in engineering a reliable superconducting quantum logic circuit. But of course, that's not what happened, and with Pons and Fleischmann-like bravado, they had to proclaim to the world that they had won the Gold in the Quantum Supremacy Olympics, with the entirely predictable fertilizer storm over that claim grabbing press attention away from their actual accomplishment. Quantum Supremacy should at least strongly suggest the inevitability of working quantum computers with low noise and useful numbers of bits. This result, especially with IBM's contribution, seems to fall a bit short of that mark.

4. October 30, 2019 11:03 am

Excellent post! As you write, there are two crucial questions regarding Google's quantum supremacy claims. The first is about the quality of the evidence for their prediction regarding the outcome of the 53 qubit experiment. They predict that the statistical test will give a value larger than t = 1/10,000. (Or 1 + 1/10,000 if we don't subtract the 1.) Let's call t the fidelity. The second question is about the quality of their claim that achieving a sample with t larger than 1/10,000 represents “quantum supremacy”. (To save latex power I use t instead of $\delta$.)

My own prediction (in several works) about experiments of this kind is that (modulo some small issues that I will neglect) the resulting distribution can be achieved by a classical computer with a polynomial time algorithm in the size of the circuit. And here “polynomial-time” refers to a low degree polynomial with moderate constants.

Regarding the second question: let me remark that as far as I can see the IBM claim is about a full computation of all the 2^53 probabilities. It is reasonable to think that producing a sample (or even a complete list of 2^53 probabilities) with fidelity t reduces the classical effort linearly with t. (This is the claim about the specific algorithm used by the Google team.) If this holds for the IBM algorithm then the 2.5 days will go down to less than a minute. It will be very interesting to explore whether for a version of the IBM algorithm we can have a running time which is linear in t. (It is also interesting whether, in a discussion based on heuristic arguments both for computational hardness and computational easiness, we can take for granted such a linear dependence of the algorithm's running time on t.)

In my view the second issue is the crucial one. Reaching a fidelity above 1/10,000 for a 53 qubit experiment would cause me to have doubts about my conjecture “against” quantum supremacy. (Even if it is only one minute on the IBM super-duper computer.)
The crucial aspect, in my view, is to understand (and also further develop, and replicate) the experiments in the regime that can be tested by a classical computer. (This regime is larger now by the IBM method.) What I would like to see is:

A) An independent verification of the statistical tests' outcomes for the experiments in the regime where the Google team classically computed the probabilities. This seems quite easy to perform, and maybe this was already done as part of the refereeing process. This looks to me like a crucial step in the verification of such an important experiment.

B) Experiments giving full histograms for circuits in the 10-25 qubit range. (Again, maybe this was already done.) See this comment by Ryan.

Perhaps at a later stage also:

C) Experiments in the 10-30 qubit range on the IBM quantum computer. (This is an excellent suggestion by Greg Kuperberg.)

D) In practice the 53 qubit samples might be still hard to check for IBM. (The 1/10,000 improvement that I heuristically suggested is not relevant for checking the fidelity in Google's sample.) But maybe Google can produce samples for 41-49 qubits for which IBM can compute the probabilities quickly and test Google's prediction on the fidelity.

5. December 30, 2019 3:10 pm

I ask: Does the distinction “polynomial time-nondeterministic polynomial time” dissolve here?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 227, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8084791898727417, "perplexity": 734.359395666112}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348504341.78/warc/CC-MAIN-20200605205507-20200605235507-00013.warc.gz"}
https://www.physicsforums.com/threads/confused-about-period-of-pendulum.81322/
# Confused about period of pendulum

1. Jul 6, 2005

### Lalasushi

Hey guys, I have a question. "A meter stick oscillates back and forth about a pivot point at one of its ends. Is the period of a simple pendulum of length 1m greater than, less than or the same as the period of the meter stick?"

I thought that the period wouldn't depend on the mass because of what Galileo Galilei found (that the period doesn't change with the amplitude or the mass), which I found confusing because the formula does not contain m in it (T = 2π√(L/g)) and the correct answer is "greater than"... I found on some websites that actually the amplitude does change things slightly... but what about the mass?? Can someone please explain?

2. Jul 6, 2005

### dextercioby

There's a difference between the physical and the mathematical pendulum. The first is a generalization of the second, and the formula of the period for small tautochrone oscillations for both is

$$T=2\pi\sqrt{\frac{I}{mgd}}$$,

where "I" is the moment of inertia of the oscillating body of mass "m" with respect to an axis perpendicular to the plane of oscillation passing through the point of suspension, and "d" is the distance between the CM of the oscillating body and the point of suspension.

Daniel.

3. Jul 6, 2005

### Jelfish

For small oscillations, the angular frequency of a pendulum is:

$$\omega = \sqrt{\frac{g}{L}}$$

where g is the acceleration of gravity and L is the length to the center of mass. The conclusion that Galileo found was that for said small oscillations, the mass of the pendulum had no bearing on the oscillation period, only the length.

So for your question, consider what "L" is for both situations. For the mass on a string, it's usually assumed that the string has negligible mass, so the length would be 1m. However, for the meter stick, the mass is not concentrated at the end. In fact, it's concentrated at the center of mass, which, if the meter stick has equal mass density throughout, should be in the middle, i.e. 0.5m. Since the meter stick has a smaller "L," the angular frequency will be greater and thus the period will be shorter than the mass-string pendulum.

4. Jul 6, 2005

### OlderDan

This is not correct. A physical pendulum has distributed mass, and the simple pendulum equation does not apply. By your reasoning, if the meter stick were mounted and free to rotate at a point near its center the frequency of oscillation would be higher than when mounted at the end, and would be the same as for a very short simple pendulum. In fact, the frequency would be much smaller when the stick is mounted near the center as compared to the end. In the limiting case of mounting at the center, the frequency of oscillation goes to zero (infinite period).

5. Jul 7, 2005

### Jelfish

I'll admit that my prescription of that equation for the meter stick may be misguided, but in my meager defense, the center of mass that I meant to refer to would depend on where the mounting point was located. That is, the center of mass would be defined from the portion between the mounting point and the end. If it's not mounted at one end, then there would be effectively two pendulums that are linked by their angle (ok.. this is starting to sound like nonsense). Anyway, I was going more toward the concept of how a simple pendulum acts as an upper limit case of mass distribution for pendulums of a specific length, and the idea that if another simple pendulum were used to model the motion of the meter stick mounted at the end, it would make sense that it would be shorter.
Please let me know if I'm missing something intuitively.

6. Jul 7, 2005

### OlderDan

Go back and look at the general formula for the period of small oscillations posted by dextercioby in #2. That equation comes from equating the restoring torque due to the gravitational force acting at the center of mass of a rigid body, free to rotate about some axis, to the product of moment of inertia and angular acceleration about that axis (Newton's second law for rotation). You cannot separate the masses of the different parts of the rigid body, or exclude part of the mass in the calculation. The problem is one of rotation of the entire body. If the point of suspension is at the center of mass, there can be no restoring torque and the period becomes infinite.

It is true that the simple pendulum is a limiting case of the general physical pendulum, but if you want to find the simple pendulum that has the same period as some physical pendulum, the length of the simple pendulum must be

$$\ell_{eq} = \frac{I}{md}$$

The equality is guaranteed if the pendulum is a point mass at distance $$\ell = d$$ from the point of suspension. For a pendulum that is a stick or rod of length L, the moment of inertia about a point at distance d from the center of mass is, by the parallel axis theorem,

$$I = \frac{mL^2}{12} + md^2$$

so the equivalent length of a simple pendulum would be

$$\ell_{eq} = \frac{L^2}{12d} + d$$

I don't see anything intuitive about this result. The length of the equivalent simple pendulum clearly does not correspond to the position of the center of mass (d) and is influenced by the entire length no matter where you support the stick. As a function of d, the equivalent length has a minimum value at $$d = \frac{L}{\sqrt{12}}$$, which is

$$\ell_{eqMin} = \frac{L}{\sqrt{3}}$$

This position for the support minimizes the period of oscillation of the stick. You can calculate that minimum period using the equivalent length in the period equation for a simple pendulum.

7. Jul 7, 2005

### Jelfish

I understand now. Thank you for the explanation.

8. Feb 1, 2011

### Ryker

Re: confused!! about period of pendulum

Sorry to resurrect such an old thread, but what happens if you keep the mass of the bob the same, but instead assume that the mass of the string isn't really negligible? I guess the pendulum then goes from a simple one to a physical one, but what happens to its period? I guess the added mass cancels out in the equation and the moment of inertia decreases. That by itself would make the period of oscillation shorter, but "d" in the equation also decreases. Does that then cancel out the effect or is the time period still shorter? My guess is the latter, but I'm having a bit of trouble wrapping my head around this so as to apply it for any case of string and a ball (that is, not having to know the exact center of mass of the new system and the precise equation for its moment of inertia).
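A quick numerical check of the thread's conclusion (a sketch taking $g = 9.81\,\mathrm{m/s^2}$): for the stick pivoted at its end, $I = mL^2/3$ and $d = L/2$, so the equivalent simple-pendulum length is $2L/3$ and the stick's period comes out shorter, making the 1 m simple pendulum's period the greater one.

```python
import numpy as np

g = 9.81          # m/s^2
L = 1.0           # meter stick / string length

# Simple pendulum, all mass at distance L:
T_simple = 2 * np.pi * np.sqrt(L / g)

# Meter stick pivoted at one end: I = m L^2 / 3 about the pivot,
# center of mass at d = L / 2.  T = 2*pi*sqrt(I / (m g d)); m cancels.
I_over_m = L**2 / 3
d = L / 2
T_stick = 2 * np.pi * np.sqrt(I_over_m / (g * d))

print(f"simple pendulum: T = {T_simple:.3f} s")   # ~2.006 s
print(f"meter stick    : T = {T_stick:.3f} s")    # ~1.638 s
print(f"equivalent length of stick pendulum: {I_over_m / d:.4f} m")  # 2/3 m
```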
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9519309401512146, "perplexity": 275.60127008497403}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825700.38/warc/CC-MAIN-20171023054654-20171023074644-00047.warc.gz"}
http://math.stackexchange.com/questions/45264/algebraic-proof-of-a-trig-matrix-identity/45375
# Algebraic proof of a trig matrix identity?

I'll put the question first, and then the background, because I'm not sure that the background is necessary to answer the question: I have a geometric proof, but is there an elegant algebraic proof that

$$\left[ \begin{matrix}-1 & 1 + \cos\frac{\pi}{2n} \\ -2 & 1 + 2\cos\frac{\pi}{2n}\end{matrix} \right] ^n \left[ \begin{matrix}0 \\ 1\end{matrix} \right] = \left[ \begin{matrix}\cot \frac{\pi}{4n} \\ \cot \frac{\pi}{4n}\end{matrix} \right]$$

Background: This is motivated by the latest maths puzzle from Spanish newspaper El País (problem statement there in Spanish, obviously). NB the deadline for submitting solutions to win the prize has passed, so don't worry about cheating. The relevant part of the problem is this:

We have two straight lines (in Euclidean geometry) and we wish to draw a zigzag between them. We start at the intersection point and draw a straight line segment of length $r$ along one of the lines (which we shall call the "horizontal line"). We then draw a straight line segment of length $r$ from the end-point to the other ("non-horizontal") line. We alternate between the two with straight line segments of the same length, without overlapping or doubling back. The 20th such line segment is perpendicular to the horizontal line. What is the angle between the lines?

There's a simple geometric solution (which I shan't state here, in case anyone wants to solve it himself - although I expect that an answer may include spoilers, so if you do want to solve it yourself, look away now), but before finding it I went down an algebraic approach which led me to an equation involving a matrix similar to the one above raised to the 10th power. Basically the equation above is the generalisation to the case with $2n$ line segments and the answer (derived from the geometric proof) substituted for the unknown angle. The question is motivated by little more than curiosity, because I already have a proof. I've tried to prove it myself, but the best I've got so far is a messy expression in terms of some matrices whose powers do have a relatively nice closed form.

Filling in some gaps: if the lines intersect at the origin, the horizontal line is along the x-axis, the non-horizontal line is in the first quadrant, and the angle between the lines is $\alpha$, then the points are $P_{2n} = (x_{2n}, x_{2n} \tan \alpha)$ and $P_{2n+1} = (x_{2n+1}, 0)$. Given that the distance between each pair of consecutive points is $r$, we find that $x_{2n}$ and $x_{2n+2}$ are the two roots of

$$(z - x_{2n+1})^2 + z^2 \tan^2 \alpha = r^2,$$

and we can use the properties of the quadratic equation (the sum of the roots) to get

$$x_{2n+2} = \frac{2}{1 + \tan^2 \alpha}x_{2n+1} - x_{2n}$$

By using the symmetry of the triangle formed by $P_{2n+1}$, $P_{2n+2}$, $P_{2n+3}$, we get

$$x_{2n+3} = 2x_{2n+2} - x_{2n+1}$$

Putting this all together we can get a recurrence which leads to an expression for $(x_{2n}, x_{2n+1})$ in terms of the nth power of a matrix applied to $(x_{0} = 0, x_{1} = 1)$. Putting any more details risks spoiling the value of $\alpha$. The RHS of the original equation comes from observing that if the $(2n)^{th}$ line segment is perpendicular to the horizontal line, then $x_{2n-1} = x_{2n} = x_{2n+1}$ and $\tan \alpha = r / x_{2n}$.
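A numerical sketch (mine, using an arbitrary angle so as not to spoil anything) checking that the geometric root-sum construction and the matrix iteration agree, with $\cos\frac{\pi}{2n}$ generalized to $\cos 2\alpha = \frac{2}{1+\tan^2\alpha} - 1$:

```python
import numpy as np

alpha = 0.05                     # arbitrary angle between the lines (radians)
r = 1.0
t2 = np.tan(alpha) ** 2

# Geometric construction: x_{2m+2} is the *other* root of
# (z - x_{2m+1})^2 + z^2 tan^2(alpha) = r^2, found via the root sum.
xs = [0.0, 1.0]                  # x_0 = 0, x_1 = r = 1
for m in range(10):
    x_even = 2 * xs[-1] / (1 + t2) - xs[-2]     # root-sum relation
    # Each new even point really is at distance r from the previous one:
    assert np.isclose(np.hypot(x_even - xs[-1], x_even * np.tan(alpha)), r)
    x_odd = 2 * x_even - xs[-1]                 # isosceles-triangle symmetry
    xs += [x_even, x_odd]

# Matrix form: the question's matrix with cos(pi/2n) replaced by cos(2*alpha).
c = np.cos(2 * alpha)
A = np.array([[-1.0, 1 + c], [-2.0, 1 + 2 * c]])
v = np.array([0.0, 1.0])
for m in range(10):
    v = A @ v
    assert np.allclose(v, [xs[2 * m + 2], xs[2 * m + 3]])
print("matrix recurrence matches the geometric construction")
```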
The furthest I've got so far with the matrix is to split it out as

$$\left( \cos\frac{\pi}{2n} \left[ \begin{matrix}0 & 1 \\ 0 & 2\end{matrix} \right] + \left[ \begin{matrix}-1 & 1 \\ -2 & 1\end{matrix} \right] \right)^n = \sum_{k=0}^{n} \left( \begin{matrix}n \\ k\end{matrix} \right) \cos^k\frac{\pi}{2n} \left[ \begin{matrix}0 & 1 \\ 0 & 2\end{matrix} \right]^k \left[ \begin{matrix}-1 & 1 \\ -2 & 1\end{matrix} \right]^{n-k}$$

where both of the matrix powers on the RHS have closed forms - the markup here doesn't seem to like the URLs and I've taken too long to write this up and need to run, but http://www.wolframalpha.com/input/?i=[[0,1],[0,2]]^k and http://www.wolframalpha.com/input/?i=[[-1,1],[-2,1]]^(r-k)

- The binomial theorem can only be applied if the matrices in question commute. Do they? – Gerry Myerson Jun 14 '11 at 6:40

- The usual way to raise a matrix to a power is to diagonalize it first, that is, to find $A^n$, first find an invertible matrix $P$ and a diagonal matrix $D$ such that $P^{-1}AP=D$ - this you do by letting the columns of $P$ be the eigenvectors of $A$, and the diagonal entries of $D$, the eigenvalues of $A$. Then $A^n=PD^nP^{-1}$, which is good because calculating $D^n$ is trivial. – Gerry Myerson Jun 14 '11 at 6:42

[moved from comment to answer] Let $\alpha=\pi/2n$ and let's call Peter's 2x2 matrix

$$A=\left(\begin{array}{rr}-1&1+\cos\alpha\\-2&1+2\cos\alpha\end{array}\right).$$

Luboš's answer tells us that the eigenvalues of $A$ are $e^{\pm i\alpha}$, so we expect the matrix $A$ to be similar to the rotation matrix

$$R=\left(\begin{array}{rr}\cos\alpha&\sin\alpha\\-\sin\alpha&\cos\alpha\end{array}\right).$$

We want to find a $2\times2$ matrix $P$ such that $PA=RP$. This is a homogeneous linear system in the unknown entries of $P$. As $R$ commutes with all the rotation matrices, we are at liberty to multiply $P$ from the left with one. Therefore we can assume that (for example) the bottom left entry of $P$ is equal to zero. The resulting system is not hard to solve, and let's pick among the solutions

$$P=\left(\begin{array}{rc}2&-1-\cos\alpha\\0&\sin\alpha\end{array}\right).$$

Then $A=P^{-1}RP$, so $A^n=P^{-1}R^nP$. Here we benefit from the fact that

$$R^n=\left(\begin{array}{rr}\cos n\alpha&\sin n\alpha\\-\sin n\alpha&\cos n\alpha\end{array}\right)=\left(\begin{array}{rr}0&1\\-1&0\end{array}\right).$$

Therefore

$$A^n\left(\begin{array}{r}0\\1\end{array}\right)=P^{-1}\left(\begin{array}{rr}0&1\\-1&0\end{array}\right)\left(\begin{array}{c}-1-\cos\alpha\\\sin\alpha\end{array}\right)= \frac{1}{\sin\alpha}\left(\begin{array}{r}1+\cos\alpha\\1+\cos\alpha\end{array}\right).$$

The identities $\sin\alpha=2\sin(\alpha/2)\cos(\alpha/2)$ and $1+\cos\alpha=2\cos^2(\alpha/2)$ then give Peter's claim, as now $(1+\cos\alpha)/\sin\alpha=\cot(\alpha/2)=\cot(\pi/4n).$

You may calculate the eigenvalues of the matrix on the left hand side (which is exponentiated to the $n$-th power), which are simply

$$\lambda_\pm = \exp(\pm i\pi / 2n)$$

So the matrix $A$ is written as $A=CDC^{-1}$ where $D={\rm diag}(\lambda_+,\lambda_-)$. The corresponding eigenvectors are

$$\left(\tfrac{1}{2}(1 +\exp(\pm i\pi/2n)),\,1\right)^T$$

which are written as columns of $C$, in the order "plus minus" again. Given the form of your vector $(0,1)^T$ you added, only the second column of $C^{-1}$ is important. It is

$$\mp\frac{1}{2\sin (\pi/2n)} (1 + \exp(\mp \pi i/2n))$$

well, I mean $(n_{1},n_{2})^T$ where $n_{1,2}$ are the numbers above with the signs "upper one, lower one".
Clearly, $D^n$ is just ${\rm diag}(i,-i)$ because $\exp(i\pi /2)=i$ and so on. And I hope that the rest of $C D^n C^{-1} v$ is already easily calculated. Note that we had $1/\sin(\pi/2n)$ over there; using $\sin(\pi/2n) = 2\sin(\pi/4n)\cos(\pi/4n)$ together with $1+\cos(\pi/2n) = 2\cos^2(\pi/4n)$, one cosine cancels, leaving the sine in the denominator, which combines to the cotangent after some work.
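Either derivation is easy to confirm numerically; a quick sketch of the identity itself:

```python
import numpy as np

for n in (1, 2, 5, 10, 25):
    a = np.pi / (2 * n)
    A = np.array([[-1.0, 1 + np.cos(a)],
                  [-2.0, 1 + 2 * np.cos(a)]])
    v = np.linalg.matrix_power(A, n) @ np.array([0.0, 1.0])
    cot = 1 / np.tan(np.pi / (4 * n))
    assert np.allclose(v, [cot, cot])       # both entries equal cot(pi/4n)
print("A^n (0,1)^T = (cot(pi/4n), cot(pi/4n))^T verified")
```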
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9693896174430847, "perplexity": 95.61866377741546}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510272584.13/warc/CC-MAIN-20140728011752-00144-ip-10-146-231-18.ec2.internal.warc.gz"}
http://mathhelpforum.com/algebra/2108-please-help-me-out-print.html
• Mar 6th 2006, 10:23 AM
mjohnson

I just need to know where to start this one off.. (243)^1/5 + the 6th root of 64

• Mar 6th 2006, 10:33 AM
earboth

Quote:

Originally Posted by mjohnson
I just need to know where to start this one off.. (243)^1/5 + the 6th root of 64

Hello,

I'm guessing only, but I hope that you would like to calculate this:

$(243)^\frac{1}{5}+\sqrt[6]{64}$

$(3^5)^\frac{1}{5}+\sqrt[6]{2^6}=3+2=5$

Greetings

EB

• Mar 6th 2006, 02:36 PM
ThePerfectHacker

Let me just help explain earboth's response. The notation $16^{\frac{1}{4}}$ means the same as $\sqrt[4]{16}$, and that means: find a number whose fourth power is 16; thus, $16^{\frac{1}{4}}=2$
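A quick check of the arithmetic in Python:

```python
# 243 = 3^5 and 64 = 2^6, so the fifth and sixth roots are exact integers.
print(243 ** (1 / 5) + 64 ** (1 / 6))   # prints ~5.0 (up to floating point)
```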
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8599807024002075, "perplexity": 4318.459122569851}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105086.81/warc/CC-MAIN-20170818175604-20170818195604-00221.warc.gz"}
https://en.wikipedia.org/wiki/Polynomial_interpolation
# Polynomial interpolation

In numerical analysis, polynomial interpolation is the interpolation of a given data set by the polynomial of lowest possible degree that passes through the points of the dataset.[1]

## Applications

Polynomials can be used to approximate complicated curves, for example, the shapes of letters in typography, given a few points. A relevant application is the evaluation of the natural logarithm and trigonometric functions: pick a few known data points, create a lookup table, and interpolate between those data points. This results in significantly faster computations. Polynomial interpolation also forms the basis for algorithms in numerical quadrature and numerical ordinary differential equations, and for Secure Multi-Party Computation and secret sharing schemes.

Polynomial interpolation is also essential to perform sub-quadratic multiplication and squaring such as Karatsuba multiplication and Toom–Cook multiplication, where an interpolation through points on a polynomial which defines the product yields the product itself. For example, given ${\displaystyle a=f(x)=a_{0}x^{0}+a_{1}x^{1}+\cdots }$ and ${\displaystyle b=g(x)=b_{0}x^{0}+b_{1}x^{1}+\cdots }$, the product ${\displaystyle ab}$ is equivalent to ${\displaystyle W(x)=f(x)g(x)}$. Finding points along ${\displaystyle W(x)}$ by substituting small values of ${\displaystyle x}$ into ${\displaystyle f(x)}$ and ${\displaystyle g(x)}$ yields points on the curve. Interpolation based on those points will yield the terms of ${\displaystyle W(x)}$ and subsequently the product ${\displaystyle ab}$. In the case of Karatsuba multiplication this technique is substantially faster than quadratic multiplication, even for modest-sized inputs. This is especially true when implemented in parallel hardware.

## Definition

Given a set of n + 1 data points (xi, yi) where no two xi are the same, one is looking for a polynomial p of degree at most n with the property

${\displaystyle p(x_{i})=y_{i},\qquad i=0,\ldots ,n.}$

The unisolvence theorem states that such a polynomial p exists and is unique, and can be proved by the Vandermonde matrix, as described below. The theorem states that for n + 1 interpolation nodes (xi), polynomial interpolation defines a linear bijection

${\displaystyle L_{n}:\mathbb {K} ^{n+1}\to \Pi _{n}}$

where Πn is the vector space of polynomials (defined on any interval containing the nodes) of degree at most n.

## Constructing the interpolation polynomial

[Figure: The red dots denote the data points (xk, yk), while the blue curve shows the interpolation polynomial.]

Suppose that the interpolation polynomial is in the form

${\displaystyle p(x)=a_{n}x^{n}+a_{n-1}x^{n-1}+\cdots +a_{2}x^{2}+a_{1}x+a_{0}.\qquad (1)}$

The statement that p interpolates the data points means that

${\displaystyle p(x_{i})=y_{i}\qquad {\mbox{for all }}i\in \left\{0,1,\dots ,n\right\}.}$

If we substitute equation (1) in here, we get a system of linear equations in the coefficients ak. The system in matrix-vector form reads

${\displaystyle {\begin{bmatrix}x_{0}^{n}&x_{0}^{n-1}&x_{0}^{n-2}&\ldots &x_{0}&1\\x_{1}^{n}&x_{1}^{n-1}&x_{1}^{n-2}&\ldots &x_{1}&1\\\vdots &\vdots &\vdots &&\vdots &\vdots \\x_{n}^{n}&x_{n}^{n-1}&x_{n}^{n-2}&\ldots &x_{n}&1\end{bmatrix}}{\begin{bmatrix}a_{n}\\a_{n-1}\\\vdots \\a_{0}\end{bmatrix}}={\begin{bmatrix}y_{0}\\y_{1}\\\vdots \\y_{n}\end{bmatrix}}.}$

We have to solve this system for ak to construct the interpolant p(x). The matrix on the left is commonly referred to as a Vandermonde matrix. The condition number of the Vandermonde matrix may be large,[2] causing large errors when computing the coefficients ai if the system of equations is solved using Gaussian elimination.
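A direct (if numerically naive) sketch of this construction, assuming NumPy; for many nodes the solve inherits the conditioning problem just noted:

```python
import numpy as np

# Data points (x_i, y_i) with distinct x_i (arbitrary example values).
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 0.0, 5.0])

# Vandermonde matrix with columns x^n, x^{n-1}, ..., x, 1 (as in the text).
V = np.vander(x)              # shape (n+1, n+1)
a = np.linalg.solve(V, y)     # coefficients a_n, ..., a_0

# The interpolant reproduces the data exactly (up to round-off).
assert np.allclose(np.polyval(a, x), y)
print("coefficients (highest degree first):", a)
```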
Several authors have therefore proposed algorithms which exploit the structure of the Vandermonde matrix to compute numerically stable solutions in O(n²) operations instead of the O(n³) required by Gaussian elimination.[3][4][5] These methods rely on constructing first a Newton interpolation of the polynomial and then converting it to the monomial form above.

Alternatively, we may write down the polynomial immediately in terms of Lagrange polynomials:

{\displaystyle {\begin{aligned}p(x)&={\frac {(x-x_{1})(x-x_{2})\cdots (x-x_{n})}{(x_{0}-x_{1})(x_{0}-x_{2})\cdots (x_{0}-x_{n})}}y_{0}+{\frac {(x-x_{0})(x-x_{2})\cdots (x-x_{n})}{(x_{1}-x_{0})(x_{1}-x_{2})\cdots (x_{1}-x_{n})}}y_{1}+\ldots +{\frac {(x-x_{0})(x-x_{1})\cdots (x-x_{n-1})}{(x_{n}-x_{0})(x_{n}-x_{1})\cdots (x_{n}-x_{n-1})}}y_{n}\\&=\sum _{i=0}^{n}\left(\prod _{\stackrel {\!0\leq j\leq n}{j\neq i}}{\frac {x-x_{j}}{x_{i}-x_{j}}}\right)y_{i}\end{aligned}}}

For matrix arguments, this formula is called Sylvester's formula and the matrix-valued Lagrange polynomials are the Frobenius covariants.

## Uniqueness of the interpolating polynomial

### Proof 1

Suppose we interpolate through n + 1 data points with an at-most n degree polynomial p(x) (we need at least n + 1 data points or else the polynomial cannot be fully solved for). Suppose also another polynomial exists also of degree at most n that also interpolates the n + 1 points; call it q(x).

Consider ${\displaystyle r(x)=p(x)-q(x)}$. We know,

1. r(x) is a polynomial
2. r(x) has degree at most n, since p(x) and q(x) are no higher than this and we are just subtracting them.
3. At the n + 1 data points, ${\displaystyle r(x_{i})=p(x_{i})-q(x_{i})=y_{i}-y_{i}=0}$. Therefore, r(x) has n + 1 roots.

But r(x) is a polynomial of degree at most n. It has one root too many. Formally, if r(x) is any non-zero polynomial with these n + 1 roots, it must be writable as ${\displaystyle r(x)=A(x-x_{0})(x-x_{1})\cdots (x-x_{n})}$, for some constant A. By distributivity, the n + 1 x's multiply together to give leading term ${\displaystyle Ax^{n+1}}$, i.e. one degree higher than the maximum we set. So the only way r(x) can exist is if A = 0, or equivalently, r(x) = 0.

${\displaystyle r(x)=0=p(x)-q(x)\implies p(x)=q(x)}$

So q(x) (which could be any polynomial, so long as it interpolates the points) is identical with p(x), and q(x) is unique.

### Proof 2

Given the Vandermonde matrix used above to construct the interpolant, we can set up the system

${\displaystyle Va=y}$

To prove that V is nonsingular we use the Vandermonde determinant formula:

${\displaystyle \det(V)=\prod _{0\leq i<j\leq n}(x_{j}-x_{i})}$

Since the n + 1 points are distinct, the determinant can't be zero, as ${\displaystyle x_{i}-x_{j}}$ is never zero; therefore V is nonsingular and the system has a unique solution. Either way this means that no matter what method we use to do our interpolation: direct, Lagrange etc., (assuming we can do all our calculations perfectly) we will always get the same polynomial.

## Non-Vandermonde solutions

We are trying to construct our unique interpolation polynomial in the vector space Πn of polynomials of degree n. When using a monomial basis for Πn we have to solve the Vandermonde matrix to construct the coefficients ak for the interpolation polynomial. This can be a very costly operation (as counted in clock cycles of a computer trying to do the job). By choosing another basis for Πn we can simplify the calculation of the coefficients, but then we have to do additional calculations when we want to express the interpolation polynomial in terms of a monomial basis.
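By contrast, a sketch evaluating the Lagrange form directly at a point, with no coefficient solve at all (each evaluation costs a quadratic number of operations, as noted below):

```python
import numpy as np

def lagrange_eval(xs, ys, x):
    """Evaluate the interpolating polynomial at x via the Lagrange form."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        # Basis polynomial ell_i(x): equals 1 at x_i, 0 at the other nodes.
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)
        total += yi * li
    return total

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 2.0, 0.0, 5.0]
for xi, yi in zip(xs, ys):
    assert abs(lagrange_eval(xs, ys, xi) - yi) < 1e-12   # hits every node
print("p(1.5) =", lagrange_eval(xs, ys, 1.5))
```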
One method is to write the interpolation polynomial in the Newton form and use the method of divided differences to construct the coefficients, e.g. Neville's algorithm. The cost is O(n²) operations, while Gaussian elimination costs O(n³) operations. Furthermore, you only need to do O(n) extra work if an extra point is added to the data set, while for the other methods, you have to redo the whole computation.

Another method is to use the Lagrange form of the interpolation polynomial. The resulting formula immediately shows that the interpolation polynomial exists under the conditions stated in the above theorem. The Lagrange formula is to be preferred to the Vandermonde formula when we are not interested in computing the coefficients of the polynomial, but only in computing the value of p(x) at a given x not in the original data set. In this case, we can reduce complexity to O(n²).[6]

The Bernstein form was used in a constructive proof of the Weierstrass approximation theorem by Bernstein and has nowadays gained great importance in computer graphics in the form of Bézier curves.

## Linear combination of the given values

The Lagrange form of the interpolating polynomial is a linear combination of the given values. In many scenarios, an efficient and convenient polynomial interpolation is a linear combination of the given values, using previously known coefficients. Given a set of ${\displaystyle k+1}$ data points ${\displaystyle (x_{0},y_{0}),\ldots ,(x_{j},y_{j}),\ldots ,(x_{k},y_{k})}$ where each data point is a (position, value) pair and where no two positions ${\displaystyle x_{j}}$ are the same, the interpolation polynomial in the Lagrange form is a linear combination

${\displaystyle y(x):=\sum _{j=0}^{k}y_{j}c_{j}(x)}$

of the given values ${\displaystyle y_{j}}$ with each coefficient ${\displaystyle c_{j}(x)}$ given by evaluating the corresponding Lagrange basis polynomial using the given ${\displaystyle k+1}$ positions ${\displaystyle x_{j}}$:

${\displaystyle c_{j}(x)=\ell _{j}(x,x_{0},x_{1},\ldots ,x_{k}):=\prod _{\begin{smallmatrix}0\leq m\leq k\\m\neq j\end{smallmatrix}}{\frac {x-x_{m}}{x_{j}-x_{m}}}={\frac {(x-x_{0})}{(x_{j}-x_{0})}}\cdots {\frac {(x-x_{j-1})}{(x_{j}-x_{j-1})}}{\frac {(x-x_{j+1})}{(x_{j}-x_{j+1})}}\cdots {\frac {(x-x_{k})}{(x_{j}-x_{k})}}.}$

Each coefficient ${\displaystyle c_{j}(x)}$ in the linear combination depends on the given positions ${\displaystyle x_{j}}$ and the desired position ${\displaystyle x}$, but not on the given values ${\displaystyle y_{j}}$. For each coefficient, inserting the values of the given positions ${\displaystyle x_{j}}$ and simplifying yields an expression ${\displaystyle c_{j}(x)}$, which depends only on ${\displaystyle x}$. Thus the same coefficient expressions ${\displaystyle c_{j}(x)}$ can be used in a polynomial interpolation of a given second set of ${\displaystyle k+1}$ data points ${\displaystyle (x_{0},v_{0}),\ldots ,(x_{j},v_{j}),\ldots ,(x_{k},v_{k})}$ at the same given positions ${\displaystyle x_{j}}$, where the second given values ${\displaystyle v_{j}}$ differ from the first given values ${\displaystyle y_{j}}$.
Using the same coefficient expressions ${\displaystyle c_{j}(x)}$ as for the first set of data points, the interpolation polynomial of the second set of data points is the linear combination

${\displaystyle v(x):=\sum _{j=0}^{k}v_{j}c_{j}(x).}$

For each coefficient ${\displaystyle c_{j}(x)}$ in the linear combination, the expression resulting from the Lagrange basis polynomial ${\displaystyle \ell _{j}(x,x_{0},x_{1},\ldots ,x_{k})}$ only depends on the relative spaces between the given positions, not on the individual value of any position. Thus the same coefficient expressions ${\displaystyle c_{j}(x)}$ can be used in a polynomial interpolation of a given third set of ${\displaystyle k+1}$ data points

${\displaystyle (t_{0},w_{0}),\ldots ,(t_{j},w_{j}),\ldots ,(t_{k},w_{k})}$

where each position ${\displaystyle t_{j}}$ is related to the corresponding position ${\displaystyle x_{j}}$ in the first set by ${\displaystyle t_{i}=ax_{i}+b}$ and the desired positions are related by ${\displaystyle t=ax+b}$, for a constant scaling factor a and a constant shift b for all positions. Using the same coefficient expressions ${\displaystyle c_{j}(t)}$ as for the first set of data points, the interpolation polynomial of the third set of data points is the linear combination

${\displaystyle w(t):=\sum _{j=0}^{k}w_{j}c_{j}(t).}$

In many applications of polynomial interpolation, the given set of ${\displaystyle k+1}$ data points is at equally spaced positions. In this case, it can be convenient to define the x-axis of the positions such that ${\displaystyle x_{i}=i}$. For example, a given set of 3 equally-spaced data points ${\displaystyle (x_{0},y_{0}),(x_{1},y_{1}),(x_{2},y_{2})}$ is then ${\displaystyle (0,y_{0}),(1,y_{1}),(2,y_{2})}$. The interpolation polynomial in the Lagrange form is the linear combination

{\displaystyle {\begin{aligned}y(x):=\sum _{j=0}^{2}y_{j}c_{j}(x)=y_{0}{\frac {(x-1)(x-2)}{(0-1)(0-2)}}+y_{1}{\frac {(x-0)(x-2)}{(1-0)(1-2)}}+y_{2}{\frac {(x-0)(x-1)}{(2-0)(2-1)}}\\=y_{0}{\frac {(x-1)(x-2)}{2}}+y_{1}{\frac {(x-0)(x-2)}{-1}}+y_{2}{\frac {(x-0)(x-1)}{2}}.\end{aligned}}}

This quadratic interpolation is valid for any position x, near or far from the given positions. So, given 3 equally-spaced data points at ${\displaystyle x=0,1,2}$ defining a quadratic polynomial, at an example desired position ${\displaystyle x=1.5}$, the interpolated value after simplification is given by

${\displaystyle y(1.5)=y_{1.5}=(-y_{0}+6y_{1}+3y_{2})/8.}$

This is a quadratic interpolation typically used in the Multigrid method. Again given 3 equally-spaced data points at ${\displaystyle x=0,1,2}$ defining a quadratic polynomial, at the next equally spaced position ${\displaystyle x=3}$, the interpolated (extrapolated) value after simplification is given by

${\displaystyle y(3)=y_{3}=y_{0}-3y_{1}+3y_{2}.}$

Both formulas are verified numerically in the sketch below. In the above polynomial interpolations using a linear combination of the given values, the coefficients were determined using the Lagrange method. In some scenarios, the coefficients can be more easily determined using other methods. Examples follow.

According to the method of finite differences, for any polynomial of degree d or less, any sequence of ${\displaystyle d+2}$ values at equally spaced positions has a ${\displaystyle (d+1)}$th difference exactly equal to 0. The element ${\displaystyle s_{d+1}}$ of the binomial transform is such a ${\displaystyle (d+1)}$th difference. This area is surveyed here.[7]
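A quick check of the two equally-spaced formulas above (a sketch, using a generic fit as the reference; the data values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
y0, y1, y2 = rng.normal(size=3)          # any quadratic through x = 0, 1, 2
coeffs = np.polyfit([0, 1, 2], [y0, y1, y2], deg=2)   # exact fit, 3 points

# Interpolation at x = 1.5 and extrapolation at x = 3:
assert np.isclose(np.polyval(coeffs, 1.5), (-y0 + 6*y1 + 3*y2) / 8)
assert np.isclose(np.polyval(coeffs, 3.0), y0 - 3*y1 + 3*y2)
print("both linear-combination formulas agree with the fitted quadratic")
```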
The binomial transform, T, of a sequence of values {vn}, is the sequence {sn} defined by

${\displaystyle s_{n}=\sum _{k=0}^{n}(-1)^{k}{n \choose k}v_{k}.}$

Ignoring the sign term ${\displaystyle (-1)^{k}}$, the ${\displaystyle n+1}$ coefficients of the element sn are the respective ${\displaystyle n+1}$ elements of the row n of Pascal's Triangle. The triangle of binomial transform coefficients is like Pascal's triangle. The entry in the nth row and kth column of the BTC triangle is ${\displaystyle (-1)^{k}{\tbinom {n}{k}}}$ for any non-negative integer n and any integer k between 0 and n. This results in the following example rows n = 0 through n = 7, top to bottom, for the BTC triangle:

1                          // Row n = 0.
1 -1                       // Row n = 1 or d = 0.
1 -2 1                     // Row n = 2 or d = 1.
1 -3 3 -1                  // Row n = 3 or d = 2.
1 -4 6 -4 1                // Row n = 4 or d = 3.
1 -5 10 -10 5 -1           // Row n = 5 or d = 4.
1 -6 15 -20 15 -6 1        // Row n = 6 or d = 5.
1 -7 21 -35 35 -21 7 -1    // Row n = 7 or d = 6.

For convenience, each row n of the above example BTC triangle also has a label ${\displaystyle d=n-1}$. Thus for any polynomial of degree d or less, any sequence of ${\displaystyle d+2}$ values at equally spaced positions has a linear combination result of 0, when using the ${\displaystyle d+2}$ elements of row d as the corresponding linear coefficients.

For example, 4 equally spaced data points of a quadratic polynomial obey the linear equation given by row ${\displaystyle d=2}$ of the BTC triangle:

${\displaystyle 0=y_{0}-3y_{1}+3y_{2}-y_{3}}$

This is the same linear equation as obtained above using the Lagrange method.

The BTC triangle can also be used to derive other polynomial interpolations. For example, the above quadratic interpolation

${\displaystyle y(1.5)=y_{1.5}=(-y_{0}+6y_{1}+3y_{2})/8}$

can be derived in 3 simple steps as follows. The equally spaced points of a quadratic polynomial obey the rows of the BTC triangle with ${\displaystyle d=2}$ or higher.

First, the row ${\displaystyle d=3}$ spans the given and desired data points ${\displaystyle y_{0},y_{1},y_{1.5},y_{2}}$, together with the unwanted point ${\displaystyle y_{0.5}}$, with the linear equation

${\displaystyle 0=1y_{0}-4y_{0.5}+6y_{1}-4y_{1.5}+1y_{2}}$

Second, the unwanted data point ${\displaystyle y_{0.5}}$ is replaced by an expression in terms of wanted data points. The row ${\displaystyle d=2}$ provides a linear equation with a term ${\displaystyle 1y_{0.5}}$, which results in a term ${\displaystyle 4y_{0.5}}$ by multiplying the linear equation by 4:

${\displaystyle 0=1y_{0.5}-3y_{1}+3y_{1.5}-1y_{2}=4y_{0.5}-12y_{1}+12y_{1.5}-4y_{2}}$

Third, the above two linear equations are added to yield a linear equation equivalent to the above quadratic interpolation for ${\displaystyle y_{1.5}}$:

${\displaystyle 0=(1+0)y_{0}+(-4+4)y_{0.5}+(6-12)y_{1}+(-4+12)y_{1.5}+(1-4)y_{2}=y_{0}-6y_{1}+8y_{1.5}-3y_{2}}$

Similar to other uses of linear equations, the above derivation scales and adds vectors of coefficients. In polynomial interpolation as a linear combination of values, the elements of a vector correspond to a contiguous sequence of regularly spaced positions. The p non-zero elements of a vector are the p coefficients in a linear equation obeyed by any sequence of p data points from any degree d polynomial on any regularly spaced grid, where d is noted by the subscript of the vector. For any vector of coefficients, the subscript obeys ${\displaystyle d\leq p-2}$. When adding vectors with various subscript values, the lowest subscript applies for the resulting vector.
## Interpolation error

When interpolating a given function f by a polynomial of degree n at the nodes x0,...,xn we get the error ${\displaystyle f(x)-p_{n}(x)=f[x_{0},\ldots ,x_{n},x]\prod _{i=0}^{n}(x-x_{i})}$ where ${\displaystyle f[x_{0},\ldots ,x_{n},x]}$ is the notation for divided differences. If f is n + 1 times continuously differentiable on a closed interval I and ${\displaystyle p_{n}(x)}$ is a polynomial of degree at most n that interpolates f at n + 1 distinct points {xi} (i=0,1,...,n) in that interval, then for each x in the interval there exists ξ in that interval such that ${\displaystyle f(x)-p_{n}(x)={\frac {f^{(n+1)}(\xi )}{(n+1)!}}\prod _{i=0}^{n}(x-x_{i}).}$

The above error bound suggests choosing the interpolation points xi such that the product ${\displaystyle \left|\prod (x-x_{i})\right|}$ is as small as possible. The Chebyshev nodes achieve this.

### Proof

Set the error term as ${\displaystyle R_{n}(x)=f(x)-p_{n}(x)}$ and set up an auxiliary function: ${\displaystyle Y(t)=R_{n}(t)-{\frac {R_{n}(x)}{W(x)}}W(t)}$ where ${\displaystyle W(t)=\prod _{i=0}^{n}(t-x_{i}).}$ Since the xi are roots of ${\displaystyle R_{n}(t)}$ and ${\displaystyle W(t)}$, we have Y(x) = Y(xi) = 0, which means Y has at least n + 2 roots. From Rolle's theorem, ${\displaystyle Y^{\prime }(t)}$ has at least n + 1 roots; applying Rolle's theorem repeatedly, ${\displaystyle Y^{(n+1)}(t)}$ has at least one root ξ, where ξ is in the interval I. So we can get ${\displaystyle Y^{(n+1)}(t)=R_{n}^{(n+1)}(t)-{\frac {R_{n}(x)}{W(x)}}\ (n+1)!}$ Since ${\displaystyle p_{n}(x)}$ is a polynomial of degree at most n, ${\displaystyle R_{n}^{(n+1)}(t)=f^{(n+1)}(t).}$ Thus ${\displaystyle Y^{(n+1)}(t)=f^{(n+1)}(t)-{\frac {R_{n}(x)}{W(x)}}\ (n+1)!}$ Since ξ is a root of ${\displaystyle Y^{(n+1)}(t)}$, ${\displaystyle Y^{(n+1)}(\xi )=f^{(n+1)}(\xi )-{\frac {R_{n}(x)}{W(x)}}\ (n+1)!=0}$ Therefore, ${\displaystyle R_{n}(x)=f(x)-p_{n}(x)={\frac {f^{(n+1)}(\xi )}{(n+1)!}}\prod _{i=0}^{n}(x-x_{i})}$.

Thus the remainder term in the Lagrange form of the Taylor theorem is a special case of interpolation error when all interpolation nodes xi are identical.[8] Note that the error will be zero when ${\displaystyle x=x_{i}}$ for any i. Thus, the maximum error will occur at some point in the interval between two successive nodes.

### For equally spaced intervals

In the case of equally spaced interpolation nodes where ${\displaystyle x_{i}=a+ih}$, for ${\displaystyle i=0,1,\ldots ,n,}$ and where ${\displaystyle h=(b-a)/n,}$ the product term in the interpolation error formula can be bound as[9] ${\displaystyle \left|\prod _{i=0}^{n}(x-x_{i})\right|=\prod _{i=0}^{n}\left|x-x_{i}\right|\leq {\frac {n!}{4}}h^{n+1}.}$
Thus the error bound can be given as ${\displaystyle \left|R_{n}(x)\right|\leq {\frac {h^{n+1}}{4(n+1)}}\max _{\xi \in [a,b]}\left|f^{(n+1)}(\xi )\right|.}$ However, this bound is only useful if the derivative factor is dominated by ${\displaystyle h^{n+1}}$, i.e. ${\displaystyle f^{(n+1)}(\xi )h^{n+1}\ll 1}$. In several cases, this is not true and the error actually increases as n → ∞ (see Runge's phenomenon). That question is treated in the section Convergence properties.

## Lebesgue constants

See the main article: Lebesgue constant.

We fix the interpolation nodes x0, ..., xn and an interval [a, b] containing all the interpolation nodes. The process of interpolation maps the function f to a polynomial p. This defines a mapping X from the space C([a, b]) of all continuous functions on [a, b] to itself. The map X is linear and it is a projection on the subspace Πn of polynomials of degree n or less. The Lebesgue constant L is defined as the operator norm of X. One has (a special case of Lebesgue's lemma): ${\displaystyle \|f-X(f)\|\leq (L+1)\|f-p^{*}\|.}$ In other words, the interpolation polynomial is at most a factor (L + 1) worse than the best possible approximation. This suggests that we look for a set of interpolation nodes that makes L small. In particular, we have for Chebyshev nodes: ${\displaystyle L\leq {\frac {2}{\pi }}\log(n+1)+1.}$ We conclude again that Chebyshev nodes are a very good choice for polynomial interpolation: their Lebesgue constant grows only logarithmically in n, whereas for equidistant nodes the growth is exponential. However, those nodes are not optimal.

## Convergence properties

It is natural to ask for which classes of functions and for which interpolation nodes the sequence of interpolating polynomials converges to the interpolated function as n → ∞. Convergence may be understood in different ways, e.g. pointwise, uniform or in some integral norm. The situation is rather bad for equidistant nodes, in that uniform convergence is not even guaranteed for infinitely differentiable functions. One classical example, due to Carl Runge, is the function f(x) = 1 / (1 + x^2) on the interval [−5, 5]. The interpolation error ||f − pn|| grows without bound as n → ∞. Another example is the function f(x) = |x| on the interval [−1, 1], for which the interpolating polynomials do not even converge pointwise except at the three points x = ±1, 0.[10]

One might think that better convergence properties may be obtained by choosing different interpolation nodes. The following result seems to give a rather encouraging answer:

Theorem. For any function f(x) continuous on an interval [a,b] there exists a table of nodes for which the sequence of interpolating polynomials ${\displaystyle p_{n}(x)}$ converges to f(x) uniformly on [a,b].

Proof. It is clear that the sequence of polynomials of best approximation ${\displaystyle p_{n}^{*}(x)}$ converges to f(x) uniformly (due to the Weierstrass approximation theorem). Now we have only to show that each ${\displaystyle p_{n}^{*}(x)}$ may be obtained by means of interpolation on certain nodes. But this is true due to a special property of polynomials of best approximation known from the equioscillation theorem: the error ${\displaystyle f(x)-p_{n}^{*}(x)}$ alternates in sign at least n + 2 times, so such polynomials intersect f(x) at least n + 1 times. Choosing the points of intersection as interpolation nodes we obtain the interpolating polynomial coinciding with the best approximation polynomial.

The defect of this method, however, is that the interpolation nodes must be calculated anew for each new function f(x), and the algorithm is difficult to implement numerically. Does there exist a single table of nodes for which the sequence of interpolating polynomials converges to any continuous function f(x)? The answer is unfortunately negative:

Theorem. For any table of nodes there is a continuous function f(x) on an interval [a, b] for which the sequence of interpolating polynomials diverges on [a,b].[11]

The proof essentially uses the lower bound estimation of the Lebesgue constant, which we defined above to be the operator norm of Xn (where Xn is the projection operator on Πn). Now we seek a table of nodes for which ${\displaystyle \lim _{n\to \infty }X_{n}f=f,{\text{ for every }}f\in C([a,b]).}$ Due to the Banach–Steinhaus theorem, this is only possible when the norms of Xn are uniformly bounded, which cannot be true since we know that ${\displaystyle \|X_{n}\|\geq {\tfrac {2}{\pi }}\log(n+1)+C.}$

For example, if equidistant points are chosen as interpolation nodes, the function from Runge's phenomenon demonstrates divergence of such interpolation. Note that this function is not only continuous but even infinitely differentiable on [−1, 1]. For Chebyshev nodes, however, such an example is much harder to find, due to the following result:

Theorem. For every absolutely continuous function on [−1, 1] the sequence of interpolating polynomials constructed on Chebyshev nodes converges to f(x) uniformly.[12]
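The contrast between equidistant and Chebyshev nodes is easy to reproduce. Below is a minimal sketch in Python for Runge's function; `np.polyfit` is used only for brevity (it may emit a conditioning warning at the larger degrees; a barycentric formula is numerically safer):

```python
import numpy as np

def interp_max_error(f, nodes, xs):
    """Max |f - p_n| on the grid xs, where p_n interpolates f at the nodes."""
    coeffs = np.polyfit(nodes, f(nodes), len(nodes) - 1)
    return np.max(np.abs(f(xs) - np.polyval(coeffs, xs)))

f = lambda x: 1.0 / (1.0 + x**2)           # Runge's function on [-5, 5]
xs = np.linspace(-5, 5, 2001)
for n in (5, 10, 15):
    equi = np.linspace(-5, 5, n + 1)
    # Chebyshev nodes (first kind), scaled to [-5, 5].
    cheb = 5 * np.cos((2 * np.arange(n + 1) + 1) * np.pi / (2 * (n + 1)))
    print(n, interp_max_error(f, equi, xs), interp_max_error(f, cheb, xs))
# The equidistant error grows with n; the Chebyshev error shrinks.
```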
## Related concepts

Runge's phenomenon shows that for high values of n, the interpolation polynomial may oscillate wildly between the data points. This problem is commonly resolved by the use of spline interpolation. Here, the interpolant is not a polynomial but a spline: a chain of several polynomials of a lower degree.

Interpolation of periodic functions by harmonic functions is accomplished by Fourier transform. This can be seen as a form of polynomial interpolation with harmonic base functions, see trigonometric interpolation and trigonometric polynomial.

Hermite interpolation problems are those where not only the values of the polynomial p at the nodes are given, but also all derivatives up to a given order. This turns out to be equivalent to a system of simultaneous polynomial congruences, and may be solved by means of the Chinese remainder theorem for polynomials. Birkhoff interpolation is a further generalization where only derivatives of some orders are prescribed, not necessarily all orders from 0 to k.

Collocation methods for the solution of differential and integral equations are based on polynomial interpolation. The technique of rational function modeling is a generalization that considers ratios of polynomial functions. Finally, multivariate interpolation extends these ideas to higher dimensions.

## Notes

1. ^ Tiemann, Jerome J. (May–June 1981). "Polynomial Interpolation". I/O News. 1 (5): 16. ISSN 0274-9998. Retrieved 3 November 2017.
2. ^ Gautschi, Walter (1975). "Norm Estimates for Inverses of Vandermonde Matrices". Numerische Mathematik. 23 (4): 337–347. doi:10.1007/BF01438260.
3. ^ Higham, N. J. (1988). "Fast Solution of Vandermonde-Like Systems Involving Orthogonal Polynomials". IMA Journal of Numerical Analysis. 8 (4): 473–486. doi:10.1093/imanum/8.4.473.
4. ^ Björck, Å.; Pereyra, V. (1970). "Solution of Vandermonde Systems of Equations". Mathematics of Computation. American Mathematical Society. 24 (112): 893–903. doi:10.2307/2004623. JSTOR 2004623.
5. ^ Calvetti, D.; Reichel, L. (1993). "Fast Inversion of Vandermonde-Like Matrices Involving Orthogonal Polynomials". BIT. 33 (33): 473–484. doi:10.1007/BF01990529.
6. ^ R. Bevilaqua, D. Bini, M. Capovani and O. Menchi (2003). Appunti di Calcolo Numerico. Chapter 5, p. 89. Servizio Editoriale Universitario Pisa – Azienda Regionale Diritto allo Studio Universitario.
7. ^ Boyadzhiev, Khristo N. (2012). "Close Encounters with the Stirling Numbers of the Second Kind" (PDF). Math. Mag. 85: 252–266.
8. ^
9. ^
10. ^ Watson (1980, p. 21) attributes the last example to Bernstein (1912).
11. ^ Watson (1980, p. 21) attributes this theorem to Faber (1914).
12. ^ Krylov, V. I. (1956). "Сходимость алгебраического интерполирования по корням многочленов Чебышева для абсолютно непрерывных функций и функций с ограниченным изменением" [Convergence of algebraic interpolation with respect to the roots of Chebyshev's polynomial for absolutely continuous functions and functions of bounded variation]. Doklady Akademii Nauk SSSR (N.S.) (in Russian). 107: 362–365. MR 18-32.

## References

• Atkinson, Kendell A. (1988), "Chapter 3", An Introduction to Numerical Analysis (2nd ed.), John Wiley and Sons, ISBN 0-471-50023-2
• Bernstein, Sergei N. (1912), "Sur l'ordre de la meilleure approximation des fonctions continues par les polynômes de degré donné" [On the order of the best approximation of continuous functions by polynomials of a given degree], Mem. Acad. Roy. Belg. (in French), 4: 1–104
• Brutman, L. (1997), "Lebesgue functions for polynomial interpolation — a survey", Ann. Numer. Math., 4: 111–127
• Faber, Georg (1914), "Über die interpolatorische Darstellung stetiger Funktionen" [On the interpolation of continuous functions], Deutsche Math. Jahr. (in German), 23: 192–210
• Powell, M. J. D. (1981), "Chapter 4", Approximation Theory and Methods, Cambridge University Press, ISBN 0-521-29514-9
• Schatzman, Michelle (2002), "Chapter 4", Numerical Analysis: A Mathematical Introduction, Oxford: Clarendon Press, ISBN 0-19-850279-6
• Süli, Endre; Mayers, David (2003), "Chapter 6", An Introduction to Numerical Analysis, Cambridge University Press, ISBN 0-521-00794-1
• Watson, G. Alistair (1980), Approximation Theory and Numerical Methods, John Wiley, ISBN 0-471-27706-1
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 129, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9388479590415955, "perplexity": 4312.702903117062}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039744348.50/warc/CC-MAIN-20181118093845-20181118115845-00433.warc.gz"}
http://mathhelpforum.com/calculus/117168-derivatives.html
# Math Help - derivatives

1. ## derivatives

Can someone help me out with this question? I am completely lost.

A) If y = ln{x + (x^2 - a^2)}, evaluate dy/dx, expressing your answer in its simplest form. Show that (x^2 - a^2) d^(2)y/dx^2 + x(dy/dx) = 0

2. Originally Posted by sderosa518
Can someone help me out with this question? I am completely lost.
A) If y = ln{x + (x^2 - a^2)}, evaluate dy/dx, expressing your answer in its simplest form. Show that (x^2 - a^2) d^(2)y/dx^2 + x(dy/dx) = 0

check your original log function and the conclusion you're supposed to reach ... as you've written it, it doesn't work.

3. I copied the problem down right. Is it possible for you to write out the problem, or the equation, as you are reading it, 100% correct?

4. Originally Posted by sderosa518
I copied the problem down right. Is it possible for you to write out the problem, or the equation, as you are reading it, 100% correct?

$y = \ln\left[x + (x^2-a^2)\right]$

$\frac{dy}{dx} = \frac{2x+1}{x+(x^2-a^2)}$

$\frac{d^2y}{dx^2} = \frac{2[x+(x^2-a^2)] - (2x+1)^2}{[x + (x^2-a^2)]^2} = -\frac{2x^2+2x+2a^2+1}{[x+(x^2-a^2)]^2}$

$x \cdot \frac{dy}{dx} = \frac{x(2x+1)}{x+(x^2-a^2)}$

$(x^2-a^2) \cdot \frac{d^2y}{dx^2} = \frac{(a^2-x^2)(2x^2+2x+2a^2+1)}{[x+(x^2-a^2)]^2}$

... now, see if you can get the last two expressions to add up to 0.

5. With all due respect, can you show the final answer? As you were saying, the problem is written down incorrectly. Are you trying to tell me that there is no solution? If there is, can you finish it for me? I would appreciate it!!!

6. Originally Posted by sderosa518
With all due respect, can you show the final answer? ...

I am telling you that this statement ...
Show that (x^2 - a^2) d^(2)y/dx^2 + x(dy/dx) = 0
does not work.

7. That's part of my homework assignment. It's in my calculus book.

8. Originally Posted by sderosa518
That's part of my homework assignment. It's in my calculus book.

Look, there's a problem with the question as posted. This has been said several times now. Sorting out this problem is not our job. It's your teacher's job. So speak to your teacher.

9. Let's try this. This is exactly how the problem is written; check the attachment. Let me know if it's better. If it doesn't work, then I will go to my professor. Thank you.

10. Originally Posted by sderosa518
Let's try this. This is exactly how the problem is written; check the attachment. ...

nothing's changed ... get your professor to show you how the last statement is true. I eagerly await to see his/her complete solution.

11. Indeed!! When he shows me the answer, I'll get back to you as soon as possible.

12. Skeeter, you're right!!!! It can't be solved; it doesn't equal zero. The professor tried to solve it to zero; it's not possible. My professor was wrong. Anyway, thank you for your help.

13. Skeeter, we found out what was wrong with the problem. The person who typed up the test left out an exponent. It should be y = ln{x + (x^2 - a^2)^(1/2)}.
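For completeness, a worked check of the corrected problem (our own derivation, not posted in the thread); with the square root restored, the identity does hold:

$y=\ln\left(x+\sqrt{x^{2}-a^{2}}\right)$

$\frac{dy}{dx}=\frac{1+\frac{x}{\sqrt{x^{2}-a^{2}}}}{x+\sqrt{x^{2}-a^{2}}}=\frac{1}{\sqrt{x^{2}-a^{2}}}$

$\frac{d^{2}y}{dx^{2}}=-\frac{x}{\left(x^{2}-a^{2}\right)^{3/2}}$

$(x^{2}-a^{2})\frac{d^{2}y}{dx^{2}}+x\frac{dy}{dx}=-\frac{x}{\sqrt{x^{2}-a^{2}}}+\frac{x}{\sqrt{x^{2}-a^{2}}}=0$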
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8146393895149231, "perplexity": 849.1578730490907}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678697782/warc/CC-MAIN-20140313024457-00022-ip-10-183-142-35.ec2.internal.warc.gz"}
https://www.allmath.com/determinent-calculator.php
# Determinant Calculator

Enter the values of the matrix to calculate the determinant of a matrix using the determinant calculator.

This matrix determinant calculator can calculate the determinants of matrices up to order 5. What is useful about this calculator is that you can see all the steps of the calculation.

## Definition - Determinant of a Matrix

Determinants are scalar quantities and are functions of square matrices. They help determine the nature of a matrix (for example, whether it is invertible) and are used in computing its inverse. A determinant is a function of a matrix that gives a single value, and it is usually denoted with capital letters or vertical bars, as in |A|.

## Formula to find the determinant of a matrix

Square matrices exist in various sizes, and each size has a different determinant formula; it is not possible to write formulas for all, so below are the formulas for the most common ones. (The formula images did not survive extraction; the standard formulas are restated here.) For a 2x2 matrix with rows (a, b) and (c, d): det = (a x d) - (c x b).

### For a 3x3 matrix

For a 3x3 matrix with rows (a, b, c), (d, e, f), (g, h, i), the expansion method along the first row gives: det = a(ei - fh) - b(di - fg) + c(dh - eg).

## How to find the determinant of a matrix?

Example

Find the determinant of the following matrix (the matrix image is missing from this extraction; from the working below it was the 2x2 matrix with rows (4, 6) and (3, 2)):

Solution:

Step 1: Write down the formula.
det = (a x d) - (c x b)

Step 2: Place the values in the equation.
= (4 x 2) - (3 x 6)
= 8 - 18
= -10
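The same cofactor (Laplace) expansion the page describes is easy to sketch in Python; this is our own illustrative implementation, not the calculator's code:

```python
def det(m):
    """Determinant by cofactor (Laplace) expansion along the first row."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j+1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

print(det([[4, 6], [3, 2]]))  # (4)(2) - (3)(6) = -10, as in the example
```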
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9203862547874451, "perplexity": 782.3518243250946}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00730.warc.gz"}
http://math.stackexchange.com/questions/351498/finding-the-f-conductor-of-of-basis-vectors
# Finding the $f$-conductor of basis vectors

Let $f$ be a linear operator on $\Bbb{K}^4$ whose matrix is $$\begin{bmatrix} c & 0 & 0 & 0 \\ 1 & c & 0 &0 \\ 0 &1 &c & 0 \\0& 0 & 1 & c \end{bmatrix}$$ Let $W=\ker(f-cI_4)$. Find the generators of $S(e_4,W)$, $S(e_3,W)$, $S(e_2,W)$, $S(e_1,W)$, where $S(e_i,W)=\{P(x)\in \Bbb{K}[x] \mid P(f)(e_i)\in W\}$ and the $e_i$ are the standard basis vectors.

What I've done: I showed that $W$ is spanned by $e_4$. So for $S(e_4,W)$, it's the set of constant multiples of $e_4$; is it right to say $S(e_4,W)=\langle c\rangle$? As for $S(e_3,W)$: $(f-cI_4)(e_3)=e_4$, hence $S(e_3,W)$ is the set of all polynomials in $\Bbb{K}[x]$ of the form $a_1x+\ldots+a_nx^n$. I still don't know how to write or find the generator. As for $S(e_2,W)$ and $S(e_1,W)$, we have $(f-cI_4)(e_2)=e_3$ and $(f-cI_4)(e_1)=e_2$, so upon further applications of $(f-cI_4)$ we get $e_4$. My problem is that I don't know how to write the generators of these sets, especially for the last three sets.

- $f-cI$ has last two columns zero. Its kernel cannot be spanned by $\mathbf{e}_4$ since the kernel is two dimensional. What is true, however, is that $\mathbf{e}_4$ (and $\mathbf{e}_3$) is in $W$. This is enough to conclude that every polynomial is in $S(\mathbf{e}_4,\ W)$ because $P(f)$ commutes with $f-cI$ for all $P\in \Bbb{K}[x]$. Therefore the monic generator of $S(\mathbf{e}_4,\ W)$ (and likewise for $S(\mathbf{e}_3,\ W)$) is $1$. –  EuYu Apr 4 '13 at 23:28
- @EuYu I am very sorry, I got no answers on this so I did not notice the typo. I hope I didn't take much of your time –  user10444 Apr 4 '13 at 23:38

Since $f-cI$ has kernel spanned by $\mathbf{e}_4$, it follows that every polynomial $P\in \mathbb{K}[x]$ is an element of $S(\mathbf{e}_4,\ W)$. This is because $P(f)$ commutes with $f-cI$ for all $P$, and therefore $$(f-cI)\left[P(f)\mathbf{e}_4\right] = P(f)\left[(f-cI)\mathbf{e}_4\right] = P(f)\mathbf{0} = \mathbf{0}.$$ Therefore the monic generator for $S(\mathbf{e}_4,\ W)$ is $1$ (indeed any non-zero constant can act as a suitable generator, as you correctly concluded; we will talk about monic generators for the remainder of the question). By definition, the generator must divide each element of the ideal. You have actually already done the majority of the work by noticing the shift property of $(f-cI)$. We have $(x-c)\in S(\mathbf{e}_3,\ W)$, so the generator is either $1$, which is impossible since $\mathbf{e}_3\notin W$, or it is $(x-c)$, which is the case here. Likewise, we have $(x-c)^2 \in S(\mathbf{e}_2,\ W)$, so the generator is either $1$ (no good since $\mathbf{e}_2\notin W$) or $(x-c)$ (again, no good since $(f-cI)\mathbf{e}_2 = \mathbf{e}_3\notin W$) or $(x-c)^2$, which is the case here. The case for $S(\mathbf{e}_1,\ W)$ is very similar. Try your hand at that.

- Thank you very much, that cleared a lot up for me; again, sorry for the typo. Concerning $S(e_1,W)$: we have $(f-cI)^3e_1=e_4\in W$, so $(x-c)^3\in S(e_1,W)$; hence the generator is one of $1, x-c, (x-c)^2, (x-c)^3$. $1$ does not work since $e_1 \notin W$; neither does $x-c$ since $(f-cI)e_1=e_2\notin W$; neither does $(x-c)^2$ since $(f-cI)^2e_1=e_3\notin W$; hence it is $(x-c)^3$. –  user10444 Apr 5 '13 at 0:37
- @user10444 That's exactly right! –  EuYu Apr 5 '13 at 1:38
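The shift structure of $(f-cI)$ is easy to verify numerically. A small sketch of ours with numpy (using the corrected matrix, for which $W=\operatorname{span}\{e_4\}$); the generator exponent is the smallest $m$ with $(f-cI)^m e_i \in W$:

```python
import numpy as np

c = 2.0                                   # any value of c gives the same answer
A = np.array([[c, 0, 0, 0],
              [1, c, 0, 0],
              [0, 1, c, 0],
              [0, 0, 1, c]], dtype=float)
N = A - c * np.eye(4)                     # f - cI, a nilpotent shift
in_W = lambda v: np.allclose(v[:3], 0)    # W = ker(f - cI) = span{e4}

for i in range(4):
    v = np.eye(4)[:, i]                   # basis vector e_{i+1}
    m = 0
    while not in_W(v):                    # smallest m with (f-cI)^m e_i in W
        v = N @ v
        m += 1
    print(f"S(e{i+1}, W) is generated by (x - c)^{m}")
```

This prints exponents 3, 2, 1, 0 for $e_1,\ldots,e_4$, matching the answers in the thread.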
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9821686148643494, "perplexity": 141.40718038930882}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663218.28/warc/CC-MAIN-20140930004103-00282-ip-10-234-18-248.ec2.internal.warc.gz"}
http://amicor.blogspot.com/2014/04/the-lhc-has-found-new-particle.html
Not content with perhaps the biggest scientific discovery of the decade, scientists at the Large Hadron Collider continue to search for new particles—and now they've found one that seems to be an entirely new form of matter. A series of experiments at the LHC have confirmed that a new particle called Z(4430)—catchy!—actually exists, and it's the best evidence to date of a new form of matter called a tetraquark. Quarks are the subatomic particles that, combined, form all matter. In pairs they form mesons; in triplets, protons and neutrons. Tetraquarks are a hypothesized combination of four of the little things—and Z(4430) was, if it existed, thought to be an example. Thing was, nobody knew for sure—until now—whether it existed or not. Its sighting at the LHC changes things. Researchers from CERN have found as many as 4000 of the particles, which means that those who think tetraquarks do exist are pretty excited. There remains some work to be done to understand once and for all if Z(4430) is with 100 percent certainty a tetraquark, and even then exactly what that means for us. But in the meantime, it's nice to know that the LHC isn't resting on its laurels. [arXiv via New Scientist]
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8197598457336426, "perplexity": 1160.7036117355462}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469258936356.77/warc/CC-MAIN-20160723072856-00283-ip-10-185-27-174.ec2.internal.warc.gz"}
https://tomkojar.wordpress.com/
The Action is defined as $S(q(t)):=\int_{[t_{1},t_{2}]}L(q,q',t)dt$. For example, in the absence of a potential, the Lagrangian of a free particle in the plane, written in polar coordinates, is $L=\frac{1}{2}m|\dot{\gamma}|^{2}= \frac{1}{2} m(\dot{r}^{2} + r^{2} \dot{\phi}^{2}).$

Setting $\delta S=0$ yields the Euler–Lagrange equations $\frac{d}{dt}\partial_{\dot{\phi}}L=\partial_{\phi}L$ and $\frac{d}{dt}\partial_{\dot{r}}L=\partial_{r}L$. For our example these read $\frac{d}{dt}\left(m r^{2}\dot{\phi}\right)=0$ and $m\ddot{r}=m r\dot{\phi}^{2}$, with solutions the straight lines $re^{i\phi}=(at+b)+i(ct+d).$

Intuitively: the action plays the role of distance in configuration space the way the usual metric measures distance in Euclidean space; extremizing it singles out the shortest path between states.

## Gelfand–Naimark theorem and Probability

I will summarize connections with probability in random matrix theory.

## Buffon's noodle problem

I will summarize the steps in the proof of Buffon's noodle problem.

## Large deviations and entropy

Details on 1) the relation of microstates and rate functions, 2) the Kullback–Leibler divergence.
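A quick symbolic check that the straight lines really solve both Euler–Lagrange equations (a sketch of ours with sympy; it relies on sympy's ability to substitute expressions for function symbols inside unevaluated derivatives):

```python
import sympy as sp

t, m, a, b, c, d = sp.symbols('t m a b c d', real=True)
r, phi = sp.Function('r')(t), sp.Function('phi')(t)

# Free-particle Lagrangian in polar coordinates.
L = sp.Rational(1, 2) * m * (r.diff(t)**2 + r**2 * phi.diff(t)**2)

# Euler-Lagrange residuals d/dt(dL/dq') - dL/dq for q = r and q = phi.
EL = [sp.diff(L, q.diff(t)).diff(t) - sp.diff(L, q) for q in (r, phi)]

# The straight line r e^{i phi} = (a t + b) + i (c t + d), in polar form.
line = {r: sp.sqrt((a*t + b)**2 + (c*t + d)**2),
        phi: sp.atan2(c*t + d, a*t + b)}
vals = {a: 1.3, b: 0.7, c: -0.4, d: 2.1, m: 1.0, t: 0.9}
for e in EL:
    # Substitute the trajectory, evaluate the derivatives, then plug in numbers.
    print(sp.N(e.subs(line).doit().subs(vals), chop=True))  # both print 0
```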
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 17, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9914172291755676, "perplexity": 2209.4538494834087}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170425.26/warc/CC-MAIN-20170219104610-00356-ip-10-171-10-108.ec2.internal.warc.gz"}
https://www.ias.ac.in/listing/bibliography/pram/GAUTAM_GANGOPADHYAY
Articles written in Pramana – Journal of Physics

• The effect of environment induced pure decoherence on the generalized Jaynes-Cummings model

We have studied the effect of environment-induced pure decoherence on the generalized Jaynes-Cummings model (JCM). This generalized JCM is introduced to take into account both atom-field interaction and a class of spin-orbit interactions in the same framework. For the generalized JCM with atom-field interaction, it is shown that along with the suppression of the oscillatory behaviour of the atomic and field variables, in the steady state, atomic energy is transferred to the field or vice versa through the dressed-state coherence, depending on the initial condition of the atom-field system and the model under consideration. It is also shown that an initially Poissonian field acquires a sub-Poissonian character in the steady state, and thus not all the nonclassical properties are erased by the decoherence in the JCM. An interesting effect of this decoherence mechanism is that it affects the population and coherence properties of the individual subsystems in different ways. As an example of the generalized JCM with spin-orbit interaction, the dynamics of the spin of the hydrogen atom in a magnetic field is studied to show the effect of decoherence.

• Dynamics of cascade three-level system interacting with the classical and quantized field

We study the exact solutions of the cascade three-level atom interacting with a single-mode classical and quantized field with different initial conditions of the atom. For the semiclassical model, it is found that if the atom is initially in the middle level, the time-dependent populations of the upper and lower levels are always equal. This dynamical symmetry exhibited by the classical field is spoiled on quantization of the field mode. To reveal this non-classical effect, a Euler matrix formalism is developed to solve for the dressed states of the cascade Jaynes-Cummings model (JCM). Possible modification of such an effect on the collapse and revival phenomenon is also discussed by taking the quantized field in a coherent state.

• Effect of field quantization on Rabi oscillation of equidistant cascade four-level system

We have exactly solved a model of an equidistant cascade four-level system interacting with a single-mode radiation field both semiclassically and quantum mechanically by exploiting its similarity with the Jaynes-Cummings model. For the classical field, it is shown that the Rabi oscillation of the system initially in the first level (second level) is similar to that of the system when it is initially in the fourth level (third level). We then proceed to solve the quantized version of the model, where the dressed state is constructed using a six-parameter four-dimensional matrix, and show that the symmetry exhibited in the Rabi oscillation of the system for the semiclassical model is completely destroyed on quantization of the cavity field. Finally, we have studied the collapse and revival of the system for the cavity field mode in a coherent state to discuss the restoration of symmetry, and its implication is discussed.

• Dynamical symmetry breaking of lambda- and vee-type three-level systems on quantization of the field modes

We develop a scheme to construct the Hamiltonians of the lambda-, vee- and cascade-type three-level configurations using the generators of the $SU(3)$ group. It turns out that this approach provides a well-defined selection rule to give different Hamiltonians for each configuration.
The lambda- and vee-type configurations are exactly solved with different initial conditions while taking the two-mode classical and quantized fields. For the classical field, it is shown that the Rabi oscillation of the lambda model is similar to that of the vee model, and the dynamics of the vee model can be recovered from the lambda model and vice versa simply by inversion. We then proceed to solve the quantized version of both models by introducing a novel Euler matrix formalism. It is shown that the dynamical symmetry exhibited in the Rabi oscillation of the two configurations for the semiclassical models is completely destroyed on quantization of the field modes. The symmetry can be restored within the quantized models when both field modes are in coherent states with large average photon number, which is depicted through the collapse and revival of the Rabi oscillations.

• Physics potential of the ICAL detector at the India-based Neutrino Observatory (INO)

The upcoming 50 kt magnetized iron calorimeter (ICAL) detector at the India-based Neutrino Observatory (INO) is designed to study atmospheric neutrinos and antineutrinos separately over a wide range of energies and path lengths. The primary focus of this experiment is to explore the Earth matter effects by observing the energy and zenith angle dependence of the atmospheric neutrinos in the multi-GeV range. This study will be crucial to address some of the outstanding issues in neutrino oscillation physics, including the fundamental issue of neutrino mass hierarchy. In this document, we present the physics potential of the detector as obtained from realistic detector simulations. We describe the simulation framework, the neutrino interactions in the detector, and the expected response of the detector to particles traversing it. The ICAL detector can determine the energy and direction of the muons to a high precision, and in addition, its sensitivity to multi-GeV hadrons increases its physics reach substantially. Its charge identification capability, and hence its ability to distinguish neutrinos from antineutrinos, makes it an efficient detector for determining the neutrino mass hierarchy. In this report, we outline the analyses carried out for the determination of neutrino mass hierarchy and precision measurements of atmospheric neutrino mixing parameters at ICAL, and give the expected physics reach of the detector with 10 years of runtime. We also explore the potential of ICAL for probing new physics scenarios like CPT violation and the presence of magnetic monopoles.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9424867033958435, "perplexity": 677.8128522843892}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780053759.24/warc/CC-MAIN-20210916204111-20210916234111-00435.warc.gz"}
http://mathhelpforum.com/calculus/135170-average-value-function.html
# Math Help - Average value of a function

1. ## Average value of a function

• Let the function f be defined as f(x) = ((x^2) - 1)/((x^2) + 1). Calculate the average of f, avg(f), on the interval [0,2].
• Find the average of the function f(x) = x - (x^3).

Thanks

2. Originally Posted by ces91
• Let the function f be defined as f(x) = ((x^2) - 1)/((x^2) + 1). Calculate the average of f, avg(f), on the interval [0,2].
• Find the average of the function f(x) = x - (x^3).
Thanks

The average value of a function f(x) over the interval [a, b] is $\frac{1}{b - a} \int_a^b f(x) \, dx$. This formula is undoubtedly in your class notes and textbook. If you need more help, please show all your working and say where you are stuck.
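As an illustration of the formula applied to the first function (a worked computation of ours, not posted in the thread):

$\text{avg}(f)=\frac{1}{2-0}\int_{0}^{2}\frac{x^{2}-1}{x^{2}+1}\,dx=\frac{1}{2}\int_{0}^{2}\left(1-\frac{2}{x^{2}+1}\right)dx=\frac{1}{2}\Big[x-2\arctan x\Big]_{0}^{2}=1-\arctan 2\approx -0.107$

(The second question as posted gives no interval, so its average cannot be computed as stated.)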
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.984171986579895, "perplexity": 837.4903128845733}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678683543/warc/CC-MAIN-20140313024443-00027-ip-10-183-142-35.ec2.internal.warc.gz"}
http://cognet.mit.edu/journal/10.1162/artl.2008.14.3.14303
## Artificial Life

Summer 2008, Vol. 14, No. 3, Pages 265-275 (doi: 10.1162/artl.2008.14.3.14303)

© 2008 Massachusetts Institute of Technology

The Emergence of Overlapping Scale-free Genetic Architecture in Digital Organisms

Abstract

We have studied the evolution of genetic architecture in digital organisms and found that the gene overlap follows a scale-free distribution, which is commonly found in metabolic networks of many organisms. Our results show that the slope of the scale-free distribution depends on the mutation rate and that gene development is driven by expansion of already existing genes, which is in direct correspondence to the preferential growth algorithm that gives rise to scale-free networks. To further validate our results, we have constructed a simple model of gene development which recapitulates the results from the evolutionary process and shows that the mutation rate affects the tendency of genes to cluster. In addition, we could relate the slope of the scale-free distribution to the genetic complexity of the organisms and show that a high mutation rate gives rise to a more complex genetic architecture.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8255366683006287, "perplexity": 946.2654949669418}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587606.8/warc/CC-MAIN-20211024204628-20211024234628-00278.warc.gz"}
https://www.gradesaver.com/textbooks/math/calculus/calculus-8th-edition/chapter-1-functions-and-limits-1-3-new-functions-from-old-functions-1-3-exercises-page-44/51
## Calculus 8th Edition

$S(t)$ = $(f\circ g\circ h)(t)$ for $h(t)=\cos t$, $g(t)=\sin t$, and $f(t)=t^{2}$, so that $S(t)=\sin^{2}(\cos t)$.

$(f\circ g\circ h)(t)=f[g(h(t))]$. If we used a calculator, step by step, then for each step we would define the operation performed on the current result. Starting with $t$, we would:

1. calculate the cosine of t, $h(t)=\cos t$
2. calculate the sine of the current result, $g(t)=\sin t$
3. square the current result, $f(t)=t^{2}$
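A tiny Python sketch of the same three-step pipeline (illustrative only; the names are ours):

```python
from math import cos, sin

h = cos                      # step 1: h(t) = cos t
g = sin                      # step 2: g(t) = sin t
f = lambda t: t ** 2         # step 3: f(t) = t^2

def S(t):
    """S(t) = (f o g o h)(t) = sin^2(cos t)."""
    return f(g(h(t)))

print(S(0.0))   # sin(1)^2, approximately 0.708
```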
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9959693551063538, "perplexity": 394.40237578173816}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583680452.20/warc/CC-MAIN-20190119180834-20190119202834-00056.warc.gz"}
http://en.wikipedia.org/wiki/Neutrino_theory_of_light
# Neutrino theory of light

The neutrino theory of light is the proposal that the photon is a composite particle formed of a neutrino–antineutrino pair. It is based on the idea that emission and absorption of a photon corresponds to the creation and annihilation of a particle–antiparticle pair. The neutrino theory of light is not currently accepted as part of mainstream physics, as according to the standard model the photon is an elementary particle, a gauge boson.

## History

In the past, many particles that were once thought to be elementary, such as protons, neutrons, pions, and kaons, have turned out to be composite particles. In 1932, Louis de Broglie[1][2] suggested that the photon might be the combination of a neutrino and an antineutrino. During the 1930s there was great interest in the neutrino theory of light, and Pascual Jordan,[3] Ralph Kronig, Max Born, and others worked on the theory. In 1938, Maurice Henry Lecorney Pryce[4] brought work on the composite photon theory to a halt. He showed that the conditions imposed by Bose–Einstein commutation relations for the composite photon and the connection between its spin and polarization were incompatible. Pryce also pointed out other possible problems: "In so far as the failure of the theory can be traced to any one cause it is fair to say that it lies in the fact that light waves are polarized transversely while neutrino 'waves' are polarized longitudinally," and lack of rotational invariance. In 1966, V. S. Berezinskii[5] reanalyzed Pryce's paper, giving a clearer picture of the problem that Pryce uncovered.

Starting in the 1960s work on the neutrino theory of light resumed, and there continues to be some interest in recent years.[6][7][8][9] Attempts have been made to solve the problem pointed out by Pryce, known as Pryce's theorem, and other problems with the composite photon theory. The incentive is seeing the natural way that many photon properties are generated from the theory, and the knowledge that some problems exist[10][11] with the current photon model. However, there is no experimental evidence that the photon has a composite structure.

Some of the problems for the neutrino theory of light are the non-existence of massless neutrinos[12] with both spin parallel and antiparallel to their momentum, and the fact that composite photons are not bosons. Attempts to solve some of these problems will be discussed, but the lack of massless neutrinos makes it impossible to form a massless photon with this theory. The neutrino theory of light is not considered to be part of mainstream physics.

## Forming photon from neutrinos

Actually, it is not difficult to obtain transversely polarized photons from neutrinos.[13][14]

### The neutrino field

The neutrino field satisfies the Dirac equation with the mass set to zero, $\gamma^\mu p_\mu \Psi = 0.$ The gamma matrices in the Weyl basis are:

$\gamma^0 = \left( \begin{array}{cccc} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{array} \right), \; \; \; \; \gamma^1 = \left( \begin{array}{cccc} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & -1 & 0 & 0 \\ -1 & 0 & 0 & 0 \end{array} \right),$

$\gamma^2 = \left( \begin{array}{cccc} 0 & 0 & 0 & -i \\ 0 & 0 & i & 0 \\ 0 & i & 0 & 0 \\ -i & 0 & 0 & 0 \end{array} \right), \; \; \; \; \gamma^3 = \left( \begin{array}{cccc} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \\ -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{array} \right).$

The matrix $\gamma^0$ is Hermitian while $\gamma^k$ is antihermitian.
They satisfy the anticommutation relation, $\gamma^{\mu} \gamma^{\nu} + \gamma^{\nu} \gamma^{\mu} = 2 \eta^{\mu \nu}I$ where $\eta^{\mu \nu}$ is the Minkowski metric with signature $(+---)$ and $I$ is the unit matrix.

The neutrino field is given by,

$\Psi(x) = {1 \over \sqrt{V}} \sum_\mathbf{k} \left\{ \left[ a_1(\mathbf{k}) u^{+1}_{+1}(\mathbf{k}) + a_2(\mathbf{k}) u^{+1}_{-1}(\mathbf{k}) \right] e^{i k x} \right.$ $\left. + \left[ c_1^\dagger(\mathbf{k}) u^{-1}_{-1} (\mathbf{-k}) + c_2^\dagger(\mathbf{k}) u^{-1}_{+1}(\mathbf{-k}) \right]e^{-i k x} \right\},$

where $k x$ stands for $\mathbf{k} \cdot \mathbf{x} - k_0t$. $a_1$ and $c_1$ are the fermion annihilation operators for $\nu_1$ and $\overline \nu_1$ respectively, while $a_2$ and $c_2$ are the annihilation operators for $\nu_2$ and $\overline \nu_2$. $\nu_1$ is a right-handed neutrino and $\nu_2$ is a left-handed neutrino. The $u$'s are spinors, with the superscripts and subscripts referring to the energy and helicity states respectively. Spinor solutions for the Dirac equation are,

$u^{+1}_{+1}(\mathbf{p}) = \sqrt{ {E + p_3} \over 2 E} \left( \begin{array}{c} 1 \\ {{p_1 + i p_2} \over {E + p_3}} \\ 0 \\ 0 \end{array} \right),$

$u^{-1}_{-1}(\mathbf{p}) = \sqrt{ {E + p_3} \over 2 E} \left( \begin{array}{c} {{-p_1 + i p_2} \over {E + p_3}} \\ 1 \\ 0 \\ 0 \end{array} \right),$

$u^{-1}_{+1}(\mathbf{p}) = \sqrt{ {E + p_3} \over 2 E} \left( \begin{array}{c} 0 \\ 0 \\ 1 \\ {{p_1 + i p_2} \over {E + p_3}} \end{array} \right),$

$u^{+1}_{-1}(\mathbf{p}) = \sqrt{ {E + p_3} \over 2 E} \left( \begin{array}{c} 0 \\ 0 \\ {{-p_1 + i p_2} \over {E + p_3}} \\ 1 \end{array} \right).$

The neutrino spinors for negative momenta are related to those of positive momenta by,

$u^{+1}_{+1}(\mathbf{-p}) = u^{-1}_{-1}(\mathbf{p}),$ $u^{-1}_{-1}(\mathbf{-p}) = u^{+1}_{+1}(\mathbf{p}),$ $u^{+1}_{-1}(\mathbf{-p}) = u^{-1}_{+1}(\mathbf{p}),$ $u^{-1}_{+1}(\mathbf{-p}) = u^{+1}_{-1}(\mathbf{p}).$
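These conventions are straightforward to sanity-check numerically. The following sketch (ours, using numpy; not from any reference) verifies the Weyl-basis anticommutation relation and the helicity and normalization of $u^{+1}_{+1}(\mathbf{p})$ for an arbitrary momentum:

```python
import numpy as np

# Weyl-basis gamma matrices as written above.
g0 = np.array([[0,0,1,0],[0,0,0,1],[1,0,0,0],[0,1,0,0]], dtype=complex)
g1 = np.array([[0,0,0,1],[0,0,1,0],[0,-1,0,0],[-1,0,0,0]], dtype=complex)
g2 = np.array([[0,0,0,-1j],[0,0,1j,0],[0,1j,0,0],[-1j,0,0,0]], dtype=complex)
g3 = np.array([[0,0,1,0],[0,0,0,-1],[-1,0,0,0],[0,1,0,0]], dtype=complex)
gam = [g0, g1, g2, g3]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# Check {gamma^mu, gamma^nu} = 2 eta^{mu nu} I.
for mu in range(4):
    for nu in range(4):
        acomm = gam[mu] @ gam[nu] + gam[nu] @ gam[mu]
        assert np.allclose(acomm, 2 * eta[mu, nu] * np.eye(4))

# Check helicity +1 and unit norm of u^{+1}_{+1}(p) for an arbitrary momentum.
p = np.array([1.0, 2.0, 2.0]); E = np.linalg.norm(p)
N = np.sqrt((E + p[2]) / (2 * E))
u = N * np.array([1, (p[0] + 1j*p[1]) / (E + p[2]), 0, 0], dtype=complex)
sig = [np.array([[0, 1], [1, 0]]),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]])]
helicity = np.kron(np.eye(2), sum(p[k] * sig[k] for k in range(3))) / E
assert np.allclose(helicity @ u, u)      # helicity +1, as the label says
assert np.isclose(u.conj() @ u, 1.0)     # unit normalization
print("gamma algebra, helicity and normalization all check out")
```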
### The composite photon field

De Broglie[1] and Kronig[13] suggested the use of a local interaction to bind the neutrino–antineutrino pair. (Rosen and Singer[15] have used a delta potential interaction in forming a composite photon.) Fermi and Yang[16] used a local interaction to bind a fermion–antifermion pair in attempting to form a pion. A four-vector field can be created from a fermion–antifermion pair,[17] $\Psi^\dagger \gamma_0 \gamma_{\mu} \Psi.$ Forming the photon field can be done simply by,

$A_\mu(x) = \sum_\mathbf{p} {-1 \over 2 \sqrt{V p_0}}\left\{ \left[Q_R(\mathbf{p}) u^{-1}_{-1}(\mathbf{p})^\dagger \gamma_0 \gamma_{\mu} u^{+1}_{+1}(\mathbf{p}) + Q_L(\mathbf{p}) u^{+1}_{+1}(\mathbf{p})^\dagger \gamma_0 \gamma_{\mu} u^{-1}_{-1}(\mathbf{p}) \right]e^{i p x} \right.$ $\left. + \left[Q_R^\dagger(\mathbf{p}) u^{+1}_{+1}(\mathbf{p})^\dagger \gamma_0 \gamma_{\mu} u^{-1}_{-1}(\mathbf{p}) + Q_L^\dagger(\mathbf{p}) u^{-1}_{-1}(\mathbf{p})^\dagger \gamma_0 \gamma_{\mu} u^{+1}_{+1}(\mathbf{p}) \right]e^{-i p x} \right\}, \quad\quad (1)$

where $p x = \mathbf{p} \cdot \mathbf{x} - p_0t = \mathbf{p} \cdot \mathbf{x} - Et$. The annihilation operators for right-handed and left-handed photons formed of fermion–antifermion pairs are defined as,[18][19][20][21]

$Q_R(\mathbf{p}) = \sum_\mathbf{k} F^\dagger(\mathbf{k}) \left [ c_1(\mathbf{p}/2-\mathbf{k})a_1(\mathbf{p}/2+\mathbf{k}) + c_2(\mathbf{p}/2+\mathbf{k})a_2(\mathbf{p}/2-\mathbf{k}) \right ],$ $Q_L(\mathbf{p}) = \sum_\mathbf{k} F^\dagger(\mathbf{k}) \left [ c_2(\mathbf{p}/2-\mathbf{k})a_2(\mathbf{p}/2+\mathbf{k}) + c_1(\mathbf{p}/2+\mathbf{k})a_1(\mathbf{p}/2-\mathbf{k}) \right ].$

$F(\mathbf{k})$ is a spectral function, normalized by $\sum_\mathbf{k} \left| F(\mathbf{k}) \right|^2 = 1.$

## Photon polarization vectors

The polarization vectors corresponding to the combinations used in Eq. (1) are,

$\epsilon_\mu^1( p ) = {-1 \over \sqrt{2}} [u^{-1}_{-1}(\mathbf{p})]^\dagger \gamma_0 \gamma_{\mu} u^{+1}_{+1}(\mathbf{p}),$ $\epsilon_\mu^2( p ) = {-1 \over \sqrt{2}} [u^{+1}_{+1}(\mathbf{p})]^\dagger \gamma_0 \gamma_{\mu} u^{-1}_{-1}(\mathbf{p}).$

Carrying out the matrix multiplications results in,

$\epsilon_\mu^1(p) \!= \!{1 \over \sqrt{2}} \left( {{-i p_1 p_2 \!+\!E^2 \!+\!p_3 E \!-\!p_1^2} \over {E(E + p_3)}}, {{- p_1 p_2 \! + \!iE^2 \! +\!ip_3 E \! - \!ip_2^2 } \over {E(E + p_3)}}, {{\!-p_1 \!- \!i p_2} \over E}, 0 \right),$ $\epsilon_\mu^2(p) \!= \!{1 \over \sqrt{2}} \left( {{i p_1 p_2 \!+\!E^2 \!+\!p_3 E \!-\!p_1^2} \over {E(E + p_3)}}, {{-p_1 p_2 \! - \!iE^2 \! -\!ip_3 E \! + \!ip_2^2 } \over {E(E + p_3)}}, {{\!-p_1 \!+ \!i p_2} \over E}, 0 \right), \quad (2)$

where $\epsilon_0^1(p)$ and $\epsilon_0^2(p)$ have been placed on the right. For massless fermions the polarization vectors depend only upon the direction of $\mathbf{p}$. Let $\mathbf{n} = \mathbf{p}/ |\mathbf{p}|$.

$\epsilon_\mu^1(n) \!= \!{1 \over \sqrt{2}} \left( {{-i n_1 n_2 \!+\!1 \!+\!n_3 \!-\!n_1^2} \over {1 + n_3}}, {{- n_1 n_2 \!+ \!in_1^2 \!+ \!in_3^2 \!+ \!in_3} \over {1 + n_3}}, \!-n_1 \!- \!i n_2, 0 \right),$ $\epsilon_\mu^2(n) \!= \!{1 \over \sqrt{2}} \left( {{i n_1 n_2 \!+\!1 \!+\!n_3 \!-\!n_1^2} \over {1 + n_3}}, {{- n_1 n_2 \!- \!in_1^2 \!- \!in_3^2 \!- \!in_3} \over {1 + n_3}}, \!-n_1 \!+ \!i n_2, 0 \right).$

These polarization vectors satisfy the normalization relation,

$\epsilon_\mu^j(p) \cdot \epsilon_\mu^{j*}(p) = 1,$ $\epsilon_\mu^j(p) \cdot \epsilon_\mu^{k*}(p) = 0 \;\; \text{for} \;\; k \ne j.$

The Lorentz-invariant dot products of the internal four-momentum $p_\mu$ with the polarization vectors are,

$p_\mu \epsilon_\mu^1(p) = 0,$ $p_\mu \epsilon_\mu^2(p) = 0. \quad\quad\quad\quad (3)$

In three dimensions,

$\mathbf{p} \cdot \mathbf{\epsilon^1}(\mathbf{p}) = \mathbf{p} \cdot \mathbf{\epsilon^2}(\mathbf{p}) = 0,$ $\mathbf{\epsilon^1}(\mathbf{p}) \times \mathbf{\epsilon^2}(\mathbf{p})= -i\mathbf{p} / p_0,$ $\mathbf{p} \times \mathbf{\epsilon^1}(\mathbf{p})=-i p_0 \mathbf{\epsilon^1}(\mathbf{p}),$ $\mathbf{p} \times \mathbf{\epsilon^2}(\mathbf{p})= i p_0 \mathbf{\epsilon^2}(\mathbf{p}). \quad\quad\quad\quad (4)$

## Composite photon satisfies Maxwell's equations

In terms of the polarization vectors, $A_\mu(x)$ becomes,

$A_\mu(x) = \sum_\mathbf{p} {1 \over \sqrt{2 V p_0}}\left\{ \left[Q_R(\mathbf{p}) \epsilon_\mu^1(\mathbf{p}) + Q_L(\mathbf{p}) \epsilon_\mu^2(\mathbf{p}) \right]e^{i p x} \right.$ $\left. + \left[Q_R^\dagger(\mathbf{p}) \epsilon_\mu^{1*}(\mathbf{p}) + Q_L^\dagger(\mathbf{p}) \epsilon_\mu^{2*}(\mathbf{p}) \right]e^{-i p x} \right\}. \quad\quad\quad (5)$

The electric field $\mathbf{E}$ and magnetic field $\mathbf{H}$ are given by,

$\mathbf{E}(x) = - { \partial \mathbf{A}(x) \over \partial t },$ $\mathbf{H}(x) = \nabla \times \mathbf{A}(x). \quad\quad\quad\quad (6)$

Applying Eq. (6) to Eq. (5) results in,

$E_\mu(x) = i \sum_\mathbf{p} {\sqrt{p_0} \over \sqrt{2 V }}\left\{ \left[Q_R(\mathbf{p}) \epsilon_\mu^1(\mathbf{p}) + Q_L(\mathbf{p}) \epsilon_\mu^2(\mathbf{p}) \right]e^{i p x} \right.$ $\left. - \left[Q_R^\dagger(\mathbf{p}) \epsilon_\mu^{1*}(\mathbf{p}) + Q_L^\dagger(\mathbf{p}) \epsilon_\mu^{2*}(\mathbf{p}) \right]e^{-i p x} \right\},$

$H_\mu(x) = \sum_\mathbf{p} {\sqrt{p_0} \over \sqrt{2 V }}\left\{ \left[Q_R(\mathbf{p}) \epsilon_\mu^1(\mathbf{p}) - Q_L(\mathbf{p}) \epsilon_\mu^2(\mathbf{p}) \right]e^{i p x} \right.$ $\left. + \left[Q_R^\dagger(\mathbf{p}) \epsilon_\mu^{1*}(\mathbf{p}) - Q_L^\dagger(\mathbf{p}) \epsilon_\mu^{2*}(\mathbf{p}) \right]e^{-i p x} \right\}.$

Maxwell's equations for free space are obtained as follows:

$\partial E_1(x) / \partial x_1 = i \sum_\mathbf{p} {\sqrt{p_0} \over \sqrt{2 V }}\left\{ \left[Q_R(\mathbf{p}) p_1 \epsilon_1^1(\mathbf{p}) + Q_L(\mathbf{p}) p_1 \epsilon_1^2(\mathbf{p}) \right]e^{i p x} \right.$ $\left. + \left[Q_R^\dagger(\mathbf{p}) p_1 \epsilon_1^{1*}(\mathbf{p}) + Q_L^\dagger(\mathbf{p}) p_1 \epsilon_1^{2*}(\mathbf{p}) \right]e^{-i p x} \right\}.$

Thus, $\partial E_1(x) / \partial x_1 + \partial E_2(x) / \partial x_2 + \partial E_3(x) / \partial x_3$ contains terms of the form $p_1 \epsilon_1^1(\mathbf{p}) + p_2 \epsilon_2^1(\mathbf{p}) + p_3 \epsilon_3^1(\mathbf{p})$, which equate to zero by the first of Eq. (4). This gives,

$\nabla \cdot \mathbf{E}(x) = 0,$ $\nabla \cdot \mathbf{H}(x) = 0,$

as $\mathbf{H}$ contains similar terms. The expression $\nabla \times \mathbf{E}(x)$ contains terms of the form $\mathbf{p} \times \mathbf{\epsilon^1}(\mathbf{p})$ while $\partial \mathbf{H}(x) / \partial t$ contains terms of the form $i p_0 \mathbf{\epsilon^1}(\mathbf{p})$. Thus, the last two equations of (4) can be used to show that,

$\nabla \times \mathbf{E}(x) = - \partial \mathbf{H}(x) / \partial t,$ $\nabla \times \mathbf{H}(x) = \partial \mathbf{E}(x) / \partial t.$

Although the neutrino field violates parity and charge conjugation,[22] $\mathbf{E}$ and $\mathbf{H}$ transform in the usual way,[14][21]

$P \mathbf{E}(\mathbf{x},t) P^{-1} = -\mathbf{E}(\mathbf{-x},t),$ $P \mathbf{H}(\mathbf{x},t) P^{-1} = \mathbf{H}(\mathbf{-x},t),$ $C \mathbf{E}(\mathbf{x},t) C^{-1} = -\mathbf{E}(\mathbf{x},t),$ $C \mathbf{H}(\mathbf{x},t) C^{-1} = -\mathbf{H}(\mathbf{x},t).$

$A_\mu$ satisfies the Lorentz condition, $\partial A_\mu / \partial x_\mu = 0,$ which follows from Eq. (3).

Although many choices for the gamma matrices can satisfy the Dirac equation, it is essential that one use the Weyl representation in order to get the correct photon polarization vectors and the $\mathbf{E}$ and $\mathbf{H}$ fields that satisfy Maxwell's equations. Kronig[13] first realized this. In the Weyl representation, the four-component spinors describe two sets of two-component neutrinos. The connection between the photon antisymmetric tensor and the two-component Weyl equation was also noted by Sen.[23] One can also produce the above results using a two-component neutrino theory.[8]
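The transversality and cross-product relations, Eqs. (3) and (4), on which the Maxwell derivation above rests, can be verified numerically from the explicit polarization vectors of Eq. (2). A minimal sketch of ours (the helper `eps` returns only the spatial components, since $\epsilon_0 = 0$):

```python
import numpy as np

def eps(n, hand):
    """Polarization vectors of Eq. (2) for a unit vector n; hand = 1 or 2."""
    n1, n2, n3 = n
    if hand == 1:
        e = np.array([(-1j*n1*n2 + 1 + n3 - n1**2) / (1 + n3),
                      (-n1*n2 + 1j*(n1**2 + n3**2 + n3)) / (1 + n3),
                      -n1 - 1j*n2]) / np.sqrt(2)
    else:
        e = np.array([(1j*n1*n2 + 1 + n3 - n1**2) / (1 + n3),
                      (-n1*n2 - 1j*(n1**2 + n3**2 + n3)) / (1 + n3),
                      -n1 + 1j*n2]) / np.sqrt(2)
    return e

n = np.array([1.0, 2.0, 2.0]); n /= np.linalg.norm(n)   # arbitrary direction
e1, e2 = eps(n, 1), eps(n, 2)
assert np.isclose(e1.conj() @ e1, 1) and np.isclose(e2.conj() @ e2, 1)
assert np.isclose(e1.conj() @ e2, 0)                    # orthogonality
assert np.isclose(n @ e1, 0) and np.isclose(n @ e2, 0)  # Eq. (3)
assert np.allclose(np.cross(e1, e2), -1j * n)           # Eq. (4), p0 = |p|
assert np.allclose(np.cross(n, e1), -1j * e1)
assert np.allclose(np.cross(n, e2),  1j * e2)
print("Eqs. (3) and (4) verified numerically")
```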
To compute the commutation relations for the photon field, one needs the equation,

$\sum_{j=1}^2 \epsilon_{\mu}^j(\mathbf{p}) \epsilon_{\nu}^{j*}(\mathbf{p}) = \sum_{j=1}^2 \epsilon_{\mu}^{j*}(\mathbf{p}) \epsilon_{\nu}^j(\mathbf{p}) = \delta_{\mu \nu} - {p_{\mu} p_{\nu} \over E^2}.$

To obtain this equation, Kronig[13] wrote a relation between the neutrino spinors that was not rotationally invariant, as pointed out by Pryce.[4] However, as Perkins[14] showed, this equation follows directly from summing over the polarization vectors, Eq. (2), that were obtained by explicitly solving for the neutrino spinors. If the momentum is along the third axis, $\epsilon_\mu^1(n)$ and $\epsilon_\mu^2(n)$ reduce to the usual polarization vectors for right and left circularly polarized photons respectively,

$\epsilon_\mu^1(n) = {1 \over \sqrt{2}}(1,i,0,0),$ $\epsilon_\mu^2(n) = {1 \over \sqrt{2}}(1,-i,0,0).$

## Problems with the neutrino theory of light

Although composite photons satisfy many properties of real photons, there are major problems with this theory.

### Bose–Einstein commutation relations

It is known that a photon is a boson.[24] Does the composite photon satisfy Bose–Einstein commutation relations? Fermions are defined as the particles whose creation and annihilation operators adhere to the anticommutation relations

$\{a(\mathbf{k}),a(\mathbf{l})\} = 0,$ $\{a^\dagger(\mathbf{k}),a^\dagger(\mathbf{l})\} = 0,$ $\{a(\mathbf{k}),a^\dagger(\mathbf{l})\} = \delta(\mathbf{k}-\mathbf{l}),$

while bosons are defined as the particles that adhere to the commutation relations

$\left[b(\mathbf{k}),b(\mathbf{l})\right] = 0,$ $\left[b^\dagger(\mathbf{k}),b^\dagger(\mathbf{l})\right] = 0,$ $\left[b(\mathbf{k}),b^\dagger(\mathbf{l})\right] = \delta(\mathbf{k}-\mathbf{l}). \quad\quad (7)$

The creation and annihilation operators of composite particles formed of fermion pairs adhere to commutation relations of the form[18][19][20][21]

$\left[Q(\mathbf{k}),Q(\mathbf{l})\right] = 0,$ $\left[Q^\dagger(\mathbf{k}),Q^\dagger(\mathbf{l})\right] = 0,$ $\left[Q(\mathbf{k}),Q^\dagger(\mathbf{l})\right] = \delta(\mathbf{k}-\mathbf{l})- \Delta(\mathbf{k},\mathbf{l}), \quad\quad (8)$

with

$\Delta(\mathbf{p}^{\prime},\mathbf{p}) = \sum_\mathbf{k} F^\dagger(\mathbf{k}) \left[ F(\mathbf{p}^{\prime}/2-\mathbf{p}/2+\mathbf{k}) a^\dagger(\mathbf{p}-\mathbf{p}^{\prime}/2-\mathbf{k}) a(\mathbf{p}^{\prime}/2-\mathbf{k}) \right.$ $\left. + F(\mathbf{p}/2-\mathbf{p}^{\prime}/2+\mathbf{k}) c^\dagger(\mathbf{p}-\mathbf{p}^{\prime}/2+\mathbf{k}) c(\mathbf{p}^{\prime}/2+\mathbf{k}) \right]. \quad\quad (9)$

For Cooper electron pairs,[20] "a" and "c" represent different spin directions. For nucleon pairs (the deuteron),[18][19] "a" and "c" represent proton and neutron. For neutrino–antineutrino pairs,[21] "a" and "c" represent neutrino and antineutrino. The size of the deviations from pure Bose behavior, $\Delta(\mathbf{p}^{\prime},\mathbf{p}),$ depends on the degree of overlap of the fermion wave functions and the constraints of the Pauli exclusion principle. If the state has the form

$|\Phi \rangle = a^\dagger(\mathbf{k_1}) a^\dagger(\mathbf{k_2})...a^\dagger(\mathbf{k_n}) c^\dagger(\mathbf{q_1})c^\dagger(\mathbf{q_2})...c^\dagger(\mathbf{q_m})|0 \rangle$

then the expectation value of Eq. (9) vanishes for $\mathbf{p}^{\prime} \ne \mathbf{p}$, and the expression for $\Delta(\mathbf{p}^{\prime},\mathbf{p})$ can be approximated by
(9) vanishes for $\mathbf{p}^{\prime} \ne \mathbf{p}$, and the expression for $\Delta(\mathbf{p}^{\prime},\mathbf{p})$ can be approximated by $\Delta(\mathbf{p}^{\prime},\mathbf{p}) = \delta(\mathbf{p}^{\prime}-\mathbf{p}) \sum_\mathbf{k} \left| F(\mathbf{k}) \right|^2 \left[ a^\dagger(\mathbf{p}/2-\mathbf{k}) a(\mathbf{p}/2-\mathbf{k}) \right.$ $\left. + c^\dagger(\mathbf{p}/2+\mathbf{k}) c(\mathbf{p}/2+\mathbf{k}) \right].$ Using the fermion number operators $n_a(\mathbf{k})$ and $n_c(\mathbf{k})$, this can be written, $\Delta(\mathbf{p}^{\prime},\mathbf{p}) = \delta(\mathbf{p}^{\prime}-\mathbf{p}) \sum_\mathbf{k} \left| F(\mathbf{k}) \right|^2 \left[ n_a( \mathbf{p}/2-\mathbf{k}) + n_c(\mathbf{p}/2+\mathbf{k}) \right]$ $= \delta(\mathbf{p}^{\prime}-\mathbf{p}) \sum_\mathbf{k} \left[ \left| F(\mathbf{p}/2-\mathbf{k}) \right|^2 n_a(\mathbf{k}) + \left| F(\mathbf{k}- \mathbf{p}/2) \right|^2 n_c(\mathbf{k}) \right]$ $= \delta(\mathbf{p}^{\prime}-\mathbf{p}) \overline {\Delta} (\mathbf{p},\mathbf{p}),$ showing that $\overline{\Delta}(\mathbf{p},\mathbf{p})$ is the average number of fermions in a particular state $\mathbf{k}$, averaged over all states with weighting factors $F( \mathbf{p}/2-\mathbf{k})$ and $F(\mathbf{k}-\mathbf{p}/2)$.

#### Jordan's attempt to solve the problem

De Broglie did not address the problem of statistics for the composite photon. However, "Jordan considered the essential part of the problem was to construct Bose–Einstein amplitudes from Fermi–Dirac amplitudes", as Pryce[4] noted. Jordan[3] "suggested that it is not the interaction between neutrinos and antineutrinos that binds them together into photons, but rather the manner in which they interact with charged particles that leads to the simplified description of light in terms of photons." Jordan's hypothesis eliminated the need to theorize an unknown interaction, but his hypothesis that the neutrino and antineutrino are emitted in exactly the same direction seems rather artificial, as noted by Fock.[25] His strong desire to obtain exact Bose–Einstein commutation relations for the composite photon led him to work with a scalar or longitudinally polarized photon. Greenberg and Wightman[26] have pointed out why the one-dimensional case works, but the three-dimensional case does not. In 1928, Jordan had noticed that commutation relations for pairs of fermions were similar to those for bosons;[27] compare Eq. (7) with Eq. (8). From 1935 to 1937, Jordan, Kronig, and others[28] tried to obtain exact Bose–Einstein commutation relations for the composite photon. Terms were added to the commutation relations to cancel out the delta term in Eq. (8). These terms corresponded to "simulated photons". For example, the absorption of a photon of momentum $\mathbf{p}$ could be simulated by a Raman effect in which a neutrino with momentum $\mathbf{p+k}$ is absorbed while another with opposite spin and momentum $\mathbf{k}$ is emitted. (It is now known that single neutrinos or antineutrinos interact so weakly that they cannot simulate photons.)

#### Pryce's theorem

In 1938, Pryce[4] showed that one cannot obtain both Bose–Einstein statistics and transversely polarized photons from neutrino–antineutrino pairs. Construction of transversely polarized photons is not the problem.[29] As Berezinski[5] noted, "The only actual difficulty is that the construction of a transverse four-vector is incompatible with the requirement of statistics." In some ways Berezinski gives a clearer picture of the problem.
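Before going through the proof, it may help to see concretely where the $\Delta$ term of Eq. (8) comes from in the simplest possible setting: a single pair of fermion modes with composite operator $Q = c\,a$ (a toy example of ours, not drawn from the references). Using $a a^\dagger = 1 - a^\dagger a$, $c c^\dagger = 1 - c^\dagger c$, and the fact that fermion bilinears of one species commute with operators of the other, one finds $Q Q^\dagger = c\,a\,a^\dagger c^\dagger = (1 - c^\dagger c)(1 - a^\dagger a)$ and $Q^\dagger Q = a^\dagger c^\dagger c\,a = n_a n_c,$ so that $\left[Q,Q^\dagger\right] = 1 - (n_a + n_c).$ The pair fails to be an exact boson precisely to the extent that its constituent fermion modes are occupied — the single-mode analogue of Eqs. (8)–(9) with $\Delta = n_a + n_c$.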
A simple version of the proof is as follows: The expectation values of the commutation relations for composite right- and left-handed photons are: $\left[ Q_R(\mathbf{p}^{\prime}), Q_R(\mathbf{p}) \right] = 0, \; \left[ Q_L(\mathbf{p}^{\prime}), Q_L(\mathbf{p}) \right] = 0,$ $\left[ Q_R(\mathbf{p}^{\prime}), Q_R^\dagger(\mathbf{p}) \right] = \delta( \mathbf{p}^{\prime} - \mathbf{p}) (1 -{\overline \Delta_{12}}(\mathbf{p},\mathbf{p})),$ $\left[ Q_L(\mathbf{p}^{\prime}), Q_L^\dagger(\mathbf{p}) \right] = \delta( \mathbf{p}^{\prime} - \mathbf{p}) (1 -{\overline \Delta_{21}}(\mathbf{p},\mathbf{p})),$ $\left[ Q_R(\mathbf{p}^{\prime}), Q_L(\mathbf{p}) \right] = 0, \; \left[ Q_R(\mathbf{p}^{\prime}), Q_L^\dagger(\mathbf{p}) \right] = 0, \quad\quad\quad\quad (10)$ where ${\overline \Delta_{12}}(\mathbf{p},\mathbf{p}) = \sum_\mathbf{k} \left[ \left| F(\mathbf{k}-\mathbf{p}/2) \right|^2 (n_{a1}(\mathbf{k}) + n_{c2}(\mathbf{k}) ) \right.$ $\left. + \left| F(\mathbf{p}/2-\mathbf{k}) \right|^2 ( n_{c1}(\mathbf{k}) + n_{a2}(\mathbf{k}) ) \right]. \quad\quad\quad\quad (11)$ The deviation from Bose–Einstein statistics is caused by $\overline \Delta_{12}(\mathbf{p},\mathbf{p})$ and $\overline \Delta_{21}(\mathbf{p},\mathbf{p})$, which are functions of the neutrino number operators. Linear polarization photon operators are defined by $\xi( \mathbf{p}) = {1 \over \sqrt{2}} \left[ Q_L(\mathbf{p}) + Q_R(\mathbf{p}) \right],$ $\eta( \mathbf{p}) = {i \over \sqrt{2}} \left[ Q_L(\mathbf{p}) - Q_R(\mathbf{p}) \right]. \quad\quad\quad\quad (12)$ A particularly interesting commutation relation is, $[\xi( \mathbf{p}^{\prime}),\eta^\dagger( \mathbf{p})] = {i \over 2} \delta( \mathbf{p}^{\prime} - \mathbf{p}) [\overline \Delta_{21}(\mathbf{p},\mathbf{p}) -\overline \Delta_{12}(\mathbf{p},\mathbf{p})], \quad\quad (13)$ which follows from (10) and (12). For the composite photon to obey Bose–Einstein commutation relations we must have, at the very least, $[\xi( \mathbf{p}^{\prime}),\eta^\dagger( \mathbf{p})] = 0, \quad\quad\quad\quad (14)$ as Pryce noted.[4] From Eq. (11) and Eq. (13), the requirement is that $\sum_\mathbf{k} \left[ \left| F(\mathbf{k}-\mathbf{p}/2) \right|^2 (n_{a1}(\mathbf{k}) + n_{c2}(\mathbf{k}) - n_{a2}(\mathbf{k}) - n_{c1}(\mathbf{k}) ) \right.$ $\left. + \left| F(\mathbf{p}/2-\mathbf{k}) \right|^2 ( n_{c1}(\mathbf{k}) + n_{a2}(\mathbf{k}) - n_{c2}(\mathbf{k}) - n_{a1}(\mathbf{k}) ) \right]$ gives zero when applied to any state vector. Thus, all the coefficients of $n_{a1}(\mathbf{k})$ and $n_{c1}(\mathbf{k})$, etc., must vanish separately. This means $F(\mathbf{k}) = 0$, and the composite photon does not exist,[4][5] completing the proof.

#### Perkins' attempt to solve the problem

Perkins[14][21] reasoned that the photon does not have to obey Bose–Einstein commutation relations, because the non-Bose terms are small and may not cause any detectable effects. Perkins[11] noted, "As presented in many quantum mechanics texts it may appear that Bose statistics follow from basic principles, but it is really from the classical canonical formalism. This is not a reliable procedure as evidenced by the fact that it gives the completely wrong result for spin-1/2 particles." Furthermore, "most integral spin particles (light mesons, strange mesons, etc.) are composite particles formed of quarks. Because of their underlying fermion structure, these integral spin particles are not fundamental bosons, but composite quasibosons. However, in the asymptotic limit, which generally applies, they are essentially bosons.
For these particles, Bose commutation relations are just an approximation, albeit a very good one. There are some differences; bringing two of these composite particles close together will force their identical fermions to jump to excited states because of the Pauli exclusion principle." Berezinski, in reaffirming Pryce's theorem, argues that commutation relation (14) is necessary for the photon to be truly neutral. However, Perkins[21] has shown that a neutral photon in the usual sense can be obtained without Bose–Einstein commutation relations. The number operator for a composite photon is defined as $N(\mathbf{p}) = Q^\dagger(\mathbf{p}) Q(\mathbf{p}).$ For a rough estimate, Lipkin[18] suggested assuming that $F(\mathbf{k})= 1 / \sqrt{\Omega}$, where $\Omega$ is a constant equal to the number of states used to construct the wave packet. Perkins[11] showed that the effect of the composite photon's number operator acting on a state of $m$ composite photons is, $N(\mathbf{p}) (Q^\dagger(\mathbf{p}))^m|0\rangle \; = \left( m - {m(m-1) \over \Omega } \right) (Q^\dagger(\mathbf{p}))^m|0\rangle,$ using $N(\mathbf{p})|0\rangle = 0$. This result differs from the usual one because of the second term, which is small for large $\Omega$. Normalizing in the usual manner,[30] $Q^\dagger(\mathbf{p})|n_\mathbf{p} \rangle \; = \sqrt{ (n_\mathbf{p} +1) \left( 1- {n_\mathbf{p} \over \Omega} \right) } |n_\mathbf{p} +1\rangle,$ $Q(\mathbf{p})|n_\mathbf{p} \rangle \; = \sqrt{ n_\mathbf{p} \left( 1- {(n_\mathbf{p}-1) \over \Omega} \right) } |n_\mathbf{p} -1\rangle, \quad\quad\quad\quad (15)$ where $|n_\mathbf{p}\rangle$ is the state of $n_\mathbf{p}$ composite photons having momentum $\mathbf{p}$, created by applying $Q^\dagger(\mathbf{p})$ on the vacuum $n_\mathbf{p}$ times. Note that, $Q^\dagger(\mathbf{p})|0 \rangle = | 1_\mathbf{p}\rangle,$ $Q(\mathbf{p})|1_\mathbf{p}\rangle = |0\rangle,$ which is the same result as obtained with boson operators. The formulas in Eq. (15) are similar to the usual ones, with correction factors that approach zero for large $\Omega$. The main evidence indicating that photons are bosons comes from the blackbody radiation experiments, which are in agreement with Planck's distribution. Perkins[11] calculated the photon distribution for blackbody radiation using the second quantization method,[30] but with a composite photon. The atoms in the walls of the cavity are taken to be a two-level system with photons emitted from the upper level $\beta$ and absorbed at the lower level $\alpha$. The transition probability for emission of a photon is enhanced when $n_\mathbf{p}$ photons are present, $w_{\alpha \beta}( n_\mathbf{p} + 1 \leftarrow n_\mathbf{p} ) = (n_\mathbf{p} + 1) \left( 1 - {n_\mathbf{p} \over \Omega} \right) w_{\alpha \beta}( 1_\mathbf{p} \leftarrow 0), \quad (16)$ where the first of (15) has been used. The absorption is enhanced less, since the second of (15) is used, $w_{ \beta \alpha}( n_\mathbf{p} - 1 \leftarrow n_\mathbf{p} ) = n_\mathbf{p} \left( 1 - {n_\mathbf{p}-1 \over \Omega} \right) w_{ \beta \alpha}( 0 \leftarrow 1_\mathbf{p}). \quad (17)$ Using the equality of the transition rates, $w_{ \beta \alpha}( 0 \leftarrow 1_\mathbf{p}) = w_{ \alpha \beta}( 1_\mathbf{p} \leftarrow 0 ),$ Eqs.
(16) and (17) are combined to give, ${w_{\alpha \beta}( n_\mathbf{p}+1 \leftarrow n_\mathbf{p}) \over w_{ \beta \alpha}( n_\mathbf{p} - 1 \leftarrow n_\mathbf{p} )} = {(n_\mathbf{p}+1) \left( 1 - {n_\mathbf{p} \over \Omega} \right) \over n_\mathbf{p} \left( 1 - {(n_\mathbf{p}-1) \over \Omega} \right)}.$ The probability of finding the system with energy $E$ is proportional to $e^{-E/kT}$, according to Boltzmann's distribution law. Thus, the equilibrium between emission and absorption requires that, $w_{\alpha \beta}( n_\mathbf{p}+1 \leftarrow n_\mathbf{p} ) e^{-E_{\beta} /kT} = w_{ \beta \alpha}( n_\mathbf{p} - 1 \leftarrow n_\mathbf{p} ) e^{-E_{\alpha} /kT},$ with the photon energy $\omega_p = E_{\beta} - E_{\alpha}$. Combining the last two equations results in, $n_\mathbf{p} = {2 \over u+(u+2)/ \Omega + \sqrt{u^2(1+2/\Omega) + (u+2)^2 /\Omega^2}},$ with $u = e^{\omega_p/kT} - 1$. For $\Omega(\omega_p /kT) \gg 1$, this reduces to $n_\mathbf{p} = {1 \over e^{\omega_p /kT} \left( 1 + {1 \over \Omega} \right) - 1}.$ This equation differs from Planck's law because of the $1 / \Omega$ term. For the conditions used in the blackbody radiation experiments of Coblentz,[31] Perkins estimates that $1/\Omega < 10^{-9}$, and the maximum deviation from Planck's law is less than one part in $10^8$, which is too small to be detected.

### Only left-handed neutrinos exist

Experimental results show that only left-handed neutrinos and right-handed antineutrinos exist. Three sets of neutrinos have been observed,[32][33] one connected with electrons, one with muons, and one with tau leptons.[34] In the standard model the pion and muon decay modes are:

$\pi^+ \to \mu^+ + \nu_\mu,$

$\mu^+ \to e^+ + \nu_e + \bar{\nu}_\mu.$

To form a photon which satisfies parity and charge conjugation, two sets of two-component neutrinos (i.e., right-handed and left-handed neutrinos) are needed. Perkins (see Sec. VI of Ref. [14]) attempted to solve this problem by noting that the needed two sets of two-component neutrinos would exist if the positive muon is identified as the particle and the negative muon as the antiparticle. The reasoning is as follows: let $\nu_1$ be the right-handed neutrino and $\nu_2$ the left-handed neutrino, with their corresponding antineutrinos (of opposite helicity). The neutrinos involved in beta decay are $\nu_2$ and $\bar{\nu}_2$, while those for $\pi$–$\mu$ decay are $\nu_1$ and $\bar{\nu}_1$. With this scheme the pion and muon decay modes are:

$\pi^+ \to \mu^+ + \nu_1,$

$\mu^+ \to e^+ + \nu_2 + \bar{\nu}_1.$

### Absence of massless neutrinos

There is convincing evidence that neutrinos have mass. In experiments at Super-Kamiokande, researchers[12] have discovered neutrino oscillations, in which one flavor of neutrino changes into another. This means that neutrinos have non-zero mass. Since massless neutrinos are needed to form a massless photon, a composite photon is not possible.

## References

1. L. de Broglie (1932). Compt. Rend. 195: 536, 862.
2. L. de Broglie (1934). Une nouvelle conception de la lumière. Paris: Hermann et Cie.
3. P. Jordan (1935). "Zur Neutrinotheorie des Lichtes". Z. Phys. 93 (7–8): 464–472. Bibcode:1935ZPhy...93..464J. doi:10.1007/BF01330373.
4. M. H. L. Pryce (1938). "On the neutrino theory of light". Proceedings of the Royal Society A 165: 247–271.
5. V. S. Berezinskii (1966). "Pryce's theorem and the neutrino theory of photons". Zh. Eksperim. i Teor. Fiz. 51: 1374–1384. Translated in Soviet Physics JETP 24: 927 (1967).
6. V. V. Dvoeglazov (1999). "Speculations on the neutrino theory of light". Annales Fond. Broglie 24: 111–127.
arXiv:physics/9807013. Bibcode:1998physics...7013D.
7. V. V. Dvoeglazov (2001). "Again on the possible compositeness of the photon". Phys. Scripta 64 (2): 119–127. arXiv:hep-th/9908057. Bibcode:2001PhyS...64..119D. doi:10.1238/Physica.Regular.064a00119.
8. W. A. Perkins. "Interpreted History of Neutrino Theory of Light and Its Future". In A. E. Chubykalo, V. V. Dvoeglazov, D. J. Ernst, V. G. Kadyshevsky, and Y. S. Kim (eds.). Singapore: World Scientific. pp. 115–126.
9. D. K. Sen (2007). "Left- and right-handed neutrinos and baryon–lepton masses". Journal of Mathematical Physics 48 (2): 022304. Bibcode:2007JMP....48b2304S. doi:10.1063/1.2436985.
10. V. V. Varlamov (2001). "About Algebraic Foundation of Majorana–Oppenheimer Quantum Electrodynamics and de Broglie–Jordan Neutrino Theory of Light". arXiv:math-ph/0109024.
11. W. A. Perkins (2002). "Quasibosons". International Journal of Theoretical Physics 41 (5): 823–838. doi:10.1023/A:1015728722664.
12. Y. Fukuda et al. (Super-Kamiokande Collaboration) (1998). "Evidence for oscillation of atmospheric neutrinos". Physical Review Letters 81 (8): 1562–1567. arXiv:hep-ex/9807003. Bibcode:1998PhRvL..81.1562F. doi:10.1103/PhysRevLett.81.1562.
13. R. de L. Kronig (1936). "On a relativistically invariant formulation of the neutrino theory of light". Physica 3 (10): 1120–1132. Bibcode:1936Phy.....3.1120K. doi:10.1016/S0031-8914(36)80340-1.
14. W. A. Perkins (1965). "Neutrino theory of photons". Physical Review 137 (5B): B1291–B1301. Bibcode:1965PhRv..137.1291P. doi:10.1103/PhysRev.137.B1291.
15. N. Rosen and P. Singer (1959). "The photon as a composite particle". Bulletin of the Research Council of Israel 8F (5): 51–62.
16. E. Fermi and C. N. Yang (1949). "Are mesons elementary particles?". Physical Review 76 (12): 1739–1743. Bibcode:1949PhRv...76.1739F. doi:10.1103/PhysRev.76.1739.
17. J. D. Bjorken and S. D. Drell (1965). Relativistic Quantum Fields. New York: McGraw-Hill.
18. H. J. Lipkin (1973). Quantum Mechanics. Amsterdam: North-Holland.
19. H. L. Sahlin and J. L. Schwartz (1965). "The many body problem for composite particles". Physical Review 138: B267–B273. Bibcode:1965PhRv..138..267S. doi:10.1103/PhysRev.138.B267.
20. R. H. Landau (1996). Quantum Mechanics II. New York: Wiley.
21. W. A. Perkins (1972). "Statistics of a composite photon formed of two fermions". Physical Review D 5 (6): 1375–1384. Bibcode:1972PhRvD...5.1375P. doi:10.1103/PhysRevD.5.1375.
22. T. D. Lee and C. N. Yang (1957). "Parity nonconservation and two-component theory of the neutrino". Physical Review 105 (5): 1671–1675. Bibcode:1957PhRv..105.1671L. doi:10.1103/PhysRev.105.1671.
23. D. K. Sen (1964). "A theoretical basis for two neutrinos". Il Nuovo Cimento 31 (3): 660–669. doi:10.1007/BF02733763.
24. C. Amsler et al. (Particle Data Group) (2008). "The review of particle physics". Physics Letters B 667: 1–1340. Bibcode:2008PhLB..667....1P. doi:10.1016/j.physletb.2008.07.018.
25. V. Fock (1937). Phys. Z. Sowjetunion 11: 1.
26. O. W. Greenberg and A. S. Wightman (1955). "Re-examination of the neutrino theory of light". Physical Review 99: 675A.
27. P. Jordan (1928). "Die Lichtquantenhypothese: Entwicklung und gegenwärtiger Stand". Ergebnisse der exakten Naturwissenschaften 7: 158–208. Bibcode:1928ErNW....7..158J.
doi:10.1007/BFb0111850.
28. M. Born and N. S. Nagendra Nath (1936). Proc. Indian Acad. Sci. A 3: 318.
29. K. M. Case (1957). "Composite particles of zero mass". Physical Review 106 (6): 1316–1320. Bibcode:1957PhRv..106.1316C. doi:10.1103/PhysRev.106.1316.
30. D. S. Koltun and J. M. Eisenberg (1988). Quantum Mechanics of Many Degrees of Freedom. New York: Wiley.
31. W. W. Coblentz (1916). Natl. Bur. Std. (U.S.) Bull. 13: 459. doi:10.6028/bulletin.310.
32. G. Danby, J.-M. Gaillard, K. Goulianos, L. M. Lederman, N. Mistry, M. Schwartz, and J. Steinberger (1962). "Observation of high-energy neutrino interactions and the existence of two kinds of neutrinos". Physical Review Letters 9: 36–44. Bibcode:1962PhRvL...9...36D. doi:10.1103/PhysRevLett.9.36.
33. K. Kodama et al. (DONUT Collaboration) (2001). "Observation of tau neutrino interactions". Physics Letters B 504 (3): 218–224. arXiv:hep-ex/0012035. Bibcode:2001PhLB..504..218D. doi:10.1016/S0370-2693(01)00307-0.
34. M. L. Perl et al. (1975). "Evidence for anomalous lepton production in e+–e− annihilation". Physical Review Letters 35 (22): 1489–1492. Bibcode:1975PhRvL..35.1489P. doi:10.1103/PhysRevLett.35.1489.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 171, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9687621593475342, "perplexity": 1456.913012805662}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928015.28/warc/CC-MAIN-20150521113208-00144-ip-10-180-206-219.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/cosmic-rays-and-supernovae.20634/
# Cosmic Rays and Supernovae

1. Apr 14, 2004

### physics_illiterate

Why is it that we think ultra high energy cosmic rays are emitted from supernovae?

2. Apr 14, 2004

### Nereid Staff Emeritus

Welcome to Physics Forums physics_illiterate! Do you have a link to a document which discusses ultra-high energy (UHE)* cosmic rays (CR) being emitted from supernovae?

http://www.physics.adelaide.edu.au/astrophysics/cr_new.html [Broken] and material from the University of Utah's Hi-Res group both state that supernovae are unlikely to be sources of UHE CRs; e.g. (from the second source) "However, it is difficult to explain the existence of cosmic rays above 10^18 eV, because supernovae are simply not large enough to maintain acceleration to the UHE regime."

However, the huge energies generated in gamma ray bursts (GRB) suggested to many that they might be a source of UHE CR. Recently, the link between at least some "long duration" GRBs and supernovae was established. Lectures such as http://ws2004.ift.uni.wroc.pl/Lectures/Lipari/SECOND.PDF [Broken] give a flavour of some current work and thinking.

*the term "ultra-high energy" here means above ~10^18 eV.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8200260400772095, "perplexity": 4798.466183624932}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423903.35/warc/CC-MAIN-20170722062617-20170722082617-00185.warc.gz"}
https://www.groundai.com/project/track-estimation-with-binary-derivative-observations/
# Track estimation with binary derivative observations

Adrien Ickowicz

A. Ickowicz, CEREMADE, University of Paris-Dauphine, Tel.: +33-1-45465874, e-mail: [email protected]

Received: date / Accepted: date

###### Abstract

We focus in this paper on the estimation of a target trajectory defined either by a time-constant parameter in a simple stochastic process or by a random walk, using binary observations. The binary observations come from binary derivative sensors, that is, sensors reporting only whether the target is getting closer or moving away. Such binary observations have a temporal structure that will be used to ensure the quality of a maximum-likelihood estimation, through a single-index model or classification for the constant-velocity movement. In the second part of this paper we present a new algorithm for target tracking within a binary sensor network when the target trajectory is assumed to be modeled by a random walk. For a given target, this algorithm provides an estimation of its velocity and its position. The greatest improvements are made through a position correction and a velocity analysis.

## I Introduction

Sensor networks are systems made of many small and simple sensors deployed over an area in an attempt to sense events of interest within that particular area. In general, the sensors have limited capacities in terms of, say, range or precision. The ultimate information level for a sensor is a binary one, referring to its output. However, it is important to make a distinction according to the nature of this binary information. Actually, it can be related to detection information (non-detection or detection) or to relative motion information. For example, if the sensors are getting sound levels, instead of using the real sound level (which may cause confusion between loud near objects and quieter close objects), the sensor may simply report whether the Doppler frequency is suddenly changing, which can easily be translated into whether the target is getting closer or moving away. Moreover, low-power sensors with limited computation and communication capabilities can only perform binary detection. We could also cite video sensors, with the intuitive reasoning: the target is getting closer if its size is increasing. The need to use that kind of sensor network leads to the development of a model for target tracking in binary sensor networks. We consider a sensor network (e.g., of video sensors) with known sensor positions. Each sensor can only give us binary information [2], i.e., whether the target–sensor distance is decreasing ($+1$) or increasing ($-1$). This "choice" can result from severe communication requirements or from the difficulty of fusing inhomogeneous data. Even if many important works deal with proximity sensors [7], [6], we decide here to focus on the binary motion information [2]. Here, the aim is to estimate the parameters defining the target trajectory. Even if our methods can be rather easily extended to more complex models of target motion, we decide to focus here on a constant velocity movement. Actually, this framework is sufficiently general to present the main problems we have to face, as well as the foundations of the methods we have to develop for dealing with these binary data. See fig. 1 for an example. First, the observability requirements are considered. Then, we turn toward the development of specific estimation methods.
Especially, the new concept of the velocity plane is introduced as an exhaustive representation of the spatio-temporal sequence of binary data. It is then used both in a separation-oriented framework (SVM) and in a projection pursuit regression (PPR) one. The corresponding methods are carefully presented and analyzed. In the following part we release the assumption of (piecewise) constant velocity motion, and we try to follow both position and velocity in real time. In particular, it is shown that it is the trajectory "diversity" which renders this possible. Obviously, tracking a diffusive Markovian target widely differs from the (batch) estimation of deterministic parameters. However, both problems present strong similarities. Indeed, the geometrical properties remain the same at each instant. Once the target motion model has been introduced, the most important properties we use to perform the tracking are presented. Then, the method which allows us to perform adapted corrections for tracking the target is presented. It is the main contribution of this part of the paper. Simulation results illustrate the behavior of the estimators, as well as the performance of the tracking algorithm. We conclude with further works about tracking in binary sensor networks.

## II Binary Sensor Network Observability Properties

Let us denote $s_i$ a sensor whose position is represented by the vector $\mathbf{t}_i$. Similarly, the vector $\mathbf{x}_t$ represents the position of the target at the time-period $t$. Let us denote $d_i(t)$ the (time-varying) distance from sensor $s_i$ to the target at time $t$. Then, we have that: $d_i(t)\searrow \;\Longleftrightarrow\; \dot{d}_i(t)<0, \quad\text{or:}\quad \langle \mathbf{x}_t-\mathbf{t}_i,\mathbf{v}_t\rangle<0, \quad\quad\quad (1)$ where $\mathbf{v}_t$ is the instantaneous target velocity. We thus have the following lemma.

###### Lemma 1.

Let $s_i$ (resp. $s_j$) be a sensor for which the target distance is decreasing (resp. increasing) at the time-period $t$; then we have: $\langle \mathbf{t}_j,\mathbf{v}_t\rangle<\langle \mathbf{x}_t,\mathbf{v}_t\rangle<\langle \mathbf{t}_i,\mathbf{v}_t\rangle. \quad\quad\quad (2)$

If we restrict ourselves to binary motion information, we consider that the output of a sensor (at time $t$) is $+1$ or $-1$ according to whether the distance is decreasing or increasing, so that we have: $s_i(t)=+1\ \text{if}\ \dot{d}_i(t)<0, \qquad s_j(t)=-1\ \text{if}\ \dot{d}_j(t)>0. \quad\quad\quad (3)$ Let us denote $A$ the subset of sensors whose output is $+1$ and $B$ the subset of sensors whose output is $-1$, and $C(A)$ and $C(B)$ their convex hulls; then we have [2]:

###### Proposition 1.

$C(A)\cap C(B)=\emptyset$ and $\mathbf{x}_t\notin C(A)\cup C(B)$.

Proof: The proof is quite simple and is reproduced here only for the sake of completeness. First assume that $C(A)\cap C(B)\neq\emptyset$; this means that there exists an element of $C(B)$ lying in $C(A)$. Let $\mathbf{t}$ be this element (and its associated position); then we have: $\mathbf{t}=\sum_{j\in B}\beta_j\mathbf{t}_j$, with $\beta_j\ge 0$ and $\sum_{j\in B}\beta_j=1$, so that we have, on the one hand: $\langle \mathbf{t},\mathbf{v}_t\rangle=\sum_{j\in B}\beta_j\langle \mathbf{t}_j,\mathbf{v}_t\rangle<\langle \mathbf{x}_t,\mathbf{v}_t\rangle$ (see Eq. (2)), and, on the other one ($\mathbf{t}\in C(A)$): $\langle \mathbf{t},\mathbf{v}_t\rangle=\sum_{i\in A}\alpha_i\langle \mathbf{t}_i,\mathbf{v}_t\rangle\ge\Big(\sum_{i\in A}\alpha_i\Big)\min_i\langle \mathbf{t}_i,\mathbf{v}_t\rangle>\langle \mathbf{x}_t,\mathbf{v}_t\rangle. \quad\quad\quad (4)$ Thus a contradiction, which shows that $C(A)\cap C(B)=\emptyset$. For the second part, we simply have to assume that $\mathbf{x}_t\in C(A)$, which yields: $\langle \mathbf{x}_t,\mathbf{v}_t\rangle=\sum_{i\in A}\alpha_i\langle \mathbf{t}_i,\mathbf{v}_t\rangle\ge\min_{i\in A}\langle \mathbf{t}_i,\mathbf{v}_t\rangle, \quad\quad\quad (5)$ which clearly contradicts Eq. (2); idem if $\mathbf{x}_t\in C(B)$. So, $C(A)$ and $C(B)$ being two disjoint convex subsets, we know that there exists a hyperplane (here a line) separating them. Then, let $\mathbf{t}_k$ be a generic sensor position; in the basis $(\mathbf{v}_t,\mathbf{v}_t^\perp)$ we can write $\mathbf{t}_k=\lambda\,\mathbf{v}_t+\mu\,\mathbf{v}_t^\perp$, so that: $\langle \mathbf{t}_k,\mathbf{v}_t\rangle=\lambda\|\mathbf{v}_t\|^2>0\iff\lambda>0. \quad\quad\quad (6)$ This means that the line spanned by the vector $\mathbf{v}_t^\perp$ separates $A$ and $B$. Without considering the translation, and considering again the basis $(\mathbf{v}_t,\mathbf{v}_t^\perp)$, we have: $\mathbf{t}_k\in A\iff\lambda\|\mathbf{v}_t\|^2>\langle \mathbf{x}_t,\mathbf{v}_t\rangle, \qquad \mathbf{t}_k\in B\iff\lambda\|\mathbf{v}_t\|^2<\langle \mathbf{x}_t,\mathbf{v}_t\rangle. \quad\quad\quad (7)$ Thus, in the basis $(\mathbf{v}_t,\mathbf{v}_t^\perp)$, the line passing by the point $\mathbf{x}_t$ and whose direction is given by $\mathbf{v}_t^\perp$ separates $A$ and $B$. We have now to turn toward the indistinguishability conditions for two trajectories.
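Before doing so, a small numeric illustration of the separation property just proved may be useful. The sketch below is our own (network size, target state and random seed are made up): it draws random sensor positions, computes the binary outputs of Eq. (3), and checks the Lemma 1 ordering of the projections onto $\mathbf{v}_t$.

```python
import numpy as np

rng = np.random.default_rng(0)
sensors = rng.uniform(0.0, 300.0, size=(50, 2))   # sensor positions t_i
x_t = np.array([120.0, 80.0])                     # target position at time t
v_t = np.array([2.0, 1.0])                        # target velocity at time t

# Eq. (1): the distance decreases iff <x_t - t_i, v_t> < 0, giving Eq. (3).
s = np.where((x_t - sensors) @ v_t < 0.0, +1, -1)

# Lemma 1: every +1 sensor projects above <x_t, v_t>, every -1 sensor below,
# so the line through x_t directed by v_t^perp separates A from B.
proj = sensors @ v_t
assert proj[s == +1].min() > x_t @ v_t > proj[s == -1].max()
print("A and B are separated by the line through x_t orthogonal to v_t")
```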
Two trajectories are said to be indistinguishable if they induce the same outputs from the sensor network. We have then the following property [2].

###### Proposition 2.

Assume that the sensor network is dense; then two target trajectories (say $\mathbf{x}_t$ and $\mathbf{y}_t$) are indistinguishable iff the following conditions hold true: $\dot{\mathbf{y}}_t=\lambda_t\,\dot{\mathbf{x}}_t\ (\lambda_t>0) \quad\text{and}\quad \langle \mathbf{y}_t-\mathbf{x}_t,\dot{\mathbf{x}}_t\rangle=0 \quad\forall t. \quad\quad\quad (8)$

Proof: First, we shall consider the implications of indistinguishability. Actually, the two trajectories are indistinguishable iff the following condition holds: $\langle \mathbf{t}_j-\mathbf{t}_i,\dot{\mathbf{x}}_t\rangle\le 0\iff\langle \mathbf{t}_j-\mathbf{t}_i,\dot{\mathbf{y}}_t\rangle\le 0 \quad\forall t,\ \forall(\mathbf{t}_i,\mathbf{t}_j). \quad\quad\quad (9)$ We then choose $\mathbf{t}_j-\mathbf{t}_i=\alpha\,\dot{\mathbf{x}}_t^\perp$ (i.e. $\mathbf{t}_i$ and $\mathbf{t}_j$ both belong to the line separating $A$ and $B$) and consider the following decomposition of the vector $\dot{\mathbf{y}}_t$: $\dot{\mathbf{y}}_t=\lambda_t\dot{\mathbf{x}}_t+\mu_t\dot{\mathbf{x}}_t^\perp,$ so that we have: $\langle \mathbf{t}_j-\mathbf{t}_i,\dot{\mathbf{y}}_t\rangle=\alpha\,\mu_t\,\|\dot{\mathbf{x}}_t^\perp\|^2\le 0. \quad\quad\quad (10)$ Now, it is always possible to choose a scalar $\alpha$ of the same sign as $\mu_t$. So, we conclude that the scalar $\mu_t$ is necessarily equal to zero. Thus, if the trajectories $\mathbf{x}_t$ and $\mathbf{y}_t$ are indistinguishable, we have necessarily: $\dot{\mathbf{y}}_t=\lambda_t\dot{\mathbf{x}}_t\ \forall t.$ Furthermore, the scalar $\lambda_t$ is necessarily positive (see Eq. (9)). Then, the Lemma 1 inequalities yield: $\langle \mathbf{t}_j-\mathbf{t}_i,\dot{\mathbf{x}}_t\rangle<\langle \mathbf{x}_t-\mathbf{y}_t,\dot{\mathbf{x}}_t\rangle<\langle \mathbf{t}_i-\mathbf{t}_j,\dot{\mathbf{x}}_t\rangle. \quad\quad\quad (11)$ Choosing once again $\mathbf{t}_j-\mathbf{t}_i=\alpha\,\dot{\mathbf{x}}_t^\perp$, we deduce from Eq. (11) the second part of Prop. 2, i.e. $\langle \mathbf{y}_t-\mathbf{x}_t,\dot{\mathbf{x}}_t\rangle=0$. Considering now the distance between the two indistinguishable trajectories, we have ($\forall t$): $\frac{d}{dt}\|\mathbf{x}_t-\mathbf{y}_t\|^2=2\,\langle \mathbf{x}_t-\mathbf{y}_t,\dot{\mathbf{x}}_t-\dot{\mathbf{y}}_t\rangle=0, \quad\quad\quad (12)$ so that $\|\mathbf{x}_t-\mathbf{y}_t\|$ is constant. Reciprocally, assume that the two conditions of Eq. (8) hold true; are the two trajectories then indistinguishable? It is sufficient to remark that: $\langle \mathbf{y}_t,\dot{\mathbf{y}}_t\rangle=\langle \mathbf{x}_t+(\mathbf{y}_t-\mathbf{x}_t),\dot{\mathbf{y}}_t\rangle=\langle \mathbf{x}_t,\dot{\mathbf{y}}_t\rangle=\lambda_t\langle \mathbf{x}_t,\dot{\mathbf{x}}_t\rangle, \qquad \langle \mathbf{t}_i,\dot{\mathbf{y}}_t\rangle=\lambda_t\langle \mathbf{t}_i,\dot{\mathbf{x}}_t\rangle. \quad\quad\quad (13)$ Since the scalar $\lambda_t$ is positive, this ends the proof. Let us now consider the practical applications of the above general results.

### Rectilinear and uniform motion

Admitting now that the target motions are rectilinear and uniform (i.e. $\mathbf{x}_t=\mathbf{x}_0+t\,\dot{\mathbf{x}}$), Prop. 2 yields $\dot{\mathbf{y}}=\lambda\,\dot{\mathbf{x}}$ ($\lambda>0$) and: $\langle \mathbf{y}_t-\mathbf{x}_t,\dot{\mathbf{x}}\rangle=\langle \mathbf{y}_0-\mathbf{x}_0,\dot{\mathbf{x}}\rangle+t(\lambda-1)\|\dot{\mathbf{x}}\|^2=0 \quad\forall t. \quad\quad\quad (14)$ Then, from Eq. (14) we deduce that $\lambda=1$ and $\langle \mathbf{y}_0-\mathbf{x}_0,\dot{\mathbf{x}}\rangle=0$. So, the target velocity is fully observable, while the position is uniquely determined modulo a translation orthogonal to the velocity.

### Leg-by-leg trajectory

Consider now a leg-by-leg trajectory modeling. For a two-leg one, we have, for two indistinguishable trajectories (after the maneuvers): $\mathbf{x}_t=\mathbf{x}_0+t_1\mathbf{v}_x^1+(t-t_1)\mathbf{v}_x^2, \qquad \mathbf{y}_t=\mathbf{y}_0+t_1'\mathbf{v}_y^1+(t-t_1')\mathbf{v}_y^2, \quad\quad\quad (15)$ where $\mathbf{v}_x^k$ is the velocity of the trajectory $\mathbf{x}$ on the $k$-th leg and $t_1$ (resp. $t_1'$) is the epoch of maneuver. Furthermore, we can assume that $t_1\le t_1'$. Considering the implications of Prop. 2 both for $t<t_1$ and for $t>t_1'$, we know that if the trajectories are indistinguishable we must have: $\mathbf{v}_x^1=\mathbf{v}_y^1 \quad\text{and}\quad \mathbf{v}_x^2=\mathbf{v}_y^2. \quad\quad\quad (16)$ So, our objective is now to prove that we also have $t_1=t_1'$. Considering Prop. 2 on the intermediate interval, we thus have the following system of equations: $\langle \mathbf{y}_0-\mathbf{x}_0+(t-t_1)(\mathbf{v}^1-\mathbf{v}^2),\mathbf{v}^1\rangle=0 \quad\text{and}\quad \langle \mathbf{y}_0-\mathbf{x}_0+(t-t_1)(\mathbf{v}^1-\mathbf{v}^2),\mathbf{v}^2\rangle=0, \quad\text{for}\ t_1<t<t_1'. \quad\quad\quad (17)$ Now, on the first leg we also have $\langle \mathbf{y}_0-\mathbf{x}_0,\mathbf{v}^1\rangle=0$ (see Prop. 2 for $t<t_1$), so that Eqs. (17a,b) yield: $\langle \mathbf{v}^1-\mathbf{v}^2,\mathbf{v}^1\rangle=\langle \mathbf{v}^1-\mathbf{v}^2,\mathbf{v}^2\rangle=0. \quad\quad\quad (18)$ This means that $\mathbf{v}^1$ and $\mathbf{v}^2$ are both orthogonal to the same vector ($\mathbf{v}^1-\mathbf{v}^2$), so they are collinear, and we straightforwardly deduce from Eq. (18) that $\mathbf{v}^1=\mathbf{v}^2$, i.e. there would be no maneuver at all. It has thus been proved that $t_1=t_1'$, and this reasoning can be extended to any leg number. The observability requirements having been considered, we turn now toward the development of the algorithmic approaches. Let us first introduce the following functional.

## III The stairwise functional

Our first aim is to estimate the target velocity within a batch processing framework. We assume that binary sensors are uniformly distributed on the field of interest (see fig. 2). Each sensor is coupled with a counter, which is increased by one each time-period the sensor gives us a $+1$, and keeps its value each time the sensor gives us a $-1$.
Then, at the end of the trajectory, each sensor holds an integer value representing the number of periods during which the target was approaching. Within a given batch, the outputs of the sensor counters can be represented by a stairwise functional (see fig. 3). Then, once this stair is built, we can define what we call the velocity plane. This plane is the tangent plane of the stairwise functional, which means that its direction gives the direction of the stair, while its angle gives the slope. The direction of the plane gives us the target heading, while the target speed is given by: $v=\frac{1}{\tan(\theta)}. \quad\quad\quad (19)$ Thus, estimating the velocity is equivalent to estimating the velocity plane parameters. Mathematical justifications are then presented. The target moves with a constant velocity $\mathbf{v}$. Considering the preceding observability results, its starting position is given by the following equation: $\mathbf{x}(0)=\mathbf{x}_0+\lambda\,\mathbf{v}^\perp,\ \lambda\in\mathbb{R}, \quad\text{so that:}\quad \mathbf{x}(t)=\mathbf{x}_0+\lambda\,\mathbf{v}^\perp+t\,\mathbf{v}. \quad\quad\quad (20)$ This means that at each time period $t$, the possible positions define a (moving) straight line, whose direction is $\mathbf{v}^\perp$. Let us consider now the scalar product $\langle \mathbf{x}(t),\mathbf{v}\rangle$; then we have: $\frac{\partial}{\partial t}\langle \mathbf{x}(t),\mathbf{v}\rangle=\|\mathbf{v}\|^2. \quad\quad\quad (21)$ This is clearly constant, which means that the surface swept by the counter values is a plane. The conclusion follows: the stairwise plane is an exhaustive information for the velocity vector. We provide in the next section two solutions to estimate the velocity plane from the observed data, and give some asymptotic results about the estimation.

## IV Statistical Methods to Estimate the Velocity Plane

We showed that estimating the velocity plane allows us to estimate the velocity vector. While there exist several methods to do that, we shall focus on two of them.

### IV-A The Support Vector Machine (SVM) approach [3]

As seen previously, the problem we have to face is to optimally separate the two classes of sensors (i.e. the $+1$ and the $-1$). So, we can use the general framework of SVM, widely used in the classification context. The set of labeled patterns (sensor positions $\mathbf{x}_i$ with labels $y_i=\pm 1$) is said to be linearly separable if there exists a vector $\mathbf{w}$ and a scalar $b$ such that the following inequalities hold true: $y_i(\langle \mathbf{w},\mathbf{x}_i\rangle+b)\ge 1, \quad i=1,\cdots,l. \quad\quad\quad (22)$ Let $H$ ($\mathbf{w}$: normal vector) be this optimal separation plane, and define the margin $m$ as the distance of the closest point to $H$; then it is easily seen that $m=1/\|\mathbf{w}\|$. Thus, maximizing the margin leads to consider the following problem: $\min_{\mathbf{w},b}\ \tau(\mathbf{w})=\|\mathbf{w}\|^2, \quad\text{s.t.}\ y_i(\langle \mathbf{w},\mathbf{x}_i\rangle+b)\ge 1 \quad\forall i=1,\cdots,l,\ y_i=\pm 1. \quad\quad\quad (23)$ Denoting $\Lambda$ the vector of Lagrange multipliers, dualization of Eq. (23) leads to consider again a quadratic problem, but with more explicit constraints [3], i.e.: $\max_\Lambda\ W(\Lambda)=-\tfrac12\Lambda^T D\Lambda+\Lambda^T\mathbf{1}, \quad\text{s.t.}\ \Lambda\ge 0,\ \Lambda^T Y=0, \quad\quad\quad (24)$ where $\mathbf{1}$ is a vector made of ones, $Y$ is the $l$-dimensional vector of labels, and $D$ is the Gram matrix: $D_{i,j}=\langle y_i\mathbf{x}_i,y_j\mathbf{x}_j\rangle. \quad\quad\quad (25)$ The dualized problem can be efficiently solved by classical quadratic programming methods. The less-perfect case, where data cannot be separated without errors, leads to replace the constraints of Eq. (23) by the following ones: $y_i(\langle \mathbf{w},\mathbf{x}_i\rangle+b)\ge 1-\xi_i,\quad \xi_i\ge 0,\ i=1,\cdots,l. \quad\quad\quad (26)$ Consider now a multiperiod extension of the previous analysis. Let us restrict first to a two-period analysis; we shall consider two separating hyperplanes (say $H_1$ and $H_2$) defined by: $\langle \mathbf{w},\mathbf{x}_l^1\rangle+b_1\gtrless\pm c_1 \ \text{according to}\ y_l^1=\pm 1, \qquad \langle \mathbf{w},\mathbf{x}_l^2\rangle+b_2\gtrless\pm c_2 \ \text{according to}\ y_l^2=\pm 1. \quad\quad\quad (27)$ It is also assumed that these two separating planes are associated with known time periods $t_1$ and $t_2$. It is easily seen that the margin for the separating plane $H_1$ is $c_1/\|\mathbf{w}\|$, while for the plane $H_2$ it is $c_2/\|\mathbf{w}\|$.
Thus, the problem we have to solve reads: $\min_{\mathbf{w},c_1,c_2,b_1,b_2}\left[\max\left(\frac{\|\mathbf{w}\|^2}{c_1^2},\frac{\|\mathbf{w}\|^2}{c_2^2}\right)\right], \quad\text{s.t.}\ y_l^1(\langle \mathbf{w},\mathbf{x}_l^1\rangle+b_1)\ge c_1,\ \ y_l^2(\langle \mathbf{w},\mathbf{x}_l^2\rangle+b_2)\ge c_2 \quad\forall l. \quad\quad\quad (28)$ At first glance, this problem appears very complicated. But, without restricting generality, we can normalize the margins; rescaling $\mathbf{w}$, $b_1$ and $b_2$ accordingly then leads to consider the classical problem: $\min_{\mathbf{w},b_1,b_2}\|\mathbf{w}\|^2, \quad\text{s.t.}\ y_l^1(\langle \mathbf{w},\mathbf{x}_l^1\rangle+b_1)\ge 1,\ \ y_l^2(\langle \mathbf{w},\mathbf{x}_l^2\rangle+b_2)\ge 1 \quad\forall l. \quad\quad\quad (29)$ Let $(\mathbf{w}^*,b_1^*,b_2^*)$ be the (unique) solution of Eq. (29); then a straightforward calculation yields the distance between the two separating planes, i.e.: $d(H_1^*,H_2^*)=\frac{|b_1^*-b_2^*|}{\|\mathbf{w}^*\|}.$ Finally, we deduce that the estimated velocity vector is given by: $\hat{\mathbf{v}}=\alpha\,\mathbf{w}^* \quad\text{and:}\quad \|\hat{\mathbf{v}}\|=\frac{1}{\Delta T}\,d(H_1^*,H_2^*). \quad\quad\quad (30)$ The previous analysis can be easily extended to an arbitrary number of periods, as long as the target trajectory remains rectilinear. Another definite advantage is that it can be easily extended to multitarget tracking.

#### IV-A1 3D-SVM

We can also mix the SVM ideas with those of Section III. Indeed, instead of focusing on a 2-D dataset, we can consider a 3-dimensional dataset (sensor coordinates and values of the sensor counters). The second 3-D dataset is the same, but with the value of the counter increased by one. So, the separation plane is 2-D, and will be as close to the velocity plane as the sensor number allows. See fig. 4 for a more explicit understanding. The results of the SVM estimation of the velocity plane are discussed in the Simulation Results section.

### IV-B Projection Pursuit Regression

The projection pursuit methods were first introduced by Friedman and Tukey [4]. Then, they were developed for regression with projection pursuit regression (PPR) by Friedman and Stuetzle [5]. PPR is mainly a non-parametric method to estimate a regression, with however a certain particularity. Indeed, instead of estimating a regression function of known parametric form, PPR estimates both a direction $\theta$ and a function $g$ such that $y=g(X^T\theta)$. The first step of the algorithm is to estimate the direction $\theta$, and then $g$. In our specific case, $\theta$ will represent the direction of the target, and $g$ will give us the value of the velocity.

#### IV-B1 Modeling

Let $Y_i$ be the value of the $i$-th sensor counter, and let $X_i$ be the sensor coordinates. If $n(X_i\theta)$ is the value of the counter $i$ at the end of the track, and $p$ the probability of making the right decision, we then have ($\mathcal{B}$: binomial): $\mathcal{L}(Y_i|X_i\theta)=\mathcal{B}(n(X_i\theta),p). \quad\quad\quad (31)$ Assuming in a first step that $p$ is known, the two parameters we would like to estimate are the parameter $\theta$ and the function $n$.

#### IV-B2 The PPR method in the network context

We have some additional constraints on $n$. First of all, it only takes integer values. Then, it is an increasing function (because the counters never decrease). The optimization problem we have to solve is the following: $\hat\theta=\arg\min_\theta\sum_i\big(\hat n(X_i\theta)-Y_i\big)^2, \quad\quad\quad (32)$ where $\hat n$ is calculated in a quite special way. First, we define a non-parametric estimation of a function $f$, via: $\hat f(u)=\frac{\sum_i Y_i\,K_h(X_i\theta-u)}{\sum_i K_h(X_i\theta-u)}. \quad\quad\quad (33)$ Then, we sort the projections $X_i\theta$ into a vector, from the smallest to the biggest, after which we define $\hat n$ via: $\hat n(X\theta_{(i)})=\hat f(X\theta_{(i)})\ \text{if}\ \hat f(X\theta_{(i)})\ge\hat f(X\theta_{(i-1)}), \qquad \hat n(X\theta_{(i)})=\hat f(X\theta_{(i-1)})\ \text{otherwise}. \quad\quad\quad (34)$ Sometimes, due to the integer values of the estimated function, we have to deal with many possible values of $\hat\theta$; in this case, we choose their mean value. Due to the specific behavior of our target and our modeling, we know in addition that the general form of $n$ (say $\tilde n$) is given by: $\tilde n(u)=\sum_i \mathbb{1}_{[(X\theta)^\perp+(i-1)v,\ (X\theta)^\perp+iv]}(u). \quad\quad\quad (35)$ The next step is then to estimate $v$.
Such an estimation is given by the following optimization program: $\hat v=\arg\min_v\sum_i\big(\tilde n(X_i\hat\theta\,;v)-Y_i\big)^2. \quad\quad\quad (36)$

#### IV-B3 Convergence

We study whether the estimation is good when the number of sensors is infinite. Assuming we have an infinite number of sensors in a closed space, each point of the space gives us an information; we then have the exact parameters of the stairwise functional. To that aim, we show in the following paragraph that the probability of having a sensor arbitrarily close to the limits of each stair step converges to one. We assume that the sensor positions are randomly distributed, following a uniform law. Then, $Z$ being fixed: $\mathcal{L}(X|Z)=\mathcal{U}_{[B_{inf};B_{sup}]}. \quad\quad\quad (37)$ If the velocity vector is denoted $\mathbf{v}=(a,b)$, then: $B_{inf}=-\frac{b}{a}\,y-\frac{c_{inf}}{a}, \qquad B_{sup}=-\frac{b}{a}\,y-\frac{c_{sup}}{a}, \quad\quad\quad (38)$ where $c_{inf}$ and $c_{sup}$ only depend on $\mathbf{v}$ and on the stair step, which means that they are deterministic and independent from $X$. It is quite obvious that $B_{inf}$ represents the smaller $x$-limit of a step, while $B_{sup}$ represents its higher $x$-limit. Then, considering the velocity plane, $B_{inf}$ and $B_{sup}$ both belong to the plane. Denote $u=\inf_i X_i$; then: $\forall\varepsilon>0,\quad P(|u-B_{inf}|<\varepsilon)=P(u-B_{inf}<\varepsilon)=P(u<\varepsilon+B_{inf}),$ and we denote $A$ this event. We know that: $P\big(\inf_i X_i\le t\big)=\begin{cases}0 & \text{if } t\le B_{inf},\\ 1-\Big(\frac{B_{sup}-t}{B_{sup}-B_{inf}}\Big)^N & \text{if } t\in[B_{inf};B_{sup}],\\ 1 & \text{if } t>B_{sup}.\end{cases} \quad\quad\quad (40)$ Then, we have the following probability calculations: $P(A)=P(u<\varepsilon+B_{inf})=1-\Big(\frac{B_{sup}-(\varepsilon+B_{inf})}{B_{sup}-B_{inf}}\Big)^N\,\mathbb{1}_{[B_{inf};B_{sup}]}(\varepsilon+B_{inf})=\begin{cases}0 & \text{if } \varepsilon\le 0,\\ 1-\Big(1-\frac{\varepsilon}{B_{sup}-B_{inf}}\Big)^N & \text{if } \varepsilon\in\,]0;B_{sup}-B_{inf}],\\ 1 & \text{if } \varepsilon>B_{sup}-B_{inf}.\end{cases}$ Given the above equation, $1-\frac{\varepsilon}{B_{sup}-B_{inf}}$ is smaller than one, which means that $P(A)$ converges to one as $N$ increases to infinity. Thus, we finally have: $\forall\varepsilon>0,\quad \lim_{N\to\infty}P(|u-B_{inf}|<\varepsilon)=1, \quad\quad\quad (41)$ ending the proof.

## V Non-linear trajectory estimation

### V-A Target Motion Model

The target is assumed to evolve with a Markov motion, given by: $\mathbf{x}_k|\mathbf{x}_{k-1}\sim\mathcal{N}(F_k\,\mathbf{x}_{k-1},Q_k), \quad\quad\quad (42)$ where $\mathcal{N}(\mu,\Sigma)$ is a Gaussian distribution with mean $\mu$ and variance $\Sigma$. The starting position is assumed to be unknown.

### V-B Sensor Measurement Model and Analysis

At each time period, each sensor gives us a binary information, meaning that the target is getting closer or moving away. Given all the sensor reports at the time-period $k$, we can easily define a region where the target is assumed to be at this time-period. This is the fundamental uncertainty we have at $k$, and the area of this domain is, of course, directly related to the network parameters (sensor number, network geometry, etc.).

### V-C Velocity Estimation

We can estimate the direction of the target based on the simple information given by the sensors. Obviously, that estimator will only be precise if the number of sensors is significantly great. To perform that estimation, we can use several methods, such as the Projection Pursuit Regression method or the Support Vector Machine method. The SVM method was chosen for our algorithm, as the most common method, and is presented in the next paragraphs.

#### V-C1 The effect of target acceleration

To illustrate the effect of velocity change for estimating the target position, let us consider a very simple example. Assume that the target motion is uniformly accelerated, i.e.: $\mathbf{x}_t=\mathbf{x}_0+t\,\dot{\mathbf{x}}_0+\tfrac{t^2}{2}\,\ddot{\mathbf{x}}_0. \quad\quad\quad (43)$ We now have to deal with the following question: is the target trajectory fully observable? To that aim, we first recall the following result. Considering a dense binary network, two target trajectories are said to be indistinguishable iff they provide the same (binary) information, which is equivalent to the following conditions:
$\dot{\mathbf{x}}_t=\dot{\mathbf{y}}_t \quad\text{and}\quad \langle \mathbf{y}_t-\mathbf{x}_t,\dot{\mathbf{y}}_t\rangle=0 \quad\forall t. \quad\quad\quad (44)$ Expliciting the second condition of Eq. (44), with the target motion model (43), we obtain that the following condition holds ($\forall t$): $\langle \mathbf{y}_0-\mathbf{x}_0,\dot{\mathbf{y}}_0\rangle+t\,\langle \dot{\mathbf{y}}_0-\dot{\mathbf{x}}_0,\dot{\mathbf{y}}_0\rangle+\tfrac12 t^2\langle \ddot{\mathbf{y}}_0-\ddot{\mathbf{x}}_0,\dot{\mathbf{y}}_0\rangle+t\,\langle \mathbf{y}_0-\mathbf{x}_0,\ddot{\mathbf{y}}_0\rangle+t^2\langle \dot{\mathbf{y}}_0-\dot{\mathbf{x}}_0,\ddot{\mathbf{y}}_0\rangle+\tfrac12 t^3\langle \ddot{\mathbf{y}}_0-\ddot{\mathbf{x}}_0,\ddot{\mathbf{y}}_0\rangle=0. \quad\quad\quad (45)$ Thus, the left-hand side is a zero polynomial in $t$, which means that all its coefficients are zero. For the $t^3$ coefficient we obtain the condition $\langle \ddot{\mathbf{y}}_0-\ddot{\mathbf{x}}_0,\ddot{\mathbf{y}}_0\rangle=0$. Similarly, with the roles of the two trajectories exchanged, we obtain $\langle \ddot{\mathbf{x}}_0-\ddot{\mathbf{y}}_0,\ddot{\mathbf{x}}_0\rangle=0$. Subtracting these two equalities yields $\|\ddot{\mathbf{y}}_0-\ddot{\mathbf{x}}_0\|^2=0$, or $\ddot{\mathbf{y}}_0=\ddot{\mathbf{x}}_0$. Quite similarly, we obtain the equality $\dot{\mathbf{y}}_0=\dot{\mathbf{x}}_0$, and the last equality: $\langle \mathbf{y}_0-\mathbf{x}_0,\dot{\mathbf{y}}_0+t\,\ddot{\mathbf{y}}_0\rangle=0 \quad\forall t. \quad\quad\quad (46)$ Assuming that the couple $(\dot{\mathbf{y}}_0,\ddot{\mathbf{y}}_0)$ spans the sensor space, we then deduce that $\mathbf{y}_0=\mathbf{x}_0$. So, it has been shown that it is the target acceleration which renders the problem fully observable. This reasoning can be extended to a wide variety of target modelings.

### V-D Tracking algorithm

The main issue with the SVM estimation is that it only provides us the general direction of the target, within a deterministic framework. Moreover, it is highly desirable to develop a reliable algorithm for target tracking (velocity and position). To solve this problem, we build a two-step algorithm. In the first step, we perform a correction along the estimated unitary velocity vector at each time-period $t$, called $\lambda_t$. Then, in a second step, we perform a correction along the orthogonal of the estimated (unitary) velocity vector, also at each time-period, called $\theta_t$. These two corrections give us a better estimation of both the velocity and the position of the target. We refer to fig. 5 for the presentation of the rationale of the two correction factors.

#### V-D1 The λ factor

To build that correction factor, we start with a very simple remark. At each period $t$, the sensors provide binary motion information. Thanks to the first part of this article, we know that the target is in the (special) set lying between the two same-sign sensor sets. Then, starting from the previous estimated position of the target, we move the estimated target along the estimated velocity vector direction until it stands in that special set. We now define this operator in a mathematical way. Let $\hat{\mathbf{v}}_t$ be the estimated normalized velocity vector at time $t$. Moreover, let $\mathbf{t}_i^+$ (respectively $\mathbf{t}_j^-$) be the coordinates of the sensors giving a $+1$ (respectively a $-1$) at time $t$. We sort the projections $\langle\hat{\mathbf{v}}_t,\mathbf{t}_i^+\rangle$ (respectively $\langle\hat{\mathbf{v}}_t,\mathbf{t}_j^-\rangle$). Then, following a very simple geometrical reasoning, we note that $\langle\hat{\mathbf{v}}_t,\mathbf{x}_t\rangle$ should lie between $vs(-)_{max}$, the largest projection among the $-1$ sensors, and $vs(+)_{min}$, the smallest projection among the $+1$ sensors. To ensure that property, we define the following correction factor: $\lambda_t=\frac{vs(+,-)_{moy}-\langle\hat{\mathbf{v}}_t,\hat{\mathbf{x}}_{t-1}\rangle}{\langle\hat{\mathbf{v}}_t,\hat{\mathbf{v}}_{t-1}\rangle}, \quad\text{with:}\quad vs(+,-)_{moy}=\frac{vs(-)_{max}+vs(+)_{min}}{2}. \quad\quad\quad (47)$ To calculate this factor, we consider the projection equality: $\langle\hat{\mathbf{v}}_t,\,\hat{\mathbf{x}}_{t-1}+\lambda_t\hat{\mathbf{v}}_{t-1}\rangle=vs(+,-)_{moy}, \quad\quad\quad (48)$ which means that the projection of the corrected position is equal to the mean value of the projection bounds. Geometrically, this means that the position of the target is estimated to be in the center of the special set defined by the sensors. The value of the correction factor (see Eq. (47)) is then straightforwardly deduced from Eq. (48). Similarly, the target position is updated via: $\hat{\mathbf{x}}_t^{corr}=\hat{\mathbf{x}}_{t-1}+\lambda_t\,\hat{\mathbf{v}}_{t-1}. \quad\quad\quad (49)$ Here the correction factor has been calculated via the average value of the projection. This is an arbitrary choice, and we could consider the lower or the upper bound of the projection with no significant difference in the results of the algorithm. Obviously, if the estimation of the position is not very good, the estimated velocity value (clearly based on $\lambda_t$) will be quite different from the real value of the velocity.
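A minimal sketch of this λ correction may make the geometry plainer. The code below is our own illustration (the names are ours), and it assumes the direction estimates $\hat{\mathbf{v}}_t$ and $\hat{\mathbf{v}}_{t-1}$ are unit vectors with $\langle\hat{\mathbf{v}}_t,\hat{\mathbf{v}}_{t-1}\rangle\neq 0$.

```python
import numpy as np

def lambda_correction(x_prev, v_prev, v_hat, sensors, signs):
    """Eqs. (47)-(49): slide the previous position estimate along v_prev
    until its projection onto v_hat sits at the center of the slab
    bounded by the -1 and +1 sensor projections (Lemma 1)."""
    proj = sensors @ v_hat
    vs_minus_max = proj[signs == -1].max()        # vs(-)_max
    vs_plus_min = proj[signs == +1].min()         # vs(+)_min
    vs_moy = 0.5 * (vs_minus_max + vs_plus_min)   # vs(+,-)_moy
    lam = (vs_moy - v_hat @ x_prev) / (v_hat @ v_prev)   # Eq. (47)
    return x_prev + lam * v_prev, lam                    # Eq. (49)
```

Centering on $vs(+,-)_{moy}$ mirrors the arbitrary mean-value choice discussed above; replacing `vs_moy` by either slab bound gives the lower- or upper-bound variants.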
The next correction factor is based on the assumption that the target velocity changes are upper and lower bounded.

#### V-D2 The θ correction factor

We assume that the velocity of the target has bounded acceleration. Then, if the velocity estimated at a certain time $t$ is too different from the velocity estimated at time $t-1$, this means that the estimated position of the target is far from the right one. In that precise case, we consider an orthogonal correction, through $\hat{\mathbf{v}}_t^\perp$. For this deterministic algorithm we decided to perform a very simple modeling of the velocity. Indeed, we take as the right value for the velocity the simple mean $m_{t,k}$ of the previous values of the estimated velocity. We calculate in addition their variance, and the factor $\theta_t$ can be non-zero iff the estimated value of the velocity at time $t$ is not in the interval given by $m_{t,k}$ plus or minus a few standard deviations. We then look for $\theta_t$ such that: $\langle\hat{\mathbf{x}}_t^{corr}+\theta_t\hat{\mathbf{v}}_t^\perp-(\hat{\mathbf{x}}_{t-1}+\theta_t\hat{\mathbf{v}}_{t-1}^\perp),\,\hat{\mathbf{v}}_{t-1}\rangle=m_{t,k}. \quad\quad\quad (50)$ The previous equation needs some explanation. Given that $\hat{\mathbf{x}}_t^{corr}$ is the estimated target position at time $t$, we would like to correct this value to be closer to the right position. The only way we can deal with it is to correct the estimated value of the velocity. $\lambda_t$ is the previously calculated correction. If the difference between that estimation and the value $m_{t,k}$ is too important, we try to reduce that difference with a translation of the positions at time periods $t$ and $t-1$. As we want the positions to stay in the special sets defined by the sensors, the direction of that translation is given by $\hat{\mathbf{v}}_t^\perp$ for the position at time $t$, and $\hat{\mathbf{v}}_{t-1}^\perp$ for the position at time $t-1$. Performing a straightforward calculation leads to the following correction factor: $\theta_t=\frac{m_{t,k}-\lambda_t}{\langle\hat{\mathbf{v}}_t^\perp,\hat{\mathbf{v}}_{t-1}\rangle}. \quad\quad\quad (51)$ Obviously, as we could expect when presenting the method, if the target motion is rectilinear and uniform ($\hat{\mathbf{v}}_t=\hat{\mathbf{v}}_{t-1}$, so that $\langle\hat{\mathbf{v}}_t^\perp,\hat{\mathbf{v}}_{t-1}\rangle=0$), no correction factor can be calculated. Then, the final estimated position is given by: $\hat{\mathbf{x}}_t^{fin}=\hat{\mathbf{x}}_t^{corr}+\theta_t\,\hat{\mathbf{v}}_t^\perp. \quad\quad\quad (52)$

#### V-D3 The final correction step

Noticeably, the most important step of the algorithm, i.e. the correction factor, is based on the estimation of the velocity change. Indeed, the better the estimation of the velocity is, the better we can estimate the position. Then, our aim is to perform a better analysis of the target motion. Considering that, as time goes on, the estimation of the position increases in quality, a promising way is to feed the newest correction back to the older position estimates: for every $j<t$, the estimated position at time $j$ is updated with the newest correction. With this new estimator we are able to perform a better analysis of the target motion (position and velocity).

#### V-D4 The final algorithm

With the definition of the correction factors, the theoretical part of the algorithm is finished. It then proceeds as follows, at time period $t$:

1. Get the binary information of each sensor, and then the target position set.
2. Estimate the velocity direction at time $t$ via an SVM method.
3. Perform the $\lambda_t$ calculation, and add that correction to the estimated velocity at time $t-1$. The time-$t$ position is then updated.
4. Check if the estimated velocity at time $t$ is too different from the modeled value, and in that case, calculate $\theta_t$.
5. Update the position at time $t$, and in that case, the velocity at time $t$, with the correction $\theta_t$.

Steps 2 and 3 can be inverted with no damage to the process. This is the main part of the algorithm; a sketch of one iteration is given below. However, there is no mention in that enumeration of the initialization.
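The following sketch assembles steps 1–5 into one iteration, reusing the `lambda_correction` helper sketched earlier. It is our own illustrative code, not the authors' implementation: scikit-learn's LinearSVC is a stand-in for the SVM direction estimator, and the two-standard-deviation band used in the θ test is an arbitrary choice of ours.

```python
import numpy as np
from sklearn.svm import LinearSVC

def track_step(x_prev, v_prev, sensors, signs, speed_hist):
    """One tracking iteration (steps 1-5). `signs` holds the +/-1 sensor
    outputs at time t; `speed_hist` stores past speed estimates."""
    # Step 2: velocity direction from the labeled sensors. With classes
    # {-1, +1}, the fitted normal points toward the +1 side, which by
    # Lemma 1 is the side the target is heading to.
    svm = LinearSVC(C=10.0).fit(sensors, signs)
    v_hat = svm.coef_[0] / np.linalg.norm(svm.coef_[0])
    u_prev = v_prev / np.linalg.norm(v_prev)
    # Step 3: lambda correction, Eqs. (47)-(49).
    x_corr, lam = lambda_correction(x_prev, u_prev, v_hat, sensors, signs)
    # Steps 4-5: theta correction, Eqs. (50)-(52), only when the new speed
    # estimate leaves the band around the running mean (threshold is ours).
    theta = 0.0
    v_perp = np.array([-v_hat[1], v_hat[0]])
    denom = v_perp @ u_prev                  # zero for uniform motion
    if len(speed_hist) > 2 and abs(denom) > 1e-9:
        m_t, s_t = np.mean(speed_hist), np.std(speed_hist)
        if abs(lam - m_t) > 2.0 * s_t:
            theta = (m_t - lam) / denom      # Eq. (51)
    # Eq. (52) for the position; lam * v_hat as a crude velocity estimate.
    return x_corr + theta * v_perp, lam * v_hat
```

The loop still needs starting values for the position and the velocity, which is precisely the initialization question addressed next.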
There are two main state vectors that have to be initialized: the position and the velocity. The position is assumed to be unknown, but thanks to the sensors, we can define a region where the target is assumed to be at first. We use here a uniform law for the initialization, given that we have no further information about where the target can start. The initialization of the velocity is not far from that solution. Indeed, with the binary information, we can provide a convenient estimate of the velocity direction. Even if we don't have a precise idea of the speed value, we can then start the algorithm.

## VI Simulation Results

### VI-A Constant velocity movement

We shall now investigate the previous developments via simulations. The first figure (fig. 6) shows the stair built by the previously explained method, for a fixed velocity. The positions of the sensors are considered random, following a uniform law on the surveillance set. To evaluate the performance of our methods, we calculate the mean square error of the two estimated parameters, which are the velocity value and the velocity direction. Fig. 7 shows the two MSE values for both direction and velocity value as the sensor number grows, the velocity vector being held fixed. Over 2000 simulations, the MSEs seem to be unstable. However, the two parameter estimation methods lead to very different conclusions. In the case of the direction estimation, the PPR method works far better than the SVM method, and seems quite stable as the sensor number grows. On the other side, the SVM method is more erratic. One possible explanation is that the PPR method was first developed for the particular case of direction estimation, while the SVM method is more focused on margin maximization, which means in our case a simultaneous estimation of both parameters. The conclusions we can draw on the velocity value estimation are rather the opposite. The MSE becomes reasonable only for the SVM method, and for a number of sensors up to 60; there, the error on the velocity value estimation remains small compared with the theoretical value. As erratic as the SVM's MSE was in the direction estimation, it was however less erratic than the result we obtained for the PPR value. One answer to the erratic MSE value for the PPR could be to find a better way to estimate the velocity value. Indeed, in our case, we chose as the estimating functional a sum of indicator functions. However, it is not clear that this optimization admits a single minimum. A finer functional could lead to a more robust optimization, and this will be the subject of future works.

### VI-B Random walk

We present in this section the results of the tracking algorithm. We consider here that the target starts from a given position with a given initial velocity vector. The sensors, whose number is held fixed, are deployed in a quite wide space (300 m × 300 m). The variance of the target motion is not very important, and the tracking duration is fixed. One simulation is presented in figure 8. In red is represented the real target trajectory, quite diffusive, and in green the estimated successive positions. The initialization is not very bad, because the number of sensors is quite important, which means the initial uniform set is not too large. After the first step, the estimation seems to latch onto the real trajectory, and follows the target well (with less than a meter of error).
However, when the target turns right, we lose some precision, mainly because the correction factors seem to be "lost". The reason for that behavior is that the SVM method provides us a bad estimation of the velocity vector. Then, the algorithm applies a correction in a bad direction, which moves the estimate away from the real trajectory. During a few seconds, the estimation works quite badly, before latching onto the target direction again, and then performing a quite good estimation of the velocity. Unfortunately, there is no evidence in that example that increasing the tracking duration indefinitely results in an estimated position closer and closer to the real target position. This is precisely the aim of the two figures in fig. 9. The first one shows the mean square error of the estimated position of the target through the trajectory. We can see a remarkable decrease of that MSE in the first seconds of the run. It seems however that there is a limit to that decrease. Indeed, the MSE does not converge to zero, even if we could perform long-time tracking. Clearly, the limitation is due to the binary information in the first place, and certainly to the number of sensors in the second place. Some further work could certainly exhibit a strong link between the number of sensors and the MSE of the position. In the same way, the velocity estimation has an acceptable MSE through the tracking process. Despite the clearly strong decrease at the beginning, the curve then settles at an acceptable but non-zero value. The effect is more obvious than in the position case, surely because of the velocity modelling we make in the algorithm, which forces the velocity estimation into poor evolutions. A clue could be to perform a more sophisticated modelling of the velocity, but given the binary information, this won't be easy. This is another work in progress for the evolution of our algorithm.

## VII Conclusion

### VII-A Constant velocity movement

In this paper, we chose to focus on the use of the binary $\{-1,+1\}$ motion information at the level of information processing for a sensor network. Though this information is rather poor, it has been shown that it can provide very interesting results about the target velocity estimation. The theoretical aspects of our methods have been thoroughly investigated, and it has been shown that the PPR method leads to the right velocity plane as the number of sensors increases to infinity. The feasibility of the new concept ("velocity plane") for estimating the target trajectory parameters has been put in evidence. The proposed methods seem to be sufficiently general and versatile to explore numerous extensions like target tracking and dealing with multiple targets within the same binary context.

### VII-B Random walk

A new method for tracking both position and velocity of a moving target via binary data has been developed. Though the instantaneous data are poorly informative, our algorithm takes benefit of the network extent and density via a specific spatio-temporal analysis. This is remarkable, since the assumptions we made about target motion are not restrictive. Noticeably also, our algorithm is quite fast and reliable. Furthermore, it is clear that performance can be greatly improved if we can consider that the acquisition frequency is (far) greater than the maneuver frequency. In particular, we can mix the present method with the one we developed in [1]. However, some important questions remain. The first one concerns the velocity modeling.
We focused in this paper on the adaptability of the different correction factors, but we didn't pay much attention to that modeling, which can definitely improve the estimation quality. Moreover, our tracking algorithm is basically deterministic, even if the target motion modeling is basically probabilistic. Thus, it should be worthwhile to calculate the first correction factor ($\lambda_t$) via a likelihood, such that the corrected position does not always stay at the center of the special set. Moreover, that likelihood should be related to all the sources of sensor uncertainty. In addition, the present algorithm gives a slow response to sudden target maneuvers. A remedy should be to incorporate a stochastic modeling of such events in our algorithm. The second correction factor ($\theta_t$) may also be improved via a stochastic approach. Instead of considering a correction only related to the estimated velocity, we could immerse this correction within a stochastic framework involving both correction factors. These observations are part of our next work on that very constrained but also quite exciting tracking framework. The last important point is multiple target tracking. Even if our work in this area is quite preliminary, it is our strong belief that our spatio-temporal separation based algorithm should be the natural way to overcome the association problems.

## References

• [1] A. Ickowicz and J.-P. Le Cadre, A new method for target trajectory estimation within a binary sensor network. Proc. of the 10th European Conference on Computer Vision: Multi-camera and Multi-modal Sensor Fusion Algorithms and Applications Workshop, Oct. 2008.
• [2] J. Aslam, Z. Butler, F. Constantin, V. Crespi, G. Cybenko, and D. Rus, Tracking a moving object with a binary sensor network. Proc. of the 1st International Conference on Embedded Networked Sensor Systems, Nov. 2003, pp. 150–161.
• [3] C. Cortes and V. Vapnik, Support-vector networks. Machine Learning, 20, 1995, pp. 273–297.
• [4] J. H. Friedman and J. W. Tukey, A projection pursuit algorithm for exploratory data analysis. IEEE Trans. Comput., 23, 1974, pp. 881–889.
• [5] J. H. Friedman and W. Stuetzle, Projection pursuit regression. J. Amer. Stat. Assoc., 76, 1981, pp. 817–823.
• [6] L. Lazos, R. Poovendran and J. A. Ritcey, Probabilistic detection of mobile targets in heterogeneous sensor networks. Proc. of the 6th IPSN, Apr. 2007.
• [7] X. Wang and B. Moran, Multitarget tracking using virtual measurements of binary sensor networks. Proc. of the 9th Int. Conf. on Information Fusion, Jul. 2006.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9075770974159241, "perplexity": 592.6630330782314}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735964.82/warc/CC-MAIN-20200805183003-20200805213003-00363.warc.gz"}
http://math.au.dk/en/research/publications/publication-series/publication/publid/1104/
# The core of C*-algebras associated with circle maps By Benjamin Randeris Johannesen PhD Dissertations April 2017 Abstract: Let $$\phi\colon \mathbb{T}\to\mathbb{T}$$ be any (surjective) continuous and piecewise monotone circle map. We consider the principal and locally compact Hausdorff étale groupoid $$R_\phi^+$$ from [50]. Already $$C^\ast_r(R_\phi^+)$$ is a unital separable direct limit of Elliott--Thomsen building blocks. A characterization of simplicity of $$C^\ast_r(R_\phi^+)$$ is given assuming surjectivity in addition. We also prove that $$C^\ast_r(R_\phi^+)$$ has a unique tracial state and real rank zero when simple. As a consequence $$C^\ast_r(R_\phi^+)$$ has slow dimension growth in the sense of [36] when simple. This means that $$C^\ast_r(R_\phi^+)$$ are classified by their graded ordered K-theory due to [58]. We compute $$K_0(C^\ast_r(R_\phi^+))$$ for a subclass of circle maps. In general $$K_1(C^\ast_r(R_\phi^+)) \simeq \mathbb{Z}$$. A counterexample yields non-semiconjugate circle maps with isomorphic K-theory. We give a classification of transitive critically finite circle maps up to conjugacy. This class of circle maps contains the surjective circle maps for which $$C^\ast_r(R_\phi^+)$$ is simple. A transitive circle map is always conjugate to a uniformly piecewise linear circle map. We offer a constructive approach to this fact, which also implies a uniqueness result.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9777854084968567, "perplexity": 555.5319698016714}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423769.10/warc/CC-MAIN-20170721102310-20170721122310-00265.warc.gz"}
https://brilliant.org/discussions/thread/dark-energy-the-mistery-of-universe/
# Dark Energy - The Mystery of the Universe

I'm continuing my set about the Universe.

In cosmology, dark energy is a hypothetical form of energy that would be distributed throughout space and tends to accelerate the expansion of the Universe. The main feature of dark energy is that it has a strong negative pressure. According to the theory of relativity, the effect of this negative pressure would be qualitatively similar to a force acting on large scales in opposition to gravity. This hypothetical effect is invoked by various current theories that attempt to explain the observations pointing to a universe in accelerated expansion. The nature of dark energy is one of the biggest current challenges in physics, cosmology and philosophy. There are now many different phenomenological models, but the observational data are still far from selecting one over the others. This is because the choice of a dark energy model depends on a good knowledge of the temporal variation of the expansion rate of the universe, which requires observing the properties of objects at very large distances (observations and distance measurements at high redshifts). The main proposed forms of dark energy are the cosmological constant (which can be interpreted either as a geometric modification of the field equations of general relativity, or as the effect of a vacuum energy that fills the universe homogeneously) and quintessence (usually modeled as a scalar field whose energy density can vary in time and space). Another proposal relatively popular among researchers, quartessence, aims to unify the concepts of dark energy and dark matter by postulating the existence of a form of energy known as the Chaplygin gas, which would be responsible for the effects of both dark components.

• The expanding universe

In $$1929$$, American astronomer Edwin Hubble determined that the universe is expanding. Since then, scientists have sought to determine just how fast. It seemed obvious that gravity, the force which draws everything together, would put the brakes on the spreading cosmos, so the question many asked was: just how much was the expansion slowing? The general expectation was that the cosmic expansion would gradually decelerate, since the galaxies exert a gravitational force on each other. There was therefore a huge surprise in 1998 when two teams of astronomers, working independently, announced that the expansion was actually getting faster. Both teams drew on the explosions of supernovae of type Ia (see Figure 1). These explosions are so bright that they can be seen at a distance of billions of light-years. The light from an object that is, for example, five billion light-years away takes five billion years to reach us; in other words, we observe the universe as it was five billion years ago. Supernovae were found to be farther than expected for a universe moved only by inertia - which indicated a signal of acceleration. The current best explanation for the unexpected acceleration of the Universe is dark energy, a form of energy whose density is almost exactly the same everywhere and always. Its persistence would provide a constant repulsive force to the universe, thus accelerating its expansion. Calculating the energy needed to overcome gravity, scientists determined that dark energy makes up roughly $$73$$ percent of the universe.
Dark matter makes up another $$23$$ percent, leaving the "normal" matter that we are familiar with to make up less than $$4$$ percent of the cosmos around us.

• Quintessence

Knowing how dark energy affects the spreading universe only tells scientists so much. The properties of the unknown quantity are still up for grabs. Recent observations have indicated that dark energy has behaved constantly over the universe's history, which provides some insight into the unseen material. One possible solution for dark energy is that the universe is filled with a changing energy field, known as "quintessence." Another is that scientists do not correctly understand how gravity works. The leading theory, however, considers dark energy a property of space. Albert Einstein was the first to understand that space was not simply empty. He also understood that more space could continue to come into existence. In his theory of general relativity, Einstein included a cosmological constant to account for the stationary universe scientists thought existed. After Hubble announced the expanding universe, Einstein called his constant his "biggest blunder." But Einstein's blunder may be the best fit for dark energy. Predicting that empty space can have its own energy, the constant indicates that as more space emerges, more energy is added to the universe, increasing its expansion. Although the cosmological constant matches up with observations, scientists still aren't certain just why it fits.

• Dark energy versus dark matter

Dark energy makes up most of the universe, but dark matter also covers a sizeable chunk. Comprising nearly $$27$$ percent of the universe, and $$80$$ percent of the matter, dark matter also plays a dominant role. Like dark energy, dark matter continues to confound scientists. While dark energy is a force that accounts for the expanding universe, dark matter explains how groups of objects function together. In the $$1950s$$, scientists studying other galaxies expected gravity to cause the centers to rotate faster than the outer edges, based on the distribution of the objects inside of them. To their surprise, both regions rotated at the same rate, indicating that the spiral galaxies contained significantly more mass than they appeared to. Studies of the gas inside elliptical galaxies and of clusters of galaxies revealed that this hidden matter is spread throughout the universe. Scientists have a number of potential candidates for dark matter, ranging from incredibly dim objects to exotic particles. But whatever the source of both dark matter and dark energy, it is clear that the universe is affected by things that scientists can't conventionally observe.

• Vacuum energy?

The most accepted explanation of the nature of dark energy is that it is the energy of the vacuum, a perfectly uniform energy present in empty space anywhere in the Universe. The authorship of this idea goes back to Einstein, who introduced the "cosmological constant" into his theory of general relativity in $$1917$$. At the time, astronomers thought the universe was neither expanding nor contracting, and he used the cosmological constant to compensate for the gravitational attraction of matter. When Edwin Hubble discovered the cosmic expansion in $$1929$$, Einstein realized that the cosmological constant was not needed and discarded the concept, which he would later call (according to physicist George Gamow) his "biggest scientific blunder".
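To put the key point in equations (a standard textbook addition, not part of the original note): for a cosmic fluid with pressure $$p = w\rho c^2$$, the acceleration equation of general-relativistic cosmology reads

$$\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right) = -\frac{4\pi G}{3}\,\rho\,(1+3w),$$

so the expansion accelerates ($$\ddot{a} > 0$$) whenever $$w < -1/3$$; the cosmological constant corresponds to $$w = -1$$. Energy conservation likewise gives the scaling $$\rho \propto a^{-3(1+w)}$$: for $$w = -1$$ the dark energy density stays constant as space expands, while for $$w < -1$$ it actually grows, which is the origin of the "big rip" discussed below.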
The energy of the vacuum is not a gas, a fluid or any other substance; it is rather a property of space-time itself. It is simply the minimum amount of energy present in any region of space, the energy that remains when we remove all kinds of "stuff" from that region. In general relativity, this amount can be positive or negative, without any particular reason to be zero. The microscopic world obeys the laws of quantum mechanics, which proclaim that our understanding of the state of any system involves an inevitable uncertainty (the famous uncertainty principle of Werner Heisenberg). Energy fields will therefore fluctuate even in empty space, since we cannot determine that empty space has exactly zero energy. In these "vacuum fluctuations", virtual particles appear and disappear in a split second. These particles contribute to the vacuum energy, but are not its sole cause, since general relativity allows us to assume an arbitrary vacuum energy without taking these fluctuations into account. Einstein certainly was not thinking about virtual particles when he conceived the cosmological constant. If the observed dark energy is actually vacuum energy, then it is very dilute: the amount of it within the volume of the Earth is not greater than the average annual electricity consumption of Brazil. In fact, the observed dark energy is over one hundred and twenty orders of magnitude below the most naive estimates of its value.

• The fate of the Universe

We have known since $$1998$$ that the expansion of the universe is accelerating. But will this acceleration continue forever? If so, what will be the fate of clusters of galaxies, galaxies and stars? The answer to these questions depends on an intricate balance between the geometry of the Universe and the properties of this subtle form of energy, dubbed "dark energy", which pervades all space.

• The Role of Geometry

In a cosmos without dark energy, general relativity states that the ultimate fate of the Universe is completely and unequivocally determined by its geometry. A universe with positive curvature, like the surface of a sphere, eventually implodes (such a universe is said to be closed). A universe that is geometrically flat (Euclidean) or has negative curvature, like the surface of a saddle, expands indefinitely (this is an open universe). The existence of dark energy considerably complicates the situation. If it is in fact the energy associated with the vacuum - a possibility that is consistent with the latest observations of supernovae, of clusters of galaxies and of the cosmic background radiation - then its energy density remains constant, whereas the energy densities of both matter and radiation continuously decrease as the universe expands. This means that dark energy begins to prevail when the universe becomes sufficiently large. For an equation-of-state parameter $$w = -1$$, which characterizes the vacuum, dark energy comes to dominate regardless of the sign of the geometric curvature. Since dark energy produces a force repulsive to gravity, the cosmic expansion starts to accelerate, as observed in our Universe today. If the expansion of our universe is governed by vacuum energy, it will continue to accelerate, eventually resulting in extreme redshifts: all galaxies beyond the roughly two dozen forming our Local Group will end up so far away that we will no longer be able to detect them. In other words, astronomers living in the Milky Way in $$100$$ billion years will not be able to observe any galaxies outside our Local Group.
Indeed, such astronomers (assuming there are any by then) will not even be able to observe the cosmic background radiation, because it too will be redshifted away. Such cosmic isolation and eventual death in a "big chill" is not the worst of all possible fates of the universe.

• The "big rip" and other possible destinies.

• Big Crunch: the Universe will collapse if dark matter is the dominant component.
• Big Rip: it will be ripped to pieces if dark energy wins over dark matter and keeps growing.
• Big Chill: if dark energy wins over dark matter but remains constant, our universe will end up in a Big Chill.

If dark energy is not the energy of the vacuum, but is instead associated with some sort of quintessence field characterized by an equation-of-state parameter w less than (more negative than) -1, then the energy density of dark energy will grow over time. In this case, when the density of dark energy exceeds that of clusters of galaxies, the clusters will crumble. The same fate awaits the stars, planets, people, even atoms and atomic nuclei. No structure will survive the increasing density of dark energy. The Universe will end in what has been called the big rip ("The Big Rip"). Less extreme possibilities are related to dark energy in the form of a scalar field with w greater (less negative) than $$-1$$. Generally we expect a scalar field to decrease its potential energy, just as a marble rolling down the wall of a bowl decreases its energy, ultimately coming to rest when it reaches its minimum of potential energy. In this case, the fate of the universe depends on the value of that minimum potential energy. In a universe like ours, where matter alone is not enough to make it geometrically flat, any positive value would cause an accelerated expansion, and the same redshift isolation caused by the vacuum energy would happen. A minimum potential energy exactly equal to zero would ensure a new reign of matter at some point in the future, and the universe would start to decelerate. In this case, the fate is determined by the geometry of the universe, as in the case of a universe without dark energy. Finally, if the minimum potential energy is negative, the implosion of the universe ultimately occurs, regardless of its geometry. The complications brought about by the presence of dark energy are such that it is essentially impossible to determine the fate of the universe from observations alone. Suppose, for example, that in our universe the dark energy density were only one trillionth the density of matter - many orders of magnitude below any detection. Still, after the universe had expanded by another factor of ten thousand, dark energy would become the dominant form of energy - the one that would seal its fate. Therefore, we will not be able to know the fate of our universe for sure until we can complement the observations with a reliable theory that allows us to understand the nature and the specific properties of dark energy. There is another point worth noting. The composition of the universe, with its $$4\%$$ normal (baryonic) matter, $$23\%$$ dark matter and $$73\%$$ dark energy, seems to be quite arbitrary. Thus, there are physicists who think we are completely mistaken. Perhaps dark energy does not really exist; maybe our theories of gravity and general relativity fail at cosmological scales. Some alternative theories of gravity have been suggested along that line. Most of them involve extra dimensions beyond our three dimensions of space and one of time. Until now, no experimental or observational cracks have appeared in general relativity.
But past experience teaches us always to expect the unexpected. So what is this dark energy? We know that its density is nearly constant in time and space, but we do not know what it actually is, and understanding the true nature of this energy may be the biggest challenge of physics today.

All the best!! :D

Note by Gabriel Merces, 2 years, 6 months ago

Comments:

• Do you have any websites to research further? · 2 years, 6 months ago
• Noted · 1 year, 4 months ago
• @Gabriel Merces Thanks a lot. · 2 years, 6 months ago
• Nice note. You spelt mystery wrong in the title, but otherwise it was perfect. · 2 years, 6 months ago
• OMG... I hadn't seen it! :/ · 2 years, 6 months ago
• :D · 2 years, 6 months ago
• lol xD I asked to correct the mistake! :D · 2 years, 6 months ago
• You spelt Mystery as Mistery. :D · 2 years, 6 months ago
• Great work as always! :) Where do you get all this information about the universe, stars, black holes etc.? Could you please tell me? You are the best at writing notes! Thanks ☺ · 2 years, 6 months ago
• You can get all types of information from the book A Brief History of Time by Stephen Hawking, or even Wikipedia · 1 year, 8 months ago
• Keep seeking knowledge and you'll get it; everything is a search! · 2 years, 6 months ago
• Thanks ☺ · 2 years, 6 months ago
• Nice note. Could you please tell the source of the information? · 1 year, 11 months ago
• I suggest you also check out the theory of warp drives and wormholes... it's also very interesting :D · 2 years, 6 months ago
• so many grammatical errors... · 2 years, 6 months ago
• Come on, are you here to check the grammar?!! Hope not · 2 years ago
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8647796511650085, "perplexity": 518.0258306938074}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280308.24/warc/CC-MAIN-20170116095120-00371-ip-10-171-10-70.ec2.internal.warc.gz"}
https://malegislature.gov/Laws/SessionLaws/Acts/2007/Chapter221
# Session Law

## Chapter 221

AN ACT RELATIVE TO THE JACOB SEARS MEMORIAL LIBRARY.

Be it enacted by the Senate and House of Representatives in General Court assembled, and by the authority of the same as follows:

SECTION 1. The first paragraph of section 2 of chapter 254 of the acts of 1908, as appearing in section 1 of chapter 119 of the acts of 2004, is hereby amended by striking out the first sentence and inserting in place thereof the following sentence:- The 3 trustees, all of whom shall be residents of what is commonly known as East Dennis, in the town of Dennis, shall be elected by a vote of the residents of Quivet Neck.

SECTION 2. Section 4 of said chapter 254 is hereby amended by striking out the first sentence and inserting in place thereof the following 2 sentences:- The inhabitants of Quivet Neck above the age of 18, who have resided in Quivet Neck for a period of 1 year before the time of the meetings described in this section, may elect by ballot annually a committee of 5 inhabitants of East Dennis, who shall not be trustees or their successors. The duties of the committee shall be to advise the trustees as to the administration of the said trust.

SECTION 3. Said chapter 254 is hereby further amended by inserting after section 7 the following section:- Section 7A. The area of East Dennis shall be defined on an annual basis to comply with the district definition of the Dennis chamber of commerce including, but not limited to, state highway routes 134 and 6A within the defined area.

SECTION 4. This act shall take effect upon its passage.

Approved December 28, 2007
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8343839049339294, "perplexity": 1856.2675499374025}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207930866.66/warc/CC-MAIN-20150521113210-00196-ip-10-180-206-219.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/completely-stable-nucleus-question.33895/
# Completely stable nucleus question

1. Jul 5, 2004

### Sigma Rho

"Completely stable nucleus" question

The nucleus of a helium isotope contains 2 protons. I have just worked out the magnitudes of the electrostatic force and the gravitational attraction between the 2 protons. The question now reads: "The nucleus is completely stable. Considering the magnitudes of the forces [from the previous questions], what conclusions can you draw about the forces that are holding the nucleus together?"

My initial thought on reading the question, before doing the math, was that the forces would turn out to be equal, otherwise the nucleus would fall apart. I then realised that the electrostatic force is always much more powerful than the gravitational one, which was the case when I worked out the magnitudes. So, that leaves me wondering...

1. Why isn't the nucleus ripped apart, as the forces pulling it apart are many times stronger than the ones holding it together?
2. What exactly does "completely stable" mean?
3. What conclusions can be drawn from the magnitudes of the forces?
4. The question also mentions that there is a single neutron in the nucleus - does this have anything to do with it?

2. Jul 5, 2004

2. It means that the atom won't decay into individual protons and electrons and neutrons.
1/3. There clearly must be another force!
4. Yup.

3. Jul 5, 2004

### turin

Shouldn't there be two neutrons? I didn't think that only one could provide a stable nuclear environment.

4. Jul 6, 2004

### Sigma Rho

The question says just one! Thanks for the answers guys, most helpful.
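For the record, here is a quick numerical comparison of the two forces discussed in the thread. The 1 fm separation is an assumed, typical nuclear distance, used only for illustration, since the thread does not state one.

```python
import math

# Rough comparison of the two forces between two protons.
K = 8.9875e9        # Coulomb constant, N m^2 / C^2
G = 6.674e-11       # gravitational constant, N m^2 / kg^2
E = 1.602e-19       # proton charge, C
M = 1.6726e-27      # proton mass, kg
r = 1.0e-15         # ASSUMED separation, m (about 1 femtometre)

f_coulomb = K * E**2 / r**2   # repulsive
f_gravity = G * M**2 / r**2   # attractive

print(f"Coulomb repulsion:        {f_coulomb:.3e} N")
print(f"Gravitational attraction: {f_gravity:.3e} N")
print(f"Ratio:                    {f_coulomb / f_gravity:.3e}")
# The ratio is ~1e36, so gravity cannot possibly balance the repulsion;
# the binding must come from another (the strong nuclear) force.
```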
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8199617266654968, "perplexity": 882.6153866066187}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174159.38/warc/CC-MAIN-20170219104614-00354-ip-10-171-10-108.ec2.internal.warc.gz"}
https://economics.stackexchange.com/questions/11791/doubt-regarding-walrasian-equilibrium-with-complements-for-both-agents/15149
# Doubt regarding Walrasian equilibrium with complements for both agents

There are two goods $1,2$ and two agents $1,2$. Both have the utility function $u_{i}=\min({x_{1i},x_{2i}})$ for agent $i$. The endowments are $(1,3)$ and $(3,1)$ for agents $1$ and $2$ respectively. I have to solve for the Walrasian equilibrium. I found the demand for $x_{1}$ for both agents, and at this point I tried to clear the market for $x_{1}$ (setting $p_{1}=1$ and $p_{2}=p$, I get):

$\frac{1+3p}{1+p}+\frac{3+p}{1+p} = 4$

At this point I would normally solve for the equilibrium price $p$. But in this case it is an identity: it holds for every $p$. What does this mean? What is the equilibrium price? Sorry if I am not seeing the obvious. I am new to this and don't know much.

Since demand equals supply for every price $p$, this simply means that every $p$ is an equilibrium price. However, the equilibrium allocation that $p$ supports varies with $p$. To be precise, price $p$ supports the allocation in which 1 consumes $\left(\frac{1+3p}{1+p}, \frac{1+3p}{1+p}\right)$ and 2 consumes $\left(\frac{3+p}{1+p}, \frac{3+p}{1+p}\right)$.
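A quick numerical check of this answer (my addition, not part of the original post): with Leontief preferences each agent demands equal amounts of the two goods, $x = m_i/(p_1+p_2)$, so total demand for each good equals the total endowment 4 at every price.

```python
from fractions import Fraction

def demands(p):
    """Per-good demand of each agent at prices p1 = 1, p2 = p."""
    p = Fraction(p)
    m1 = 1 + 3 * p          # value of endowment (1, 3)
    m2 = 3 + 1 * p          # value of endowment (3, 1)
    return m1 / (1 + p), m2 / (1 + p)

for p in (Fraction(1, 3), 1, 2, 7):
    d1, d2 = demands(p)
    # Total demand for good 1 always equals the total endowment 1 + 3 = 4,
    # confirming that every positive p clears the market.
    assert d1 + d2 == 4
    print(f"p = {p}: agent 1 gets {d1}, agent 2 gets {d2}, total = {d1 + d2}")
```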
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9196404814720154, "perplexity": 320.089957404283}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144150.61/warc/CC-MAIN-20200219122958-20200219152958-00466.warc.gz"}
https://icsecbsemath.com/2018/12/03/class-9-circles-exercise-12b/
Question 1: In an equilateral triangle, prove that the centroid and the center of the circum-circle (circum-center) coincide.

Given: $\triangle ABC$ is an equilateral triangle. $D, E$ and $F$ are mid points of $BC, AC$ and $AB$ respectively.

To Prove: Centroid and circum-center are coincident.

Construction: Draw medians $AD, BE$ and $CF$

Proof: Consider $\triangle BEC$ and $\triangle BFC$

$\angle FBC = \angle ECB = 60^o$ (given)

$BC$ is common

Since $F$ and $E$ are mid points of $AB$ and $AC$ respectively, and $AB = AC$ because $\triangle ABC$ is equilateral, we have $BF = CE$

Therefore $\triangle BEC \cong \triangle BFC$ (By S.A.S criterion)

$\Rightarrow BE = CF$ … … … … … i)

Similarly, $\triangle CAF \cong \triangle CAD$

$\Rightarrow CF = AD$ … … … … … ii)

From i) and ii) we get $BE = CF = AD$

$\Rightarrow$ $\frac{2}{3}$ $BE =$ $\frac{2}{3}$ $CF =$ $\frac{2}{3}$ $AD$

$\Rightarrow BO = CO = AO$

$\Rightarrow O$ is equidistant from the vertices

$\Rightarrow O$ is the circum-center of $\triangle ABC$

Question 2: Two circles whose centers are $O$ and $O'$ intersect at $P$. Through $P$, a line $l \parallel OO'$ intersecting the circles at $C$ and $D$ is drawn. Prove that $CD = 2 OO'$.

Construction: Draw $OA \perp l$ and $O'B \perp l$

Proof: $OA \perp l \Rightarrow OA \perp CP$

$\Rightarrow CA = AP$ $\Rightarrow CP = 2 AP$ … … … … … i) (the perpendicular drawn from the center of a circle bisects the chord)

Similarly, $O'B \perp l \Rightarrow O'B \perp PD$

$\Rightarrow PB = BD$ $\Rightarrow PD = 2 PB$ … … … … … ii)

Therefore $CD = CP + PD$

$\Rightarrow CD = 2 AP + 2 PB$

$CD = 2 (AP + PB) = 2 AB = 2 OO'$ $\\$

Question 3: Prove that the line joining the mid points of two parallel chords of a circle passes through the center.

Construction: Join $OP$ and $OQ$. Draw $OX \parallel AB$ and $CD$

Proof: Since $P$ is the mid point of $AB$

$\Rightarrow OP \perp AB$ (Theorem 4) $\Rightarrow \angle OPB = 90^o$

Since $AB \parallel OX$

Therefore $\angle OPB + \angle POX = 180^o$ $\Rightarrow \angle POX = 180^o - 90^o = 90^o$

Similarly, $OQ \perp CD$ $\Rightarrow \angle OQD = 90^o$ (Theorem 4)

Since $CD \parallel OX$

Therefore $\angle XOQ + \angle OQD = 180^o$ $\Rightarrow \angle XOQ = 180^o - 90^o = 90^o$

Hence $\angle POX + \angle XOQ = 90^o + 90^o = 180^o$

Hence $PQ$ is a straight line. $\\$

Question 4: In the adjoining figure $\widehat{AB} \cong \widehat{CD}$, prove that $\angle A = \angle B$

$\widehat{AB} \cong \widehat{CD}$ $\Rightarrow \angle AOB = \angle COD$, since congruent arcs of a circle subtend equal angles at the center

Therefore $\angle AOB + \angle BOC = \angle COD + \angle BOC$

$\Rightarrow \angle AOC = \angle BOD$

Consider $\triangle AOC$ and $\triangle BOD$

$AO = OB$ (radius)

$OC = OD$ (radius)

$\angle AOC = \angle BOD$

Therefore $\triangle AOC \cong \triangle BOD$ (By S.A.S criterion)

Hence $\angle A = \angle B$ $\\$

Question 5: If two chords of a circle are equally inclined to the diameter through their point of intersection, prove that the chords are equal.

Given: $\angle OAL = \angle OAM$

Construction: Draw $OL \perp AB$ and $OM \perp AC$

To Prove: $AB = AC$

Proof: Consider $\triangle AOL$ and $\triangle AOM$

$AO$ is common

$\angle OAL = \angle OAM$ (given)

$\angle OLA = \angle OMA = 90^o$

Therefore $\triangle AOL \cong \triangle AOM$ (By A.A.S criterion)

Hence $OL = OM$

Since equidistant chords are equal, $AB = AC$ $\\$

Question 6: In the adjoining figure, $O$ is the center of a circle and $PO$ bisects $\angle APD$. Prove that $AB = CD$.
To Prove: $AB = CD$

Construction: Draw $OE \perp AB$ and $OF \perp CD$

Proof: Consider $\triangle OFP$ and $\triangle OEP$

$OP$ is common

$\angle OFP = \angle OEP = 90^o$

$\angle OPF = \angle OPE$ ($OP$ bisects $\angle APD$ – given)

Therefore $\triangle OFP \cong \triangle OEP$ (By A.A.S criterion)

$\Rightarrow OE = OF$

$\Rightarrow AB$ and $CD$ are equidistant from the center

Hence $AB = CD$ (equidistant chords in a circle are equal) $\\$

Question 7: Two equal chords $AB$ and $CD$ of a circle with center $O$, when produced, meet at a point $E$ as shown in the adjoining diagram. Prove that $BE = DE$ and $AE = CE$.

Given: $AB = CD$

Construction: Draw $OL \perp AB$ and $OM \perp CD$

To Prove: $BE = DE$ and $AE = CE$

Proof: Consider $\triangle OLE$ and $\triangle OME$

$OE$ is common

$\angle OLE = \angle OME = 90^o$

Since $AB = CD$, they are equidistant from the center, therefore $OL = OM$

Hence $\triangle OLE \cong \triangle OME$ (By R.H.S criterion)

Therefore $LE = ME$ … … … … … i)

We know $AB = CD \Rightarrow \frac{1}{2} AB = \frac{1}{2} CD \Rightarrow BL = DM$ … … … … … ii)

Subtracting ii) from i) we get $LE - BL = ME - DM \Rightarrow BE = DE$.

Since $AB = CD$ and $BE = DE$, $AB + BE = CD + DE$ $\Rightarrow AE = CE$. Hence proved $\\$

Question 8: Prove that the line joining the mid points of two equal chords of a circle subtends equal angles with the chords.

Given: $AB = CD$; $L$ and $M$ are mid points of $AB$ and $CD$ respectively.

To Prove: $\angle ALM = \angle CML$ and $\angle BLM = \angle DML$

Construction: Draw $OL \perp AB$ and $OM \perp CD$

Proof: $OL = OM$ (equal chords in a circle are equidistant from the center)

Therefore $\angle OLM = \angle OML$

$\angle OLM + 90^o = \angle OML + 90^o$

$\Rightarrow \angle BLM = \angle DML$

Similarly, $90^o - \angle OLM = 90^o - \angle OML$

$\Rightarrow \angle ALM = \angle CML$ $\\$

Question 9: In the adjoining figure, $L$ and $M$ are mid points of two equal chords $AB$ and $CD$ of a circle with center $O$. Prove that

i) $\angle OLM = \angle OML$

ii) $\angle ALM = \angle CML$

Given: $AB = CD$, $OL \perp AB$ and $OM \perp CD$

Proof: Since equal chords are equidistant from the center, $OL = OM$

In $\triangle OLM$, $\angle OLM = \angle OML$ (angles opposite equal sides of a triangle are equal)

$90^o - \angle OLM = 90^o - \angle OML$

$\Rightarrow \angle ALM = \angle CML$ $\\$

Question 10: $PQ$ and $QR$ are chords of a circle equidistant from the center. Prove that the diameter passing through $Q$ bisects $\angle PQR$ and $\angle PSR$.

Given: $PQ$ and $QR$ are equidistant from $O$

To Prove: $\angle PQS = \angle RQS$ ($QS$ bisects $\angle PQR$) and $\angle PSQ = \angle RSQ$ ($QS$ bisects $\angle PSR$)

Construction: Join $PS$ and $RS$

Proof: Equidistant chords are equal $\Rightarrow PQ = QR$

Consider $\triangle PQS$ and $\triangle RQS$

$QS$ is common

$QP = QR$

$\angle QPS = \angle QRS = 90^o$ (angles subtended by the diameter $QS$)

$\Rightarrow \triangle PQS \cong \triangle RQS$ (By R.H.S criterion)

$\Rightarrow \angle PQS = \angle RQS$ and $\angle PSQ = \angle RSQ$. Hence proved. $\\$

Question 11: If two chords of a circle bisect each other, show that they must be diameters.

Construction: Join $AC, BD, AD$ and $BC$.
To Prove: $AB$ and $CD$ are diameters

Proof: Consider $\triangle AOC$ and $\triangle BOD$

$OC = OD$ (mid point of $CD$)

$OB = OA$ (mid point of $AB$)

$\angle AOC = \angle BOD$ (vertically opposite angles)

$\triangle AOC \cong \triangle BOD$ (By S.A.S criterion)

$\Rightarrow AC = BD$ $\Rightarrow \widehat{AC} \cong \widehat{BD}$ … … … … … i)

Now consider $\triangle AOD$ and $\triangle BOC$

$OA = OB$ (mid point of $AB$)

$OC = OD$ (mid point of $CD$)

$\angle AOD = \angle BOC$ (vertically opposite angles)

$\triangle AOD \cong \triangle BOC$

$\Rightarrow AD = BC$ $\Rightarrow \widehat{AD} \cong \widehat{BC}$ … … … … … ii)

Adding i) and ii): $\widehat{AC} + \widehat{AD} = \widehat{BD} + \widehat{BC}$

$\Rightarrow \widehat{CAD} \cong \widehat{CBD}$

$\Rightarrow CD$ divides the circle into two halves $\Rightarrow CD$ is a diameter

Similarly, $\widehat{AC} + \widehat{BC} = \widehat{BD} + \widehat{AD}$

$\Rightarrow \widehat{ACB} \cong \widehat{BDA}$

Therefore $AB$ is a diameter $\\$
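As a numerical sanity check on Question 2, one can verify $CD = 2\,OO'$ for an arbitrary pair of intersecting circles. The radii and centers below are invented for the test and are not part of the exercise.

```python
import math

# Assumed configuration: two intersecting circles with centres on the x-axis.
O  = (0.0, 0.0); r1 = 2.0      # first circle
O2 = (3.0, 0.0); r2 = 2.5      # second circle

# Intersection point P of the two circles (taking the upper one).
x = (r1**2 - r2**2 + O2[0]**2) / (2 * O2[0])
y = math.sqrt(r1**2 - x**2)

# The line l through P parallel to OO' is horizontal: y = const.
# Its second intersections with each circle are C and D.
cx = -math.sqrt(r1**2 - y**2)           # C on the first circle
dx = O2[0] + math.sqrt(r2**2 - y**2)    # D on the second circle

CD  = dx - cx
OO2 = O2[0] - O[0]
print(f"CD = {CD:.6f}, 2*OO' = {2 * OO2:.6f}")   # both print 6.000000
```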
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 234, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8991761803627014, "perplexity": 697.8256302660023}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578650225.76/warc/CC-MAIN-20190424154437-20190424180437-00255.warc.gz"}
https://www.physicsforums.com/threads/statiscal-physics.172044/
# Statistical physics

1. May 29, 2007

### Unskilled

I need help with a problem.

Problem: Recalling that the Fermi-Dirac distribution function applies to all fermions, including protons and neutrons, each of which has spin 1/2, consider a nucleus of 22Ne consisting of 10 protons and 12 neutrons. Protons are distinguishable from neutrons, so two of each particle (spin up, spin down) can be put into each energy state. Assuming that the radius of the 22Ne nucleus is 3.1*10^-15 m, estimate the Fermi energy and the average energy of the nucleons in 22Ne. Express your results in MeV. Do the results seem reasonable?

My solution: We are going to estimate the Fermi energy Ef. To do that I integrate n(E)dE from 0 to Ef:

INT[0,Ef](n(E)dE) = INT[0,Ef](g(E)dE)

Here I use g(E) = (1/h^3)*4*PI*(2*m)^(3/2)*V*E^(1/2). After integration I get:

N = 4*PI*(2*m)^(3/2)*V*Ef^(1/2)

I solve for Ef and get:

Ef = (N/V)^(2/3) * (1/((4*PI)^(2/3)*2*m)) * h^2

For protons I use:
m = mass = 1.67*10^-27 kg
N = number of protons in the atom = 10
V = volume of the atom = (4/3)*PI*r^3
h = Planck's constant = 6.63*10^-34 Js

Inserting this I get 22 MeV, which is wrong; the right answer seems to be 516 MeV. Where am I going wrong?
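For comparison, here is a short numerical sketch (mine, not from the thread) of the standard Fermi-gas formula E_F = (hbar^2/2m)(3 pi^2 n)^(2/3), which already includes the two spin states per level. With the numbers above it gives roughly 37 MeV, and an average energy (3/5)E_F of roughly 22 MeV, which, perhaps coincidentally, is the value obtained in the post; whether the textbook's 516 MeV is itself correct is a separate question.

```python
import math

# Fermi energy of the 10 protons in a 22Ne nucleus, using the standard
# Fermi-gas result E_F = (hbar^2 / 2m) * (3 pi^2 n)^(2/3), which already
# includes the factor of 2 for spin. (A sketch, not the thread's algebra.)
HBAR = 1.0546e-34          # J s
M_P  = 1.6726e-27          # proton mass, kg
MEV  = 1.602e-13           # J per MeV

r = 3.1e-15                # nuclear radius, m
V = 4.0 / 3.0 * math.pi * r**3
n = 10 / V                 # proton number density, m^-3

E_F = HBAR**2 / (2 * M_P) * (3 * math.pi**2 * n) ** (2.0 / 3.0)
print(f"E_F ~ {E_F / MEV:.1f} MeV")          # about 37 MeV with these numbers
# The average kinetic energy per particle of a 3D Fermi gas is (3/5) E_F.
print(f"<E> ~ {0.6 * E_F / MEV:.1f} MeV")    # about 22 MeV
```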
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9730687737464905, "perplexity": 3053.291984960593}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218191396.90/warc/CC-MAIN-20170322212951-00281-ip-10-233-31-227.ec2.internal.warc.gz"}
http://v8doc.sas.com/sashtml/ets/chap8/sect25.htm
The AUTOREG Procedure

## Testing

### Heteroscedasticity and Normality Tests

#### Portmanteau Q-Test

For nonlinear time series models, the portmanteau test statistic based on squared residuals is used to test for independence of the series (McLeod and Li 1983):

$Q = N(N+2)\sum_{i=1}^{q}\frac{r^2(i;\hat{\epsilon}^2)}{N-i}$

where $r(i;\hat{\epsilon}^2)$ is the lag-$i$ autocorrelation of the squared residuals $\hat{\epsilon}_t^2$ and $N$ is the number of observations. This Q statistic is used to test the nonlinear effects (for example, GARCH effects) present in the residuals. The GARCH(p,q) process can be considered as an ARMA(max(p,q),p) process. See the section "Predicting the Conditional Variance" later in this chapter. Therefore, the Q statistic calculated from the squared residuals can be used to identify the order of the GARCH process.

#### Lagrange Multiplier Test for ARCH Disturbances

Engle (1982) proposed a Lagrange multiplier test for ARCH disturbances. The test statistic is asymptotically equivalent to the test used by Breusch and Pagan (1979). Engle's Lagrange multiplier test for the qth order ARCH process is written

$LM(q) = \frac{N\,\mathbf{W}'\mathbf{Z}(\mathbf{Z}'\mathbf{Z})^{-1}\mathbf{Z}'\mathbf{W}}{\mathbf{W}'\mathbf{W}}$

where $\mathbf{W} = \left(\frac{\hat{\epsilon}_1^2}{\hat{\sigma}^2}-1,\ldots,\frac{\hat{\epsilon}_N^2}{\hat{\sigma}^2}-1\right)'$ and $\mathbf{Z}$ is the $N\times(q+1)$ matrix whose $t$th row is $(1,\hat{\epsilon}_{t-1}^2,\ldots,\hat{\epsilon}_{t-q}^2)$. The presample values ($\hat{\epsilon}_0^2,\ldots,\hat{\epsilon}_{1-q}^2$) have been set to 0. Note that the LM(q) tests may have different finite sample properties depending on the presample values, though they are asymptotically equivalent regardless of the presample values. The LM and Q statistics are computed from the OLS residuals assuming that disturbances are white noise. The Q and LM statistics have an approximate $\chi^2(q)$ distribution under the white-noise null hypothesis.

### Normality Test

Based on skewness and kurtosis, Bera and Jarque (1982) calculated the test statistic

$T_N = \frac{N}{6}\,b_1^2 + \frac{N}{24}\,(b_2-3)^2$

where $b_1$ is the skewness and $b_2$ the kurtosis of the OLS residuals. The $\chi^2(2)$-distribution gives an approximation to the normality test $T_N$. When the GARCH model is estimated, the normality test is obtained using the standardized residuals. The normality test can be used to detect misspecification of the family of ARCH models.

### Computation of the Chow Test

Consider the linear regression model

$\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \mathbf{u}$

where the parameter vector $\boldsymbol{\beta}$ contains $k$ elements. Split the observations for this model into two subsets at the break point specified by the CHOW= option, so that $\mathbf{y} = (\mathbf{y}_1', \mathbf{y}_2')'$, $\mathbf{X} = (\mathbf{X}_1', \mathbf{X}_2')'$, and $\mathbf{u} = (\mathbf{u}_1', \mathbf{u}_2')'$. Now consider the two linear regressions for the two subsets of the data modeled separately,

$\mathbf{y}_1 = \mathbf{X}_1\boldsymbol{\beta}_1 + \mathbf{u}_1, \qquad \mathbf{y}_2 = \mathbf{X}_2\boldsymbol{\beta}_2 + \mathbf{u}_2$

where the number of observations from the first set is $n_1$ and the number of observations from the second set is $n_2$. The Chow test statistic is used to test the null hypothesis $\boldsymbol{\beta}_1 = \boldsymbol{\beta}_2$ conditional on the same error variance $V(\mathbf{u}_1) = V(\mathbf{u}_2)$. The Chow test is computed using three sums of squared errors:

$F_{chow} = \frac{(\hat{\mathbf{u}}'\hat{\mathbf{u}} - \hat{\mathbf{u}}_1'\hat{\mathbf{u}}_1 - \hat{\mathbf{u}}_2'\hat{\mathbf{u}}_2)/k}{(\hat{\mathbf{u}}_1'\hat{\mathbf{u}}_1 + \hat{\mathbf{u}}_2'\hat{\mathbf{u}}_2)/(n_1+n_2-2k)}$

where $\hat{\mathbf{u}}$ is the regression residual vector from the full set model, $\hat{\mathbf{u}}_1$ is the regression residual vector from the first set model, and $\hat{\mathbf{u}}_2$ is the regression residual vector from the second set model. Under the null hypothesis, the Chow test statistic has an F-distribution with $k$ and $(n_1+n_2-2k)$ degrees of freedom, where $k$ is the number of elements in $\boldsymbol{\beta}$. Chow (1960) suggested another test statistic that tests the hypothesis that the mean of the prediction errors is 0. The predictive Chow test can also be used when $n_2 < k$. The PCHOW= option computes the predictive Chow test statistic

$F_{pchow} = \frac{(\hat{\mathbf{u}}'\hat{\mathbf{u}} - \hat{\mathbf{u}}_1'\hat{\mathbf{u}}_1)/n_2}{\hat{\mathbf{u}}_1'\hat{\mathbf{u}}_1/(n_1-k)}$

The predictive Chow test has an F-distribution with $n_2$ and $(n_1-k)$ degrees of freedom.

### Unit Root and Cointegration Testing

Consider the random walk process

$y_t = y_{t-1} + u_t$

where the disturbances might be serially correlated with possible heteroscedasticity. Phillips and Perron (1988) proposed the unit root test of the OLS regression model

$y_t = \rho y_{t-1} + u_t.$

Let $\hat{\sigma}^2$ be the variance estimate of the OLS estimator $\hat{\rho}$, where $\hat{u}_t$ is the OLS residual.
You can estimate the asymptotic variance using the truncation lag $l$:

$\hat{\lambda} = \sum_{j=0}^{l}\kappa_j\,\hat{\gamma}_j$

where $\kappa_j = 1 - \frac{j}{l+1}$ for $j>0$, $\kappa_0 = 1$, and $\hat{\gamma}_j = \frac{1}{N}\sum_{t=j+1}^{N}\hat{u}_t\hat{u}_{t-j}$.

Then the Phillips-Perron $Z(\hat{\rho})$ test (zero mean case) has the following limiting distribution:

$Z(\hat{\rho}) \Rightarrow \dfrac{\frac{1}{2}\,\{B(1)^2 - 1\}}{\int_0^1 B(x)^2\,dx}$

where $B(\cdot)$ is a standard Brownian motion. Note that the realization $Z(x)$ from the stochastic process $B(\cdot)$ is distributed as $N(0,x)$, and thus $B(1)^2 \sim \chi^2(1)$. Therefore, $P(B(1)^2 < 1) \approx 0.68$, which shows that the limiting distribution is skewed to the left.

Let $\hat{t}$ be the t-test statistic for $\hat{\rho}$. The Phillips-Perron $Z(\hat{t})$ test has the limiting distribution

$Z(\hat{t}) \Rightarrow \dfrac{\frac{1}{2}\,\{B(1)^2 - 1\}}{\left(\int_0^1 B(x)^2\,dx\right)^{1/2}}$

When you test the regression model with an intercept for the true random walk process (single mean case), the limiting distributions of the $Z(\hat{\rho})$ and $Z(\hat{t})$ statistics involve the demeaned Brownian motion $B(x) - \int_0^1 B(s)\,ds$; the random walk with drift process (trend case) is handled analogously with a detrended Brownian motion.

When several variables $z_t = (z_{1t},\ldots,z_{kt})'$ are cointegrated, there exists a $(k\times 1)$ cointegrating vector $\mathbf{c}$ such that $\mathbf{c}'z_t$ is stationary and $\mathbf{c}$ is a nonzero vector. The residual based cointegration test is based on the following regression model:

$y_t = \beta_1 + \mathbf{x}_t'\boldsymbol{\beta} + u_t$

where $y_t = z_{1t}$, $\mathbf{x}_t = (z_{2t},\ldots,z_{kt})'$, and $\boldsymbol{\beta} = (\beta_2,\ldots,\beta_k)'$. You can estimate the consistent cointegrating vector using OLS if all variables are difference stationary, that is, I(1). The Phillips-Ouliaris test is computed using the OLS residuals from the preceding regression model, and it performs the test for the null hypothesis of no cointegration. Since the AUTOREG procedure does not produce the p-value of the cointegration test, you need to refer to the tables by Phillips and Ouliaris (1990). Before you apply the cointegration test, you might perform the unit root test for each variable.
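To make the Chow construction concrete, here is a small NumPy sketch (not SAS code, and not from the SAS documentation) that computes the Chow F statistic exactly from the three residual sums of squares described above:

```python
import numpy as np

def chow_test(y, X, break_point):
    """Return the Chow F statistic for a structural break at `break_point`."""
    def rss(y, X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return resid @ resid

    k = X.shape[1]
    y1, X1 = y[:break_point], X[:break_point]
    y2, X2 = y[break_point:], X[break_point:]
    s_full = rss(y, X)                    # restricted model (one beta)
    s_split = rss(y1, X1) + rss(y2, X2)   # unrestricted (separate betas)
    n1, n2 = len(y1), len(y2)
    return ((s_full - s_split) / k) / (s_split / (n1 + n2 - 2 * k))

# Toy data with a deliberate break in intercept and slope at t = 60.
rng = np.random.default_rng(0)
t = np.arange(120.0)
y = np.where(t < 60, 1.0 + 0.5 * t, 10.0 + 0.8 * t) + rng.normal(0, 1, 120)
X = np.column_stack([np.ones_like(t), t])
print(f"Chow F = {chow_test(y, X, 60):.2f}")  # large value => reject stability
```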
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9699227213859558, "perplexity": 1637.664248817782}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010746376/warc/CC-MAIN-20140305091226-00042-ip-10-183-142-35.ec2.internal.warc.gz"}
https://mathoverflow.net/questions/291675/power-tower-made-of-2s-and-3s-too-high-too-soon
# Power tower made of $2$s and $3$s: too high, too soon?

A power tower of a number $x$ is typified by $$x^{x^{x^{x^{x^{x^{x^{x^{x^x}}}}}}}}.$$ Here, however, we take the liberty of referring to the set $T$ of "$\{2,3\}$-power towers"; i.e., numbers $$x_1^{x_2^{x_3^{ \cdots\cdots^{x_k}}}},$$ where each $x_h$ is $2$ or $3,$ and $k \geq 2.$ Let $T_2$ be the subset of $T$ consisting of towers rising from $x_1=2.$ Let $R$ be the sequence of ranks of towers in $T_2$ when all the towers in $T$ are jointly ranked. For example, $7 \in R$ means that the $7$th smallest element in $T$ is a power of $2$, not of $3$. (The term jointly ranked is borrowed from statistics: if the numbers in two or more sets are combined and arranged in nondecreasing order, they are said to be jointly ranked.) The first $15$ terms of $R$ are $$1, 2, 4, 7, 8, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29.$$ What are the next terms? Note that $T$ can be obtained recursively from $t_2 = \{2^2,2^3,3^2,3^3\}$ by defining $$t_n =2^{t_{n-1}} \cup 3^{t_{n-1}}$$ for $n \geq 3;$ then $T$ is the union of the sets $t_n$ for $n \geq 2.$ For a top-first version of the problem, change $x_1=2$ to $x_k=2,$ where $k$ is the height of the tower. Then the first $17$ terms are $$1,3,4,6,10,11,12,15,16,19,20,23,24,25,26,27,28,\ldots.$$ Here, too, the question is: what are the next terms? Added later: Thanks, Yaakov, you are right, so my question is: what are the positions of the numbers in $T_2$ in the sequence $(1,2,3,\ldots)$? I have the first $30$ positions (or ranks) and would like to see a method for finding more terms. It may help to see a list of the first $20$ towers ranked: $$4 = 2^2$$ $$8 = 2^3$$ $$9 = 3^2$$ $$16 = 2^{2^{2}}$$ $$27 = 3^3$$ $$81 = 3^{2^{2}}$$ $$256 = 2^{2^{3}}$$ $$512 = 2^{3^{2}}$$ $$6561 = 3^{2^{3}}$$ $$19683 = 3^{3^{2}}$$ Continuing with tuple notation instead of tower notation: $(2,2,2,2), (3,2,2,2), (2,3,3), (3,3,3), (2,3,2,2), (3,3,2,2), (2,2,2,3), (3,2,2,3), (2,2,3,2), (3,2,3,2), (2,3,2,3).$ My method, so far, has been by computer sort, which reaches overflow pretty quickly. Surely there must be a more insightful method. A related question: what is the position (or rank) of $(2,2,2,2,2,2)?$ • What does jointly ranked mean? – David Handelman Jan 29 '18 at 14:24 • Jointly ranked means arranged in increasing order: $t_2 = \{4,8,16,256,\ldots\}$ and $t_3 = \{9,27,81,\ldots\},$ so that the joint ranking is $(4,8,9,16,27,81,256,\ldots).$ – Clark Kimberling Jan 29 '18 at 15:12 • $8=2^2$? $9=2^3$?? – Gerry Myerson Jan 29 '18 at 22:07 • I find a certain rough affinity of this question with my question on math.SE about how to order the numbers in the googol-stack-bang-plex hierarchy: math.stackexchange.com/q/72646/413 – Joel David Hamkins Jan 29 '18 at 22:40 • $512=2^{3^3}$? – Gerry Myerson Jan 30 '18 at 1:30 Let $s_1=2^2$, $s_2=2^3$, $s_3=3^2$ and so on. For $i\ge 5$, it holds that $s_{i+1}\ge 2s_i$. This can be proved by induction. Then $s_{2i+3}=2^{s_i}$ and $s_{2i+4}=3^{s_i}$. In particular, all the remaining elements of $R$ are precisely the odd numbers larger than the ones shown, and the solution of the top-first version of the problem consists of sequences of consecutive numbers doubling in length. The rank of $(2,2,2,2,2,2)$ can be found out simply by enumerating the sequences up to it.
• This argument can be extended to completely settle the problem: if two towers have the same length, their relative ranking is determined only by the top $3$ terms; if the towers have different length, the longer one has higher ranking unless the lengths differ by $1$ and the top of the longer tower is $2^{2^2}$, while the top of the shorter is $3^3$. – Yaakov Baruch Feb 3 '18 at 17:18
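Since the question mentions that a naive computer sort overflows quickly, here is a small sketch (mine, not from the thread) that ranks all towers of heights 2 through 5 by comparing $\ln\ln$ of their values instead of the values themselves. For these heights the exponent tower above the second level has height at most 3 (value at most $3^{3^3} = 7625597484987$), so the floating-point comparisons are reliable; pushing past height 5, e.g. to rank $(2,2,2,2,2,2)$, would require iterating the logarithm once more, and only the lower portion of this ranking is global, since towers of height 6 and more eventually interleave.

```python
import math
from itertools import product

def tower_val(t):
    """Exact integer value of a short tower, evaluated top-down."""
    v = 1
    for x in reversed(t):
        v = x ** v
    return v

def loglog(t):
    """ln(ln(x1^(x2^(...^xk)))), computed without overflow for heights <= 5."""
    rest = t[1:]
    if len(rest) == 1:                      # height-2 tower x1^x2
        ln_exponent = math.log(rest[0])
    else:                                   # ln of the exponent tower x2^(...)
        ln_exponent = tower_val(rest[1:]) * math.log(rest[0])
    return ln_exponent + math.log(math.log(t[0]))

towers = [t for k in range(2, 6) for t in product((2, 3), repeat=k)]
towers.sort(key=loglog)

ranks = [i + 1 for i, t in enumerate(towers) if t[0] == 2]
print(ranks[:15])   # 1, 2, 4, 7, 8, 11, 13, 15, ... as in the question
```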
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8313837647438049, "perplexity": 251.89213046002487}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195527204.71/warc/CC-MAIN-20190721205413-20190721231413-00228.warc.gz"}
https://www.studyadda.com/sample-papers/neet-sample-test-paper-39_q69/228/282991
Which of the following statements is correct for the spontaneous adsorption of a gas?

A) $\Delta S$ is negative and therefore $\Delta H$ should be highly positive.
B) $\Delta S$ is negative and therefore $\Delta H$ should be highly negative.
C) $\Delta S$ is positive and therefore $\Delta H$ should be negative.
D) $\Delta S$ is positive and therefore $\Delta H$ should also be highly positive.

For adsorption $\Delta S<0$, and for a spontaneous change $\Delta G$ must be negative. Hence $\Delta H$ should be highly negative, which is clear from the equation

$\Delta G=\Delta H-T\Delta S = -|\Delta H| - T(-|\Delta S|) = -|\Delta H| + T|\Delta S|$

So if $\Delta H$ is highly negative, $\Delta G$ will also be negative.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8850834369659424, "perplexity": 3512.3430261249628}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657163613.94/warc/CC-MAIN-20200715070409-20200715100409-00183.warc.gz"}
https://www.math.umanitoba.ca/seminars/2018/2/15/landis-wong/
# Landis Wong ## A Proof of the Lebesgue Number Lemma and Applications Date Thursday, February 15, 2018 In this talk I will discuss the Lebesgue Number Lemma and give a proof of the result. I will then discuss some applications of the lemma - including its use in proving the famous Seifert-van Kampen theorem in topology.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.945919930934906, "perplexity": 348.25513316032936}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376824525.29/warc/CC-MAIN-20181213054204-20181213075704-00149.warc.gz"}
https://studysoup.com/tsg/11531/calculus-early-transcendentals-1-edition-chapter-4-1-problem-49e
# Absolute maxima and minima | Ch 4.1 - 49E

Problem 49E

Absolute maxima and minima

a. Find the critical points of f on the given interval.
b. Determine the absolute extreme values of f on the given interval.
c. Use a graphing utility to confirm your conclusions.

f(x) = 2^x sin x; [-2, 6]

Step-by-Step Solution:

Step 1

Critical point definition: Let f be a continuous function defined on an open interval containing a number 'c'. The number 'c' is a critical value (or critical number) if f'(c) = 0 or f'(c) is undefined. A critical point on the graph of f has the form (c, f(c)).

Step 2

Absolute extreme values definition: When an output value of a function is a maximum or a minimum over the entire domain of the function, the value is called the absolute maximum or the absolute minimum. Let f be a function with domain D and let c be a fixed constant in D. Then the output value f(c) is the

1. Absolute maximum value of f on D if and only if f(x) <= f(c) for all x in D.
2. Absolute minimum value of f on D if and only if f(c) <= f(x) for all x in D.

Step 3

a). The given function is f(x) = 2^x sin(x) on [-2, 6]. The function is a product of an exponential and a trigonometric function, and it is continuous for all x. Now we find the critical points of f on the given interval.

f(x) = 2^x sin(x); for the critical values we differentiate both sides with respect to x:

f'(x) = sin(x) d/dx(2^x) + 2^x d/dx(sin(x)), since d/dx(uv) = u d/dx(v) + v d/dx(u)
      = sin(x) 2^x ln(2) + 2^x cos(x), since d/dx(2^x) = 2^x ln(2) and d/dx(sin x) = cos x
      = 2^x (ln(2) sin(x) + cos(x))

Setting f'(c) = 0 gives 2^c (ln(2) sin(c) + cos(c)) = 0. Since 2^c > 0 for every real c, we need

ln(2) sin(c) + cos(c) = 0
ln(2) sin(c) = -cos(c)
tan(c) = -1/ln(2)

That is, c = arctan(-1/ln 2) + n*pi. Therefore c is approximately -0.96, 2.18 and 5.32, from the general solution, and all three critical values lie in the given interval.
From the step-2 , the absolute extreme values are; Therefore , at x = -2, the value of the function is ; 2 sin(2) f(-2) = 2 in(-2) = 4 , since sin(-x) = -sin(x) and 2 = 0.044899 = -0.0087248 At x = -0.96, the value of the function is ; 0.96 0.96 f(-0.96) = 2 in(-0.96) , since sin(-x) = -sin(x) and 2 =0.514, sin(-0.96) = 0.819 = (0.514)(0.819) = 0.420966 At x =2.18, the value of the function is ; 2.18 2.18 f(2.18) = 2 in(2.18) , since 2 =4.53153, sin(2.18) = 0.82 = (4.53153)(0.82) = 3.71585 At x =5.32, the value of the function is ; f(5.32) = 2 5.32 in(5.32) , since 25.32=39. 94657, sin(5.32) = -0.821 = (39.94657)(-0.821) = -32.79614 Therefore , f(-0.96 ) = 0.420966 , f(2.18) =3.71585 , and f(5.32) = -32.79614, f(-2) = -0.0087248 Hence , the largest value of f(x) is 3. 71585 ,this attains at x= 2.18 Therefore , the absolute maximum is f( 2.18) = 3.71585 Hence , the smallest value of f(x) is -32.79614,this attains at x = 5.32 Therefore , the absolute minimum is f( 5.32) = -32.79614,. c) . The graph of the related function f(x) = 2 sin(x) on [ -2,6]. Hence, from the above graph all the absolute extreme values are true. Step 2 of 3 Step 3 of 3 #### Related chapters Unlock Textbook Solution
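As a numerical cross-check of parts (a) and (b), here is a short sketch (our own addition, not part of the textbook solution; it assumes NumPy is available and the variable names are ours):

```python
import numpy as np

# f(x) = 2**x * sin(x) and its derivative f'(x) = 2**x (ln2 sin x + cos x)
f = lambda x: 2.0**x * np.sin(x)

# Critical points solve tan(x) = -1/ln 2; arctan gives one root, shift by pi
base = np.arctan(-1.0 / np.log(2.0))
crit = [base + k * np.pi for k in range(0, 3)]   # ~ -0.96, 2.18, 5.32 in [-2, 6]

candidates = np.array([-2.0] + crit + [6.0])     # endpoints + critical points
values = f(candidates)
print(np.round(candidates, 2), np.round(values, 3))
print("abs max at x =", round(candidates[values.argmax()], 2),
      "abs min at x =", round(candidates[values.argmin()], 2))
```

Running this confirms the maximum ≈ 3.716 at x ≈ 2.18 and the minimum ≈ -32.80 at x ≈ 5.32.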
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9487380981445312, "perplexity": 1670.6858270309585}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057524.58/warc/CC-MAIN-20210924110455-20210924140455-00356.warc.gz"}
http://www.researchgate.net/researcher/9197907_Hongguang_Bi
# Hongguang Bi

Government of the People's Republic of China, Peping, Beijing, China

## Publications (6) — 30.35 total impact points

##### Article: Lyα Leaks in the Absorption Spectra of High-Redshift Quasi-stellar Objects
Jiren Liu, Hongguang Bi, and Li-Zhi Fang
ABSTRACT: Spectra of high-redshift QSOs show deep Gunn-Peterson absorption on the blue sides of the Lyα emission lines. They can be decomposed into components called Lyα leaks, defined to be emissive regions complementary to otherwise zero-flux absorption gaps. Just like Lyα absorption forests at low redshifts, Lyα leaks are easy to find in observations and contain rich sets of statistical properties that can be used to study the early evolution of the intergalactic medium (IGM). Among all properties of a leak profile, we investigate its equivalent width in this paper, since it is weakly affected by instrumental resolution and noise. Using 10 Keck QSO spectra at z ≈ 6, we have measured the number density distribution function n(W, z), defined to be the number of leaks per equivalent width W and per redshift z, in the redshift range 5.4-6.0. These new observational statistics, in both differential and cumulative forms, fit well to hydrodynamic simulations with a uniform ionizing background in the ΛCDM cosmology. In this model, Lyα leaks are mainly due to low-density voids. It supports the earlier conclusion that the IGM at z ≈ 6 would still be in a highly ionized state with a neutral hydrogen fraction of order 10^{-4}. Measurements of n(W, z) at z > 6 would be effective probes of the reionization of the IGM.
The Astrophysical Journal 12/2008; 671(2):L89. DOI:10.1086/525279

##### Article: On the Normalization of the QSO Lyα Forest Power Spectrum
ABSTRACT: The calculation of the transmission power spectrum of QSO Lyα absorption requires two parameters for normalization: the continuum F_c and the mean transmission τ̄. Traditionally, the continuum is obtained by a polynomial fit truncated at a lower order, and the mean transmission is calculated over the entire wavelength range considered. The flux F is then normalized by F_c τ̄. However, the fluctuations in the transmitted flux are significantly correlated with the local background flux on scales for which the field is intermittent. As a consequence, normalizing the entire power spectrum by an overall mean transmission τ̄ overlooks the effect of the fluctuation-background correlation upon the powers. In this paper we develop a self-normalization algorithm for the transmission power spectrum based on a multiresolution analysis. This self-normalized power spectrum estimator needs neither a continuum fit nor a predetermined mean transmission. With simulated samples, we show that the self-normalization algorithm can recover the transmission power spectrum from the flux regardless of how the continuum varies with wavelength. We also show that the self-normalized power spectrum is properly normalized by the mean transmission. Moreover, this estimator is sensitive to the nonlinear behavior of the field; that is, it can distinguish between fields with or without the fluctuation-background correlation, which cannot be accomplished by a power spectrum normalized by an overall mean transmission. Applying this analysis to a real data set of the Q1700+642 Lyα forest, we demonstrate that the proposed estimator performs the normalization correctly and effectively reveals the correlation between the fluctuations and the background of the transmitted flux on small scales. The self-normalized power spectrum would therefore be useful for discriminating among models without the uncertainties caused by free (or fitted) parameters.
The Astrophysical Journal 12/2008; 561(1):94. DOI:10.1086/323216

##### Article: Is the Cosmic Ultraviolet Background Fluctuating at Redshift z ≈ 6?
ABSTRACT: We study the Gunn-Peterson effect of the photoionized intergalactic medium (IGM) in the redshift range 5 < z < 6.4 using semianalytic simulations based on the lognormal model. Assuming a rapidly evolving and spatially uniform ionizing background, the simulation can reproduce all the observed abnormal statistical features near redshift z ≈ 6. They include (1) a rapid increase of absorption depths, (2) large scatter in the optical depths, (3) long-tailed distributions of transmitted flux, and (4) long dark gaps in spectra. These abnormal features are mainly due to rare events, which correspond to the long-tailed probability distribution of the IGM density field; therefore they may not imply significant spatial fluctuations in the UV ionizing background at z ≈ 6.
The Astrophysical Journal 12/2008; 645(1):L1. DOI:10.1086/506149

##### Article: Lyα leaks and reionization
ABSTRACT: Lyα absorption spectra of QSOs at redshifts z ≃ 6 show complete Gunn-Peterson absorption troughs (dark gaps) separated by tiny leaks. The dark gaps come from regions of the intergalactic medium (IGM) where the density of neutral hydrogen is high enough to produce almost saturated absorption; where the transmitted leaks come from, however, is still unclear. Using semi-analytical simulations, we demonstrate that leaking can originate from the lowest-density voids in the IGM as well as from the ionized patches around ionizing sources. If leaks are produced in the lowest-density voids, the IGM must already be highly ionized and the ionizing background almost uniform; in contrast, if leaks come from ionized patches, the neutral fraction of the IGM should still be high and the ionizing background significantly inhomogeneous. Therefore, the origin of the leaking is crucial to determining the epoch of the inhomogeneous-to-uniform transition of the ionizing photon background. We show that the origin can be studied with the statistical features of leaks. Lyα leaks can be well defined and described by the equivalent width W and the full width at half-area W_H, both of which are only weakly contaminated by instrumental resolution and noise. It is found that the distributions of W and W_H of Lyα leaks are sensitive to the modelling of the ionizing background. We consider four representative models: a uniform ionizing background (model 0); the photoionization rate of neutral hydrogen Γ_HI and the density of the IGM either linearly correlated (model I) or anticorrelated (model II); and Γ_HI correlated with high-density peaks containing ionizing sources (model III). Although all of these models can be matched to the mean observed effective optical depth of the IGM at z ≃ 6, the distributions of W and W_H are very different from each other. Consequently, leak statistics provide an effective tool to probe the evolutionary history of reionization at z ≃ 5-6.5. Similar statistics will also be applicable to the reionization of He II at z ≃ 3.
Monthly Notices of the Royal Astronomical Society 01/2008; 383(4):1459-1468. DOI:10.1111/j.1365-2966.2007.12642.x

##### Article: Lyα Leaks in the Absorption Spectra of High Redshift QSOs
Jiren Liu, H. Bi, L. Fang
ABSTRACT: Spectra of high-redshift QSOs show deep Gunn-Peterson absorption on the blue sides of the Lyα emission lines. They can be decomposed into components called Lyα leaks, defined to be emissive regions complementary to otherwise zero-flux absorption gaps. Just like Lyα absorption forests at low redshifts, Lyα leaks are both easy to find in observations and rich in statistical properties that can be used to study the early evolution of the IGM. Among all properties of a leak profile, we investigate its equivalent width in this paper, since it is weakly affected by instrumental resolution and noise. Using 10 Keck QSO spectra at z ≈ 6, we have measured the number density distribution function n(W, z), defined to be the number of leaks per equivalent width W and per redshift z, in the redshift range 5.4-6.0. These new observational statistics, in both differential and cumulative forms, fit well to hydrodynamic numerical simulations with a uniform ionizing background in the ΛCDM cosmology. In this model, Lyα leaks are mainly due to low-density voids. It supports the earlier conclusion that the IGM at z ≈ 6 would still be in a highly ionized state with a neutral hydrogen fraction of order 10^{-4}. Measurements of n(W, z) at z > 6 would be effective probes of the reionization of the IGM.

##### Article: Hydrogen Clouds before Reionization: a Lognormal Model Approach
ABSTRACT: We study the baryonic gas clouds (the IGM) in the universe before reionization with the lognormal model, which has been shown in recent years to be dynamically legitimate for describing the evolution of fluctuations in quasilinear as well as nonlinear regimes. The probability distribution function of the mass field in the LN model is long-tailed and so plays an important role in rare events, such as the formation of the first generation of baryonic objects. We calculate density and velocity distributions of the IGM at very high spatial resolution, and simulate the distributions at a resolution of 0.15 kpc from z = 7 to 15 in the ΛCDM cosmological model. We performed statistics of the hydrogen clouds including column densities, clumping factors, sizes, masses, and spatial number densities. One of our goals is to identify which hydrogen clouds are going to collapse. By inspecting the mass density and velocity profiles of the clouds, we found that velocity outflow significantly postpones the collapse of less massive clouds, even though their masses are larger than the Jeans mass. Consequently, only massive (> 10^5 M_sun) clouds can form objects at higher redshift, and less massive (10^4-10^5 M_sun) collapsed objects form later. For example, although the mass fraction in clouds with sizes larger than the Jeans length is already larger than 1 per cent at z = 15, there is only a tiny fraction of mass (10^{-8}) in the clouds that have collapsed at that time. If all the ionizing photons, and the 10^{-2} metallicity observed at low redshift, are produced by the first 1% of the mass in collapsed baryonic clouds, the majority of those first-generation objects would not form until z = 10.
The Astrophysical Journal 09/2003; 598(1). DOI:10.1086/378793

#### Publication Stats

32 citations; 30.35 total impact points.

#### Institutions

• Government of the People's Republic of China, Peping, Beijing, China
• The University of Arizona, Department of Physics, Tucson, AZ, United States
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9384775161743164, "perplexity": 2473.495617693784}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928078.25/warc/CC-MAIN-20150521113208-00000-ip-10-180-206-219.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/bending-moments.509727/
# Bending Moments

1. Jun 26, 2011

### studentoftheg

I'm having trouble fully understanding bending moments. I get the calculations, the force times distance (lever arm), and how to calculate the bending stress (M*y/I). It's the orientation/direction that I am having trouble picturing in my head. I know this is pretty basic, but I just haven't read or seen something that makes it 'click' in my head. Say you've got a beam, and along the beam axially is the X axis, vertically the Y axis, and Z is the lateral direction. The beam is anchored at one end. If I apply a force to the other end, say Fz, how do I know what direction the moment is? Is it the right-hand rule? Can someone explain? Thanks

2. Jun 26, 2011

### Unrest

Are you talking about bending moments (within a beam) or applied/reaction moments (at supports, etc.)?

Orientation of reaction moment: To find the orientation of the reaction moment in your example, I imagine holding the beam where it's fixed and think how I would have to try to turn it to counteract the applied force.

Sign of reaction moment: If you have a feel for the orientation but don't know if it's +ve or -ve, use the right-hand rule as you suggested. In your example the reaction moment is about the Y axis, so put your thumb in the direction of +Y and your curled fingers show the direction of a positive moment.

Sign of bending moment: For bending moments it's less standardized. You can have "sagging is positive" or "hogging is positive". Sagging positive is common for mechanical engineering. When you're feeling positive you're smiling and your face looks like a sagging simply supported beam :P

Since you're bending in the X-Z plane it's more confusing. The "sagging is positive" convention means a positive bending moment is caused by a positive moment on the positive side of the point of interest, and a negative moment on the negative side. In your example the concavity will be in the direction of the +Z axis, so the bending moment about Y would be negative. However, then you can't use stress = My/I, but that's obvious, because the stress is independent of the Y-coordinate for bending about Y.

Last edited: Jun 26, 2011

3. Jun 26, 2011

### studentoftheg

Yeah, I am talking about the bending moment within the beam, as opposed to reactions at supports etc.

4. Jun 28, 2011

### studentoftheg

Anyone point me in the right direction here? Thanks

5. Jun 28, 2011

### Unrest

Maybe my last paragraph was more confusing than anything. You have to first think through the process in a few different cases before you can form a mental image that you trust. What in particular are you unclear on? The signs? Bending in the x-z plane instead of the usual x-y plane? The meaning of "direction" for a bending moment?

6. Jun 28, 2011

### studentoftheg

Thanks for replying. I know this is pretty basic stuff! Yeah, in particular it's the resulting direction of the bending moments within a beam. For example, in which direction would you need to apply a force to get a moment about Z?

7. Jun 28, 2011

### Unrest

I always visualize the deformed beam; then it's obvious how that relates to applied moments. A smiley face in the XY plane has a bending moment about Z. The direction of that bending moment vector depends on the sign convention; it could be +Z or -Z. Are you clear that in your example the bending moment would be about Y?

8. Jun 29, 2011

### studentoftheg

Thanks, no, that wasn't clear to me; that's what I'm having trouble processing and picturing in my head. So if it bends in the XY plane, then that is a moment about Z?
And so I take it that applying a moment about one axis results in bending in the plane of the other two axes? So if I applied a moment about Y, then it is going to bend in the XZ plane, resulting in a smiley face laterally.

9. Jun 29, 2011

### Unrest

Yes.
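A quick way to see the right-hand-rule result from the original question numerically (a sketch of ours, assuming NumPy; the numbers are arbitrary): the moment of a force F applied at position r, taken about the anchor, is r × F, so a +Z force at the free end of a beam along +X gives a moment about -Y.

```python
import numpy as np

# Moment (torque) of a force about the origin: M = r x F.
r = np.array([1.0, 0.0, 0.0])   # free end of the beam, 1 unit along +X
F = np.array([0.0, 0.0, 1.0])   # unit force in the +Z direction

M = np.cross(r, F)
print(M)  # [ 0. -1.  0.]  -> moment about -Y, by the right-hand rule
```

This matches the discussion above: the force Fz on a beam along X produces bending in the XZ plane, i.e. a moment about the Y axis.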
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8047270774841309, "perplexity": 1024.0337943600255}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687484.46/warc/CC-MAIN-20170920213425-20170920233425-00386.warc.gz"}
https://bookini.ru/interdisciplinary-applied-mathematics/13/
# Interdisciplinary Applied Mathematics

In Chapter 16 we discuss theory and numerical methodologies for simulating liquid flows at the atomistic and mesoscopic levels. The atomistic description is necessary for liquids contained in domains with dimensions of fewer than ten molecules. First, we present the Molecular Dynamics (MD) method, a deterministic approach suitable for liquids. We explain details of the algorithm and focus on the various potentials and thermostats that can be used. This selection is crucial for reliable simulations of liquids at the nanoscale.

In the next section we consider various approaches to coupling the atomistic with the mesoscopic and continuum levels. Such coupling is quite difficult, and no fully satisfactory coupling algorithms have been developed yet, although significant progress has been made. An alternative method is to embed an MD simulation in a continuum simulation, which we demonstrate in the context of electroosmotic flow in a nanochannel. In the last section we discuss a new method, developed in the late 1990s primarily in Europe: the dissipative particle dynamics (DPD) method. It has features of both LBM and MD algorithms and can be thought of as a coarse-grained version of MD.

In Chapter 17 we turn our attention to simulating full systems across heterogeneous domains, i.e., fluid, thermal, electrical, structural, chemical, etc. To this end, we introduce several reduced-order modeling techniques for analyzing microsystems. Specifically, techniques such as generalized Kirchhoff networks, black box models, and Galerkin methods are described in detail. In black box models, detailed results from simulations are used to construct simplified and more abstract models. Methods such as nonlinear static models and linear and nonlinear dynamic models are described under the framework of black box models. Finally, Galerkin methods, where the basic idea is to create a set of coupled ordinary differential equations, are described; a toy sketch of this idea follows below. The advantages and limitations of the various techniques are highlighted.
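To make the Galerkin reduced-order idea concrete, here is a minimal sketch (our own illustration, not from the book; NumPy-based, with made-up discretization choices). It projects the 1D heat equation onto a few sine modes, turning the PDE into a small system of ODEs:

```python
import numpy as np

# Minimal Galerkin reduced-order sketch: 1D heat equation u_t = u_xx on [0, pi],
# u(0)=u(pi)=0. With sine modes phi_k = sin(kx) (orthogonal, and eigenfunctions
# of d^2/dx^2 here), Galerkin projection yields the ODEs da_k/dt = -k^2 a_k,
# integrated below with explicit Euler.
nmodes, nx = 4, 200
x = np.linspace(0.0, np.pi, nx)
phi = np.array([np.sin(k * x) for k in range(1, nmodes + 1)])

u0 = x * (np.pi - x)                           # initial condition
a = (phi @ u0) * (x[1] - x[0]) / (np.pi / 2)   # modal coefficients (L2 projection)

dt, nsteps = 1e-4, 5000
k2 = np.arange(1, nmodes + 1) ** 2
for _ in range(nsteps):
    a += dt * (-k2 * a)                        # the reduced ODE system

u = a @ phi                                    # reconstruct the field from 4 ODEs
print("modal amplitudes at t = 0.5:", np.round(a, 4))
```

The design point is the one the chapter overview makes: instead of evolving the full discretized field, the dynamics are carried by a handful of modal amplitudes, which is what makes reduced-order models cheap enough for full-system simulation.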
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8244062662124634, "perplexity": 1345.8493282503875}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267866358.52/warc/CC-MAIN-20180624044127-20180624064127-00061.warc.gz"}
https://programmathically.com/variance-and-the-expected-value/
# Introducing Variance and the Expected Value

In this post, we are going to look at how to calculate expected values and variances for both discrete and continuous random variables. We will introduce the theory and illustrate every concept with an example.

## Expected Value of a Discrete Random Variable

The expected value of a random variable X is its mean. The outcome of X is defined by a probability distribution p(x) over the concrete values x = \{x_1, x_2, ..., x_n\} that X can assume. You can calculate the mean or expected value of a discrete random variable X by

• multiplying each value in x by its probability as defined by p(x)
• summing over all x

E(X) = \sum_x xp(x)

This was fairly abstract. To build the intuition, let's suppose you are throwing a six-sided fair die. We have the following values for x.

x = \{1,2,3,4,5,6\}

Each value has a probability of 1/6 of being the result of a die throw. If you throw the die once, it could return any number. However, if you throw a die many times, you'd expect every value from 1 to 6 to come up with approximately the same frequency. If you summed all the results and divided by the number of throws, you'd expect to get the arithmetic mean of 1 through 6, which is 3.5.

We can now calculate our expected value as follows. This should give us the same result.

E(X) = \frac{1}{6}1 + \frac{1}{6}2 +\frac{1}{6}3 +\frac{1}{6}4 + \frac{1}{6}5 + \frac{1}{6}6 = 3.5

## Variance of a Discrete Random Variable

Variance and standard deviation are both measures of how much probabilistic outcomes deviate from the expected value. The variance of a random variable X is the expected value of the squared deviation of X from its mean.

Var(X) = E[(X - E[X])^2]

If you expand the squared expression in brackets, you'll see that this is equivalent to the expected value of the square of X minus the square of the expected value of X.

Var(X) = E[X^2] - E[X]^2

The standard deviation, usually denoted sigma, is simply the square root of the variance.

\sigma = \sqrt{Var(X)}

Let's dive right into an example using discrete variables to make this more accessible. We've established that the expected value of a fair die throw equals 3.5. We simply have to square this:

E[X]^2 = 12.25

To obtain the expected value of the square of X, we add all the possible squared values that X can assume, each multiplied by its probability.

E(X^2) = \frac{1}{6}1^2 + \frac{1}{6}2^2 +\frac{1}{6}3^2 +\frac{1}{6}4^2 + \frac{1}{6}5^2 + \frac{1}{6}6^2 = 15.17

Now we can calculate the variance.

Var(X) = E[X^2] - E[X]^2 = 15.17 - 12.25 = 2.92

The standard deviation can now easily be calculated.

\sigma = \sqrt{2.92} = 1.71

Often the variance of a discrete random variable is expressed in terms of its concrete values x = \{x_1, x_2, ..., x_n\} rather than the random variable X.

Var(X) = \sum_{i=1}^n p(x_i) (x_i - \mu)^2

Note that the mean or expected value is usually denoted by the Greek letter μ. The standard deviation can accordingly be expressed like this.

\sigma = \sqrt{ \sum_{i=1}^n p(x_i) (x_i - \mu)^2}

## Expected Value of a Continuous Random Variable

When dealing with continuous random variables, the logic is the same as with discrete random variables. But instead of summing over the possible values x that the random variable X can assume, you take the integral over the interval [a,b] on which the density is supported.
E(X) = \int_a^b xp(x)dx

Assume the outcome of our random variable can be modeled using the following probability density function p(x) (you can check that it integrates to 1 over [0, 1], so it is a valid density).

p(x) =\begin{cases} \frac{3}{8}(6x - x^2) & 0\leq x\leq 1 \\ 0 & otherwise \end{cases}

First, we plug this into our formula for the expected value:

E(x) = \mu = \int_0^1 x \frac{3}{8}(6x - x^2) dx = \int_0^1 \frac{18}{8} x^2 - \frac{3}{8}x^3 \, dx

Second, we evaluate the antiderivative at the bounds:

= [ \frac{18}{24} x^3 - \frac{3}{32} x^4 ]_0^1

Third, we plug 1 and 0 into the resulting expression:

1 \rightarrow \frac{18}{24} 1^3 - \frac{3}{32} 1^4 = 0.656

0 \rightarrow \frac{18}{24} 0^3 - \frac{3}{32} 0^4 = 0

Finally, we can evaluate the expression over the interval and thus obtain our expected value.

E(x) = \mu = 0.656 - 0 = 0.656

## Variance of a Continuous Random Variable

When using continuous random variables, we have to replace the summation with an integration. Our variance is calculated as follows.

Var(X) = \int_{-\infty}^{\infty} x^2p(x)dx - \mu^2

As in the discrete case, we can also express the variance in terms of the random variable X.

Var(X) = E[X^2] - E[X]^2

Let's continue with the same function p(x) that we already used for the expected value. Since the function evaluates to zero everywhere except on the interval [0,1], we only need to integrate over that interval. We already calculated the expected value:

E[x] = \mu = 0.656

So we really just need to evaluate the following expression.

E[X^2] = \int_{0}^{1} x^2p(x)dx = \int_0^1 x^2 \frac{3}{8}(6x - x^2) dx = 0.488

This expression is very similar to the calculation we performed for the expected value. The only difference is that the first x is squared, so the individual steps are not repeated here.

Now, the variance is just

Var(X) = E[X^2] - E[X]^2 = 0.488 - 0.656^2 = 0.057

As with discrete random variables, the standard deviation is simply the square root of the variance.

\sigma = \sqrt{Var(X)} = 0.24

This post is part of a series on statistics for machine learning and data science. To read other posts in this series, go to the index.
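To double-check both calculations numerically, here is a short sketch (our own addition; it assumes NumPy and SciPy are available, and the variable names are ours):

```python
import numpy as np
from scipy.integrate import quad

# Discrete: fair six-sided die
x = np.arange(1, 7)
p = np.full(6, 1 / 6)
mu = np.sum(x * p)                    # 3.5
var = np.sum(x**2 * p) - mu**2        # ~2.917
print(mu, var, np.sqrt(var))          # 3.5  2.917  1.708

# Continuous: p(x) = (3/8)(6x - x^2) on [0, 1]
pdf = lambda t: 3 / 8 * (6 * t - t**2)
mu_c, _ = quad(lambda t: t * pdf(t), 0, 1)        # ~0.656
ex2, _ = quad(lambda t: t**2 * pdf(t), 0, 1)      # ~0.488
var_c = ex2 - mu_c**2                             # ~0.057
print(mu_c, var_c, np.sqrt(var_c))
```

The printed values match the worked results above to rounding.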
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9396458268165588, "perplexity": 434.9087969161706}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500273.30/warc/CC-MAIN-20230205161658-20230205191658-00019.warc.gz"}
https://datascience.stackexchange.com/questions/37806/purpose-of-backpropagation-in-neural-networks
# Purpose of backpropagation in neural networks

I've just finished conceptually studying linear and logistic regression functions and their optimization as preparation for neural networks. For example, say we are performing binary classification with logistic regression. Let's define variables:

$$x$$ - vector containing all inputs.
$$y$$ - vector containing all outputs.
$$w_{0}$$ - bias weight variable.
$$W=(w_1,\ldots,w_{n})$$ - vector containing all weight variables.
$$f(x_i)=w_{0}+\sum_{j=1}^{n}x_{ij}w_{j}=w_{0}+x_i^{T}W$$ - weighted sum of the inputs.
$$p(x_{i})=\frac{1}{1+e^{-f(x_i)}}$$ - logistic activation function (sigmoid), representing the conditional probability that $$y_i$$ will be 1 given the observed values in $$x_i$$.
$$L=-\frac{1}{N} \sum^{N}_{i=1} y_i\ln(p(x_i))+(1-y_i)\ln(1-p(x_i))$$ - binary cross-entropy loss function (the cross entropy between Bernoulli distributions, i.e. the Kullback-Leibler divergence between the label distribution and the predicted distribution plus the entropy of the labels).

$$L$$ is a multi-dimensional function, so it must be differentiated with partial derivatives:

$$\frac{\partial{L}}{\partial{w_j}}$$

Then, the chain rule gives:

$$\frac{\partial{L}}{\partial{w_j}}=\sum_{i}\frac{\partial{L}}{\partial{p_i}} \frac{\partial{p_i}}{\partial{w_j}}$$

After doing a few calculations, the derivative of the loss function is:

$$\frac{\partial{L}}{\partial{w_j}}=\frac{1}{N}\sum_{i}(p_i-y_i)\,x_{ij}$$

So we have the derivative of the loss function, and all the weights are updated with gradient descent. What does backpropagation have to do with this? To be more precise, what's the point of automatic differentiation when we could simply plug in variables and calculate the gradient on every step?

# In short

We already have the derivative calculated, so what's the point of calculating it on every step when we can just plug in the variables? Is backpropagation just a fancy term for the weights being optimized on every iteration?

• Each iteration, you take a small step; you have to take many steps to reach an optimal point. – Media Sep 5 '18 at 5:54
• @Media Yes, I understand how the cost function is minimized so that the weights eventually converge at a local minimum, but the partial derivative (roadmap, gradient) is already calculated. What does backpropagation do with this? Why is automatic differentiation necessary here? Thank you – ShellRox Sep 5 '18 at 6:36
• Basically, in the backpropagation algorithm, you find the derivative of the cost function with respect to each parameter, and after updating each parameter you can take the next step. You use the learning rate to take controlled steps. – Media Sep 5 '18 at 6:43
• @Media Oh, so the partial derivative must be re-calculated with respect to each weight. But isn't the minimization of each parameter done as a separate process, so that if there are, say, 5 parameters, there are 5 minimization processes, one for each? I assume that as the weights change, the gradient must also be re-calculated with respect to each weight. So backpropagation is basically re-calculation of the gradient on each step? – ShellRox Sep 5 '18 at 6:55
• I guess you have not understood the math behind it. Suppose you have two parameters; the cost function, which changes with respect to the weights, is a hill. Each time you change a weight, your cost changes. In gradient-descent-based approaches, you find the derivative along each parameter and change the current values of all parameters simultaneously. Then you take another step. – Media Sep 5 '18 at 7:08

Is backpropagation just a fancy term for weights being optimized on every iteration?

Almost. Backpropagation is a fancy term for using the chain rule.
It becomes more useful to think of it as a separate thing when you have multiple layers, as unlike your example where you apply the chain rule once, you do need to apply it multiple times, and it is most convenient to apply it layer-by-layer in reverse order to the feed forward steps. For instance, if you have two layers, $l$ and $l-1$ with weight matrix $W^{(l)}$ linking them, non-activated sum for a neuron in each layer $z_i^{(l)}$ and activation function $f()$, then you can link the gradients at the sums (often called logits as they may be passed to logistic activation function) between layers with a general equation: $$\frac{\partial L}{\partial z^{(l-1)}_j} = f'(z^{(l-1)}_j) \sum_{i=1}^{N^{(l)}} W_{ij}^{(l)} \frac{\partial L}{\partial z^{(l)}_i}$$ This is just two steps of the chain rule applied to generic equations of the feed-forward network. It does not provide the gradients of the weights, which is what you eventually need - there is a separate step for that - but it does link together layers, and is a necessary step to eventually obtain the weights. This equation can be turned into an algorithm that progressively works back through layers - that is back propagation. To be more precise, what's the point of automatic differentiation when we could simply plug in variables and calculate gradient on every step, correct? That is exactly what automatic differentiation is doing. Essentially "automatic differentiation" = "the chain rule", applied to function labels in a directed graph of functions.
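To connect the two views, here is a sketch (our own addition, with made-up data and names; it assumes NumPy) that computes the closed-form logistic-regression gradient from the chain rule and verifies it against finite differences, i.e. against "plugging in variables":

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                  # N=100 samples, 3 features
w = rng.normal(size=3); b = 0.0
y = (rng.random(100) < 0.5).astype(float)      # arbitrary 0/1 labels

def loss(w, b):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))     # sigmoid activation
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# Analytic gradient from the chain rule: dL/dw_j = mean_i (p_i - y_i) x_ij
p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
grad_analytic = X.T @ (p - y) / len(y)

# Numerical check by central finite differences
eps = 1e-6
grad_numeric = np.array([
    (loss(w + eps * np.eye(3)[j], b) - loss(w - eps * np.eye(3)[j], b)) / (2 * eps)
    for j in range(3)
])
print(np.allclose(grad_analytic, grad_numeric, atol=1e-6))  # True
```

For a single layer the two agree and the analytic form is just as easy; backpropagation earns its name once there are several layers, because it reuses the layer-to-layer chain-rule step above instead of re-deriving (or finite-differencing) every weight from scratch.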
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 13, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.854111909866333, "perplexity": 708.221752637492}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400189264.5/warc/CC-MAIN-20200918221856-20200919011856-00718.warc.gz"}
https://www.bartleby.com/solution-answer/chapter-8-problem-14p-college-physics-11th-edition/9781305952300/the-xanthar-mothership-locks-onto-an-enemy-cruiser-with-its-tractor-beam-fig-p814-each-ship-is/50ce789f-98d6-11e8-ada4-0ee91056875a
Chapter 8, Problem 14P — College Physics, 11th Edition (Raymond A. Serway et al., ISBN 9781305952300)

# The Xanthar mothership locks onto an enemy cruiser with its tractor beam (Fig. P8.14); each ship is at rest in deep space with no propulsion following a devastating battle. The mothership is at x = 0 when its tractor beams are first engaged, a distance d = 215 xiles from the cruiser. Determine the x-position in xiles of the two spacecraft when the tractor beam has pulled them together. Model each spacecraft as a point particle with the mothership of mass M = 185 xons and the cruiser of mass m = 20.0 xons.

To determine: The x-position of the two spacecraft when the tractor beam has pulled them together.

Explanation: The tractor beam is an internal force between the two ships, so there is no net external force on the system and its center of mass stays at rest. The ships therefore meet at the center of mass.

Given info: The mass of the Xanthar mothership is 185 xons, the cruiser mass is 20.0 xons, the mothership is at x = 0, and the cruiser is a distance d = 215 xiles away.

The formula for the center of mass of the Xanthar-cruiser system is

x_cm = (Mx + md) / (M + m)

where M is the mass of the Xanthar mothership, m is the mass of the enemy cruiser, x is the position of the mothership, and d is the distance of the cruiser from the mothership.

Substitute 185 xons for M, 20.0 xons for m, 0 xiles for x, and 215 xiles for d:

x_cm = [(185 xons)(0 xiles) + (20.0 xons)(215 xiles)] / (185 xons + 20.0 xons) = 4300/205 xiles ≈ 21.0 xiles

The two spacecraft meet at x ≈ 21.0 xiles.
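A quick numeric check of the result (a sketch of ours; the variable names are arbitrary):

```python
# Center-of-mass check for the tractor-beam problem (units: xons, xiles).
M, m = 185.0, 20.0      # mothership, cruiser masses
x, d = 0.0, 215.0       # initial positions along the x-axis

x_cm = (M * x + m * d) / (M + m)
print(round(x_cm, 1))   # 21.0 -> both ships meet at x = 21.0 xiles
```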
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8499324917793274, "perplexity": 3097.4246317331304}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987823061.83/warc/CC-MAIN-20191022182744-20191022210244-00185.warc.gz"}
http://math.stackexchange.com/questions/590332/constructing-a-distributional-solution-to-the-inhomogeneous-c-r-equations
# Constructing a Distributional Solution to the Inhomogeneous C.R. Equations

The question is to find a fundamental solution to the system of equations in $\mathbb{R}^{2}$

\begin{array}{l} u_{x}-v_{y}=f\\ u_{y}+v_{x}=g\end{array}

and to express the answer as a $2\times2$ matrix of tempered distributions.

I really have had no experience with solving systems of PDE, so while the Fourier transform will undoubtedly be needed here, I'm not sure how to apply it. Also, I don't really understand the directions. How can we have a $2\times2$ matrix of solutions? Are we not solving for $u$ and $v$? Furthermore, what characterizes a fundamental solution for a system? It almost certainly should involve the $\delta$ distribution, no? So if this is the case, why the $f$ and $g$?

Here's my attempted solution. I will use the following convention for the Fourier transform:

$$\mathcal{F}(u(x,y))=\frac{1}{2\pi}\iint_{\mathbb{R}^{2}}u(x,y)e^{-i(x\xi+y\zeta)}\;dx\,dy.$$

Then applying $\mathcal{F}$ to both equations we get

\begin{array}{l} i\xi\hat{u}-i\zeta\hat{v}=\hat{f}\\ i\zeta\hat{u}+i\xi\hat{v}=\hat{g}. \end{array}

If we solve for $\hat{u}$ and $\hat{v}$ we then get

\begin{array}{l} \hat{u}=-\frac{i\xi\hat{f}+i\zeta\hat{g}}{\xi^{2}+\zeta^{2}}=-\frac{\widehat{f_{x}}+\widehat{g_{y}}}{\xi^{2}+\zeta^{2}}\\ \hat{v}=\frac{i\zeta\hat{f}-i\xi\hat{g}}{\xi^{2}+\zeta^{2}}=\frac{\widehat{f_{y}}-\widehat{g_{x}}}{\xi^{2}+\zeta^{2}}\end{array}

so that by the convolution theorem

\begin{array}{l} u=-\frac{1}{2\pi}(f_{x}+g_{y})*\mathcal{F}^{-1}\left(\frac{1}{\xi^{2}+\zeta^{2}}\right)\\ v=\frac{1}{2\pi}(f_{y}-g_{x})*\mathcal{F}^{-1}\left(\frac{1}{\xi^{2}+\zeta^{2}}\right). \end{array}

So then, formally speaking, if

$$h=\frac{-1}{2\pi}\mathcal{F}^{-1}\left(\frac{1}{\xi^{2}+\zeta^{2}}\right),$$

then

$$u=f_{x}*h+g_{y}*h$$ and $$v=g_{x}*h-f_{y}*h.$$

The fundamental solution should be a convolution with $f$ and $g$ themselves, however, not with the partial derivatives of $f$ and $g$. Also, the above solution can't be expressed as a convolution with a $2\times2$ matrix. So, let us absorb the differentiation operators into the Fourier multipliers and immediately apply the convolution theorem. To that end we have

$$\begin{array}{l} \hat{u}=\frac{-i}{\xi^{2}+\zeta^{2}}\left(\xi\hat{f}+\zeta\hat{g}\right),\\ \hat{v}=\frac{i}{\xi^{2}+\zeta^{2}}\left(\zeta\hat{f}-\xi\hat{g}\right). \end{array}$$

Then if

$$A=\frac{1}{2\pi i}\mathcal{F}^{-1}\left(\frac{\xi}{\xi^{2}+\zeta^{2}}\right),$$

$$B=\frac{1}{2\pi i}\mathcal{F}^{-1}\left(\frac{\zeta}{\xi^{2}+\zeta^{2}}\right),$$

and

$$M=\begin{pmatrix}A&B\\-B&A\end{pmatrix},$$

then

$$\begin{pmatrix}u\\v\end{pmatrix}=M*\begin{pmatrix}f\\g\end{pmatrix}.$$

Would you think this is an acceptable answer? Is it possible to explicitly invert $A$ and $B$? The instructions ask for a $2\times2$ matrix of tempered distributions representing the fundamental solution. Clearly $M$ is the fundamental solution. Presumably $f$ and $g$ are tempered, so the operations used to solve for $u$ and $v$ also yield tempered distributions. I still to this day have difficulty accepting results obtained by formal operations on distributions as if they were functions as rigorously obtained (of course, I could probably just write everything as a linear functional to verify what I already got, but this seems unnecessary).

-

In complex notation, you are asked to find a function $\varphi = u+iv$ such that $\frac{\partial \varphi}{\partial \bar z} = \frac12(f+ig)$.
The relevant formula from complex analysis (Cauchy-Pompeiu) says that $\frac{1}{\pi z}$ is a fundamental solution of the $\frac{\partial}{\partial \bar z}$ operator. A solution (non-unique) is therefore given by

$$(u+iv)(z) = \frac{1}{2\pi}\iint \frac{f(\zeta)+ig(\zeta)}{z-\zeta} \,d\lambda(\zeta) \tag{1}$$

where $\lambda$ is the 2-dimensional Lebesgue measure. To turn (1) into a real-variable formula, you'll have to separate the real and imaginary parts. It's easier to work with convolution notation:

$$(f+ig)*\frac{1}{z} = f*\frac{x}{x^2+y^2} + g* \frac{y}{x^2+y^2} +i\left( -f*\frac{y}{x^2+y^2} +g*\frac{x}{x^2+y^2} \right)$$

This is how you turn (1) into

$$\begin{pmatrix} u \\ v\end{pmatrix} = A*\begin{pmatrix} f \\ g\end{pmatrix}$$

with some $2\times 2$ matrix $A$.

- I was going to use the Fourier transform to do the same thing. But I would have ended up with the same answer. Except I wouldn't have been confident about any constants (like $-1$ and $\sqrt \pi$) flying around, because everyone has a slightly different definition of the Fourier transform, and so I can never quite remember all the constants and signs. – Stephen Montgomery-Smith Dec 3 '13 at 14:45
- @Post - See the update to my question. I'd like to avoid any reference to complex analysis if it's possible. – Taylor Martin Dec 4 '13 at 17:37

(In this post $\mathbb{K}$ stands for either the real or the complex field.)

Regarding the "fundamental solution" concept and the $\delta$ distribution: the linear operator that we are dealing with here is

$$L \left(\begin{bmatrix} u \\v \end{bmatrix}\right) = \begin{bmatrix} u_x-v_y \\u_y+v_x \end{bmatrix},$$

and is defined on $\left[\mathscr{D}'(\mathbb{R}^2)\right]^2$. We can introduce a convolution product on this space via the following formula:

$$\tag{1} \begin{bmatrix} u_1 \\ v_1\end{bmatrix} \star \begin{bmatrix} u_2 \\ v_2 \end{bmatrix} = \begin{bmatrix} u_1\ast u_2 - v_1\ast v_2 \\ u_1 \ast v_2 + v_1\ast u_2 \end{bmatrix}^{[1]},$$

where $\ast$ denotes the usual convolution product of two scalar-valued distributions$^{[2]}$. It turns out that

$$\left(\left[\mathscr{D}'(\mathbb{R}^2)\right]^2, +, \star, \cdot_{\mathbb{K}}\right)$$

is an associative and commutative algebra with unity

$$\begin{bmatrix} \delta \\ 0 \end{bmatrix}.$$

Moreover, an explicit computation reveals that

$$L\left( \begin{bmatrix} u_1 \\ v_1\end{bmatrix} \star \begin{bmatrix} u_2 \\ v_2 \end{bmatrix} \right) = L\left( \begin{bmatrix} u_1 \\ v_1\end{bmatrix} \right) \star \begin{bmatrix} u_2 \\ v_2 \end{bmatrix}.$$

Therefore, if we are able to solve the equation

$$L\left( \begin{bmatrix} u_E \\ v_E\end{bmatrix} \right) = \begin{bmatrix} \delta \\ 0 \end{bmatrix},$$

we can later solve the most general inhomogeneous equation

$$L\left( \begin{bmatrix} u \\ v\end{bmatrix} \right) = \begin{bmatrix} f \\ g \end{bmatrix}$$

by setting $\begin{bmatrix} u \\ v\end{bmatrix} = \begin{bmatrix} u_E \\ v_E\end{bmatrix}\star \begin{bmatrix} f \\ g \end{bmatrix}$. We can therefore righteously say that $\begin{bmatrix} u_E \\ v_E\end{bmatrix}$ is a fundamental solution of the linear operator $L$.

$^{[1]}$. The genesis of this formula is the following. You can identify any vector $[x_0, x_1]\in \mathbb{K}^2$ with the number $x_0 + x_1 t$ in $\mathbb{K}[t]/(t^2+1)$ (for $\mathbb{K}=\mathbb{R}$, an ordinary complex number).
Multiplying two such numbers and writing the product in components, then replacing each scalar product with a distributional convolution, you get (1). (The cyclic group $\mathbb{Z}/2\mathbb{Z}$ has a built-in convolution, $(F\ast G)(n)=\sum_{m\in \mathbb{Z}/2\mathbb{Z}} F(n-m)G(m)$, which would give the same pattern but with a plus sign in the first component; that corresponds to $\mathbb{K}[t]/(t^2-1)$ and does not intertwine with $L$, whose complex structure requires $t^2=-1$.)

$^{[2]}$. Of course we are ignoring here some technicalities regarding the fact that the convolution of two arbitrary distributions is not defined: to be more precise, we should restrict our attention to some subspace of distributions having a suitable decay condition at infinity, so that $\ast$ is well-defined and associative.
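As a numerical sanity check of the Fourier formulas in the question, here is a sketch of ours (periodic grid, made-up smooth right-hand sides, NumPy FFTs; the zero mode is set to 0 since it is undetermined by the equations):

```python
import numpy as np

# Solve u_x - v_y = f, u_y + v_x = g spectrally on a periodic grid and check
# the residuals. Mirrors the hat{u}, hat{v} formulas derived in the question.
n = 128
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
f = np.sin(X) * np.cos(2 * Y)          # arbitrary smooth, mean-zero data
g = np.cos(3 * X) * np.sin(Y)

kx = np.fft.fftfreq(n, d=1.0 / n)      # integer wavenumbers
KX, KY = np.meshgrid(kx, kx, indexing="ij")
K2 = KX**2 + KY**2
K2[0, 0] = 1.0                          # avoid 0/0 in the multiplier

fh, gh = np.fft.fft2(f), np.fft.fft2(g)
uh = -1j * (KX * fh + KY * gh) / K2
vh = 1j * (KY * fh - KX * gh) / K2
uh[0, 0] = vh[0, 0] = 0.0

u, v = np.fft.ifft2(uh).real, np.fft.ifft2(vh).real

def ddx(a, K):
    return np.fft.ifft2(1j * K * np.fft.fft2(a)).real

print(np.max(np.abs(ddx(u, KX) - ddx(v, KY) - f)))   # ~1e-13
print(np.max(np.abs(ddx(u, KY) + ddx(v, KX) - g)))   # ~1e-13
```

The tiny residuals confirm that the corrected signs in the multiplier formulas for û and v̂ are consistent.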
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9715216755867004, "perplexity": 164.67934088866798}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929869.17/warc/CC-MAIN-20150521113209-00317-ip-10-180-206-219.ec2.internal.warc.gz"}
https://socratic.org/questions/58635b2b11ef6b7876190211
Chemistry

# Question 90211

Jul 10, 2017

The n = 3 orbit is 9a_0 from the nucleus; the energy emitted is 1.936 × 10^-18 J.

#### Explanation:

The radius r of the nth orbit in a hydrogen atom is

r = n^2 a_0

where a_0 is the Bohr radius.

∴ For n = 3: r = 3^2 a_0 = 9a_0.

You use the Rydberg formula to calculate the energy. Rydberg's original formula was expressed in terms of wavelengths, but we can rewrite the formula in units of energy. The Rydberg formula in terms of energy is

E = R (1/n_f^2 - 1/n_i^2)

where R is the Rydberg constant, 2.178 × 10^-18 J, and n_i and n_f are the initial and final energy levels.

In this problem, n_i = 3 and n_f = 1, so

E = 2.178 × 10^-18 J × (1/1^2 - 1/3^2) = 2.178 × 10^-18 J × (9 - 1)/(9 × 1) = (8/9) × 2.178 × 10^-18 J = 1.936 × 10^-18 J
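A quick numeric check (a sketch of ours; the Bohr radius value in metres is a standard constant supplied by us, since the original answer leaves r in units of a_0):

```python
# Bohr-model check for the n = 3 -> n = 1 transition in hydrogen.
R = 2.178e-18      # Rydberg energy constant, J
a0 = 5.29e-11      # Bohr radius, m (standard value, assumed here)

n = 3
r = n**2 * a0                 # orbit radius: 9 * a0 ~ 4.76e-10 m
E = R * (1/1**2 - 1/3**2)     # energy emitted, J
print(r, E)                   # 4.761e-10  1.936e-18
```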
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 17, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9598403573036194, "perplexity": 2533.2843327019614}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710218.49/warc/CC-MAIN-20221127073607-20221127103607-00311.warc.gz"}
https://www.mathphysicsbook.com/mathematics/lie-groups/matrix-groups/
# Matrix groups The most common Lie groups are the matrix groups (AKA linear groups), which are Lie subgroups of the group of real or complex $${n\times n}$$ invertible matrices, denoted $${GL(n,\mathbb{R})}$$ and $${GL(n,\mathbb{C})}$$. We can also consider the linear groups, which are Lie subgroups of $${GL(V)}$$, the group of invertible linear transformations on a real or complex vector space $${V}$$. One can then choose a basis of $${V}$$ to get a (non-canonical) isomorphism to the matrix group, e.g. from $${GL(\mathbb{R}^{n})}$$ to $${GL(n,\mathbb{R})}$$. Δ The distinction between the abstract linear groups and the basis-dependent matrix groups is not always made, and the notation is used interchangeably. Alternative notation includes $${GL_{n}(\mathbb{R})}$$, and the field and/or dimension is often omitted, yielding notation such as $${GL_{n}}$$, $${GL(n)}$$, $${GL(\mathbb{R})}$$, or $${GL}$$. These groups can be seen to be Lie groups by taking global coordinates to be the real matrix entries or the real components of the complex entries. Thus $${GL(n,\mathbb{R})}$$ is a manifold of dimension $${n^{2}}$$, and $${GL(n,\mathbb{C})}$$ has manifold dimension $${2n^{2}}$$. Any subgroup of $${GL}$$ that is also a submanifold is then automatically a Lie subgroup. We can also consider Lie groups defined by invertible matrices with entries in $${\mathbb{H}}$$ or $${\mathbb{O}}$$, since even though they cannot be viewed as linear transformations on a vector space, they still form a group and are manifolds with respect to the real components of their entries. Δ Some matrix groups can also be viewed as a complex Lie group, a group that is also a complex manifold. For example, $${GL(n,\mathbb{C})}$$ can be viewed as an $${n^{2}}$$-dimensional complex Lie group instead of as a real Lie group of dimension $${2n^{2}}$$. It is important to distinguish between a complex Lie group and a real Lie group defined by matrices with complex entries.
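As an illustrative numeric sketch (our own addition, not part of the text; it assumes NumPy), one can sample elements of $${GL(2,\mathbb{R})}$$ and check closure and inverses, with the $${n^{2}}$$ matrix entries serving as the global coordinates described above:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two random 2x2 real matrices; with probability 1 they are invertible,
# i.e. elements of GL(2, R).
A = rng.normal(size=(2, 2))
B = rng.normal(size=(2, 2))
assert abs(np.linalg.det(A)) > 0 and abs(np.linalg.det(B)) > 0

# Closure: det(AB) = det(A) det(B) != 0, so the product stays in the group.
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))

# Inverses exist and multiply back to the identity.
print(np.allclose(A @ np.linalg.inv(A), np.eye(2)))

# Global coordinates: the n^2 = 4 real entries chart GL(2, R), which sits
# inside R^4 as the open set where det != 0.
print(A.reshape(-1))
```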
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9605899453163147, "perplexity": 91.70559211140538}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592654.99/warc/CC-MAIN-20180721184238-20180721204238-00561.warc.gz"}
http://stackoverflow.com/questions/193298/best-practices-in-latex/401489
# Best practices in LaTeX [closed]

What are some good practices when typesetting LaTeX documents? I use LaTeX mostly to typeset papers. Here is a short list of what I consider LaTeX good practices; most of them would be common sense in any programming language:

• When writing a large document (book), keep the chapters in separate files
• Use a versioning system
• Repeated code (i.e. a piece of a formula occurring many times) is evil. Use macros
• Use macros to represent concepts, not to type less
• Use long, descriptive names for macros, labels, and bibliographic entries
• Use comment lines like %=================================== to emphasize the beginning of sections and subsections
• Comment out suppressed paragraphs, don't delete them yet
• Don't format formulas (i.e. break them into many lines) until the final font size and page format are decided
• Learn to use BibTeX

Further question: What package/macro/whatever do you use to insert source code?

- Title is just too delicious. I want to answer "wear it clubbing, not to the office" and "watch out for slippery upholstery". – Jay Bazuzi Dec 26 '09 at 5:56
- rollback: "latex" and "LaTeX" are spelled the same, differing only in capitalisation. The camel-case capitalisation LaTeX is an approximation to a realisation of a logo for latex. Hence "latex" is spelled correctly, and, though many latexphiles prefer "LaTeX", "latex" is not incorrectly capitalised. – Charles Stewart Dec 26 '09 at 5:57
- it is not, but every time I see it, I quiver at the idea of searching "latex" on the internet, in particular while at work. So I got used to proper capitalization ;) – Stefano Borini Dec 31 '09 at 8:11
- Stefano: Google search is case insensitive. – George Steel Jul 12 '10 at 22:55
- @Stefano: you can somewhat restrict "latex" google searches to LaTeX-related pages by adding "tex" to your search query. – Matthew Leingang Oct 7 '10 at 12:14

closed as off topic by Lasse V. Karlsen Oct 20 '11 at 18:18

Here's my list:

• Read the TeX FAQ. It has a lot of good information, and is constantly updated.
• Use the nag LaTeX package. It warns the user about the usage of old packages or commands (for example, using \it, \tt, etc.). nag warns about the things mentioned in l2tabu (obsolete packages and commands).
• Use the fixltx2e package. It fixes some 'mistakes' in LaTeX. From the description:
  • ensure one-column floats don't get ahead of two-column floats;
  • correct page headers in twocolumn documents;
  • stop spaces disappearing in moving arguments;
  • allow \fnsymbol to use text symbols;
  • allow the first word after a float to hyphenate;
  • \emph can produce caps/small caps text;
  • bugs in \setlength and flushbottom.
• If you are typesetting any math, reading Math mode by Herbert Voß is a must.
• If you really have to produce double-spaced documents, use the setspace package instead of changing \baselinestretch yourself.
• Typeset your document in draft mode to see how bad the over- and underfull lines are, and if they are pretty bad, consider changing the wording a bit (but don't fuss too much over this, as the best wording is more important than minor typographical improvements).
You can also help LaTeX by letting it hyphenate some words in such cases.

• Use booktabs to get much better-looking tables than the LaTeX default.
• Use \centering instead of \begin{center} \end{center} to center things inside tables/figures etc.; \centering doesn't add any additional vertical space.
• Use microtype for small-scale typographic enhancements (character protrusion, font expansion, letter-spacing).
• If you're creating PDF documents, use hyperref to get hyperlinks in your document.
• Use \newcommand to make things more logical. For example, let's say you're writing a biology book, and it has a lot of scientific names of different species. Instead of littering your document with things like \textit{Homo sapiens}, you could define a new command \newcommand{\species}[2]{\textit{#1 #2}}. Then, later, when you're reading the rules about formatting species names, you realize that the name should be in a font different from the surrounding text: \textit doesn't work if the surrounding text is in italics; you should have used \emph. With the \newcommand, the change is easy. Without the \newcommand, you have to manually change the font-changing command in all the places you use a species name.
• Use the geometry package to set up the page geometry instead of doing it manually.
• Use fancyvrb to get precise control in verbatim ('unformatted') listings.
• This is my personal view, and others may not agree, but I find the dotted lines in a table of contents distracting and not really useful. I would rather have the page numbers right after the titles in the "table of ..." entries. If you want to do that, see the titletoc package. The package documentation comes with several examples, and it is easy to adapt one of them to do exactly what you want.
• If your document has a lot of technical figures, it might be a good idea to learn PGF/TikZ, Sketch (3D), and PSTricks, or a combination of those.
• To typeset units, use siunitx. The package tries to replace sistyle, siunits, units, etc., is actively maintained, and the package author is very active on comp.text.tex.
• Similar to the above: use the numprint package to format numbers nicely (locale-specific, with commas as thousands separators, etc.).

I am sure I have missed some, but hopefully I will be able to add them later.

As an example of what is possible with LaTeX, I recommend looking at The Font Installation Guide (for its typography).

Finally, one of the most important 'best practices', which I didn't mention above because it needs more space (no pun intended!) to explain than the list format would have allowed: understand the power of ~, the non-breakable space. Use it so that your document is easy to read. If you write:

This \textsc{dvd} is meant to be played with region 2 players only.

LaTeX may decide to break the line this way:

This DVD is meant to be played with region
2 players only.

To avoid the bad line break, use ~:

This \textsc{dvd} is meant to be played with region~2 players only.

Now, region and 2 will not have a break between them.

-

Put your files in source control. This makes it so you don't need to keep long comment blocks around. It can also be helpful if you've written a large section that you decide to yank for the current publication but that you'll want later (e.g. for a tech report or for a journal paper). Subversion-style tags can be especially helpful at times like this.

- Source control would have saved me a ton of time. I had a laptop crash in the middle of writing my thesis.
I didn't know anything about source control in those days. I'd give more points if I could. –  Anthony Potts Oct 20 '08 at 14:19
@Anthony: that's not about source control. That's about having backups (and a recovery plan) –  Stefano Borini Dec 31 '09 at 8:43

Use the fact that LaTeX is not WYSIWYG. When writing paragraphs, start each sentence at the beginning of a line, and if it spills over, each subsequent line is tabbed. This helps you understand the structure of your paragraphs and identify materials in them very fast. I've since written many documents with this practice and it was always effective. -

I indent my LaTeX documents for logical structure (like I do in programming). I keep long lines long; my editor makes line breaks while keeping the indentation. And paragraphs are easy to see, because LaTeX wants a blank line for them. –  Mnementh Oct 20 '08 at 14:55
Good tip. Actually, keeping one sentence per line helps when using VCS. For that reason now I prefer not to wrap the lines. –  sastanin Jan 5 '09 at 19:30
I don't suppose you have an emacs mode or macro to set this up? –  dmckee Feb 4 '09 at 20:23
Starting each sentence on a new line is very helpful, as it makes it easier to understand differences between two source files using standard diff tools (e.g. when comparing different versions saved in a version control system). Changes to one sentence don't produce trivial differences due to changes in the line wrapping for the rest of the paragraph. –  Stephen C. Steel Dec 31 '09 at 16:15

I could probably go on all day, but this seems like a good start.

% if you want to change the page margins:
\usepackage[letterpaper]{geometry}
\geometry{top=1.0in, bottom=1.0in, left=1.5in, right=1.0in}
% greatly improved citation commands:
\usepackage[longnamesfirst]{natbib}
% better looking tables with \toprule, \midrule, \bottomrule:
\usepackage{booktabs}
% make sure figures do not appear before their text:
\usepackage{flafter}
% if you're doing math:
\usepackage{amsmath,amssymb,cancel,units}
% more control with verbatim ('unformatted') environments:
\usepackage{fancyvrb}

Documentation for the packages used in these examples: -

I advise against the first point: setting the margins by hand is not something you should routinely do. The default document classes have the margins that respect document-setting best practices. If you change them, you should either know those best practices, or be aware that you most likely deviate from best practices. –  Svante Jun 1 '09 at 20:00
Can you provide links to the documentation of those packages? You've given their names, but not what exactly they contain that is useful. –  sykora Dec 26 '09 at 6:36

Use a Makefile, or some other kind of build script to build your document from your sources. This may include generating plots or other graphics outside of the LaTeX system. This becomes even more important if you need to rebuild the document from source at some later date (for example, you need to regenerate a paper with new data). -

latexmk is handy in this regard for the latex/bibtex side of things (see my answer below). –  dreeves Dec 30 '08 at 22:11
After trying several kinds of super-complex makefiles, I discovered rubber and never looked back. –  Roberto Bonvallet Aug 24 '10 at 21:19

I highly recommend the short math guide which includes explicit best practices for typesetting math using AMSLaTeX (built in to any LaTeX installation). It's pretty amazing -- 17 pages and it's all I've ever needed to know about typesetting math.
Best Practices Excerpts:
• Don't use eqnarray as it "produces inconsistent spacing of the equal signs and makes no attempt to prevent overprinting of the equation body and equation number."
• Use \[math stuff\] instead of $$math stuff$$ for displayed math.
• Use \quad, \qquad, \phantom and other standard ways of inserting whitespace.
• Use \lvert and \rvert for absolute value.
• Check out \DeclareMathOperator.
• \begin{cases} for piecewise functions (like f(x) = 2 if x<7 or 3 otherwise).
• \dfrac and \tfrac for overriding LaTeX's guess about the size of fractions.

From the AMS web page: This guide is a concise summary of the essential features in LaTeX for writing math formulas, including features provided by the packages amssymb and amsmath. This is not a mere listing of everything available but a careful selection of the LaTeX commands that are especially recommended for authors' use. There is also some discussion of certain common uses and misuses of various commands that are better avoided for reasons of typographical quality or logical markup. -

This may not be the type of tip you are after but if you haven't already, take a look at LyX. It's an open-source word processor-like front end for LaTeX with lots of features built in (version control, comments), very configurable and a good community. I wrote my PhD thesis (all 220 1.5-spaced pages of it!) in LyX and found it much easier than straight LaTeX (I did my maths honours thesis in emacs + LaTeX). It's especially good at helping manage figures, tables and other fiddly environments. You can drop into straight LaTeX anytime you want and see a live preview of maths and graphics as you type. I still use LaTeX (and TextMate) when collaborating with other LaTeX-only users but if I'm writing something by myself, LyX is my first choice. -

My top suggestion is to format LaTeX source the same way you would format source code as far as indentation, lining up opening and closing braces, etc. People tend to put all the LaTeX source for an equation on one line, even though it may have 10 nested pairs of opening and closing groupings, and even if they'd never do anything like that in C. The most effective technique I know for debugging errors in LaTeX files is to simply format the source like I would C++ code. For example

$x = \frac
       { -b \pm \sqrt{ b^2 - 4ac } }
       { 2a }$

I put all the square root stuff on one line only because it's very short. But especially for the \frac command, extra white space really helps. -

From the LaTeX FAQ: While the double dollar sign (still) works in LaTeX, it is not part of the "official" LaTeX command set (in fact, most books on LaTeX don't even mention it) and its use is discouraged. Use the bracket pair "\[", "\]" instead. –  Hudson Jan 9 '09 at 22:54
(Of course, I still use $$ since that is what I learned so many years ago...) –  Hudson Jan 9 '09 at 22:55
I agree with formatting code, but I disagree with C++ style. ;) –  Svante Jun 1 '09 at 20:03
I see that formatting like C code also multiplies the denominator of the fraction with c... :-) –  Daan Jan 6 '10 at 7:55

I created several unary commands to tag sections of text, such as \HighlightProblem, \ConsiderRemoving, \CollaborateWithCoauthor, etc. (I use acronyms, of course). These commands display the text in a different color. When I watch the PDF, this helps me quickly see areas that need work. In some cases, I use several "confidence level" tags, so that areas that are more "stable" appear in black or closer to black, while less stable areas appear in lighter grays.
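A minimal sketch of such tagging macros (an editorial addition: the macro names are the answerer's, but this particular implementation, built on the xcolor package, is an assumption, not the original code):

% in the preamble
\usepackage{xcolor}
\newcommand{\HighlightProblem}[1]{\textcolor{red}{#1}}
\newcommand{\ConsiderRemoving}[1]{\textcolor{orange}{#1}}
\newcommand{\CollaborateWithCoauthor}[1]{\textcolor{blue}{#1}}
% "confidence level" tags: the lighter the gray, the less stable the text
\newcommand{\StableText}[1]{\textcolor{black!90}{#1}}
\newcommand{\DraftText}[1]{\textcolor{black!60}{#1}}

Since each command takes the flagged text as its single argument, removing a tag later is a plain search-and-replace on the command name.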
- You might try the todonotes package, which provides this sort of thing. Esp with the inline option. –  Paul Biggar Aug 19 '09 at 19:37

I use the KOMA-Script package for all my documents. It provides counterparts or replacements for the standard LaTeX classes such as article, book, etc., but offers many additional features and its own unique look and feel. The KOMA-Script bundle provides drop-in replacements for the article/report/book classes with emphasis on typography and versatility. There is also a letter class, different from all other letter classes. It also offers e.g. a package for calculated type areas in the way laid down by the typographer Jan Tschichold, a package for easily changing and defining page styles, a package for getting not only the current date but also the name of the day, and a package for getting the current time. All these packages may be used not only with KOMA-Script classes but also with standard classes. (Source) -

This isn't exactly a best practice, but a possibility for LaTeX that not too many people know about. If you need to insert calculations in your LaTeX file, you can embed R code using Sweave. When you run Sweave, it produces a LaTeX file with the results of your calculations inserted in the file. Sweave embeds R inside LaTeX the way CGI, PHP, etc. embed script inside HTML. With Sweave you can, for example, embed data and plotting commands in your LaTeX file. Then if you need to change the data, you don't need to manually create a new image and paste it in; Sweave recreates the image and embeds it for you. So your data and your plots stay in sync. If you have any trouble getting started with Sweave, see troubleshooting Sweave. -

Most of my tips have already been mentioned. The one tip I'd add is creating "namespaces" for different types of labels. For example, I use \label{eq:labelgoeshere} for equations, \label{tbl:labelgoeshere} for tables, \label{fig:labelgoeshere} for figures, etc. For source code, I use the listings package. -

I prefer labels like eq_Equation1 to eq:Equation1 because by default my editor (vi) can auto-complete the first type but not the second. –  FJDU Mar 14 at 3:49

I recommend latexmk which, with the -pvc switch (for "preview continuously"), will recompile whenever the source changes. If you have a pdf viewer that autorefreshes the pdf view (Skim on Mac OS X does this) then you can see a refreshed preview every time you hit save. Using latexmk is nice even without the -pvc option since it automatically compiles (including bibtex) as many times as necessary. (For most documents, latexmk obviates the need for a Makefile.) See this question for a mini-tutorial on latexmk: Don't make me manually abort a LaTeX compile when there's an error -

Yes, Skim and latexmk are orthogonal. Skim is just a pdf viewer that autorefreshes when the pdf file changes on disk. latexmk continuously compiles your tex source into pdf when the tex source changes. –  dreeves Jan 18 '09 at 21:57

Use the SIunits package when working with any kind of units instead of trying to compose the unit abbreviations by hand.

\usepackage[mediumspace,mediumqspace,Grey,squaren,binary]{SIunits}
...
\unit{10.5}\mega\bit\per\second
\unit{0.2}\micro\second
\unit{9.81}\meter\per\second\squared
-

siunitx should be preferred over SIunits. See also this question: tex.stackexchange.com/questions/2248/… –  Denilson Sá Oct 27 '11 at 10:21

Here's one I don't think has been mentioned yet.
When you are including figures via \includegraphics and the like, don't specify the full pathname (use \graphicspath to organize them) and don't specify the file suffix. Keep all of your figures in one canonical form (or one raster, one vector) and use make or the like to generate temporary files as needed. This is much easier to organize, and also works well with different build paths (e.g. pdflatex vs latex->dvi->ps). For example, if I have a file figures/figure1.png and I want to include it, my preamble will include something like

\usepackage{graphicx}
\graphicspath{{./figures/}}

And the document will have something like

\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\textwidth]{figure1}
\caption{something terribly interesting}
\end{figure}

This works directly with pdflatex. If I need a .dvi, I'll have a makefile target that generates .eps files from .png files as needed, and either place them in './figures', or more likely in './eps' with an update to \graphicspath above. The benefits: files can be organized into subdirs as needed, but you don't need to edit all over the place if you move them. You can also have different output targets without changing your source images and figures, and you won't have any surprises from conversion (like jpeg compression artifacts showing up in the pdf file you've generated from a .dvi). -

I like your list but I don't know if I agree with "Repeated code (i.e. piece of formula occurring many times) is evil." For someone comfortable enough with LaTeX, reading the LaTeX should be like reading the paper (obviously this is never completely true). If the repeated code is an eyesore for the LaTeX reader then it is probably also an eyesore for the reader of the pdf document. Certainly there are exceptions where the LaTeX is cumbersome but the output is simple and in this case you should use a macro. But perhaps when you are tired of writing part of a formula over and over again you should consider that your reader may also be tired of reading it over and over again. So I might replace this point with "Try to keep the structure of your code similar to the structure of your document." -

I'm not speaking about text. Mostly, I meant formulas. Suppose there's a quantity named 't_f' in your 123 formulas, say "final time". Now you decide that 'T_f' would be better... You got the idea. It's not merely a matter of search/replace... –  Federico A. Ramponi Oct 11 '08 at 1:40

Might seem too obvious, but it's vitally important to use a good editor that:
• Makes things like inserting commonly used macros/comment markers/latex tags easy and consistent.
• Can highlight the source properly
• Can format the source properly
Either (g)Vim or emacs can do all of these, but I'm sure there will be plenty of others. -

Eclipse has a plugin for latex syntax highlighting as well –  Kena Dec 29 '08 at 21:31
Kile and TeXnicCenter are good editors IMHO –  Krystian Dec 31 '09 at 17:52

Spelling the name correctly is a good practice: LaTeX ("Latex" is a sort of allergenic rubber.) (I'm not just being snarky ... when I first saw the title of this post, I thought, 'what strange new scripting language is 'latex'? The traditional capitalization does carry meaning.) -

By the way, someone answered "lots of margarine on your legs". My fault. –  Federico A. Ramponi Oct 13 '08 at 4:21
@Eric: There are some, who on the other hand, hate the practice.
Robert Bringhurst, in Elements of Typographic Style for example, says that with TeX, Knuth opened a can of worms, with everyone coming up with weird kerning for names like LaTeX, AMSTeX, LaTeX2e, ..., which is a visual assault. In his opinion, type should be just that, free of "logos". I can see his point. I am not sure if I agree with it yet. :-) –  Alok Singhal Dec 31 '09 at 8:11

1. If possible use Emacs to edit LaTeX files, using RefTeX (for handling citations) and AUCTeX. Some people also like preview-latex, for having an almost WYSIWYG experience: your mileage may vary on that.
2. Use the comment environment for commenting out one paragraph or more, so that your version control software does not complain.
3. Beware of end-of-line control characters if your collaborators use different OSes.
4. Use a programmatic environment for producing graphics. MetaPost is the oldest and stablest (it's also used by Knuth). PSTricks is probably the most resourceful, but it's a bit outdated. TikZ is the new kid on the block and already very good. I switched to TikZ as it's able to deal with both PostScript and PDF outputs.
5. Read "The Not So Short Introduction to LaTeX2e"; after that move to Kopka & Daly.
6. Have a look at the memoir package for designing a new document style: it's also a great guide to publishing.
7. I use listings for displaying source code. It's OK, but not something definitive: you can probably do fine with a simple verbatim environment (maybe with fancyvrb). -

1. Don't fuss about formatting on papers that you plan to submit for publication elsewhere - it makes later editing harder, and it's quite likely to cause problems at the typesetting stage. Stay close to elementary LaTeX, and use only the LaTeX packages you need. Don't use nonstandard fonts.
2. latexdiff is another tool worth knowing: it can be very helpful when explaining changes in collaborative editing.
3. Again for collaborative editing: once your reference list is complete, paste the .bbl file that BibTeX produces into the LaTeX file, in place of the \bibliography macro. This reduces a point of possible difficulty in the LaTeX compile for your colleague.
4. Use the \message command the same way you would use printfs when debugging. -

• Forget that DVI ever existed. pdftex is the most current and maintained implementation, and it's always frustrating to get a recent article with the bitmapped fonts from the Metafont era, which don't display well on screen, and are often generated at too low a definition for modern laser printers.
• Type everything in UTF-8. That means \usepackage[utf8]{inputenc}, or give a try to XeTeX :)
• Stay light on the custom macros. I try to group semantic macros & environments separately from those that are pure convenience. In the past I tended to define macros for e.g. product or person names, classes, methods, code examples, and what not, but in the end I find it's better to just type what you mean, use one single \code{} macro (a semantic alias of \texttt{}), and if necessary do a global search and replace to ensure consistency throughout the document.
• Use collaboration notes. We have a set of ad hoc macros for inserting TODOs and various comments while writing the article, but there are a few packages that do that and are already available in TeX Live (todonotes.sty is one). -

One suggestion: use \colon instead of : in appropriate math. E.g. compare $f:X \to Y$ with $f \colon X \to Y$. -

Try \let\from=\colon to make your code more readable. Then you can say "If $f\from G \to G$..."
–  Matthew Leingang Oct 7 '10 at 12:25

Know the differences between hyphens, en- and em-dashes, and minus signs (-, --, ---, and $-$, respectively). Hyphens are for compound adjectives and breaking words across lines. En-dashes are for ranges, as in 7--10. Em-dashes are for setting off text---like this. Minus signs are for subtraction. -

On \includegraphics, I might add that I see lots of people using [scale=xx] and, by trial and error, setting up the image size.
• Create the figure in the final size: create the PDF or EPS with the correct size, or the PNG or other bitmap image with the correct dimensions, considering the DPI (300-600 for print, 72 for electronic display)
• use [width=0.x\textwidth], which guarantees a consistent image size relative to text. -

I like the point about width=.... See also tex.stackexchange.com/questions/275/… –  phaedrus Feb 20 at 15:40

Be sure you are clear about what page format, letter or A4, you are generating. -

I am using LaTeX mostly for typesetting software manuals for our customers. I use macros for the name of the company and some other things that appear often in the document. Besides saving myself a few seconds here and there, it has the advantage that you can easily change the customer's name if you use the same text for another customer - it makes for easier abstraction. Apart from that, I recommend modularisation. The 'main' file contains the definitions that are only specific to this document and the chapter structure. Put each chapter in its own file, but also have a separate file with all the page layout etc. I have several layout templates and can easily switch between them. This would also be valid for research papers - many publishers require their own layout. -

Do this

\newcommand{\foo}{\ensuremath{stuff}}

rather than

\def\foo{stuff}

One advantage of the former is that you can then use \foo both in and out of math mode. -

I use the listings package, not least because it allows for source files/examples to be kept separate from the document text, and imported directly.

\usepackage{listings}
\lstset{language=Python}
\lstinputlisting{source.py}
-

Avoid eqnarray. Use the align, align*, or split environments from amsmath instead.
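As an editorial illustration of that last recommendation (the equation content here is invented for the example), align from amsmath keeps the spacing around the relation signs consistent, which eqnarray does not:

% requires \usepackage{amsmath}
\begin{align}
  f(x) &= (x + 1)^2 \\
       &= x^2 + 2x + 1
\end{align}

Use align* instead if you don't want equation numbers.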
http://mathoverflow.net/questions/58386/meromorphic-continuation-of-eisenstein-series
# Meromorphic continuation of Eisenstein series

I am interested in what kinds of (different) proofs of the meromorphic continuation of Eisenstein series (for general parabolic subgroups) exist in the literature. The only one I understand well is Bernstein's proof using his "continuation principle" (it is probably unpublished), but apparently there are many others (especially, I am interested in proofs which are different from the one in Langlands' book). -

A belated response: so far as I can tell, Selberg's idea, taken literally, does not apply at all in rational rank above 1. One should note that Avakumovic and Roelcke had similar ideas, which also did not anticipate the complications of higher rank. Langlands' 544 (notes written in the mid-60s, not public until the mid-70s) were extremely novel in their recognition of complications in higher rank, e.g., cuspidal-data Eisenstein series in the first place, and non-constant residual automorphic forms (e.g., Speh forms). Colin de Verdière's argument works well in rank 1, but, in its nascent form, has the same limitations as Selberg's 1950s viewpoint. Moeglin-Waldspurger cite Langlands and others, but do not give proofs of several analytical points. Bernstein's apocryphal proofs of meromorphic continuation are reputed to be instantiated last fall... but one should not be over-optimistic, given the possibility of people finding other priorities. Around 2001, I tried to rewrite notes on Bernstein's idea, with help from notes obtained thanks to Hejhal and Sarnak. I think it is fair to say that there are several confusing points, even if other potentially confusing points can be cleared up by "standard mathematics". In the last few years, there has been interest in supposedly applying Bernstein's method [sic] to non-cuspidal-data Eisenstein series... My personal reaction, based on some experience, is skeptical. I would like to see (and may try to write it myself) actual proofs for cuspidal-data Eisenstein series. :) A significant caution is that various spaces of automorphic forms meeting growth conditions are not representation spaces for the relevant group G, so reasoning that implicitly assumes this is dangerous. Of course, one often needs less... Lisa Carbone and Howard Garland have recently written some things about Eisenstein series on not-classical-groups (e.g., loop-or-something)... that seem to succeed, based aesthetically/morally on the Bernstein-Selberg arguments. If anyone wants further technical information about my assessment of the situation, I welcome email about meromorphic continuation of Eisenstein series. :)

Edit (15 April '12): [By the way, deleted my last year's comment on mis-spelling in the question, which I could not correct last year...!] Disregarding the relatively special cases where arguments based on Poisson summation can succeed, as far as I can tell, all other approaches need some compactness or finite-dimension assertion at some crucial juncture. (The usual way to prove some space is finite-dimensional is to exhibit it as a non-zero eigenspace for a compact operator!) Granting some such assertion, the remainders of the arguments are relatively formal. E.g., inside Colin de Verdière's argument is an essential Rellich-like compactness assertion about a resolvent, in a form due to Faddeev-Pavlov and Lax-Phillips. I suspect that the essential "confusing" or "mysterious" aspects of proof-sketches reside in problems about this sort of point.
Not that incorrect conclusions are reached, but that the complexity of the set-up often gives an impression that one has done sufficient work to have completed a/the proof, and "surely" an "auxiliary" finite-dimensionality statement oughtn't be critical? That is, it's not that the various "confusing" arguments are incorrect, but, rather, perhaps incomplete. Correctably so, indeed, but perhaps not trivially. -

I found your answer itself a bit apocryphal, although that might be a language thing (e.g. I never heard the word "instantiate" before). Are you saying that there are no complete proofs available for the meromorphic continuation of Eisenstein series and the spectral decomposition of $L^2(G(k)\backslash G(\mathbb{A}))$? What's the status of $G=\mathrm{GL}_n$? –  GH from MO Apr 5 '11 at 0:48

I'm not familiar enough with the proofs to say if they are more than superficially different, but here is something: Moeglin-Waldspurger prove continuation crediting Jacquet (see p. xix), instead of Langlands. They say it is similar to that given by Efrat, in his treatment of the Hilbert-modular ($PSL_2$ over a totally real field) case. Here, Jacquet credits M-W's proof to Colin de Verdière's "new and strikingly brief and elegant proof" (from the MR review) for $SL_2({\mathbb R})$ (extended here). I couldn't find any extensions of Colin de Verdière's argument to higher-rank cases, but that may be because it was done in Moeglin-Waldspurger. Muller also has a proof in the rank-one case. Wong gave a proof using integral equations. Everyone listed credits Selberg with their ideas. -

The proof of Colin de Verdière is indeed very nice, and it would be very interesting to see how far it can generalize (it is written for subgroups of $SL_2(\mathbf{R})$ with a single cusp; the paper is available at numdam.org/numdam-bin/fitem?id=AIF_1983__33_2_87_0 on Numdam.) –  Denis Chaperon de Lauzières Mar 17 '11 at 5:43
Looking around a bit more, Jacquet attributes Moeglin-Waldspurger's proof to Colin de Verdière, so that may be our answer. Thanks for the working link! –  B R Mar 17 '11 at 8:10
Also, it's not clear to me that these proofs don't implicitly prove the critical component needed to invoke Bernstein's continuation principle (that the parametrized system of equations has a locally finite-dimensional solution space). –  B R Mar 17 '11 at 17:33
Actually, in the situation I have to deal with, the "critical component" of Bernstein's proof turns out to be the property that in some domain the system has a unique solution. In any case, Moeglin-Waldspurger prove more than Bernstein - they give you some information about poles which Bernstein's proof has no chance to give. Thank you all very much! –  Alexander Braverman Mar 18 '11 at 2:15
Also I tried to read Selberg's original proof in his ICM paper and didn't understand it. I think that Sarnak couldn't understand it either and he rewrote it a little (using a slightly different idea). Selberg's original proof remains a mystery for me. –  Alexander Braverman Mar 18 '11 at 2:16

You'll find proofs of analytic continuation (and functional equation; both generally come together) in Weil's "Basic number theory", Bump's "Automorphic forms and representations", in Godement's "Séries d'Eisenstein" (available here), in Hida's "Elementary theory of $L$-functions and Eisenstein series", Garrett's "Holomorphic Hilbert modular forms" and a host of other places... -

The question seems to be about the case of a general reductive group.
It is certainly not in Weil, not in Bump (GL_2 only), not in Hida (GL_2 only), not in Godement (Siegel modular forms only, and not in representation-theoretic language), and I don't know about Garrett (but I doubt it, since the title indicates a very restricted case). –  Denis Chaperon de Lauzières Mar 14 '11 at 7:52
I took the question as meaning "I have other references for the same thing, but can't wrap my head around them"... so I tried to give a larger view. The references I gave have different proofs (or sometimes the same but with slightly different explanations) in different settings -- which should still help, since the question says "The only one I understand well" and "proofs which are different from the one in Langlands' book". –  Julien Puydt Mar 14 '11 at 8:11
I actually did mean proofs that work for arbitrary $G$ and arbitrary parabolic $P$. The case of $GL(2)$ is more or less trivial anyway (and more generally when we talk about Eisenstein series induced from a Borel subgroup). –  Alexander Braverman Mar 16 '11 at 13:50

I think that in particular Erez Lapid has done a nice job with these two slide sets:
http://www.math.clemson.edu/~jimlb/ConferenceTalks/ColumbiaWorkshop2006/lapid1.pdf
http://www.math.clemson.edu/~jimlb/ConferenceTalks/ColumbiaWorkshop2006/lapid2.pdf
Have a look in particular at page 10 of the first slide set for Bernstein's principle; a proof of it is on page 11. He also states it for $SL(2)$ on page 9, which is always helpful for me before seeing a general statement. The second set of slides focuses on the higher-rank situation. By the way, the whole site is great: http://www.math.clemson.edu/~jimlb/coursenotes.html

Perhaps a remark about Eisenstein series: I think that at least in a congruence setting, the analytic continuation is in some sense equivalent to the analytic continuation of automorphic $L$-functions. The Langlands-Shahidi method (http://en.wikipedia.org/wiki/Langlands-Shahidi_method) deduces the analytic continuation of automorphic $L$-functions from the analytic continuation of Eisenstein series. So every new proof of analytic continuation for Eisenstein series yields a new proof of the analytic continuation of automorphic $L$-functions. -
http://www.dms.umontreal.ca/~andrew/1999.php
### 1999 Publications

#### Motivating the multiplicative spectrum (with K. Soundararajan)
Topics in Number Theory, (S.D. Ahlgren et al., eds.), Netherlands: Kluwer, 1999, 1-5.
We motivate the results that will appear in our 2001 Annals paper "The spectrum of multiplicative functions". Article

#### The set of differences of a given set (with F. Roesler)
American Mathematical Monthly, 106 (1999), 338-344.
If $$a=\prod_i p_i^{a_i},\ b=\prod_i p_i^{b_i}$$ then $$a/(a,b)=\prod_i p_i^{c_i}$$ where $$c_i=\max\{ a_i-b_i,0\}$$; we write $$\Delta(a,b)$$ for the vector $$c$$. We study here the size of the set $$\Delta(A,B)$$. Article

#### Borwein and Bradley's Apéry-like formulae for $$\zeta(4n+3)$$ (with Gert Almkvist)
Experimental Mathematics, 8 (1999), 197-204.
We prove Borwein and Bradley's conjectured Apéry-like formulae for $$\zeta(4n+3)$$. Article

#### On the scarcity of powerful binomial coefficients
Mathematika, 46 (1999), 397-410.
Assuming the $$abc$$-conjecture we show that there are only finitely many powerful binomial coefficients $$\binom nk$$ with $$3\leq k\leq n/2$$, since if $$q^2$$ divides $$\binom nk$$ then $$q\ll n^2 \binom nk^{o(1)}$$. Unconditionally we show that there are $$N^{1/2+o(1)}$$ powerful binomial coefficients in the top $$N$$ rows of Pascal's triangle. Article

#### Review of "Notes on Fermat's Last Theorem" by Alf van der Poorten
American Mathematical Monthly, 106, pages 177-181. 98 (1999), 5-8.
Review of this lovely book. Article
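(Editorial note: a small worked instance of the $$\Delta$$ notation in the second abstract, with numbers chosen purely for illustration. Take $$a = 12 = 2^2\cdot 3$$ and $$b = 8 = 2^3$$. Then $$(a,b)=4$$ and $$a/(a,b) = 3 = 2^{0}\cdot 3^{1}$$, so $$\Delta(a,b) = (\max\{2-3,0\},\ \max\{1-0,0\}) = (0,1)$$.)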
https://www.physicsforums.com/threads/rotational-motion-problem.96701/
# Homework Help: Rotational motion problem

1. Oct 25, 2005

### willworkforfood

"A string is wound around a solid cylindrical spool of mass 9.4 kg and radius .11 m. Assume the acceleration of gravity to be 9.8 m/s^2. If the spool is released from rest and rolls along the string and the distance to the floor is 9.06 meters, then how long in seconds will it take the spool to hit the floor?"

I'm having trouble solving this. I tried setting the torque equal to r*m*g = (moment of inertia)*(a/r). After I solved for acceleration I threw it into the linear kinematics equation 9.06 = .5*a*t^2 to solve for time... but it didn't work out so well because I got the wrong answer. Am I thinking about this incorrectly? What other method can I use?
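A worked setup (an editorial addition, not a reply from the original thread): the attempt above takes $rmg$ as the torque, but gravity acts at the center of the spool and exerts no torque about it; it is the string tension $T$ that does. Writing Newton's second law together with the torque equation about the center, with $I = \tfrac12 mr^2$ for a solid cylinder and the rolling constraint $a = r\alpha$:

$$ma = mg - T, \qquad Tr = I\alpha = \tfrac{1}{2}mr^{2}\,\frac{a}{r} \;\Rightarrow\; T = \tfrac{1}{2}ma,$$

so $ma = mg - \tfrac12 ma$, giving $a = \tfrac{2}{3}g \approx 6.53\ \mathrm{m/s^2}$ (note that the mass and radius cancel). Then

$$d = \tfrac12 a t^{2} \;\Rightarrow\; t = \sqrt{\frac{2(9.06)}{6.53}} \approx 1.67\ \mathrm{s}.$$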
https://docs.scipy.org/doc/scipy-1.2.1/reference/generated/scipy.special.roots_laguerre.html
scipy.special.roots_laguerre

scipy.special.roots_laguerre(n, mu=False) [source]

Computes the sample points and weights for Gauss-Laguerre quadrature. The sample points are the roots of the n-th degree Laguerre polynomial, $$L_n(x)$$. These sample points and weights correctly integrate polynomials of degree $$2n - 1$$ or less over the interval $$[0, \infty]$$ with weight function $$f(x) = e^{-x}$$.
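A minimal usage sketch (an editorial addition, not part of the SciPy page). Because the weight $$e^{-x}$$ is built into the rule, summing the weights against $$x^2$$ approximates $$\int_0^\infty e^{-x}x^2\,dx = \Gamma(3) = 2$$, and is exact for any rule with $$n \ge 2$$ nodes:

import numpy as np
from scipy.special import roots_laguerre

# sample points and weights of the 5-point Gauss-Laguerre rule;
# a 5-node rule integrates polynomials up to degree 9 exactly
x, w = roots_laguerre(5)

# integrand WITHOUT the weight e^{-x}, which the rule supplies itself
approx = np.sum(w * x**2)
print(approx)   # ~2.0, up to floating-point rounding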
https://www.bartleby.com/questions-and-answers/20percent-20percent-4j40-10-j80-10percent-at-100-mva-10percent-at-100-mva-f1-la-xt-40percent-xlt-20p/88338189-2f67-40bc-bb30-068b27f9ee70
[Figure 4.25: System diagram for Problem 4.10. The figure itself did not survive extraction; recoverable labels include 20%, 20%, a line impedance 4 + j40 (10 + j80), sources rated 10% at 100 MVA, fault F1, X_T = 40%, X_LT = 20%, X_HL = 10%, and 110/220 kV.]

Question

You are given that the system shown in Figure 4.25 has a 110/220 kV autotransformer. The positive- and zero-sequence impedances in ohms or percent are as shown in the figure, the zero-sequence impedances being in parentheses. Assume that the low-voltage system is solidly grounded. For a phase-a-to-ground fault at the midpoint of the transmission line, calculate the transformer current In in the neutral and the phase a currents Ia and I'a on the high and low sides of the transformer. If the source on the low-voltage side is to be grounded through a reactance, determine the value of the grounding reactance for which the transformer neutral current becomes zero. As the grounding reactance changes around this value, the direction of the neutral current will reverse, and will affect the polarizing capability of the neutral current for ground faults on the high side. Can faults on the low-voltage side ever cause the neutral current to reverse?

Step 1

Since there is a single-line-to-ground fault on phase a, the fault current in phase a will be I_F and the fault currents in the other phases will be zero. For a single-line-to-ground fault, the zero-sequence fault current (Iaf(0)), positive-sequence fault current (Iaf(1)) and negative-sequence fault current (Iaf(2)) are calculated using the symmetrical-component transformation matrix, in standard notation.

Step 2

For a single-line-to-ground fault the positive-, negative-, and zero-sequence fault currents are all equal. Calculate the zero-sequence fault current (Iaf(0)) to get the current in the neutral ground, and also the currents on the high-voltage and low-voltage sides of the autotransformer. For calculating the zero-sequence fault current, draw the Thevenin equivalent circuit for the positive, negative, and zero sequences, then calculate the equivalent Thevenin voltage and impedance for each sequence.

Step 3

Since the transmission line is connected on the high-voltage side and the impedance of the transmission line is given as an actual value, convert t...
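For reference (an editorial addition; the "standard notation" mentioned in Step 1 presumably refers to these textbook single-line-to-ground fault relations): with $a = e^{j2\pi/3}$, the symmetrical-component transformation is

$$\begin{pmatrix} I^{(0)}\\ I^{(1)}\\ I^{(2)} \end{pmatrix} = \frac{1}{3}\begin{pmatrix} 1 & 1 & 1\\ 1 & a & a^{2}\\ 1 & a^{2} & a \end{pmatrix}\begin{pmatrix} I_a\\ I_b\\ I_c \end{pmatrix},$$

and for a phase-a-to-ground fault ($I_b = I_c = 0$) through grounding impedance $Z_g$,

$$I^{(0)} = I^{(1)} = I^{(2)} = \frac{I_{aF}}{3}, \qquad I_{aF} = \frac{3E_a}{Z^{(0)}+Z^{(1)}+Z^{(2)}+3Z_g}.$$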
https://forum.allaboutcircuits.com/threads/decibel-power-voltage-conversion.84968/
# Decibel Power/Voltage Conversion

Discussion in 'Homework Help' started by blah2222, May 14, 2013.

1. ### blah2222 (Thread Starter)

Something that I had never really received a good answer for, and have just taken for granted, is why the input and output resistances used in the gain equation are considered equal to simplify the decibel calculation. I.e.

$\text{Gain} = 10\log\left(\frac{P_{out}}{P_{in}}\right) = 10\log\left(\frac{V_{out}^{2}/R_{out}}{V_{in}^{2}/R_{in}}\right) \approx 20\log\left(\frac{V_{out}}{V_{in}}\right) \qquad (R_{out} = R_{in})$

Very subtle note, but it is often overlooked. Considering most amplifiers have high input impedance and low output impedance, why do we not take these into account for voltage gain?

2. ### crutschow

Strictly speaking a dB is a ratio of power levels, so it only works properly for voltage if the two voltages are working into the same resistance value. But commonly the voltage ratio is used without regard to resistance level for convenience. Thus one can refer to an amplifier as having a voltage gain of say 20dB (a factor of 10) without knowing (or caring) what the input and output resistance levels are. But if you want the power gain of an amplifier then you need to include the input and output resistance values. That is commonly done for RF amplifier modules which typically have the same input and output impedance levels matched to the transmission line impedance.

3. ### bountyhunter

The definition of decibel is a logarithmic measure of a power ratio: $\mathrm{dB} = 10\log_{10}(P_1/P_0)$. I was taught its origin was in Bell Labs, to have a unit for measuring sound levels (which are log scaled), not voltage or current. For the unit to be meaningful, the parameters' units have to be consistent throughout. Many people fail to grasp that dB has to be referenced to something if it is used as an actual measurement. The sound scale often used is dBA, which has a reference level (that I don't remember) so that dBA meters can be used to measure actual sound levels. If no reference is built in, then a dB is simply a measurement of a ratio, not a value.
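To make the impedance dependence explicit (an editorial addition; the component values below are invented for illustration): keeping $R_{in}$ and $R_{out}$ in the formula instead of cancelling them,

$$G_{dB} = 10\log_{10}\frac{V_{out}^{2}/R_{out}}{V_{in}^{2}/R_{in}} = 20\log_{10}\frac{V_{out}}{V_{in}} + 10\log_{10}\frac{R_{in}}{R_{out}}.$$

So an amplifier with $V_{out}/V_{in} = 10$, $R_{in} = 1\,\mathrm{M\Omega}$ and $R_{out} = 100\,\Omega$ has a voltage gain of 20 dB but a power gain of $20 + 10\log_{10}(10^{4}) = 60$ dB, which is why the two usages must not be mixed.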
https://mathoverflow.net/questions/244449/conditions-for-convergence-to-non-isolated-fixed-points
# Conditions for convergence to non-isolated fixed points

Consider a dynamical system of the form
$$\dot{x}=f(x), \quad x\in X,$$
and assume that the system possesses a set of non-isolated fixed points. Suppose moreover that there exists a Lyapunov function $V(x)$ whose derivative is non-positive along the trajectories of the system. By virtue of the LaSalle invariance principle we know that the system will converge to the largest invariant set contained in $\{x\in X\,:\, \dot{V}(x)=0\}$. Now, I know that there exist counter-examples showing that the system can converge to some trajectories in the invariant set without necessarily approaching a point (see for instance this post). My question is whether there exist additional conditions (different from the fact that the set of fixed points is isolated) on the system and/or the Lyapunov function $V(x)$ which guarantee asymptotic convergence to fixed points and not to "moving" trajectories.

Possibly, the invariant set localization technique can be useful to you. Each periodic trajectory in a compact invariant set contains at least two points of the set
$$S_h= \{ x\in X : \dot h(x)=0 \},$$
where $h(x)$ is any $C^{\infty}$ function on $X$. This fact can be used to prove non-existence of non-zero trajectories completely contained in $\{ x\in X: \dot V(x)=0 \}$.

Yes, there do exist sufficient conditions for asymptotic stability when the derivative of the Lyapunov function is only negative semi-definite, which I describe below.

Krasovsky's Theorem

Given an autonomous ODE $\dot x = f(x)$ with a fixed point at the origin. Let $K$ be a manifold that does not contain entire trajectories of the ODE. If there exists a Lyapunov function $V$ such that the orbital derivative $\dot V$ satisfies:

1. $\dot V<0$ outside of $K$
2. $\dot V=0$ on $K$

then the origin is asymptotically stable.

If $K = \{ x : F(x)=0\}$, here is a sufficient condition for $K$ to not contain entire trajectories of the ODE:
$$(f^T \nabla F)(x) \ne 0 \quad \text{on } K \setminus \{ \mathbf{0} \},$$
where $f$ is the vector field of the ODE. (This just ensures that the vector field $f$ is never orthogonal to the normal to the surface.)

Example

Consider the ODE: $\dot x_1 = -x_1 + 3 x_2^2, \quad \dot x_2 = - x_1 x_2 - x_2^3.$ In this case, a Lyapunov function is given by $V(x_1,x_2)=(x_1^2+x_2^2)/2$, whose orbital derivative is negative semi-definite. The set where $\dot V=0$ is given by the zero level set of the function $F(x_1,x_2)=x_1 - x_2^2$. Note that $(f^T \nabla F)(x_1,x_2) = 2 x_2^2 + 4 x_2^4 \ne 0$ on $K \setminus \{ \mathbf{0} \}$. Thus, by Krasovsky's Theorem the origin is asymptotically stable, as illustrated in the graphic below. In this graphic, two of the axes correspond to the state variables $x_1$ and $x_2$, and the other axis is the time variable $t$. The red line marks the state $x_1=0$ and $x_2=0$ for the time interval shown. Different grey shading is used for trajectories with different initial conditions.

Here is a cartoon from the book referenced below, which illustrates the idea behind Krasovsky's Theorem. The dark line labelled $\gamma$ represents a solution of the ODE, the lighter lines are contour lines of the semi-definite Lyapunov function $V$, and the dashed region is $K$, where $\dot V=0$. This cartoon nicely illustrates how the dynamic avoids getting stuck inside $K$ and, instead, asymptotically reaches a fixed point. Note that if the ODE solution enters $K$ then the value of the Lyapunov function does not change.
This is illustrated by the curve remaining on a level curve of $V$ in the dashed region labelled $K$. However, eventually the ODE solution must exit $K$ (by hypothesis of Krasovsky's Theorem), after which the Lyapunov function again decreases. This picture suggests that a set of non-isolated fixed points can be reached in this fashion.

Reference

David R. Merkin [1997]. Introduction to the Theory of Stability. Texts in Applied Mathematics. Springer. Translated from Russian by Andrei L. Smirnov and Fred Afagh.

• Thanks for the answer. In my question, I assume that the dynamical system possesses a set of non-isolated fixed points, and I ask whether there exist conditions which imply convergence to a point in the set of fixed points (and not to the set itself). It's not clear to me if your answer applies to this framework. Sep 22 '16 at 19:46
• As in LaSalle's principle, the set K (for Krasovsky) is allowed to be non-isolated. Krasovsky's theorem is a general-purpose result for autonomous ODEs that guarantees asymptotic convergence to fixed points and not relative equilibria, as requested. Sep 22 '16 at 21:19
• I'm still a bit confused. I'm not talking about the set $K$ but the set of fixed points of $f$, namely $\mathrm{Fix}(f):=\{x\in X : f(x)=0\}$. In particular, in your example the fixed point at the origin is isolated. Sep 22 '16 at 23:06
• I added the idea behind the proof. Since the proof relies on a Lyapunov function and the fact that K does not contain an entire trajectory of the ODE, I think it can be adapted to your setting where the set of fixed points is non-isolated. Sep 23 '16 at 11:20
• Unfortunately, the assumption that $K$ does not contain entire trajectories (other than the trajectory in $0$, I guess) rules out the case that the OP is interested in. If you have a set of non-isolated fixed points, then every such fixed point gives rise to an entire trajectory. So if $K$ is this set then it completely consists of entire trajectories. Oct 7 at 18:05
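As an editorial check of the example in the answer above, the orbital derivative can be computed explicitly, which makes both the semi-definiteness and the set $K$ visible at a glance:

$$\dot V = x_1\dot x_1 + x_2\dot x_2 = x_1(-x_1+3x_2^{2}) + x_2(-x_1x_2 - x_2^{3}) = -x_1^{2} + 2x_1x_2^{2} - x_2^{4} = -(x_1 - x_2^{2})^{2} \le 0,$$

which vanishes exactly on the zero level set $K=\{x_1 = x_2^{2}\}$ of $F$.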
http://www.chegg.com/homework-help/questions-and-answers/mary-applies-force-73-n-push-box-acceleration-059-m-s2-increases-pushing-force-82-n-box-s--q4407420
Mary applies a force of 73 N to push a box with an acceleration of 0.59 m/s^2. When she increases the pushing force to 82 N, the box's acceleration changes to 0.81 m/s^2. There is a constant friction force present between the floor and the box. (a) What is the mass of the box (in kg)? (b) What is the coefficient of kinetic friction between the floor and the box?
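A worked setup (an editorial addition; the site's own step-by-step solution is not shown here): with a constant friction force $f$, Newton's second law for the two pushes gives two equations in the two unknowns $m$ and $f$:

$$F_1 - f = ma_1, \quad F_2 - f = ma_2 \;\Rightarrow\; m = \frac{F_2-F_1}{a_2-a_1} = \frac{82-73}{0.81-0.59} \approx 40.9\ \mathrm{kg},$$

$$f = F_1 - ma_1 \approx 73 - (40.9)(0.59) \approx 48.9\ \mathrm{N}, \qquad \mu_k = \frac{f}{mg} \approx \frac{48.9}{(40.9)(9.8)} \approx 0.12.$$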
https://archivesic.ccsd.cnrs.fr/IRIT-ADRIA/hal-03614899v1
## Long term dynamics of the subgradient method for Lipschitz path differentiable functions

Jérôme Bolte, Edouard Pauwels, Rodolfo Ríos-Zertuche
Journal of the European Mathematical Society, 2022

#### Abstract

We consider the long-term dynamics of the vanishing-stepsize subgradient method in the case when the objective function is neither smooth nor convex. We assume that this function is locally Lipschitz and path differentiable, i.e., admits a chain rule. Our study departs from other works in the sense that we focus on the behavior of the oscillations, and to do this we use closed measures. We recover known convergence results, establish new ones, and show a local principle of oscillation compensation for the velocities. Roughly speaking, the time average of the gradients around one limit point vanishes. This allows us to further analyze the structure of the oscillations, and establish their perpendicularity to the general drift.

### Dates and versions

hal-03614899, version 1 (24-01-2023)

### Cite

Jérôme Bolte, Edouard Pauwels, Rodolfo Ríos-Zertuche. Long term dynamics of the subgradient method for Lipschitz path differentiable functions. Journal of the European Mathematical Society, 2022, pp. 1-28. ⟨10.4171/JEMS/1285⟩. ⟨hal-03614899⟩
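As an editorial illustration (not code from the paper), the iteration the abstract studies is the vanishing-stepsize subgradient method. A minimal sketch, with the objective $f(x)=\lVert x\rVert_1$ chosen here as an assumed example of a locally Lipschitz, path differentiable, nonsmooth function:

import numpy as np

def subgradient_method(subgrad, x0, steps=10000):
    # vanishing-stepsize subgradient iteration x_{k+1} = x_k - a_k g_k,
    # with a_k = 1/(k+1), so that a_k -> 0 while sum a_k diverges
    x = np.asarray(x0, dtype=float)
    for k in range(steps):
        g = subgrad(x)          # any element of the subdifferential at x
        x = x - g / (k + 1)
    return x

# f(x) = ||x||_1; np.sign gives a valid subgradient almost everywhere
# (and 0 at the kinks, which also lies in the subdifferential [-1, 1])
subgrad = lambda x: np.sign(x)
print(subgradient_method(subgrad, [1.3, -0.7]))   # oscillates toward the minimizer 0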
http://math.stackexchange.com/questions/326845/question-about-symplectic-tranformations/378664
Suppose I know that two vectors $\vec{a}$ and $\vec{b}$ are perpendicular in a given basis spanned by basis vectors $\vec{x}$. Now suppose I transform to another basis $\vec{x'}$ using a symplectic transformation matrix $S$ (i.e. $SJS^{T} = J$ for some skew-symmetric matrix $J$). Will the transformed vectors $\vec{a}$ and $\vec{b}$ still be perpendicular after the transformation? If not, is there a way to figure out a relation between the dot products of the two vectors in the two bases? Thanks! -

I can only add this piece of information: $\langle S a,S b\rangle= a^TS^TSb$ if the vectors are considered as one-column matrices. $S$ need not preserve orthogonality. –  Berci Mar 10 '13 at 21:05

Consider $\mathbb{R}^2$ with the standard symplectic structure given by $\omega( (a,b), (a',b')) = ab' - ba'$. This corresponds to saying $\omega = \langle J \cdot, \cdot\rangle$ where $J$ is the almost complex structure given by
$$\begin{pmatrix} 0 & -1\\1 & 0 \end{pmatrix}$$
An easy symplectic and orthogonal basis for $\mathbb{R}^2$ is given by $e_1 = (1,0)$ and $e_2 = (0,1)$. A symplectic transformation is given by $e_1 \mapsto e_1 + e_2, e_2 \mapsto e_2$ (i.e. in matrix form $\begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}$). The image of this basis is clearly no longer orthogonal.
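An editorial verification of the counterexample: with $J=\begin{pmatrix}0&-1\\1&0\end{pmatrix}$ and $S=\begin{pmatrix}1&0\\1&1\end{pmatrix}$,

$$SJS^{T} = \begin{pmatrix}1&0\\1&1\end{pmatrix}\begin{pmatrix}0&-1\\1&0\end{pmatrix}\begin{pmatrix}1&1\\0&1\end{pmatrix} = \begin{pmatrix}0&-1\\1&0\end{pmatrix} = J,$$

so $S$ is symplectic, yet $\langle Se_1, Se_2\rangle = \langle (1,1),(0,1)\rangle = 1 \neq 0$: orthogonality is lost. In general the dot products in the two bases are related through the matrix $S^{T}S$, as in Berci's comment.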
https://wiki2.org/en/K-theory
# K-theory

In mathematics, K-theory is, roughly speaking, the study of a ring generated by vector bundles over a topological space or scheme. In algebraic topology, it is a cohomology theory known as topological K-theory. In algebra and algebraic geometry, it is referred to as algebraic K-theory. It is also a fundamental tool in the field of operator algebras. It can be seen as the study of certain kinds of invariants of large matrices.[1]

K-theory involves the construction of families of K-functors that map from topological spaces or schemes to associated rings; these rings reflect some aspects of the structure of the original spaces or schemes. As with functors to groups in algebraic topology, the reason for this functorial mapping is that it is easier to compute some topological properties from the mapped rings than from the original spaces or schemes. Examples of results gleaned from the K-theory approach include the Grothendieck–Riemann–Roch theorem, Bott periodicity, the Atiyah–Singer index theorem, and the Adams operations.

In high energy physics, K-theory and in particular twisted K-theory have appeared in Type II string theory, where it has been conjectured that they classify D-branes, Ramond–Ramond field strengths and also certain spinors on generalized complex manifolds. In condensed matter physics, K-theory has been used to classify topological insulators, superconductors and stable Fermi surfaces. For more details, see K-theory (physics).

## Grothendieck completion

The Grothendieck completion of an abelian monoid into an abelian group is a necessary ingredient for defining K-theory, since all definitions start by constructing an abelian monoid from a suitable category and turning it into an abelian group through this universal construction. Given an abelian monoid $(A, +')$, let $\sim$ be the relation on $A^2 = A \times A$ defined by $(a_1, a_2) \sim (b_1, b_2)$ if there exists a $c \in A$ such that $a_1 +' b_2 +' c = a_2 +' b_1 +' c$. Then the set $G(A) = A^2/\sim$ has the structure of a group $(G(A), +)$, where
$$[(a_1, a_2)] + [(b_1, b_2)] = [(a_1 +' b_1,\; a_2 +' b_2)].$$
Equivalence classes in this group should be thought of as formal differences of elements in the abelian monoid. This group $(G(A), +)$ is also associated with a monoid homomorphism $i : A \to G(A)$ given by $a \mapsto [(a, 0)]$, which has a certain universal property. To get a better understanding of this group, consider some equivalence classes of the abelian monoid $(A, +')$.
Here we will denote the identity element of $A$ by $0$, so that $[(0, 0)]$ will be the identity element of $(G(A), +)$. First, $(0, 0) \sim (n, n)$ for any $n \in A$, since we can set $c = 0$ and apply the equation from the equivalence relation to get $n = n$. This implies
$$[(a, b)] + [(b, a)] = [(a + b,\; a + b)] = [(0, 0)],$$
hence we have an additive inverse for each element in $G(A)$. This should give us the hint that we should be thinking of the equivalence classes $[(a, b)]$ as formal differences $a - b$. Another useful observation is the invariance of equivalence classes under scaling: $(a, b) \sim (a + k,\; b + k)$ for any $k \in A$.

The Grothendieck completion can be viewed as a functor $G : \mathbf{AbMon} \to \mathbf{AbGrp}$, and it has the property that it is left adjoint to the corresponding forgetful functor $U : \mathbf{AbGrp} \to \mathbf{AbMon}$. That means that, given a morphism $\phi : A \to U(B)$ of an abelian monoid $A$ to the underlying abelian monoid of an abelian group $B$, there exists a unique abelian group morphism $G(A) \to B$.

### Example for natural numbers

An illustrative example to look at is the Grothendieck completion of $\mathbb{N}$. We can see that $G((\mathbb{N}, +)) = (\mathbb{Z}, +)$. For any pair $(a, b)$ we can find a minimal representative $(a', b')$ by using the invariance under scaling. For example, we can see from the scaling invariance that
$$(4, 6) \sim (3, 5) \sim (2, 4) \sim (1, 3) \sim (0, 2).$$
In general, if $k := \min\{a, b\}$, then $(a, b) \sim (a - k,\; b - k)$, which is of the form $(c, 0)$ or $(0, d)$. This shows that we should think of the $(a, 0)$ as positive integers and the $(0, b)$ as negative integers.

## Definitions

There are a number of basic definitions of K-theory: two coming from topology and two from algebraic geometry.

### Grothendieck group for compact Hausdorff spaces

Given a compact Hausdorff space $X$, consider the set of isomorphism classes of finite-dimensional vector bundles over $X$, denoted $\text{Vect}(X)$, and let the isomorphism class of a vector bundle $\pi : E \to X$ be denoted $[E]$. Since isomorphism classes of vector bundles behave well with respect to direct sums, we can write these operations on isomorphism classes as
$$[E] \oplus [E'] = [E \oplus E'].$$
It should be clear that $(\text{Vect}(X), \oplus)$ is an abelian monoid where the unit is given by the trivial vector bundle $\mathbb{R}^0 \times X \to X$. We can then apply the Grothendieck completion to get an abelian group from this abelian monoid. This is called the K-theory of $X$ and is denoted $K^0(X)$.

We can use the Serre–Swan theorem and some algebra to get an alternative description of vector bundles as projective modules over the ring of continuous complex-valued functions $C^0(X; \mathbb{C})$. These can then be identified with idempotent matrices in some ring of matrices $M_{n \times n}(C^0(X; \mathbb{C}))$.
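To make the idempotent-matrix picture concrete, here is a minimal numpy sketch (ours, not the article's): the Bott projector on the 2-sphere, a continuous family of rank-1 idempotent matrices, i.e. a point of the monoid of idempotents over $C^0(S^2; \mathbb{C})$; its class is the standard nontrivial generator used in computations of $K^0(S^2)$.

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),      # Pauli matrices
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def bott_projector(x):
    """Idempotent 2x2 matrix attached to a point x on the unit sphere:
    p(x) = (1/2)(I + x . sigma); p^2 = p since (x . sigma)^2 = |x|^2 I = I."""
    return 0.5 * (np.eye(2, dtype=complex) + sum(xi * s for xi, s in zip(x, sigma)))

rng = np.random.default_rng(0)
x = rng.normal(size=3)
x /= np.linalg.norm(x)                    # a random point of S^2
p = bott_projector(x)
print(np.allclose(p @ p, p))              # True: idempotent at every point
print(np.isclose(np.trace(p).real, 1.0))  # True: rank 1, so a line bundle
```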
We can define equivalence classes of idempotent matrices and form an abelian monoid $\textbf{Idem}(X)$. Its Grothendieck completion is also called $K^0(X)$. One of the main techniques for computing the Grothendieck group for topological spaces comes from the Atiyah–Hirzebruch spectral sequence, which makes it very accessible. The only computations required for understanding the spectral sequence are the groups $K^0$ of the spheres $S^n$ [2, pp. 51–110].

### Grothendieck group of vector bundles in algebraic geometry

There is an analogous construction using vector bundles in algebraic geometry. For a Noetherian scheme $X$ there is a set $\text{Vect}(X)$ of all isomorphism classes of algebraic vector bundles on $X$. Then, as before, the direct sum $\oplus$ of isomorphism classes of vector bundles is well-defined, giving an abelian monoid $(\text{Vect}(X), \oplus)$. The Grothendieck group $K^0(X)$ is then defined by applying the Grothendieck construction to this abelian monoid.

### Grothendieck group of coherent sheaves in algebraic geometry

In algebraic geometry, the same construction can be applied to algebraic vector bundles over a smooth scheme. But there is an alternative construction for any Noetherian scheme $X$. If we look at the isomorphism classes of coherent sheaves $\operatorname{Coh}(X)$, we can mod out by the relation $[\mathcal{E}] = [\mathcal{E}'] + [\mathcal{E}'']$ if there is a short exact sequence
$$0 \to \mathcal{E}' \to \mathcal{E} \to \mathcal{E}'' \to 0.$$
This gives the Grothendieck group $K_0(X)$, which is isomorphic to $K^0(X)$ if $X$ is smooth. The group $K_0(X)$ is special because there is also a ring structure: we define it as
$$[\mathcal{E}] \cdot [\mathcal{E}'] = \sum (-1)^k \left[\operatorname{Tor}_k^{\mathcal{O}_X}(\mathcal{E}, \mathcal{E}')\right].$$
Using the Grothendieck–Riemann–Roch theorem, we have that
$$\operatorname{ch} : K_0(X) \otimes \mathbb{Q} \to A(X) \otimes \mathbb{Q}$$
is an isomorphism of rings. Hence we can use $K_0(X)$ for intersection theory.[3]

## Early history

The subject can be said to begin with Alexander Grothendieck (1957), who used it to formulate his Grothendieck–Riemann–Roch theorem. It takes its name from the German Klasse, meaning "class".[4] Grothendieck needed to work with coherent sheaves on an algebraic variety X. Rather than working directly with the sheaves, he defined a group using isomorphism classes of sheaves as generators of the group, subject to a relation that identifies any extension of two sheaves with their sum. The resulting group is called K(X) when only locally free sheaves are used, or G(X) when all are coherent sheaves. Either of these two constructions is referred to as the Grothendieck group; K(X) has cohomological behavior and G(X) has homological behavior. If X is a smooth variety, the two groups are the same. If it is a smooth affine variety, then all extensions of locally free sheaves split, so the group has an alternative definition.
In topology, by applying the same construction to vector bundles, Michael Atiyah and Friedrich Hirzebruch defined K(X) for a topological space X in 1959, and using the Bott periodicity theorem they made it the basis of an extraordinary cohomology theory. It played a major role in the second proof of the Atiyah–Singer index theorem (circa 1962). Furthermore, this approach led to a noncommutative K-theory for C*-algebras. Already in 1955, Jean-Pierre Serre had used the analogy of vector bundles with projective modules to formulate Serre's conjecture, which states that every finitely generated projective module over a polynomial ring is free; this assertion is correct, but was not settled until 20 years later. (Swan's theorem is another aspect of this analogy.)

## Developments

The other historical origin of algebraic K-theory was the work of J. H. C. Whitehead and others on what later became known as Whitehead torsion. There followed a period in which there were various partial definitions of higher K-theory functors. Finally, two useful and equivalent definitions were given by Daniel Quillen using homotopy theory in 1969 and 1972. A variant was also given by Friedhelm Waldhausen in order to study the algebraic K-theory of spaces, which is related to the study of pseudo-isotopies. Much modern research on higher K-theory is related to algebraic geometry and the study of motivic cohomology. The corresponding constructions involving an auxiliary quadratic form received the general name L-theory. It is a major tool of surgery theory. In string theory, the K-theory classification of Ramond–Ramond field strengths and the charges of stable D-branes was first proposed in 1997.[5]

## Examples and properties

### K0 of a field

The easiest example of the Grothendieck group is the Grothendieck group of a point $\text{Spec}(\mathbb{F})$ for a field $\mathbb{F}$. Since a vector bundle over this space is just a finite-dimensional vector space, which is a free (hence projective) object in the category of coherent sheaves, the monoid of isomorphism classes is $\mathbb{N}$, corresponding to the dimension of the vector space. It is an easy exercise to show that the Grothendieck group is then $\mathbb{Z}$.

### K0 of an Artinian algebra over a field

One important property of the Grothendieck group of a Noetherian scheme $X$ is that it is invariant under reduction, hence $K(X) = K(X_{\text{red}})$.[6] Hence the Grothendieck group of any Artinian $\mathbb{F}$-algebra is a direct sum of copies of $\mathbb{Z}$, one for each connected component of its spectrum. For example,
$$K_0\left(\text{Spec}\left(\frac{\mathbb{F}[x]}{(x^9)} \times \mathbb{F}\right)\right) = \mathbb{Z} \oplus \mathbb{Z}.$$

### K0 of projective space

One of the most commonly used computations of the Grothendieck group is the computation of $K(\mathbb{P}^n)$ for projective space over a field. This is because the intersection numbers of a projective variety $X$ can be computed by embedding $i : X \hookrightarrow \mathbb{P}^n$ and using the push-pull formula $i^*([i_*\mathcal{E}] \cdot [i_*\mathcal{F}])$.
This makes it possible to do concrete calculations with elements in $K(X)$ without having to know its structure explicitly, since[7]
$$K(\mathbb{P}^n) = \frac{\mathbb{Z}[T]}{(T^{n+1})}.$$
One technique for determining the Grothendieck group of $\mathbb{P}^n$ comes from its stratification as
$$\mathbb{P}^n = \mathbb{A}^n \coprod \mathbb{A}^{n-1} \coprod \cdots \coprod \mathbb{A}^0,$$
since the Grothendieck group of coherent sheaves on affine space is isomorphic to $\mathbb{Z}$, and the intersection of $\mathbb{A}^{n-k_1}$ and $\mathbb{A}^{n-k_2}$ is generically $\mathbb{A}^{n-k_1} \cap \mathbb{A}^{n-k_2} = \mathbb{A}^{n-k_1-k_2}$ for $k_1 + k_2 \le n$.

### K0 of a projective bundle

Another important formula for the Grothendieck group is the projective bundle formula:[8] given a rank r vector bundle $\mathcal{E}$ over a Noetherian scheme $X$, the Grothendieck group of the projective bundle $\mathbb{P}(\mathcal{E}) = \operatorname{Proj}(\operatorname{Sym}^\bullet(\mathcal{E}^\vee))$ is a free $K(X)$-module of rank r with basis $1, \xi, \dots, \xi^{r-1}$. This formula allows one to compute the Grothendieck group of $\mathbb{P}^n_{\mathbb{F}}$, and makes it possible to compute the $K_0$ of Hirzebruch surfaces. In addition, it can be used to compute the Grothendieck group $K(\mathbb{P}^n)$ by observing that $\mathbb{P}^n$ is a projective bundle over the field $\mathbb{F}$.

### K0 of singular spaces and spaces with isolated quotient singularities

One recent technique for computing the Grothendieck group of spaces with minor singularities comes from evaluating the difference between $K^0(X)$ and $K_0(X)$, which rests on the fact that every vector bundle can be equivalently described as a coherent sheaf. This is done using the Grothendieck group of the singularity category $D_{sg}(X)$ [9][10] from derived noncommutative algebraic geometry. It gives a long exact sequence starting with
$$\cdots \to K^0(X) \to K_0(X) \to K_{sg}(X) \to 0,$$
where the higher terms come from higher K-theory. Note that vector bundles on a singular $X$ are given by vector bundles $E \to X_{sm}$ on the smooth locus $X_{sm} \hookrightarrow X$. This makes it possible to compute the Grothendieck group on weighted projective spaces, since they typically have isolated quotient singularities. In particular, if these singularities have isotropy groups $G_i$, then the map $K^0(X) \to K_0(X)$ is injective and the cokernel is annihilated by $\text{lcm}(|G_1|, \ldots, |G_k|)^{n-1}$ for $n = \dim X$ [10, p. 3].

### K0 of a smooth projective curve

For a smooth projective curve $C$ the Grothendieck group is
$$K_0(C) = \mathbb{Z} \oplus \text{Pic}(C),$$
where $\text{Pic}(C)$ is the Picard group of $C$. This follows from the Brown–Gersten–Quillen spectral sequence [11, p. 72] of algebraic K-theory.
For a regular scheme of finite type over a field, there is a convergent spectral sequence
$$E_1^{p,q} = \coprod_{x \in X^{(p)}} K^{-p-q}(k(x)) \Rightarrow K_{-p-q}(X)$$
for $X^{(p)}$ the set of codimension-$p$ points, meaning the set of subschemes $x : Y \to X$ of codimension $p$, and $k(x)$ the algebraic function field of the subscheme. This spectral sequence has the property [11, p. 80]
$$E_2^{p,-p} \cong \text{CH}^p(X)$$
for the Chow ring of $X$, essentially giving the computation of $K_0(C)$. Note that because $C$ has no codimension-$2$ points, the only nontrivial parts of the spectral sequence are $E_1^{0,q}$ and $E_1^{1,q}$, hence
$$E_\infty^{1,-1} \cong E_2^{1,-1} \cong \text{CH}^1(C), \qquad E_\infty^{0,0} \cong E_2^{0,0} \cong \text{CH}^0(C).$$
The coniveau filtration can then be used to determine $K_0(C)$ as the desired explicit direct sum, since it gives an exact sequence
$$0 \to F^1(K_0(X)) \to K_0(X) \to K_0(X)/F^1(K_0(X)) \to 0,$$
where the left-hand term is isomorphic to $\text{CH}^1(C) \cong \text{Pic}(C)$ and the right-hand term is isomorphic to $\text{CH}^0(C) \cong \mathbb{Z}$. Since $\text{Ext}^1_{\text{Ab}}(\mathbb{Z}, G) = 0$, the sequence of abelian groups above splits, giving the isomorphism. Note that if $C$ is a smooth projective curve of genus $g$ over $\mathbb{C}$, then
$$K_0(C) \cong \mathbb{Z} \oplus (\mathbb{C}^g/\mathbb{Z}^{2g}).$$
Moreover, the techniques above using the derived category of singularities for isolated singularities can be extended to isolated Cohen–Macaulay singularities, giving techniques for computing the Grothendieck group of any singular algebraic curve. This is because reduction gives a generically smooth curve, and all singularities are Cohen–Macaulay.

## Applications

### Virtual bundles

One useful application of the Grothendieck group is to define virtual vector bundles. For example, if we have an embedding of smooth spaces $Y \hookrightarrow X$, then there is a short exact sequence
$$0 \to C_{Y/X} \to \Omega_X|_Y \to \Omega_Y \to 0,$$
where $C_{Y/X}$ is the conormal bundle of $Y$ in $X$. If we have a singular space $Y$ embedded into a smooth space $X$, we define the virtual conormal bundle as
$$[\Omega_X|_Y] - [\Omega_Y].$$
Another useful application of virtual bundles is the definition of a virtual tangent bundle of an intersection of spaces: let $Y_1, Y_2 \subset X$ be projective subvarieties of a smooth projective variety. Then we can define the virtual tangent bundle of their intersection $Z = Y_1 \cap Y_2$ as
$$[T_Z]^{vir} = [T_{Y_1}]|_Z + [T_{Y_2}]|_Z - [T_X]|_Z.$$
Kontsevich uses this construction in one of his papers.[12]

### Chern characters

Chern classes can be used to construct a homomorphism of rings from the topological K-theory of a space to (the completion of) its rational cohomology.
For a line bundle L, the Chern character ch is defined by
$$\operatorname{ch}(L) = \exp(c_1(L)) := \sum_{m=0}^{\infty} \frac{c_1(L)^m}{m!}.$$
More generally, if $V = L_1 \oplus \dots \oplus L_n$ is a direct sum of line bundles with first Chern classes $x_i = c_1(L_i)$, the Chern character is defined additively:
$$\operatorname{ch}(V) = e^{x_1} + \dots + e^{x_n} := \sum_{m=0}^{\infty} \frac{1}{m!}(x_1^m + \dots + x_n^m).$$
The Chern character is useful in part because it facilitates the computation of the Chern class of a tensor product. The Chern character is used in the Hirzebruch–Riemann–Roch theorem.

## Equivariant K-theory

The equivariant algebraic K-theory is an algebraic K-theory associated to the category $\operatorname{Coh}^G(X)$ of equivariant coherent sheaves on an algebraic scheme $X$ with an action of a linear algebraic group $G$, via Quillen's Q-construction; thus, by definition,
$$K_i^G(X) = \pi_i(B^+\operatorname{Coh}^G(X)).$$
In particular, $K_0^G(X)$ is the Grothendieck group of $\operatorname{Coh}^G(X)$. The theory was developed by R. W. Thomason in the 1980s.[13] Specifically, he proved equivariant analogs of fundamental theorems such as the localization theorem.

## Notes

1. Atiyah, Michael (2000). "K-Theory Past and Present". arXiv:math/0012213.
2. Park, Efton (2008). Complex Topological K-Theory. Cambridge: Cambridge University Press. ISBN 978-0-511-38869-9. OCLC 227161674.
3. ^
4. Karoubi, 2006.
5. Ruben Minasian (http://string.lpthe.jussieu.fr/members.pl?key=7) and Gregory Moore, K-theory and Ramond–Ramond Charge.
6. "Grothendieck group for projective space over the dual numbers". mathoverflow.net. Retrieved 2017-04-16.
7. "kt.k-theory and homology - Grothendieck group for projective space over the dual numbers". MathOverflow. Retrieved 2020-10-20.
8. Manin, Yuri I (1969). "Lectures on the K-functor in algebraic geometry". Russian Mathematical Surveys. 24 (5): 1–89. Bibcode:1969RuMaS..24....1M. doi:10.1070/rm1969v024n05abeh001357. ISSN 0036-0279.
9. "ag.algebraic geometry - Is the algebraic Grothendieck group of a weighted projective space finitely generated?". MathOverflow. Retrieved 2020-10-20.
10. Pavic, Nebojsa; Shinder, Evgeny (2019). "K-theory and the singularity category of quotient singularities". arXiv:1809.10919 [math.AG].
11. Srinivas, V. (1991). Algebraic K-Theory. Boston: Birkhäuser. ISBN 978-1-4899-6735-0. OCLC 624583210.
12. Kontsevich, Maxim (1995). "Enumeration of rational curves via torus actions". The Moduli Space of Curves (Texel Island, 1994), Progress in Mathematics 129. Boston, MA: Birkhäuser, pp. 335–368. arXiv:hep-th/9405035. MR 1363062.
13. Charles A. Weibel, "Robert W. Thomason (1952–1995)".

## References

Text is available under the CC BY-SA 3.0 Unported License.
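Following up on the Chern character section above: a small symbolic sanity check (ours, assuming sympy, not the article's) that multiplicativity $\operatorname{ch}(L_1 \otimes L_2) = \operatorname{ch}(L_1)\operatorname{ch}(L_2)$ holds for line bundles up to a truncation degree, using $c_1(L_1 \otimes L_2) = x_1 + x_2$.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

def ch(x, order=4):
    """Chern character of a line bundle, exp(c1) truncated below total degree `order`."""
    return sum(x**m / sp.factorial(m) for m in range(order))

def truncate(expr, order=4):
    """Drop monomials of total degree >= order (c1 classes are nilpotent anyway)."""
    p = sp.Poly(sp.expand(expr), x1, x2)
    return sum(c * x1**i * x2**j for (i, j), c in p.terms() if i + j < order)

# c1(L1 tensor L2) = x1 + x2, so ch(L1 tensor L2) should equal ch(L1) * ch(L2)
lhs = truncate(ch(x1 + x2))
rhs = truncate(ch(x1) * ch(x2))
print(sp.simplify(lhs - rhs) == 0)   # True
```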
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 164, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9632728099822998, "perplexity": 283.2783258925919}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585203.61/warc/CC-MAIN-20211018124412-20211018154412-00214.warc.gz"}
https://byjus.com/question-answer/can-superconductors-exhibit-resonance-why-or-why-not/
# Question

Can superconductors exhibit resonance? Why or why not?

## Solution

Superconductors can exhibit resonance. It depends on the effect of a magnetic field on superconductivity and spin excitations. A magnetic field can suppress Tc and reduce the magnitude of the superconducting energy gap via either orbital pair breaking of Cooper pairs in the superconducting state or the Zeeman effect on electron spins. Recently, a copper-oxide superconductor has been reported that exhibits zero resistance at one, and only one, temperature: 109 kelvin. If confirmed, this would be the first compound to display a "resonant" superconductivity, where Cooper pairs form only at a specific temperature.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9260684847831726, "perplexity": 1642.4465334592526}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964360881.12/warc/CC-MAIN-20211201173718-20211201203718-00171.warc.gz"}
https://home.cern/news/news/physics/cms-measures-rare-particle-decay-high-precision
# CMS measures rare particle decay with high precision

Using LHC Run 2 data, CMS has precisely measured the rare decay of strange B mesons to muon–antimuon pairs. While its properties agree with Standard Model predictions, it may provide clues to new discoveries in Run 3.

At CERN’s Large Hadron Collider (LHC), studies of rare processes allow scientists to infer the presence of heavy particles, including undiscovered particles, that cannot be directly produced. Such particles are widely anticipated to exist beyond the Standard Model, and could help explain some of the enigmas of the universe, such as the existence of dark matter, the masses of neutrinos (elusive particles originally thought to be massless) and the universe’s matter–antimatter asymmetry.

One such process is the rare decay of neutral B mesons to a muon and antimuon pair: the heavier cousin of the electron paired with its corresponding antiparticle. There are two types of neutral B mesons: the B0 meson consists of a beauty antiquark and a down quark, while for the Bs meson the down quark is replaced by a strange quark. If there are no new particles affecting these rare decays, researchers have predicted that only one in 250 million Bs mesons will decay into a muon–antimuon pair; for the B0 meson, the process is even more rare, at only one in 10 billion.

Scientists have been searching for experimental confirmation of these decays since the 1980s. Only recently, in 2014, was the first observation of the Bs to muons decay reported in a combined analysis of data taken by the LHCb and CMS collaborations, later confirmed by the ATLAS, CMS and LHCb experiments individually. However, the B0 decay still eludes any attempt to observe it.

Using data from Run 2 of the LHC, the CMS experiment has released a new study of the decay rate and the lifetime of the Bs meson decay, as well as a search for the B0 decay. The new study, presented at the International Conference on High Energy Physics (ICHEP), benefits not only from the large amount of data analysed, but also from advanced machine-learning algorithms that single out the rare decay events from the overwhelming background of events produced by millions of particle collisions per second.

The results revealed a very clear signal of the Bs meson decaying to a muon–antimuon pair. The precision of the decay-rate measurement exceeds that achieved in previous measurements by other experiments. Both the observed Bs decay rate, found to be 3.8 ± 0.4 parts in a billion, and its lifetime measurement of 1.8 ± 0.2 picoseconds (one picosecond is one trillionth of a second), are very close to the values predicted by the Standard Model. As for the B0 decay, although no evidence of it was found from these results, physicists can state with 95% statistical confidence that its decay rate is less than 1 part in 5 billion.

In recent years, a number of anomalies have been observed in other rare B meson decays, with discrepancies between the theoretical predictions and the data, indicating the potential existence of new particles. The new CMS result is much closer to theoretical predictions than these other rare decays and so could help scientists to understand the nature of the anomalies.

Rare B meson decays continue to be of great interest to scientists. With the Bs meson to muons decay firmly established and measured with high precision, scientists are now setting their sights on the ultimate prize: the B0 decay.
With large data sets anticipated from LHC Run 3, they hope to catch the first glimpse of this extremely rare process and learn more about the puzzling anomalies. Read more on the CMS website.
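A quick back-of-the-envelope check (ours, not CMS's) that the quoted numbers are mutually consistent: "one in 250 million" corresponds to 4.0 parts in a billion, within half a standard deviation of the measured rate.

```python
# Consistency check of the figures quoted in the article above.
sm_prediction = 1 / 250e6            # "one in 250 million" Bs -> mu mu
measured, err = 3.8e-9, 0.4e-9       # CMS: (3.8 +/- 0.4) parts in a billion

print(f"SM prediction: {sm_prediction * 1e9:.1f} per billion")        # 4.0
print(f"Deviation: {abs(measured - sm_prediction) / err:.1f} sigma")  # 0.5
```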
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9246624708175659, "perplexity": 1030.9916270286844}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573760.75/warc/CC-MAIN-20220819191655-20220819221655-00604.warc.gz"}
https://en.unionpedia.org/Bra%E2%80%93ket_notation
# Bra–ket notation

In quantum mechanics, bra–ket notation is a standard notation for describing quantum states.[1]

## Abuse of notation

In mathematics, abuse of notation occurs when an author uses a mathematical notation in a way that is not formally correct but that seems likely to simplify the exposition or suggest the correct intuition (while being unlikely to introduce errors or cause confusion).

## Angular momentum diagrams (quantum mechanics)

In quantum mechanics and its applications to quantum many-particle systems, notably quantum chemistry, angular momentum diagrams, or more accurately from a mathematical viewpoint angular momentum graphs, are a diagrammatic method for representing angular momentum quantum states of a quantum system, allowing calculations to be done symbolically.

## Angular momentum operator

In quantum mechanics, the angular momentum operator is one of several related operators analogous to classical angular momentum.

## Antilinear map

In mathematics, a mapping $f : V \to W$ from a complex vector space to another is said to be antilinear (or conjugate-linear) if $f(ax + by) = \bar{a}f(x) + \bar{b}f(y)$ for all $a, b \in \mathbb{C}$ and all $x, y \in V$, where $\bar{a}$ and $\bar{b}$ are the complex conjugates of $a$ and $b$ respectively.

## Associative property

In mathematics, the associative property is a property of some binary operations.

## Asterisk

An asterisk (*), from Late Latin asteriscus, from Ancient Greek ἀστερίσκος, asteriskos, "little star", is a typographical symbol or glyph. It is so called because it resembles a conventional image of a star. Computer scientists and mathematicians often vocalize it as star (as, for example, in the A* search algorithm or C*-algebra). In English, an asterisk is usually five-pointed in sans-serif typefaces, six-pointed in serif typefaces, and six- or eight-pointed when handwritten. It is often used to censor offensive words and, on the Internet, to indicate a correction to a previous message. The asterisk is derived from the need of the printers of family trees in feudal times for a symbol to indicate date of birth. The original shape was seven-armed, each arm like a teardrop shooting from the center. In computer science, the asterisk is commonly used as a wildcard character, or to denote pointers, repetition, or multiplication.

## Banach space

In mathematics, more specifically in functional analysis, a Banach space is a complete normed vector space.
## Basis (linear algebra)

In mathematics, a set of elements (vectors) in a vector space V is called a basis, or a set of basis vectors, if the vectors are linearly independent and every vector in the vector space is a linear combination of this set.

## Borel functional calculus

In functional analysis, a branch of mathematics, the Borel functional calculus is a functional calculus (that is, an assignment of operators from commutative algebras to functions defined on their spectra) which has particularly broad scope.

## Bracket

A bracket is a tall punctuation mark typically used in matched pairs within text, to set apart or interject other text.

## Change of basis

In linear algebra, a basis for a vector space of dimension n is a set of n vectors, called basis vectors, with the property that every vector in the space can be expressed as a unique linear combination of the basis vectors.

## Complete metric space

In mathematical analysis, a metric space M is called complete (or a Cauchy space) if every Cauchy sequence of points in M has a limit that is also in M or, alternatively, if every Cauchy sequence in M converges in M. Intuitively, a space is complete if there are no "points missing" from it (inside or at the boundary).

## Complex conjugate

In mathematics, the complex conjugate of a complex number is the number with an equal real part and an imaginary part equal in magnitude but opposite in sign.

## Complex number

A complex number is a number that can be expressed in the form, where and are real numbers, and is a solution of the equation.

## Conjugate transpose

In mathematics, the conjugate transpose or Hermitian transpose of an m-by-n matrix A with complex entries is the n-by-m matrix A∗ obtained from A by taking the transpose and then taking the complex conjugate of each entry.

## Dirac delta function

In mathematics, the Dirac delta function (δ function) is a generalized function or distribution introduced by the physicist Paul Dirac.

## Displacement (vector)

A displacement is a vector whose length is the shortest distance from the initial to the final position of a point P. It quantifies both the distance and direction of an imaginary motion along a straight line from the initial position to the final position of the point.

## Dot product

In mathematics, the dot product or scalar product (a term also used more generally to mean a symmetric bilinear form, for example on a pseudo-Euclidean space) is an algebraic operation that takes two equal-length sequences of numbers and returns a single number.

## Dual space

In mathematics, any vector space V has a corresponding dual vector space (or just dual space for short) consisting of all linear functionals on V, together with the vector space structure of pointwise addition and scalar multiplication by constants.
## Einstein notation

In mathematics, especially in applications of linear algebra to physics, the Einstein notation or Einstein summation convention is a notational convention that implies summation over a set of indexed terms in a formula, thus achieving notational brevity.

## Energetic space

In mathematics, more precisely in functional analysis, an energetic space is, intuitively, a subspace of a given real Hilbert space equipped with a new "energetic" inner product.

## Energy

In physics, energy is the quantitative property that must be transferred to an object in order to perform work on, or to heat, the object.

## Expectation value (quantum mechanics)

In quantum mechanics, the expectation value is the probabilistic expected value of the result (measurement) of an experiment.

## Finite-rank operator

In functional analysis, a branch of mathematics, a finite-rank operator is a bounded linear operator between Banach spaces whose range is finite-dimensional.

## Function composition

In mathematics, function composition is the pointwise application of one function to the result of another to produce a third function.

## Functional analysis

Functional analysis is a branch of mathematical analysis, the core of which is formed by the study of vector spaces endowed with some kind of limit-related structure (e.g. inner product, norm, topology, etc.) and the linear functions defined on these spaces and respecting these structures in a suitable sense.

## Gelfand–Naimark–Segal construction

In functional analysis, a discipline within mathematics, given a C*-algebra A, the Gelfand–Naimark–Segal construction establishes a correspondence between cyclic *-representations of A and certain linear functionals on A (called states).

## Hermann Grassmann

Hermann Günther Grassmann (Graßmann; April 15, 1809 – September 26, 1877) was a German polymath, known in his day as a linguist and now also as a mathematician.

## Hermitian adjoint

In mathematics, specifically in functional analysis, each bounded linear operator on a complex Hilbert space has a corresponding adjoint operator.

## Hilbert space

The mathematical concept of a Hilbert space, named after David Hilbert, generalizes the notion of Euclidean space.

## Identical particles

Identical particles, also called indistinguishable or indiscernible particles, are particles that cannot be distinguished from one another, even in principle.

## Inner product space

In linear algebra, an inner product space is a vector space with an additional structure called an inner product.

## Line (geometry)

The notion of line or straight line was introduced by ancient mathematicians to represent straight objects (i.e., having no curvature) with negligible width and depth.

## Linear algebra

Linear algebra is the branch of mathematics concerning linear equations such as $a_1x_1 + \cdots + a_nx_n = b$, linear functions such as $(x_1, \dots, x_n) \mapsto a_1x_1 + \cdots + a_nx_n$, and their representations through matrices and vector spaces.

## Linear combination

In mathematics, a linear combination is an expression constructed from a set of terms by multiplying each term by a constant and adding the results (e.g. a linear combination of x and y would be any expression of the form ax + by, where a and b are constants).

## Linear form

In linear algebra, a linear functional or linear form (also called a one-form or covector) is a linear map from a vector space to its field of scalars.
## Linear map

In mathematics, a linear map (also called a linear mapping, linear transformation or, in some contexts, linear function) is a mapping between two modules (including vector spaces) that preserves (in the sense defined below) the operations of addition and scalar multiplication.

## Linear subspace

In linear algebra and related fields of mathematics, a linear subspace, also known as a vector subspace, or, in the older literature, a linear manifold, is a vector space that is a subset of some other (higher-dimension) vector space.

## Mathematics

Mathematics (from Greek μάθημα máthēma, "knowledge, study, learning") is the study of such topics as quantity, structure, space, and change.

## Matrix multiplication

In mathematics, matrix multiplication or matrix product is a binary operation that produces a matrix from two matrices with entries in a field, or, more generally, in a ring or even a semiring.

## Measurement in quantum mechanics

The framework of quantum mechanics requires a careful definition of measurement.

## Momentum

In Newtonian mechanics, linear momentum, translational momentum, or simply momentum (pl. momenta) is the product of the mass and velocity of an object.

## N-slit interferometric equation

Quantum mechanics was first applied to optics, and interference in particular, by Paul Dirac.

## Norm (mathematics)

In linear algebra, functional analysis, and related areas of mathematics, a norm is a function that assigns a strictly positive length or size to each vector in a vector space—save for the zero vector, which is assigned a length of zero.

## Observable

In physics, an observable is a dynamic variable that can be measured.

## Open Court Publishing Company

The Open Court Publishing Company is a publisher with offices in Chicago and La Salle, Illinois.

## Orthonormal basis

In mathematics, particularly linear algebra, an orthonormal basis for an inner product space V with finite dimension is a basis for V whose vectors are orthonormal, that is, they are all unit vectors and orthogonal to each other.

## Orthonormality

In linear algebra, two vectors in an inner product space are orthonormal if they are orthogonal and unit vectors.

## Outer product

In linear algebra, an outer product is the tensor product of two coordinate vectors, a special case of the Kronecker product of matrices.

## Paul Dirac

Paul Adrien Maurice Dirac (8 August 1902 – 20 October 1984) was an English theoretical physicist who is regarded as one of the most significant physicists of the 20th century.

## Plane wave

In the physics of wave propagation, a plane wave (also spelled planewave) is a wave whose wavefronts (surfaces of constant phase) are infinite parallel planes.

## Position and momentum space

In physics and geometry, there are two closely related vector spaces, usually three-dimensional but in general of any finite number of dimensions.
## Probability amplitude

In quantum mechanics, a probability amplitude is a complex number used in describing the behaviour of systems.

## Projection (linear algebra)

In linear algebra and functional analysis, a projection is a linear transformation P from a vector space to itself such that $P^2 = P$.

## Quantum mechanics

Quantum mechanics (QM; also known as quantum physics, quantum theory, the wave mechanical model, or matrix mechanics), including quantum field theory, is a fundamental theory in physics which describes nature at the smallest scales of energy levels of atoms and subatomic particles.

## Quantum number

Quantum numbers describe values of conserved quantities in the dynamics of a quantum system.

## Quantum state

In quantum physics, quantum state refers to the state of an isolated quantum system.

## Quantum superposition

Quantum superposition is a fundamental principle of quantum mechanics.

## Riesz representation theorem

There are several well-known theorems in functional analysis known as the Riesz representation theorem.

## Rigged Hilbert space

In mathematics, a rigged Hilbert space (Gelfand triple, nested Hilbert space, equipped Hilbert space) is a construction designed to link the distribution and square-integrable aspects of functional analysis.

## Ring (mathematics)

In mathematics, a ring is one of the fundamental algebraic structures used in abstract algebra.

## Row and column vectors

In linear algebra, a column vector or column matrix is an m × 1 matrix, that is, a matrix consisting of a single column of m elements. Similarly, a row vector or row matrix is a 1 × m matrix, that is, a matrix consisting of a single row of m elements. (A short numpy illustration of this bookkeeping follows the glossary.)

## Schrödinger picture

In physics, the Schrödinger picture (also called the Schrödinger representation) is a formulation of quantum mechanics in which the state vectors evolve in time, but the operators (observables and others) are constant with respect to time.

## Self-adjoint

In mathematics, an element x of a *-algebra is self-adjoint if $x^* = x$.

## Self-adjoint operator

In mathematics, a self-adjoint operator on a finite-dimensional complex vector space V with inner product $\langle\cdot,\cdot\rangle$ is a linear map A (from V to itself) that is its own adjoint: $\langle Av, w\rangle = \langle v, Aw\rangle$.

## Spin (physics)

In quantum mechanics and particle physics, spin is an intrinsic form of angular momentum carried by elementary particles, composite particles (hadrons), and atomic nuclei.

## Spin-½

In quantum mechanics, spin is an intrinsic property of all elementary particles.

## Stationary state

A stationary state is a quantum state with all observables independent of time.

## T-symmetry

T-symmetry or time reversal symmetry is the theoretical symmetry of physical laws under the transformation of time reversal. T-symmetry can be shown to be equivalent to the conservation of entropy, by Noether's theorem.

## Tensor product

In mathematics, the tensor product of two vector spaces and (over the same field) is itself a vector space, together with an operation of bilinear composition, denoted by, from ordered pairs in the Cartesian product into, in a way that generalizes the outer product.

## Time evolution

Time evolution is the change of state brought about by the passage of time, applicable to systems with internal state (also called stateful systems).
## Topology

In mathematics, topology (from the Greek τόπος, place, and λόγος, study) is concerned with the properties of space that are preserved under continuous deformations, such as stretching, crumpling and bending, but not tearing or gluing.

## Transpose

In linear algebra, the transpose of a matrix is an operator which flips a matrix over its diagonal, that is, it switches the row and column indices of the matrix by producing another matrix denoted as AT (also written A′, Atr, tA or At).

## Uncountable set

In mathematics, an uncountable set (or uncountably infinite set) is an infinite set that contains too many elements to be countable.

## Unitary operator

In functional analysis, a branch of mathematics, a unitary operator is a surjective bounded operator on a Hilbert space preserving the inner product.

## Vector space

A vector space (also called a linear space) is a collection of objects called vectors, which may be added together and multiplied ("scaled") by numbers, called scalars.

## Velocity

The velocity of an object is the rate of change of its position with respect to a frame of reference, and is a function of time.

## Vertical bar

The vertical bar (|) is a computer character and glyph with various uses in mathematics, computing, and typography.

## Wave function

A wave function in quantum physics is a mathematical description of the quantum state of an isolated quantum system.

## Wave function collapse

In quantum mechanics, wave function collapse is said to occur when a wave function—initially in a superposition of several eigenstates—appears to reduce to a single eigenstate (by "observation").
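The glossary entries on row and column vectors, the conjugate transpose, inner products and outer products are exactly the ingredients of bra–ket bookkeeping. Here is the minimal numpy sketch referenced above (ours, not from the page), for a spin-½ system:

```python
import numpy as np

ket_psi = np.array([[1.0], [1.0j]]) / np.sqrt(2)   # |psi> as a column vector
ket_phi = np.array([[1.0], [0.0]], dtype=complex)  # |phi> = |0>

bra_psi = ket_psi.conj().T                         # <psi| is the conjugate transpose

amplitude = (bra_psi @ ket_phi).item()             # <psi|phi>, a complex number
projector = ket_phi @ ket_phi.conj().T             # |phi><phi|, an outer product

print(abs(amplitude)**2)                              # |<psi|phi>|^2 = 0.5
print(np.allclose(projector @ projector, projector))  # True: projectors are idempotent
```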
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9414755702018738, "perplexity": 775.6795443722608}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912201922.85/warc/CC-MAIN-20190319073140-20190319095140-00278.warc.gz"}
https://sciendo.com/article/10.2478/amns.2021.2.00133
Journal Details: eISSN 2444-8656. First published 01 Jan 2016. Publication timeframe: 2 times per year. Language: English. Open Access.

# The consistency method of linguistic information and other four preference information in group decision-making

###### Accepted: 26 Sep 2021

#### Abstract

The transformation methods between the linguistic judgement matrix and four other forms of preference information are studied in this paper. The four forms of preference information are preference orderings, utility values, reciprocal judgement matrices and complementary judgement matrices. First, based on the definitions of the preference information, mutual transformation methods between the linguistic judgement matrix and the other four forms of preference information are given, and new transformation equations are obtained. Next, it is proved that when the linguistic judgement matrix has complete consistency, the other four forms of preference information will have complete consistency too. Finally, a numerical analysis is offered to show that these methods are feasible and effective.

Introduction

Owing to differences in cultural background, psychological quality and life experience, different decision-makers in a group may have different individual preferences for the same decision-making problem, and their importance or authority may differ. The heterogeneity of decision-making groups cannot be ignored in decision-making practice. Designing a scientific, effective and fast mechanism for handling the different preference information given by heterogeneous groups is therefore a pressing basic theoretical problem. The content of decision-makers' perception of the external environment is diverse and the information involved is complex; there is also uncertainty in how different decision-makers perceive information, and the information itself has a certain degree of randomness and conflict, so it is necessary to unify the different preference information. With the development of group decision-making, different forms of preference information have arisen, including preference orderings, utility values, reciprocal judgement matrices and complementary judgement matrices, but this preference information alone cannot meet the needs of decision-makers. Since Zadeh and Yager put forward the concept of using linguistic information to represent evaluation results [1, 2, 3, 4, 5, 6, 7], the linguistic judgement matrix has become an important form in which decision-makers express pairwise comparisons of alternatives. Because of the complexity and fuzziness of decision-makers' knowledge backgrounds and of the decision objects, decision-makers may give different forms of preference information for the same decision-making problem. In order to aggregate the different preference information given by decision-makers, it is necessary to unify the different forms. Research in this field has attracted extensive attention and achieved certain results [8, 9, 10, 11, 12, 13, 14, 15, 16]. An aggregation method for natural-language and numerical preference information was given by Delgado et al. [8]; a conversion method for four kinds of preference information in group decision-making was given by Wujiang [9]; Xiao et al.
[10] study the conversion method between two kinds of preference information, reciprocal judgement matrices and fuzzy complementary judgement matrices, in group decision-making; Huayou and Chunlin [11] give transformation methods for three kinds of preference information, namely order relation values, reciprocal judgement matrices and fuzzy complementary judgement matrices, and examine the effectiveness of the information transformation. Most of these studies analyse the mutual transformation of the four kinds of preference information, and seldom systematically study the mutual transformation between linguistic information and the other forms. Yan-Wu and Hua-You [12] transform the linguistic judgement matrix into a derived matrix and prove that the derived matrix is a complementary judgement matrix, which amounts to transforming the linguistic judgement matrix into a complementary judgement matrix. When the preference information given by experts consists of fuzzy complementary judgement matrices, interval values, positive reciprocal matrices, order relation values and utility values, the different forms of preference information are transformed into fuzzy complementary judgement matrices, and the ranking value of each alternative is then obtained from the fuzzy complementary judgement matrix, in the study by Yangjing [13]. A consistency-driven GDM framework with a personalised normalisation method and an optimisation model is proposed in Tian et al. [15] to manage complete and incomplete probabilistic linguistic preference relations.

This paper presents conversion formulas between the linguistic judgement matrix and the four kinds of preference information, which mainly solves the problem of mutual conversion between linguistic information and the other forms; it theoretically proves the rationality of the conversion formulas and the consistency after conversion. Finally, an example is used to verify the rationality and effectiveness of the conversion formulas.

Preliminaries

$I = \{1, 2, \dots, n\}$ and $U = \{0, 1, 2, \dots, T\}$ are two index sets; the following is a brief description of the linguistic judgement matrix [13, 14, 15, 16]. $X = \{x_1, x_2, \dots, x_n\}$ is the set of alternatives and $D = \{d_1, d_2, \dots, d_m\}$ is the set of decision-makers. The preference information from pairwise comparisons given by a decision-maker can be described by a matrix $P = (p_{ij})_{n \times n}$. The entries of the matrix are selected from the linguistic term set $S = \{s_i \,|\, i \in U\}$ as the evaluation results of comparing $x_i$ and $x_j$. The number of elements in the set $S$ is called the granularity of the linguistic term set. For example, a linguistic term set with granularity 13 can be described as $S = \{s_0 = DD$ (absolutely poor), $s_1 = VHD$ (quite poor), $s_2 = HD$ (very poor), $s_3 = MD$ (weak), $s_4 = LD$ (poor), $s_5 = VLD$ (slightly poor), $s_6 = AS$ (equivalent), $s_7 = VLP$ (slightly better), $s_8 = LP$ (better), $s_9 = MP$ (good), $s_{10} = HP$ (very good), $s_{11} = VHP$ (quite good), $s_{12} = DP$ (absolutely good)$\}$.
A linguistic term set should have an odd number of elements and the following properties:

Orderliness: when $i < j$, $s_i \prec s_j$; that is, $s_i$ is inferior to $s_j$ ($s_j$ is better than $s_i$);

Inverse operation: $neg(s_i) = s_j$ with $j = T - i$;

Maximisation operation: if $s_i \succeq s_j$, then $\max\{s_i, s_j\} = s_i$;

Minimisation operation: if $s_i \succeq s_j$, then $\min\{s_i, s_j\} = s_j$.

Four linguistic term subsets are used below: $S^L = \{s_0, s_1, \ldots, s_{T/2-1}\}$, $S^U = \{s_{T/2+1}, \ldots, s_T\}$, $S^L_{T/2} = \{s_0, s_1, \ldots, s_{T/2}\}$ and $S^U_{T/2} = \{s_{T/2}, \ldots, s_T\}$.

Definition 1 [13, 14, 15]. Let $P = (p_{ij})_{n \times n}$ be a matrix whose entries satisfy, for all $i, j \in I$,
$$p_{ij} \in S; \qquad p_{ii} = s_{T/2}; \qquad p_{ij} = s_k \Rightarrow p_{ji} = neg(s_k).$$
Then $P = (p_{ij})_{n \times n}$ is called a linguistic judgement matrix.

The definitions of linguistic information and of the four other kinds of preference information are as follows.

(1) Linguistic information [1, 3]. The preference information given by the decision-maker is expressed by linguistic terms and described by a matrix $P = (p_{ij})_{n \times n}$ with corresponding membership function $\mu_P : X \times X \to S$. The element $p_{ij}$ is selected from a predefined linguistic term set $S = \{s_i \mid i \in U\}$ as the evaluation result of comparing alternatives $x_i$ and $x_j$.

(2) Preference order relation [19]. The decision-maker directly gives an ordering of the set of decision alternatives according to individual preference: $O^k = \{o^k(1), o^k(2), \ldots, o^k(n)\}$, where $O^k$ is a permutation of $\{1, 2, \ldots, n\}$ and $o^k(i)$ indicates the position of alternative $x_i$ in the order. Generally, the smaller $o^k(i)$ is, the better the alternative $x_i$.

(3) Utility value [20]. For the alternative set $X$, the decision-maker gives a set of utility values $U = \{u_1, u_2, \ldots, u_n\}$ according to preference, in which $u_i \in [0, 1]$ represents the ranking utility of alternative $x_i$. Generally, the larger the utility value $u_i$, the better the alternative $x_i$.

(4) Reciprocal judgement matrix [21]. The decision-maker compares the alternatives in $X$ pairwise and gives a reciprocal judgement matrix $A = (a_{ij})_{n \times n}$, where $a_{ij}$ indicates the relative importance of alternative $x_i$ over alternative $x_j$, with $\frac{1}{9} \le a_{ij} \le 9$, $a_{ij} = \frac{1}{a_{ji}}$ and $a_{ii} = 1$ for all $i, j$.

(5) Complementary judgement matrix [21]. The decision-maker compares the alternatives in $X$ pairwise and gives a complementary judgement matrix $B = (b_{ij})_{n \times n}$, where $b_{ij}$ indicates the degree to which alternative $x_i$ is superior to alternative $x_j$, with $0 \le b_{ij} \le 1$, $b_{ij} + b_{ji} = 1$ and $b_{ii} = 0.5$ for all $i, j$.

## The consistency method of linguistic information and the other four kinds of preference information

Assuming a strict preference relationship between the alternatives and that the linguistic judgement matrix has satisfactory consistency, this section gives the transformations between the linguistic judgement matrix and the four other kinds of preference information.
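As a quick illustration of Definition 1, the following Python sketch (an illustration added here, not taken from the paper; the example matrix is hypothetical) represents a linguistic judgement matrix by the subscripts of its entries and checks the defining conditions.

```python
# A minimal sketch, assuming entries are stored as subscripts k of s_k.
T = 8  # linguistic term set of granularity T + 1 = 9

def neg(k):
    """Inverse operation on subscripts: neg(s_k) = s_{T-k}."""
    return T - k

def is_linguistic_judgement_matrix(P):
    """Check Definition 1: p_ij in S, p_ii = s_{T/2}, p_ji = neg(p_ij)."""
    n = len(P)
    for i in range(n):
        if P[i][i] != T // 2:
            return False
        for j in range(n):
            if not 0 <= P[i][j] <= T or P[j][i] != neg(P[i][j]):
                return False
    return True

# A hypothetical 3 x 3 example (subscripts of s_k):
P = [[4, 6, 5],
     [2, 4, 6],
     [3, 2, 4]]
print(is_linguistic_judgement_matrix(P))  # True
```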
### Transformation between the linguistic judgement matrix and the preference order

#### Linguistic judgement matrix to preference order

Definition 2 [13, 14]. Let $S = \{s_\alpha \mid \alpha \in [0, q]\}$ be a linguistic term set, and let
$$I_T : S \to R, \qquad I_T(s_i) = i;$$
then $I_T$ is the subscript function of the linguistic term set. Further, let
$$I_T^{-1} : R \to S, \qquad I_T^{-1}(i) = s_i, \qquad 0 \le i \le q;$$
then $I_T^{-1}$ is the function returning the decision-maker's linguistic information from a real subscript.

Definition 3. Let $C = (c_{ij})_{n \times n}$ be the matrix with
$$c_{ij} = \begin{cases} 1 & \text{if } p_{ij} \in S^U_{T/2} \\ 0 & \text{if } p_{ij} \in S^L \end{cases}$$
Then $C = (c_{ij})_{n \times n}$ is called the preference relation matrix of the linguistic judgement matrix.

Definition 4. Let $C = (c_{ij})_{n \times n}$ be the preference relation matrix of the linguistic judgement matrix $P = (p_{ij})_{n \times n}$, and let
$$a_i = \sum_{j=1}^{n} c_{ij}, \qquad b_j = \sum_{i=1}^{n} c_{ij};$$
then $a_i$ is called the row preference value of row $i$ and $b_j$ the column preference value of column $j$.

In this way the row preference value of each alternative is obtained, and the preference order of the alternatives is determined by the size of the row preference values. If computed by columns, the result is exactly the opposite.

#### Preference order to linguistic judgement matrix

The smaller the value $o^k(i)$ in the preference order, the higher the position of alternative $x_i$ and the better it is compared with the other alternatives. Assuming the linguistic term set underlying the target linguistic judgement matrix has granularity $T + 1$, the conversion formula is
$$I_T(p_{ij}) = \left[\frac{T}{2}\left(1 - \frac{o^k(i) - o^k(j)}{n - 1}\right)\right],$$
where $I_T(p_{ij})$ is the subscript of the linguistic term and $[\cdot]$ denotes rounding (the same convention applies to the similar formulas below). The linguistic judgement matrix is then
$$P = (p_{ij})_{n \times n} = \left(I_T^{-1}\left(\left[\frac{T}{2}\left(1 - \frac{o^k(i) - o^k(j)}{n - 1}\right)\right]\right)\right)_{n \times n}.$$

### Transformation between the linguistic judgement matrix and utility values

#### Linguistic judgement matrix to utility values

To compute the utility value of alternative $x_i$, the subscripts of the linguistic terms comparing $x_i$ with every alternative are added together via the subscript function; this sum is called the subscript sum of alternative $x_i$. With $n$ alternatives, dividing by $nT$ gives the utility value of $x_i$. The better the alternative, the greater the utility value, and the utility value lies in $[0, 1]$.
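A small Python sketch of the preference-order conversion just given (added for illustration, not code from the paper; taking $[\cdot]$ as rounding to the nearest integer is an assumption):

```python
T = 8  # granularity T + 1 = 9, as in the case analysis below

def order_to_linguistic(o):
    """Subscripts I_T(p_ij) = [T/2 * (1 - (o(i) - o(j))/(n - 1))]."""
    n = len(o)
    return [[round(T / 2 * (1 - (o[i] - o[j]) / (n - 1))) for j in range(n)]
            for i in range(n)]

# d1's preference order from the case analysis: O = {3, 1, 4, 2}
print(order_to_linguistic([3, 1, 4, 2]))
# [[4, 1, 5, 3], [7, 4, 8, 5], [3, 0, 4, 1], [5, 3, 7, 4]]
# This agrees with the matrix P^1 listed in the case analysis except for the
# borderline entry (1, 4), where 8/3 rounds to 3 while the paper lists s_2.
```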
Definition 5. Let $I_T : S \to R$, $I_T(s_i) = i$ be the subscript function of the linguistic judgement matrix $P = (p_{ij})_{n \times n}$, and let
$$i' = \sum_{j=1}^{n} I_T(p_{ij});$$
$i'$ is called the subscript sum of alternative $x_i$ in the linguistic judgement matrix. The conversion formula from the linguistic judgement matrix to utility values is then
$$u_i = \frac{i'}{nT}.$$

#### Utility values to linguistic judgement matrix

The utility values are the preference utilities given by the decision-maker; the larger the utility value, the better the alternative. The conversion formula from utility values to the linguistic judgement matrix is
$$I_T(p_{ij}) = \left[\frac{T}{2}(u_i - u_j) + \frac{T}{2}\right],$$
and the linguistic judgement matrix is
$$P = (p_{ij})_{n \times n} = \left(I_T^{-1}\left(\left[\frac{T}{2}(u_i - u_j) + \frac{T}{2}\right]\right)\right)_{n \times n}.$$

### Transformation between the linguistic judgement matrix and the reciprocal judgement matrix

#### Linguistic judgement matrix to reciprocal judgement matrix

The elements of a reciprocal judgement matrix lie in $[1/9, 9]$ and satisfy $a_{ij}a_{ji} = 1$. Assuming the linguistic term set of the linguistic judgement matrix has granularity $T + 1$ and writing $I_T(p_{ij})$ for the subscript of the linguistic term, the conversion formula from the linguistic judgement matrix to the reciprocal judgement matrix is
$$a_{ij} = 9^{\frac{2}{T}I_T(p_{ij}) - 1}.$$

Proof. Let $I_T(p_{ij}) = i$. Then
$$a_{ij}a_{ji} = 9^{\frac{2}{T}I_T(p_{ij}) - 1} \times 9^{\frac{2}{T}I_T(p_{ji}) - 1} = 9^{\frac{2}{T}i - 1} \times 9^{\frac{2}{T}(T - i) - 1} = 9^{\frac{2}{T}(i + T - i) - 2} = 9^0 = 1.$$
Moreover, since $p_{ii} = s_{T/2}$,
$$a_{ii} = 9^{\frac{2}{T}I_T(p_{ii}) - 1} = 9^{\frac{2}{T}\times\frac{T}{2} - 1} = 9^0 = 1.$$
So the conditions of a reciprocal judgement matrix are satisfied.

#### Reciprocal judgement matrix to linguistic judgement matrix

When the reciprocal judgement matrix is transformed into the linguistic judgement matrix, the inverse of the function above is used as the conversion formula:
$$I_T(p_{ij}) = \left[\frac{T}{2}\log_9 a_{ij} + \frac{T}{2}\right],$$
so the linguistic judgement matrix is
$$P = (p_{ij})_{n \times n} = \left(I_T^{-1}\left(\left[\frac{T}{2}\log_9 a_{ij} + \frac{T}{2}\right]\right)\right)_{n \times n}.$$

Definition 6. Let the linguistic term set of the linguistic judgement matrix $P = (p_{ij})_{n \times n}$ have granularity $T + 1$, and let
$$\delta(p_{ij}) = \frac{I_T(p_{ij})}{T};$$
$\delta(p_{ij})$ is called the derivation function of the linguistic preference information $p_{ij}$ given by the decision-maker. With
$$q_{ij} = \delta(p_{ij}),$$
$Q = (q_{ij})_{n \times n}$ is called the derivation matrix of the linguistic judgement matrix $P = (p_{ij})_{n \times n}$.
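The utility-value and reciprocal-matrix conversions can be sketched the same way (again an added illustration with rounding as an assumption, not code from the paper):

```python
T = 8

def utility_to_linguistic(u):
    """Subscripts I_T(p_ij) = [T/2 * (u_i - u_j) + T/2]."""
    n = len(u)
    return [[round(T / 2 * (u[i] - u[j]) + T / 2) for j in range(n)]
            for i in range(n)]

def linguistic_to_reciprocal(P):
    """a_ij = 9 ** ((2/T) * I_T(p_ij) - 1)."""
    n = len(P)
    return [[9 ** (2 / T * P[i][j] - 1) for j in range(n)] for i in range(n)]

P2 = utility_to_linguistic([0.5, 0.7, 0.9, 0.1])  # d2's utility values
print(P2)  # [[4, 3, 2, 6], [5, 4, 3, 6], [6, 5, 4, 7], [2, 2, 1, 4]], i.e. P^2 below
A = linguistic_to_reciprocal(P2)
# Reciprocity a_ij * a_ji = 1 holds, as in the proof above:
assert all(abs(A[i][j] * A[j][i] - 1) < 1e-9 for i in range(4) for j in range(4))
```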
Definition 7 [13, 14]. If the linguistic judgement matrix satisfies
$$\delta(p_{ij}) = \delta(p_{ik}) + \delta(p_{kj}) - 0.5, \qquad \forall i, j, k \in \{1, 2, \ldots, n\},$$
then $P = (p_{ij})_{n \times n}$ is called a completely consistent linguistic judgement matrix.

Definition 8 [21]. Let $A = (a_{ij})_{n \times n}$ be a positive reciprocal judgement matrix. If
$$a_{ij} = \frac{a_{ik}}{a_{jk}}, \qquad \forall i, j, k \in I,$$
then $A$ is called a consistent positive reciprocal judgement matrix.

Theorem 1. If the linguistic judgement matrix is completely consistent, then the transformed reciprocal judgement matrix is also completely consistent.

Proof. If the linguistic judgement matrix is completely consistent, then $\delta(p_{ij}) = \delta(p_{ik}) + \delta(p_{kj}) - 0.5$ with $\delta(p_{ij}) = \frac{I_T(p_{ij})}{T}$, so
$$a_{ij} = 9^{\frac{2}{T}I_T(p_{ij}) - 1} = 9^{2\left(\frac{I_T(p_{ik})}{T} + \frac{I_T(p_{kj})}{T} - 0.5\right) - 1} = 9^{\frac{2}{T}I_T(p_{ik}) - 1 + \frac{2}{T}I_T(p_{kj}) - 1} = 9^{\frac{2}{T}I_T(p_{ik}) - 1} \times 9^{\frac{2}{T}I_T(p_{kj}) - 1} = a_{ik}a_{kj}.$$
It follows that the reciprocal judgement matrix is also consistent.

### Transformation between the linguistic judgement matrix and the complementary judgement matrix

#### Linguistic judgement matrix to complementary judgement matrix

Let the linguistic term set of the linguistic judgement matrix have granularity $T + 1$ and let the matrix have satisfactory consistency; the elements of the complementary judgement matrix $B = (b_{ij})_{n \times n}$ are required to satisfy $b_{ij} \in [0.1, 0.9]$ and $b_{ij} + b_{ji} = 1$. The conversion formula from the linguistic judgement matrix to the complementary judgement matrix is
$$b_{ij} = \frac{4}{5}\frac{I_T(p_{ij})}{T} + \frac{1}{10}.$$

Proof. Let $I_T(p_{ij}) = i$. Then
$$b_{ij} + b_{ji} = \frac{4}{5}\frac{I_T(p_{ij})}{T} + \frac{1}{10} + \frac{4}{5}\frac{I_T(p_{ji})}{T} + \frac{1}{10} = \frac{4}{5}\frac{i}{T} + \frac{4}{5}\frac{T - i}{T} + \frac{2}{10} = \frac{4}{5}\frac{T - i + i}{T} + \frac{2}{10} = 1,$$
and since $I_T(p_{ii}) = T/2$,
$$b_{ii} = \frac{4}{5}\frac{I_T(p_{ii})}{T} + \frac{1}{10} = \frac{4}{5}\times\frac{1}{2} + \frac{1}{10} = \frac{2}{5} + \frac{1}{10} = 0.5.$$
So the conditions of a complementary judgement matrix are satisfied.
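A numerical illustration of Theorem 1 (an added sketch with a hypothetical matrix, not from the paper): a completely consistent linguistic judgement matrix maps to a consistent reciprocal matrix.

```python
T = 8

def consistent_linguistic(P):
    """Definition 7: delta(p_ij) = delta(p_ik) + delta(p_kj) - 0.5."""
    n = len(P)
    return all(abs(P[i][j] / T - (P[i][k] / T + P[k][j] / T - 0.5)) < 1e-9
               for i in range(n) for j in range(n) for k in range(n))

def consistent_reciprocal(A):
    """Definition 8: a_ij = a_ik / a_jk for all i, j, k."""
    n = len(A)
    return all(abs(A[i][j] - A[i][k] / A[j][k]) < 1e-9
               for i in range(n) for j in range(n) for k in range(n))

P = [[4, 5, 6], [3, 4, 5], [2, 3, 4]]   # hypothetical, completely consistent
A = [[9 ** (2 / T * P[i][j] - 1) for j in range(3)] for i in range(3)]
print(consistent_linguistic(P), consistent_reciprocal(A))  # True True
```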
#### Complementary judgement matrix to linguistic judgement matrix

The inverse of the function above is used as the conversion from the complementary judgement matrix to the linguistic judgement matrix:
$$q_{ij} = \left[\frac{10Tb_{ij} - T}{8}\right],$$
so the linguistic judgement matrix is
$$P = (p_{ij})_{n \times n} = \left(I_T^{-1}\left(\left[\frac{10Tb_{ij} - T}{8}\right]\right)\right)_{n \times n}.$$

Definition 9 [23]. If the complementary judgement matrix $B = (b_{ij})_{n \times n}$ satisfies
$$b_{ij} = b_{ik} + b_{kj} - 0.5, \qquad \forall i, j, k \in \{1, 2, \ldots, n\},$$
then the matrix is said to be completely consistent.

Theorem 2. If the linguistic judgement matrix is completely consistent, then the transformed complementary judgement matrix is also completely consistent.

Proof. If the linguistic judgement matrix is completely consistent, then with $\delta(p_{ij}) = \frac{I_T(p_{ij})}{T}$,
$$b_{ij} = \frac{4}{5}\frac{I_T(p_{ij})}{T} + \frac{1}{10} = \frac{4}{5}\left(\frac{I_T(p_{ik})}{T} + \frac{I_T(p_{kj})}{T} - 0.5\right) + \frac{1}{10} = \frac{4}{5}\frac{I_T(p_{ik})}{T} + \frac{1}{10} + \frac{4}{5}\frac{I_T(p_{kj})}{T} + \frac{1}{10} - 0.5 = b_{ik} + b_{kj} - 0.5.$$
So Theorem 2 is proved.

## Case analysis

Example 1. Five decision-makers give the following preference information about four alternatives:
$$d_1: O = \{3, 1, 4, 2\}; \qquad d_2: U = \{0.5, 0.7, 0.9, 0.1\};$$
$$d_3: A = \begin{bmatrix} 1 & 1/9 & 3 & 7 \\ 9 & 1 & 8 & 2 \\ 1/3 & 1/8 & 1 & 9 \\ 1/7 & 1/2 & 1/9 & 1 \end{bmatrix}; \qquad d_4: B = \begin{bmatrix} 0.5 & 0.1 & 0.6 & 0.7 \\ 0.9 & 0.5 & 0.8 & 0.6 \\ 0.4 & 0.2 & 0.5 & 0.9 \\ 0.3 & 0.4 & 0.1 & 0.5 \end{bmatrix}; \qquad d_5: P = \begin{bmatrix} s_4 & s_2 & s_5 & s_6 \\ s_6 & s_4 & s_6 & s_5 \\ s_3 & s_2 & s_4 & s_6 \\ s_3 & s_3 & s_2 & s_4 \end{bmatrix}.$$
Determine the order of the four alternatives from this preference information.

This paper studies the mutual conversion between the linguistic judgement matrix and the four other kinds of preference information; to rank the alternatives, we transform the other decision preference information into linguistic judgement matrices. The fifth decision-maker gives a linguistic judgement matrix over a linguistic term set of granularity 9, so granularity 9 is also used when transforming the other preference information into linguistic judgement matrices. According to the transformation formulas above, the preference information is transformed into the linguistic judgement matrices listed below.
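As a concrete check before listing the transformed matrices (an added sketch, not part of the paper), the complementary-to-linguistic formula applied to $d_4$'s matrix $B$ reproduces the matrix $P^4$ shown below:

```python
T = 8

def complementary_to_linguistic(B):
    """Subscripts q_ij = [(10*T*b_ij - T) / 8]; with T = 8 this is 10*b_ij - 1."""
    n = len(B)
    return [[round((10 * T * B[i][j] - T) / 8) for j in range(n)]
            for i in range(n)]

B4 = [[0.5, 0.1, 0.6, 0.7],   # d4's complementary judgement matrix
      [0.9, 0.5, 0.8, 0.6],
      [0.4, 0.2, 0.5, 0.9],
      [0.3, 0.4, 0.1, 0.5]]
print(complementary_to_linguistic(B4))
# [[4, 0, 5, 6], [8, 4, 7, 5], [3, 1, 4, 8], [2, 3, 0, 4]]  (the subscripts of P^4)
```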
$$P^1 = \begin{bmatrix} s_4 & s_1 & s_5 & s_2 \\ s_7 & s_4 & s_8 & s_5 \\ s_3 & s_0 & s_4 & s_1 \\ s_5 & s_3 & s_7 & s_4 \end{bmatrix}; \qquad P^2 = \begin{bmatrix} s_4 & s_3 & s_2 & s_6 \\ s_5 & s_4 & s_3 & s_6 \\ s_6 & s_5 & s_4 & s_7 \\ s_2 & s_2 & s_1 & s_4 \end{bmatrix};$$
$$P^3 = \begin{bmatrix} s_4 & s_0 & s_6 & s_8 \\ s_8 & s_4 & s_8 & s_5 \\ s_2 & s_0 & s_4 & s_8 \\ s_0 & s_3 & s_0 & s_4 \end{bmatrix}; \qquad P^4 = \begin{bmatrix} s_4 & s_0 & s_5 & s_6 \\ s_8 & s_4 & s_7 & s_5 \\ s_3 & s_1 & s_4 & s_8 \\ s_2 & s_3 & s_0 & s_4 \end{bmatrix}.$$

Using Definition 3, these linguistic judgement matrices are transformed into the following preference relation matrices:
$$Q^1 = \begin{bmatrix} 1 & 0 & 1 & 0 \\ 1 & 1 & 1 & 1 \\ 0 & 0 & 1 & 0 \\ 1 & 0 & 1 & 1 \end{bmatrix}; \qquad Q^2 = \begin{bmatrix} 1 & 0 & 0 & 1 \\ 1 & 1 & 0 & 1 \\ 1 & 1 & 1 & 1 \\ 0 & 0 & 0 & 1 \end{bmatrix}; \qquad Q^3 = Q^4 = Q^5 = \begin{bmatrix} 1 & 0 & 1 & 1 \\ 1 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \end{bmatrix}.$$

We rank the alternatives by the number of 1s in each row of the preference relation matrix. The order for $d_1$ is $x_2 \succ x_4 \succ x_1 \succ x_3$; for $d_2$ it is $x_3 \succ x_2 \succ x_1 \succ x_4$; and for $d_3$, $d_4$ and $d_5$ it is $x_2 \succ x_1 \succ x_3 \succ x_4$. Giving the decision-makers equal weights and aggregating the rankings with the weighted average operator, the overall order of the alternatives is $x_2 \succ x_1 \succ x_3 \succ x_4$.

## Conclusions

In this paper, transformation formulas between the linguistic judgement matrix and four other kinds of preference information (preference ordering, utility values, reciprocal judgement matrices and complementary judgement matrices) are given systematically, and the rationality of the transformation formulas is proved theoretically. This method enables the transformation of different forms of preference information in group decision-making. More detailed and accurate transformation methods between forms of preference information, and their applications, remain to be studied.

## References

[1] G. Bordogna and G. Pasi, A fuzzy linguistic approach generalizing boolean information retrieval: a model and its evaluation, Journal of the American Society for Information Science and Technology, 44 (1993), 70–82.

[2] R. R. Yager, Applications and extensions of OWA aggregations, International Journal of Man-Machine Studies, 37 (1992), 103–132.
[3] L. A. Zadeh, A computational approach to fuzzy quantifiers in natural languages, Computers & Mathematics with Applications, 9 (1983), 149–184.

[4] F. Meng, J. Tang and H. Fujita, Linguistic intuitionistic fuzzy preference relations and their application to multi-criteria decision making, Information Fusion, 46 (2019), 77–90.

[5] F. Liu, Q. Yu, W. Pedrycz and W. Zhang, A group decision making model based on an inconsistency index of interval multiplicative reciprocal matrices, Knowledge-Based Systems, 145 (2018), 67–76.

[6] J. Hu, L. Pan, Y. Yang and H. Chen, A group medical diagnosis model based on intuitionistic fuzzy soft sets, Applied Soft Computing, 77 (2019), 453–466.

[7] F. Y. Meng and C. Q. Tan, A new consistency concept for interval multiplicative preference relations, Applied Soft Computing, 52 (2017), 262–276.

[8] M. Delgado, F. Herrera, E. Herrera-Viedma et al., Combining numerical and linguistic information in group decision making, Information Sciences, 107 (1998), 177–194.

[9] Wujiang, Study of methods for transforming four preference information in group decision making, Journal of Wuhan University of Technology, 26 (2004), 64–67.

[10] Xiao Sihan, Fan Zhiping and Wang Mengguang, Uniform approach to two forms of preference information – AHP judgement matrices and fuzzy preference relation matrices in group decision making, Journal of Systems Engineering, 17 (2002), 82–86.

[11] Chen Huayou and Liu Chunlin, Relative entropy aggregation method in group decision making based on different types of preference information, Journal of Southeast University, 35 (2005), 311–315.
[12] Tang Yan-wu and Chen Hua-you, A new method of adjusting the inconsistency of judgment matrices with linguistic terms based on compatibility, Fuzzy Systems and Mathematics, 24 (2010), 112–118.

[13] Yangjing, Research on aggregation method with different forms of preference information based on fuzzy clustering, Mathematics in Practice and Theory, 47 (2017), 1–7.

[14] J. Gao, Z. Xu and P. Ren, An emergency decision making method based on the multiplicative consistency of probabilistic linguistic preference relations, International Journal of Machine Learning and Cybernetics, 10 (2019), 1613–1629.

[15] Z. P. Tian, R. X. Nie and J. Q. Wang, Consistency and consensus improvement models driven by a personalized normalization method with probabilistic linguistic preference relations, Information Fusion, 69 (2021), 156–179.

[16] J. Gao, Z. S. Xu, Z. L. Liang and H. C. Liao, Expected consistency-based emergency decision making with incomplete probabilistic linguistic preference relations, Knowledge-Based Systems, 176 (2019), 15–28.

[17] Wei Cuiping, Feng Xiangqian and Zhang Yuzhong, Method for measuring the satisfactory consistency of a linguistic judgment matrix, Systems Engineering-Theory & Practice, 29 (2009), 104–110.

[18] Fan Zhiping and Jiang Yanping, A judgment method for the satisfying consistency of linguistic judgment matrix, Control and Decision, 19 (2004), 903–906.

[19] F. Chiclana, F. Herrera and E. Herrera-Viedma, Integrating three representation models in fuzzy multipurpose decision making based on fuzzy preference relations, Fuzzy Sets and Systems, 97 (1998), 33–48.
[20] T. Tanino, On group decision making under fuzzy preference, in: J. Kacprzyk and M. Fedrizzi (eds.), Multiperson Decision Making Using Fuzzy Sets and Possibility Theory, Kluwer Academic Publishers, Dordrecht, 1990, 172–185.

[21] T. L. Saaty, The Analytic Hierarchy Process, McGraw-Hill, New York, 1980, 80–210.

[22] R. R. Yager, An approach to ordinal decision making, International Journal of Approximate Reasoning, 12 (1995), 237–261.

[23] Xiao Sihan, Fan Zhiping and Wang Mengguang, Consistency analysis of fuzzy judgment matrix, Journal of Systems Engineering, 16 (2001), 142–145.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 40, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9657639265060425, "perplexity": 4041.059466861631}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103347800.25/warc/CC-MAIN-20220628020322-20220628050322-00428.warc.gz"}
http://math.stackexchange.com/questions/138978/existence-of-a-particular-well-ordering-of-0-1
# Existence of a particular well-ordering of [0,1]

How do you show, assuming the Axiom of Choice and the Continuum Hypothesis, that there exists a well-ordering on $[0,1]$ such that for all $x$, there are only countably many $y$ such that $y \leq x$?

- I think that is not possible. Given any $x = 0.a_1a_2a_3a_4\dots a_n$, any $y = 0.a_1a_2a_3a_4\dots a_na_j$ will do the job. Am I wrong? –  Pedro Tamaroff Apr 30 '12 at 17:28
- Perhaps giving it the order type of $\omega_1$? –  Alex Becker Apr 30 '12 at 17:29
- Could you explain further? –  Venge Apr 30 '12 at 17:31
- Are you familiar with infinite ordinal numbers? If you are, the construction is very easy; if not, it will take a bit more explanation. –  Brian M. Scott Apr 30 '12 at 17:32
- I'm familiar with the 'basic' infinite ordinals like $\omega$, less so with $\omega_1$. –  Venge Apr 30 '12 at 17:34

If CH and AC both hold, then $[0,1]$ (which is bijectable with $\mathbb{R}$, hence with $2^{\aleph_0}$) is bijectable with $\omega_1$, the first uncountable ordinal. Let $f\colon [0,1]\to\omega_1$ be a bijection, and define the order by $x\leq y\iff f(x)\preceq f(y)$ (the right-hand side is the usual ordering of ordinals). Since $\omega_1$ is the first uncountable ordinal, every element of $\omega_1$ has only countably many elements strictly smaller than it, so for every $\alpha\in\omega_1$, $\{a\in\omega_1\mid a\preceq\alpha\}$ is countable. Thus, for any $x\in [0,1]$, only countably many reals can be strictly smaller than $x$ in this ordering.

- So I can see how you need CH to show that $\omega_1$ can be bijected with $\mathbb{R}$ (otherwise you just know that $\omega_1$ is uncountable), but I'm not sure how AC comes into play. –  Venge Apr 30 '12 at 17:40
- @Patrick: without some choice, you might not have a well-ordering of $[0,1]$, so you couldn't have the bijection with $\omega_1$. –  Ross Millikan Apr 30 '12 at 18:00
- @Patrick: You need AC to be sure that $[0,1]$ ($\mathbb{R}$) is bijectable with a cardinal. In some models of ZF without Choice, $\mathbb{R}$ is a countable union of countable sets, and so cannot be bijected with a cardinal (since such a cardinal would necessarily be countable, but we know that $\mathbb{R}$ is not countable). –  Arturo Magidin Apr 30 '12 at 18:08
- Ah, ok. Thanks! –  Venge Apr 30 '12 at 18:25

Following Munkres 10.2: set $S(a) = \{x \mid x < a\}$ and $W = \{x \mid S(x) \text{ is uncountable}\}$. If $W = \emptyset$ we are done. Otherwise $W$ has a least element $\Omega$, and $S(\Omega)$ is as required.

- $S(\Omega)$ is an uncountable subset of $[0,1]$; by CH it is in bijection with $[0,1]$. The answer to the second question is yes. –  user139964 Apr 28 '14 at 7:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9729896187782288, "perplexity": 277.39378171139833}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042982013.25/warc/CC-MAIN-20150728002302-00060-ip-10-236-191-2.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/2474/fundamental-group-of-the-double-torus
# Fundamental group of the double torus

In May's "A Concise Course in Algebraic Topology" I am supposed to calculate the fundamental group of the double torus. Can this be done using van Kampen's theorem and the fact that for (based) spaces $X, Y$: $\pi_1(X\times Y) = \pi_1(X)\times\pi_1(Y)$? Or do I need other theorems to prove this?

I believe that this should equal $\pi_1(T)\times\pi_1(T)\times\pi_1(S^1)$ where $T$ is the torus minus a closed disc on the surface, but I do not know how to calculate $\pi_1(T)$.

By van Kampen's theorem, what you get is actually $$\pi_1(T)\ast_{\pi_1(S^1)}\pi_1(T)$$ which is an amalgamated product (a pushout in the category of groups). Roughly speaking, if you have two groups $G_1$ and $G_2$ and embeddings $i_1$ and $i_2$ of a group $H$ in both, then $G_1\ast_H G_2$ is the group freely generated by the elements of $G_1$ and $G_2$, but identifying the elements $i_1(h)$ and $i_2(h)$ for $h\in H$. Now $\pi_1(T)$ can be computed using the fact that $T$ deformation retracts to a bouquet of two circles. (Think about the standard torus; fix a point and look at the circles through it going round the torus in the two natural ways.)

- "$T$ deformation retracts to a bouquet of two circles" -- you mean, $T$ minus a point (or disc) deformation retracts to a bouquet of two circles? –  user1119 Aug 14 '10 at 19:10
- I meant exactly what I said: see Ringo's definition of $T$. –  Robin Chapman Aug 15 '10 at 7:17

Please see this link, exercise 0.2 in the pdf file written by Christopher Walker on March 2, 2007 for the Math 205B - Topology class. It has a nice explanation as well as some more information.

- Doesn't this answer your question? –  anonymous Aug 14 '10 at 17:51
- -1: A merely related link is not an answer to a specific question. You could as well point to wikipedia. –  Rasmus Aug 14 '10 at 17:54
- However, if the original answer had included the magic words "see Exercise 0.2", this would be a perfectly fine answer. (Note for those who don't follow the link: Baez gives the solution there. I don't think it makes sense to just give a link to where somebody has asked a question.) –  Michael Lugo Aug 14 '10 at 21:29
- @Michael Lugo: I agree with you. –  Rasmus Aug 15 '10 at 10:57

This is a response to Robin Chapman's answer. (For some reason I am not able to ask this directly under his question.) Why do we get that formula from van Kampen? The double torus is the union of two open subsets that are homeomorphic to $T$ and whose intersection is $S^1$. So by van Kampen this should equal the colimit of $\pi_1(W)$ with $W \in \{T, T, S^1\}$. I thought the colimit in the category of groups is just the direct sum, hence the result should be $\pi_1(T) \oplus \pi_1(T) \oplus \pi_1(S^1)$.

- No, the colimit in the category of groups is the amalgamated product as I described. In the category of Abelian groups the colimit is more or less the direct sum, but fundamental groups are not Abelian in general. You should consult a text dealing with van Kampen, e.g., Hatcher's. –  Robin Chapman Aug 18 '10 at 12:02
- PS you can't post a comment in this section because you have two user IDs! –  Robin Chapman Aug 18 '10 at 12:02
- Thank you! I now know why I was wrong. It should have been that coproducts (not colimits in general) are direct sums in the category of abelian groups. (My old user ID was from before I connected my account to my google account, and after clearing the cookies I could no longer access it.)
–  Ringo Starr Aug 18 '10 at 19:35
- Ringo, you could ask the moderators to merge your two accounts (and so add your rep points together). –  Robin Chapman Aug 19 '10 at 6:17

As a further hint: the fundamental group of the torus $\mathbb{T}$ is generated by $a, b$ and has the relation $abAB$, where capitals denote inverses. If you remove an open disk then you get a once-holed torus $T$. Now the fundamental group is free (why?) and the boundary is homotopic to the element $abAB$ (why?). So you can take another copy of the torus, say $\mathbb{S}$, with fundamental group generated by $c, d$ and having the relation $cdCD$. Again remove a disk to get a once-holed torus $S$. Now carefully follow the answer already given, gluing $T$ and $S$, and so on.
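For reference, here is a worked summary of where the hints above lead (an addition restating the answers in this thread, not part of the original posts):

```latex
% Van Kampen for the double torus M = T \cup_{S^1} S, with
% \pi_1(T) = F(a,b) and \pi_1(S) = F(c,d) free of rank two (each once-holed
% torus retracts onto a wedge of two circles), and the gluing circle mapping
% to abAB on one side and to (cdCD)^{-1} on the other:
\[
\pi_1(M) \;\cong\; F(a,b) \ast_{\pi_1(S^1)} F(c,d)
        \;\cong\; \langle\, a, b, c, d \mid aba^{-1}b^{-1}cdc^{-1}d^{-1} \,\rangle.
\]
```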
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8480281233787537, "perplexity": 359.3426082160524}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507446525.24/warc/CC-MAIN-20141017005726-00035-ip-10-16-133-185.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/275310/what-is-the-difference-between-linear-and-affine-function?answertab=oldest
# What is the difference between linear and affine function

I am a bit confused. What is the difference between a linear and an affine function? Any suggestions will be appreciated.

## migrated from stats.stackexchange.com Jan 10 '13 at 18:01

This question came from our site for people interested in statistics, machine learning, data analysis, data mining, and data visualization.

- $f(x)=2x$ is linear and affine. $f(x)=2x+3$ is affine but not linear. –  Rahul Jan 10 '13 at 18:39

A linear function fixes the origin, whereas an affine function need not do so. An affine function is the composition of a linear function with a translation, so while the linear part fixes the origin, the translation can map it somewhere else. As an example, linear functions $\mathbb{R}^2\to\mathbb{R}^2$ preserve the vector space structure (so in particular they must fix the origin). While affine functions don't preserve the origin, they do preserve some of the other geometry of the space, such as the collection of straight lines. If you choose a basis for vector spaces $V$ and $W$, and consider functions $f\colon V\to W$, then $f$ is linear if $f(v)=Av$ for some matrix $A$ (of the appropriate size), and $f$ is affine if $f(v)=Av+b$ for some matrix $A$ and vector $b\in W$.

- Affine functions preserve the distance between two points? –  Jonas Meyer Sep 15 '14 at 2:59
- "An affine transformation does not necessarily preserve angles between lines or distances between points, though it does preserve ratios of distances between points lying on a straight line." From Wikipedia –  paldepind Sep 20 '14 at 17:46
- Fixed! Thanks for catching that; it's slightly unnerving that something so wrong survived for 18 months! –  Matthew Pressland Sep 25 '14 at 8:59

An affine function is the composition of a linear function followed by a translation: $x \mapsto ax$ is linear, while $x \mapsto ax + b$ (apply the linear map $x \mapsto ax$, then the translation $x \mapsto x + b$) is affine. See Modern Basic Pure Mathematics, C. Sidney.

- Not sure why this got downvoted, made the most sense to me. –  Phil H Sep 25 '14 at 9:02
- Probably because this is just a particular case. In general an affine space needs to be introduced. –  Respawned Fluff Feb 8 at 18:16
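A quick numeric illustration of the distinction drawn above (an addition, not from the thread), using Rahul's examples $f(x)=2x$ and $g(x)=2x+3$:

```python
def f(x):
    return 2 * x        # linear (and affine)

def g(x):
    return 2 * x + 3    # affine but not linear

print(f(0), g(0))       # 0 3 -> only the linear map fixes the origin
x, y = 1.5, -4.0
print(f(x + y) == f(x) + f(y))   # True: linearity gives additivity
print(g(x + y) == g(x) + g(y))   # False: the translation +3 is counted twice
```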
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8614506125450134, "perplexity": 443.39574940063085}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438043062723.96/warc/CC-MAIN-20150728002422-00223-ip-10-236-191-2.ec2.internal.warc.gz"}
http://aimsciences.org/article/doi/10.3934/amc.2018014?viewType=html
# American Institute of Mathematical Sciences

February 2018, 12(1): 199-214. doi: 10.3934/amc.2018014

## Reduced access structures with four minimal qualified subsets on six participants

Department of Mathematical Sciences, Sharif University of Technology, P.O. Box 11155-9415, Tehran, Iran

Received: March 2017. Revised: August 2017. Published: March 2018.

In this paper, we discuss a point about applying known decomposition techniques in their most general form. Three versions of these methods, which are useful for obtaining upper bounds on the optimal information ratios of access structures, are known as Stinson's $λ$-decomposition, $(λ, ω)$-decomposition and $λ$-weighted-decomposition, where the latter two are generalizations of the first. We then consider the problem of determining the exact values of the optimal information ratios of the reduced access structures with exactly four minimal qualified subsets on six participants, which remained unsolved in Martí-Farré et al.'s paper [Des. Codes Cryptogr. 61 (2011), 167-186]. We improve the known upper bounds for all these access structures except four, determining the exact values of the optimal information ratios. All three decomposition techniques are used, and some cases are handled by taking full advantage of the generality of decompositions.

Citation: Motahhareh Gharahi, Shahram Khazaei. Reduced access structures with four minimal qualified subsets on six participants. Advances in Mathematics of Communications, 2018, 12 (1): 199-214. doi: 10.3934/amc.2018014

##### References:

[1] A. Beimel, Secret-sharing schemes: a survey, in Int. Conf. Coding Crypt., Springer, 2011, 11–46.
[2] A. Beimel, A. Ben-Efraim, C. Padró and I. Tyomkin, Multi-linear secret-sharing schemes, in Theory of Cryptography Conference, Springer, Berlin, 2014, 394–418.
[3] G. R. Blakley, Safeguarding cryptographic keys, in Proceedings of the 1979 AFIPS National Computer Conference, Monval, NJ, USA, AFIPS Press, 1979, 313–317.
[4] C. Blundo, A. De Santis, D. R. Stinson and U. Vaccaro, Graph decompositions and secret sharing schemes, J. Cryptology, 8 (1995), 39–64.
[5] C. Blundo, A. De Santis, R. D. Simone and U. Vaccaro, Tight bounds on the information rate of secret sharing schemes, Des. Codes Crypt., 11 (1997), 107–122.
[6] E. F. Brickell and D. R. Stinson, Some improved bounds on the information rate of perfect secret sharing schemes, J. Cryptology, 5 (1992), 153–166.
[7] R. M. Capocelli, A. D. Santis, L. Gargano and U. Vaccaro, On the size of shares for secret sharing schemes, J. Cryptology, 6 (1993), 157–167.
[8] L. Csirmaz, An impossibility result on graph secret sharing, Des. Codes Crypt., 53 (2009), 195–209.
[9] L. Csirmaz, Secret sharing on the d-dimensional cube, Des. Codes Crypt., 74 (2015), 719–729.
[10] L. Csirmaz and G. Tardos, Optimal information rate of secret sharing schemes on trees, IEEE Trans. Inf. Theory, 59 (2013), 2527–2530.
[11] O. Farràs, T. B. Hansen, T. Kaced and C. Padró, Optimal non-perfect uniform secret sharing schemes, in Int. Crypt. Conf., Springer, Berlin, 2014, 217–234.
[12] O. Farràs, T. Kaced, S. Martin and C. Padro, Improving the linear programming technique in the search for lower bounds in secret sharing, Cryptology ePrint Archive, Report 2017/919, 2017; available at https://eprint.iacr.org/2017/919
[13] O. Farràs, J. R. Metcalf-Burton, C. Padró and L. Vázquez, On the optimization of bipartite secret sharing schemes, Des. Codes Crypt., 63 (2012), 255–271.
[14] M. Gharahi and M. H. Dehkordi, Perfect secret sharing schemes for graph access structures on six participants, J. Math. Crypt., 7 (2013), 143–146.
[15] M. Gharahi and M. H. Dehkordi, The complexity of the graph access structures on six participants, Des. Codes Crypt., 67 (2013), 169–173.
[16] M. Ito, A. Saito and T. Nishizeki, Secret sharing scheme realizing general access structure, Electr. Commun. Japan (Part III: Fundam. Electr. Sci.), 72 (1989), 56–64.
[17] W.-A. Jackson and K. M. Martin, Geometric secret sharing schemes and their duals, Des. Codes Crypt., 4 (1994), 83–95.
[18] W.-A. Jackson and K. M. Martin, Perfect secret sharing schemes on five participants, Des. Codes Crypt., 9 (1996), 267–286.
[19] E. D. Karnin, J. W. Greene and M. E. Hellman, On secret sharing systems, IEEE Trans. Inf. Theory, 29 (1983), 35–41.
[20] J. Martí-Farré and C. Padró, Secret sharing schemes with three or four minimal qualified subsets, Des. Codes Crypt., 34 (2005), 17–34.
[21] J. Martí-Farré and C. Padró, Secret sharing schemes on access structures with intersection number equal to one, Discrete Appl. Math., 154 (2006), 552–563.
[22] J. Martí-Farré, C. Padró and L. Vázquez, Optimal complexity of secret sharing schemes with four minimal qualified subsets, Des. Codes Crypt., 61 (2011), 167–186.
[23] K. M. Martin, New secret sharing schemes from old, J. Combin. Math. Combin. Comp., 14 (1993), 65–77.
[24] C. Padró and G. Sáez, Secret sharing schemes with bipartite access structure, IEEE Trans. Inf. Theory, 46 (2000), 2596–2604.
[25] C. Padró and G. Sáez, Lower bounds on the information rate of secret sharing schemes with homogeneous access structure, Inf. Process. Lett., 83 (2002), 345–351.
[26] C. Padró, L. Vázquez and A. Yang, Finding lower bounds on the complexity of secret sharing schemes by linear programming, Discrete Appl. Math., 161 (2013), 1072–1084.
[27] A. Shamir, How to share a secret, Commun. ACM, 22 (1979), 612–613.
[28] D. R. Stinson, An explication of secret sharing schemes, Des. Codes Crypt., 2 (1992), 357–390.
[29] D. R. Stinson, Decomposition constructions for secret-sharing schemes, IEEE Trans. Inf. Theory, 40 (1994), 118–125.
[30] H.-M. Sun and B.-L. Chen, Weighted decomposition construction for perfect secret sharing schemes, Comp. Math. Appl., 43 (2002), 877–887.
[31] H.-M. Sun, H. Wang, B.-H. Ku and J. Pieprzyk, Decomposition construction for secret sharing schemes with graph access structures in polynomial time, SIAM J. Discrete Math., 24 (2010), 617–638.
[32] M. Van Dijk, On the information rate of perfect secret sharing schemes, Des. Codes Crypt., 6 (1995), 143–169.
[33] M. Van Dijk, W.-A. Jackson and K. M. Martin, A general decomposition construction for incomplete secret sharing schemes, Des. Codes Crypt., 15 (1998), 301–321.
[34] M. Van Dijk, T. Kevenaar, G.-J. Schrijen and P. Tuyls, Improved constructions of secret sharing schemes by applying ($λ$, $ω$)-decompositions, Inf. Process. Lett., 99 (2006), 154–157.
**Table 1.** An ideal $2$-decomposition for $\Gamma = \Gamma_{4}(\{1, 2, 3, 5, 9, C\})$, where $[\Gamma^-] = 23 + 5C + 9C + 1359$.

| $[{\Gamma^{j}}^-]$ | $\mathbf{\sigma}^{j} = (\sigma_p^j)_{p\in\mathcal{P}}$ |
|---|---|
| $23$ | $(0, 1, 1, 0, 0, 0)$ |
| $5C+9C$ | $(0, 0, 0, 1, 1, 1)$ |
| $5C+1359$ | $(1, 0, 1, 1, 1, 1)$ |
| $23+9C+1359+125C$ | $(1, 1, 1, 1, 1, 1)$ |

**Table 2.** An ideal $(3, 1)$-decomposition for $\Gamma = \Gamma_{4}(\{1, 3, 5, A, B, C\})$, where $[\Gamma^-] = 5C + 3AB + ABC + 135B$ and $\Gamma^+ = \{13BC, 13AC, 15AB, 135A, 35B\}$.

| $[{\Gamma^{j}}^-]$ | $a_1\dots a_4$ | $b_1\dots b_5$ | $\mathbf{\sigma}^{j} = (\sigma_p^j)_{p\in\mathcal{P}}$ |
|---|---|---|---|
| $5C$ | $1000$ | $00000$ | $(0, 0, 1, 0, 0, 1)$ |
| $5+AB$ | $1111$ | $00111$ | $(0, 0, 1, 1, 1, 0)$ |
| $C+3AB+13B$ | $1111$ | $11000$ | $(1, 1, 0, 1, 1, 1)$ |
| $3AB+ABC+135B+15BC$ | $0111$ | $00000$ | $(1, 1, 1, 1, 1, 1)$ |

Note. Consider an access structure $\Gamma$ with $\Gamma^- = \{A_1, \dots, A_m\}$ and $\Gamma^+ = \{B_1, \dots, B_M\}$. Each bit $a_i$ of the binary string $a_1\dots a_m$ in the second column indicates whether $A_i$ is a qualified subset of $\Gamma^j$; that is, $a_i = 1$ iff $A_i \in \Gamma^j$. Similarly, each bit $b_i$ of the binary string $b_1\dots b_M$ in the third column indicates whether $B_i$ is a qualified subset of $\Gamma^j$; that is, $b_i = 1$ iff $B_i \in \Gamma^j$.

**Table 3.** A $2$-weighted-decomposition for $\Gamma = \Gamma_{4}(\{3, 5, 6, 9, A, D\})$, where $[\Gamma^-] = 359D + 36A + 56D + 9AD$.

| $[W_{j}^{-}]$ | $\Sigma^j$ | $\mathbf{\sigma}^{j} = (\sigma_p^j)_{p\in\mathcal{P}}$ |
|---|---|---|
| $1\times(359D+36AD+56D+9AD)$ | | $(1, 1, 1, 1, 1, 1)$ |
| $1\times(359D+56D+9AD)+2\times(36A)$ | $\Sigma^2$ | $(2, 1, 2, 1, 2, 2)$ |
Note. In $\Sigma^2$, the shares of the participants are assigned as follows: $\mathbf{s}_3 = (r_2+r_4-s_1, r_5)$, $\mathbf{s}_5 = r_3+r_4$, $\mathbf{s}_6 = (r_4, r_6)$, $\mathbf{s}_9 = r_1+r_2$, $\mathbf{s}_A = (r_2, r_5+r_6+s_2)$, $\mathbf{s}_D = (r_3+s_1, r_1+s_1)$.

**Table 4.** Results obtained by ideal $\lambda$-decomposition.

| | $\mathcal{P}$ | Access structure | $\sigma$ from [22] | $\sigma$ |
|---|---|---|---|---|
| $\mathcal{A}_{1}$ | $12359C$ | $23 + 5C + 9C + 1359$ | $[3/2, 5/3]$ | $3/2$ |
| $\mathcal{A}_{2}$ | $12569C$ | $26 + 9C + 159 + 56C$ | $[3/2, 5/3]$ | $3/2$ |
| $\mathcal{A}_{3}$ | $13569A$ | $56 + 9A + 36A + 1359$ | $[3/2, 5/3]$ | $3/2$ |
| $\mathcal{A}_{4}$ | $1356AC$ | $AC + 135 + 56C + 36A$ | $[3/2, 5/3]$ | $3/2$ |
| $\mathcal{A}_{5}$ | $35679A$ | $9A+567+367A+3579$ | $[3/2, 5/3]$ | $3/2$ |
| $\mathcal{A}_{6}$ | $127BCD$ | $17BD+27B+7CD+BCD$ | $[5/3, 11/6]$ | $5/3$ |

Note. Details of the decompositions can be found in Appendix A.1.

**Table 5.** Result obtained by non-ideal $\lambda$-decomposition.

| | $\mathcal{P}$ | Access structure | $\sigma$ from [22] | $\sigma$ |
|---|---|---|---|---|
| $\mathcal{A}_{7}$ | $167ABD$ | $17BD+67AB+67D+ABD$ | $[3/2, 5/3]$ | $3/2$ |

Note. Details of the decomposition can be found in Appendix A.2.

**Table 6.** Results obtained by ideal $(\lambda, \omega)$-decomposition.

| | $\mathcal{P}$ | Access structure | $\sigma$ from [22] | $\sigma$ |
|---|---|---|---|---|
| $\mathcal{A}_{8}$ | $135ABC$ | $5C + 3AB + ABC + 135B$ | $[3/2, 5/3]$ | $3/2$ |
| $\mathcal{A}_{9}$ | $125ACD$ | $2A+ 15D+ 5CD + ACD$ | $[3/2, 5/3]$ | $3/2$ |
| $\mathcal{A}_{10}$ | $136ACE$ | $13 + ACE + 6CE + 36AE$ | $[3/2, 5/3]$ | $3/2$ |
| $\mathcal{A}_{11}$ | $167ABC$ | $17B + 67C + ABC + 67AB$ | $[3/2, 5/3]$ | $3/2$ |

Note. Details of the decompositions can be found in Appendix A.3.

**Table 7.** Results obtained by $\lambda$-weighted decomposition.

| | $\mathcal{P}$ | Access structure | $\sigma$ from [22] | $\sigma$ |
|---|---|---|---|---|
| $\mathcal{A}_{12}$ | $3569AD$ | $359D+36A+56D+9AD$ | $[3/2, 5/3]$ | $3/2$ |
| $\mathcal{A}_{13}$ | $1249AC$ | $19 + 2A + 4C + 9AC$ | $[3/2, 5/3]$ | $3/2$ |
| $\mathcal{A}_{14}$ | $35679E$ | $3579+367E+567E+9E$ | $[3/2, 5/3]$ | $3/2$ |
| $\mathcal{A}_{15}$ | $3569BE$ | $359B+36BE+56E+9BE$ | $[3/2, 7/4]$ | $3/2$ |

Note. Details of the decompositions can be found in Appendix A.4.
$\mathcal{P}$ Access Structure $\sigma$ from [22] $\sigma$ $\mathcal{A}_{12}$ $3569AD$ $359D+36A+56D+9AD$ $[3/2, 5/3]$ $3/2$ $\mathcal{A}_{13}$ $1249AC$ $19 + 2A + 4C + 9AC$ $\mathcal{A}_{14}$ $35679E$ $3579+367E+567E+9E$ $\mathcal{A}_{15}$ $3569BE$ $359B+36BE+56E+9BE$ $[3/2, 7/4]$ Note. Details of decompositions can be found in Appendix A.4. Results obtained from the corresponding dual graph access structures $\mathcal{P}$ Access Structure $(\cong \Gamma^*)$ $\sigma$ from [22] $\sigma$ $167BDE$ $17BD+67BE+67DE+BDE$ $(\cong \Gamma^*_{62})$ $[3/2, 5/3]$ $3/2$ [14,32] $356BDE$ $35BD+36BE+56DE+BDE$ $(\cong\Gamma^*_{68} )$ $357ABC$ $357B+37AB+57C+ABC$ $( \cong \Gamma^*_{33})$ $[3/2, 7/4]$ $357ACE$ $357+37AE+57CE+ACE$ $(\cong\Gamma^*_{36})$ $37BCDE$ $37BD+37BE+7CDE+BCDE$ $(\cong \Gamma^*_{102})$ $[3/2, 11/6]$ $125ADE$ $15D+2AE+5DE+ADE$ $(\cong \Gamma^*_{14})$ $[5/3, 7/4]$ $5/3$ [32] $135ADE$ $135D+3AE+5DE+ADE$ $(\cong \Gamma^*_{29} )$ $137BCE$ $137B+37BE+7CE+BCE$ $(\cong \Gamma^*_{48} )$ $[5/3, 11/6]$ $124BDE$ $1BD+2BE+4DE+BDE$ $(\cong \Gamma^*_{9} )$ $[7/4, 11/6]$ $7/4$ [30,32,15] $125BDE$ $15BD+2BE+5DE+BDE$ $(\cong \Gamma^*_{22})$ $127BDE$ $17BD+27BE+7DE+BDE$ $(\cong \Gamma^*_{40})$ $135BDE$ $135BD+3BE+5DE+BDE$ $(\cong \Gamma^*_{42})$ $136BDE$ $13BD+36BE+6DE+BDE$ $(\cong \Gamma^*_{43})$ $137BDE$ $137BD+37BE+7DE+BDE$ $(\cong\Gamma^*_{61})$ $\mathcal{P}$ Access Structure $(\cong \Gamma^*)$ $\sigma$ from [22] $\sigma$ $167BDE$ $17BD+67BE+67DE+BDE$ $(\cong \Gamma^*_{62})$ $[3/2, 5/3]$ $3/2$ [14,32] $356BDE$ $35BD+36BE+56DE+BDE$ $(\cong\Gamma^*_{68} )$ $357ABC$ $357B+37AB+57C+ABC$ $( \cong \Gamma^*_{33})$ $[3/2, 7/4]$ $357ACE$ $357+37AE+57CE+ACE$ $(\cong\Gamma^*_{36})$ $37BCDE$ $37BD+37BE+7CDE+BCDE$ $(\cong \Gamma^*_{102})$ $[3/2, 11/6]$ $125ADE$ $15D+2AE+5DE+ADE$ $(\cong \Gamma^*_{14})$ $[5/3, 7/4]$ $5/3$ [32] $135ADE$ $135D+3AE+5DE+ADE$ $(\cong \Gamma^*_{29} )$ $137BCE$ $137B+37BE+7CE+BCE$ $(\cong \Gamma^*_{48} )$ $[5/3, 11/6]$ $124BDE$ $1BD+2BE+4DE+BDE$ $(\cong \Gamma^*_{9} )$ $[7/4, 11/6]$ $7/4$ [30,32,15] $125BDE$ $15BD+2BE+5DE+BDE$ $(\cong \Gamma^*_{22})$ $127BDE$ $17BD+27BE+7DE+BDE$ $(\cong \Gamma^*_{40})$ $135BDE$ $135BD+3BE+5DE+BDE$ $(\cong \Gamma^*_{42})$ $136BDE$ $13BD+36BE+6DE+BDE$ $(\cong \Gamma^*_{43})$ $137BDE$ $137BD+37BE+7DE+BDE$ $(\cong\Gamma^*_{61})$ Open access structures $\mathcal{P}$ Access structure $\sigma$ [32,22] $\{3, 5, 7, A, D, E\}$ $357D+37AE+57DE+ADE$ $(\cong \Gamma^*_{75})$ $[3/2, 5/3]$ $\{3, 5, 7, B, D, E\}$ $357BD+37BE+57DE+BDE$ $(\cong \Gamma_{84}^*)$ $\{1, 6, 7, A, D, E\}$ $17D+67AE+67DE+ADE$ $\{3, 5, 7, 9, B, E\}$ $3579B+37BE+57E+9BE$ $[3/2, 11/6]$ $\mathcal{P}$ Access structure $\sigma$ [32,22] $\{3, 5, 7, A, D, E\}$ $357D+37AE+57DE+ADE$ $(\cong \Gamma^*_{75})$ $[3/2, 5/3]$ $\{3, 5, 7, B, D, E\}$ $357BD+37BE+57DE+BDE$ $(\cong \Gamma_{84}^*)$ $\{1, 6, 7, A, D, E\}$ $17D+67AE+67DE+ADE$ $\{3, 5, 7, 9, B, E\}$ $3579B+37BE+57E+9BE$ $[3/2, 11/6]$ 2016 Impact Factor: 0.8 ## Tools Article outline Figures and Tables
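The membership bookkeeping behind the $a_i$ and $b_i$ bit strings in the notes above is mechanical, so a short sketch may help. The snippet below is illustrative only: it assumes the usual convention for monotone access structures (a set is qualified iff it contains a minimal qualified set), the helper name is made up, and the sets are taken from the $(3, 1)$-decomposition table.

```python
def is_qualified(subset, minimal_sets):
    """A set is qualified in a monotone access structure iff it
    contains at least one of the structure's minimal qualified sets."""
    s = set(subset)
    return any(set(m) <= s for m in minimal_sets)

# minimal qualified sets of Gamma for Gamma_4({1,3,5,A,B,C}):
# [Gamma^-] = 5C + 3AB + ABC + 135B
gamma_minus = ["5C", "3AB", "ABC", "135B"]
# minimal qualified sets of the second sub-structure in the table: 5 + AB
gamma2_minus = ["5", "AB"]

# recompute the bit string a_1...a_4 of the table's second row (expected: 1111)
print("".join(str(int(is_qualified(A, gamma2_minus))) for A in gamma_minus))
```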
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8019606471061707, "perplexity": 967.6009611394308}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947033.92/warc/CC-MAIN-20180424174351-20180424194351-00104.warc.gz"}
https://malegislature.gov/Laws/GeneralLaws/PartI/TitleXX/Chapter147/Section59
# General Laws

## Section 59. Issuance of license; effect of felony conviction; term; contents; renewal

Section 59. The commissioner may issue to an applicant complying with the provisions of section fifty-eight a license to engage in the security systems business; provided, however, that no such license shall be issued to any person who has been convicted in any state of the United States of a felony, unless a hearing is held and the commissioner, at his or her discretion, determines a license is appropriate. Any person who has been convicted of a violation of section ninety-nine or ninety-nine A of chapter two hundred and seventy-two shall not be issued a license, unless a hearing is held and the commissioner, at his or her discretion, determines a license is appropriate. If any license has been previously issued to such person, it shall be revoked. Such license shall be valid for two years, shall state the name under which the licensed business is to be conducted, and the address of its principal office, and shall be posted by the licensee in a conspicuous place in such office. The name of the business, so licensed, shall not contain any words which denote or imply any association with agencies of the United States, the commonwealth or any of its political subdivisions. Failure to comply with the provisions of this paragraph shall constitute cause for revocation of such license. The commissioner may biennially renew and may at any time for cause, after notice and hearing, revoke any such license. An application for renewal shall be on a form furnished by the commissioner.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8214707374572754, "perplexity": 3320.9761937165067}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430456976384.15/warc/CC-MAIN-20150501050936-00039-ip-10-235-10-82.ec2.internal.warc.gz"}
https://rd.springer.com/article/10.1140/epjc/s10052-018-5882-1
# On the holographic basis of loop quantum cosmology

• C. A. S. Silva

Open Access Regular Article - Theoretical Physics

## Abstract

In this work, we obtain the loop quantum cosmology dynamical equations, plus a positive cosmological constant, from the Bekenstein–Hawking entropy-area relation given by loop quantum black holes, by the use of the Jacobson formalism (Phys Rev Lett 75:1260, 1995). The results may establish a so far absent connection between holography and the description of the cosmos given by loop quantum cosmology.

## 1 Introduction

Loop quantum gravity (LQG) proposes a way to model the behavior of spacetime in situations where its atomic character becomes relevant [2, 3, 4, 5, 6]. Among these situations, the nature of our universe near the Big Bang singularity is described by loop quantum cosmology (LQC) [7]. This description of cosmology, which takes quantum-gravity effects into account, has become very popular during the last decade because it makes contact with observational activity [8, 9]. The main result in LQC is the resolution of the Big Bang singularity, since it has long been expected that the initial singularity of general relativity should be resolved in the context of a quantum gravity theory. In the case of LQC, the Big Bang singularity is naturally replaced by a bounce when the curvature reaches the Planckian regime. At this point, the universe density does not become infinite, but assumes a maximum finite critical value. A quantum bridge forms in place of the initial singularity, and the universe can tunnel through it. In this sense, the quantum evolution of the universe extends through the Big Bang. These results open the possibility that our universe could have its origin in a prior contraction phase [10]. The possibility of one or more phases of the universe before the hot Big Bang, on the other hand, raises important questions related to the thermodynamics of our universe. Among these questions are the problem of the origin of the cosmological entropy and the arrow of time [11]. In order to address these issues, it is necessary to answer a question that remains open until now: what is the correct way to count the spacetime states in agreement with the LQC description of the universe? On the other hand, among the results coming from black hole thermodynamics, we have the Bekenstein–Hawking formula, where the entropy of a black hole is proportional to its horizon area: $$S = A/4\hslash G$$ [12]. Behind the simplicity of this expression lies a deep intersection between two theories that remain at odds until now, gravity and quantum mechanics. Interestingly, the Bekenstein–Hawking formula is one of the few places in physics where Newton's gravitational constant G meets the Planck constant $$\hslash$$. Moreover, the Bekenstein–Hawking formula forms the basis of the holographic principle, which sets how many degrees of freedom there are in nature at the most fundamental level [13, 14, 15, 16]. Consequently, one may suspect that the way to discuss the thermodynamical evolution of the universe in the context of LQC could lie in the holographic principle. This possibility is reinforced by the strong evidence that a quantum theory of spacetime must be holographic. Among such evidence, we have the recent results by Afshordi et al., which have shown that the universe passed through a holographic phase at early times [17]. 
Since this is also the regime where LQC contributions become necessary, it is required to find a way to reconcile LQC with holography. The applicability of the holographic principle to cosmology has been the subject of many discussions in the literature. Following the pioneering work by Fischler and Susskind [18], a variety of scenarios have been proposed in order to establish the validity of the holographic principle in cosmological contexts, constituting the so-called holographic cosmology [18, 19, 20, 21, 22, 23, 24, 25]. Moreover, the holographic hypothesis plays a central role in braneworld cosmology via the application of the AdS/CFT formulation of the holographic principle to the Randall-Sundrum braneworld scenario [26, 27, 28, 29, 30, 31, 32]. Among the versions of holographic cosmology, Bak and Rey have argued that the holographic principle is satisfied by the universe if one considers that its entropy is associated with its apparent horizon [21]. (The definition of the apparent horizon of a Friedmann-Lemaitre-Robertson-Walker (FLRW) universe can be found, for example, in [33].) In this context, the validity of the first law of thermodynamics can be proved, which has made it possible to derive the Friedmann equations of a FLRW universe [34] by the use of the Jacobson formalism [1]. The Jacobson formalism is itself one of the most important results in favor of the holographic hypothesis. In fact, Jacobson's results demonstrated that Einstein's field equations can be derived from the proportionality between entropy and horizon area, if the fundamental Clausius relation, $$\delta Q = TdS$$, connecting heat, temperature and entropy is valid for all the local Rindler causal horizons through each spacetime point, where $$\delta Q$$ and T are interpreted, respectively, as the energy flux and the Unruh temperature seen by an accelerated observer just inside the horizon [1]. Jacobson's results bring up an interesting consequence of the holographic principle: that spacetime must have an atomic structure. In fact, the most important lesson from these results is that spacetime can be viewed as a gas of atoms with a related entropy given by the Bekenstein–Hawking formula, and the gravitational field equations are nothing but equations of state describing this gas. Such an interpretation of spacetime was later reinforced by Padmanabhan, who linked the macroscopic description of spacetime, by Einstein's equations, to microscopic degrees of freedom by assuming the principle of equipartition of energy [35]. A further extension of Jacobson's results to non-equilibrium situations has also been carried out [36]. Jacobson's results have given rise to several works which have strengthened the thermodynamic interpretation of Einstein's equations. Actually, it has been shown that the susceptibility of gravitational fields to a thermodynamical description occurs not only in Einstein's gravity, but also in a wide assortment of theories. (For a review and a broad list of references see [37].) One may then think of using the Jacobson formalism to investigate the relationship between LQC and holography. In fact, such an investigation has been done by Cai et al. [38]. In that work, the Jacobson formalism was used, taking into account a logarithmic quantum-corrected Bekenstein–Hawking formula which arises in the context of LQG [39]. 
However, this attempt to obtain the LQC equations from the Bekenstein–Hawking entropy led to quantum-corrected Friedmann equations which give a scenario different from the LQC one. Actually, the worst conclusion from that analysis is that a bounce does not occur anymore and the Big Bang singularity is not resolved. This establishes a disconnect between the description of the spacetime behavior near the Big Bang and the way its degrees of freedom are counted in the context of LQG. In this way, the important problem of reconciling the description of the universe provided by LQC with the thermodynamic evolution of the cosmos has been shown to be non-trivial. In the present work, we shall demonstrate that one can reconcile LQC and holography when it is considered that the universe entropy is given by the quantum-corrected Bekenstein–Hawking formula that arises in the context of loop quantum black holes (LQBHs). In order to do this, we shall show that the LQC dynamical equations can be derived from the LQBH entropy-area relation, by the use of the Jacobson formalism. The present article is organized as follows: in Sect. 2, we review the basic aspects of LQBHs, in order to introduce the modified Bekenstein–Hawking formula that will be used throughout this paper; in Sect. 3 we review the formalism introduced by Cai et al. to derive the Friedmann equations from the Bekenstein–Hawking formula; in Sect. 4, we derive quantum-corrected Friedmann equations from the LQBH entropy-area relation; in Sect. 5, from the results of Sect. 4, we derive the LQC Friedmann equations; Sect. 6 is devoted to conclusions and perspectives. In this paper, unless otherwise stated, we shall use $$\hslash = c = k_{B} = G = 1$$. ## 2 Loop quantum black holes In this section we review the basic concepts of loop quantum black holes in order to introduce the quantum-corrected Bekenstein–Hawking formula which will be used to derive the LQC dynamical equations. A loop quantum black hole (LQBH), also called a self-dual black hole, is a quantum-corrected Schwarzschild black hole that appears from a simplified model of LQG by the use of semiclassical tools in the minisuperspace quantization approach [40, 41, 42, 43]. The metric that describes the LQBH scenario is given by \begin{aligned} ds^{2} = - G(r)dt^{2} + F^{-1}(r)dr^{2} + H(r)(d\theta ^{2} + \sin ^{2}\theta d\phi ^{2}), \end{aligned} (1) where the functions above are defined as \begin{aligned} G(r)= & {} \frac{(r-r_{+})(r-r_{-})(r+r_{*})^2}{r^{4}+a_{0}^{2}}, \nonumber \\ F(r)= & {} \frac{(r-r_{+})(r-r_{-})r^{4}}{(r+r_{*})^{2}(r^{4}+a_{0}^{2})};\quad H(r) = r^{2} + \frac{a_{0}^{2}}{r^{2}}. \end{aligned} (2) In this scenario, two horizons are present. The first one, located at $$r_{+} = 2m$$, corresponds to an event horizon. The second one, located at $$r_{-} = 2mP^{2}$$, corresponds to a Cauchy horizon. In addition, we have $$r_{*} = \sqrt{r_{+}r_{-}} = 2mP$$, where P is the polymeric function defined by \begin{aligned} P = \frac{\sqrt{1+\epsilon ^{2}} - 1}{\sqrt{1+\epsilon ^{2}} +1}. \end{aligned} (3) In the definition above, $$\epsilon =\gamma \delta _b$$, where $$\gamma$$ is the Barbero-Immirzi parameter and $$\delta _b$$ is the polymeric parameter, both of which appear in the LQG quantization techniques. In particular, the polymeric parameter sets the length of the path along which the connection, used to define the holonomies in LQG, is integrated [40]. 
Moreover, $$a_{0} = A_{min}/8\pi$$, with $$A_{min}$$ the minimal area value in LQG. In the metric 2, r is only asymptotically the usual radial coordinate, because $$g_{\theta \theta }$$ is not simply $$r^{2}$$. A more physical radial coordinate can be defined from the form of the function H(r) in the metric: \begin{aligned} R = \sqrt{r^{2}+\frac{a_{0}^{2}}{r^{2}}} , \end{aligned} (4) in the sense that the radial coordinate R measures the proper circumferential distance. In addition, the relation between the parameter m in the solution 1 and the ADM mass M is given by $$M = m(1 + P )^{2}$$. An interesting feature of LQBHs is the property of self-duality. This property says that if one introduces the new coordinates $$\tilde{r} = a_{0}/r$$ and $$\tilde{t} = t r_{*}^{2}/a_{0}$$, with $$\tilde{r}_{\pm } = a_{0}/r_{\mp }$$, the metric preserves its form. The dual radius is given by $$r_{dual} = \tilde{r} = \sqrt{a_{0}}$$ and corresponds to the minimal possible surface element. Moreover, since Eq. 4 can be written as $$R = \sqrt{r^{2}+\tilde{r}^{2}}$$, it is clear that the solution contains another asymptotically flat Schwarzschild region rather than a singularity in the limit $$r\rightarrow 0$$. This new region corresponds to a Planck-sized wormhole. In addition, the Kretschmann invariant for this LQBH solution is given, for $$r \approx 0$$, by \begin{aligned} K = R_{\mu \nu \alpha \beta }R^{\mu \nu \alpha \beta } = \frac{3145728\pi ^{4}r^{6}}{a_{0}^{4}\gamma ^{8}\delta ^{8}m^{2}}. \end{aligned} (5) Differently from the classical Schwarzschild scenario, the LQBH Kretschmann invariant does not diverge when $$r \rightarrow 0$$. This result points to the resolution of the singularity at $$r = 0$$. As a matter of fact, in the quantum-corrected LQBH scenario, an asymptotically flat region appears in the place of the black hole singularity, as shown in the LQBH Carter–Penrose diagram in Fig. 1. The thermodynamic properties of the LQBH solution can be derived in the usual manner. The black hole temperature $$T_{BH}$$ is obtained by the calculation of the surface gravity $$\kappa$$, by $$T_{BH} = \kappa /2\pi$$, where \begin{aligned} \kappa ^{2} = - g^{\mu \nu }g_{\rho \sigma }\nabla _{\mu }\chi ^{\rho }\nabla _{\nu }\chi ^{\sigma } = - \frac{1}{2}g^{\mu \nu }g_{\rho \sigma }\varGamma ^{\rho }_{\mu 0} \varGamma ^{\sigma }_{\nu 0}. \end{aligned} (6) In the equation above, $$\chi ^{\mu } = (1,0,0,0)$$ is a timelike Killing vector, and $$\varGamma ^{\mu }_{\sigma \rho }$$ are the connection coefficients. Combining this with the metric, one obtains the LQBH temperature \begin{aligned} T_{H} = \frac{(2m)^{3}(1-P^{2})}{4\pi [(2m)^{4} +a_{0}^{2}]}. \end{aligned} (7) It is easy to see that one recovers the usual Hawking temperature in the limit of large masses. However, differently from the Hawking case, the temperature 7 goes to zero for $$m \rightarrow 0$$. The black hole entropy can be found by making use of the thermodynamical relation $$S_{BH} = \int dm/T (m)$$, \begin{aligned} S = \frac{4\pi }{(1-P^{2})}\left[ \frac{16m^{4} - a_{0}^{2}}{16m^{2}}\right] . \end{aligned} (8) From the expression 8, one can obtain an expression for the black hole entropy in terms of its area [43] \begin{aligned} S = \pm \frac{\sqrt{A^{2} - A_{min}^{2}}}{4\sigma }, \end{aligned} (9) where $$\sigma = 1 - P^{2}$$, and we have chosen the possible additional constant to be zero. In addition, $$A = 4\pi R^{2}$$, with R defined by Eq. 4. 
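Equations 3, 7 and 8 are simple enough to evaluate directly. The sketch below checks two properties stated in the text: the temperature approaches the Hawking value up to the $(1-P^2)$ factor for large masses and vanishes as $m \rightarrow 0$, and the entropy vanishes at $m = \sqrt{a_0}/2$ (the sign change discussed next). The numerical values of $\epsilon$ and $\gamma$ are assumptions for illustration; the relation $A_{min} = 4\pi\gamma\sqrt{3}$ is the one quoted below Eq. 29.

```python
import numpy as np

eps = 0.2                                   # epsilon = gamma*delta_b, assumed value
P = (np.sqrt(1 + eps**2) - 1)/(np.sqrt(1 + eps**2) + 1)   # polymeric function, Eq. (3)
gamma = 0.2375                              # Barbero-Immirzi parameter, assumed value
a0 = 4*np.pi*gamma*np.sqrt(3)/(8*np.pi)     # a0 = A_min/(8*pi), A_min = 4*pi*gamma*sqrt(3)

def T_H(m):
    """LQBH temperature, Eq. (7), Planck units."""
    return (2*m)**3*(1 - P**2)/(4*np.pi*((2*m)**4 + a0**2))

def S(m):
    """LQBH entropy, Eq. (8), Planck units."""
    return (4*np.pi/(1 - P**2))*(16*m**4 - a0**2)/(16*m**2)

for m in (0.01, 1.0, 100.0):
    print(m, T_H(m), (1 - P**2)/(8*np.pi*m))  # large m: columns agree; small m: T_H -> 0
print(S(np.sqrt(a0)/2))                       # 0.0: the entropy changes sign here
```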
The entropy S is positive for $$m > \sqrt{a_{0}}/2$$ and becomes negative otherwise. The two possible signs of the loop black hole entropy are related to the two physical possibilities that arise from the LQBH structure, depending on the localization of the event horizon (outside or inside the wormhole throat) [42]. Another way to obtain the thermodynamic properties of LQBHs has been presented in [44], where the Hamilton–Jacobi version of the tunneling formalism was used; in this context, back-reaction effects were included. Moreover, the LQBH solution can be extended to scenarios of black holes with charge and angular momentum [45]. The information loss problem has also been discussed in the LQBH framework. As an important result, the problem of information loss by black holes can be relieved in this scenario [44, 46]. ## 3 Friedmann equations from thermodynamics In this section, we review the method introduced by Cai and Kim [34] and Cai et al. [38] to derive the Friedmann equations from the Bekenstein–Hawking formula, by considering that the holographic bound is satisfied by the universe in some regime. This method is based on Jacobson's results demonstrating the equivalence between Einstein's gravitational equations and thermodynamics [1]. In this sense, in order to have the holographic bound fulfilled by the universe, we consider the evolution of a universe region whose holographic boundary corresponds to the cosmological apparent horizon. The issue of the correct choice of the cosmological holographic boundary has been a point of some discussion in the literature: such a boundary has been taken to have the size of the Hubble horizon in [19, 25], the size of the apparent horizon in [21], and the size of the particle horizon in [24]. However, in order to get a thermodynamic description of the universe evolution based on the Jacobson formalism, the choice of the cosmological apparent horizon as the holographic boundary [21, 34] has been shown to be more convenient. This conclusion comes from the fact that, at the apparent horizon, the Friedmann equations have been shown to be equivalent to the first law of thermodynamics [47, 48, 49]. This occurs not only in Einstein's gravity, but also in other scenarios such as braneworld models [50, 51, 52, 53], Horava–Lifshitz gravity [54], Lovelock gravity [47, 55] and f(R) gravity [56]. Moreover, it has been shown that the generalized second law of thermodynamics is fulfilled in the scenario of an accelerating expanding universe when one identifies the cosmological holographic boundary with the universe apparent horizon [57, 58, 59]. On the basis of these facts, it can be argued that the apparent horizon must be considered as the physical horizon when dealing with thermodynamic issues, in the context of a universe with any curvature. Following the procedure developed in [34, 38], in order to obtain the LQC Friedmann equations, we have that the FLRW universe is described by the following metric \begin{aligned} ds^{2}= & {} - dt^{2} + a(t)^{2}\Big (\frac{dr^{2}}{1- kr^{2}} + r^{2}d\varOmega ^{2}_{2}\Big ) \nonumber \\= & {} h_{ab}dx^{a}dx^{b} + \tilde{r}^{2}d\varOmega ^{2}_{2}, \end{aligned} (10) where $$h_{ab} = diag (-1, a^{2}/(1- kr^{2}))$$ and $$\tilde{r} = a(t)r$$. 
Moreover, the radius of the apparent horizon is given by \begin{aligned} \tilde{r}_{A} = \frac{1}{\sqrt{H^{2} + k/a^{2}}}. \end{aligned} (11) Now, let us consider that the energy-momentum tensor $$T_{\mu \nu }$$ associated with the matter in the universe takes the perfect-fluid form: \begin{aligned} T_{\mu \nu } = (\rho + p)U_{\mu }U_{\nu } + p g_{\mu \nu }. \end{aligned} (12) From the energy conservation law comes the continuity equation \begin{aligned} \dot{\rho } + 3H(\rho + p) = 0. \end{aligned} (13) At this point, let us introduce the work density W and the energy-supply vector $$\psi _{a}$$: \begin{aligned} W = -\frac{1}{2}T^{ab}h_{ab}\; ; \end{aligned} (14) and \begin{aligned} \psi _{a} = T^{b}_{a}\partial _{b}\tilde{r} + W\partial _{a}\tilde{r}. \end{aligned} (15) In our case, we have \begin{aligned} W = \frac{1}{2}(\rho - p)\;; \end{aligned} (16) and \begin{aligned} \psi _{a} = - \frac{1}{2}(\rho + p)H\tilde{r}dt + \frac{1}{2}(\rho + p)adr. \end{aligned} (17) From the expressions above, we can compute the amount of energy going through the apparent horizon during the time interval dt as [34] \begin{aligned} \delta Q = - A\psi = A(\rho +p)H\tilde{r}_{A}dt, \end{aligned} (18) where $$A = 4\pi \tilde{r}^{2}_{A}$$. The gravitational equations are obtained by the use of the Clausius relation $$\delta Q = TdS$$, where the universe entropy is conjectured to be given by the Bekenstein-Hawking formula \begin{aligned} S = \frac{A}{4}. \end{aligned} (19) On the other hand, the temperature associated with the universe apparent horizon is given by \begin{aligned} T = \frac{1}{2\pi \tilde{r}_{A}}, \end{aligned} (20) which was obtained in references [48, 49] through tunneling methods. From the equations above, we obtain \begin{aligned} \dot{H} - \frac{k}{a^{2}} = -4\pi (\rho + p). \end{aligned} (21) In order to obtain the Friedmann equation above we have used the relation \begin{aligned} \dot{\tilde{r}}_{A} = -H\tilde{r}^{3}_{A}\left( \dot{H} - \frac{k}{a^{2}}\right) . \end{aligned} (22) Now, using the continuity Eq. 13, we can find \begin{aligned} \frac{8\pi }{3}d\rho= & {} d(H^{2}+k/a^{2}) \nonumber \\= & {} -\frac{4\pi }{A^{2}} dA, \end{aligned} (23) where we have used the fact that $$H^{2} + \frac{k}{a^{2}} = \frac{4\pi }{A}$$. The integration of Eq. 23 gives us \begin{aligned} H^2 + \frac{k}{a^2} = \frac{8\pi }{3}\rho , \end{aligned} (24) which is the first Friedmann equation. Now, by time differentiation of the equation above and the use of the continuity Eq. 13, we find \begin{aligned} \dot{H} - \frac{k}{a^{2}} = -4\pi (\rho +p), \end{aligned} (25) which is the second Friedmann equation, i.e., the Raychaudhuri equation. In this way, the complete dynamics of a FLRW universe can be found from thermodynamics. These results have been applied to study a variety of problems in cosmology, in particular those related to the thermodynamic evolution of the universe. The validity of the generalized second law of thermodynamics has been investigated in this context [60, 61]. Moreover, this formalism has been used to study the relation between gravity and thermodynamics in the context of extended theories of gravity such as scalar-tensor gravity and f(R) gravity [62], braneworld scenarios [50, 51, 53, 63], viscous cosmology [64], and Horava–Lifshitz gravity [65]. 
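The algebra from Eq. 18 to Eq. 21 is short but easy to get a sign wrong in, so here is a symbolic check (a sketch, not part of the original paper) that feeds Eqs. 11, 18, 19, 20 and 22 into the Clausius relation and solves for $$\dot{H}$$:

```python
import sympy as sp

H, a, rho = sp.symbols('H a rho', positive=True)
Hdot, k, p = sp.symbols('Hdot k p', real=True)

rA    = 1/sp.sqrt(H**2 + k/a**2)        # apparent-horizon radius, Eq. (11)
rAdot = -H*rA**3*(Hdot - k/a**2)        # Eq. (22)

A  = 4*sp.pi*rA**2
T  = 1/(2*sp.pi*rA)                     # horizon temperature, Eq. (20)

dQ   = A*(rho + p)*H*rA                 # energy flux through the horizon, Eq. (18), per unit time
dSdt = 2*sp.pi*rA*rAdot                 # since S = A/4 = pi*rA**2 (Eq. 19)

Hdot_sol = sp.solve(sp.Eq(dQ, T*dSdt), Hdot)[0]   # impose the Clausius relation dQ = T dS
print(sp.simplify(Hdot_sol - k/a**2))             # -> -4*pi*(p + rho), i.e. Eq. (21)
```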
## 4 Quantum corrected Friedmann equations from LQBHs In this section, we obtain quantum-corrected Friedmann equations for the evolution of the universe by considering that the holographic bound is satisfied near the Big Bang/Big Crunch singularity, where we assume that the entropy associated with the universe apparent horizon is related to its area by the modified entropy-area relation 9. Under the same considerations as in the last section, and following the procedure developed in [34, 38], the Clausius relation $$\delta Q = TdS$$ together with the LQBH entropy-area relation 9 leads to \begin{aligned} \dot{H} - \frac{k}{a^{2}} = \mp 4\pi \sigma \frac{\sqrt{A^{2}-A_{min}^{2}}}{A}(\rho + p). \end{aligned} (26) Now, using the continuity Eq. 13, we can find \begin{aligned} \frac{8\pi }{3}d\rho= & {} \pm \frac{1}{\sigma }\frac{A}{\sqrt{A^{2} - A_{min}^{2}}} d(H^{2}+k/a^{2}) \nonumber \\= & {} \mp \frac{1}{\sigma } \frac{4\pi }{A\sqrt{A^{2} - A_{min}^{2}}} dA, \end{aligned} (27) where, again, we have used the fact that $$H^{2} + \frac{k}{a^{2}} = \frac{4\pi }{A}$$. The integration of Eq. 27 gives us \begin{aligned} \varTheta= & {} \pm \left[ \frac{2A_{min}}{3}\sigma \rho - \alpha \right] = \arccos (A_{min}/A), \end{aligned} (28) where we must have $$- \pi /2 \le \varTheta \le \pi /2$$, since $$A_{min}/A \ge 0$$. Eq. 28 gives us the following Friedmann equation: \begin{aligned} H^2 + \frac{k}{a^2} = \frac{1}{\gamma \sqrt{3}}\cos (\varTheta ), \end{aligned} (29) where we have used $$A_{min} = 4\pi \gamma \sqrt{3}$$ [66]. In the same way, one finds the corresponding Raychaudhuri equation \begin{aligned} \dot{H} - \frac{k}{a^2} = 4\pi \sigma (\rho +p)\sin (\varTheta ). \end{aligned} (30) Eqs. 29 and 30 are the quantum versions of the Friedmann equations. As we can see, the quantum corrections present in these equations, inherited from the LQBH entropy-area relation, imply a quantum effective density term which is a harmonic function of the classical density. A very important consequence of this result is that the quantum-corrected Friedmann equations bring us a scenario where the Big Bang initial singularity does not exist anymore, but is replaced by a bounce at a point where the universe density reaches a critical value, as occurs in LQC. In Eq. 29, the phase constant $$\alpha$$ is set by the initial conditions of the universe and must be chosen so as to ensure that the effective density term is positive definite, in accordance with Eq. 28 and the comment below it. ## 5 Relation with usual semiclassical LQC At this point, we address how the quantum-corrected Friedmann equations found in the last section can be related to the usual semiclassical LQC equations. In order to do this, let us expand Eq. 29 as \begin{aligned} H^{2} + \frac{k}{a^{2}}= & {} \frac{1}{\gamma \sqrt{3}}\cos {(\alpha )} + \frac{8\pi }{3}\sigma \sin {(\alpha )}\rho \nonumber \\&- \frac{32\pi ^2}{9}\gamma \sigma ^{2}\sqrt{3} \rho ^{2}\cos {(\alpha )}, \end{aligned} (31) where we have discarded terms with quantum corrections of order $$\mathcal {O}(\ge A_{min}^{2}$$), as is done in the usual semiclassical LQC [67]. The equation above can be written in the form \begin{aligned} H^{2} + \frac{k}{a^{2}} = \frac{8\pi }{3}\rho _{tot}\left( 1- \frac{\rho _{tot}}{\rho _{c}}\right) , \end{aligned} (32) where $$\rho _{tot} = \rho + \frac{\varLambda }{8\pi }$$, with $$\varLambda$$ a cosmological constant. 
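The expansion leading from Eq. 29 to Eq. 31 can also be checked symbolically. The sketch below expands $$\cos(\varTheta)/\gamma\sqrt{3}$$ to second order in $$\rho$$, using $$\varTheta$$ from Eq. 28 (the '+' branch) together with $$A_{min} = 4\pi\gamma\sqrt{3}$$, and compares the result term by term with Eq. 31:

```python
import sympy as sp

rho, sigma, alpha, gamma = sp.symbols('rho sigma alpha gamma', positive=True)

Amin  = 4*sp.pi*gamma*sp.sqrt(3)
Theta = 2*Amin*sigma*rho/3 - alpha           # Eq. (28), '+' branch
lhs   = sp.cos(Theta)/(gamma*sp.sqrt(3))     # right-hand side of Eq. (29)

series = sp.series(lhs, rho, 0, 3).removeO() # expand to second order in rho
eq31   = (sp.cos(alpha)/(gamma*sp.sqrt(3))
          + sp.Rational(8, 3)*sp.pi*sigma*sp.sin(alpha)*rho
          - sp.Rational(32, 9)*sp.pi**2*gamma*sigma**2*sp.sqrt(3)*sp.cos(alpha)*rho**2)
print(sp.simplify(series - eq31))            # -> 0
```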
The Raychaudhuri equation can also be obtained from the time derivative of Eq. 32, which, by the use of the continuity Eq. 13, gives us \begin{aligned} \dot{H} - \frac{k}{a^{2}} = -4\pi (\rho _{tot}+p_{tot})\left( 1- \frac{2\rho _{tot}}{\rho _{c}}\right) , \end{aligned} (33) where $$p_{tot} = p - \frac{\varLambda }{8\pi }$$. In this way, one can obtain the complete LQC semiclassical dynamics from a holographic prescription by the use of the LQBH entropy-area relation. However, there are still some points to address. The first point is the role of the cosmological constant, which appears in our approach from the quantum gravity corrections to the Bekenstein-Hawking formula. The second point is how to reconcile the universe critical density found in our treatment with that given by usual LQC. In order to discuss these points, we have from Eqs. 31 and 32 that \begin{aligned}&\rho _{c} = \frac{\sqrt{3}}{4\pi \gamma \sigma ^{2}\cos {(\alpha )}}, \end{aligned} (34) \begin{aligned}&1 - \frac{2\tilde{\varLambda }}{\rho _{c}} = \sigma \sin {(\alpha )}, \end{aligned} (35) \begin{aligned}&\tilde{\varLambda }\left( 1-\frac{\tilde{\varLambda }}{\rho _{c}}\right) = \frac{3\cos {(\alpha )}}{8\pi \gamma \sqrt{3}}, \end{aligned} (36) where $$\tilde{\varLambda } = \frac{\varLambda }{8\pi }$$. Moreover, from Eqs. 34, 35 and 36, we find \begin{aligned} \tilde{\varLambda }_{\pm } = \frac{\rho _{c}}{2}\left( 1 \pm \sqrt{1 - 4\frac{\xi _{\pm }}{\rho _{c}}}\right) , \end{aligned} (37) where \begin{aligned} \xi _{\pm } = \frac{\rho _{c}}{4}\left( 1 \pm \sqrt{1 - \frac{3}{4\pi ^{2}\gamma ^{2}\rho _{c}^{2}}} \right) . \end{aligned} (38) We also obtain \begin{aligned} \sigma ^{2} = \frac{3}{32\pi ^{2}\gamma ^{2}\rho _{c}}\left[ \tilde{\varLambda }\left( 1-\frac{\tilde{\varLambda }}{\rho _{c}}\right) \right] ^{-1}, \end{aligned} (39) and \begin{aligned} \cos {(\alpha )}^{2} = \frac{64\pi ^{2}\gamma ^{2}}{3}\left[ \tilde{\varLambda }\left( 1-\frac{\tilde{\varLambda }}{\rho _{c}}\right) \right] ^{2}. \end{aligned} (40) In Eq. 38, $$\xi _{-}$$ is the only consistent choice in order to have a real-valued solution to Eq. 37. On the other hand, in Eq. 37, $$\tilde{\varLambda }_{-}$$ is the only choice consistent with the agreement between Eq. 40 and the condition that $$-1 \le \cos {(\alpha )} \le 1$$. In this way, we obtain: \begin{aligned} \varLambda = 8\pi \tilde{\varLambda } = 4\pi \rho _{c}\left[ 1- \left( 1- \frac{3}{4\pi ^{2}\gamma ^{2}\rho _{c}^2}\right) ^{\frac{1}{4}}\right] . \end{aligned} (41) Consequently, the value of the cosmological constant depends on the initial conditions of the universe, particularly on the value of the universe critical density at the bounce. In the limit where $$\rho _{c} \gg 1$$, $$\varLambda \sim \frac{3}{4\pi \gamma ^2\rho _{c}}$$, so that in the infrared limit ($$\rho _{c} \rightarrow \infty$$), $$\varLambda \rightarrow 0$$. (A quick numerical look at Eq. 41 is sketched below.) Comparing the appearance of a cosmological constant in our treatment with usual LQC, it has been demonstrated that LQC accommodates a cosmological constant $$\varLambda$$, both in the case where $$\varLambda$$ is positive [68, 69] and where it is negative [70]. However, LQC does not offer any theoretical result for the value of the cosmological constant, in the sense that it does not arise as the result of a more fundamental calculation [71, 72, 73]. 
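As flagged above, Eq. 41 and the reality bound that follows from it (Eq. 43 below) are straightforward to evaluate numerically. In this sketch the Barbero-Immirzi value $$\gamma \approx 0.2375$$ is an assumed input, not something fixed by the text; the last column checks the large-$$\rho_c$$ behavior $$\varLambda \sim 3/4\pi\gamma^2\rho_c$$:

```python
import numpy as np

gamma = 0.2375                                     # assumed Barbero-Immirzi value

def Lambda(rho_c):
    """Cosmological constant from Eq. (41), Planck units."""
    return 4*np.pi*rho_c*(1 - (1 - 3/(4*np.pi**2*gamma**2*rho_c**2))**0.25)

print(np.sqrt(3)/(2*np.pi*gamma))                  # reality bound on rho_c: ~1.16, cf. Eq. (43)

for rho_c in (2.0, 10.0, 1e3, 1e6):
    print(rho_c, Lambda(rho_c), 3/(4*np.pi*gamma**2*rho_c))   # exact vs asymptotic value
```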
Instead, the point of view in LQC (and, more generally, in LQG) has been that the cosmological constant is a constant of nature, in the same sense as Newton's gravitational constant, Planck's constant, or the electron charge. From this standpoint, $$\varLambda$$ should be measured through experiments and/or observations. As a consequence, LQC does not address what is often called the "cosmological constant problem", which asks for an explanation from fundamental physics of why the observed value of $$\varLambda$$ is so small compared to that provided by the Standard Model, which predicts that $$\varLambda$$ should be Planckian. This persistent problem is considered one of the most puzzling in physics. (For a recent review of the cosmological constant problem, see [74].) The results found in the present article, on the other hand, tie the cosmological constant to the universe density at the bounce. Concerning the value of the universe critical density, in the context of LQC it is given by [67, 75, 76, 77, 78, 79, 80] \begin{aligned} \rho _{c} = \frac{3}{8\pi \gamma ^{2}\varDelta }, \end{aligned} (42) where $$\varDelta$$ is an area gap. Usually one assumes that the area gap above is given by the LQG one, $$\varDelta _{LQG} = 4\pi \gamma \sqrt{3}$$, and calculates the value of the universe critical density to be, in Planck units, $$\rho _{c} \approx 0.41$$ [79, 80]. The same numerical result for $$\rho _{c}$$ is obtained in the presence of a cosmological constant [68, 69, 70]. However, this assumption cannot be adopted in the present formalism since, in that case, the cosmological constant given by Eq. 41 would not be real valued. In fact, in order to have a real-valued cosmological constant in Eq. 41, one must have \begin{aligned} \rho _{c} \ge \frac{\sqrt{3}}{2\pi \gamma } \approx 1.16 . \end{aligned} (43) This could be a bone of contention between our treatment and usual LQC. However, it has been pointed out that the choice of the full LQG area gap to calculate $$\rho _{c}$$ is naive and lacks further physical justification. Consequently, other values for the universe critical density can be conceived. In fact, from the arguments presented in [81, 82, 83], the value of $$\rho _{c}$$ would be fixed by observations. For a more general expression for $$\rho _{c}$$, see [84]. If that is the case, the results of the present work could give us a way to fix the value of $$\rho _{c}$$ by the use of the observational results for $$\varLambda$$. In this way, the energy scale of the bounce would be super-Planckian, since concordance with the observed value of the cosmological constant requires $$\rho _{c} \sim 10^{120}$$. ## 6 Summary and conclusions Loop quantum gravity is a proposal for the description of spacetime behavior in situations where its atomic character becomes relevant. Among these situations, the nature of our universe near the Big Bang singularity is described by LQC. Near the Big Bang, LQC faces some important questions about the thermodynamical evolution of the universe for which the holographic principle must be fundamental. Among these questions are the origin of the universe entropy and the arrow of time [11]. However, an investigation of LQC from the holographic point of view was still lacking. In this way, in the present work, we have shown a way to obtain the LQC semiclassical equations from the holographic principle. 
In order to do this, we have considered that the entropy of the universe is given by the LQBH entropy-area relation, with the holographic boundary chosen as the universe apparent horizon so that the first law of thermodynamics holds. Based on these assumptions, by the use of the Jacobson formalism [1], adapted to cosmological scenarios by Cai et al. [34], the dynamical LQC equations can be obtained. The compatibility with the results of standard LQC might suggest that the black hole entropy counting performed in the loop black hole scenario is on a more solid footing, in cosmological contexts, than other counting procedures such as those given in approaches like [39]. Since Eq. 29 is an equation of state for the cosmological evolution of spacetime, semiclassical LQC would appear as a thermodynamic effect whose origin lies in the atomic structure of spacetime described by LQG. Among our results, a positive cosmological constant has been obtained. The value of the cosmological constant depends on the universe initial conditions, especially on the choice of the universe critical density at the bounce. In order to have a real-valued cosmological constant, the naive assumption that the universe critical density is determined by the LQG area gap cannot be maintained in the present formalism. On the other hand, there are suggestions that $$\rho _{c}$$ should be fixed by observations. In this case, our results give a way to fix the value of $$\rho _{c}$$ by connecting it with the observed value of the cosmological constant; in this way, $$\rho _{c} \sim 10^{120}$$. The problem of why the cosmological constant has a very small value in our universe remains open. The results found in this paper can pave the way for a broad range of investigations of important issues concerning the thermodynamical behavior of our universe as described by LQC. Besides the aforementioned problems of the origin of the universe entropy and the arrow of time, non-equilibrium regimes can also be investigated in this context.

## Notes

### Acknowledgements

The author would like to thank the anonymous referee for the useful discussions and comments about the paper.

## References

1. T. Jacobson, Phys. Rev. Lett. 75, 1260 (1995)
2. C. Rovelli, Quantum Gravity (Cambridge University Press, Cambridge, 2004)
3. A. Ashtekar, J. Lewandowski, Class. Quant. Grav. 21, R53 (2004). arXiv:gr-qc/0404018
4. T. Thiemann, Lect. Notes Phys. 721, 185 (2007). arXiv:hep-th/0608210
5. T. Thiemann. arXiv:gr-qc/0110034
6. T. Thiemann, Lect. Notes Phys. 631, 41 (2003). arXiv:gr-qc/0210094
7. M. Bojowald, Phys. Rev. Lett. 86, 5227 (2001)
8. A. Barrau, T. Cailleteau, J. Grain, J. Mielczarek, Class. Quant. Grav. 31, 053001 (2014)
9. A. Ashtekar, A. Barrau. arXiv:1504.07559 [gr-qc]
10. M. Bojowald, Nat. Phys. 3(8), 523 (2007)
11. M. Bojowald, R. Tavakol, Phys. Rev. D 78, 023515 (2008)
12. J.D. Bekenstein, Phys. Rev. D 7, 2333 (1973)
13. G. 't Hooft, Salamfest, 0284-296 (1993). arXiv:gr-qc/9310026
14. L. Susskind, J. Math. Phys. 36, 6377 (1995)
15. R. Bousso, Rev. Mod. Phys. 74, 825 (2002)
16. L. Susskind, Nat. Phys. 2(10), 665 (2006)
17. N. Afshordi, C. Coriano, L. Delle Rose, E. Gould, K. Skenderis, Phys. Rev. Lett. 118(4), 041301 (2017). arXiv:1607.04878 [astro-ph.CO]
18. W. Fischler, L. Susskind. arXiv:hep-th/9806039
19. G. Veneziano, Phys. Lett. B 454, 22 (1999). arXiv:hep-th/9902126
20. R. Easther, D.A. Lowe, Phys. Rev. Lett. 82, 4967 (1999). arXiv:hep-th/9902088
21. D. Bak, S.J. Rey, Class. Quant. Grav. 17, L83 (2000). arXiv:hep-th/9902173
22. R. Bousso, JHEP 9907, 004 (1999). arXiv:hep-th/9905177
23. R. Bousso, JHEP 9906, 028 (1999). arXiv:hep-th/9906022
24. T. Banks, W. Fischler. arXiv:hep-th/0111142
25. N. Kaloper, A.D. Linde, Phys. Rev. D 60, 103509 (1999). arXiv:hep-th/9904120
26. L. Randall, R. Sundrum, Phys. Rev. Lett. 83, 4690 (1999). arXiv:hep-th/9906064
27. S.S. Gubser, Phys. Rev. D 63, 084017 (2001). arXiv:hep-th/9912001
28. I. Savonije, E.P. Verlinde, Phys. Lett. B 507, 305 (2001). arXiv:hep-th/0102042
29. S. Mukohyama, Phys. Lett. B 473, 241 (2000). arXiv:hep-th/9911165
30. E.E. Flanagan, S.H.H. Tye, I. Wasserman, Phys. Rev. D 62, 044039 (2000). arXiv:hep-ph/9910498
31. P. Binetruy, C. Deffayet, U. Ellwanger, D. Langlois, Phys. Lett. B 477, 285 (2000). arXiv:hep-th/9910219
32. D. Ida, JHEP 0009, 014 (2000). arXiv:gr-qc/9912002
33. V. Faraoni, Lect. Notes Phys. 907, 1 (2015)
34. R.G. Cai, S.P. Kim, JHEP 0502, 050 (2005)
35. T. Padmanabhan, Phys. Rev. D 81, 124040 (2010)
36. C. Eling, R. Guedens, T. Jacobson, Phys. Rev. Lett. 96, 121301 (2006). arXiv:gr-qc/0602001
37. T. Padmanabhan, Rept. Prog. Phys. 73, 046901 (2010)
38. R.G. Cai, L.M. Cao, Y.P. Hu, JHEP 0808, 090 (2008)
39. K.A. Meissner, Class. Quant. Grav. 21, 5245 (2004)
40. L. Modesto, Int. J. Theor. Phys. 49, 1649 (2010). arXiv:0811.2196 [gr-qc]
41. L. Modesto, I. Premont-Schwarz, Phys. Rev. D 80, 064041 (2009)
42. B. Carr, L. Modesto, I. Premont-Schwarz. arXiv:1107.0708 [gr-qc]
43. S. Hossenfelder, L. Modesto, I. Premont-Schwarz. arXiv:1202.0412 [gr-qc]
44. C.A.S. Silva, F.A. Brito, Phys. Lett. B 725(45), 456 (2013)
45. F. Caravelli, L. Modesto, Class. Quant. Grav. 27, 245022 (2010)
46. E. Alesci, L. Modesto, Gen. Rel. Grav. 46, 1656 (2014)
47. Y. Gong, A. Wang, Phys. Rev. Lett. 99, 211301 (2007). arXiv:0704.0793 [hep-th]
48. R.G. Cai, L.M. Cao, Phys. Rev. D 75, 064008 (2007). arXiv:gr-qc/0611071
49. R.G. Cai, L.M. Cao, Y.P. Hu, Class. Quant. Grav. 26, 155018 (2009)
50. R.G. Cai, L.M. Cao, Nucl. Phys. B 785, 135 (2007). arXiv:hep-th/0612144
51. R.G. Cai, Prog. Theor. Phys. Suppl. 172, 100 (2008). arXiv:0712.2142 [hep-th]
52. A. Sheykhi, B. Wang, R.G. Cai, Nucl. Phys. B 779, 1 (2007). arXiv:hep-th/0701198
53. A. Sheykhi, B. Wang, R.G. Cai, Phys. Rev. D 76, 023515 (2007). arXiv:hep-th/0701261
54. R.G. Cai, L.M. Cao, N. Ohta, Phys. Lett. B 679, 504 (2009). arXiv:0905.0751 [hep-th]
55. R.G. Cai, L.M. Cao, Y.P. Hu, S.P. Kim, Phys. Rev. D 78, 124012 (2008). arXiv:0810.2610 [hep-th]
56. Y. Zhang, Y. Gong, Z.H. Zhu, Int. J. Mod. Phys. D 21, 1250034 (2012)
57. B. Wang, Y. Gong, E. Abdalla, Phys. Rev. D 74, 083520 (2006). arXiv:gr-qc/0511051
58. J. Zhou, B. Wang, Y. Gong, E. Abdalla, Phys. Lett. B 652, 86 (2007). arXiv:0705.1264 [gr-qc]
59. A. Sheykhi, Class. Quant. Grav. 27, 025007 (2010). arXiv:0910.0510 [hep-th]
60. S.F. Wu, B. Wang, G.H. Yang, P.M. Zhang, Class. Quant. Grav. 25, 235018 (2008). arXiv:0801.2688 [hep-th]
61. N. Radicella, D. Pavon, Phys. Lett. B 691, 121 (2010). arXiv:1006.3745 [gr-qc]
62. M. Akbar, R.G. Cai, Phys. Lett. B 635, 7 (2006). arXiv:hep-th/0602156
63. X.H. Ge, Phys. Lett. B 651, 49 (2007). arXiv:hep-th/0703253
64. M. Akbar, Chin. Phys. Lett. 25, 4199 (2008). arXiv:0808.0169 [gr-qc]
65. A. Sheykhi, Phys. Rev. D 87(2), 024022 (2013). arXiv:1301.3776 [hep-th]
66.
67. V. Taveras, Phys. Rev. D 78, 064072 (2008). arXiv:0807.3325 [gr-qc]
68. W. Kaminski, T. Pawlowski, Phys. Rev. D 81, 024014 (2010). arXiv:0912.0162 [gr-qc]
69. T. Pawlowski, A. Ashtekar, Phys. Rev. D 85, 064001 (2012). arXiv:1112.0360 [gr-qc]
70. E. Bentivegna, T. Pawlowski, Phys. Rev. D 77, 124025 (2008). arXiv:0803.4446 [gr-qc]
71. E. Bianchi, C. Rovelli. arXiv:1002.3966 [astro-ph.CO]
72. E. Bianchi, C. Rovelli, R. Kolb, Nature 466, 321 (2010)
73. E. Wilson-Ewing, Comptes Rendus Phys. 18, 207 (2017). arXiv:1612.04551 [gr-qc]
74.
75. A. Ashtekar, T. Pawlowski, P. Singh, Phys. Rev. D 73, 124038 (2006). arXiv:gr-qc/0604013
76. A. Ashtekar, T. Pawlowski, P. Singh, Phys. Rev. D 74, 084003 (2006). arXiv:gr-qc/0607039
77. A. Ashtekar, T. Pawlowski, P. Singh, Phys. Rev. Lett. 96, 141301 (2006). arXiv:gr-qc/0602086
78. A. Ashtekar, A. Corichi, P. Singh, Phys. Rev. D 77, 024046 (2008). arXiv:0710.3565 [gr-qc]
79. D.W. Chiou, K. Vandersloot, Phys. Rev. D 76, 084015 (2007). arXiv:0707.2548 [gr-qc]
80. A. Ashtekar, Gen. Rel. Grav. 41, 707 (2009). arXiv:0812.0177 [gr-qc]
81. P. Malkiewicz, W. Piechocki, Phys. Rev. D 80, 063506 (2009). arXiv:0903.4352 [gr-qc]
82. M. Bojowald, Class. Quant. Grav. 26, 075020 (2009). arXiv:0811.4129 [gr-qc]
83. P. Dzierzak, J. Jezierski, P. Malkiewicz, W. Piechocki, Acta Phys. Polon. B 41, 717 (2010). arXiv:0810.3172 [gr-qc]
84. M. Bojowald, Gen. Rel. Grav. 40, 2659 (2008). arXiv:0801.4001 [gr-qc]
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9294913411140442, "perplexity": 1174.070466829514}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743110.55/warc/CC-MAIN-20181116173611-20181116195611-00145.warc.gz"}
https://www.physicsforums.com/threads/momentums-as-vectors.108715/
# Momentums as vectors

1. Jan 30, 2006

### boris16

hi

If a person on a train moving at a speed of 100 km per hour walks in the opposite direction at a speed of 10 km per hour, then the person's speed is 90 km per hour. Adding or subtracting velocity vectors makes perfect sense. But adding or subtracting momenta doesn't, if it means adding or subtracting two momenta that each belong to two DIFFERENT objects in an isolated system.

If in an isolated system two balls are approaching each other from opposite directions (friction is negligible), the one on the left with momentum 160 and the one on the right with momentum 100, then since momentum is a vector quantity the resultant momentum will be 60 and its direction will be to the left.

I know the two balls together represent a system, but at the end of the day they are still two distinct balls, each with its own momentum, so to me subtracting the two momenta belonging to DIFFERENT objects is like ... I don't know ... apples and oranges. I know that when they collide their individual momenta will change but the total momentum will stay the same!

So is perhaps the main reason why we add or subtract momenta of different objects (like in the above example) that the system has a net momentum of 60, so that even in a case where the objects at collision experience an impulse that equals the smaller of the two momenta (that would be the momentum of the ball going left -> M=100), the remaining momentum will still be 60?

thank you

2. Jan 30, 2006

### Hootenanny

Staff Emeritus

It depends what you call the system. The positive and negative sign is a crude way of indicating the direction of the vector. Summation of the vectors gives the net or resultant vector. Finding the magnitude would be something completely different.

3. Jan 30, 2006

### krab

This suggests that you don't really know what momentum is. If a system consists of two objects, one with momentum x and the other with momentum y, then the total momentum is x+y. Simple as that. I suspect that you are confusing momentum with energy. If y=-x, then the two objects can collide and as a result be at rest (say they are really sticky). Momentum is conserved. But the energy is a different matter; energy is not a vector, so the two objects had the same energy of motion to start with, and this energy must re-appear somewhere in the system. For example, the two objects stuck together could end at a higher temperature.
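A small numeric restatement of the example above makes krab's distinction concrete. The momenta below are the thread's (leftward taken as negative); the masses are invented here just to finish the sticky-collision arithmetic:

```python
# 1-D sign convention: rightward positive
p_a, p_b = -160.0, +100.0            # kg*m/s, the two balls' momenta from the example
p_total = p_a + p_b
print(p_total)                        # -60.0 -> magnitude 60, pointing left

m_a, m_b = 8.0, 5.0                   # assumed masses (not given in the thread)
v_final = p_total/(m_a + m_b)         # perfectly inelastic: the balls stick together
ke_before = p_a**2/(2*m_a) + p_b**2/(2*m_b)
ke_after  = p_total**2/(2*(m_a + m_b))
print(v_final, ke_before, ke_after)   # momentum is conserved; kinetic energy is not
```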
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8773351311683655, "perplexity": 449.8175064124875}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471983026851.98/warc/CC-MAIN-20160823201026-00140-ip-10-153-172-175.ec2.internal.warc.gz"}
http://ngri.org.in/rh4.html
## A new approach for enhancement of Magnetotelluric resolution

This study proposes a quantitative criterion for the optimal evaluation (target) frequency separation to enhance magnetotelluric (MT) resolution. The study is based on the propagation geometry of a diffusive electromagnetic (EM) wave through the earth (Figure 1). In a layered earth, this propagation geometry takes the shape of a distorted hemisphere (Figure 1). The difference between two distorted hemispheres (corresponding to two different target frequencies) is taken to calculate the minimum resolvable layer thickness at a particular depth. This difference between the two distorted hemispheres is calculated on the basis of the skin-depth principle. The difference expression is then written as a function of frequency so that differentiation can be applied to obtain the maximum evaluation (target) frequency separation for resolution enhancement of that layer. Finally, the study concludes that Δf ≤ 0.414 f_n, where Δf is the maximum frequency separation and f_n is the minimum frequency used in a particular MT study. When the effectiveness of the proposed criterion is examined with synthetic data, it is observed that the new idea can improve the resolution of subsurface objects (Figure 2).

Figure 1: The propagation geometry of a diffused electromagnetic wave through a layered earth and its distortion at the boundaries.

Figure 2: Synthetic test to examine the effectiveness of evaluation (target) frequency separation on magnetotelluric resolution.

For further details: Ujjal K. Borah and Prasanta K. Patro, 2017, Annals of Geophysics, vol 60 no 3, http://dx.doi.org/10.4401/ag-7056 http://www.annalsofgeophysics.eu/index.php/annals/issue/view/521
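The criterion is easy to apply in practice. The sketch below is illustrative only (the survey frequency and half-space resistivity are made-up inputs); it uses the standard MT skin-depth approximation, delta ~ 503*sqrt(rho/f) metres, to show how the frequency-separation criterion translates into a difference in penetration depth:

```python
import numpy as np

def skin_depth_m(rho_ohm_m, f_hz):
    """Standard magnetotelluric skin-depth approximation, in metres."""
    return 503.0*np.sqrt(rho_ohm_m/f_hz)

f_n = 0.01                       # Hz: assumed minimum frequency of a survey
df_max = 0.414*f_n               # the proposed criterion: Delta-f <= 0.414*f_n
print(df_max)                    # 0.00414 Hz

rho = 100.0                      # ohm-m: assumed half-space resistivity
d1 = skin_depth_m(rho, f_n)
d2 = skin_depth_m(rho, f_n + df_max)
print(d1, d2, d1 - d2)           # penetration depths of the two target frequencies
```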
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8446040153503418, "perplexity": 1785.9016830001283}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948572676.65/warc/CC-MAIN-20171215133912-20171215155912-00590.warc.gz"}
http://www.wias-berlin.de/publications/wias-publ/run.jsp?template=abstract&type=Preprint&year=1994&number=100
WIAS Preprint No. 100, (1994)

# Effects of distributed delays on the stability of structures under seismic excitation and multiplicative noise

Authors

• Karmeshu, Prof.
• Schurz, Henri

2010 Mathematics Subject Classification

• 74H55 74H50 26A15

Keywords

• Stochastic stability, weak and strong time delay, random oscillations, seismic excitation, stochastic differential equations, numerical methods, implicit Euler, Mil'shtein and Balanced methods

DOI

10.20347/WIAS.PREPRINT.100

Abstract

The effects of seismic excitation and multiplicative noise (arising from environmental fluctuations) on the stability of a single-degree-of-freedom system with distributed delays are investigated. The system is modelled in the form of a stochastic integro-differential equation interpreted in the Stratonovich sense. Both deterministic stability and stochastic moment stability are examined for the system in the absence of seismic excitation. The model is also extended to incorporate effects of symmetric nonlinearity. The simulation of the stochastic linear and nonlinear systems is carried out by resorting to numerical techniques for the solution of stochastic differential equations.

Appeared in

• SADHANA, 20 (1995), pp. 451--474
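The abstract does not reproduce the model equations, so the sketch below is only a generic stand-in: an Euler-Maruyama simulation of a single-degree-of-freedom oscillator with multiplicative stiffness noise and an exponentially distributed delay, with all parameter values invented. Note that Euler-Maruyama converges to the Itô interpretation; a Stratonovich reading, as in the preprint, would need a drift correction or a scheme such as Heun's.

```python
import numpy as np

# invented parameters for illustration
omega, zeta = 2.0, 0.05      # natural frequency, damping ratio
noise       = 0.3            # multiplicative-noise intensity on the stiffness
beta, lam   = 0.5, 1.0       # weight and decay rate of an exponential delay kernel
dt, n       = 1e-3, 50_000

rng = np.random.default_rng(0)
x, v, z = 1.0, 0.0, 0.0      # displacement, velocity, memory variable
for _ in range(n):
    dW = rng.normal(0.0, np.sqrt(dt))
    # z(t) = int_0^t exp(-lam*(t-s)) x(s) ds, so dz/dt = x - lam*z
    drift = -2*zeta*omega*v - omega**2*x - beta*z
    x, v, z = (x + v*dt,
               v + drift*dt + noise*omega**2*x*dW,   # state-dependent (multiplicative) noise
               z + (x - lam*z)*dt)
print(x, v)
```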
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8934956789016724, "perplexity": 1551.1376436859528}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668716.22/warc/CC-MAIN-20191115222436-20191116010436-00113.warc.gz"}
http://spie.org/Publications/Proceedings/Paper/10.1117/12.283957
### Proceedings Paper

Generalized linear feature detection of weak targets in spectrally mixed clutter

Author(s): Xiaoli Yu; Lawrence E. Hoff; Scott G. Beaven; Edwin M. Winter; John A. Antoniades; Irving S. Reed

Paper Abstract

The ability to detect weak targets of low contrast or signal-to-noise ratio (SNR) is improved by a fusion of data in space and wavelength from multispectral/hyperspectral sensors. It has been demonstrated previously that the correlation of the clutter between multiband thermal infrared images plays an important role in allowing the data collected in one spectral band to be used to cancel the background clutter in another spectral band, resulting in increased SNR. However, the correlation between bands is reduced when the spectrum observed in each pixel is derived from a mixture of several different materials, each with its own spectral characteristics. In order to handle the identification of objects in this complex (mixed) clutter, a class of algorithms has been developed that models the pixels as a linear combination of pure substances and then unmixes the spectra to identify the pixel constituents. In this paper a linear unmixing algorithm is incorporated with a statistical hypothesis test for detecting a known target spectral feature that obeys a linear mixing model in a background of noise. The generalized linear feature detector utilizes a maximum likelihood ratio approach to detect and estimate the presence and concentration of one or more specific objects. A performance evaluation of the linear unmixing and maximum likelihood detector is given by comparing the results to the spectral anomaly detection algorithm previously developed by Reed and Yu.

Paper Details

Date Published: 29 October 1997
PDF: 9 pages
Proc. SPIE 3163, Signal and Data Processing of Small Targets 1997, (29 October 1997); doi: 10.1117/12.283957

Author Affiliations:
Xiaoli Yu, Science Applications International Corp. (United States)
Lawrence E. Hoff, Naval Command, Control and Ocean Surveillance Ctr. (United States)
Scott G. Beaven, Naval Command, Control and Ocean Surveillance Ctr. (United States)
Edwin M. Winter, Technical Research Associates, Inc. (United States)
John A. Antoniades, Naval Research Lab. (United States)
Irving S. Reed, Univ. of Southern California (United States)

Published in SPIE Proceedings Vol. 3163: Signal and Data Processing of Small Targets 1997
Oliver E. Drummond, Editor(s)
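The unmixing step the abstract refers to can be sketched with ordinary least squares. This is a toy stand-in: the endmember spectra and fractions below are invented, and the paper's actual detector is a generalized likelihood ratio test built on top of such a mixing model rather than this bare inversion:

```python
import numpy as np

# toy endmember matrix: 4 spectral bands x 2 materials (invented numbers)
E = np.array([[0.9, 0.1],
              [0.5, 0.4],
              [0.2, 0.8],
              [0.1, 0.6]])
true_fractions = np.array([0.7, 0.3])

rng = np.random.default_rng(1)
pixel = E @ true_fractions + 0.01*rng.normal(size=4)   # linear mixing model plus noise

# unconstrained least-squares unmixing: minimize ||E f - pixel||
f_hat, *_ = np.linalg.lstsq(E, pixel, rcond=None)
print(f_hat)                                           # ~ [0.7, 0.3]
```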
https://www.physicsforums.com/threads/find-torque-required-to-lift-a-mass.273727/
Find torque required to lift a mass

1. Nov 21, 2008

1. The problem statement, all variables and given/known data

A crane contains a hollow drum of mass 150 kg and radius 0.8 m that is driven by an engine to wind up a cable. The cable passes over a solid cylindrical 30 kg pulley 0.3 m in radius to lift a 2000 N weight. How much torque must the engine apply to the drum to lift the weight with an acceleration of 1 m/s^2?

2. Relevant equations

As I am currently dealing only with 1D rotational motion, the equations are

$$\tau=FR\sin\theta$$ where I suppose theta is 90 degrees

$$\tau=I\alpha$$

$$a_{t}=\alpha R$$

$$I_{pulley} = \frac{1}{2}M_{p}R_{p}^2$$

$$I_{drum} = M_{d}R_{d}^2$$

$$F=ma_{t}$$

3. The attempt at a solution

The difficulty I have with this question is the pulley and drum. I'm not sure how to factor both of these in, and not sure where the mass comes into the overall scheme of things either, though I'm sure it's got something to do with it.

Now I know

$$I_{pulley} = \frac{1}{2} \times 30 \times 0.3^2 = 1.35\ kg\,m^2$$

$$I_{drum} = 150 \times 0.8^2 = 96\ kg\,m^2$$

$$m = \frac{2000}{9.8} = 204.1\ kg$$

The tension in the rope between the pulley and the mass is given by $$T-mg=ma$$ where $$a = 1\ ms^{-2}$$, so $$T = mg+ma = 2204.1\ N$$

This is where things become unclear; after this I am just guessing:

$$\tau_{pulley} = T R_{pulley} = 2204.1 \times 0.3 = 661.23\ Nm$$

Assuming the tension in the rope is the same between drum and pulley, then

$$\tau_{drum} = T R_{drum} = 2204.1 \times 0.8 = 1763.28\ Nm$$

or perhaps

$$\tau_{pulley} = I_{pulley} \alpha = I_{pulley} \frac{a_{t}}{R_p}=1.35 \times \frac{1}{0.3}=4.5\ Nm$$

I am out of ideas after this point. Answer in back of book = 1900 Nm.

Thanks,

2. Nov 21, 2008 — Redbelly98, Staff Emeritus

More than one torque is acting on the drum. So think in terms of net torque and angular acceleration, just like you do with force and linear acceleration.

Actually, it isn't [the same on both sides of the pulley]. The rope exerts a torque on the pulley due to friction. This is the torque that causes the pulley rotation to accelerate. Likewise (Newton's 3rd Law), the pulley exerts a force on the rope, changing the tension value before and after the pulley.

3. Nov 21, 2008

Still not sure how to go about it. The only guess I could make is that (assuming that the entire rope is accelerating at the same rate of $$1\ ms^{-2}$$)

$$\tau_{net} = \tau_{drum} + \tau_{pulley}$$

we can do

$$\tau_{net} = M_{drum} R_{drum}^2 \alpha_{drum} + \frac{1}{2}M_{pulley}R_{pulley}^2 \alpha_{pulley} = 124.5\ Nm$$

where $$\alpha_{drum}=\frac{1}{R_{drum}}$$ and $$\alpha_{pulley}=\frac{1}{R_{pulley}}$$

or

$$\tau_{net} = M_{drum} R_{drum}^2 \alpha_{drum} + TR_{pulley} = 781\ Nm$$

or even

$$\tau_{net} = T R_{drum} + \frac{1}{2}M_{pulley}R_{pulley}^2 \alpha_{pulley} = 1767\ Nm$$

None of the above arrives at the right answer, but they are only guesses. My first guess makes the most sense to me; however, it is wrong.

By the way, does anyone know how to get rid of that white ... out of the tex equations?

4. Nov 22, 2008 — Redbelly98, Staff Emeritus

I'm not sure which object this $$\tau_{net}$$ refers to? It almost sounds like it's for the rope, but that doesn't make any sense since the rope is not rotating.

We are trying to set up a

. . torque_net = I * alpha

equation for the drum, to answer the question. To do that, we need the tension in the rope at the drum. Since the rope provides a net torque on the pulley, we'll need to set up

. . torque_net = I * alpha

for the pulley. So, draw a free-body diagram for the pulley.
Hint: think of the rope as exerting two forces, in different directions, on the pulley. One force is from the weight-to-pulley section of rope, the second force is from the pulley-to-drum section of rope. They have different tensions; solve the equation for the pulley-to-drum tension.

As for the white space in the tex equations: it's a bug, and unfortunately there's nothing members like us can do to get rid of it. The forum administrators are aware of the problem, but I guess it's not an easy one to fix.

Last edited: Nov 22, 2008

5. Nov 23, 2008

OK, here goes another try.

Let $$T_{1}$$ be the tension in the rope between the drum and the pulley, and let $$T_{2}$$ be the tension in the rope between the pulley and the mass.

Using Mr. Newton on the 2000 N mass:

$$T_{2} - 2000 = ma = 204.1 \times 1$$

$$T_{2} = 2204.1$$

Now for the net torque on the pulley:

$$\tau_{net} = (T_{1} - T_{2}) R_{p} = I_{p} \alpha_{p}$$

so

$$T_{1} = \frac{I_{p}\alpha_{p}}{R_{p}} + T_{2}= \frac{\frac{1}{2}M_{p} R_{p}^2\alpha_{p}}{R_{p}} + T_{2}=\frac{1}{2}M_{p}R_{p}\alpha_{p}+T_{2}$$

Now I shall take $$a_{t} = 1\ ms^{-2}$$, so $$\alpha_{p} = \frac{1}{R_{p}}$$, which gives

$$T_{1} = \frac{1}{2}M_{p} + T_{2}=15+2204.1=2219.1\ N$$

Now the net torque on the drum is

$$\tau_{net} = T_{1} R_{d} = 2219.1\ N \times 0.8\ m = 1775\ Nm$$

which is wrong. Not sure where the $$I\alpha$$ thing comes into it for the net torque of the drum. I suppose it must pop in somewhere, because my answer doesn't depend on the mass of the drum.

6. Nov 23, 2008 — Redbelly98, Staff Emeritus

You're very close! In other words, there's an engine and it contributes to the net torque on the drum. Set up a torque_net = I * alpha equation for the drum, similar to what you did for the pulley.

7. Nov 24, 2008

OK, I think I have finally got it. Here's a quick sketch of what I did:

$$T_{2} - mg = ma$$

Net torque of pulley:

$$\tau_{net} = (T_{1} - T_{2}) R_{p} = \frac{1}{2}M_{p}R_{p}^2\alpha_{p}$$

$$T_{1} = \frac{1}{2}M_{p}+T_{2}$$

Net torque of drum:

$$\tau_{net} = (F_{engine} - T_{1}) R_{d} = M_{d}R_{d}^2\alpha_{d}$$

$$F_{engine} = M_{d}+T_{1}$$

Torque required by engine:

$$\tau_{engine} = F_{engine} R_{drum}$$

Greatly appreciated, would never have solved it without your help.

8. Nov 24, 2008 — Redbelly98, Staff Emeritus

Looks pretty good. There are a couple of "R" terms missing; I don't know if you just forgot about them in your post or they are missing from your written solution as well:

½ Mp → ½ Mp Rp

Similarly,

Md → Md Rd

Usually, checking the units (and explicitly including the 1/s^2 angular acceleration) will catch little errors like that.

You're welcome!
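For readers who want to see the numbers come out, here is a small numerical check of the thread's final method — a sketch of my own, not part of the original thread; the variable names are mine. It chains the three equations (Newton's second law on the mass, torque balance on the pulley, torque balance on the drum) and reproduces the book's answer.

```python
# Numbers from the thread: g = 9.8 m/s^2, a = 1 m/s^2
g, a = 9.8, 1.0
W = 2000.0                      # weight being lifted, N
M_p, R_p = 30.0, 0.3            # solid cylindrical pulley
M_d, R_d = 150.0, 0.8           # hollow drum, I = M*R^2

m = W / g                       # hanging mass, kg
T2 = m * (g + a)                # tension between pulley and mass
T1 = T2 + 0.5 * M_p * a         # pulley: (T1 - T2)*R_p = (1/2)M_p*R_p^2*(a/R_p)
tau = T1 * R_d + M_d * R_d * a  # drum: tau - T1*R_d = M_d*R_d^2*(a/R_d)

print(round(tau, 1))            # ~1895.3 N m
```

The ~1895 N·m result rounds to the textbook's 1900 N·m.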
http://mymathforum.com/differential-equations/342949-method-undet-coefficients-question.html
My Math Forum — Method Of Undet. Coefficients question (Differential Equations: Ordinary and Partial Differential Equations)

November 27th, 2017, 07:53 AM   #1
Newbie, Joined: Nov 2017, From: NY

Method Of Undet. Coefficients question

Hi everyone! I'm really struggling with this problem:

Solve X' = AX + F(t) with A = (1 2, 2 4) and F(t) = (t, 2e^(5t)).

A and F(t) are both matrices; I'm not sure if there was a better way for me to write them. I think I need to use the method of undetermined coefficients, and I already solved the complementary solution, but I'm having a lot of trouble with the particular solution. Can anyone help?? Thanks

November 27th, 2017, 09:45 AM   #2
Math Team, Joined: Jan 2015, From: Alabama

You want to find a set of functions, $\displaystyle \begin{pmatrix}x(t) \\ y(t)\end{pmatrix}$, that satisfy the entire equation by "undetermined coefficients". In order to use "undetermined coefficients", even in the case of a single equation in a single unknown, you need to be able to make a "guess" at the form of the function. Since the non-homogeneous part consists of the functions $\displaystyle t$ and $\displaystyle 2e^{5t}$, I would look for functions of the form $\displaystyle x= At+ B+ Ce^{5t}$ and $\displaystyle y= Et+ F+ Ge^{5t}$.

Then $\displaystyle \frac{dx}{dt}= A+ 5Ce^{5t}= x+ 2y+ t= (A+ 2E+ 1)t+ (B+ 2F)+ (C+ 2G)e^{5t}$

and $\displaystyle \frac{dy}{dt}= E+ 5Ge^{5t}= 2x+ 4y+ 2e^{5t}= (2A+ 4E)t+ (2B+ 4F)+ (2C+ 4G+ 2)e^{5t}$.

Matching coefficients gives the equations $\displaystyle A+ 2E+ 1= 0$, $\displaystyle B+ 2F= A$, $\displaystyle C+ 2G= 5C$, $\displaystyle 2A+ 4E= 0$, $\displaystyle 2B+ 4F= E$, and $\displaystyle 2C+ 4G+ 2= 5G$ — six equations to solve for the six constants A, B, C, E, F, and G.
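A caveat worth adding (mine, not raised in the thread): the coefficient matrix A = (1 2; 2 4) is singular, with eigenvalues 0 and 5, so both forcing terms t and 2e^{5t} resonate with the homogeneous solution. The six equations above are then inconsistent — the first and fourth demand A + 2E = -1 and A + 2E = 0 at once — which signals that the guess must be enlarged with t^2 and t·e^{5t} terms. One way to check the final answer is to let SymPy solve the system; this is a sketch, assuming a recent SymPy whose dsolve handles constant-coefficient linear systems.

```python
import sympy as sp

t = sp.symbols('t')
x, y = sp.Function('x'), sp.Function('y')

# X' = A X + F(t) with A = [[1, 2], [2, 4]] and F(t) = (t, 2*exp(5*t))
odes = [
    sp.Eq(x(t).diff(t), x(t) + 2*y(t) + t),
    sp.Eq(y(t).diff(t), 2*x(t) + 4*y(t) + 2*sp.exp(5*t)),
]

for solution in sp.dsolve(odes):
    sp.pprint(solution)
```

The printed solutions should contain t^2 and t·e^{5t} terms alongside the arbitrary constants, confirming the resonance.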
https://thephilosophyforum.com/discussion/6187/a-proof-for-the-existence-of-god/p9
## A Proof for the Existence of God

• 13.8k

The nature of being and God IS being.

So that's the same thing as saying "the nature of God" no?

• 907

The nature of being and God IS being. — Dfpolis

So that's the same thing as saying "the nature of God" no?

Yes, as long as you do not take "nature" to entail limiting determinations.

• 13.8k

Okay but there is a limit in that being is some ways and not others. We've already gone over and agreed that it's some ways and not others. The ways it's not are the limits.

• 907

Okay but there is a limit in that being is some ways and not others. We've already gone over and agreed that it's some ways and not others. The ways it's not are the limits.

But, what being is not, is nothing.

• 13.8k

For example, the fact that nothing can be and not be in one and the same way at one and the same time, contra if it were the case that something could be and not be in one and the same way at one and the same time.

• 907

For example, the fact that nothing can be and not be in one and the same way at one and the same time, contra if it were the case that something could be and not be in one and the same way at one and the same time.

I understand the contrast, but not its point.

• 1.8k

I think that we can prove that there exists a timeless Power maintaining the universe in being.

And I think there is very little we can prove if you mean "prove" in a scientific sort of way? I'm pretty sure we can't prove what you said above, though.

• 537

Most people believe it is wrong to harm another person intentionally, although there is absolutely no way to "prove" this. All you can prove is what will happen if you are caught doing that (social condemnation, retaliation, possibly punishment). Perhaps belief in God can enjoy a similar kind of status without upsetting too many people on either side of the debate.
http://stochastix.wordpress.com/2012/02/
## Archive for February, 2012

### Deciding injectivity

February 28, 2012

Consider a function $f : \mathcal{X} \to \mathcal{Y}$. We say that $f$ is injective if and only if it maps distinct inputs to distinct outputs [1]. More formally, $f$ is injective if and only if

$\forall x_1 \forall x_2 \left( x_1 \neq x_2 \implies f (x_1) \neq f (x_2)\right)$

where $x_1$ and $x_2$ range over set $\mathcal{X}$. Note that the inequations $x_1 \neq x_2$ and $f (x_1) \neq f (x_2)$ are equivalent to $\neg (x_1 = x_2)$ and $\neg \left(f (x_1) = f (x_2)\right)$, respectively. The universally-quantified formula above can be rewritten as follows

$\forall x_1 \forall x_2 \left( \neg (x_1 = x_2) \implies \neg (f (x_1) = f (x_2))\right)$

which looks a bit messy. Contrapositively,

$\forall x_1 \forall x_2 \left( f (x_1) = f (x_2) \implies x_1 = x_2\right)$.

In this post, we will restrict our attention to real-valued functions of a real variable (i.e., functions from $\mathbb{R}$ to $\mathbb{R}$). For polynomial functions from reals to reals, we can use quantifier elimination to determine the truth values of the universally-quantified formulas above, thereby deciding injectivity.

__________

Example

We would like to decide the injectivity of the following functions

$\begin{array}{rl} f : \mathbb{R} &\to \mathbb{R}\\ x &\mapsto x^2\end{array}$                 $\begin{array}{rl} g : \mathbb{R} &\to \mathbb{R}\\ x &\mapsto x^3\end{array}$

which we plot below (using Wolfram Alpha). In high school we all learned to use the infamous horizontal line test to determine whether a function is injective. Let us use something more powerful this time, namely, the following REDUCE + REDLOG script:

```
% decide the injectivity of f(x) = x^2 and g(x) = x^3
rlset ofsf;

% define universally-quantified formulas
phi := all({x1,x2}, x1**2=x2**2 impl x1=x2);
psi := all({x1,x2}, x1**3=x2**3 impl x1=x2);

% perform quantifier elimination
rlqe phi;
rlqe psi;

end;
```

which decides that $f$ is not injective and that $g$ is injective.

__________

References

[1] David Makinson, Sets, Logic and Maths for Computing, Springer, 2008.

### Deciding the existence of equilibrium points

February 25, 2012

Suppose we are given a dynamical system of the form

$\dot{x} (t) = f ( x (t) )$

where $x : \mathbb{R} \to \mathbb{R}^n$ is the state trajectory, and $f : \mathbb{R}^n \to \mathbb{R}^n$ is a known vector field [1]. We now introduce the following definition:

Definition: A point $x^* \in \mathbb{R}^n$ is an equilibrium point of the given dynamical system if the state being at $x^*$ at some time $t^*$ implies that the state will remain at $x^*$ for all future time, i.e., $x (t) = x^*$ for all $t \geq t^*$.

In other words, an equilibrium point $x^*$ is a point of zero flow, i.e., $f ( x^* ) = 0$. If you happen to dislike the word "flow", feel free to replace it with the word "velocity". A point of zero velocity is thus a stationary one. Like a properly-trained poodle, it stays put.

One can find the equilibrium points of a given dynamical system by computing the real roots of the vector equation $f (x) = 0$. However, suppose that we are not interested in computing the equilibrium points; instead, all we would like to know is whether any such points exist. Thus, we arrive at the following decision problem:

Problem: Given a vector field $f : \mathbb{R}^n \to \mathbb{R}^n$, we would like to decide whether the dynamical system $\dot{x} (t) = f ( x (t) )$ has at least one equilibrium point.
This is equivalent to determining the truth value of the formula $\exists x \left( f (x) = 0 \right)$, where $x \in \mathbb{R}^n$. Since both $x$ and $f (x)$ are $n$-dimensional vectors, $x = (x_1, \dots, x_n)$ and $f (x) = (f_1 (x), \dots, f_n (x))$, we can rewrite the existentially-quantified formula above in the following form

$\displaystyle\exists x_1 \exists x_2 \dots \exists x_n \left( \bigwedge_{i=1}^n f_i (x_1, x_2, \dots, x_n) = 0\right)$.

Can one even determine the truth value of the existentially-quantified formula above? In order to avoid colliding against the brick wall of undecidability, let us restrict our attention to polynomial dynamical systems (i.e., dynamical systems in which the vector field $f : \mathbb{R}^n \to \mathbb{R}^n$ is polynomial). For such systems, one can use quantifier elimination to decide the existence of equilibrium points.

__________

Example

Consider the following polynomial dynamical system

$\begin{array}{rl} \dot{x}_1 &= \mu - x_1^2\\ \dot{x}_2 &= - x_2\end{array}$

taken from section 2.7 in Khalil's book [1]. Note that the two first-order ordinary differential equations above are decoupled, i.e., they are of the form $\dot{x}_i = f_i (x_i)$. Note also that we have a free parameter $\mu \in \mathbb{R}$.

We now compute the equilibrium points of the dynamical system by solving the (scalar) polynomial equations $f_i (x_i) = 0$. We thus obtain $x_1^2 = \mu$ and $x_2 = 0$. For $\mu > 0$, we have two equilibrium points at $(\sqrt{\mu}, 0)$ and $(-\sqrt{\mu}, 0)$. For $\mu = 0$, we have one equilibrium point at $(0,0)$. Finally, for $\mu < 0$, we have no equilibrium points, as the equation $x_1^2 = \mu$ has no real roots when $\mu < 0$. The existence of equilibrium points thus depends on the parameter $\mu$.

Stating that this dynamical system has at least one equilibrium point is the same as saying that the following existentially-quantified formula

$\displaystyle\exists x_1 \exists x_2 \left( x_1^2 - \mu = 0 \land x_2 = 0\right)$

evaluates to $\text{True}$. By visual inspection of the formula, we conclude that the formula (which has two bound variables, $x_1$ and $x_2$, and one free variable, $\mu$) evaluates to $\text{True}$ when $\mu \geq 0$. Let us confirm this using the following REDUCE + REDLOG script:

```
% decide the existence of equilibrium points
rlset ofsf;

% define polynomial vector field
f1 := mu - x1**2;
f2 := -x2;

% define existentially quantified formula
phi := ex({x1,x2}, f1=0 and f2=0);

% perform quantifier elimination
rlqe phi;

end;
```

which outputs $\mu \geq 0$. Hence, if the parameter $\mu$ is nonnegative, then there will always exist at least one equilibrium point.

__________

References

[1] Hassan K. Khalil, Nonlinear Systems, 3rd edition, Prentice Hall.

### Deciding linear independence

February 23, 2012

"Quantifier elimination has a bit of a magical feel to it." — Arnab Bhattacharyya (2011)

We would like to decide whether a given (finite) set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k\} \subset \mathbb{R}^n$ is linearly independent. Let us first recall the definition of linear independence. From Dym's book [1], we have:

Definition: A set of vectors $\mathbf{v}_1, \dots, \mathbf{v}_k$ in a vector space $\mathcal{V}$ over $\mathbb{F}$ is said to be linearly independent over $\mathbb{F}$ if the only scalars $\alpha_1, \dots, \alpha_k \in \mathbb{F}$ for which $\alpha_1 \mathbf{v}_1 + \dots + \alpha_k \mathbf{v}_k = \mathbf{0}$ are $\alpha_1 = \dots = \alpha_k = 0$.
This is just another way of saying that you cannot express one of these vectors in terms of the others.

_____

In this post we are interested in sets of real vectors and, thus, we shall restrict our attention to the case where $\mathbb{F} = \mathbb{R}$. Asserting that a given (finite) set of vectors is linearly independent is the same as stating that the set of vectors under study is not linearly dependent. Therefore, we now recall Dym's definition [1] of linear dependence:

Definition: A set of vectors $\mathbf{v}_1, \dots, \mathbf{v}_k$ in a vector space $\mathcal{V}$ over $\mathbb{F}$ is said to be linearly dependent over $\mathbb{F}$ if there exists a set of scalars $\alpha_1, \dots, \alpha_k \in \mathbb{F}$, not all of which are zero, such that $\alpha_1 \mathbf{v}_1 + \dots + \alpha_k \mathbf{v}_k = \mathbf{0}$.

Notice that this permits you to express one or more of the given vectors in terms of the others.

_____

From this definition it follows that saying that a given (finite) set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k\} \subset \mathbb{R}^n$ is linearly dependent is the same as stating that the following existentially quantified formula

$\displaystyle\exists \alpha_1 \exists \alpha_2 \dots \exists \alpha_k \left( \neg \left(\bigwedge_{i=1}^k \alpha_i = 0\right) \land \left(\sum_{i = 1}^k \alpha_i \mathbf{v}_i = \mathbf{0}\right)\right)$

evaluates to $\text{True}$. Please do note that $\sum_{i = 1}^k \alpha_i \mathbf{v}_i = \mathbf{0}$ is a vector equation encapsulating $n$ scalar equations of the form

$\displaystyle\sum_{i = 1}^k \alpha_i v_i^{(j)} = 0$

where $v_i^{(j)}$ is the $j$-th component of vector $\mathbf{v}_i$. Finally, we conclude that a given (finite) set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k\} \subset \mathbb{R}^n$ being linearly independent is equivalent to the following formula

$\neg \displaystyle\exists \alpha_1 \exists \alpha_2 \dots \exists \alpha_k \left( \neg \left(\bigwedge_{i=1}^k \alpha_i = 0\right) \land \left(\bigwedge_{j=1}^n \sum_{i = 1}^k \alpha_i v_i^{(j)} = 0\right)\right)$

evaluating to $\text{True}$. We can determine the truth value of the formula above using quantifier elimination.

__________

Example

Consider the three following vectors in $\mathbb{R}^4$

$\mathbf{v}_1 = \left[\begin{array}{c} 1\\ 0 \\ 0 \\ 0\end{array}\right]$,    $\mathbf{v}_2 = \left[\begin{array}{c} 0\\ 1 \\ 0 \\ 0\end{array}\right]$,    $\mathbf{v}_3 = \left[\begin{array}{c} 1\\ 1 \\ 0 \\ 0\end{array}\right]$.

Is the set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ linearly independent (over $\mathbb{R}$)? It is not, as $\mathbf{v}_3 = \mathbf{v}_1 + \mathbf{v}_2$. However, we would like to arrive at this conclusion using quantifier elimination. Note that in this example we have $k = 3$ (the number of vectors) and $n = 4$ (the dimensionality). The vector equation $\alpha_1 \mathbf{v}_1 + \alpha_2 \mathbf{v}_2 + \alpha_3 \mathbf{v}_3 = \mathbf{0}$ then becomes

$\alpha_1 \left[\begin{array}{c} 1\\ 0 \\ 0 \\ 0\end{array}\right] + \alpha_2 \left[\begin{array}{c} 0\\ 1 \\ 0 \\ 0\end{array}\right] + \alpha_3 \left[\begin{array}{c} 1\\ 1 \\ 0 \\ 0\end{array}\right] = \left[\begin{array}{c} 0\\ 0 \\ 0 \\ 0\end{array}\right]$

which yields two (scalar) equations: $\alpha_1 + \alpha_3 = 0$ and $\alpha_2 + \alpha_3 = 0$. Note that the vector equation above also yields two redundant equations of the form $0 = 0$, which we happily discard.
The set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ is linearly independent (over $\mathbb{R}$) if and only if the following existentially quantified formula

$\neg \displaystyle\exists \alpha_1 \exists \alpha_2 \exists \alpha_3 \left( \neg \left(\bigwedge_{i=1}^3 \alpha_i = 0\right) \land \alpha_1 + \alpha_3 = 0 \land \alpha_2 + \alpha_3 = 0\right)$

evaluates to $\text{True}$. Using the following REDUCE + REDLOG script, we can determine the truth value of the formula above:

```
% decide the linear independence of a set of vectors
rlset ofsf;

% define linear functions
f1 := alpha1;
f2 := alpha2;
f3 := alpha3;
f4 := alpha1 + alpha3;
f5 := alpha2 + alpha3;

% define existentially quantified formula
phi := not ex({alpha1,alpha2,alpha3}, not (f1=0 and f2=0 and f3=0) and f4=0 and f5=0);

% perform quantifier elimination
rlqe phi;

end;
```

This script returns $\text{False}$, which allows us to conclude that the set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ is, indeed, not linearly independent.

__________

Acknowledgements

This post was inspired by Arnab Bhattacharyya's recent blog post on quantifier elimination.

__________

References

[1] Harry Dym, Linear algebra in action, American Mathematical Society, 2006.
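For readers without a REDUCE installation, a note from the editor: the fully quantified versions of these questions can also be checked with an SMT solver. The sketch below uses Z3's Python bindings (the `z3-solver` package) — a tool of my choosing, not one the posts use. Unlike REDLOG, Z3 here only decides closed sentences (including sampled values of $\mu$); it does not return a quantifier-free formula in the free parameter.

```python
from z3 import Reals, ForAll, Implies, And, Solver, prove

x1, x2 = Reals('x1 x2')

# Injectivity over the reals: expect "counterexample" for x^2, "proved" for x^3
prove(ForAll([x1, x2], Implies(x1**2 == x2**2, x1 == x2)))
prove(ForAll([x1, x2], Implies(x1**3 == x2**3, x1 == x2)))

# Equilibrium existence for x1' = mu - x1^2, x2' = -x2, at sampled mu values:
# expect sat, sat, unsat
for mu_val in (1, 0, -1):
    s = Solver()
    s.add(x1**2 == mu_val, x2 == 0)
    print(mu_val, s.check())

# Linear independence of {v1, v2, v3}: expect "counterexample" (dependent),
# e.g. a1 = a2 = -1, a3 = 1
a1, a2, a3 = Reals('a1 a2 a3')
prove(ForAll([a1, a2, a3],
             Implies(And(a1 + a3 == 0, a2 + a3 == 0),
                     And(a1 == 0, a2 == 0, a3 == 0))))
```

For the linear-independence question specifically, plain numerical linear algebra is of course also available: `np.linalg.matrix_rank` applied to the matrix whose columns are $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$ returns 2 < 3, confirming dependence.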