source_id | question | response | metadata
---|---|---|---|
9,115 | This discussion question is inspired by this post on the current homework policy question. The main question is What are the goals of this site? Some things to think about when answering: What is it we want this site to represent? What are the ideals to which we should hold all of the content on this site? What do we want the site to be seen as? (i.e., how do we want it to look to outsiders?) Some might say this is unnecessary, but I think it is really important. To quote Jim (the user who posted the idea): Before we get ahead of ourselves [and define close reasons], we should step back and officially answer some questions that I'm sure many of you will consider already answered. But until we know everyone agrees on them and knows our stance, we can't properly move forward. Please try to focus on one question at a time in answers; hopefully once the community has reached a consensus, the answers can be combined into the main answer. Note: after this question, there will be two more questions, plus the close vote question already in existence. I will link to them as each is resolved. Next question: Leeway in deviating from goals of this site | The conception of the founding members is expressed in the tour: Physics Stack Exchange is a question and answer site for active researchers, academics and students of physics and astronomy. Now, "students" are explicitly included among the audience envisioned for the site, but the members have been very negative on the idea of becoming a resource to which beginners turn to finish an assignment or avoid having to do some basic reading and thinking about concepts that are widely addressed in basic pedagogical materials. This has taken the form of a much debated and repeatedly mutated 'homework' policy (where 'homework' has never really meant 'something that you were assigned for class', but something more like 'questions copied from texts or that seem to exist in order to teach the subject'). And frankly I don't think there is a large contingent interested in changing those basic parameters (inclusive, but not here to solve endless problems for students who should be practicing those problems). The linked posts are about trying to get a good consensus on what we do and don't want and how to express that in a comprehensible form. I propose that the site should be "for active researchers, academics and students of physics and astronomy", and that we should encourage posters to think of the site as a tool for jump-starting their thinking and not one for getting their work done for them. | {
"source": [
"https://physics.meta.stackexchange.com/questions/9115",
"https://physics.meta.stackexchange.com",
"https://physics.meta.stackexchange.com/users/121464/"
]
} |
9,777 | Can a kid be on this site and ask questions? I want to, but I think they would be silly. | Only people aged thirteen years and older may maintain accounts. Apart from that, you're welcome to ask whatever you like. Keep in mind that we'll hold your questions to the same standards as everyone else's --- that is, we won't treat you specially because you're young --- which may mean that your questions get downvoted or closed for reasons that aren't obvious to you. | {
"source": [
"https://physics.meta.stackexchange.com/questions/9777",
"https://physics.meta.stackexchange.com",
"https://physics.meta.stackexchange.com/users/151148/"
]
} |
9,822 | I'm prompted to start this thread by a comment on a recent question about why this site has the homework policy that it does. As I said in that answer, the ecological niche that this site occupies is rather different to the one that Mathematics Stack Exchange does, and a lot of this difference is directly attributable to the existence of MathOverflow . However, I realize that the details of why this situation came about are unknown to many users who joined after much of the development happened*, so it's worthwhile to go over some of that history. Plus, today marks a bit of a grim anniversary in that history, so maybe it's a good time to do some reflecting. So: How come there's only one Stack Exchange physics Q&A site, while mathematics gets two? I've written an answer with the essential timeline of the history, along with some links to relevant landmarks, and I'll describe from a (very) high level some of the choices different communities made along the way. However, I think it would be nice if people actually involved in those decisions could add their perspective on how we came to where we are at the moment. * Using some kludgy informal queries, I make it that from the users that posted ≥0-score posts in the last year, half joined in the last 18 months, and more than 75% within the past 3 years, which is rather later than much of the relevant history. | I'll take this in roughly historical order, and to the extent that I'm aware of things and can remember them, though there is a definite danger that I'm telling the stories as we'd like to reimagine them rather than as they actually happened, so please go check the original sources and make up your mind from them. This obviously can only be a pretty partial list, so if you have additions or corrections please add them in (just try to keep it neutral). Stack Exchange started as the trio of hard tech sites: Stack Overflow, followed later by Super User and Server Fault. Wikipedia has the essentials, and this Meta Stack Exchange is a reasonable jumping-off point. Long story short, Stack Overflow launched in 2008 and rose steadily thereafter. The conversation about Stack Overflow started on its UserVoice page and moved after about a year to Meta Stack Overflow ; most questions about what would become the Stack Exchange network ended up on Meta Stack Exchange after MSO split in two in April 2014. The first science site using the Stack Exchange engine was MathOverflow , which started on 28 September 2009 as Wikipedia tells it; this Meta MathOverflow question has more details. MathOverflow was started by a set of postdocs and graduate students, who set out to create a meeting place for professional research mathematicians. Some interesting reads about it are the official announcement on the blogosphere , or this slightly later piece on the AMS notices . From about one year into its tenure, there's pieces on The Atlantic and Mercury News that capture a lot of the zeitgeist, I think. On the more technical side, MathOverflow was started as a completely separate entity to Stack Exchange; instead, MathOverflow licensed the software from SE for its own use, a practice which SE did for a short time after it started (other instances including moms4mom and chiphacker , which became Electronics SE ), and which stopped after they set up area 51 as a mechanism for creating non- SOFU sites within SE; see this SE blog post for more details. 
MathOverflow migrated to the Stack Exchange 2.0 platform, managed by SE itself, in mid-2013, without really changing much in how the site is run from what I can tell. Some interesting bits and pieces regarding MathOverflow can be found in these meta questions: this , this and this . Also, note that the pre-migration Meta MathOverflow, which ran on an old-school PHP bulletin board, is now preserved as tea.mathoverflow.net , and if one wants to delve into deep MathOverflow history that is the place for it. An important bit of context from the time around the creation of MathOverflow is the first Polymath Project , which started in January 2009 taking off from Tim Gowers' blog , and which closed successfully within a couple of months. I don't know to what extent the MathOverflow founders had been thinking of starting the site before this, but it certainly showed that there was a large community of mathematicians online ready to take a more hands-on approach than the (also rather active) existing blogging community of the time. Mathematics Stack Exchange started a good deal later than MathOverflow, on 27 July 2010, and it came up through the Area 51 mechanism. A good place to have a look at what the conversation looked like at the time it was founded is the proposal's Area 51 page , which contains example questions and some discussion - and particularly this 2011 question on the one-site model vs the two-site model . Have a look at the definition tab for those goodies (I find Robert Harvey's comment here to be particularly illuminating on how things developed), and at the beta tab for more stats. Tooltips have more precise dates. There is also a lot of relevant discussion on a thread on the pre-2010 Meta Stack Exchange (a meta site for the Stack Exchange 1.0 software and family of sites), which is preserved in the Wayback Machine . Similarly, this site came up through this proposal on Area 51 , which was proposed on 2 June 2010, and the public beta launched on 9 November 2010 (see also here ). Again, that proposal gives a good look at what the conversation looked like at the time the site started; another interesting place for how the site scope came to be decided is the bottom of the scope tag on this meta . The site graduated from beta to full SE site on 24 February 2011. As an interesting bit of trivia for anyone who joined after February 2011 , the site design was originally a blackboard-inspired white-on-black , which was later changed . The next important step is the Area 51 proposal of a Theoretical Physics site , on 13 November 2010. By the time that proposal got traction, Physics SE had been running for some time, and there was definite dissatisfaction from some sectors about the level of questions ( example ). The proposal was eventually opened for public beta about a year later, on 4 October 2011, but it is important to note that there was definite pushback against splitting the SE physics community in two: the thread I linked to above, calls to just merge the TP proposal into PSE , and a bunch of similar discussions on the proposal definition page . This came to the point that SE community managers directly moved to close the proposal (also this ), with extensive feedback from this site prompting them to continue . When the Theoretical Physics site was running, there was some definite ambiguity in terms of where the boundary lay between that site and this one; this is painfully obvious in Where should research-level questions go? Theoretical Physics SE or Physics Research SE? 
; my feeling is that this probably didn't ever get resolved. From a more personal perspective, I joined Stack Exchange at about this time (9 April 2012), some nine months into my MRes in quantum information and quantum optics , and I have to say that I found the Theoretical Physics site to be extremely intimidating, and I felt like I could hardly get a word in edgewise ( this one ). In hindsight, I still think that this was a worrying sign - if that experiment were going on right now, I would wish for the level to be accessible to a broad base of graduate students across all branches of physics. But I digress. In any case, this was not to be for long, because the Theoretical Physics site closed on 25 April 2012 (exactly five years ago), through the announcement When a site grows quiet on the SE blog. This shut down six SE beta sites, including the initial astronomy beta (there's now a new one ), about six months into the Theoretical Physics beta. To see the community's reactions to this, try Why Theoretical Physics has been closed? on this meta, as well as a variety of post-mortems from different perspectives at Why did Theoretical Physics fail? on Meta SE. After it was closed, all the Theoretical Physics posts were merged into this site; for discussions of that process, the place to look is the site-salvage tag on this meta . It is important to note that the closing of Theoretical Physics took place during the growing-pains stage of the Area 51 mechanism, and at that stage Stack Exchange took a much closer look at traffic stats of beta sites than they do now. In particular, there was a discrete policy change announced on Graduation, site closure, and a clearer outlook on the health of SE sites that makes it much less likely that a small and slowly-but-consistently growing SE beta site will be shuttered for lack of traffic. I also want to mention another pair of sites that follow the two-site model that have been successful within the Stack Exchange network, and which we probably don't explore as analogues nearly as much as we should - Computer Science and Theoretical Computer Science , the Area 51 proposals for which are here and here respectively. Keep an eye on the timings, though: TCS was proposed in June 2010, went to public beta in August 2010, and it graduated from beta in November 2010; the plain CS site was proposed in September 2011, started public beta in March 2012, and it graduated four years later in January 2016. Thus, this is arguably another case where the 'hard' site came before the 'soft' one (bad descriptors, but you know what I mean), though obviously the elephant in that room is Stack Overflow, which isn't computer science as such but has a strong bearing on that discussion. Physics and computer science are very different in many respects, but this is nevertheless an interesting model to study. Over the years there have been definite chafings at the perceived low level of questions on this site, as shown e.g. in this thread - there's probably more out there to be found, though. There is also PhysicsOverflow , and I think I will leave the telling of this story to its organizers. I will point to the comments section of Luboš Motl's blog as the place for the initial discussions (more specifically on this thread and its comments ), and its initial announcement on the MathOverflow meta , as well as the thread What is Physics Overflow and how is it linked to Physics.SE? on this meta. 
I do feel it's important not to underestimate how much of the motivation for the foundation of PhysicsOverflow was purely on moderation concerns rather than shooting for a research-oriented site, but I'll let them tell their story. PhysicsOverflow is currently active, though as far as I understand it they still import a fair amount of this site's content through our CC BY license. I really encourage readers to visit and form their own opinion. I do want to touch on traffic as regards PhysicsOverflow, though, because I've seen several people make claims along the lines of "it's OK if we litter Physics Stack Exchange with drudge-level homework because if researchers get offended they can go to PhysicsOverflow much like mathematicians can go to MathOverflow". PhysicsOverflow reports its traffic on its statistics page ; in the past year those show an average of about 25 questions per month posted directly to PhysicsOverflow (and about 10 questions per month imported from Physics Stack Exchange), which is about the rate that Theoretical Physics had before it shut down. For comparison, MathOverflow gets about 40 questions a day, to 650+ on Mathematics and ~100 on this site. | {
"source": [
"https://physics.meta.stackexchange.com/questions/9822",
"https://physics.meta.stackexchange.com",
"https://physics.meta.stackexchange.com/users/8563/"
]
} |
10,183 | I normally only come when clicking on some questions from other StackExchange sites. Every time, the questions are protected by user Qmechanic. Is that something like the Community user? I don't see this at other StackExchange sites. | HNQs (Hot Network Questions) with $\geq 3$ answers are often protected by a moderator to prevent "thanks!", "me too!", or spam answers by new users. Questions can be flagged to let moderators know that they should be protected/unprotected. I doubt it would make a difference if I claim that I'm not a bot, so let me just point to the list of current Phys.SE moderators. | {
"source": [
"https://physics.meta.stackexchange.com/questions/10183",
"https://physics.meta.stackexchange.com",
"https://physics.meta.stackexchange.com/users/45330/"
]
} |
10,563 | [This was originally posted as an answer to another question and was transformed into a question following a comment to this effect.] To discourage rapid postings of the type alluded to in another question, I would suggest the community simply close questions containing screenshots of text. I can understand a screenshot of a figure, but if the text part of a question can be easily typeset with the built-in editor, there’s no place for a screenshot. If anything, the time and effort going into typesetting makes it easier to justify that the OP has made some effort - at least a typesetting effort - in thinking about and posting the question. Of course typesetting also means a question becomes searchable and all those advantages, but it seems to me that intolerance to screenshot questions (at least the textual part of the question) is enough of a deterrent to eliminate the most egregious cases. As to screenshots of equations, there is already a discussion elsewhere that would be applicable to questions as well as answers; the discussion on screenshots of equations seems to be inconclusive at this time. [Here are a first and a second example of questions where text is posted as a screenshot. The OPs are not rapid posters.] | I agree. Screenshots of text have no place on this site. There is simply no reason that plain text should ever be presented as a screenshot, and plenty of reasons why it shouldn't. If you want to quote text from a separate source which you have as an image, it is your responsibility to transcribe it. Transcribing the text, simply put, makes the site easier to read for everyone. A screenshot might only make things a little more awkward for the reader, but that awkwardness is definitely there, and it's your responsibility to make your posts as readable as possible. More importantly, it is an important accessibility concern. Do you have good eyesight, and are you reading this site without accessibility tools or without zooming in on text? Then good for you! but not every user of the internet has that ability. There are plenty of people with visual impairments , going from difficulty reading all the way to complete blindness, who have every right to use the internet, which they do relying on assistive technology such as screen readers that do just fine if text is typeset as text, but which get stopped in their tracks when it comes to screenshots. Using proper typesetting (including good use of Markdown and MathJax ) provides additional HTML syntax that those assistive technologies can use to give a more meaningful account of the content on the page. There are some disciplines that are intrinsically hard for visually-impaired people (say, photography? though even then, there are surprises) and physics is not one of them. I would like this site to be open to all, and proper typesetting is one of the things that allows us to be that. This machine-readability also makes the text easier to register and index by search-engine crawlers, which makes the post easier to search for and easier to find by other users, and which therefore makes the post more useful for a broader cross-section of internet users. Transcribing the text is also an important favour to the potential answerers who might want to copy-paste that text into their answers in order to refer to it in more detail. 
On a separate track, transcribing the text in your question is a way to demonstrate that you're prepared to match the effort you're asking others to perform in researching and writing an answer to your question with an effort to the best of your abilities to write the best question you can. If you cannot be bothered to type out two paragraphs of text, which then means that prospective answerers need to spend extra time squinting at a smudgy screenshot, then that speaks very poorly about how you value the effort you're soliciting. This specific thread is explicitly about text, which has no entry barriers to transcribing. Quite often, though, screenshots of text will also include some mathematics, which do require learning to use the LaTeX syntax used by the site's MathJax engine. I do not think this is an excuse: if you want to quote mathematics, it is your responsibility to typeset it accurately. This is indeed an entry barrier, and you do need to learn how to do so - the Mathematics SE site has an excellent tutorial - which goes back to putting in the effort in formatting your question correctly, to match the effort you're expecting from the answerers. (Furthermore, using the correct MathJax to display the math is even more of an accessibility concern. MathJax output is displayed using MathML , which contains a ton of semantic information that can be used by screen readers. To see just how much, right click on any complicated formula (here's one for convenience, $f(x) = \sum_{n=-\infty}^\infty c_n e^{inx}$ ) and click on Show Math As > MathML Code (so, for my example, it produces this code , which is the 'true' internal representation of the maths as the browser understands it). The resulting mess is not meant to be human-readable (the way LaTeX syntax is) but it is a bonanza for an automated system.) Now, from time to time (and in practice quite often), community members might step in and transcribe an image for you, particularly if you're a new site member. That's great! somebody decided to help you out and help improve the site on their own. However, if this happens more than two or three times, then it quickly starts becoming abuse of the site's community mechanisms. You are responsible for the content you post and for ensuring that it does not unduly waste other people's time, either editors or readers. As to what this community should do with questions that contain screenshotted text: what we've been doing already, namely downvoting and, where appropriate, closing; leave a link to this thread if you feel like the downvote requires an explanation. If you have the time and inclination, transcribe the image, particularly if it's a new user, but do make it clear to the poster that they should be doing that on their own. I use the AutoReviewComments userscript described in this answer and I've added a link to this answer to my copy of the 'Text, not pictures' template, which now reads Text, not pictures Please do not post images of texts you want to quote , but type it out instead so it is readable for all users and so that it can be indexed by search engines. For formulae, use MathJax instead. and as copyable source ### Text, not pictures
Please [do not post images of texts you want to quote](https://physics.meta.stackexchange.com/q/10563), but type it out instead so it is readable for all users and so that it can be indexed by search engines. For formulae, use [MathJax](https://math.meta.stackexchange.com/q/5020) instead. This takes a lot of the friction out, and it makes it much easier to autocomment, downvote, and then move on with one's life. And to deal explicitly with the specific question: is the presence of a non-transcribed screenshot of text, by itself, reason enough to close a question? Frankly, I personally think that that is excessive and not a very effective way to tackle the problem. As I said, I think that screenshots of text have no place on this site, but the close-fix-reopen cycle is too sluggish to deal effectively with the problem. More importantly, though, for the vast majority of questions that fall in that category there will also be other, more established, reasons to close the post, which makes the question moot most of the time. If none of that is applicable then, in my view, the first action should be a sharp word and a pointer to this thread. Typically the screenshot will be fixed one way or another, but if a newcomer insists on forcing others (by their inaction) to transcribe their screenshots then (as in the "elsewhere" reference for mathjax) question closure becomes a more appealing option as a way to force them to take notice. Ultimately, of course, people's votes are their own, but to offer a definitive guidance proposal: vote to close on any other applicable reasons first, and if none are applicable for me the relevant question is: does this question's use of a screenshot constitute an abuse of this site's community mechanisms that can be effectively dealt with through a question closure? If the answer is yes, then I'd say a custom-reason closevote pointing here is warranted. | {
"source": [
"https://physics.meta.stackexchange.com/questions/10563",
"https://physics.meta.stackexchange.com",
"https://physics.meta.stackexchange.com/users/36194/"
]
} |
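As a purely illustrative sketch of the kind of transcription being asked for above (the equation is a generic textbook example, not taken from any particular post): rather than screenshotting a formula such as the time-independent Schrödinger equation, one can paste its MathJax source, `$$ -\frac{\hbar^2}{2m}\frac{\mathrm{d}^2\psi(x)}{\mathrm{d}x^2} + V(x)\,\psi(x) = E\,\psi(x) $$`, which renders as $$ -\frac{\hbar^2}{2m}\frac{\mathrm{d}^2\psi(x)}{\mathrm{d}x^2} + V(x)\,\psi(x) = E\,\psi(x) $$ and is then real text that screen readers, search engines, and answerers quoting the question can all work with.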
10,581 | I don't know if this has come up in discussion, but somebody scraped various tags on Phys.SE and copy-pasted the results together into several books, including one on QFT , one on GR , and one on optics . The books in print cost about \$6 while the ones out of print can cost up to \$1000. The prices are so exorbitant I thought it was one of those Amazon money laundering operations, but it seems to be serious. Has anybody been contacted about this? Is this even legal? | Actually, on further reflection, I don't think this is OK at all. I'm all for re-use of the Creative Commons content that I post on this site, including commercial use if people can find ways to monetize it that are compatible with its licensing conditions. However, I'm not OK with reuse of that content in ways which infringe the attribution requirements (i.e. providing electronic copies without hyperlinks to the source post and author profile, and without making clear in the text which website the content came from) or the licensing requirements (details below). Generally speaking, it is very easy to get immediately incensed when you see your content used in ways you didn't anticipate, particularly if those ways include money changing hands. In those situations, say, when somebody says "somebody is printing Wikipedia and selling it for a profit", I often find it useful to append "... and it's being used to tremendous effect in schools in [insert developing country] where internet access is [insert suitable restriction]", and see if that changes how I feel about it. However, that is not the case with this book series. Rob Jeffries is quite right in his initial statement: you can go to google books and read these Q+A without being aware that the content has been stripped from elsewhere, with no obvious indication of the CC-BY-SA license and with no proper links to the original source of the material. And although the details vary depending on the precise presentation at issue, most of the ways in which the material is available are problematic enough to be infringing in one way or another. And frankly: Rob is right, it's time someone took the time to write some sharp lines to that effect. So, I've written a pretty curt email (much longer than I intended, partly to leave no room for wriggling and partly because I didn't have time to make it shorter) to Duckett which I intend to send shortly (update: I've just sent it). I will update this answer as and when I get any news from that. Before I do, though, I'm posting it here so I can get some community feedback in case my interpretation of the license is off in some way I'm not seeing right now. Dear Mr. Duckett, I am writing to you about your book series "Questions and Answers", collected from the Stack Exchange network of sites, as available through the Amazon and Google Books sites. This email requires action on your part. As you know, user-provided content on the Stack Exchange network is licensed under a Creative Commons Attribution-ShareAlike 3.0 license ( https://creativecommons.org/licenses/by-sa/3.0/legalcode ) that allows you to build upon that content and indeed to use it commercially so long as the license conditions are met: that the work be correctly attributed, that it be licensed under a compatible license that is clearly and unambiguously marked, and that no additional technological or legal restrictions be applied to the content. 
As regards attribution, the requirements for reusing Stack Exchange content are clearly spelled out in the Stack Exchange blog ( https://stackoverflow.blog/2009/06/25/attribution-required/ ) and they essentially follow the basic common-sense requirement that any reproduction of the work in electronic means must contain a hyperlink to the location of the original work as well as a hyperlink to the author's profile. As regards the license, you are required to license your derivative work only under the terms of the CC BY-SA 3.0 license; moreover, as the license makes it clear in section 4.a, You must include a copy of, or the Uniform Resource Identifier (URI) for, this License with every copy of the Work You Distribute or Publicly Perform. Similarly, as noted in that same section of the license, You may not impose any effective technological measures on the Work that restrict the ability of a recipient of the Work from You to exercise the rights granted to that recipient under the terms of the License. Finally, as regards the license, I draw your attention to section 7, which clarifies that redistribution of the work in breach of the license's terms will cause its termination, at which point you lose the right to use the work as originally licensed. This email concerns content whose copyright rests with me, which was made available on the website Physics Stack Exchange ( https://physics.stackexchange.com/ ) as well as other sites of the Stack Exchange network, and which you have redistributed as part of several collections of your 'Questions and Answers' series, which are available as Kindle ebooks on Amazon.com, currently in print and for sale whose profits (ostensibly) benefit you directly; as out-of-print paperback hardcopies from Amazon.com; as samples of the paperback hardcopies, displayed as electronic copies and attributed to you, on Google Books; as eBook samples, displayed as electronic copies and attributed to you, on Google Books; as well as a variety of other outlets ( https://www.google.com/search?q=Duckett+ "Questions+and+Answers"+-site:amazon.*+-site:books.google.com). You can find a selection of the books in your series which contain content under my copyright using the Google Books search feature ( https://www.google.com/search?tbm=bks&q=inauthor%3A%22George+A+Duckett%22+emilio-pisanty ). These include your books on Physics, Thermodynamics, Quantum Mechanics, Energy in Physics, Electrostatics, Particle Physics, Fluid Dynamics, Physics Exercises, Optics, and Mathematica. Your redistribution of the work is in infringement of the CC BY-SA 3.0 in (at least) the following ways: The Google Books samples, as provided by default, do not contain any hyperlinks to either the source of each work nor the profile of the author. The Google Books samples, as provided by default, do not contain any indication of the origin of the work as globally contained in the collection: they have an 'About this book' page which does not reference the website from which the book's content originated. The Google Books samples, as provided by default, often contain no indication of the licensing conditions of the work, as the final page (containing the scant copyright and licensing information) is not provided. 
Where the Google Books samples, as provided by default, do contain licensing information, they still do not contain suitable indication of the CC BY-SA license: where the final licensing page is present, the information provided, "All questions and content within this book are licensed under cc-wiki with attribution required" is insufficient. To clarify this point further: The name "cc-wiki" is essentially meaningless and it was clearly deprecated by both Creative Commons and Stack Exchange as early as June 2009 ( https://stackoverflow.blog/2009/06/04/stack-overflow-creative-commons-data-dump/ ). The correct name of the license is CC BY-SA 3.0, including the license version (which does matter). The collection does not contain either a copy of the license text or the URI of the license, as required by §4.a of the license. The Google Books samples, in their eBook version, contain no indication of the licensing conditions of the work, as the final page (containing the scant copyright and licensing information) is not provided. The Kindle ebook versions as provided in samples by Amazon.com do not contain any indication of the CC BY-SA license, nor do they contain the license text as required. The Kindle ebook versions contain technological measures (generally known as Digital Rights Management) that prevent their users from redistributing the work, a restriction that is explicitly forbidden by the license. The paperback copies (as depicted in the Amazon.com "Look Inside" feature) do not feature any indication of the website from which the work originates, as their 'About this book' page does not reference the Stack Exchange website from which the book's content originated. As indicated above, the infringement of the license terms causes the license to expire. More generally, the core problem is that readers can see the content, both on Google Books as well as on the paperback copies, without having any idea as to where the content came from. I should also note that several others have found your approach problematic, as discussed on the Physics Meta site ( Somebody scraped our answers and sold them as a book ), (as well as a previous discussion at Are these eBooks that copy from SE illegal? ); indeed they may contact you independently. If you wish to address these issues in good faith directly with the Physics Stack Exchange community, I would ask that you do so at that thread. In the final page of each book you include a statement ("If you have or know of copyrighted content included in this book and want it to be removed please let me know at [email protected]") that indicates that you intend to do right by the license terms. Given the fact that the books are already published and that several of your mistakes cannot be undone without altering the record, your options are quite limited, but if you want to retain the license to the work, you do need to fix the problems. To be clear, I am not requiring you to remove the content under my copyright from your collections: instead, I am requiring you to stick to the CC BY-SA license's terms as stipulated above. This means that you must: Ensure that all electronic copies of the work, as provided by both Google Books and Amazon.com, contain appropriate attribution, including hyperlinks to the source post and its author's profile page. 
Ensure that all electronic copies of the work, as provided by both Google Books and Amazon.com, contain appropriate licensing information, including the correct name of the license and its full text or the URI of a complete copy of the license. This will likely require you to ensure that the hyperlink-less copies are removed from Google Books and that the eBook sample is made available without cutting any attributions and without removing the license statement (i.e. likely to make the eBook sample available in full), and ensure that the full and correct license information is available from the landing page. Similarly, this will require you to ensure that the Kindle eBooks (currently still on sale) are amended to include the correct license and a URI of a complete copy, and to ensure that the DRM is removed from all copies of the work, as well as ensuring that the full licensing information is available 'above the fold' on the Amazon.com landing pages. If you are unwilling or unable to fix these problems, or if you do not respond to this communication, my next steps will be, at my convenience, to initiate Digital Millennium Copyright Act takedown proceedings against the infringing copies of your books on both Google Books and Amazon.com. I look forward to hearing from you regarding your planned actions on resolving the infringing aspects of your published collections. Sincerely, Emilio Pisanty Oh, and finally: I'm releasing this post under CC0 , in case you want to, say, privately distribute slightly-modified copies of this text by email. | {
"source": [
"https://physics.meta.stackexchange.com/questions/10581",
"https://physics.meta.stackexchange.com",
"https://physics.meta.stackexchange.com/users/83398/"
]
} |
11,105 | The fact that one of the moderators is pretty much inactive has been brought up many times already (e.g. this meta post from last year). In fact, he has been invited to step aside a couple of times by now, with no feedback from his side (AFAIK). I believe it is important to have a conversation about it at some point, so why not now? Do we want this person to remain a moderator? Is there any mechanism to enforce the decision if he does not manifest any will to follow the community's wish? | Oh, wow, I hadn't realized this was going on :) I first just wish to apologise to the community, both Chem.SE and Physics.SE, for my lack of activity. I'll shortly explain what happened, but I also just feel bad for causing this conflict, and think I could have handled this better. Sorry about that. So a bit of background: Around 2014 I started getting involved in the Rust community, and my capacity for dealing with stuff online decreased. I also was just busier in college, and balancing my time on Stack Exchange became increasingly harder. I still spent time moderating, though. I've always had a bit of conflict in picking between a career in software vs a career in physics -- I really like physics and picked it as a major, but I've really enjoyed programming. My opinion eventually shifted from being set on doing physics to doing software (for a complex tangle of reasons), and I eventually got a job doing Rust stuff at Mozilla. This also meant that Being Online was a part of my job so I didn't have as much of an inclination to participate here as I used to. It also decentered physics from my life, impacting my participation similarly. But Stack Exchange used to be a big part of my life. It was an online community I truly enjoyed, and I'd learned so much from it, both in terms of actual physics/etc as well as so many things about community dynamics. A bunch of the stuff I've learned here has been applied to the Rust community at a fundamental level, and it's overall been super helpful. So I couldn't bring myself to stop participating. Instead, I felt like I would eventually have time again, and that would be soon. Sometimes I'd drop by and moderate a couple things and check out the meta and overall be really happy that this place exists and is running smoothly. The comm team sends out these check in emails to semi inactive moderators and that would occasionally spur me into participating a bit again, but it never lasted. I recognized that this wasn't sustainable, but it always felt like "eh, I'll have more time next month". Of course, tomorrow never comes, and I've only gotten busier over time. I still love this place to pieces, but it's just hard to find time to participate. Honestly, moderation isn't the ideal way for me to participate anyway , as someone who isn't doing physics all the time anymore I should probably spend more time doing q&a for fun, not moderation. The reason I like moderating is that I really care about communities I care about being pleasant spaces, and I care about it enough to consider it my duty to help. But I'm not helping right now, and if I wanted to I can help quite a bit without having the diamond. There's really no reason for me to still have the diamond, it's just taken this nudge to cause the reflection necessary for me to explicitly realize that :) So yeah, I definitely should step down. Consider this my resignation letter. It's been fun! I hope to eventually have time to participate again. I'll definitely be dropping by now and then. | {
"source": [
"https://physics.meta.stackexchange.com/questions/11105",
"https://physics.meta.stackexchange.com",
"https://physics.meta.stackexchange.com/users/84967/"
]
} |
12,568 | Update 2 For the most recent information about this request, see the test announcement post - Testing 3-vote close and reopen Update Based on the votes, we will be giving this experiment a try. We are working with the CM team now to figure out what metrics we can measure for comparisons before and during the test and we will update everybody as soon as we come up with a plan and a date to start the test. Stay tuned! In early August, Stack Overflow (the company) announced an experiment to lower the number of votes required to close or to reopen a question to 3 votes on Stack Overflow (the website). The trial period was 30 days, and at the end (plus time for data crunching), the results and an in-depth analysis of the experiment were posted. It was a resounding success on Stack Overflow (the website). Shortly over a week ago, Stack Overflow (the company) announced the change to vote counts would become permanent on Stack Overflow (the website). The Community Manager team has indicated that other sites are welcome to try out a 30 day test of the same thing if the community agrees. Several smaller sites and beta sites have started the process to undergo the test, motivated by the limited number of active curators on those sites. Physics.SE is not a small site, but we're also not the size of Stack Overflow. We have a dedicated core group of curators (thank you!!!), but the group isn't big enough to get quick and effective action on closing or reopening questions that warrant it. A completely not scientific, observation-biased look at the close and reopen queues indicates many questions quickly get 3 or 4 close or reopen votes and then languish. We would like to propose that Physics.SE undergo a 30 day trial of the lowered close and reopen vote counts, so it will only take 3 votes to close or reopen a question. At the end of the 30 days, we can evaluate how the test went and help Stack Overflow (the company) decide if it works on sites of our scale. I'll leave the discussion here open for 3 weeks (we'll take a final look at the status on 3 January 2020) and if there is enough of a consensus around evaluating it, we can let the Community Management team know and they can enable the new vote count requirements for 30 days. Is the community interested in experimenting with a 3 vote requirement for opening and closing questions? | YES Physics.SE should undergo a 30 day test with 3 votes required to close and reopen questions. | {
"source": [
"https://physics.meta.stackexchange.com/questions/12568",
"https://physics.meta.stackexchange.com",
"https://physics.meta.stackexchange.com/users/6634/"
]
} |
13,296 | How should I feel about the Stack Exchange reputation point system? I am new to the Physics Stack Exchange site as a contributor. I try to answer questions that I am sure I know the answer to. I hope to build up enough reputation that I can put my Stack Exchange reputation in my CV. So, the reputation points matter to me. Sometimes, I spend a lot of time answering a question and then the question is removed! Other times, I work out a detailed question and write down the answer for a bounty question and I get the most votes but in the end my answer is not picked. There are times when I write a long answer but get no upvotes and no downvotes. Does anyone read my answers at all? Do I care too much? Should I avoid answering questions with long answers? | Of your 65 answers to date, you've written 59 in the last ten days. If you continued at that pace for very long, you would become one of the most prolific answerers of questions in our community. You don't show up yet in this sitewide query because of the schedule on which its cache is updated, but if you poke around in dates in the past, you'll see that our most vigorous answerers typically only average one or two answers per day over any extended period of time. The most common scores for answers from the authors at the top of that prolific list are zero and one points; you are not alone there. As far as "does anyone read my answers at all?" goes, your user profile currently estimates that helpful things you've written have been seen by many thousands of people. If you are finding posting frustrating, take a break from it. We'll still be here when you're ready to come back. It sounds like you have some breathing room to be more judicious in choosing which questions to answer, but I'd say overall you are doing just fine. | {
"source": [
"https://physics.meta.stackexchange.com/questions/13296",
"https://physics.meta.stackexchange.com",
"https://physics.meta.stackexchange.com/users/238807/"
]
} |
13,609 | I know that this is probably off-topic but I am posting it anyway. I am having difficulty reconciling myself to contributing content to a site that is now owned by a subsidiary of Naspers, a South African company with a racist past of supporting decades of apartheid and white supremacy. The company refused to comply with requests from South Africa’s Truth and Reconciliation Commission to detail its complicity. Instead, 127 individual employees told the commission that Naspers “had formed an integral part of the power structure which implemented and maintained apartheid”. As the company became more global, it decided to “apologize” for its role, but its apology has been criticized as insufficient. I mentioned Naspers’ past in a comment to the CEO’s blog post, and it was removed. Twice. This is censorship. My comment was completely truthful, but inconvenient to the image of Stack Exchange. Well, in my opinion its image is now awful. I am going to take a break while I consider whether I can be morally complicit in the new corporate regime. I am not interested in being encouraged to stay, but I will respectfully listen to opinions arguing why Naspers’ past should be irrelevant. | Your time is valuable -- invaluable, actually, since time is the one thing we can't buy or create -- and you have the absolute right (and responsibility) to use it in pursuits you support. So to "answer" your "question" posed at the end of your post -- the past, present, and future of any company or entity you provide your time (and money, for that matter) to is as relevant or irrelevant as you want it to be. The reality of the world is that, unfortunately, any company that has been around for any length of time will likely have been a party to something objectionable. This is true for all of the major US companies, European companies, Chinese companies, Russian companies, etc. etc., for the past 100+ years (and in some cases, longer). Some of those companies were active participants in the atrocities. Others went along, and still others might have found it offensive but didn't speak up or do anything about it. Some companies may have tried to atone for their past. Others may have tried to hide it. Still others may refuse to move on. Perhaps it is performative, and perhaps it is substantive. I can't speak to the issues at play in this particular instance because I don't know the background in enough detail. There may be other issues or concerns with Prosus, its subsidiaries, other holdings, etc. All I can say is that we are volunteers here with our time and our expertise. Hopefully we're here because we find joy in sharing that with others. But, StackExchange is a for-profit company and our passions and time support their bottom line. If who owns that bottom line gives you pause or troubles you, then it's always worth asking yourself if you still enjoy the experience. I won't attempt to change your mind. That's not my place. And I wholeheartedly support you exploring your social and moral responsibilities with respect to your time and expertise. Do some research, if you want to, into what commitments have been made, or not been made, with regard to the issues you find important. Arguably the most important benefit of a free marketplace is that companies provide what consumers demand. For a long time, the primary demand has been low prices. 
But there is a growing demand for social responsibility in corporate behavior, whether it is in investing, purchasing, doing other business with, or in our instance, volunteering for/participating in their community. If you decide you no longer find joy in the experience with the site because of the issues you identify, then I encourage you to use your time in things that do bring you joy. Nobody can replace your time. Your contributions here have been immense and you would definitely be missed. If you do decide to continue, then I hope it is as rewarding and fun for you as it has been. Okay, well I actually hope it is even more rewarding and even more fun than it has been because this is a great community with tremendous potential and I'd like people to enjoy things more than they already do! | {
"source": [
"https://physics.meta.stackexchange.com/questions/13609",
"https://physics.meta.stackexchange.com",
"https://physics.meta.stackexchange.com/users/199630/"
]
} |
13,861 | This comment-like answer was posted by a rep-1 user with no (AFAICT) other posts, and then silently deleted within 15 minutes, leaving the poster wondering why. The user then posted a copy of the answer, where some others have commented and let him know that "This place is not like a forum. Your answer should literally answer the question." A while after this the answer was deleted too. The second deletion was OK. But the first one could have been better IMO. Particularly, I'd expect either the moderator to post an explanation (maybe canned, as available in the Review), or there to be a banner (or whatever it's called here) above the post, explaining what's wrong with the post and linking to the relevant documentation. The Q&A format is not very familiar to many new users, after all. Or am I missing something? Is there already an explanation that's not visible to anyone except the poster of the deleted answer? | That was me. I usually leave such a comment, but on this occasion I didn’t. A mistake, for which I apologize. | {
"source": [
"https://physics.meta.stackexchange.com/questions/13861",
"https://physics.meta.stackexchange.com",
"https://physics.meta.stackexchange.com/users/21441/"
]
} |
13,901 | Recently I've been getting a lot of physics questions, and I'm at the point where I'm asking a couple a day. Should I restrain myself, or can I go willy-nilly as long as they meet the standards of the site? | There is no rule against asking too many questions. More questions make the site better. If you have three questions today, ask them! There are, however, rules about asking too many low-quality questions. And, because writing and being creative is hard, users who ask a lot of questions frequently start having trouble asking consistently high-quality questions . I use this data.SE query for identifying people who write lots of questions and answers (beware that the column names don’t change when you switch to questions). For the week ending 2021-10-31, which is the most recent available, you were in fact the top asker on the site, with seven questions. For the month, with nine questions, you’re in the top ten or so. But we had 2653 total questions in October, so your nine make up about 0.3% of the month’s questions for the site. From having used that query for several years, our most active askers tend to cap out around one question per day, and our most active answerers tend to cap out around three. Some people will do more for a burst, but people’s activity level on the site tends to ebb and flow. As long as your questions are attracting upvotes and answers from other users, feel free to ask more. If you start to get negative feedback on your questions, such as downvotes, close votes, or comments suggesting improvements, take that feedback seriously. | {
"source": [
"https://physics.meta.stackexchange.com/questions/13901",
"https://physics.meta.stackexchange.com",
"https://physics.meta.stackexchange.com/users/310823/"
]
} |
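The data.SE query linked in the answer above is not reproduced here. As a rough, hypothetical sketch of how one could pull similar per-user question counts independently, the public Stack Exchange API exposes a /2.3/questions route that can be filtered by site and date range; the timestamps, page limit, and top-10 cutoff below are illustrative assumptions, not part of the original answer.

```python
import collections
import time

import requests

API = "https://api.stackexchange.com/2.3/questions"

def top_askers(fromdate, todate, site="physics", max_pages=10):
    """Count questions per asker on `site` between two Unix timestamps.

    Rough sketch only: quota handling is minimal, and key-less requests
    are subject to low rate limits.
    """
    counts = collections.Counter()
    for page in range(1, max_pages + 1):
        resp = requests.get(API, params={
            "site": site,
            "fromdate": fromdate,  # Unix epoch seconds (UTC)
            "todate": todate,
            "pagesize": 100,
            "page": page,
        })
        data = resp.json()
        for item in data.get("items", []):
            owner = item.get("owner", {})
            counts[owner.get("display_name", "deleted user")] += 1
        if not data.get("has_more"):
            break
        time.sleep(data.get("backoff", 0))  # honour any backoff the API asks for
    return counts.most_common(10)

if __name__ == "__main__":
    # Example week (timestamps are placeholders, not the week quoted above).
    print(top_askers(fromdate=1635033600, todate=1635638400))
```

Sorting the counter gives roughly the kind of per-week leaderboard the answer describes, though the numbers will differ from the data.SE query, since deleted questions are not returned by the API.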
1 | I often hear about subatomic particles having a property called "spin" but also that it doesn't actually relate to spinning about an axis like you would think. Which particles have spin? What does spin mean if not an actual spinning motion? | Spin is a technical term specifically referring to intrinsic angular momentum of particles. It means a very specific thing in quantum/particle physics. (Physicists often borrow loosely related everyday words and give them a very precise physical/mathematical definition.) Since truly fundamental particles (e.g. electrons) are point entities, i.e. have no true size in space, it does not make sense to consider them 'spinning' in the common sense, yet they still possess their own angular momenta. Note, however, that like many quantum states (fundamental variables of systems in quantum mechanics) spin is quantised; i.e. it can only take one of a set of discrete values. Specifically, the allowed values of the spin quantum number $s$ are non-negative multiples of 1/2. The actual spin momentum (denoted $S$) is a multiple of Planck's constant, and is given by $S = \hbar \sqrt{s (s + 1)}$ (a short worked example follows this entry). When it comes to composite particles (e.g. nuclei, atoms), spin is actually fairly easy to deal with. Like normal (orbital) angular momentum, it adds up linearly. Hence a proton, made of three constituent quarks, has overall spin 1/2. If you're curious as to how this (initially rather strange) concept of spin was discovered, I suggest reading about the Stern-Gerlach experiment of the 1920s. It was later put into the theoretical framework of quantum mechanics by Schrödinger and Pauli. | {
"source": [
"https://physics.stackexchange.com/questions/1",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/7/"
]
} |
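As a short worked illustration of the formula quoted in the answer above (plain arithmetic added here, not part of the original post): for an electron, $s = \tfrac{1}{2}$, so $$S = \hbar\sqrt{s(s+1)} = \hbar\sqrt{\tfrac{1}{2}\cdot\tfrac{3}{2}} = \frac{\sqrt{3}}{2}\,\hbar \approx 0.87\,\hbar,$$ while for a spin-$1$ particle such as the photon, $S = \hbar\sqrt{1\cdot 2} = \sqrt{2}\,\hbar$. The measurable projection along any chosen axis is, by contrast, restricted to $m_s\hbar$ with $m_s = -s, -s+1, \ldots, s$, which is what the Stern-Gerlach experiment mentioned in the answer reveals.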
15 | I know that there's big controversy between two groups of physicists: those who support string theory (most of them, I think) and those who oppose it. One of the arguments of the second group is that there's no way to disprove the correctness of the string theory. So my question is whether there's any defined experiment that would disprove string theory? | One can disprove string theory by many observations that will almost certainly not occur, for example: By detecting Lorentz violation at high energies: string theory predicts that the Lorentz symmetry is exact at any energy scale; recent experiments by the Fermi satellite and others have shown that the Lorentz symmetry works even at the Planck scale with a precision much better than 100% and the accuracy may improve in the near future; for example, if an experiment ever claimed that a particle is moving faster than light, string theory predicts that an error will be found in that experiment. By detecting a violation of the equivalence principle ; it's been tested with the relative accuracy of $10^{-16}$ and it's unlikely that a violation will occur; string theory predicts that the law is exact. By detecting a mathematical inconsistency in our world, for example that $2+2$ can be equal to both $4$ and $5$; such an observation would make the existing alternatives of string theory conceivable alternatives because all of them are mathematically inconsistent as theories of gravity; clearly, nothing of the sort will occur; also, one could find out a previously unknown mathematical inconsistency of string theory - even this seems extremely unlikely after the neverending successful tests. By experimentally proving that the information is lost in the black holes, or anything else that contradicts general properties of quantum gravity as predicted by string theory, e.g. that the high center-of-mass-energy regime is dominated by black hole production and/or that the black holes have the right entropy ; string theory implies that the information is preserved in any processes in the asymptotical Minkowski space, including the Hawking radiation, and confirms the Hawking-Bekenstein claims as the right semiclassical approximation; obviously, you also disprove string theory by proving that gravitons don't exist ; if you could prove that gravity is an entropic force, it would therefore rule out string theory as well. By experimentally proving that the world doesn't contain gravity, fermions, or isn't described by quantum field theories at low energies; or that the general postulates of quantum mechanics don't work ; string theory predicts that these approximations work and the postulates of quantum mechanics are exactly valid while the alternatives of string theory predict that nothing like the Standard Model etc. 
is possible. By experimentally showing that the real world contradicts some of the general features predicted by all string vacua which are not satisfied by the "Swampland" QFTs as explained by Cumrun Vafa; if we lived in the swampland, our world couldn't be described by anything inside the landscape of string theory; the generic predictions of string theory probably include the fact that gravity is the weakest force, moduli spaces have finite volume, and similar predictions that seem to be satisfied so far. By mapping the whole landscape , calculating the accurate predictions of each vacuum for the particle physics (masses, couplings, mixings), and by showing that none of them is compatible with the experimentally measured parameters of particle physics within the known error margins; this route to disprove string theory is hard but possible in principle, too (although the full mathematical machinery to calculate the properties of any vacuum at any accuracy isn't quite available today, even in principle). By analyzing physics experimentally up to the Planck scale and showing that our world contains neither supersymmetry nor extra dimensions at any scale. If you check that there is no SUSY up to a certain higher scale, you will increase the probability that string theory is not relevant for our Universe but it won't be a full proof. A convincing observation of varying fundamental constants such as the fine-structure constant would disprove string theory unless some other unlikely predictions of some string models that allow such a variability would be observed at the same time. The reason why it's hard if not impossible to disprove string theory in practice is that string theory - as a qualitative framework that must replace quantum field theory if one wants to include both successes of QFT as well as GR - has already been established. There's nothing wrong with it; the fact that a theory is hard to exclude in practice is just another way of saying that it is already shown to be "probably true" according to the observations that have shaped our expectations of future observations. Science requires that hypotheses have to be disprovable in principle, and the list above surely shows that string theory is. The "criticism" is usually directed against string theory but not quantum field theory; but this is a reflection of a deep misunderstanding of what string theory predicts; or a deep misunderstanding of the processes of the scientific method; or both. In science, one can only exclude a theory that contradicts the observations. However, the landscape of string theory predicts the same set of possible observations at low energies as quantum field theories. At long distances, string theory and QFT as the frameworks are indistinguishable; they just have different methods to parameterize the detailed possibilities. In QFT, one chooses the particle content and determines the continuous values of the couplings and masses; in string theory, one only chooses some discrete information about the topology of the compact manifold and the discrete fluxes and branes. Although the number of discrete possibilities is large, all the continuous numbers follow from these discrete choices, at any accuracy. So the validity of QFT and string theory is equivalent from the viewpoint of doable experiments at low energies. The difference is that QFT can't include consistent gravity, in a quantum framework, while string theory also automatically predicts a consistent quantum gravity. 
That's an advantage of string theory, not a disadvantage. There is no known disadvantage of string theory relative to QFT. For this reason, it is at least as established as QFT. It can't realistically go away. In particular, it's been shown in the AdS/CFT correspondence that string theory is automatically the full framework describing the dynamics of theories such as gauge theories; it's equivalent to their behavior in the limit when the number of colors is large, and in related limits. This proof can't be "unproved" again: string theory has attached itself to the gauge theories as the more complete description. The latter, older theory - gauge theory - has been experimentally established, so string theory can never be removed from physics anymore. It's a part of physics to stay with us much like QCD or anything else in physics. The question is only what is the right vacuum or background to describe the world around us. Of course, this remains a question with a lot of unknowns. But that doesn't mean that everything, including the need for string theory, remains unknown. What could happen - although it is extremely, extremely unlikely - is that a consistent, non-stringy competitor to string theory that is also able to predict the same features of the Universe as string theory could emerge in the future. (I am carefully watching all new ideas.) If this competitor began to look even more consistent with the observed details of the Universe, it could supersede or even replace string theory. It seems almost obvious that there exists no "competing" theory because the landscape of possible unifying theories has been pretty much mapped, it is very diverse, and whenever all consistency conditions are carefully imposed, one finds out that one returns to the full-fledged string/M-theory in one of its diverse descriptions. Even in the absence of string theory, it could hypothetically happen that new experiments will discover new phenomena that are impossible - at least unnatural - according to string theory. Obviously, people would have to find a proper description of these phenomena. For example, if there were preons inside electrons, they would need some explanation. They seem incompatible with the string model building as we know it today. But even if such a new surprising observation were made, a significant fraction of the theorists would obviously try to find an explanation within the framework of string theory, and that's obviously the right strategy. Others could try to find an explanation elsewhere. But neverending attempts to "get rid of string theory" are almost as unreasonable as attempts to "get rid of relativity" or "get rid of quantum mechanics" or "get rid of mathematics" within physics. You simply can't do it because those things have already been shown to work at some level. Physics hasn't yet reached the very final end point - the complete understanding of everything - but that doesn't mean that it's plausible that physics may easily return to the pre-string, pre-quantum, pre-relativistic, or pre-mathematical era again. It almost certainly won't. | {
"source": [
"https://physics.stackexchange.com/questions/15",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/57/"
]
} |
17 | Why does the sky change color? Why is the sky blue during the day, red during sunrise/set and black during the night? | The keywords here are Rayleigh scattering . See also diffuse sky radiation . But much more simply, it has to do with the way that sunlight interacts with air molecules. Blue light is scattered more than red light, so during the day, when we look at parts of the sky away from the sun, we see more blue than red. During sunset or sunrise, the sunlight reaching us comes in at a grazing angle and passes through much more atmosphere, so most of the blue light is scattered away before it gets to us, and we see mostly red light. | {
"source": [
"https://physics.stackexchange.com/questions/17",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
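A quick numerical illustration of the answer above: Rayleigh scattering strength scales roughly as $1/\lambda^4$ (a standard result that the answer only states qualitatively), so blue light near 450 nm is scattered several times more strongly than red light near 650 nm. A minimal Python sketch, with the two wavelengths chosen by me as representative values:

```python
# Rayleigh scattering strength scales as 1/wavelength^4 (standard result;
# the answer above only says "blue is scattered more than red").
blue_nm, red_nm = 450.0, 650.0

ratio = (red_nm / blue_nm) ** 4  # relative scattering of blue vs red light
print(f"Blue light is scattered ~{ratio:.1f}x more strongly than red light")
```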
27 | We've learned that the wave function of a particle collapses when we measure a particle's location. If it is found, it becomes more probable to find it again in the same area, and if not, the probability of finding it in the place that was checked decreases dramatically. My question is about the definition of measurement. What makes a measurement different from any other interaction between two particles (gravity and EM fields for example)? In reality, almost every particle interacts with any other particle, so shouldn't there be constant collapse of the wave function all the time? If this happens we're right back in classical mechanics, aren't we? | What you describe in your question is the "Copenhagen interpretation" of quantum mechanics. There are more nuanced views of this nowadays that don't treat "measurements" quite so asymmetrically, see e.g. sources that talk about decoherence. I recommend watching the classic lecture "Quantum Mechanics in your face" by Sidney Coleman for a nice take on this sort of thing. | {
"source": [
"https://physics.stackexchange.com/questions/27",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/72/"
]
} |
29 | A person drove 120 miles at 40 mph, then drove back the same 120 miles at 60 mph. What was their average speed? The average of the speeds is
$$\frac{40\ \text{mph} +60\ \text{mph}}{2} = 50\ \text{mph}$$
so the total trip time should be, by the definition of average speed,
$$\frac{240\ \text{mi}}{50\ \text{mph}} = 4.8 \ \text{hours}.$$ However, this is wrong, because the trip actually took $3 + 2 = 5$ hours. What did I do wrong, and what is the correct way to calculate the average speed? | The reason is that the times taken for the two trips are different, so the average speed is not simply $\frac{v_1 + v_2}{2}$. We should go back to the definition. The average speed is always (total length) ÷ (total time). In your case, the total time can be calculated as \begin{align}
\text{time}_1 &= \frac{120 \mathrm{miles}}{40 \mathrm{mph}} \\\\
\text{time}_2 &= \frac{120 \mathrm{miles}}{60 \mathrm{mph}}
\end{align} so the total time is $120\mathrm{miles} \times \left(\frac1{40\mathrm{mph}} + \frac1{60\mathrm{mph}}\right)$. The average speed is therefore: \begin{align}
\text{average speed} &= \frac{2 \times 120\mathrm{miles}}{120\mathrm{miles} \times \left(\frac1{40\mathrm{mph}} + \frac1{60\mathrm{mph}}\right)} \\\\
&= \frac{2 }{ \left(\frac1{40\mathrm{mph}} + \frac1{60\mathrm{mph}}\right)} \\\\
&= 48 \mathrm{mph}
\end{align} In general, when the lengths of the trips are the same, the average speed will be the harmonic mean of the respective speeds. $$ \text{average speed} = \frac2{\frac1{v_1} + \frac1{v_2}} $$ | {
"source": [
"https://physics.stackexchange.com/questions/29",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/77/"
]
} |
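The harmonic-mean result in the answer above can be checked directly; a minimal Python sketch using the numbers from the question:

```python
# Check the harmonic-mean result from the answer above.
d = 120.0            # miles each way
v1, v2 = 40.0, 60.0  # mph out and back

t1, t2 = d / v1, d / v2           # 3 h and 2 h
avg = 2 * d / (t1 + t2)           # total distance / total time
harmonic = 2 / (1 / v1 + 1 / v2)  # harmonic mean of the two speeds

print(avg, harmonic)  # both 48.0 mph, not the naive 50 mph
```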
31 | What is Einstein's theory of Special relativity , in terms a lay person can follow? | Special Relativity derives from two basic ideas: The speed of light (in a vacuum) is always c. The laws of physics are the same in all inertial reference frames (basically, points of view that aren't accelerating, that is, they obey Newton's Laws.) With these two points and a little math, various proven conclusions may be derived: Time Dilation : When something moves fast relative to something else, time for the faster moving body slows down. It's not an illusion of time slowing down, it's the real thing: individual atoms that make up the body operate slower, chemical reactions function slower, and biological processes (aging) occur slower. From the perspective of the faster moving body, its time progresses at the usual pace. Length Contraction : Objects moving fast relative to other objects shrink along the line of the direction they're moving. Relativistic Simultaneity : There's no such thing as simultaneous events: because time is attached to the observer, different people could witness 2 events happening in different order. The exception to this is "causally-related" events which are events where event A is the cause of event B. Mass-Energy : The math goes into describing the mass of bodies at rest and how that mass changes as the bodies move. As bodies speed up they get "heavier." Nothing with mass can travel faster than light (and nothing with mass can travel AT the speed of light) because any massive body would reach infinite "relative mass" at that speed. You can derive $E=mc^2$ and fission/fusion from this. This is a very quick summary of the basic points and principles. | {
"source": [
"https://physics.stackexchange.com/questions/31",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/77/"
]
} |
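The time-dilation and length-contraction effects listed in the answer above are both governed by the Lorentz factor $\gamma = 1/\sqrt{1 - v^2/c^2}$ (standard special relativity, not written out in the answer). A small Python sketch, with the sample speeds chosen by me:

```python
import math

def gamma(v_over_c: float) -> float:
    """Lorentz factor for a speed expressed as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - v_over_c ** 2)

for beta in (0.1, 0.5, 0.9, 0.99):
    g = gamma(beta)
    # moving clocks run slow by a factor 1/gamma; moving rods contract by 1/gamma
    print(f"v = {beta:.2f}c: gamma = {g:.3f}, time/length factor = {1/g:.3f}")
```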
35 | If I separate two magnets whose opposite poles are facing, I am adding energy. If I let go of the magnets, then presumably the energy that I added is used to move the magnets together again. However, if I start with two separated magnets (with like poles facing), as I move them together, they repel each other. They must be using energy to counteract the force that I'm applying. Where does this energy come from? | The magnetic force in this case (a set of magnets in space, no relativity involved) is conservative, which means it has a potential energy -- each positional configuration of charges (or dipoles in this case) has its fixed energy, which does not depend on the history or momenta of the charges. So, the work you put in or get out when displacing them is just exchanged with the potential energy of the field, which means no energy is created or destroyed, just stored. | {
"source": [
"https://physics.stackexchange.com/questions/35",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/75/"
]
} |
78 | I can understand why 2 protons will repel each other, because they're both positive. But there isn't a neutral charge is there? So why do neutrons repel? (Do they, or have I been misinformed?) The reason why I'm asking this is because I've just been learning about neutron stars and how the neutrons are forced (as in, they repel) together according to my teacher (he's a great teacher btw, though what I just said doesn't make it seem so). So I wondered, why do they have to be forced together by gravity and not just pushed? | Neutrons have spin 1/2 and therefore obey the Pauli exclusion principle, meaning two neutrons cannot occupy the same quantum state. When two neutrons' wavefunctions overlap, they feel a strong repulsive force. See http://en.wikipedia.org/wiki/Exchange_interaction . | {
"source": [
"https://physics.stackexchange.com/questions/78",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/4/"
]
} |
93 | Coulomb's Law states that the fall-off of the strength of the electrostatic force is inversely proportional to the distance squared of the charges. Gauss's law implies that the total flux through a surface completely enclosing a charge is proportional to the total amount of charge. If we imagine a two-dimensional world of people who knew Gauss's law, they would imagine a surface completely enclosing a charge as a flat circle around the charge. Integrating the flux, they would find that the electrostatic force should be inversely proportional to the distance of the charges, if Gauss's law were true in a two-dimensional world. However, if they observed a $\frac{1}{r^2}$ fall-off, this implies a two-dimensional world is not all there is. Is this argument correct? Does the $\frac{1}{r^2}$ fall-off imply that there are only three spatial dimensions we live in? I want to make sure this is right before I tell this to my friends and they laugh at me. | Yes, absolutely. In fact, Gauss's law is generally considered to be the fundamental law, and Coulomb's law is simply a consequence of it (and of the Lorentz force law). You can actually simulate a 2D world by using a line charge instead of a point charge, and taking a cross section perpendicular to the line. In this case, you find that the force (or electric field) is proportional to 1/r, not 1/r^2, so Gauss's law is still perfectly valid. I believe the same conclusion can be made from experiments performed in graphene sheets and the like, which are even better simulations of a true 2D universe, but I don't know of a specific reference to cite for that. | {
"source": [
"https://physics.stackexchange.com/questions/93",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/71/"
]
} |
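The dimensional argument in the question and answer above can be illustrated numerically: in a 2D world, Gauss's law requires a $1/r$ field, because only then is the flux through an enclosing circle independent of its radius. A minimal Python sketch (the choice of radii is mine):

```python
import math

# In a 2D world, Gauss's law says the flux of E through any circle enclosing
# the charge is the same.  For E(r) ~ 1/r that flux is radius-independent.
def flux_2d(radius, field_exponent):
    E = radius ** (-field_exponent)     # field strength at the circle
    return E * 2 * math.pi * radius     # "surface" enclosing the charge in 2D

for r in (1.0, 2.0, 10.0):
    print(f"r = {r:4.1f}:  1/r field flux = {flux_2d(r, 1):.4f},  "
          f"1/r^2 field flux = {flux_2d(r, 2):.4f}")
# A 1/r field gives constant flux (Gauss's law holds in 2D);
# a 1/r^2 field would give flux that falls off as 1/r.
```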
102 | For every force there is an equal force in the opposite direction on another body, correct? So when the Sun's gravity acts on Earth where is the opposite and equal force? I also have the same question for centripetal force in a planet's orbit. | As many others said, the Sun feels the same force towards Earth as the Earth feels towards the Sun. That is your equal and opposite force. In practice, though, the "visible" effects of a force can be deduced through Newton's second law, i.e. ${\bf F} = m{\bf a}$. In other words, you need to divide the force by the mass of the body to determine the net effect on the body itself. So: ${\bf F_s} = {\bf F_e}$ ${\bf F_s} = m_s {\bf a_s}$ ${\bf F_e} = m_e {\bf a_e}$ therefore, $m_s {\bf a_s} = m_e {\bf a_e}$ and ${\bf a_s} = {\bf a_e} \frac{m_e}{m_s}$ Now, the last factor is $3 \cdot 10^{-6}$! This means that the force that the Earth exerts on the Sun is basically doing nothing to the Sun. Another way of seeing this: $F = \frac{G m_s m_e}{r^2}$ $a_s = \frac{F}{m_s} = \frac{G m_e}{r^2}$ $a_e = \frac{F}{m_e} = \frac{G m_s}{r^2}$ $\frac{a_s}{a_e} = \frac{m_e}{m_s} = 3 \cdot 10^{-6}$ Again, the same big difference in effect. Regarding the centripetal force, it is still the same force. Gravity provides a centripetal force, which is what keeps Earth in orbit. Note: It's worth pointing out that the mass that acts as the charge for gravity, known as gravitational mass , is not, a priori, the same mass that appears in Newton's law, known as inertial mass . On the other hand it is a fact of nature that they have the same value, and as such we may use a single symbol $m$, instead of two, $m_i$ and $m_g$. This is an underlying, unspoken assumption in the derivation above. This is known as the weak equivalence principle . | {
"source": [
"https://physics.stackexchange.com/questions/102",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/4/"
]
} |
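The $3 \cdot 10^{-6}$ ratio quoted in the answer above follows directly from the two masses; a minimal Python check, using approximate textbook values for the masses of the Sun and the Earth:

```python
# Ratio of the Sun's acceleration to the Earth's, using the equal-and-opposite force.
m_sun = 1.989e30    # kg (approximate)
m_earth = 5.972e24  # kg (approximate)

ratio = m_earth / m_sun   # a_sun / a_earth
print(f"a_sun / a_earth = {ratio:.1e}")  # ~3e-6, as quoted in the answer
```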
122 | I learned today in class that photons and light are quantized. I also remember that electric charge is quantized as well. I was thinking about these implications, and I was wondering if (rest) mass was similarly quantized. That is, if we describe certain finite irreducible masses $x$, $y$, $z$, etc., then all masses are integer multiples of these irreducible masses. Or do masses exist along a continuum, as charge and light were thought to exist on before the discovery of photons and electrons? (I'm only referring to invariant/rest mass.) | There are a couple different meanings of the word that you should be aware of: In popular usage, "quantized" means that something only ever occurs in integer multiples of a certain unit, or a sum of integer multiples of a few units, usually because you have an integer number of objects each of which carries that unit. This is the sense in which charge is quantized. In technical usage, "quantized" means being limited to certain discrete values, namely the eigenvalues of an operator, although those discrete values will not necessarily be multiples of a certain unit. As far as we know, mass is not quantized in either of these ways... mostly. But let's leave that aside for a moment. For fundamental particles (those which are not known to be composite), we have tabulated the masses, and they are clearly not multiples of a single unit. So that rules out the first meaning of quantization. As for the second, there is no known operator whose eigenvalues correspond to (or even are proportional to) the masses of the fundamental particles. Many physicists suspect that such an operator exists and that we will find it someday, but so far there is no evidence for it, and in fact there is basically no concrete evidence that the masses of the fundamental particles have any particular significance. This is why I would not say that mass is quantized. When you consider composite particles, though, things get a little trickier. Much of their mass comes from the kinetic energy and binding energy of the constituents, not from the masses of the constituents themselves. For instance, only a small part of the mass of the proton comes from the masses of its quarks. Most of the proton's mass is actually the kinetic energy of the quarks and gluons. These particles are moving around inside the proton even when the proton itself is at rest, so their energy of motion contributes to the rest mass of the proton. There is also a contribution from the potential energy that all the constituents of the proton have by virtue of being subject to the strong force. This contribution, the binding energy, is actually negative. When you put together the mass energy of the quarks, the kinetic energy, and the binding energy, you get the total energy of what we call a "bound system of $\text{uud}$ quarks." Why not just call it a proton? Well, there is actually a particle exactly like the proton but with a higher mass, the delta baryon $\Delta^+$. Technically, a $\text{uud}$ bound system could be either a proton or a delta baryon. But we've observed that when you put these three quarks together, you only ever get $\mathrm{p}^+$ (with a mass of $938\ \mathrm{MeV/c^2}$) or $\Delta^+$ (with a mass of $1232\ \mathrm{MeV/c^2}$). You can't get any old mass you want. This is a very strong indication that the mass of a $\text{uud}$ bound state is quantized in the second sense. 
Now, the calculations involved are very complicated, so I'm not sure if the operator which produces these two masses as eigenvalues can be derived in detail, but there's basically no doubt that it does exist. You can take other combinations of quarks, or even include leptons and other particles, and do the same thing with them - that is, given any particular combination of fundamental particles, you can make some number of composite particles a.k.a. bound states, and the masses of those particles will be quantized given what you're starting from . But in general, if you start without assuming the masses of the fundamental particles, we don't know that mass is quantized at all. | {
"source": [
"https://physics.stackexchange.com/questions/122",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/71/"
]
} |
185 | As I understand it, the Higgs boson can be discovered by the LHC because the collisions are done at an energy that is high enough to produce it and because the luminosity will be high enough also. But what is needed to claim a real "discovery"? I guess there is not one event saying "hey, that's a Higgs boson" ... I also guess that this was the same kind of situation when the top quark was discovered. How does it work? Edit: There is a nice introduction to the subject on this page of the CMS experiment , and the various ways to detect it, for example through the following process. | NOTE: I recommend reading Noldorin's answer first, for useful background information, and Matt's answer afterward if you want additional detail. Noldorin is right that there isn't a single event that you can look at and identify a Higgs boson. In fact, unless the theories are drastically wrong, the Higgs particle is unstable and it has an exceedingly short lifetime - so short that it won't even make it out of the empty space inside the detector! Even at the speed of light, it can only travel a microscopic distance before it decays into other particles. (If I can find some numeric predictions I'll edit that information in.) So we won't be able to detect a Higgs boson directly . What scientists will be looking for are particular patterns of known particles that are signatures of Higgs decay. For example, the standard model predicts that a Higgs boson could decay into two Z bosons, which in turn decay into a muon and antimuon each. So if physicists see that a particular collision produces two muons and two antimuons, among other particles, there's a chance that somewhere in the mess of particles produced in that collision, there was a Higgs boson. This is just one example, of course; there are many other sets of particles that the Higgs could decay into, and the large detectors at the LHC are designed to look for all of them. Of course, Higgs decay is not the only thing that could produce two muon-antimuon pairs, and the same is true for other possible decay products. So just seeing the expected decay products is not a sure sign of a Higgs detection. The real evidence is going to come from the results of many collisions (billions or trillions), accumulated over time. For each possible set of decay products, you can plot the fraction of collisions in which those decay products are produced (or rather, the scattering cross section, a related quantity) against the total energy of the particles coming into the collision. If the Higgs is real, you'll see a spike, called a resonance , in the graph at the energy corresponding to the mass of the Higgs particle. It'll look something like this plot, which was produced for the Z boson (which has a mass of only 91 GeV): The image is from http://blogs.uslhc.us/the-z-boson-and-resonances , which is actually a pretty good read. Anyway, to sum up: the main signature of the Higgs boson, like other unstable particles, will be this resonance peak that appears in a graph produced by aggregating data from many billions or trillions of collisions. Hopefully this makes it a bit clearer why there's going to be a lot of detailed analysis involved before we get any clear detection or non-detection of the Higgs particle. | {
"source": [
"https://physics.stackexchange.com/questions/185",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/82/"
]
} |
197 | I hope this is the right word to use. To me, these forces seem kind of fanciful (except for General Relativity and Gravity, which have a geometric interpretation). For example, how do two charged particles know that they are to move apart from each other? Do they communicate with each other somehow through some means? I've heard some people tell me that they bounce together messenger photons. So does one electron receive a messenger photon, go, "Oh hey, I should move in the direction opposite of where this came from, due to the data in it", and then move? Aren't photons also associated with energy, as well? Does this type of mediation imply that electrons give off energy in order to exert force on other electrons? Every electron is repelled by every other electron in the universe, right? How does it know where to send its force mediators? Does it just know what direction to point it in? Does it simply send it in all directions in a continuum? Does that mean it's always giving off photons/energy? I'm just not sure how to view "how" it is that electrons know they are to move away from each other. These questions have always bugged me when studying forces. I'm sure the Standard Model has something to shed some light on it. | Brief answer: Read only the bold part (and ignore grammar then). The answer you already mentioned lies in Quantum Field Theory (QFT). But to fully understand it, you must give up thinking of a particle as a point-like thing that is well-localized . There is one Quantum Field per sort of particle , e.g. the electron field for all electrons, and the photon field for all photons. (The fact that there is a single field for all electrons also results in the Pauli exclusion principle .) What you consider a particle is basically just a local peak in the respective particle field , but one cannot even say "This peak corresponds to electron A, this one to B". Now QFT, more specifically Quantum Electrodynamics ( QED ), describes the local interaction between the electron field and the photon field . But since the fields have their own dynamics, a local change induced in the photon field by the electron field will propagate at the speed of light (flat space assumed) and interact with the electron field in another place, thus creating the impression "Electron A emitted a photon that told electron B to interact electromagnetically". It's similar for the other interactions: there's a gluon field for the strong interaction ( Quantum Chromodynamics ), and for the electroweak interaction there's kind of a combination of the photon field and the weak-interaction-bosons. | {
"source": [
"https://physics.stackexchange.com/questions/197",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/71/"
]
} |
198 | I have a particle system of seven protons and seven (or sometimes eight) neutrons (each formed by their appropriate quarks, etc.) bound together in a state that can be macroscopically described as a nucleus. If relevant, there are also about seven electrons that are bound to this arrangement. These particle systems are usually found in pairs, bound to eachother. Macroscopically, this can be modeled as the elemental Nitrogen ($N_2$), and in other disciplines (such as chemistry), it is treated as a basic unit. We know that at a certain level of thermal energy, this system of elementary particles exist inert and packed together in what can be macroscopically described as a "liquid". We know that this is this temperature is about 77.36 Kelvin (measured experimentally) at the most. Any higher and they start repelling each other and behave as a macroscopic gas. Is there any way, from simply analyzing the particles that make up this system (the quarks making up fourteen protons and 14-16 neutrons, the electrons) and their interactions due to the current model of particles (is this The Standard Model?), to find this temperature 77.36 Kelvin? Can we "derive" 77.36 K from only knowing the particles and their interactions with each other, in the strong nuclear force and electromagnetic force and weak nuclear force? If so, what is this derivation? | Brief answer: Read only the bold part (and ignore grammar then). The answer you already mentioned lies in Quantum Field Theory (QFT). But to fully understand it, you must give up a particle as a point-like thing that is well-localized . There is one Quantum Field per sort of particle , e.g. the electron field for all electrons, and the photon field for all photons. (The fact that there is a single field for all electrons also results in the Pauli exclusion principle .) What you consider a particle is basically just a local peak in the respective particle field , but one cannot even say "This peak corresponds to electron A, this one to B". Now QFT, more specifically Quantum Electrodynamcis ( QED ), describes the local interaction between the electron field and the photon field . But since the fields have a dynamic, a local change induced in the photon field by the electron field will propagate with the speed of light (flat space assumed) and interact with the electron field in another place, thus creating the impression "Electron A emitted a photon that told electron B to interact electromagnetically". It's similar for the other interactions, there's a gluon field for the strong interaction ( Quantum Chromodynamics ), and for the electroweak interaction there's kind of a combination of the photon field and the weak-interaction-bosons. | {
"source": [
"https://physics.stackexchange.com/questions/198",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/71/"
]
} |
202 | I just came from a class on Fourier Transformations as applied to signal processing and sound. It all seems pretty abstract to me, so I was wondering if there were any physical systems that would behave like a Fourier transformation. That is, if given a wave, a purely physical process that would "return" the Fourier transform in some meaningful way. Like, you gave it a sound wave and you would see, "Oh, there are a lot of components of frequency 1kHz...a few of frequency 10kHz...some of 500Hz..." I've seen things happening where, if you put sand on a speaker, the sand would start to form patterns on the speakers that are related to the dominant wavelengths/fundamental frequencies of the sound. Is this some sort of natural, physical fourier transform? | Your ear is an effective Fourier transformer. An ear contains many small hair cells. The hair cells differ in length, tension, and thickness, and therefore respond to different frequencies. Different hair cells are mechanically linked to ion channels in different neurons, so different neurons in the brain get activated depending on the Fourier transform of the sound you're hearing. A piano is a Fourier analyzer for a similar reason. A prism or diffraction grating would be a Fourier analyzer for light. It spreads out light of different frequencies, allowing us to analyze how much of each frequency is present in a given source. | {
"source": [
"https://physics.stackexchange.com/questions/202",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/71/"
]
} |
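The idea of "seeing how much of each frequency is present" can be made concrete with a discrete Fourier transform of a synthetic signal; a minimal Python/NumPy sketch (the particular frequencies and amplitudes are my own illustrative choices, echoing the 500 Hz / 1 kHz examples in the question):

```python
import numpy as np

fs = 8000                          # sample rate in Hz
t = np.arange(0, 1.0, 1 / fs)      # one second of samples

# Synthetic "sound": 500 Hz and 1 kHz tones plus a weaker 2 kHz component.
signal = (1.0 * np.sin(2 * np.pi * 500 * t)
          + 0.8 * np.sin(2 * np.pi * 1000 * t)
          + 0.3 * np.sin(2 * np.pi * 2000 * t))

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

# Report the dominant frequency components, like the ear or a prism does physically.
peaks = freqs[np.argsort(spectrum)[-3:]]
print(sorted(peaks))   # ~[500.0, 1000.0, 2000.0]
```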
252 | (This is a simple question, with likely a rather involved answer.) What are the primary obstacles to solving the many-body problem in quantum mechanics? Specifically, if we have a Hamiltonian for a number of interdependent particles, why is solving for the time-independent wavefunction so hard? Is the problem essentially just mathematical, or are there physical issues too? The many-body problem of Newtonian mechanics (for example gravitational bodies) seems to be very difficult, with no solution for the general $n \ge 3$ problems. Is the quantum mechanical case easier or more difficult, or both in some respects? In relation to this, what sort of approximations/approaches are typically used to solve a system composed of many bodies in arbitrary states? (We do of course have perturbation theory which is sometimes useful, though not in the case of high coupling/interaction. Density functional theory, for example, applies well to solids, but what about arbitrary systems?) Finally, is it theoretically and/or practically impossible to simulate high-order phenomena such as chemical reactions and biological functions precisely using Schrodinger's quantum mechanics, or even QFT (quantum field theory)? (Note: this question is largely intended for seeding, though I'm curious about answers beyond what I already know too!) | First let me start by saying that the $N$-body problem in classical mechanics is not computationally difficult to approximate a solution to. It is simply that in general there is not a closed-form analytic solution, which is why we must rely on numerics. For quantum mechanics, however, the problem is much harder. This is because in quantum mechanics, the state space required to represent the system must be able to represent all possible superpositions of particles. While the number of orthogonal states is exponential in the size of the system, each has an associated phase and amplitude, which even with the most coarse-grained discretization will lead to a double exponential in the number of possible states required to represent it. Thus in quantum systems you need $O(2^{2^n})$ variables to reasonably approximate any possible state of the system, versus only $O(2^n)$ required to represent an analogous classical system. Since we can represent $2^m$ states with $m$ bits, to represent the classical state space we need only $O(n)$ bits, versus $O(2^n)$ bits required to directly represent the quantum system. This is why it is believed to be impossible to simulate a quantum computer in polynomial time, but Newtonian physics can be simulated in polynomial time. Calculating ground states is even harder than simulating the systems. Indeed, in general finding the ground state of a classical Hamiltonian is NP-complete , while finding the ground state of a quantum Hamiltonian is QMA-complete . (On the other hand, ground states are to some extent less relevant because the systems for which it is computationally hard to calculate the ground state of (at least on a QC) don't cool efficiently either.) | {
"source": [
"https://physics.stackexchange.com/questions/252",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/13/"
]
} |
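The exponential-state-space point in the answer above can be illustrated by counting the complex amplitudes needed to store a general state of $n$ two-level systems versus the roughly $n$ bits needed for a classical configuration; a minimal Python sketch (the 16-bytes-per-amplitude figure is my own assumption):

```python
# Memory needed to store a general state of n two-level systems (qubits):
# 2**n complex amplitudes, here taken as 16 bytes each (an assumption).
BYTES_PER_AMPLITUDE = 16

for n in (10, 30, 50):
    amplitudes = 2 ** n
    gib = amplitudes * BYTES_PER_AMPLITUDE / 2 ** 30
    print(f"n = {n:2d}: 2^{n} amplitudes ~ {gib:.3g} GiB, "
          f"vs ~{n} bits classically")
```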
270 | The Straight Dope ran an explanation of why nomads often wear black clothing - it absorbs heat better from the body. On the other hand, white clothing reflects sunlight better. Is it possible to get the best of both worlds and wear clothing that is black on the inside and white on the outside? | The color of a surface doesn't reliably indicate the emissivity at non-visible wavelengths. The color in the visible spectrum is more of a side effect than anything. Most thermal radiation around body temperature or room temperature happens in the infrared region, not the visible, and that's not reliably indicated by visible color: ( Source: JPL ) The visibly transparent glasses are opaque to the body's infrared emissions, while the visibly opaque trashbag is transparent to infrared. So one property has no relation to the other. The emissions of the sun occur mostly in the visible region, which is why white clothing reflects solar energy and stays cool while black clothing absorbs solar energy and gets hot: ( Source: User:Dragons flight ) But your body's thermal radiation is in the infrared, so this rule doesn't apply to the inside of clothing (unless your body is hot enough to radiate visible light, but then you have bigger problems ). Your basic idea would work, though, if you found a material that reflects visible light while transmitting infrared light (but that material would probably have the same properties on the inside and outside, and thus be visibly white on the inside, too). For example, white paint is quoted as having an absorptivity of 0.16, while having an emissivity of 0.93. This is because the absorptivity is averaged with weighting for the solar spectrum, while the emissivity is weighted for the emission of the paint itself at normal ambient temperatures. ... The white paint will serve as a very good insulator against solar radiation, because it is very reflective of the solar radiation, and although it therefore emits poorly in the solar band, its temperature will be around room temperature, and it will emit whatever radiation it has absorbed in the infrared, where its emission coefficient is high. − Kirchhoff's law of thermal radiation NASA uses such materials , which they call "selective surfaces", and are used to cool the Hubble telescope: These surfaces can be designed to reflect solar radiation, while maximizing infrared emittance, yielding a cooling effect even in sunlight. On earth cooling to -50 °C below ambient has been achieved, but in space, outside of the atmosphere, theory using ideal materials has predicted a maximum cooling to 40 K! Wikipedia's article on selective surfaces describes the opposite effect: Transmitting sunlight and blocking infrared from escaping, to capture the sun's energy. | {
"source": [
"https://physics.stackexchange.com/questions/270",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/119/"
]
} |
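The claim that body-temperature radiation sits in the infrared while sunlight peaks in the visible follows from Wien's displacement law, $\lambda_\text{peak} = b/T$ with $b \approx 2.898 \times 10^{-3}\ \mathrm{m\,K}$ (a standard constant not quoted in the answer); a minimal Python sketch:

```python
# Wien's displacement law: peak emission wavelength = b / T.
B_WIEN = 2.898e-3  # m*K

for label, T in (("Sun's surface", 5772.0), ("human body", 310.0)):
    peak_um = B_WIEN / T * 1e6
    print(f"{label} at {T:.0f} K peaks near {peak_um:.2f} micrometres")
# ~0.50 um (visible) for the Sun, ~9.3 um (infrared) for the body.
```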
290 | What aerodynamic effects actually contribute to producing the lift on an airplane? I know there's a common belief that lift comes from the Bernoulli effect, where air moving over the wings is at reduced pressure because it's forced to travel further than air flowing under the wings. But I also know that this is wrong, or at best a minor contribution to the actual lift. The thing is, none of the many sources I've seen that discredit the Bernoulli effect explain what's actually going on, so I'm left wondering. Why do airplanes actually fly? Is this something that can be explained or summarized at a level appropriate for someone who isn't trained in fluid dynamics? (Links to further reading for more detail would also be much appreciated) | A short summary of the paper mentioned in another answer and another good site . Basically planes fly because they push enough air downwards and receive an upwards lift thanks to Newton's third law. They do so in a variety of manners, but the most significant contributions are: The angle of attack of the wings, which uses drag to push the air down. This is typical during take off (think of airplanes going upwards with the nose up) and landing (flaps). This is also how planes fly upside down. The asymmetrical shape of the wings that directs the air passing over them downwards instead of straight behind. This allows planes to fly level to the ground without having a permanent angle on the wings. Explanations showing a wing profile without an angle of attack are incorrect. Airplane wings are attached at an angle so they push the air down, and the airfoil shape lets them do so efficiently and in a stable configuration . This incidence means that even when the airplane is at zero degrees, the wing is still at the 5 or 10 degree angle. -- What is the most common degree for the angle of attack in 747's, 757's, and 767's Any object with an angle of attack in a moving fluid, such as a flat plate, a building, or the deck of a bridge, will generate an aerodynamic force (called lift) perpendicular to the flow. Airfoils are more efficient lifting shapes, able to generate more lift (up to a point), and to generate lift with less drag. -- Airfoil | {
"source": [
"https://physics.stackexchange.com/questions/290",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/124/"
]
} |
296 | In high school I was taught energy was conserved. Then I learned that nuclear reactions allow energy to be converted into mass. Then I also heard that apparently energy can spontaneously appear in quantum mechanics. So, are there any other caveats with the conservation of energy? | The topic of "Energy Conservation" really depends on the particular "theory", paradigm, that you're considering — and it can vary quite a lot. A good hammer to use to hit this nail is Noether's Theorem : see, e.g., how it's applied in Classical Mechanics . The same principle can be applied to all other theories in Physics, from Thermodynamics and Statistical Mechanics all the way up to General Relativity and Quantum Field Theory (and Gauge Theories). Thus, the lesson to learn is that Energy is only conserved if there's translational time symmetry in the problem. Which brings us to General Relativity: in several interesting cases in GR, it's simply impossible to properly define a "time" direction! Technically speaking, this would imply a certain global property (called " global hyperbolicity ") which not all 4-dimensional spacetimes have. So, in general, Energy is not conserved in GR. As for quantum effects, Energy is conserved in Quantum Field Theory (which is a superset of Quantum Mechanics, so to speak): although it's true that there can be fluctuations, these are bounded by the "uncertainty principle", and do not affect the application of Noether's Theorem in QFT. So, the bottom line is that, even though energy is not conserved always, we can always understand what this non-conservation mean via Noether's Theorem. ;-) | {
"source": [
"https://physics.stackexchange.com/questions/296",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/119/"
]
} |
312 | I know classical physics, quantum mechanics, special relativity, and basic nuclear physics. I would like to get into some particle physics. Where can I find a good introduction? It might be useful to segment books by whether they require quantum field theory or not. | I would definitely recommend David Griffiths' book on particle physics . I don't have my copy with me right now, but as I recall, the book explains what the different particles of the Standard Model are, as well as the various properties of particles that are important in modern particle physics. It also introduces the basics of quantum field theory, just enough to allow you to calculate cross sections and decay rates for various reactions. Toward the end, it shows you the basic ideas behind spontaneous symmetry breaking and the Higgs mechanism, which shows you where this prediction of the Higgs boson comes from. If you want to get into more mathematical detail, another book I could recommend is Halzen and Martin . It dates back to 1984 but the physics is still basically correct. I've found that that book takes a lot more effort to work through - that is, you actually have to slow down and think about what you're reading, and work through some of the math, but as long as you put the time in, the understanding you gain is well worth it. | {
"source": [
"https://physics.stackexchange.com/questions/312",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/126/"
]
} |
335 | My question is basically what exactly is electricity? I've simply been told before that it's a flow of electrons, but this seems too basic and doesn't show that electricity is instant. What I mean is turning a switch has no delay between that and a light coming on. Is it really instantaneous? Or is it just so fast that we don't notice it? | It's just so fast you don't notice it. You won't see the effect of the travel time in something like turning on a light, because your eyes aren't fast enough to register the delay, but if you do even moderately precise experiments involving signal transmission and look at it on an oscilloscope, you will find that the travel time is easily measurable. The speed of signal propagation is close to that of light, or about a foot per nanosecond. (It's worth noting that this is not the speed of electrons moving through the wires, which is dramatically slower. The signal is a disturbance that propagates more rapidly than the drift velocity of electrons in a conductor.) | {
"source": [
"https://physics.stackexchange.com/questions/335",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/4/"
]
} |
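The "foot per nanosecond" figure in the answer above translates into delays far below anything you could notice when flipping a switch; a minimal Python sketch, assuming (my assumption) that signals in household wiring travel at roughly two-thirds of the speed of light:

```python
C = 2.998e8          # speed of light in m/s
v_signal = 0.67 * C  # rough propagation speed in household wiring (assumption)

for distance_m in (10.0, 100.0):
    delay_ns = distance_m / v_signal * 1e9
    print(f"{distance_m:5.0f} m of wire: ~{delay_ns:.0f} ns delay")
```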
347 | Fish like electric eels and torpedoes have specially designed nerve cells that allow them to discharge hundreds of volts of electricity. Now, while pure water is usually nonconductive, the dissolved salts and other stuff in both sea and fresh water allow them to be conductive. If an electric fish is able to use its electricity to stun enemies or prey, how come the fish itself is unaffected? | Suppose the current entering this parallel circuit (the prey's low-resistance body in parallel with the surrounding water) is $10\,\mathrm{A}$; then almost all of the current flows through the poor small fish's body: current through the poor small fish's body $= 10\,\mathrm{A} \times \frac{1\,\mathrm{M\Omega}}{1\,\mathrm{M\Omega}+1\,\Omega} \approx 10\,\mathrm{A}$. This is probably the large picture, but I am just guessing. Hope it's correct. | {
"source": [
"https://physics.stackexchange.com/questions/347",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
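Written out, the current-divider estimate in the answer above looks like this; the $1\ \Omega$ body versus $1\ \mathrm{M\Omega}$ water split is the answer's own guessed assumption, kept here as-is:

```python
# Current divider from the answer: prey ~1 ohm in parallel with ~1 Mohm of water.
I_total = 10.0    # A, total discharge current (answer's figure)
R_fish = 1.0      # ohm (assumed, low-resistance body)
R_water = 1.0e6   # ohm (assumed, surrounding water path)

I_fish = I_total * R_water / (R_fish + R_water)
print(f"Current through the prey: {I_fish:.5f} A of {I_total} A")  # ~10 A
```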
356 | How can I measure the mass of the Earth at home? How was the mass of the Earth first measured? | Yes we/you can. I recall seeing a famous video of a homemade version of the Cavendish torsion balance experiment from the early 1960's, made I think for the PSSC high school course. Basically, the physicist hung a torsion balance from a high ceiling by a long (>10 m?) piece of computer data tape (chosen because it would not stretch). He carefully minimized air currents. The torsion masses were two .5 kg bottles of water on a wooden bar (no magnetic interference). Mass, in the form of boxes of sand, say 20kg was piled around on the floor as static mass and then reversed in position with respect to the suspended masses. There was a clear plastic box around the balance (with a hole in its top for the suspending tape to pass through) also to minimize the effect of air currents, since the lateral force on each bottle is about G*m1*m2/r^2 = (6.7e-11)*0.5kg*20kg/(0.1m)^2 N ~ 6.7e-8 N, i.e. a lateral force on each bottle equivalent to that generated by a weight of about 7 micrograms, about that of a 1 mm^3 grain of sand. This is visible to us because the long arms of the torsion balance convert this small force into a torque on the suspending filament, and the restoring torque is itself very small. I found an Italian dubbed version of the video on Youtube. See http://www.youtube.com/watch?v=uUGpF3h3RaM&feature=related and a slightly longer version at http://www.youtube.com/watch?v=V4hWMLjfe_M&feature=related . I believe the demonstrator was Prof. Jerrold Zacharias from MIT and the PSSC staff. If anyone can point me to the original undubbed black and white film loop, I'd appreciate it. It looked really crude but qualitatively it worked. The mirror moved upon reversal of the mass positions. Yeah, experimental physics!! Calculate it out. Use your laser pointer. Glue mirrors. Calibrate. Give it as an experiment in class. Make a (music?) video. Put it on Youtube and embed it here. Social physics. I also found some other do it your self experimenters with crude equipment, experimental tips (try fishing line) and different masses.
See http://funcall.blogspot.com/2009/04/lets-do-twist.html http://www.hep.fsu.edu/~wahl/phy3802/expinfo/cavendish/cenco_grav.pdf and http://www.fourmilab.ch/gravitation/foobar/ , which uses a ladder, some cobblestones, monofilament fishing line and has videos. For the experiment in this last reference, you don't need mirrors, since you can see the balance masses move directly because their excursion is so large. See also http://www.youtube.com/watch?v=euvWU-4_B5Y For all these experiments there is no calibration of the restoring force of the twisted filament (which Cavendish did from the free torsion period of the balance), the balance beam of one appears to be styrofoam, (so I would worry about subtle charge effects), and the beam hits the support of the fixed masses so that it bounces and we do not see the harmonic angular acceleration we might expect. This last problem is apparently well known to amateur experimenters in this field.
Another exposition and video is at http://www.juliantrubin.com/bigten/cavendishg.html The best summary and historical exposition I found is at https://en.wikipedia.org/wiki/Torsion_bar_experiment . I did not realize that the experiment was originally designed by John Michell, a contemporary, whose designs and apparatus passed to Cavendish upon his death. See https://en.wikipedia.org/wiki/John_Michell . Newton had considered the deviation from vertical that a stationary pendulum would have near a terrestrial mountain in the Principia (1686). Although he considered the deviation too small to measure, it was measured 52 years later at Chimborazo, Ecuador in 1738, which was the first experiment showing that the Earth was not hollow, apparently a live hypothesis at the time. The same experiment was repeated in Scotland in 1774. See https://en.wikipedia.org/wiki/Schiehallion_experiment . Michell devised the torsion balance experiment in 1783, and started construction of a torsion balance. Cavendish did his experiment in 1797-1798. To me this is all quite inspiring. Editorial (I'll move this positive rant to meta soon) - given the obviously widely varied audience on this site, I would very much like to see more questions like this one relating to amateur or home experiments. The analysis of the data and possible sources of errors in these experiments is often subtle, and is very instructive. To have real physicists and other clever students publicly criticize some aspect of an experiment provides something that many students may never get otherwise. The social network framework will help many newcomers from different countries learn what real science is in a way that yet another dose of imperfectly understood theory never will. And it's fun too. | {
"source": [
"https://physics.stackexchange.com/questions/356",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/126/"
]
} |
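The force estimate quoted in the answer above ($G m_1 m_2 / r^2$ with a 0.5 kg bottle, a 20 kg sand box and a 10 cm separation) can be reproduced in a few lines; a minimal Python check:

```python
G = 6.674e-11        # m^3 kg^-1 s^-2
m1, m2 = 0.5, 20.0   # kg: suspended bottle and sand box (answer's values)
r = 0.1              # m, centre-to-centre separation (answer's value)
g = 9.81             # m/s^2

F = G * m1 * m2 / r ** 2
print(f"F ~ {F:.2e} N, i.e. the weight of ~{F / g * 1e9:.0f} micrograms")
```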
361 | Take the following gedankenexperiment in which two astronauts meet each other again and again in a perfectly symmetrical setting - a hyperspherical (3-manifold) universe in which the 3 dimensions are curved into the 4. dimension so that they can travel without acceleration in straight opposite directions and yet meet each other time after time. On the one hand this situation is perfectly symmetrical - even in terms of homotopy and winding number. On the other hand the Lorentz invariance should break down according to GRT , so that one frame is preferred - but which one? So the question is: Who will be older? And why? And even if there is one prefered inertial frame - the frame of the other astronaut should be identical with respect to all relevant parameters so that both get older at the same rate. Which again seems to be a violation of SRT in which the other twin seems to be getting older faster/slower... How should one find out what the preferred frame is when everything is symmetrical - even in terms of GRT? And if we are back to a situation with a preferred frame: what is the difference to the classical Galilean transform? Don't we get all the problems back that seemed to be solved by RT - e.g. the speed limit of light, because if there was a preferred frame you should be allowed to classically add velocities and therefore also get speeds bigger than c ?!? (I know SRT is only a local theory but I don't understand why the global preferred frame should not 'override' the local one). Could anyone please enlighten me (please in a not too technical way because otherwise I wouldn't understand!) EDIT Because there are still things that are unclear to me I posted a follow-up question: Here | Your question is addressed in the following paper: The twin paradox in compact spaces Authors: John D. Barrow, Janna Levin Phys. Rev. A 63 no. 4, (2001) 044104 arXiv:gr-qc/0101014 Abstract: Twins travelling at constant relative velocity will each see the other's time dilate leading to the apparent paradox that each twin believes the other ages more slowly. In a finite space, the twins can both be on inertial, periodic orbits so that they have the opportunity to compare their ages when their paths cross. As we show, they will agree on their respective ages and avoid the paradox. The resolution relies on the selection of a preferred frame singled out by the topology of the space. | {
"source": [
"https://physics.stackexchange.com/questions/361",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/171/"
]
} |
363 | What are some good books for learning general relativity? | I can only recommend textbooks because that's what I've used, but here are some suggestions: Gravity: An Introduction To General Relativity by James Hartle is reasonably good as an introduction, although in order to make the content accessible, he does skip over a lot of mathematical detail. For your purposes, you might consider reading the first few chapters just to get the "big picture" if you find other books to be a bit too much at first. A First Course in General Relativity by Bernard Schutz is one that I've heard similar things about, but I haven't read it myself. Spacetime and Geometry: An Introduction to General Relativity by Sean Carroll is one that I've used a bit, and which goes into a slightly higher level of mathematical detail than Hartle. It introduces the basics of differential geometry and uses them to discuss the formulation of tensors, connections, and the metric (and then of course it goes on into the theory itself and applications). It's based on these notes which are available for free. General Relativity by Robert M. Wald is a classic, though I'm a little embarrassed to admit that I haven't read much of it. From what I know, though, there's certainly no shortage of mathematical detail, and it derives/explains certain principles in different ways from other books, so it can either be a good reference on its own (if you're up for the detail) or a good companion to whatever else you're reading. However it was published back in 1984 and thus doesn't cover a lot of recent developments, e.g. the accelerating expansion of the universe, cosmic censorship, various results in semiclassical gravity and numerical relativity, and so on. Gravitation by Charles Misner, Kip Thorne, and John Wheeler , is pretty much the authoritative reference on general relativity (to the extent that one exists). It discusses many aspects and applications of the theory in far more mathematical and logical detail than any other book I've seen. (Consequently, it's very thick.) I would recommend having a copy of this around as a reference to go to about specific topics, when you have questions about the explanations in other books, but it's not the kind of thing you'd sit down and read large chunks of at once. It's also worth noting that this dates back to 1973, so it's out of date in the same ways as Wald's book (and more). Gravitation and Cosmology: Principles and Applications of the General Theory of Relativity by Steven Weinberg is another one that I've read a bit of. Honestly I find it a bit hard to follow - just like some of Weinberg's other books, actually - since he gets into such detailed explanations, and it's easy to get bogged down in trying to understand the details and forget about the main point of the argument. Still, this might be another one to go to if you're wondering about the details omitted by other books. This is not as comprehensive as the Misner/Thorne/Wheeler book, though. A Relativist's Toolkit: The Mathematics of Black-Hole Mechanics by Eric Poisson is a bit beyond the purely introductory level, but it does provide practical guidance on doing certain calculations which is missing from a lot of other books. | {
"source": [
"https://physics.stackexchange.com/questions/363",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/126/"
]
} |
387 | It is well known that quantum mechanics and (general) relativity do not fit well. I am wondering whether it is possible to make a list of contradictions or problems between them? E.g. relativity theory uses a space-time continuum , while quantum theory uses discrete states. I am not merely looking for a solution or rebuttal of such opposites, more for a survey of the field out of interest. | There are zero contradictions between quantum mechanics and special relativity; quantum field theory is the framework that unifies them. General relativity also works perfectly well as a low-energy effective quantum field theory. For questions like the low-energy scattering of photons and gravitons, for instance, the Standard Model coupled to general relativity is a perfectly good theory. It only breaks down when you ask questions involving invariants of order the Planck scale, where it fails to be predictive; this is the problem of "nonrenormalizability." Nonrenormalizability itself is no big deal; the Fermi theory of weak interactions was nonrenormalizable, but now we know how to complete it into a quantum theory involving W and Z bosons that is consistent at higher energies. So nonrenormalizability doesn't necessarily point to a contradiction in the theory; it merely means the theory is incomplete. Gravity is more subtle, though: the real problem is not so much nonrenormalizability as high-energy behavior inconsistent with local quantum field theory. In quantum mechanics, if you want to probe physics at short distances, you can scatter particles at high energies. (You can think of this as being due to Heisenberg's uncertainty principle, if you like, or just about properties of Fourier transforms where making localized wave packets requires the use of high frequencies.) By doing ever-higher-energy scattering experiments, you learn about physics at ever-shorter-length scales. (This is why we build the LHC to study physics at the attometer length scale.) With gravity, this high-energy/short-distance correspondence breaks down. If you could collide two particles with center-of-mass energy much larger than the Planck scale, then when they collide their wave packets would contain more than the Planck energy localized in a Planck-length-sized region. This creates a black hole. If you scatter them at even higher energy, you would make an even bigger black hole, because the Schwarzschild radius grows with mass. So the harder you try to study shorter distances, the worse off you are: you make black holes that are bigger and bigger and swallow up ever-larger distances. No matter what completes general relativity to solve the renormalizability problem, the physics of large black holes will be dominated by the Einstein action, so we can make this statement even without knowing the full details of quantum gravity. This tells us that quantum gravity, at very high energies, is not a quantum field theory in the traditional sense. It's a stranger theory, which probably involves a subtle sort of nonlocality that is relevant for situations like black hole horizons. None of this is really a contradiction between general relativity and quantum mechanics. For instance, string theory is a quantum mechanical theory that includes general relativity as a low-energy limit. What it does mean is that quantum field theory, the framework we use to understand all non-gravitational forces, is not sufficient for understanding gravity. Black holes lead to subtle issues that are still not fully understood. | {
"source": [
"https://physics.stackexchange.com/questions/387",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/139/"
]
} |
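One way to attach a number to "invariants of order the Planck scale" in the answer above is the standard order-of-magnitude estimate comparing the quantum size of the colliding wave packets with the Schwarzschild radius of the energy they contain (the symbols here are the usual ones, not notation taken from the answer): $$ \Delta x \sim \frac{\hbar c}{E}, \qquad r_s \sim \frac{2GE}{c^4}, \qquad r_s \gtrsim \Delta x \;\Rightarrow\; E \gtrsim \sqrt{\frac{\hbar c^5}{G}} = E_{\text{Planck}} \approx 1.2\times10^{19}\ \text{GeV}. $$ Beyond this energy the collision makes a black hole whose size grows with energy, which is the breakdown of the short-distance/high-energy correspondence described above.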
391 | Some sources describe antimatter as just like normal matter, but "going backwards in time". What does that really mean? Is that a good analogy in general, and can it be made mathematically precise? Physically, how could something move backwards in time? | To the best of my knowledge, most physicists don't believe that antimatter is actually matter moving backwards in time. It's not even entirely clear what it would really mean to move backwards in time, from the popular viewpoint. If I'm remembering correctly, this idea all comes from a story that probably originated with Richard Feynman. At the time, one of the big puzzles of physics was why all instances of a particular elementary particle (all electrons, for example) are apparently identical. Feynman had a very hand-wavy idea that all electrons could in fact be the same electron, just bouncing back and forth between the beginning of time and the end. As far as I know, that idea never developed into anything mathematically grounded, but it did inspire Feynman and others to calculate what the properties of an electron moving backwards in time would be, in a certain precise sense that emerges from quantum field theory. What they came up with was a particle that matched the known properties of the positron. Just to give you a rough idea of what it means for a particle to "move backwards in time" in the technical sense: in quantum field theory, particles carry with them amounts of various conserved quantities as they move. These quantities may include energy, momentum, electric charge, "flavor," and others. As the particles move, these conserved quantities produce "currents," which have a direction based on the motion and sign of the conserved quantity. If you apply the time reversal operator (which is a purely mathematical concept, not something that actually reverses time), you reverse the direction of the current flow, which is equivalent to reversing the sign of the conserved quantity, thus (roughly speaking) turning the particle into its antiparticle. For example, consider electric current: it arises from the movement of electric charge, and the direction of the current is a product of the direction of motion of the charge and the sign of the charge. $$\vec{I} = q\vec{v}$$ Positive charge moving left ($+q\times -v$) is equivalent to negative charge moving right ($-q\times +v$). If you have a current of electrons moving to the right, and you apply the time reversal operator, it converts the rightward velocity to leftward velocity ($-q\times -v$). But you would get the exact same result by instead converting the electrons into positrons and letting them continue to move to the right ($+q\times +v$); either way, you wind up with the net positive charge flow moving to the right. By the way, optional reading if you're interested: there is a very basic (though hard to prove) theorem in quantum field theory, the TCP theorem, that says that if you apply the three operations of time reversal, charge conjugation (switch particles and antiparticles), and parity inversion (mirroring space), the result should be exactly equivalent to what you started with. We know from experimental data that, under certain exotic circumstances, the combination of charge conjugation and parity inversion does not leave all physical processes unchanged, which means that the same must be true of time reversal: physics is not time-reversal invariant . Of course, since we can't actually reverse time, we can't test in exactly what manner this is true. | {
"source": [
"https://physics.stackexchange.com/questions/391",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/139/"
]
} |
401 | Undoubtedly, people use a variety of programs to draw diagrams for physics, but I am not familiar with many of them. I usually hand-draw things in GIMP, which is powerful in some regards, but it is time-consuming to do things like draw circles or arrows because I make them from more primitive tools. It is also difficult to be precise. I know some people use LaTeX, but I am not quite sure how versatile or easy it is. The only other tools I know are Microsoft Paint and the tools built into Microsoft Office. So, which tools are commonly used by physicists? What are their good and bad points (features, ease of use, portability, etc.)? I am looking for a tool with high flexibility and a minimal learning curve/development time. While I would like to hand-draw and drag-and-drop pre-made shapes, I also want to specify the exact locations of curves and shapes with equations when I need better precision. Moreover, minimal programming functionality would be a nice additional feature (i.e. the ability to run through a loop that draws a series of lines with a varying parameter). Please recommend a few pieces of software if they are good for different situations. | I've had good experiences with Inkscape . It has a GUI interface, but allows you to enter coordinates directly if you want, and it's scriptable. There is a plug-in that allows you to enter LaTeX directly (for labels and such). The downside is that it is very much still in development, so sometimes you find that a feature you want is not completely implemented yet. As an example, here is a poster I made last week, entirely within Inkscape: Inkscape now also has the " JessyInk " plug-in which allows you to use it to make presentations ( à la Powerpoint). The presentation can be viewed in a web browser as SVG, or exported to PDF. If you have a Mac and don't mind spending some money ($100), I've heard good things about OmniGraffle . | {
"source": [
"https://physics.stackexchange.com/questions/401",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/74/"
]
} |
414 | As a graduate mathematics student, my interest lies in number theory. I am curious to know if number theory has any connections or applications to physics. I have never even heard of any applications of number theory to physics. I have heard of applications of linear algebra and analysis to many branches of physics, but not number theory. Looking forward to receiving interesting answers! | I'm not sure I'll be able to post all the links I'd like to (not enough 'reputation points' yet), but I'll try to point to the major refs I know. Matilde Marcolli has a nice paper entitled " Number Theory in Physics " explaining the several places in Physics where Number Theory shows up. [Tangentially, there's a paper by Christopher Deninger entitled " Some analogies between number theory and dynamical systems on foliated spaces " that may open some windows in this theme: after all, Local Systems are at the basis of much of modern Physics (bundle formulations, etc).] There's a website called " Number Theory and Physics Archive " that contains a vast collection of links to works in this interface. Sir Michael Atiyah just gave a talk (last week) at the Simons Center Inaugural Conference, talking about the recent interplay between Physics and Math. And he capped his talk speculating about the connection between Quantum Gravity and the Riemann Hypothesis. He was supposed to give a talk at the IAS on this last topic, but it was canceled. To finish it off, let me bring the Langlands Duality to the table: it's related to Modular Forms and, as such, Number Theory. (Cavalier version: Think of the QFT Path Integral as having a Möbius symmetry with respect to the coupling constants in the Lagrangian.) With that out of the way, I think the better angle from which to see the connection between Number Theory and Physics is to think about the physics problem in a different way: think of the critical points in the Potential and what they mean in Phase Space (Hamiltonian and/or Geodesic flow: Jacobi converted one into the other; think of Jacobi fields in Differential Geometry), think about how this plays out in QFT, think about Moduli Spaces and their connection to the above. This is sort of how I view this framework... ;-) | {
"source": [
"https://physics.stackexchange.com/questions/414",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/159/"
]
} |
437 | The Schrödinger equation describes the quantum mechanics of a single massive non-relativistic particle. The Dirac equation governs a single massive relativistic spin-½ particle. The photon is a massless, relativistic spin-1 particle. What is the equivalent equation giving the quantum mechanics of a single photon? | There is no quantum mechanics of a photon, only a quantum field theory of electromagnetic radiation. The reason is that photons are never non-relativistic and they can be freely emitted and absorbed, hence there is no photon-number conservation. Still, there exists a line of research in which people try to reinterpret certain quantities of the electromagnetic field in terms of the photon wave function, see for example this paper . | {
"source": [
"https://physics.stackexchange.com/questions/437",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/29/"
]
} |
466 | So light travels slower in glass (for example) than in a vacuum. What causes light to slow down? Or: How does it slow down? If light passes through the medium, is it not essentially traveling in the "vacuum" between the atoms? | The easiest way to get the exact behavior is from thinking about light as a classical wave interacting with the atoms in the solid material. As long as you're far away from any of the resonant frequencies of the relevant atoms, this picture isn't too bad. You can think of each of the atoms as being like a little dipole, consisting of some positive and some negative charge that is driven back and forth by the off-resonant light field. Being an assemblage of charges that are accelerating due to the driving field, these dipoles will radiate, producing waves at the same frequency as the driving field, but slightly out of phase with it (because a dipole being driven at a frequency other than its resonance frequency will be slightly out of phase with the driving field). The total light field in the material will be the sum of the driving light field and the field produced by the oscillating dipoles. If you go through a little bit of math, you find that this gives you a beam in the same direction as the original beam-- the waves going out to the sides will mostly interfere destructively with each other-- with the same frequency but with a slight delay compared to the driving field. This delay registers as a slowing of the speed of the wave passing through the medium. The exact amount of the delay depends on the particulars of the material, such as the exact resonant frequencies of the atoms in question. As long as you're not too close to one of the resonant frequencies, this gives you a really good approximation of the effect (and "too close" here is a pretty narrow range). It works well enough that most people who deal with this stuff stay with this kind of picture, rather than talking in terms of photons. The basic idea of treating the atoms like little dipoles is a variant of "Huygens's Principle," by the way, which is a general technique for thinking about how waves behave. | {
"source": [
"https://physics.stackexchange.com/questions/466",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/207/"
]
} |
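The "little bit of math" in the answer above is usually done with the Lorentz oscillator model; a sketch, where $\omega_0$ is the dipole resonance frequency, $\gamma$ a damping rate, and $N$ the number density of dipoles (these symbols and the model itself are the standard textbook treatment, not notation taken from the answer): $$ m\left(\ddot x + \gamma \dot x + \omega_0^2 x\right) = qE_0 e^{-i\omega t} \;\Rightarrow\; x(t) = \frac{qE_0/m}{\omega_0^2 - \omega^2 - i\gamma\omega}\, e^{-i\omega t}, \qquad n^2(\omega) \approx 1 + \frac{Nq^2/(\varepsilon_0 m)}{\omega_0^2 - \omega^2 - i\gamma\omega}. $$ The complex denominator encodes the phase lag of the driven dipoles, and below resonance ($\omega < \omega_0$) the real part of $n$ is larger than 1, which is the delay-turned-into-slowing described above.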
535 | As Wikipedia says: [...] the kinetic energy of a non-rotating object of mass $m$ traveling at a speed $v$ is $\frac{1}{2}mv^2$. Why does this not increase linearly with speed? Why does it take so much more energy to go from $1\ \mathrm{m/s}$ to $2\ \mathrm{m/s}$ than it does to go from $0\ \mathrm{m/s}$ to $1\ \mathrm{m/s}$? My intuition is wrong here, please help it out! | The previous answers all restate the problem as "Work is force dot/times distance". But this is not really satisfying, because you could then ask "Why is work force dot distance?" and the mystery is the same. The only way to answer questions like this is to rely on symmetry principles, since these are more fundamental than the laws of motion. Using Galilean invariance, the symmetry that says that the laws of physics look the same to you on a moving train, you can explain why energy must be proportional to the mass times the velocity squared. First, you need to define kinetic energy. I will define it as follows: the kinetic energy $E(m,v)$ of a ball of clay of mass $m$ moving with velocity $v$ is the amount of calories of heat that it makes when it smacks into a wall. This definition does not make reference to any mechanical quantity, and it can be determined using thermometers. I will show that, assuming Galilean invariance, $E(v)$ must be the square of the velocity. $E(m,v)$, if it is invariant, must be proportional to the mass, because you can smack two clay balls side by side and get twice the heating, so $$ E(m,v) = m E(v)$$ Further, if you smack two identical clay balls of mass $m$ moving with velocity $v$ head-on into each other, both balls stop, by symmetry. The result is that each acts as a wall for the other, and you must get an amount of heating equal to $2m E(v)$. But now look at this in a train which is moving along with one of the balls before the collision. In this frame of reference, the first ball starts out stopped, the second ball hits it at $2v$, and the two-ball stuck system ends up moving with velocity $v$. The kinetic energy of the second ball is $mE(2v)$ at the start, and after the collision, you have $2mE(v)$ kinetic energy stored in the combined ball. But the heating generated by the collision is the same as in the earlier case. So there are now two $2mE(v)$ terms to consider: one representing the heat generated by the collision, which we saw earlier was $2mE(v)$, and the other representing the energy stored in the moving, double-mass ball, which is also $2mE(v)$. Due to conservation of energy, those two terms need to add up to the kinetic energy of the second ball before the collision: $$ mE(2v) = 2mE(v) + 2mE(v)$$ $$ E(2v) = 4 E(v)$$ which implies that $E$ is quadratic. Non-circular force-times-distance Here is the non-circular version of the force-times-distance argument that everyone seems to love so much, but is never done correctly. In order to argue that energy is quadratic in velocity, it is enough to establish two things: Potential energy on the Earth's surface is linear in height Objects falling on the Earth's surface have constant acceleration The result then follows. That the energy in a constant gravitational field is proportional to the height is established by statics. 
If you believe the law of the lever, an object will be in equilibrium with another object on a lever when the distances are inversely proportional to the masses (there are simple geometric demonstrations of this that require nothing more than the fact that equal mass objects balance at equal center-of-mass distances). Then if you tilt the lever a little bit, the mass-times-height gained by one is equal to the mass-times-height lost by the other. This allows you to lift objects and lower them with very little effort, so long as the mass-times-height added over all the objects is constant before and after. This is Archimedes' principle. Another way of saying the same thing uses an elevator, consisting of two platforms connected by a chain through a pulley, so that when one goes up, the other goes down. You can lift an object up, if you lower an equal amount of mass down the same amount. You can lift two objects a certain distance in two steps, if you drop an object twice as far. This establishes that for all reversible motions of the elevator, the ones that do not require you to do any work (in both the colloquial sense and the physics sense--- the two notions coincide here), the mass-times-height summed over all the objects is conserved. The "energy" can now be defined as that quantity of motion which is conserved when these objects are allowed to move with a non-infinitesimal velocity. This is Feynman's version of Archimedes. So the mass-times-height is a measure of the effort required to lift something, and it is a conserved quantity in statics. This quantity should be conserved even if there is dynamics in intermediate stages. By this I mean that if you let two weights drop while suspended on a string, let them undergo an elastic collision, and catch the two objects when they stop moving again, you did no work. The objects should then go up to the same total mass-times-height. This is the original demonstration of the laws of elastic collisions by Christiaan Huygens, who argued that if you drop two masses on pendulums, and let them collide, their center of mass has to go up to the same height, if you catch the balls at their maximum point. From this, Huygens generalized the law of conservation of potential energy implicit in Archimedes to derive the law of conservation of square-velocity in elastic collisions. His principle that the center of mass cannot be raised by dynamic collisions is the first statement of conservation of energy. For completeness, the fact that an object in a constant gravitational field falls with uniform acceleration is a consequence of Galilean invariance, and the assumption that a gravitational field is invariant under uniform motions up and down with a steady velocity. Once you know that motion in constant gravity is constant acceleration, you know that $$ mv^2/2 + mgh = C $$ so that Huygens' dynamical quantity which is additively conserved along with Archimedes' mass-times-height is the velocity squared. | {
"source": [
"https://physics.stackexchange.com/questions/535",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/285/"
]
} |
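The boost argument in the answer above can be run with an arbitrary train velocity $w$ rather than $w = v$, which pins down the quadratic form directly (same notation as the answer, with the general boost as the added step): viewing the symmetric head-on collision from a frame in which the center of mass moves at $w$, the balls come in with speeds $v+w$ and $v-w$ (take $w < v$), the stuck pair leaves at speed $w$, and the heat $2mE(v)$ is frame independent, so $$ mE(v+w) + mE(v-w) = 2mE(w) + 2mE(v). $$ With $E(0) = 0$ and continuity, the only solutions of this functional equation are $E(v) = \kappa v^2$ for some constant $\kappa$, i.e. kinetic energy proportional to the square of the speed.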
541 | I was recently wondering what would happen if the force sliding two surfaces against each other were somehow weaker than kinetic friction but stronger than static friction. Since the sliding force is greater than the maximum force of static friction, $F > f_s = \mu_s F_N$, it seems that the surfaces should slide. But on the other hand, if the force of kinetic friction is greater than the applied force, there'll be a net force $\mu_k F_N - F$ acting against the motion, suggesting that the surfaces should move opposite to the direction they're being pushed! That doesn't make sense. The only logical resolution I can think of is that the coefficient of static friction can never be less than the coefficient of kinetic friction. Am I missing something? | The problem with this question is that static friction and kinetic friction are not fundamental forces in any way-- they're purely phenomenological names used to explain observed behavior. "Static friction" is a term we use to describe the observed fact that it usually takes more force to set an object into motion than it takes to keep it moving once you've got it started. So, with that in mind, ask yourself how you could measure the relative sizes of static and kinetic friction. If the coefficient of static friction is greater than the coefficient of kinetic friction, this is an easy thing to do: once you overcome the static friction, the frictional force drops. So, you pull on an object with a force sensor, and measure the maximum force required before it gets moving, then once it's in motion, the frictional force decreases, and you measure how much force you need to apply to maintain a constant velocity. What would it mean to have kinetic friction be greater than static friction? Well, it would mean that the force required to keep an object in motion would be greater than the force required to start it in motion. Which would require the force to go up at the instant the object started moving. But that doesn't make any sense, experimentally-- what you would see in that case is just that the force would increase up to the level required to keep the object in motion, as if the coefficients of static and kinetic friction were exactly equal. So, common sense tells us that the coefficient of static friction can never be less than the coefficient of kinetic friction. Having greater kinetic than static friction just doesn't make any sense in terms of the phenomena being described. (As an aside, the static/kinetic coefficient model is actually pretty lousy. It works as a way to set up problems forcing students to deal with the vector nature of forces, and allows some simple qualitative explanations of observed phenomena, but if you have ever tried to devise a lab doing quantitative measurements of friction, it's a mess.) | {
"source": [
"https://physics.stackexchange.com/questions/541",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/124/"
]
} |
556 | I realize this isn't possible, but I can't see why not, especially if you change the model a little bit so that the balls simply travel through a tube of water on the way up, rather than exactly this model. Please be clear and detailed. I've heard explanations like "the balls wouldn't move" but that doesn't do it for me - I really do not see why the balls on the right would not be pulled/pushed up, and the rest of the chain wouldn't continue on. | The balls are entering the water well below the surface. The pressure there is much higher than at the surface. The work needed to push the balls into the water at this depth cancels the work gained when they float back up. We can ignore the gravitational force on the balls since gravity pulls down as much as up as you traverse the loop. Mathematically, if the balls enter the water at depth $d$, the pressure is $g \rho d$ with $g$ gravitational acceleration and $\rho$ the density of water. The work done to submerge the balls is then the pressure times their volume, or $W_{ball} = g \rho V d$. The force upwards on the ball is the weight of the water they displace which is $g \rho V$, and the work the water does on the balls is this force times the distance up that they travel, or $W_{water} = g \rho V d$. The work the ball does on the water is the same as the work the water does on the ball. No free energy. | {
"source": [
"https://physics.stackexchange.com/questions/556",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/308/"
]
} |
589 | Question says it all. Ripping a piece of tape along the width is tough, stretches the tape, messes it up, etc, but if you put the tiniest nick at the top, it rips without any problems. Why is that? | This is a problem in the theory of cracks , but let me try to give an intuitive discussion on the level of linear elasticity. Imagine a rectangular sheet of material with two opposite ends (which I'll call East and West) being pulled apart slowly. The stress in the material due to the tension from the boundary conditions will be symmetric across the North South axis by symmetry, and is intuitively "spread out" on the rectangle (though the distribution will not be uniform, of course). Though I won't do it, this stress distribution can be calculated in linear elasticity theory, as part of the theory of plates. On the other hand, if we cut a notch in say the North edge of the rectangle, the symmetry of the material is broken and we should expect the stress distribution to be asymmetrical as well. A calculation similar to above (but harder due to the funnier geometry) in linear elasticity will probably show that the stress will become focused near the notch. The wikipedia article above seems to claim that the stress near a very sharp notch will actually come out infinite! (That is, linear elasticity theory breaks down) Obviously the actual tearing process goes outside of linear elasticity and into plasticity theory somewhat, but I think what happens can be described as follows. The high concentration of stress near the notch means that when the material gives, it will begin to tear at the "corner" of the notch. Then this tearing causes the notch to get larger, and weakens more nearby material at that corner, which then becomes torn, etc. This is a bit like a positive feedback loop. I can't quite relate it to the question you're asking yet, but I remember it's possible to do some cute dimensional analysis to show that cracks below a certain size in a material tend to shrink, and those above the threshold size tend to grow. Let me just add that the shapes of tears in thin sheets has been a recently popular topic in the field of "extreme mechanics" in soft matter physics. See for instance this recent article by Audoly, Reis and Roman and citing references. | {
"source": [
"https://physics.stackexchange.com/questions/589",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/196/"
]
} |
629 | When you're in a train and it slows down, you experience the push forward from the deceleration, which is no surprise since the force one experiences results from good old $F=m a$. However, the moment the train stops one is apparently pulled backwards. But is that a real physical effect, or just the result of leaning backwards to compensate for the deceleration and that force suddenly stopping? So far the answers basically agree that there are two spring forces involved: for one thing oneself, as already guessed by me, and for the other the vehicle itself, as first suggested in Robert's answer . Also, as Gerard suggested, the release of the brakes and some other friction effects might play a role. So let's be more precise with the question: Which effect dominates the wrong-pull-effect? And thus, who can reduce it most: the traveler, the driver or the vehicle designer? edit Let's make this more interesting: I'm setting up a bounty of 100 (originally 50; see edit below) for devising an experiment to explain this effect or at least prove my explanation right/wrong, and by the end of this month I'll award a second bounty of 150 (originally 200) for what I subjectively judge to be the best answer describing either: an accomplished experiment (some video or reproducibility should be included), a numerical simulation, or a rigorous theoretical description. update since I like both the suggestions of QH7 and Georg , I decided to put up a second bounty of 50 (thus reducing the second bounty to 150 however) | I spent the last few weekends making my own implementation of the MPM method (just for the fun of it). I just had the idea that I could try to simulate something similar to the problem of interest. So, here is our "car". It is moving to the right with some constant speed. Then I apply some constant external force to the "wheels" to stop them. And that's what I've got: [six simulation snapshots] The colours denote the amount of stress in the medium. And here is the animation: [animation] Everyone is free to give other ideas for simulations/visualisations... | {
"source": [
"https://physics.stackexchange.com/questions/629",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/97/"
]
} |
756 | To date, what is the most mathematically precise formulation of the AdS/CFT correspondence, and what are the most robust tests of the conjecture? | I'm not sure what you have in mind with "mathematically precise"; as with almost anything involving quantum field theory or string theory, there's no rigorous definition of the theories involved in the duality. But, if you grant me their existence, I would say the sharpest statement is still the early formulation by Gubser, Klebanov, and Polyakov and by Witten , i.e. that the partition function of the CFT in the presence of external sources for single-trace operators is the same as that for string theory in AdS with boundary conditions determined by the sources. The most detailed computational checks of the correspondence are probably those that use integrability to compute anomalous dimensions of operators over the full range from weak to strong coupling. I'm not an expert on this, but I'll point you to one fairly recent paper that contains some of the major references to get you started. From a more global perspective, though, gauge/gravity duality extends well beyond the original case of ${\cal N}=4$ SYM and $AdS_5 \times S^5$, to any theory that meets the two requirements of having a large-$N$ expansion and a large 't Hooft coupling. The important ideas, again, were mostly there in the very early papers, but I would say they've been put on a somewhat more solid footing. The key point is that the bulk theory is tractable in the case when only a few fields are involved and curvatures are weak. This condition, translated to a statement about the dual field theory, is that most of the single-trace operators acquire very large anomalous dimensions (which is natural in very strongly coupled theories). Recently there has been some progress in formulating a bottom-up argument from the opposite direction, i.e. starting with the assumption that a large-$N$ conformal field theory includes few low-dimensional single-trace operators and arguing that this implies the existence of a bulk dual theory. See this paper of Heemskerk et al. . I don't know if any of this is what you would think of as "mathematics".... | {
"source": [
"https://physics.stackexchange.com/questions/756",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/20/"
]
} |
822 | This may seem like a slightly trite question, but it is one that has long intrigued me. Since I formally learned classical (Newtonian) mechanics, it has often struck me that angular momentum (and generally rotational dynamics) can be fully derived from normal (linear) momentum and dynamics. Simply by considering circular motion of a point mass and introducing new quantities, it seems one can describe and explain angular momentum fully without any new postulates. In this sense, I am led to believe only ordinary momentum and dynamics are fundamental to mechanics, with rotational stuff effectively being a corollary. Then at a later point I learned quantum mechanics. Alright, so orbital angular momentum does not really disturb my picture of the origin/fundamentality, but when we consider the concept of spin , this introduces a problem in this proposed (philosophical) understanding. Spin is apparently intrinsic angular momentum; that is, it applies to a point particle. Something that is not actually moving/rotating can possess angular momentum - a concept that does not exist in classical mechanics! Does this imply that angular momentum is in fact a fundamental quantity, intrinsic to the universe in some sense? It somewhat bothers me that fundamental particles such as electrons and quarks can possess their own angular momentum (spin), when otherwise angular momentum/rotational dynamics would fall out quite naturally from normal (linear) mechanics. There are of course some fringe theories that propose that even these so-called fundamental particles are composite, but at the moment physicists widely accept the concept of intrinsic angular momentum. In any case, can this dilemma be resolved, or do we simply have to extend our framework of fundamental quantities? | Note As David pointed out, it's better to distinguish between generic angular momentum and orbital angular momentum . The first concept is more general and includes spin while the second one is (as the name suggests) just about orbiting. There is also the concept of total angular momentum which is the quantity that is really conserved in systems with rotational symmetry. But in the absence of spin it coincides with orbital angular momentum . This is the situation I analyze in the first paragraph. Angular momentum is fundamental. Why? Noether's theorem tells us that the symmetry of the system (in this case space-time) leads to the conservation of some quantity (momentum for translation, orbital angular momentum for rotation). Now, as it happens, Euclidean space is both translation and rotation invariant in a compatible manner, so these concepts are related and it can appear that you can derive one from the other. But there might exist a space-time that is translation but not rotation invariant, and vice versa. In such a space-time you wouldn't get a relation between orbital angular momentum and momentum. Now, to address the spin. Again, it is a result of some symmetry. But in this case the symmetry arises because of Wigner's correspondence between particles and irreducible representations of the Poincaré group which is the symmetry group of the Minkowski space-time . This correspondence tells us that massive particles are classified by their mass and spin. But spin is not orbital angular momentum! The spin corresponds to the group $Spin(3) \cong SU(2)$ which is a double cover of $SO(3)$ (the rotational symmetry of three-dimensional Euclidean space). 
So this is a completely different concept that is only superficially similar and can't really be directly compared with orbital angular momentum. One way to see this is that spin can be a half-integer, but orbital angular momentum must always be an integer. So to summarize: orbital angular momentum is a classical concept that arises in any space-time with rotational symmetry. spin is a concept that comes from quantum field theory built on the Minkowski space-time. The same concept also works for classical field theory, but there we don't have a clear correspondence with particles, so I omitted this case. Addition for the curious As Eric has pointed out, there is more than just a superficial similarity between orbital angular momentum and spin. To illustrate the connection, it's useful to consider the question of how particle's properties transform under the change of coordinates (recall that conservation of total angular momentum arises because of the invariance to the change of coordinates that corresponds to rotation). Let us proceed in a little bit more generality and consider any transformation $\Lambda$ from the Lorentz group. Let us have a field $V^a(x^{\mu})$ that transforms in matrix representation ${S^a}_b (\Lambda)$ of the Lorentz group. Thanks to Wigner we know this corresponds to some particle; e.g. it could be scalar (like Higgs), bispinor (like electron) or vector (like Z boson). Its transformation properties under the element ${\Lambda^{\mu}}_{\nu}$ are then determined by (using Einstein summation convention) $$ V'^a ({\Lambda^{\mu}}_{\nu} x^{\nu}) = {S^a}_b(\Lambda) V^b (x^{\mu}) $$ From this one can at least intuitively see the relation between the properties of the space-time ($\Lambda$) and the particle ($S$). To return to the original question: $\Lambda$ contains information about the orbital angular momentum and $S$ contains information about the spin. So the two are connected but not in a trivial way. In particular, I don't think it's very useful to imagine spin as the actual spinning of the particle (contrary to the terminology). But of course anyone is free to imagine whatever they feel helps them grasp the theory better. | {
"source": [
"https://physics.stackexchange.com/questions/822",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/13/"
]
} |
885 | In the calculus of variations, particularly Lagrangian mechanics, people often say we vary the position and the velocity independently. But velocity is the derivative of position, so how can you treat them as independent variables? | Contrary to what your question suggests, it is not true that velocity is varied independently of position. A variation of position $q \mapsto q + \delta q$ induces a variation of velocity $\partial_t q \mapsto \partial_t q + \partial_t (\delta q)$ as you would expect. The only thing that may seem strange is that $q$ and $\partial_t q$ are treated as independent variables of the Lagrangian $L(q,\partial_t q)$. But this is not surprising; after all, if you ask "what is the kinetic energy of a particle?", then it is not enough to know the position of the particle, you also have to know its velocity to answer that question. Put differently, you can choose position and velocity independently as initial conditions , which is why the Lagrangian function treats them as independent; but the calculus of variations does not vary them independently , a variation in position induces a corresponding variation in velocity. | {
"source": [
"https://physics.stackexchange.com/questions/885",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/150/"
]
} |
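A compact way to see the induced variation described in the answer above is to write out the first variation of the action and integrate by parts, with $\delta q$ vanishing at the endpoints (the standard textbook computation): $$ \delta S = \int \left(\frac{\partial L}{\partial q}\,\delta q + \frac{\partial L}{\partial \dot q}\,\frac{d}{dt}\delta q\right) dt = \int \left(\frac{\partial L}{\partial q} - \frac{d}{dt}\frac{\partial L}{\partial \dot q}\right)\delta q\, dt, $$ so the only independent variation is $\delta q$: the velocity variation enters only through $\frac{d}{dt}\delta q$, exactly as stated above.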
886 | My question is about the article Swimming in Spacetime . My gut reaction on first reading it was "this violates conservation of momentum, doesn't it?". I now realize, however, that this doesn't allow something to change its momentum; it only allows something to move (change position) without ever having a nonzero momentum. Since this is relativity, there is no simple relationship between momentum and velocity like p = mv, so this is all well and good. An object can move, with a constant momentum of zero, by changing its shape in a nontrivial cycle. However, now I'm thinking about a different conservation law and I can't see how "swimming through spacetime" is possible without violating it. The conserved quantity I'm thinking of is the Noether charge associated with Lorentz boosts , which is basically x - (p/E)t, that is, the position of the center of mass projected back to time t=0. If p = 0, then the conserved quantity is simply x, the position of the center of mass. This obviously contradicts the whole swimming idea. What's going on here? Is swimming through spacetime only possible if the spacetime is curved in some way that breaks the symmetry under Lorentz boosts? Or is there some error in my reasoning? | That is precisely the case. No error in your reasoning. In the case of a curved spacetime the "center of mass" of an extended body is no longer well-defined with respect to external observers, i.e. observers located in an asymptotically flat region. In order to "swim" through spacetime one exploits the inhomogeneities of the gravitational field. The presence of these inhomogeneities breaks local Lorentz symmetry, and this breaking is necessary for the mechanism to work. In particular the scale of the swimmer and the inhomogeneities should be comparable. This is one reason why, at present, the construction of an actual swimmer is far beyond our technological means. Edit: For those interested in extended-body effects in GR there are classic papers by Dixon. More recently Abraham Harte has done some amazing work along these lines: Extended-body effects in cosmological spacetimes . | {
"source": [
"https://physics.stackexchange.com/questions/886",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/61/"
]
} |
937 | My understanding is that light can not escape from within a black hole (within the event horizon). I've also heard that information cannot propagate faster than the speed of light. It would seem to me that the gravitational attraction caused by a black hole carries information about the amount of mass within the black hole. So, how does this information escape? Looking at it from a particle point of view: do the gravitons (should they exist) travel faster than the photons? | There are some good answers here already but I hope this is a nice short summary: Electromagnetic radiation cannot escape a black hole, because it travels at the speed of light. Similarly, gravitational radiation cannot escape a black hole either, because it too travels at the speed of light. If gravitational radiation could escape, you could theoretically use it to send a signal from the inside of the black hole to the outside, which is forbidden. A black hole, however, can have an electric charge, which means there is an electric field around it. This is not a paradox because a static electric field is different from electromagnetic radiation. Similarly, a black hole has a mass, so it has a gravitational field around it. This is not a paradox either because a gravitational field is different from gravitational radiation. You say the gravitational field carries information about the amount of mass (actually energy) inside, but that does not give a way for someone inside to send a signal to the outside, because to do so they would have to create or destroy energy, which is impossible. Thus there is no paradox. | {
"source": [
"https://physics.stackexchange.com/questions/937",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/493/"
]
} |
961 | Lighter nuclei liberate energy when undergoing fusion, heavier nuclei when undergoing fission. What is it about the nucleus of an iron atom that makes it so stable? Alternatively: Iron has the greatest nuclear binding energy - but why? | It all comes down to a balance between a number of different physical interactions. The binding energy of a nucleus is commonly described with the semiempirical mass formula : $$E(A, Z) = a_V A - a_S A^{2/3} - a_C \frac{Z(Z-1)}{A^{1/3}} - a_A \frac{(A-2Z)^2}{A} + \delta(A,Z)$$ where $A = Z + N$ is the total number of nucleons, $Z$ the number of protons, and $N$ the number of neutrons. The different contributions have physical explanations as follows: $a_V$ : volume term; the bigger the volume, the more nucleons interact with each other through the strong interaction and the more they attract each other. $a_S$ : surface term; similar to surface tension, some energy is stored in the surface, reducing the binding. $a_C$ : the Coulomb repulsion of the protons within the nucleus. $a_A$ : asymmetry term, rooted in the Pauli exclusion principle. Basically, if there are more of one type of nucleon (generally neutrons) then the overall energy is larger than it needs to be, thus decreasing the binding energy (note: $A-2Z = N - Z$). $\delta$ : pairing term; it depends on whether there is an even or odd number of nucleons altogether and an even or odd number of protons/neutrons. In the empirical description it is usually modeled as a continuous term $a_P/A^{1/2}$. This is the expression for the total binding energy; what is interesting is the binding energy per nucleon , as a measure of stability: $$E(A, Z)/A \approx a_V - a_S \frac{1}{A^{1/3}} - a_C \frac{Z(Z-1)}{A^{4/3}} - a_A \frac{(A-2Z)^2}{A^2} + a_P \frac{1}{A^{3/2}}$$ To see which nucleus (what value of $A$) is the most stable, one has to find for which $A$ this function is maximal. At this point $Z$ is arbitrary but we should choose a physically meaningful value. From a theoretical point of view a good choice is the $Z$ that gives the highest binding energy for a given $A$ (the most stable isotope), for which we need to solve $\frac{\partial (E/A)}{\partial Z} = 0$. The result is $Z_{stable}(A) \approx \dfrac12\dfrac{A}{1+A^{2/3} \frac{a_C}{4 a_A}}$. After putting $Z_{stable}(A)$ back into $E(A, Z)/A$ one can maximize the function value to get the "optimal number" of nucleons for the most stable element. Depending on the empirically determined values of $a_S, a_C, a_A, a_P$ the maximum will occur in the region $A \approx 58 \ldots 63$. The interpretation of this result is something like this: for small nuclei (small $A$) the biggest contribution is the surface term (they have a large surface-to-volume ratio), and they want to increase the number of nucleons to reduce it - hence you have fusion; for large nuclei (large $A$) the Coulomb term increases because more protons mean more repulsion between them, and also, to keep everything together, more neutrons are needed (thus $N > Z$), which makes the asymmetry term larger as well. By ejecting some nucleons (alpha decay), or converting between neutrons and protons (beta decay), the nucleus can reduce these terms. The optimally bound $A$ (and $Z$) happens when these two groups of competing contributions balance each other out. | {
"source": [
"https://physics.stackexchange.com/questions/961",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/453/"
]
} |
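The maximisation described in the answer above is easy to check numerically. A minimal sketch in Python, following the answer's per-nucleon formula and its $Z_{stable}(A)$; the coefficient values (in MeV) are typical textbook fits and are an assumption of this sketch, not numbers given in the answer:

    # Binding energy per nucleon from the semiempirical mass formula above
    aV, aS, aC, aA, aP = 15.75, 17.8, 0.711, 23.7, 11.18   # assumed fit values, MeV

    def z_stable(A):
        # most stable Z at fixed A, from d(E/A)/dZ = 0
        return 0.5 * A / (1.0 + A ** (2.0 / 3.0) * aC / (4.0 * aA))

    def be_per_nucleon(A):
        Z = z_stable(A)
        return (aV - aS * A ** (-1.0 / 3.0)
                   - aC * Z * (Z - 1.0) * A ** (-4.0 / 3.0)
                   - aA * (A - 2.0 * Z) ** 2 / A ** 2
                   + aP * A ** (-1.5))

    best_A = max(range(10, 250), key=be_per_nucleon)
    print(best_A, round(be_per_nucleon(best_A), 2))

With these numbers the curve has a broad plateau near 8.8 MeV per nucleon, and the maximum lands in the iron-nickel region, consistent with the $A \approx 58 \ldots 63$ range quoted above.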
999 | I've noticed that whenever I turn the lamp off in my room at night, the lightbulb seems to continue to glow for a minute or so after that. It's not bright though; the only way I even notice it is if the room is dark. So why does it keep glowing? | If it's an incandescent bulb, it's because the whole operating principle of the bulb is based on getting the filament really hot, hot enough to glow. When you cut off the current, it stops heating the filament, so it cools down fairly rapidly, but there may be enough residual heat for a faint glow lasting a little while afterwards. If it's a CFL bulb or an ordinary fluorescent bulb, the "white" color we see is produced by a fluorescent coating on the glass of the bulb, which converts some of the invisible ultraviolet light produced by the excited gas atoms into visible light. When you cut off the current, you stop exciting the gas atoms and thus stop producing new ultraviolet light, but there may be enough residual excitation in the material to keep glowing for a little while afterwards. This would be similar to the way glow-in-the-dark materials work-- those absorb UV light when they're in bright light, exciting atoms and molecules in the material, which then slowly emit visible photons. | {
"source": [
"https://physics.stackexchange.com/questions/999",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
1,008 | There's more gravitational force in our galaxy (and others) than can be explained by counting stars made of ordinary matter. So why not lots of dark planetary systems (i.e., without stars) made of ordinary matter? Why must we assume some undiscovered and unexplained form of matter? | There is a very precise reason why dark planets made of 'ordinary matter' (baryons - particles made up of 3 quarks) cannot be the dark matter. It turns out that the amount of baryons can be measured in two different ways in cosmology: By measuring present-day abundances of some light elements (esp deuterium) which are very sensitive to the baryon amount. By measuring the distribution of the hot and cold spots in the Cosmic Microwave background (CMB), radiation left over from the early universe that we observe today. These two methods agree spectacularly, and both indicate that baryons are 5% of the total stuff (energy/matter) in the universe. Meanwhile, various measures of gravitational clustering (gravitational lensing, rotation of stars around galaxies, etc etc) all indicate that total matter comprises 25% of the total. (The remaining 75% is in the infamous dark energy which is irrelevant for this particular question). Since 5% is much less than 25%, and since the errors on both of these measurements are rather small, we infer that most of the matter, about 4/5 ths (that is, 20% out of 25%) is 'dark' and NOT made up of baryons. | {
"source": [
"https://physics.stackexchange.com/questions/1008",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/495/"
]
} |
1,048 | Suppose in the milliseconds after the big bang the cosmic egg had acquired some large angular momentum. As it expanded, keeping the momentum constant (no external forces), the rate of rotation would have slowed down, but it would never reach zero. What implications would that have for measurements of distant supernovae and the CMB radiation? Do we have any experimental data that definitely rules out such a scenario? And to what confidence level? Edit A Recent Article suggests that the universe might indeed be spinning as a whole. Anyone care to poke holes in it? Edit 2 Even more Recent Article places limits on possible rotation. Edit 3 Apparently Gödel pondered the same question ( https://www.space.com/rotating-universe-would-permit-time-travel ) | If you believe wholeheartedly in Mach's principle, then there is no way to test empirically for rotation of the universe as a whole, since there is nothing else for it to be rotating relative to. However, general relativity is not very Machian, and it offers a variety of ways in which an observer inside a sealed laboratory can detect whether the lab is rotating. For example, she can observe the motion of a gyroscope, or measure whether the Sagnac effect is zero. There are alternative theories of gravity, such as Brans-Dicke gravity, that are more Machian than GR,[Brans 1961] and in these theories there is probably no meaningful sense in which the universe could rotate. However, solar-system tests[Bertotti 2003] rule out any significant deviations from GR of the type predicted by Brans-Dicke gravity, so that it appears that the universe really is as non-Machian as GR says it is. It is therefore possible according to general relativity to have cosmologies in which the universe is rotating. Historically, one of the earliest cosmological solutions to the Einstein field equations to be discovered was the Gödel metric, which rotates and has closed timelike curves. If we lived in a rotating universe such as Gödel's example, the rate of rotation would have to be expressed in terms of angular velocity, not angular momentum. Angular velocity is what is measured by a gyroscope or the Sagnac effect, and GR doesn't even have a definition of angular momentum that applies to cosmological spacetimes. A rotating universe does not have to have a center of rotation, and it can be homogeneous. In other words, we could determine a direction in the sky and say that the universe was rotating counterclockwise at a certain rate about the line connecting us to that point on the celestial sphere. However, aliens living somewhere else in the universe could do the same thing. Their line would be parallel to ours, but there would be no way to tell whether one such line was the real center of rotation. To find out whether the universe is rotating, in principle the most straightforward test is to watch the motion of a gyroscope relative to the distant galaxies. If it rotates at an angular velocity -ω relative to them, then the universe is rotating at angular velocity ω. In practice, we do not have mechanical gyroscopes with small enough random and systematic errors to put a very low limit on ω. However, we can use the entire solar system as a kind of gyroscope. Solar-system observations put a model-independent upper limit of 10^-7 radians/year on the rotation,[Clemence 1957] which is an order of magnitude too lax to rule out the Gödel metric. 
A rotating universe must have a certain axis of rotation, so it must have a particular type of anisotropy that picks out a certain preferred direction. We can therefore look at the cosmic microwave background and see whether its anisotropy contains a preferred axis.[Collins 1973] Such observations impose a limit that is tighter than the one provided by solar-system measurements (perhaps 10^-9 rad/yr[Su 2009] or 10^-15 rad/yr[Barrow 1985]), but such limits are model-dependent. Because all of the present observations are consistent with zero rotational velocity, it is not possible to attribute any prominent cosmological role to rotation. Centrifugal forces cannot contribute significantly to cosmological expansion, or to the way your head feels when you're hung over. Brans and Dicke, "Mach's principle and a relativistic theory of gravitation," Phys. Rev. 124 (1961) 925, http://loyno.edu/~brans/ST-history/ Bertotti, Iess, and Tortora, "A test of general relativity using radio links with the Cassini spacecraft," Nature 425 (2003) 374 Clemence, "Astronomical time," Rev. Mod. Phys. 29 (1957) 2 Collins and Hawking, "The rotation and distortion of the universe," Mon. Not. R. Astr. Soc. 162 (1973) 307 Hawking, "On the rotation of the universe," Mon. Not. R. Astr. Soc. 142 (1969) 529 Barrow, Juszkiewicz, and Sonoda, "Universal rotation: how large can it be?," Mon. Not. R. Astr. Soc. 213 (1985) 917, http://adsabs.harvard.edu/full/1985MNRAS.213..917B Su and Chu, "Is the universe rotating?," 2009, http://arxiv.org/abs/0902.4575 [This is a physicsforums FAQ entry that I wrote with input from users
George Jones,
jim mcnamara,
marcus,
PAllen,
tiny-tim, and
vela.] | {
"source": [
"https://physics.stackexchange.com/questions/1048",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/392/"
]
} |
1,061 | I am reading A Brief History of Time by Stephen Hawking, and in it he mentions that without compensating for relativity, GPS devices would be out by miles. Why is this? (I am not sure which relativity he means as I am several chapters ahead now and the question just came to me.) | The error margin for the position predicted by GPS is $15\text{m}$, so the GPS system must keep time to an accuracy of at least $15\text{m}/c$, which is roughly $50\text{ns}$. A $50\text{ns}$ error in timekeeping therefore corresponds to a $15\text{m}$ error in distance prediction, and a $38\text{μs}$ error in timekeeping corresponds to an $11\text{km}$ error in distance prediction. If we do not apply the corrections from general relativity to GPS, then a $38\text{μs}$ error in timekeeping is introduced per day . You can check it yourself by using the following formulas: $T_1 = \frac{T_0}{\sqrt{1-\frac{v^2}{c^2}}}$ ...the clock runs relatively slower because it is moving at high velocity; $T_2 = \frac{T_0}{\sqrt{1-\frac{2GM}{c^2 R}}}$ ...the clock runs relatively faster because of the weaker gravity at orbital altitude. The $T_1$ effect amounts to 7 microseconds/day and the $T_2$ effect to 45 microseconds/day, so $T_2 - T_1$ = 38 microseconds/day; use the values given in this very good article . And for the equations refer to HyperPhysics . So Stephen Hawking is right! :-) | {
"source": [
"https://physics.stackexchange.com/questions/1061",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/104/"
]
} |
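The 7, 45 and 38 microseconds/day figures in the answer above are easy to reproduce. A minimal sketch in Python; the orbital radius and the assumption of a circular orbit are ballpark inputs of this sketch, not numbers taken from the answer:

    # Rough check of the daily GPS clock offsets (SI units)
    c, GM = 2.998e8, 3.986e14            # speed of light; Earth's gravitational parameter
    R_earth, r_sat = 6.371e6, 2.656e7    # Earth radius; GPS orbital radius (~20,200 km altitude)
    day = 86400.0

    v = (GM / r_sat) ** 0.5                               # orbital speed, about 3.9 km/s
    slow = 0.5 * v ** 2 / c ** 2 * day                    # special-relativistic slowing, ~7 us/day
    fast = GM / c ** 2 * (1 / R_earth - 1 / r_sat) * day  # gravitational speeding up, ~45 us/day
    print(slow * 1e6, fast * 1e6, (fast - slow) * 1e6)    # ~7, ~45, ~38 microseconds per day

Multiplying the net 38 microseconds/day by $c$ gives an accumulated position error of roughly 11 km per day, the figure quoted above.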
1,073 | First, I'll state some background that led me to the question. I was thinking about quantization of space-time on and off for a long time but I never really looked into it any deeper (mainly because I am not yet quite fluent in string theory). But the recent discussion about propagation of information in space-time got me thinking about the topic again. Combined with the fact that it has been more or less decided that we should also ask graduate level questions here , I decided I should give it a go. So, first some of my (certainly very naive) thoughts. It's no problem to quantize gravitational waves on a curved background. They don't differ much from any other particles we know. But what if we want the background itself to change in response to the movement of the matter, and want to quantize these processes? Then I would imagine that space-time itself is built from tiny particles (call them space-timeons ) that interact by means of exchanging gravitons. I drew this conclusion from an analogy with how solid matter is built from atoms in a lattice that interact by means of exchanging phonons. Now, I am aware that the above picture is completely naive but I think it must in some sense also be correct. That is, if there exists any reasonable notion of quantum gravity, it has to look something like that (at least on the level of QFT). So, having come this far I decided I should not stop. So let's move one step further and assume string theory is the correct description of nature. Then all the particles above are actually strings. So I decided that space-time must probably arise as a condensation of a huge number of strings. Now does this make any sense at all? To make it more precise (and also to ask something in case it doesn't make sense), I have two questions: In what way does the classical space-time arise as a limit in string theory? Try to give a clear, conceptual description of this process. I don't mind some equations, but I want mainly ideas. Is there a good introduction to this subject? If so, what is it? If not, where else can I learn about this stuff? Regarding 2., I would prefer something not too advanced. But to give you an idea of what kind of literature might be appropriate for my level: I already know something about classical strings, how to quantize them (in different quantization schemes) and some bits about the role of CFT on the worldsheets. Also I have a qualitative overview of different types of string theories and also a little quantitative knowledge about moduli spaces of Calabi-Yau manifolds and their properties. | First, you are right: in non-Minkowski solutions to string theory, in which the gravitational field is macroscopic, that field should be thought of as a condensate of a huge number of gravitons (the graviton being one of the spacetime particles associated to a degree of freedom of the string). (Aside: a point particle, corresponding to quantum field theory, has no internal degrees of freedom; the different particles come simply from different labels attached to points. A string has many degrees of freedom, each of which corresponds to a particle in the spacetime interpretation of string theory, i.e. the effective field theory.) To your question (1): certainly there is no great organizing principle of string theory (yet). One practical principle is that the 2-dimensional (quantum) field theory which describes the fluctuations of the string worldsheet should be conformal , i.e. invariant under local rescalings of the metric. 
This allows us to integrate over all metrics on Riemann surfaces only up to diffeomorphisms and scalings , which is to say only up to a finite number of degrees of freedom. That's an integral we can do. (Were we able to integrate over all metrics in a way that is sensible within quantum field theory, we would already have been able to quantize gravity.) Now, scale invariance imposes constraints on the background spacetime fields used to construct the 2d action (such as the metric, which determines the energy of the map from the worldsheet of the string). These constraints reduce to Einstein's equations. That's not a very fundamental derivation, but formulating string theory in a way which is independent of the starting point ("background independence") is notoriously tricky. (2): This goes under the name "strings in background fields," and can be found in Volume 1 of Green, Schwarz and Witten. | {
"source": [
"https://physics.stackexchange.com/questions/1073",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/329/"
]
} |
1,111 | When a particle spins in the same direction as its momentum, it has right helicity, and left helicity otherwise. Neutrinos, however, have some kind of inherent helicity called chirality. But they can have either helicity. How is chirality different from helicity? | At first glance, chirality and helicity seem to have no relationship to each other. Helicity, as you said, is whether the spin is aligned or anti-aligned with the momentum. Chirality is like your left hand versus your right hand. It's just a property that makes them different from each other, but in a way that is reversed through mirror imaging - your left hand looks just like your right hand if you look at it in a mirror and vice-versa.
If you do out the math though, you find out that they are linked. Helicity is not an inherent property of a particle because of relativity. Suppose you have some massive particle with spin. In one frame the momentum could be aligned with the spin, but you could just boost to a frame where the momentum was pointing the other direction (boost meaning looking from a frame moving with respect to the original frame). But if the particle is massless, it will travel at the speed of light, and so you can't boost past it. So you can't flip its helicity by changing frames. In this case, if it is "chiral right-handed", it will have right-handed helicity. If it is "chiral left-handed", it will have left-handed helicity.
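The "you can't boost past it" point is easy to check with the relativistic velocity-addition formula (a small Python sketch; the particular speeds are illustrative values I chose, with c set to 1): a massive particle's direction of motion, and with it the helicity you observe, can be reversed by a fast enough boost, while something moving at c still moves at c in every frame.

```python
# 1D relativistic velocity transformation: u' = (u - v) / (1 - u*v/c^2), with c = 1.

def boosted_velocity(u, v, c=1.0):
    """Velocity of a particle moving at u, as seen from a frame moving at v."""
    return (u - v) / (1.0 - u * v / c**2)

# Massive particle at 0.5c: a boost faster than the particle reverses its
# direction of motion, so the observed helicity flips sign.
print(boosted_velocity(0.5, 0.9))      # negative: direction reversed

# Massless particle at c: every boost still gives c, so no observer ever
# sees the direction (and hence the helicity) flip.
for v in (0.5, 0.9, 0.999999):
    print(boosted_velocity(1.0, v))    # always exactly 1.0
```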
So chirality in the end has something to do with the natural helicity in the massless limit. Note that chirality is not just a property of neutrinos. It is important for neutrinos because it is not known whether both chiralities exist. It is possible that only left-handed neutrinos (and only right-handed antineutrinos) exist. | {
"source": [
"https://physics.stackexchange.com/questions/1111",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/556/"
]
} |
1,135 | I have noticed that authors in the literature sometimes divide characteristics of some phenomenon into "kinematics" and "dynamics". I first encountered this in Jackson's E&M book, where, in section 7.3 of the third edition, he writes, on the reflection and refraction of waves at a plane interface: Kinematic properties:
(a) Angle of reflection equals angle of incidence
(b) Snell's law Dynamic properties
(a) Intensities of reflected and refracted radiation
(b) Phase changes and polarization But this is by no means the only example. A quick Google search reveals "dynamic and kinematic viscosity," "kinematic and dynamic performance," "fully dynamic and kinematic voronoi diagrams," "kinematic and reduced-dynamic precise orbit determination," and many other occurrences of this distinction. What is the real distinction between kinematics and dynamics ? | In classical mechanics "kinematics" generally refers to the study of properties of motion-- position, velocity, acceleration, etc.-- without any consideration of why those quantities have the values they do. "Dynamics" means a study of the rules governing the interactions of these particles, which allow you to determine why the quantities have the values they do. Thus, for example, problems involving motion with constant acceleration ("A car starts from rest and accelerates at 4m/s/s. How long does it take to cover 100m?") are classified as kinematics, while problems involving forces ("A 100g mass is attached to a spring with a spring constant of 10 N/m and hangs vertically from a support. How much does the spring stretch?") are classified as "dynamics." That's kind of an operational definition, at least. | {
"source": [
"https://physics.stackexchange.com/questions/1135",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/29/"
]
} |
1,165 | Ordinary matter and antimatter have the same physical properties when it comes to, for example, spectroscopy. Hydrogen and antihydrogen atoms produce the same spectra when excited, and absorb the same frequencies. The charge does not make a difference in the potential (regardless of whether it's generated by a proton or an antiproton) nor in how the positron behaves in this potential (its mass being equal to the mass of an electron). How can astronomy evaluate whether a far-away galaxy is made of matter or antimatter, given that from the spectroscopy point of view, they behave in the same way? In other words, how do we know that an asymmetry exists between matter and antimatter in the universe? | To be a little pedantic, nobody has yet done precision spectroscopy of antihydrogen, though the recent success in trapping it at CERN (all over the news this week, paper here ) is an early step toward that. It's possible that there are small differences in the spectrum of antihydrogen and hydrogen, though these differences can't be all that large, or they would be reflected in the interactions of antiprotons and positrons with ordinary matter in ways that would've shown up in other experiments. As I understand it (and I am not an astronomer) the primary evidence for a lack of vast amounts of antimatter out there in the universe is a lack of radiation from the annihilation. We're very confident that our local neighborhood is matter, not antimatter, which means that if there were an anti-galaxy somewhere, there would also need to be a boundary region between the normal matter and antimatter areas. At that boundary region, there would be a constant stream of particle-antiparticle annihilations, which produce gamma rays of a very particular energy. We don't see any such region when we look out at the universe, though, which strongly suggests that there aren't any anti-galaxies running around out there. | {
"source": [
"https://physics.stackexchange.com/questions/1165",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/217/"
]
} |
1,193 | I was reading somewhere about a really cheap way of travelling: using balloons to get ourselves away from the surface of the earth. The idea held that because the earth rotates, we should be able to land in a different place after some time. As we all know, this doesn't happen. Someone said that the reason why this doesn't happen is because the atmosphere (air, clouds etc.) also revolves around the earth (with the same angular velocity as the earth's rotation). Since we are also part of the atmosphere, our position doesn't change relatively. Well, I'm not convinced with that answer. Why does the atmosphere rotate along with earth? Gravitational force is towards the centre of the earth, but I don't see how it's making the atmosphere rotate. | The atmosphere rotates along with the Earth for the same reason you do. Force isn't needed to make something go. That's a basic law of physics - that a thing that's moving will just keep moving if there's no force on it. Force is needed either to make something change its speed, or to make its motion point in a new direction. A force can do both or just one of these. Most forces do both, but a force that pushes in the exactly the same direction you're already going only changes your speed, and does not change your direction. A force that pushes at a right angle to the direction you're already going only changes your direction, and does not add any speed. A force at "10 o'clock", for example, will change both your speed and your direction. As you stand still on Earth, you continue going the same speed, but your direction changes; between day and night you move opposite directions. So the forces on you must be at a right angle to your direction of motion. Indeed, they are. Your motion is from west to east along the surface of the Earth, and the force of gravity pulls you down towards the center of the Earth - the force and your motion are at right angles. Similarly for the atmosphere. It is moving along with the Earth, and moving at a constant speed. It does not need anything to push it along with the Earth. Since only its direction of motion is changing, it only needs a force at a right angle to its motion, the same as you, and the force that does the job is again gravity. That's not the whole picture, because the amount that your direction of motion changes depends on how strong the right-angle force is. It turns out gravity is much too strong for how much our direction of motion changes as the Earth spins. There must be some other force on us and on the atmosphere canceling out most of the gravity. There is. For me it's the force of the chair on my butt. For the atmosphere, it's the air pressure. So gravity doesn't "make the air rotate". The air is already going, and gravity simply changes its direction to pull it in a circle. You may be wondering why the air doesn't just sit there and have the Earth spin underneath it. One answer to that is that from our point of view that would mean incredibly strong wind all the time. That wind would run into stuff and eventually get slowed down to zero (that's from our point of view - the air would "speed up" to our speed of rotation from a point of view out in space watching everything happen). Even the air high up would eventually rotate with the Earth because although it can't slam into mountains or buildings and get stopped from blowing, it can essentially "slam into" the air beneath it due to friction in the air. 
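To put a number on the statement that gravity is much too strong for the direction change alone, here is a quick back-of-the-envelope check (a Python sketch; the radius, rotation period and g below are standard approximate values I am supplying):

```python
import math

R = 6.378e6           # Earth's equatorial radius in metres (approximate)
T = 86164.0           # one sidereal day in seconds
g = 9.81              # gravitational acceleration at the surface, m/s^2

omega = 2.0 * math.pi / T          # angular speed of the Earth's rotation
a_needed = omega**2 * R            # centripetal acceleration needed to follow the circle

print(f"acceleration needed: {a_needed:.3f} m/s^2")   # about 0.034 m/s^2
print(f"gravity supplies:    {g} m/s^2")
print(f"gravity is ~{g / a_needed:.0f} times stronger than required")
```

The leftover roughly 9.78 m/s^2 is exactly what the chair, the ground, or the air pressure has to cancel, which is the point made above.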
(This is a little redundant with dmckee's answer; I was half way done when he beat me to the punch) | {
"source": [
"https://physics.stackexchange.com/questions/1193",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/447/"
]
} |
1,220 | If two ends of a rope are pulled with forces of equal magnitude and opposite direction, the tension at the center of the rope must be zero. True or false? The answer is false. I chose true though and I'm not understanding why. Forces act at the center of mass of the object, so if there are two forces of equal and opposite magnitude, then they should cancel out resulting in zero tension, no? | The tension of the rope is the shared magnitude of the two forces. Imagine cutting the rope at a point and inserting a spring scale in its place. The reading will show the tension. A rope with zero tension would be hanging loosely or laying on the ground, neglecting the rope's mass. | {
"source": [
"https://physics.stackexchange.com/questions/1220",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/41/"
]
} |
1,235 | One can obtain the solution to a $2$-Body problem analytically. However, I understand that obtaining a general solution to an $N$-body problem is impossible. Is there a proof somewhere that shows this possibility/impossibility? Edit: I am looking to prove or disprove the below statement: there exists a power series that
solves this problem, where all the terms
in the series and the summation of the
series must converge. | While the N-body Problem is chaotic, a convergent expansion exists.
The 3-Body expansion was found by Sundman in 1912, and the full N-body problem in 1991 by Wang. However, these expansions are pretty much useless for real problems (millions of terms are required for even short times); you're much better off with a numerical integration. The history of the 3-Body problem is in itself pretty interesting stuff. Check out June Barrow-Green's book, which includes a pretty good analysis of all the relevant physics, along with a ripping tale. | {
"source": [
"https://physics.stackexchange.com/questions/1235",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/605/"
]
} |
1,289 | I have had a question since childhood. Why do we always get circular waves (ripples) in water even when we throw an irregularly shaped object in it? | Actually the ripples are not circular at all. See photo below. For example, a long stick will generate a straight wave front from its sides and circular waves from its edges. Something similar to a rectangle where the two short sides are replaced by semi-circles. As the waves spread, the straight front will retain its length, whereas the circular sides will grow in bigger and bigger circles, hence the impression that on a large body of water the waves end up being circular - they are not, but very close. The reason that an irregular object generates "circular" ripples is therefore this: as the waves propagate, the irregularities are maintained but spread across a larger and larger circular wave front. A very good example of this phenomenon is the Cosmic Microwave Background (CMB) where electromagnetic waves from the Big Bang are measured after having spread for 13.7 billion years. Although the CMB is really, really smooth - because of the "circular ripple" effect, if you like, we can still measure small irregularities, which we think are due to the "irregular shape" of the Big Bang at a certain time. | {
"source": [
"https://physics.stackexchange.com/questions/1289",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/309/"
]
} |
1,292 | Is there any example of a transmission of energy in a medium that does not show wave nature? | {
"source": [
"https://physics.stackexchange.com/questions/1292",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/567/"
]
} |
1,307 | While the speed of light in vacuum is a universal constant ($c$), the speed at which light propagates in other materials/mediums may be less than $c$. This is obviously suggested by the fact that different materials (especially in the case of transparent ones) have a particular refractive index . But surely, matter or even photons can be accelerated beyond this speed in a medium? If so, what are the effects? | Speed of light is indeed lower when light propagates through materials (as opposed to vacuum). This doesn't mean that individual photons go slower but rather that the apparent speed of light pulse is lower due to interactions with atoms of the material. So in this case it is possible for some objects to go "faster than light" and indeed very similar effect to sonic boom, called Cherenkov radiation , appears. Note that for most materials the apparent speed of light is still huge (of the order of speed of light in vacuum) so you need very energetic particles to generate Cherenkov radiation. So this effect is mainly relevant for high-energy particle physics, astrophysics and nuclear physics. | {
"source": [
"https://physics.stackexchange.com/questions/1307",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/619/"
]
} |
1,361 | Scattering of light by light does not occur in the solutions of Maxwell's equations (since they are linear and EM waves obey superposition), but it is a prediction of QED (the most significant Feynman diagrams have a closed loop of four electron propagators). Has scattering of light by light in vacuum ever been observed, for photons of any energy? If not, how close are we to such an experiment? | This was demonstrated by " Experiment 144 " at SLAC in 1997. Here is a list of publications from that project, for instance " Positron Production in Multiphoton Light-by-Light Scattering ", whose abstract reads: A signal of 106±14 positrons above background has been observed in collisions of a low-emittance 46.6 GeV electron beam with terawatt pulses from a Nd:glass laser at 527 nm wavelength in an experiment at the Final Focus Test Beam at SLAC. The positrons are interpreted as arising from a two-step process in which laser photons are backscattered to GeV energies by the electron beam followed by a collision between the high-energy photon and several laser photons to produce an electron-positron pair. These results are the first laboratory evidence for inelastic light-by-light scattering involving only real photons . | {
"source": [
"https://physics.stackexchange.com/questions/1361",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/61/"
]
} |
1,368 | If the velocity is a relative quantity, will it make inconsistent equations when applying it to the conservation of energy equations? For example: In the train moving at $V$ relative to ground, there is an object moving at $v$ relative to the frame in the same direction the frame moves. Observer on the ground calculates the object kinetic energy as $\frac{1}{2}m(v+V)^2$. However, another observer on the frame calculates the energy as $\frac{1}{2}mv^2$. When each of these equations is plugged into the conservation of energy, they will result in 2 distinct results (I think). | Yes, kinetic energy is a relative quantity. As you might guess, this means that when you're using energy conservation, you have to stay within a single frame of reference; all that energy conservation tells you is that the amount of energy as measured in any one frame stays the same over time. You can't meaningfully compare the amount of energy measured in frame A (e.g. the ground) to the amount of energy measured in frame B (e.g. the train). However, you can convert an amount of kinetic energy measured in one frame to another frame, if you know their relative velocity. If you're working at low speeds, the easy (approximate) way to do this is to just calculate the relative velocity, as you did. So if the train observer measures a kinetic energy $K = \frac{1}{2}mv^2$, the ground observer will measure a kinetic energy of $\frac{1}{2}m(v + V)^2$, or $$K + \sqrt{2Km}V + \frac{1}{2}mV^2$$ (in one dimension). If you get up to higher speeds, or you want an exact expression, you'll have to use the relativistic definition of energy. In special relativity, the kinetic energy is given by the difference between the total energy and the "rest energy," $$K = E - mc^2$$ One way to figure out the transformation rule is to use the fact that the total energy is part of a four-vector, along with the relativistic momentum, $$\begin{pmatrix}E/c \\ p\end{pmatrix} = \begin{pmatrix}\gamma_v mc \\ \gamma_v mv\end{pmatrix}$$ where $\gamma_v = 1/\sqrt{1 - v^2/c^2}$. This four-vector transforms under the Lorentz transformation as you shift from one reference frame to another, $$\begin{pmatrix}E/c \\ p\end{pmatrix}_\text{ground} = \begin{pmatrix}\gamma & \gamma\beta \\ \gamma\beta & \gamma\end{pmatrix}\begin{pmatrix}E/c \\ p\end{pmatrix}_\text{train}$$ (where $\beta = V/c$ and $\gamma = 1/\sqrt{1 - \beta^2}$), so the energy as observed from the ground would be given by $$E_\text{ground} = \gamma(E_\text{train} + \beta c p_\text{train})$$ The kinetic energy is obtained by subtracting $mc^2$ from the total energy, so you'd get $$K_\text{ground} = \gamma(E_\text{train} + \beta c p_\text{train}) - mc^2$$ which works out to $$K_\text{ground} = \gamma K_\text{train} + (\gamma - 1) mc^2 + \gamma\beta c p_\text{train}$$ where $K$ is the relativistic kinetic energy and $p$ is the relativistic momentum. If you wanted it in terms of energy alone: $$K_\text{ground} = \gamma K_\text{train} + (\gamma - 1) mc^2 + \gamma\beta\sqrt{K_\text{train}^2 + 2 mc^2 K_\text{train}}$$ You might start to notice a similarity to the non-relativistic expression above ($K + \sqrt{2Km}V + \frac{1}{2}mV^2$), and indeed, if you plug in some approximations that are valid at low speeds ($\gamma \approx 1$, $\gamma - 1 \approx V^2/c^2$, $K_\text{train} \approx \frac{1}{2}mv^2 \ll mc^2$), you will recover exactly that expression. | {
"source": [
"https://physics.stackexchange.com/questions/1368",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/652/"
]
} |
1,479 | I've watched a video from the American National History Museum entitled The Known Universe . The video shows a continuous animation zooming out from earth to the entire known universe. It claims to accurately display every star and galaxy mapped so far. At one point in this video [3:00 - 3:15 minutes] it displays this text: The galaxies we have mapped so far. The empty areas where we have yet to
map. At this point, the shape of the "universe" is roughly hourglass, with earth at it's centre. I'm having trouble understanding what this represents and why they shape is hourglass. Is this simply because we have chosen to map in those directions first? Is there a reason astronomers would choose to map in this pattern, or is this something more fundamental about the shape of the universe? Or is it to do with the speed of light reaching us from these distant galaxies? Continuing on from the hourglass pattern, the cosmic microwave background radiation is represented as a sphere and labelled "Our cosmic horizon in space and time". This doesn't help clear anything up. If we can map CMB in all directions, why have we only mapped galaxies in this hourglass shape. | First of all, the universe is most certainly not shaped like an hourglass. It simply looks that way because the gas and dust in the plane of our galaxy obstruct our view of anything outside the galaxy in those directions. So we can only see other galaxies (and similarly distant objects) by pointing telescopes at some angle to the galactic plane. That gives the "hourglass" shape: it's simply because those are the only directions we can see in. In reality, we have every reason to think galaxies are distributed more or less uniformly, once you look at a large enough scale. The video description doesn't cite its sources, but I suspect that (some/most of) the information on the distant galaxies comes from the Sloan Digital Sky Survey , which is AFAIK the most comprehensive survey of objects in the universe outside our own cluster of galaxies. You might want to check out the information on their website if you're interested in this stuff. And as long as I'm citing sources, the latest CMB data comes from the WMAP project . | {
"source": [
"https://physics.stackexchange.com/questions/1479",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/692/"
]
} |
1,493 | How efficient is an electric heater? My guess: greater than 95%. Possibly even 99%. I say this because most energy is converted into heat; some is converted into light and kinetic energy, and possibly other forms of energy. Anyone other opinions? (This is not homework. I am just curious and I'm having a discussion with a friend who says an electric heater is horribly inefficient, less than 5%.) | It depends on what you mean by efficiency. Suppose you want to heat your house. An electric heater like you're considering would do this by converting electrical energy directly into heat. Pretty much all the electrical energy does get converted to heat, as you suggest. The energy used to get a certain amount of heat into the house is simply equal to that amount of heat. In that sense, the electric heater is 100% efficient, since energy not directly turned into heat will be turned into heat soon. That isn't a very useful way of thinking about efficiency, though, because any form of energy in your house will probably decay into heat energy pretty quickly. Your computer, television, and refrigerator are 100% efficient at heating your house from this point of view, because although they do things other than generate heat, the energy they use to do those things becomes heat in short order. By contrast, a heat pump would heat your house by taking heat from the outside and moving it inside. The energy it needs to do this depends on the outside and inside temperatures. If the temperatures inside and outside are $T_i$ and $T_o$, an ideal heat pump (i.e. a Carnot engine ) would require $(1-\frac{T_o}{T_i})*dH$ Joules of work energy to move $dH$ Joules of heat energy from outside to inside (if the outside temperature were greater, this number is negative, meaning the heat pump can extract energy). The efficiency of the electric heater, compared to the idealized heat pump, is $1-\frac{T_o}{T_i}$ for given inside and outside temperatures. When the inside and outside temperatures are the same, the electric heater is zero percent efficient. If it's 0C outside and 25C inside, the electric heater is about 8% efficient. | {
"source": [
"https://physics.stackexchange.com/questions/1493",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/104/"
]
} |
1,557 | I'm in a freshman-level physics class now, so I don't know much, but something I heard today intrigued me. My TA was talking about how at the research facility he worked at, they were able to accelerate some certain particle to "99.99% the speed of light". I said why not 100%, and I didn't quite understand his explanation, but he said it wasn't possible. This confused me. Since the speed of light is a finite number, why can we go so close to its speed but not quite? Edit: I read all the answers, and I think I'm sort of understanding it. Another silly question though: If we are getting this particle to 99.99% the speed of light by giving it some sort of finite acceleration, and increasing it more and more, why can't we increase it just a little more? Sorry I know this is a silly question. I totally accept the fact we can't reach 100%, but I'm just trying to break it down. If we've gotten so close by giving it larger and larger acceleration every time, why can't we just supply it with more acceleration? And how much of a difference is there between 99.99% the speed of light, and the speed of light? (I'm not really sure if "difference" is a good word to use, but hopefully you get what I'm asking). | By special relativity, the energy needed to accelerate a particle (with mass) grows super-quadratically when the speed is close to c, and is ∞ when it is c. $$ E = \gamma mc^2 = \frac{mc^2}{\sqrt{1 - (\text{“percent of speed of light”})^2}} $$ Since you can't supply infinite energy to the particle, it is not possible to get to 100% c. Edit: Suppose you have got an electron (m = $9.1 \times 10^{-31}$ kg) to 99.99% of the speed of light. This is equivalent to providing 36 MeV of kinetic energy. Now suppose you accelerate "a little more" by providing yet another 36 MeV of energy. You will find this only boosts the electron to 99.9975% c. Say you accelerate "a lot more" by providing 36,000,000 MeV instead of 36 MeV. That will still make you reach 99.99999999999999% c instead of 100%. The energy increase explodes as you approach c, and your input will exhaust eventually no matter how large it is. The difference between 99.99% and 100% is an infinite amount of energy. | {
"source": [
"https://physics.stackexchange.com/questions/1557",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/41/"
]
} |
1,601 | I am a graduate student in mathematics who would like to learn some classical mechanics. However, there is one caveat: I am not interested in the standard coordinate approach. I can't help but think of the fields that arise in physics as sections of vector bundles (or maybe principal bundles) and would love an approach to classical mechanics or what have you that took advantage of this. Now for the questions: Is there a text book you would recommend that phrases the constructions in classical mechanics via bundles without an appeal to transition functions? What are the drawbacks to this approach other than the fact that it makes computations less doable? (if it does that) Are there benefits to thinking about things this way, ie would it be of benefit to someone attempting to learn this material to do it this way? | 1. I am in love with Fecko's Differential Geometry and Lie Groups for Physicists . Despite not being just about mechanics (but rather about more or less all rudimentary modern theoretical physics) it discusses both Lagrangian and Hamiltonian formalism. It also provides countless exercises (with nice hints) so that you can really get a feel for the matter. 2. I can't think of any major drawbacks. Of course, if the problem has no symmetry you sometimes have no other choice than to go back to some coordinates and solve numerically. But this is probably non-issue for you because I suppose you first want to understand physical problems with some structure. 3. There are countless benefits. To list just few of them. relation to symmetries and conserved quantities becomes obvious. Noether's theorem in Hamiltonian formalism is so amazingly simple statement (Hamiltonian is constant for symmetry flow if an only if the generator of the symmetry is constant for Hamiltonian flow) that one has to wonder where all the long-winded coordinate calculations went. Not only are the calculations short, one also gains valuable geometrical insights e.g. about the flow of the configuration on the manifold. It's a beautiful formalism. I don't know about others but whenever I have to calculate in coordinates I become nervous. I can compute the results but after few pages when most of the quantities mysteriously cancel, you don't really know why what you derived is true. So then you go back to geometry and lo and behold, the derivation is just few lines and obvious. Of course I am exaggerating now but that's the way I feel. It's the basis for all of modern physics. If the above four points were true in classical mechanics, they are even more true when dealing with things like gauge theories (and that is where the full beauty and power of mathematics comes out). | {
"source": [
"https://physics.stackexchange.com/questions/1601",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/40/"
]
} |
1,639 | Clearly there will be differences like air resistance; I'm not interested in that. It seems like you're working against gravity when you're actually running in a way that you're not if you're on a treadmill, but on the other hand it seems like one should be able to take a piece of the treadmill's belt as an inertial reference point. What's going on here? | For me it is axiomatic that machine miles are easier than real miles, but let's analyze the situation. Assume the runner maintains a constant velocity up the hill, or remains stationary in the frame of the gym on the treadmill. In both cases the runner's acceleration is zero, so we know that her legs must provide a constant force with upward magnitude $mg$, and they have to do this against a surface passing by at an angle $\theta$ below the horizontal and moving with a velocity $v$. The kinematics in the runner's frame of reference look the same. This is not the cause of the difference in perceived difficulty. I have always assumed that the difference in difficulty was twofold: Wind resistance is not really negligible. The treadmill presents a very uniform, reliable surface and the runner need not lift her legs as high to ensure non-tripping progress. Also, modern treadmills are designed to be relatively easy on the knees, and they accomplish this by having a slightly springy feel which presumably returns some energy to the runner. | {
"source": [
"https://physics.stackexchange.com/questions/1639",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
1,686 | I'm reading Nano: The Essentials by T. Pradeep and I came upon this statement in the section explaining the basics of scanning electron microscopy. However, the equation breaks down when the electron velocity approaches the speed of light as mass increases. At such velocities, one needs to do relativistic correction to the mass so that it becomes[...] We all know about the famous theory of relativity, but I couldn't quite grasp the "why" of its concepts yet. This might shed new light on what I already know about time slowing down for me if I move faster. Why does the (relativistic) mass of an object increase when its speed approaches that of light? | The mass (the true mass which physicists actually deal with when they calculate something concerning relativistic particles) does not change with velocity. The mass (the true mass!) is an intrinsic property of a body, and it does not depend on the observer's frame of reference. I strongly suggest reading this popular article by Lev Okun, where he calls the concept of relativistic mass a "pedagogical virus". What actually changes at relativistic speeds is the dynamical law that relates momentum and energy to the velocity (which was already written). Let me put it this way: trying to ascribe the modification of the dynamical law to a changing mass is the same as trying to explain non-Euclidean geometry by redefining $\pi$! Why this law changes is the correct question, and it is discussed in the answers here. | {
"source": [
"https://physics.stackexchange.com/questions/1686",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/293/"
]
} |
1,775 | If temperature makes particles vibrate faster, and movement is limited by the speed of light, then I would assume that temperature must be limited as well. Why is there no limit? | I think the problem here is that you're being vague about the limits Special Relativity impose. Let's get this clarified by being a bit more precise. The velocity of any particle is of course limited by the speed of light c . However, the theory of Special Relativity does not imply any limit on energy . In fact, as energy of a massive particle tends towards infinity, its velocity tends toward the speed of light. Specifically, $$E = \text{rest mass energy} + \text{kinetic energy} = \gamma mc^2$$ where $\gamma = 1/\sqrt{1-(u/c)^2}$. Clearly, for any energy and thus any gamma, $u$ is still bounded from above by $c$. We know that microscopic (internal) energy relates to macroscopic temperature by a constant factor (on the order of the Boltzmann constant), hence temperature of particles, like energy, has no real limit. | {
"source": [
"https://physics.stackexchange.com/questions/1775",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/213/"
]
} |
1,787 | General relativity says that spacetime is a Lorentzian 4-manifold $M$ whose metric satisfies Einstein's field equations. I have two questions: What topological restrictions do Einstein's equations put on the manifold? For instance, the existence of a Lorentz metric implies some topological things, like the Euler characteristic vanishing. Are there any experiments being done or even any hypothetical experiments that can give information on the topology? E.g. is there a group of graduate students out there trying to contract loops to discover the fundamental group of the universe? | That's a great question! What you are asking about is one of the missing links between classical and quantum gravity. On their own, the Einstein equations, $ G_{\mu\nu} = 8 \pi G T_{\mu\nu}$, are local field equations and do not contain any topological information. At the level of the action principle, $$ S_{\mathrm{eh}} = \int_\mathcal{M} d^4 x \, \sqrt{-g} \, \mathbf{R} $$ the term we generally include is the Ricci scalar $ \mathbf{R} = \mathrm{Tr}[ R_{\mu\nu} ] $, which depends only on the first and second derivatives of the metric and is, again, a local quantity. So the action does not tell us about topology either, unless you're in two dimensions, where the Euler characteristic is given by the integral of the ricci scalar: $$ \int d^2 x \, \mathcal{R} = \chi $$ (modulo some numerical factors). So gravity in 2 dimensions is entirely topological. This is in contrast to the 4D case where the Einstein-Hilbert action appears to contain no topological information. This should cover your first question. All is not lost, however. One can add topological degrees of freedom to 4D gravity by the addition of terms corresponding to various topological invariants (Chern-Simons, Nieh-Yan and Pontryagin). For instance, the Chern-Simons contribution to the action looks like: $$ S_{cs} = \int d^4 x \frac{1}{2} \left(\epsilon_{ab} {}^{ij}R_{cdij}\right)R_{abcd} $$ Here is a very nice paper by Jackiw and Pi for the details of this construction. There's plenty more to be said about topology and general relativity. Your question only scratches the surface. But there's a goldmine underneath ! I'll let someone else tackle your second question. Short answer is "yes". | {
"source": [
"https://physics.stackexchange.com/questions/1787",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/37/"
]
} |
1,799 | I would be very happy if someone could take a stab at conveying what conformal blocks are and how they are used in conformal field theory (CFT). I'm finally getting the glimmerings of understanding from reading Moore and Read's wonderful paper. But I think/hope this site has people who can explain the notions involved in a simpler and more intuitive manner. Edit: Here is a simple example, taken from pg 8 of the reference cited above ... In a 2D CFT we have correlation functions of fields $ \phi_i(z,\bar z) $, (where $ z = x+\imath y$) at various points on the complex plane. The n-point correlation function can be expanded as: $$ \left \langle \prod_{a=1}^n \phi_{i_a}(z_a,\bar z_a) \right \rangle = \sum_p | F_{p\; i_{1} \dots i_n}(z_{1} \dots z_n)|^2 $$ Here $p$ labels members of a basis of functions $ F_{p\; i_1 \dots i_n}(z_{1} \dots z_n) $ which span a vector space for each n-tuple $(z_{1} \dots z_n)$ These functions $F_p$ are known as conformal blocks, and appear to give a "fourier" decomposition of the correlation functions. This is what I've gathered so far. If someone could elaborate with more examples that would be wonderful ! Edit: It is proving very difficult to decide which answer is the "correct" one. I will give it a few more days. Perhaps the situation will change ! The "correct" answer goes to (drum-roll): David Zavlasky. Well they are all great answers. I chose David's for the extra five points because his is the simplest, IMHO. He also mentions the "cross-ratio" which is a building block of CFT. | I did a bit of reading about this, and it turns out that conformal blocks are actually quite relevant to my research! So I figured it was worth the time to investigate in some more detail. I've never studied conformal field theory formally, but I hope I'm not writing anything outright wrong here. (I lost my first draft and had to reconstruct it, which is why it's taken so long) In conformal field theory, it's common to represent coordinates on a two-dimensional space by using complex numbers, so $\vec{r} = (x,y)$ becomes $\rho = x + iy$ . In this notation, the theory is invariant under the action of a Möbius transformation (a.k.a. conformal transformation), $$\rho \to \frac{a\rho + b}{c\rho + d}$$ in which $a$ , $b$ , $c$ , and $d$ are complex constants that satisfy $ad - bc \neq 0$ . The transformation has three complex degrees of freedom - in other words, if you specify three initial points and three final points on the complex plane, there is a unique Möbius transformation that maps those three initial points to the three final points. So any function of four coordinates on the plane, for example a four-point correlation function of quantum fields, $$G_4 = \langle \phi_1(\rho_1,\rho_1^*) \phi_2(\rho_2,\rho_2^*) \phi_3(\rho_3,\rho_3^*) \phi_4(\rho_4,\rho_4^*) \rangle$$ has only one real degree of freedom, after you factor out the gauge freedoms corresponding to the Möbius transformation. In other words, you can map any three of those coordinates on to three fixed reference points (for example $0$ , $1$ , and $\infty$ ), and you're left with a function of only one variable, something like $$x = \frac{(\rho_4 - \rho_2)(\rho_3 - \rho_1)}{(\rho_4 - \rho_1)(\rho_3 - \rho_2)}$$ This opens the door to write $G_4$ as a simple function of this one ratio (at least, simpler than a function of four independent coordinates). 
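The invariance of this ratio is easy to verify numerically (a short Python sketch; the four points and the Möbius coefficients below are arbitrary values I chose, subject only to $ad - bc \neq 0$): applying the same transformation to all four coordinates leaves $x$ unchanged.

```python
# Numerical check that the cross-ratio x is unchanged by a Mobius transformation.

def cross_ratio(r1, r2, r3, r4):
    return ((r4 - r2) * (r3 - r1)) / ((r4 - r1) * (r3 - r2))

def mobius(z, a, b, c, d):
    return (a * z + b) / (c * z + d)

# Four arbitrary points in the complex plane
rho = [0.3 + 1.2j, -2.0 + 0.5j, 1.7 - 0.4j, 0.9 + 3.1j]

# Arbitrary Mobius coefficients with a*d - b*c != 0
a, b, c, d = 2.0 + 1.0j, -0.5j, 1.0 + 0.0j, 3.0 - 2.0j

x_before = cross_ratio(*rho)
x_after = cross_ratio(*(mobius(z, a, b, c, d) for z in rho))

print(x_before)
print(x_after)   # agrees with x_before up to rounding error
```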
The particular part of CFT in which conformal blocks are applied (as far as I can tell; I'm starting to get a little out of my depth here) has to do with Virasoro algebras. Specifically, the way the individual fields $\phi_i$ transform under a conformal transformation is described by the group defined by the Virasoro algebra. The four-point function $G_4$ can be written as a sum of contributions from different representations of the group, $$G_4(\rho_1,\rho_2,\rho_3,\rho_4) = \sum_l G_l f(D_l, d_i, C, x) f(D_l, d_i, C, x^*)$$ Here $l$ indexes the different representations; $C$ is a constant (the "central charge" of the Virasoro algebra); and $d_i$ and $D_l$ are anomalous dimensions of the external fields and the internal field respectively. The function $f$ is called a conformal block. $f$ is useful because it can be calculated (in principle or in practice, I'm not sure which) using only information about a single representation of the Virasoro group. It can be expressed as a series in $x$ of a known form, the coefficients of which depend on the structure of the group. Further Reading Belavin A. Infinite conformal symmetry in two-dimensional quantum field theory. Nuclear Physics B . 1984;241(2):333-380. Available at: https://doi.org/10.1016/0550-3213(84)90052-X . Zamolodchikov AB. Conformal symmetry in two dimensions: an explicit recurrence formula for the conformal partial wave amplitude. Communications in Mathematical Physics (1965-1997) . 1984;96(3):419-422. Available at: https://doi.org/10.1007/BF01214585 . Zamolodchikov AB. Conformal symmetry in two-dimensional space: Recursion representation of conformal block . Theoretical and Mathematical Physics . 1987;73(1):1088-1093. Available at: https://doi.org/10.1007/BF01022967 . and of course DiFrancesco et al's book. | {
"source": [
"https://physics.stackexchange.com/questions/1799",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
1,836 | I think that something is invisible if it's isolated particles are smaller than the wavelength of visible light. Is this correct? Why is air invisible? What about other gases and fumes which are visible? | I think the pithy answer is that our eyes adapted to see the subset of the electromagnetic spectrum where air has no absorption peaks. If we saw in different frequency ranges, then air would scatter the light we saw, and our eyes would be less useful. | {
"source": [
"https://physics.stackexchange.com/questions/1836",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/293/"
]
} |
1,894 | Coming from a mathematical background, I'm trying to get a handle on the path integral formulation of quantum mechanics. According to Feynman, if you want to figure out the probability amplitude for a particle moving from one point to another, you 1) figure out the contribution from every possible path it could take, then 2) "sum up" all the contributions. Usually when you want to "sum up" an infinite number of things, you do so by putting a measure on the space of things, from which a notion of integration arises. However the function space of paths is not just infinite, it's extremely infinite . If the path-space has a notion of dimension, it would be infinite-dimensional (eg, viewed as a submanifold of $C([0,t] , {\mathbb R}^n))$. For any reasonable notion of distance, every ball will fail to be compact. It's hard to see how one could reasonably define a measure over a space like this - Lebesgue-like measures are certainly out. The books I've seen basically forgo defining what anything is, and instead present a method to do calculations involving "zig-zag" paths and renormalization. Apparently this gives the right answer experimentally, but it seem extremely contrived (what if you approximate the paths a different way, how do you know you will get the same answer?). Is there a more rigorous way to define Feynman path integrals in terms of function spaces and measures? | Path integral is indeed very problematic on its own. But there are ways to almost capturing it rigorously. Wiener process One way is to start with Abstract Wiener space that can be built out of the Hamiltonian and carries a canonical Wiener measure. This is the usual measure describing properties of the random walk. Now to arrive at path integral one has to accept the existence of "infinite-dimensional Wick rotation " and analytically continue Wiener measure to the complex plane (and every time this is done a probabilist dies somewhere). This is the usual connection between statistical physics (which is a nice, well-defined real theory) at inverse temperature $\beta$ in (N+1,0) space-time dimensions and evolution of the quantum system in (N, 1) dimensions for time $t = -i \hbar \beta$ that is used all over the physics but almost never justified. Although in some cases it was actually possible to prove that Wightman QFT theory is indeed a Wick rotation of some Euclidean QFT (note that quantum mechanics is also a special case of QFT in (0, 1) space-time dimensions). Intermezzo This is a good place to point out that while path integral is problematic in QM, whole lot of different issues enter with more space dimensions. One has to deal with operator valued distributions and there is no good way to multiply them (which is what physicist absolutely need to do). There are various axiomatic approaches to get a handle on this and they in fact do look very nice. Except that it's very hard to actually find a theory that satisfies these axioms. In particular, none of our present day theories of Standard model have been rigorously defined. Anyway, to make the Wick rotation a bit clearer, recall that Schrödinger equation is a kind of diffusion equation but for an introduction of complex numbers. And then just come back to the beginning and note that diffusion equation is macroscopic equation that captures the mean behavior of the random walk. (But this is not to say that path integral in any way depends on the Schrödingerian, non-relativistic physics) Others There were other approaches to define the path-integral rigorously. 
They propose some set of axioms that path-integral has to obey and continue from there. To my knowledge (but I'd like to be wrong), all of these approaches are too constraining (they don't describe most of physically interesting situations). But if you'd like I can dig up some references. | {
"source": [
"https://physics.stackexchange.com/questions/1894",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/821/"
]
} |
1,906 | I'm trying to understand the Kubo Formula for the electrical conductivity in the context of the Quantum Hall Effect. My problem is that several papers, for instance the famous TKNN (1982) paper, or an elaboration by Kohmoto (1984), write the diagonal entries of the conductivity tensor in the form $$ \sigma_{xy}(\omega \to 0) = \frac{ie^2}{\hbar} \sum_{E^a < E_F < E^b} \frac{\langle a|v_x|b \rangle \langle b|v_y|a \rangle - \langle a|v_y|b \rangle \langle b|v_x|a \rangle}{(E^a - E^b)^2} .$$ This is the static limit $\omega\to 0$ and low temperature $T\to 0$. The sum goes over all eigenstates $|a\rangle$ and $|b\rangle$ of the single-particle Hamiltonian. $E_F$ is the Fermi energy. $v_x$ and $v_y$ are the single-particle velocity operators. However, these papers don't derive this equation, which is unfortunate because the Kubo formula is usually not presented in this form. I have found (and succeeded in rederiving) the following variation instead $$ \sigma_{xy}(\omega+i\eta) = \frac{-ie^2}{V(\omega + i\eta)} \sum_{a,b} f(E^a) \left( \frac{\langle a|v_x|b \rangle \langle b|v_y|a \rangle}{\hbar\omega + i\eta + E^a - E^b} + \frac{\langle a|v_y|b \rangle \langle b|v_x|a \rangle}{-\hbar\omega - i\eta + E^a - E^b} \right).$$ This is formula (13.37) from Ashcroft, Mermin, though they don't actually prove it. $f(E)$ is the Fermi distribution. A nice derivation is given in Czycholl (German). Now, my question is, obviously: How to derive the first formula from the second? I can see that the first equation arises as the linear term when writing the sum as a power series in $\omega$, but why doesn't the constant term diverge? | The first formula indeed follows from the second formula if we let $\omega\to0$. To see that, expand the fractions as $$ \frac1{\pm\hbar\omega + E^a - E^b} = \frac1{E^a-E^b}\left(1 \mp \frac{\hbar\omega}{E^a-E^b}\right) + \mathcal O(\omega^2)$$ to obtain $\sigma_{xy} = \sigma^1 + \sigma^2$ as the sum of a potentially divergent term $$ \sigma^1 = \frac{-ie^2}{V\omega} \sum_{a,b} f(E^a) \frac{\langle a|v_x|b \rangle \langle b|v_y|a \rangle + \langle a|v_y|b \rangle \langle b|v_x|a \rangle}{E^a - E^b} $$ and a term that looks like the first formula $$ \sigma^2 = \frac{-ie^2\hbar}{V} \sum_{a,b} f(E^a) \frac{- \langle a|v_x|b \rangle \langle b|v_y|a \rangle + \langle a|v_y|b \rangle \langle b|v_x|a \rangle}{(E^a - E^b)^2} .$$ To see that the first term vanishes instead of diverging, we have to use the Heisenberg equation of motion $v_x = \frac{d}{dt}x = [H_0,x]$ which gives $$ \langle a | v_x | b \rangle = \langle a | H_0 x - x H_0 | b \rangle = (E^a-E^b) \langle a | x | b \rangle $$ and thus $$ \langle a|v_x|b \rangle \langle b|v_y|a \rangle + \langle a|v_y|b \rangle \langle b|v_x|a \rangle = (E^a-E^b) (\langle a|x|b \rangle \langle b|v_y|a \rangle - \langle a|v_y|b \rangle \langle b|x|a \rangle) .$$ The factors $(E^a-E^b)$ cancel and the remaining sum over $b$ becomes a sum over the identity $\sum_b |b\rangle\langle b| = 1$. Thus, we arrive at $$ \sigma^1 = \frac{-ie^2}{V\omega} \sum_{a} f(E^a) \langle a|xv_y - v_yx |a \rangle = 0 $$ since the commutator $[x,v_y]$ vanishes. To see that the second term is correct, we have to get the summation indices right.
To do that, we have to rearrange the summation to obtain $$ \sigma^2 = \frac{ie^2\hbar}{V} \sum_{a,b} (f(E^a)-f(E^b))\frac{\langle a|v_x|b \rangle \langle b|v_y|a \rangle}{(E^a - E^b)^2} .$$ In the limit $T\to0$, the difference of Fermi-Dirac distributions $f(E^a)-f(E^b)$ will be equal to $1$ if $E^a < E_F < E^b$ $-1$ if $E^b < E_F < E^a$ $0$ otherwise Using this and rearranging the summation again gives the Kubo formula in the first form. | {
"source": [
"https://physics.stackexchange.com/questions/1906",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/188/"
]
} |
1,907 | I understand that the Moon's phases are determined by its position in orbit relative to the Sun. (See: Full Story on the Moon.) The "shadow" is not cast by the Earth (a common misconception - this is actually a lunar eclipse), but by the moon's body itself. It would appear that, in order for the moon to be a new moon, it would have to be somewhere between the Earth and the Sun. However, it would seem that, if we were looking at a new moon, we'd necessarily be looking at the Sun as well. Why, then, can we sometimes see a new moon at night? Why doesn't it vanish at night for a half-month every month, between its last/first quarters? | Well, I'd like to say that you are almost there. The key point of this question is that illustrations usually show the relative positions but not the real size ratio. If the size and distance of the moon were really as such pictures show, it would be much harder to find when it lies on the same side as the sun. Because to see it, the smallest position angle from the sun is $\alpha = (R_E+R_M)/D_{EM}$. Then it would vanish for more days every month, as you say. But the real situation is like this: since the real distance is so large, even when the moon is quite close to the sun, people near the day-night terminator can still see it. An estimate can be given in this way:
$\alpha = (R_E+R_M)/D_{EM} = (6471+3476)/384400 = 0.0259 $ rad.
So there are only $ 2 \alpha \times 180/\pi \times 28/360 = 0.23 $ days when we cannot see it from the Earth. Due to the strong daylight, this time can be longer, but it is still within one day. | {
"source": [
"https://physics.stackexchange.com/questions/1907",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/834/"
]
} |
1,957 | I was taught that something which reflects all the colors of light is white. The function of a mirror is the same, it also reflects all light. What's the difference? Update: But what if the white object is purely plain and does not scatter light? Will it be a mirror? | The difference is the direction the light is emitted in. Mirrors 'bounce' light in a predictable direction, white objects scatter light. | {
"source": [
"https://physics.stackexchange.com/questions/1957",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/847/"
]
} |
1,984 | I read the definition of work as
$$W ~=~ \vec{F} \cdot \vec{d}$$
$$\text{ Work = (Force) $\cdot$ (Distance)}.$$ If a book is there on the table, no work is done as no distance is covered. If I hold up a book in my hand and my arm is stretched, if no work is being done, where is my energy going? | While you do spend some body energy to keep the book lifted, it's important to differentiate it from physical effort. They are connected but are not the same. Physical effort depends not only on how much energy is spent, but also on how energy is spent. Holding a book in a stretched arm requires a lot of physical effort, but it doesn't take that much energy. In the ideal case, if you manage to hold your arm perfectly steady, and your muscle cells managed to stay contracted without requiring energy input , there wouldn't be any energy spent at all because there wouldn't be any distance moved. On real scenarios, however, you do spend (chemical) energy stored within your body, but where is it spent? It is spent on a cellular level. Muscles are made with filaments which can slide relative to one another, these filaments are connected by molecules called myosin, which use up energy to move along the filaments but detach at time intervals to let them slide.
When you keep your arm in position, myosins hold the filaments in position, but when one of them detaches other myosins have to make up for the slight relaxation locally. Chemical energy stored within your body is released by the cell as both work and heat.* Both on the ideal and the real scenarios we are talking about the physical definition of energy. On your consideration, you ignore the movement of muscle cells, so you're considering the ideal case. A careful analysis of the real case leads to the conclusion that work is done and heat is released, even though the arm itself isn't moving. * Ultimately, the work done by the cells is actually done on other cells, which eventually dissipates into heat due to friction and non-elasticity. So all the energy you spend is invested in keeping the muscle tension and eventually dissipated as heat. | {
"source": [
"https://physics.stackexchange.com/questions/1984",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/847/"
]
} |
2,051 | In the standard model of particle physics, there are three generations of quarks (up/down, strange/charm, and top/bottom), along with three generations of leptons (electron, muon, and tau). All of these particles have been observed experimentally, and we don't seem to have seen anything new along these lines. A priori, this doesn't eliminate the possibility of a fourth generation, but the physicists I've spoken to do not think additional generations are likely. Question: What sort of theoretical or experimental reasons do we have for this limitation? One reason I heard from my officemate is that we haven't seen new neutrinos. Neutrinos seem to be light enough that if another generation's neutrino is too heavy to be detected, then the corresponding quarks would be massive enough that new physics might interfere with their existence. This suggests the question: is there a general rule relating neutrino masses to quark masses, or would an exceptionally heavy neutrino just look bizarre but otherwise be okay with our current state of knowledge? Another reason I've heard involves the Yukawa coupling between quarks and the Higgs field. Apparently, if quark masses get much beyond the top quark mass, the coupling gets strong enough that QCD fails to accurately describe the resulting theory. My wild guess is that this really means perturbative expansions in Feynman diagrams don't even pretend to converge, but that it may not necessarily eliminate alternative techniques like lattice QCD (about which I know nothing). Additional reasons would be greatly appreciated, and any words or references (the more mathy the better) that would help to illuminate the previous paragraphs would be nice. | There are very good experimental limits on light neutrinos that have the same electroweak couplings as the neutrinos in the first 3 generations from the measured width of the $Z$ boson. Here light means $m_\nu < m_Z/2$. Note this does not involve direct detection of neutrinos, it is an indirect measurement based on the calculation of the $Z$ width given the number of light neutrinos. Here's the PDG citation: http://pdg.lbl.gov/2010/listings/rpp2010-list-number-neutrino-types.pdf There is also a cosmological bound on the number of neutrino generations coming from production of Helium during big-bang nucleosynthesis. This is discussed in "The Early Universe" by Kolb and Turner although I am sure there are now more up to date reviews. This bound is around 3 or 4. There is no direct relationship between quark and neutrino masses, although you can derive
possible relations by embedding the Standard Model in various GUTs such as those based on
$SO(10)$ or $E_6$. The most straightforward explanation in such models of why neutrinos are light is called the see-saw mechanism http://en.wikipedia.org/wiki/Seesaw_mechanism and leads to neutrino masses $m_\nu \sim m_q^2/M$, where $M$ is some large mass scale on the order of $10^{11}~\mathrm{GeV}$ associated with the vacuum expectation value of some Higgs field that plays a role in breaking the GUT symmetry down to $SU(3) \times SU(2) \times U(1)$. If the same mechanism is at play for additional generations, one would expect the neutrinos to be lighter than $M_Z$ even if the quarks are quite heavy.
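To get a feel for those numbers, here is a rough back-of-the-envelope sketch of the $m_\nu \sim m_q^2/M$ scaling (the quark masses and the scale $M$ below are only illustrative round values, not fitted numbers):

```python
# Crude see-saw estimate m_nu ~ m_q^2 / M, with everything in GeV
M = 1e11  # heavy scale quoted above, ~1e11 GeV

for label, m_q in [("light quark, m_q ~ 1 GeV", 1.0),
                   ("hypothetical heavy 4th-generation quark, m_q ~ 500 GeV", 500.0)]:
    m_nu = m_q**2 / M                                   # GeV
    print(f"{label}: m_nu ~ {m_nu:.1e} GeV = {m_nu * 1e9:.1e} eV")

# prints ~1e-11 GeV (~0.01 eV) and ~2.5e-6 GeV (~2.5 keV) respectively,
# both far below m_Z/2 ~ 45 GeV, which is the point made above
```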
Also, as you mentioned, if you try to make fourth or higher generations very heavy you have to increase the Yukawa coupling to the point that you are outside the range of perturbation theory. These are rough theoretical explanations and the full story is much more complicated but the combination of the excellent experimental limits, cosmological bounds and theoretical expectations makes most people skeptical of further generations. Sorry this wasn't mathier. | {
"source": [
"https://physics.stackexchange.com/questions/2051",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/761/"
]
} |
2,072 | I searched and couldn't find it on the site, so here it is (quoted to the letter): On this infinite grid of ideal one-ohm resistors, what's the equivalent resistance between the two marked nodes? With a link to the source . I'm not really sure if there is an answer for this question. However, given my lack of expertise with basic electronics, it could even be an easy one. | Nerd Sniping! The answer is $\frac{4}{\pi} - \frac{1}{2}$. Simple explanation: successive approximation. Start with the simplest case and keep adding resistors, so that larger and larger finite grids approximate the infinite one; the resistance between the two marked nodes converges quickly as the grid grows.
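In that same spirit, here is a minimal numerical sketch of the successive-approximation idea (plain NumPy, solving a finite $n \times n$ patch of the grid with two nodes a knight's move apart near its centre; the patch size is just an illustrative choice):

```python
import numpy as np

def grid_resistance(n, a, b):
    """Resistance between nodes a and b of an n-by-n patch of 1-ohm resistors."""
    idx = lambda i, j: i * n + j
    N = n * n
    L = np.zeros((N, N))                       # graph Laplacian = conductance matrix
    edges = [(idx(i, j), idx(i + 1, j)) for i in range(n - 1) for j in range(n)]
    edges += [(idx(i, j), idx(i, j + 1)) for i in range(n) for j in range(n - 1)]
    for u, v in edges:                         # each edge is a 1-ohm resistor
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    I = np.zeros(N)
    I[idx(*a)], I[idx(*b)] = 1.0, -1.0         # feed 1 A in at a, take it out at b
    V = np.zeros(N)
    V[1:] = np.linalg.solve(L[1:, 1:], I[1:])  # ground node 0 so the system is nonsingular
    return V[idx(*a)] - V[idx(*b)]             # R = (V_a - V_b) / 1 A

n, c = 41, 20                                  # modest patch; larger n creeps towards the limit
print(grid_resistance(n, (c, c), (c + 1, c + 2)))  # knight's-move pair: a bit above 0.7732
print(4 / np.pi - 0.5)                             # exact infinite-grid value
```

Since deleting the resistors outside the finite patch can only raise the resistance, these finite-grid numbers approach $\frac{4}{\pi} - \frac{1}{2}$ from above as $n$ grows.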
Mathematical derivation: for two nodes separated by $(m,m)$ along a lattice diagonal, the infinite-grid resistance has the closed form $$R_{m,m}=\frac 2\pi \left( 1 + \frac 13 + \frac 15 + \frac 17 + \dots + \frac 1 {2m-1} \right),$$ and combining closed forms like this with the lattice recurrence relations yields the knight's-move value $\frac{4}{\pi} - \frac{1}{2}$ quoted above. | {
"source": [
"https://physics.stackexchange.com/questions/2072",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/724/"
]
} |
2,110 | I have looked at other questions on this site (e.g. "why does space expansion affect matter") but can't find the answer I am looking for. So here is my question: One often hears talk of space expanding when we talk about the speed of galaxies relative to ours. Why, if space is expanding, does matter not also expand? If a circle is drawn on balloon (2d plane), and the balloon expands, then the circle also expands. If matter is an object with 3 spatial dimensions, then when those 3 dimensions expand, so should the object. If that was the case, we wouldn't see the universe as expanding at all, because we would be expanding (spatially) with it. I have a few potential answers for this, which raise their own problems: Fundamental particles are 'point sized' objects. They cannot expand because they do not have spatial dimension to begin with. The problem with this is that while the particles would not expand, the space between them would, leading to a point where the 3 non-gravity forces would no longer hold matter together due to distance Fundamental particles are curled up in additional dimensions a la string theory. These dimensions are not expanding. Same problems as 1, with the added problem of being a bit unsatisfying. The answer seems to be (from Marek in the previous question) that the gravitational force is so much weaker than the other forces that large (macro) objects move apart, but small (micro) objects stay together. However, this simple explanation seems to imply that expansion of space is a 'force' that can be overcome by a greater one. That doesn't sound right to me. | Let's talk about the balloon first because it provides a pretty good model for the expanding universe. It's true that if you draw a big circle then it will quickly expand as you blow into the balloon. Actually, the apparent speed with which two of the points on the circle in a distance $D$ of each other would move relative to each other will be $v = H_0 D$ where $H_0$ is the speed the balloon itself is expanding. This simple relation is known as Hubble's law and $H_0$ is the famous Hubble constant . The moral of this story is that the expansion effect is dependent on the distance between objects and really only apparent for the space-time on the biggest scales. Still, this is only part of the full picture because even on small distances objects should expand (just slower). Let us consider galaxies for the moment. According to wikipedia, $H_0 \approx 70\, {\rm km \cdot s^{-1} \cdot {Mpc}^{-1}}$ so for Milky way which has a diameter of $D \approx 30\, {\rm kPc}$ this would give $v \approx 2\,{\rm km \cdot s^{-1}}$ . You can see that the effect is not terribly big but the given enough time, our galaxy should grow. But it doesn't. To understand why, we have to remember that space expansion isn't the only important thing that happens in our universe. There are other forces like electromagnetism. But most importantly, we have forgotten about good old Newtonian gravity that holds big massive objects together. You see, when equations of space-time expansion are derived, nothing of the above is taken into account because all of it is negligible on the macroscopic scale. One assumes that universe is a homogenous fluid where microscopic fluid particles are the size of the galaxies (it takes some getting used to to think about galaxies as being microscopic ). So it shouldn't be surprising that this model doesn't tell us anything about the stability of galaxies; not to mention planets, houses or tables. 
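For the record, the $\sim 2\,{\rm km \cdot s^{-1}}$ figure is nothing more than Hubble's law applied across the Galaxy; a minimal sketch with the same round values (the precise numbers for $H_0$ and the Galactic diameter vary a bit from source to source):

```python
# Hubble's law v = H0 * D across the Milky Way, with the round numbers used above
H0 = 70.0               # km/s per Mpc
D_Mpc = 30.0 / 1000.0   # ~30 kpc expressed in Mpc
v = H0 * D_Mpc          # km/s
print(v)                # ~2.1 km/s, tiny compared with, say, Galactic rotation (~220 km/s)
```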
And conversely, when investigating the stability of objects you don't really need to account for space-time expansion unless you get to the scale of galaxies, and even there the effect isn't that big. | {
"source": [
"https://physics.stackexchange.com/questions/2110",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/909/"
]
} |
2,175 | Is it possible for information (like 1s and 0s) to be transmitted faster than light? For instance, take a rigid pole of several AU in length. Now say you have a person on each end, and one of them starts pulling and pushing on his/her end. The person on the opposite end should receive the pushes and pulls instantaneously, as no particle is making the full journey. Would this actually work? | The answer is no. The pole would bend/wobble and the effect at the other end would still be delayed. The reason is that the force which binds the atoms of the pole together - the Electro-Magnetic force - needs to be transmitted from one end of the pole to the other. The transmitter of the EM-force is light, and thus the signal cannot travel faster than the speed of light; instead the pole will bend, because the close end will have moved, and the far end will not yet have received intelligence of the move. EDIT: A simpler reason. In order to move the whole pole, you need to move every atom of the pole. You might like to think of atoms as next-door neighbours. If one of them decides to move, he sends out a messenger to all his closest neighbours telling them he is moving. Then they all decide to move as well, so they each send out messengers to their closest neighbours to let them know they are moving; and so it continues, until the message to move has travelled all the way to the end. No atom will move until he has received the message to do so, and the message won't travel any faster than all the messengers can run; and the messengers can't run faster than the speed of light. /B2S | {
"source": [
"https://physics.stackexchange.com/questions/2175",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/4/"
]
} |
2,206 | What ever happened to "action at a distance" in entangled quantum states, i.e. the Einstein-Podolsky-Rosen (EPR) paradox ? I thought they argued that in principle one could communicate faster than the speed of light with entangled, separated states when one wave function gets collapsed. I imagine this paradox has been resolved, but I don't know the reference. | It's not possible to communicate faster than light using entangled states. All you get out of entanglement is a correlation between the values of two measurements; the entanglement doesn't allow you to influence the value measured at another location in a non-causal way. In other words, the correlation only becomes evident when the results of the two measurements are combined afterwards, for which you need classical information transfer. For example, consider the thought experiment described on the Wikipedia page for the EPR paradox: a neutral pion decays into an electron and a positron, emitting them in opposite directions and with opposite spins. However, the actual value of the spin is undetermined, so with respect to a spin measurement along a chosen axis, the electron and positron are in the singlet state $$\frac{1}{\sqrt{2}}\left(|e^+ \uparrow\rangle|e^- \downarrow\rangle - |e^+ \downarrow\rangle|e^- \uparrow\rangle\right)$$ Suppose you measure the spin of the positron along this chosen axis. If you measure $\uparrow$, then the state will collapse to $|e^+ \uparrow\rangle|e^- \downarrow\rangle$, which determines that the spin of the electron must be $\downarrow$; and vice versa. So if you and the other person (who is measuring the electron spin) get together and compare measurements afterwards, you'll always find that you've made opposite measurements for the spins. But there is no way to control which value you measure for the spin of the positron, which is what you'd need to do to send information. As long as the other person doesn't know what the result of your measurement is, he can't attach any informational value to either result for his measurement. | {
"source": [
"https://physics.stackexchange.com/questions/2206",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/775/"
]
} |
2,228 | I heard somewhere that quarks have a property called 'colour' - what does this mean? | You heard well. But note that this doesn't have anything to do with real color. The reason that property was so named is because of accidental similarities to how color mix and because certain physicist have a weird sense of humor. Quantum electrodynamics Note that the thing they are talking about is color charge . So it seems useful to first review properties of normal (electric) charges. Let's talk about electrons for instance. We know from classical physics that charged objects produce electromagnetic field around them and this field in turn affects charged particles. In quantum theory one has to quantize this field. The quantum of electromagnetic field is photon and so quantum theory tells us that charged objects interact electromagnetically by exchanging photons (source: stanford.edu ) Read the image from left to right. Electron radiates photon, this makes it change direction and the other electron absorbs it changing direction too. This is the way quantum theory explain electromagnetic interaction. Quantum chromodynamics Note: the discussion below will not be technically precise because of inherent complexity of quantum theory. I'll elaborate on certain points in a separate paragraph after this. With the above knowledge under our belts, it's not that hard to describe Quantum chromodynamics (from greek Χρώμα, chroma meaning color) anymore. Instead of electric charge, objects can carry so called color charge. But there is not just one of them, but three: red, green, blue (yes, this is one of the superficial similarities to usual color). Okay, we need one more ingredient in our theory -- something that would replace photons. Such a particle exists and it is called gluon . But note that while it was enough to have only one kind of photon (because there was only one electric charge), we need more gluons to mediate various types of interactions (e.g. interaction between red quark and blue quark, which would be mediated by red-antiblue gluon, and so on). So we have nine type of gluons in total, right? Well, there are actually only eight of them and this is one of the technicalities that I'll address later. Note that quantum chromodynamics predicts that bound states of colored particles have to be "white" meaning that it should e.g. contain three particles, one red, one blue and one green (this is another similarity with the real color mixing). "White" particles containing three quarks are called baryons and you should know at least two of them: protons and neutrons. Actually, it turns out that there is more than one quark type (six actually) and you get various particles by mixing different kinds of them. Two lights quarks are called up and down . Proton contains two ups and one down while neutron contains one up and two downs. Okay, let's see how this works: This is pretty complicated diagram because I wasn't able to find anything simpler, but let me try to explain what's going on. The incoming proton's blue quark emits blue-antired gluon and by doing that changes to red and also change direction a little. This gluon is then caught by its red down quark and this changes it to blue and throws it out of the proton. Similar thing happens with the green up quark, the only difference being that in the end antiblue up is thrown out of the proton. So we have blue-antiblue up/down pair. This is a color neutral particle and it is called pion . 
From the point of view of the neutron, exactly the same discussion applies, so I hope this is at least a little clear by now. What that picture actually explained is how proton - neutron strong interactions in nucleus are explained from the point of view of quantum chromodynamics. Technicalities All of the above discussion is subsumed in so called gauge theories. These are theories that contain certain number of charges, $N$ (e.g. $N = 1$ for electromagnetism) and certain number of interaction particles. But to specify their number, we have to first talk about the group of symmetries of the said theory. For electromagnetism this group is $U(1)$ which is one-dimensional and so there is only one photon. For weak interactions there are two charges, so called flavors and the group that corresponds to that is $SU(2)$ ; this one is three-dimensional, so we obtain three mediating particles: $Z, W^{\pm}$ . For strong interactions, there are three colors and the group that describes their mixing is $SU(3)$ . This one is eight-dimensional, giving us eight gluons. One could also try to use $U(3)$ group, which is nine-dimensional. But this is ruled out by experiment (!). To understand why, let me make precise other statements I greatly simplified. In quantum theory, states are often by superposition of other, more elementary states. It turns out, that when one talks about blue-antired gluon, what is actually meant is the superposition $${1 \over \sqrt 2} \big( \big|b\big>\big|\bar r\big> + \big|\bar b\big>\big| r\big> \big)$$ and similarly in other cases. This is because there are certain symmetry conditions imposed on the theory and this state is invariant if we swap $b$ and $r$ (it wouldn't be if it contained only one part); but of course it would be painful to explicitly read out that formula all the time one wants to talk about gluons; therefore the simplifictation. Now, as we mentioned, the actual bound states or particles have to be color neutral. It turns out gluons themselves could create a scalar state $${1 \over \sqrt 3} \big( \big|b\big>\big|\bar b\big> +
\big|r\big>\big|\bar r\big> +
\big|g\big>\big|\bar g\big> \big)$$ If this state existed in nature then there would exist a long-range gluon-mediated interaction. But we know it doesn't. So the actual group is the smaller $SU(3)$, and we are left with eight gluons. | {
"source": [
"https://physics.stackexchange.com/questions/2228",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/954/"
]
} |
2,229 | As an explanation of why a large gravitational field (such as a black hole) can bend light, I have heard that light has momentum. This is given as a solution to the problem of only massive objects being affected by gravity. However, momentum is the product of mass and velocity, so, by this definition, massless photons cannot have momentum. How can photons have momentum? How is this momentum defined (equations)? | There are two important concepts here that explain the influence of gravity on light (photons). (In the equations below $p$ is momentum and $c$ is the speed of light, $299,792,458 \frac{m}{s}$ .) The theory of Special Relativity, proved in 1905 (or rather the 2nd paper of that year on the subject) gives an equation for the relativistic energy of a particle; $$E^2 = (m_0 c^2)^2 + p^2 c^2$$ where $m_0$ is the rest mass of the particle (0 in the case of a photon). Hence this reduces to $E = pc$ . Einstein also introduced the concept of relativistic mass (and the related mass-energy equivalence) in the same paper; we can then write $$m c^2 = pc$$ where $m$ is the relativistic mass here, hence $$m = p/c$$ In other words, a photon does have relativistic mass proportional to its momentum . De Broglie's relation , an early result of quantum theory (specifically wave-particle duality), states that $$\lambda = h / p$$ where $h$ is simply Planck's constant. This gives $$p = h / \lambda$$ Hence combining the two results, we get $$E / c^2 = m = \frac{p}{c} = \frac {h} {\lambda c}$$ again, paying attention to the fact that $m$ is relativistic mass . And here we have it: photons have 'mass' inversely proportional to their wavelength! Then simply by Newton's theory of gravity, they have gravitational influence. (To dispel a potential source of confusion, Einstein specifically proved that relativistic mass is an extension/generalisation of Newtonian mass, so we should conceptually be able to treat the two the same.) There are a few different ways of thinking about this phenomenon in any case, but I hope I've provided a fairly straightforward and apparent one. (One could go into general relativity for a full explanation, but I find this the best overview.) | {
"source": [
"https://physics.stackexchange.com/questions/2229",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/954/"
]
} |
2,230 | I was told that the Galilean relative velocity rule does not apply to the speed of light. No matter how fast two objects are moving, the speed of light will remain same for both of them. How and why is this possible? Also, why can't anything travel faster than light? | The view of most physicists is that asking "How can it be that the speed of light is constant?" is similar to asking "How can it be that things don't always go in the direction of the force on them?" or "How can it be that quantum-mechanical predictions involve probability?" The usual answer is that these things simply are . There is no deeper, more fundamental explanation. There is some similarity here with the viewpoint you may have learned in studying Euclidean geometry; we need to start with some axioms that we assume to be true, and cannot justify. Philosophically, these ideas are not precisely the same (mathematical axioms are not subject to experimental test), but the constant speed of light is frequently described as a "postulate" of relativity. Once we assume it is true, we can work out its logical consequences . This is not to say that, in physics, postulates stay postulates. For example, many people are especially concerned about probability in quantum mechanics, and are trying to understand it based on more fundamental ideas (see decoherence as one example). As another example, Newton's laws of motion were originally taken as unprovable postulates, but are now explained via quantum mechanics (see Ehrenfest's theorem ). At this time, the constancy of the speed of light, or more generally the principle of Lorentz symmetry , is not justified by anything considered to be more fundamental. In fact, the assumption that it is true has been a guiding light to theoretical physicists; quantum field theory was invented by thinking about how quantum mechanics could be made to respect the ideas of relativity. Although we do not have a theoretical justification for the constancy of the speed of light, we do have very accurate experimental tests of the idea. The most famous is the Michelson-Morley experiment , which measured the relative speed of light in different directions to see if it was affected by the motion of the Earth. This experiment rejected the hypothesis that the motion of the Earth affects the speed of light. According to the Wikipedia article I linked, a modern version of this experiment by Hils and Hall concluded that the difference in the speed of light along directions parallel and perpendicular to Earth's motion is less than one part in $5*10^{12}$. In addition to direct tests of the speed of light, there have also been many other experimental tests of special relativity . (I haven't read this last page carefully, but, on flipping through, it looks good.) There are a few caveats worth mentioning. In general relativity, the speed of light is only constant locally. This means that the distance between two objects can increase faster than the speed of light, but it is still impossible for light to zip past you at a speed faster than the normal one. Also, in quantum theory, the speed of light is a statistical property. A photon may travel slightly slower or faster than light, and only travels at light speed on average. However, deviations from the speed of light would be probably be too small to observe directly. | {
"source": [
"https://physics.stackexchange.com/questions/2230",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/847/"
]
} |
2,244 | Pardon me for my stubborn classical/semiclassical brain. But I bet I am not the only one finding such description confusing. If EM force is caused by the exchange of photons, does that mean only when there are photons exchanged shall there be a force? To my knowledge, once charged particles are placed, the electromagnetic force is always there, uninterruptedly. According to such logic, there has to be a stream of infinite photons to build EM force, and there has to be no interval between one "exchange event" to another. A free light source from an EM field? The scenario is really hard to imagine. For nuclei the scenario becomes even odder. The strong interaction between protons is caused by the exchange of massive pions. It sounds like the protons toss a stream of balls to one another to build an attractive force - and the balls should come from nothing. Please correct me if I am wrong: the excitations of photons and pions all come from nothing. So there should be EM force and strong force everywhere, no matter what type of particles out there. Say, even electrical neutral, dipole-free particles can build EM force in-between. And I find no reason such exchanges of particles cannot happen in vacuum. Hope there will be some decent firmware to refresh my classical brain with newer field language codes. | Update: I went over this answer and clarified some parts. Most importantly I expanded the Forces section to connect better with the question. I like your reasoning and you actually come to the right conclusions, so congratulations on that! But understanding the relation between forces and particles isn't that simple and in my opinion the best one can do is provide you with the bottom-up description of how one arrives to the notion of force when one starts with particles. So here comes the firmware you wanted. I hope you won't find it too long-winded. Particle physics So let's start with particle physics. The building blocks are particles and interactions between them. That's all there is to it. Imagine you have a bunch of particles of various types (massive, massless, scalar, vector, charged, color-charged and so on) and at first you could suppose that all kinds of processes between this particles are allowed (e.g. three photons meeting at a point and creating a gluon and a quark; or sever electrons meeting at a point and creating four electrons a photon and three gravitons). Physics could indeed look like this and it would be an incomprehensible mess if it did. Fortunately for us, there are few organizing principles that make the particle physics reasonably simple (but not too simple, mind you!). These principles are known as conservation laws. After having done large number of experiments, we became convinced that electric charged is conserved (the number is the same before and after the experiment). We have also found that momentum is conserved. And lots of other things too. This means that processes such as the ones I mentioned before are already ruled out because they violate some if these laws. Only processes that can survive (very strict) conservation requirements are to be considered possible in a theory that could describe our world. Another important principle is that we want our interactions simple. This one is not of experimental nature but it is appealing and in any case, it's easier to start with simpler interactions and only if that doesn't work trying to introduce more complex ones. Again fortunately for us, it turns out basic interactions are very simple. 
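Before listing those simple interactions, here is a toy illustration of the conservation-law filtering just described (a deliberately minimal sketch that tracks only electric charge and lepton number; real model building checks many more quantum numbers, and the little particle table is only an example):

```python
# electric charge Q and lepton number Lnum for a few particles (illustrative values)
Q    = {"e-": -1, "e+": +1, "photon": 0, "nu_e": 0, "W-": -1}
Lnum = {"e-": +1, "e+": -1, "photon": 0, "nu_e": +1, "W-": 0}

def allowed(incoming, outgoing):
    """Keep a candidate vertex only if charge and lepton number both balance."""
    dQ = sum(Q[p] for p in outgoing) - sum(Q[p] for p in incoming)
    dL = sum(Lnum[p] for p in outgoing) - sum(Lnum[p] for p in incoming)
    return dQ == 0 and dL == 0

print(allowed(["e-", "photon"], ["e-"]))   # True:  an electron can absorb a photon
print(allowed(["W-"], ["e-", "nu_e"]))     # False: the decay needs an anti-neutrino instead
print(allowed(["photon"], ["e-", "e-"]))   # False: electric charge would not balance
```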
In a given interaction point there is always just a small number of particles. Namely: two: particle changing direction three: particle absorbing another particle, e.g. $e^- + \gamma \to e^-$ or one particle decaying to two other particles $W^- \to e^- + \bar\nu_e$ four: these ones don't have as nice interpretation as the above ones; but to give an example anyone, one has e.g. two gluons going in and two gluons going out So one example of such a simple process is electron absorbing a photon. This violates no conservation law and actually turns out to be the building block of a theory of electromagnetism. Also, the fact that there is a nice theory for this interaction is connected to the fact that the charge is conserved (and in general there is a relation between conservation of quantities and the way we build our theories) but this connection is better left for another question. Back to the forces So, you are asking yourself what was all that long and boring talk about, aren't you? The main point is: our world (as we currently understand it) is indeed described by all those different species of particles that are omnipresent everywhere and interact by the bizarre interactions allowed by the conservation laws. So when one wants to understand electromagnetic force all the way down, there is no other way (actually, there is one and I will mention it in the end; but I didn't want to over-complicate the picture) than to imagine huge number of photons flying all around, being absorbed and emitted by charged particles all the time. So let's illustrate this on your problem of Coulomb interaction between two electrons. The complete contribution to the force between the two electrons consists of all the possible combination of elementary processes. E.g. first electron emits photon, this then flies to the other electron and gets absorbed, or first electron emits photon, this changes to electron-positron pair which quickly recombine into another photon and this then flies to the second electron and gets absorbed. There is huge number of these processes to take into account but actually the simplest ones contribute the most. But while we're at Coulomb force, I'd like to mention striking difference to the classical case. There the theory tells you that you have an EM field also when one electron is present. But in quantum theory this wouldn't make sense. The electron would need to emit photons (because this is what corresponds to the field) but they would have nowhere to fly to. Besides, electron would be losing energy and so wouldn't be stable. And there are various other reasons while this is not possible. What I am getting at is that a single electron doesn't produce any EM field until it meets another charged particle! Actually, this should make sense if you think about it for a while. How do you detect there is an electron if nothing else at all is present? The simple answer is: you're out of luck, you won't detect it. You always need some test particles. So the classical picture of an electrostatic EM field of a point particle describes only what would happen if another particle would be inserted in that field. The above talk is part of the bigger bundle of issues with measurement (and indeed even of the very definition of) the mass, charge and other properties of system in quantum field theory. These issues are resolved by the means of renormalization but let's leave that for another day. 
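To connect this picture back to the familiar classical result, here is a standard textbook estimate (sketched in natural Heaviside-Lorentz units): the very simplest process, the exchange of a single photon, already reproduces the Coulomb law, because the static photon propagator $\sim 1/|\mathbf q|^2$ Fourier-transforms into a $1/r$ potential, $$V(r) \;=\; \int \frac{\mathrm{d}^3 q}{(2\pi)^3}\, \frac{e^2}{|\mathbf q|^2}\, e^{i \mathbf q \cdot \mathbf r} \;=\; \frac{e^2}{4\pi r},$$ and all the more elaborate exchange processes mentioned above only add small corrections to this.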
Quantum fields. Well, it turns out that all of the above talk about particles (although visually appealing and technically very useful) is just an approximation to a more precise picture: there exists just one quantum field for every particle type, and the huge number of particles everywhere corresponds to sharp local peaks of that field. These fields then interact by means of quite complex interactions that reduce to the usual particle picture when one looks at what those peaks are doing as they come close together. This field view can be quite enlightening for certain topics and quite useless for others. One place where it is actually illuminating is when one is trying to understand the spontaneous appearance of so-called virtual particle-antiparticle pairs. It's not clear where they appear from as particles. But from the point of view of the field, they are just local excitations. One should imagine a quantum field as a sheet that is wiggling around all the time (by means of inherent quantum wigglage) and from time to time wiggling strongly enough to create a peak that corresponds to the mentioned pair. | {
"source": [
"https://physics.stackexchange.com/questions/2244",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/749/"
]
} |
2,254 | Take a glass of water and piece of toilet paper. If you keep the paper vertical, and touch the surface of the water with the tip of the paper, you can see the water being absorbed and climbing up the paper. This climbing up is intriguing me. Where is the energy coming from? Supposedly, a lot of water molecules are moving up and gaining potential energy. This has to be balanced by something. I can think of a few possibilities, but I can't tell which one it is. When the water molecules dilute into the paper, they are at a state of lower potential binding energy. Some molecular interaction (van der Waals?) is at lower energy when you have a water+paper solution, compared to a water-only solution. This compensates the gain in gravitational energy. The surface between the paper and the water is at lower pressure than the atmosphere. This causes the water to be pushed into the paper by the atmospheric pressure, up until the point that the column of water above the surface is heavy enough to counterbalance. The potential energy would be a result of work done by the atmosphere. Some water molecules climb up randomly and loose kinetic energy going up, and somehow get "stuck" up there. Something else? | The surface of any fluid has an associated energy-per-unit-area, known as the surface energy, a.k.a. surface tension. This energy is not a property of the fluid alone, but of the fluid and the medium it is in contact with. In your case you would have associated surface energies for the water-air interface, $e_{wa}$, as well as for the water-paper interface, $e_{wp}$. The total energy of the fluid in a configuration is the sum of the potential energy, plus the product of the corresponding surface energies by their respective surface areas, $S_{wa}e_{wa} +S_{wp}e_{wp}$. So if you want to look at it in a purely energy balance point of view, the increase in potential energy of the water climbing up the paper is compensated by a reduction of the total surface energy. When capillary action makes things rise, it is because the liquid-solid energy is lower than the liquid-air energy. By wicking into the porous material, the solid-liquid contact area is increased at the expense of the liquid-air one, resulting in an overall reduction of contact energy, which is what drives the rise in potential energy. | {
"source": [
"https://physics.stackexchange.com/questions/2254",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/724/"
]
} |
2,264 | Why, in the multipole expansion of the radiation from the Lienard-Wiechert potentials, do we say that the electric quadrupole gives a contribution of the same order as the magnetic dipole? How can we see it from their equations? And is there a physical reason for this? Sorry for my trivial question. | Both terms arise at the same order of the expansion in $d/\lambda$ (equivalently $v/c$ for a bounded source), one order beyond the electric dipole. In the radiation zone the vector potential is proportional to $\int \mathbf{J}(\mathbf{r}')\, e^{-ik\hat{n}\cdot\mathbf{r}'}\, d^3r' \simeq \int \mathbf{J}\, d^3r' - ik \int (\hat{n}\cdot\mathbf{r}')\, \mathbf{J}\, d^3r' + \dots$ The first integral gives electric dipole radiation. The second integral, the one suppressed by a single factor of $kd \sim d/\lambda$, splits into its antisymmetric and symmetric parts in $r'_j J_i$: the antisymmetric part is the magnetic dipole moment, while the symmetric part is (via the continuity equation) the time derivative of the electric quadrupole moment. Because both come from the very same term of the expansion, their radiated powers are suppressed relative to electric dipole radiation by the same factor, of order $(d/\lambda)^2 \sim (v/c)^2$; that is why they are counted as being of the same order. Physically, the magnetic moment of a system of moving charges is itself of order $v/c$ times a typical electric moment, while the quadrupole contribution picks up one extra power of (source size)/(wavelength) from the retardation across the source, so the two suppressions end up being of the same size. | {
"source": [
"https://physics.stackexchange.com/questions/2264",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/558/"
]
} |
2,281 | Our ability to store data on or in physical media continues to grow, with the maximum amount a data you can store in a given volume increasing exponentially from year to year. Storage devices continue to get smaller and their capacity gets bigger. This can't continue forever, though, I would imagine. "Things" can only get so small; but what about information? How small can a single bit of information be? Put another way: given a limited physical space -- say 1 cubic centimeter -- and without assuming more dimensions than we currently have access to, what is the maximum amount of information that can be stored in that space? At what point does the exponential growth of storage density come to such a conclusive and final halt that we have no reason to even attempt to increase it further? | The answer is given by the covariant entropy bound (CEB) also referred to as the Bousso bound after Raphael Bousso who first suggested it. The CEB sounds very similar to the Holographic principle (HP) in that both relate the dynamics of a system to what happens on its boundary, but the similarity ends there. The HP suggests that the physics (specifically Supergravity or SUGRA) in a d-dimensional spacetime can be mapped to the physics of a conformal field theory living on it d-1 dimensional boundary. The CEB is more along the lines of the Bekenstein bound which says that the entropy of a black hole is proportional to the area of its horizon: $$ S = \frac{k A}{4} $$ To cut a long story short the maximum information that you can store in $1\ \mathrm{cm^3} = 10^{-6}\ \mathrm{m^3}$ of space is proportional to the area of its boundary. For a uniform spherical volume, that area is: $$ A = V^{2/3} = 10^{-4}\ \mathrm{m^2}$$ Therefore the maximum information (number of bits) you can store is approximately given by: $$ S \sim \frac{A}{A_\mathrm{pl}} $$ where $A_\mathrm{pl}$ is the planck area $ \sim 10^{-70}\ \mathrm{m^2}$ . For our $1\ \mathrm{cm^3}$ volume this gives $ S_\mathrm{max} \sim 10^{66} $ bits. Of course, this is a rough order-of-magnitude estimate, but it lies in the general ballpark and gives you an idea of the limit that you are talking about. As you can see, we still have decades if not centuries before our technology can saturate this bound! Edit : Thanks to @mark for pointing out that $1\ \mathrm{cm^3} = 10^{-6}\ \mathrm{m^3}$ and not $10^{-9}\ \mathrm{m^3}$ . Changes final result by three orders of magnitude. On Entropy and Planck Area In response to @david's observations in the comments let me elaborate on two issues. Planck Area: From lqg (and also string theory) we know that geometric observables such as the area and volume are quantized in any theory of gravity. This result is at the kinematical level and is independent of what the actual dynamics are. The quantum of area, as one would expect, is of the order of $\sim l_\mathrm{pl}^2$ where $l_\mathrm{pl}$ is the Planck length. In quantum gravity the dynamical entities are precisely these area elements to which one associates a spin-variable $j$ , where generally $j = \pm 1/2$ (the lowest rep of SU(2)). Each spin can carry a single qubit of information. Thus it is natural to associate the Planck areas with a single unit of information. Entropy as a measure of Information: There is a great misunderstanding in the physics community regarding the relationship between entropy $S$ – usually described as a measure of disorder – and useful information $I$ such as that stored on a chip, an abacus or any other device. 
However they are one and the same. I remember being laughed out of a physics chat room once for saying this so I don't expect anyone to take this at face value. But think about this for a second (or two). What is entropy? $$ S = k_\mathrm B \ln(N) $$ where $k_\mathrm B$ is Boltzmann's constant and $N$ the number of microscopic degrees of freedom of a system. For a gas in a box, for eg, $N$ corresponds to the number of different ways to distribute the molecules in a given volume. If we were able to actually use a gas chamber as an information storage device, then each one of these configurations would correspond to a unit of memory. Or consider a spin-chain with $m$ spins. Each spin can take two (classical) values $\pm 1/2$ . Using a spin to represent a bit, we see that a spin-chain of length $m$ can encode $2^m$ different numbers. What is the corresponding entropy: $ S \sim \ln(2^m) = m \ln(2) \sim \textrm{number of bits} $ since we have identified each spin with a bit (more precisely qubit). Therefore we can safely say that the entropy of a system is proportional to the number of bits required to describe the system and hence to its storage capacity. | {
"source": [
"https://physics.stackexchange.com/questions/2281",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/985/"
]
} |
2,407 | Of course, assuming your grandmother is not a theoretical physicist. I'd like to hear the basics concepts that make LQG tick and the way it relates to the GR. I heard about spin-networks where one assigns Lie groups representations to the edges and intertwining operators to the nodes of the graph but at the moment I have no idea why this concept should be useful (except for a possible similarity with gauge theories and Wilson loops; but I guess this is purely accidental). I also heard that this spin-graph can evolve by means of a spin-foam which, I guess, should be a generalization of a graph to the simplicial complexes but that's where my knowledge ends. I have also read the wikipedia article but I don't find it very enlightening. It gives some motivation for quantizing gravity and lists some problems of LQG but (unless I am blind) it never says what LQG actually is. So, my questions: Try to give a simple description of fundamentals of Loop Quantum Gravity. Give some basic results of the theory. Not necessary physical, I just want to know what are implications of the fundamentals I ask for in 1. Why is this theory interesting physically? In particular, what does it tell us about General Relativity (both about the way it is quantized and the way it is recovered from LQG). | @Marek your question is very broad. Replace "lqg" with "string theory" and you can imagine that the answer would be too long to fit here ;>). So if this answer seems short on details, I hope you will understand. The program of Loop Quantum Gravity is as follows: The notion of diffeomorphism invariance background independence , which is central to General Relativity, is considered sacrosanct. In other words this rules out the String Theory based approaches where the target manifold, in which the string is embedded, is generically taken to be flat [Please correct me if I'm wrong.] I'm sure that that is not the only background geometry that has been looked at, but the point is that String Theory is not written in a manifestly background independent manner. LQG aims to fill this gap. The usual quantization of LQG begins with Dirac's recipe for quantizing systems with constraints. This is because General Relativity is a theory whose Hamiltonian density ($\mathcal{H}_{eh}$), obtained after performing a $3+1$ split of the Einstein-Hilbert action via the ADM procedure [ 1 , 2 ], is composed only of constraints, i.e. $$ \mathcal{H}_{eh} = N^a \mathcal{V}_a + N \mathcal{H} $$ where $N^a$ and $N$ are the lapse and shift vectors respectively which determine the choice of foliation for the $3+1$ split. $\mathcal{V}_a$ and $\mathcal{H}$ are referred to as the vector (or diffeomorphism) constraint and the scalar (or "hamiltonian") constraint. In the resulting phase space the configuration and momentum variables are identified with the intrinsic metric ($h_{ab}$) of our 3-manifold $M$ and its extrinsic curvature ($k_{ab}$) w.r.t its embedding in the full $3+1$ spacetime, i.e. $$ {p,q} \rightarrow \{\pi_{ab},q^{ab}\} := \{k_{ab},h^{ab}\} $$ This procedure is generally referred to as canonical quantization . It can also be shown that $ k_{ab} = \mathcal{L}_t h_{ab} $, where $ \mathcal{L}_t $ is the Lie derivative along the time-like vector normal to $M$. 
This is just a fancy way of saying that $ k_{ab} = \dot{h}_{ab} $ This is where, in olden days, our progress would come to a halt, because after applying the ADM procedure to the usual EH form of the action, the resulting constraints are complicated non-polynomial expressions in terms of the co-ordinates and momenta. There was little progress in this line until in 1986 $\sim$ 88, Abhay Ashtekar put forth a form of General Relativity where the phase space variables were a canonically transformed version of $ \{k_{ab},h^{ab}\} $ This change is facilitated by writing GR in terms of connection and vielbien (tetrads) $ \{A_{a}^i,e^{a}_i\}$ where $a,b,\cdots$ are our usual spacetime indices and $i,j,\cdots$ take values in a Lie Algebra . The resulting connection is referred to as the "Ashtekar" or sometimes "Ashtekar-Barbero" connection. The metric is given in terms of the tetrad by : $$ h_{ab} = e_a^i e_b^j \eta_{ij} $$ where $\eta_{ij}$ is the Minkowski metric $\textrm{diag}(-1,+1,+1,+1)$. After jumping through lots of hoops we obtain a form for the constraints which is polynomial in the co-ordinates and momenta and thus amenable to usual methods of quantization: $$ \mathcal{H}_{eha} = N^a_i \mathcal{V}_a^i + N \mathcal{H} + T^i \mathcal{G_i} $$ where, once again, $ \mathcal{V}_a^i $ and $\mathcal{H}$ are the vector and scalar constraints. The explanation of the new, third term is postponed for now. Nb: Thus far we have made no modifications to the theoretical structure of GR. The Ashtekar formalism describes the exact same physics as the ADM version. However, the ARS (Ashtekar-Rovelli-Smolin) framework exposes a new symmetry of the metric. The introduction of spinors in quantum mechanics (and the corresponding Dirac equation) allows us to express a scalar field $\phi(x)$ as the "square" of a spinor $ \phi = \Psi^i \Psi_i $. In a similar manner the use of the vierbien allows us to write the metric as a square $ g_{ab} = e_a^i e_b^j \eta_{ij} $. The transition from the metric to connection variables in GR is analogous to the transition from the Klein-Gordon to the Dirac equation in field theory. The application of the Dirac quantization procedure for constrained systems shows us that the kinematical Hilbert space, consisting of those states which are annihilated by the quantum version of the constraints, has spin-networks as its elements. All of this is very rigorous and several mathematical technicalities have gradually been resolved over the past two decades. This answer is already pretty long. It only gives you a taste of things to come. The explanation of the Dirac quantization procedure and spin-networks would be separate answers in themselves. One can give an algorithm for this approach: Write GR in connection and tetrad variables (in first order form). Perform $3+1$ decomposition to obtain the Einstein-Hilbert-Ashtekar Hamiltonian $\mathcal{H}_{eha}$ which turns out to be a sum of constraints. Therefore, the action of the quantized version of this Hamiltonian on elements of the physical space of states yields $ \mathcal{H}_{eha} \mid \Psi \rangle = 0 $. (After a great deal of investigation) we find that these states are represented by graphs whose edges are labeled by representations of the gauge group (for GR this is $SU(2)$). Spin-foams correspond to histories which connect two spin-networks states. On a given spin-network one can perform certain operations on edges and vertices which leave the state in the kinematical Hilbert space. 
These involve moves which split or join edges and vertices and those which change the connectivity (as in the "star-triangle transformation"). One can formally view a spin-foam as a succession of states $\{ \mid \Psi(t_i) \rangle \}$ obtained by the repeated action of the scalar constraint, $ \mid \Psi(t_1) \rangle \sim e^{-i\mathcal{H}_{eha}\,\delta t} \mid \Psi(t_0) \rangle,\ \mid \Psi(t_2) \rangle \sim e^{-i\mathcal{H}_{eha}\,\delta t} \mid \Psi(t_1) \rangle,\ \cdots $ [ 3 ]. The graviton propagator has a robust quantum version in these models. Its long-distance limit yields the $1/r^2$ behavior expected for gravity and an effective coarse-grained action given by the usual one consisting of the Ricci scalar plus terms containing quantum corrections. There is a great deal of literature to back up everything I've said here, but this is already pretty exhausting so you'll have to take me at my word. Let me know what your Grandma thinks of this answer ;). | {
"source": [
"https://physics.stackexchange.com/questions/2407",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/329/"
]
} |
2,447 | Regarding general relativity: What is the physical meaning of the Christoffel symbol ($\Gamma^i_{\ jk}$)? What are the (preferably physical) differences between the Riemann curvature tensor ($R^i_{\ jkl}$), Ricci tensor ($R_{ij}$) and Ricci scalar ($R$)? For example, why do the Einstein equations include the Ricci tensor and scalar, but not the Riemann tensor? To be clear, by "physical meaning" I mean something like - what physical effect do these components generate? Or, they make the GR solutions deviate from Newton because of xxx factor... or something similarly physically intuitive. | The simplest way to explain the Christoffel symbols is to look at them in flat space. Normally, the Laplacian of a scalar in three flat dimensions is: $$\nabla^{a}\nabla_{a}\phi = \frac{\partial^{2}\phi}{\partial x^{2}}+\frac{\partial^{2}\phi}{\partial y^{2}}+\frac{\partial^{2}\phi}{\partial z^{2}}$$ But, that isn't the case if I switch from the $(x,y,z)$ coordinate system to cylindrical coordinates $(r,\theta,z)$. Now, the Laplacian becomes: $$\nabla^{a}\nabla_{a}\phi=\frac{\partial^{2}\phi}{\partial r^{2}}+\frac{1}{r^{2}}\left(\frac{\partial^{2}\phi}{\partial \theta^{2}}\right)+\frac{\partial^{2}\phi}{\partial z^{2}}+\frac{1}{r}\left(\frac{\partial\phi}{\partial r}\right)$$ The most important thing to note is the last term above--you now have not only second derivatives of $\phi$, but you also now have a term involving a first derivative of $\phi$. This is precisely what a Christoffel symbol does. In general, the Laplacian operator is: $$\nabla_{a}\nabla^{a}\phi = g^{ab}\partial_{a}\partial_{b}\phi - g^{ab}\Gamma_{ab}{}^{c}\partial_{c}\phi$$ In the case of cylindrical coordinates, what the extra term does is encode the fact that the coordinate system isn't homogeneous into the derivative operator--surfaces at constant $r$ are much larger far from the origin than they are close to the origin. In the case of a curved space(time), what the Christoffel symbols do is explain the inhomogeneities/curvature/whatever of the space(time) itself. As far as the curvature tensors--they are contractions of each other. The Riemann tensor is simply the commutator of derivative operators--$R_{abc}{}^{d}\omega_{d} \equiv \nabla_{a}\nabla_{b}\omega_{c} - \nabla_{b}\nabla_{a} \omega_{c}$. It measures how parallel translation of a vector/one-form differs if you go in direction 1 and then direction 2 or in the opposite order. The Riemann tensor is an unwieldy thing to work with, however, having four indices. It turns out that it is antisymmetric on the first two and last two indices, so there is in fact only a single independent contraction (contraction = multiply by the metric tensor and sum over all indices) one can make on it, $g^{ab}R_{acbd}=R_{cd}$, and this defines the Ricci tensor. The Ricci scalar is just a further contraction of this, $R=g^{ab}R_{ab}$. Now, due to Special Relativity, Einstein already knew that matter had to be represented by a two-index tensor that combined the pressures, currents, and densities of the matter distribution. This matter distribution, if physically meaningful, should also satisfy a continuity equation: $\nabla_{a}T^{ab}=0$, which basically says that matter is neither created nor destroyed in the distribution, and that the time rate of change in a current is the gradient of pressure. When Einstein was writing his field equations down, he wanted some quantity created from the metric tensor that also satisfied this (call it $G^{ab}$) to set equal to $T^{ab}$.
But this means that $\nabla_{a}G^{ab} =0$. It turns out that there is only one such combination of terms involving first and second derivatives of the metric tensor: $R_{ab} - \frac{1}{2}Rg_{ab} + \Lambda g_{ab}$, where $\Lambda$ is an arbitrary constant. So, this is what Einstein picked for his field equation. Now, $R_{ab}$ has the same number of indicies as the stress-energy tensor. So, a hand-wavey way of looking at what $R_{ab}$ means is to say that it tells you the "part of the curvature" that derives from the presence of matter. Where does this leave the remaining components of $R_{abc}{}^{d}$ on which $R_{ab}$ does not depend? Well, the simplest way (not COMPLETELY correct, but simplest) is to call these the parts of the curvature derived from the dynamics of the gravitational field itself--an empty spacetime containing only gravitational radiation, for example, will satisfy $R_{ab}=0$ but will also have $R_{abc}{}^{d}\neq 0$. Same for a spacetime containing only a black hole. These extra components of $R_{abc}{}^{d}$ give you the information about the gravitational dynamics of the spacetime, independent of what matter the spacetime contains. This is getting long, so I'll leave this at that. | {
"source": [
"https://physics.stackexchange.com/questions/2447",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/66/"
]
} |
2,490 | What is the significance about the bell shape, when its hit at the rim it rings/produces sound better than other shaped objects? If so could anyone explain a little bit on it. EDIT: From the suggestions in the comments, clarification for the term "sound better": Sound more effective for the purpose which bells are created for. (Thanks Justin) | The bell is typically bell-shaped for two reasons, first because the circle is structurally strong and this allows bells to be struck with greater force than if the shape was flat or had sharp edges which would be more prone to cracking, further the circular shape allows a wave to travel around the bells perimeter so that standing waves can develop around the circumference of the bell. It is the resonance from standing waves that is responsible for the sound of the ringing. And second the bell's shape makes the timbre of the bell more musically pleasing. The reason for the increasing diameter as you go from the top to the bottom of the bell is so that the bell resonates at different frequencies which can be tuned in a large bell so that you have what amounts to a complex musical chord playing when the bell is struck. For example, a given bell might have a resonance at the fundamental, a subharmonic one octave lower, a minor third above, a fifth above, and a full octave above. The different diameter sections of the bell contribute to these different harmonics. Bell construction is as much an art as a science. Here is a good online resource that describes the process of creating a large bell: https://www.msu.edu/~carillon/batmbook/chapter4.htm also the next chapter which goes deeper into the acoustics of bells: https://www.msu.edu/~carillon/batmbook/chapter5.htm | {
"source": [
"https://physics.stackexchange.com/questions/2490",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1077/"
]
} |