https://motls.blogspot.com/2011/07/phil-gibbs-tevatronlhc-higgs-synthesis.html
Saturday, July 23, 2011 ... // Tevatron+LHC Higgs synthesis: 111-131 GeV

The Higgs mass is more likely than not to be between 110 and 120 GeV

Let me begin with my version of the graph, to be explained at the end. Now, let's talk about the viXra version first. Phil Gibbs has digitized the graphs indicating the cross sections and exclusions for the Higgs mass, as presented by the D0, CDF, ATLAS, and CMS Collaborations during the recent two days. I obtained a copy of the D0 and CDF numbers and reproduced a similar result in Mathematica, even though our methods almost certainly differ. Only on Saturday evening could I apply my formulae to the combination with the LHC results. But Phil's overall results were surely lovable when I saw them (and my version became even more lovable later, haha). See:

You should understand the rules of the game. The blue horizontal line in the middle of the picture is the sea level. If you're sitting at the thick wiggly black line somewhere and you're below the blue sea level, you will glub glub glub glub to the bottom of the sea, and the corresponding value of the Higgs mass - on the x-axis - is excluded at the 95% confidence level. So only the values above the sea level survive.

You see that the picture allows Higgses between 111 and 131 GeV only. And especially those at 119 and 127 GeV are favored a little bit more than their neighbors. :-) If 200 GeV is not enough for you, here is a graph that continues to higher a priori possible masses: Higgses above 500 GeV are allowed, too. Otherwise everything has drowned! :-)

Of course, if you could trust this graph, it would be just excellent. The lightest Higgs would be in the 115-130 GeV window and there would probably be one more Higgs, either in the same window or above 500 GeV. A beautiful graph potentially supporting a two-Higgs supersymmetric model. Or the Standard Model that becomes unstable at some below-Planckian energy scale. 
Meanwhile, the experiments have chased the squark and gluino masses out of the visible arena and above 1 TeV, so one can no longer reasonably expect the rest of supersymmetry - or any other new physics, for that matter - to be a low-lying fruit.

My version of the graph

Here is my version of the graph for the 100-200 GeV region of the masses: click to zoom in. Tevatron-LHC unofficial TRF Higgs combo. The 100-110 GeV interval is only calculated from the Tevatron data: that's the source of the discontinuity at 110 GeV. No idea why Phil doesn't see this discontinuity.

If you're satisfied with a 50% "certainty", look at the lower boundary of the red strip. Whenever it's below the sea level, the corresponding value of the Higgs mass on the x-axis is excluded at the 50% confidence level. Only 112-120 GeV is allowed. You see that I got nicer, sharper peaks than Phil in this region (well, mostly because my graph is more stretched in the y-direction, much like temperature graphs that want to pretend that the warming trends are substantial, haha) - suggesting that 112-120 GeV is really preferred and there is an excess there (especially at 112, 116, and 119 GeV - 116 GeV is the top peak - but this accuracy is really insignificant noise and you shouldn't take it seriously).

If you prefer the "climate science" standards, a 68% certainty, look at the upper border of the red strip. Something like 111-122 GeV and 126-128 GeV is allowed. The upper boundary of the green strip is the 95% confidence level. It's a sensible choice. Something like 110-131 GeV is allowed. The upper border of the dark blue strip is the 99.7% confidence level: 110-144 GeV is allowed. The upper boundary of the yellow strip is the 99.99% confidence level: 109-154 GeV and 192-200 GeV is allowed (201 GeV is already banned by the LHC again and the Tevatron goes mute there, see below). Tevatron-LHC unofficial TRF Higgs combo. 
The 100-110 GeV interval is only calculated from the Tevatron data and the 200-600 GeV interval only comes from the LHC data; these are the causes of the discontinuities at 110 and 200 GeV. Here is the graph from 100 GeV to 600 GeV: click to zoom in. I omit the extra annotations and description because they're the same as above.

You see that around 248 GeV (239-256 GeV at the 99.99% confidence level and 246-251 GeV at the 99.7% confidence level), there was a slight excess that allows a very thin interval to survive if you're very tolerant. And then there's the whole semi-infinite realm above 470-550 GeV where the Higgs may live if you require various levels of certainty. (The lowest allowed values of the masses at 99.99%, 99.7%, 95%, 68%, and 50% exclusion are 478, 509, 525, 537, and 548 GeV.)

As you see, I ignored the "expected upper bound" on the cross sections and drew the different strips determined by the observed bound only. That's because I wasn't really interested in "excesses above expectations"; I was only interested in the exclusion, so the observed upper bounds and error margins of the 4 experiments were enough. Download a Mathematica notebook (search for the file named "combining...") that drew the graphs; by L.M., digitization of the official, uncombined graphs by Phil Gibbs.

Note that LEP has effectively excluded Higgs masses up to 114 GeV. So if you trust my graphs, it's legitimate to guess that it's more likely than not that the lightest Higgs mass is between 115 and 120 GeV. Quite an accuracy. Just to be sure, the total Higgs width at those low masses is a few percent of a GeV: the peak should be very sharp, but it probably gets fuzzy because of inaccurate measurements.

Update: The graphs should be moved up by nearly 2 widths of the band (to become less excluding), approximately, in the (not too important) 100-110 GeV and 200-600 GeV regions, because of some very subtle counting of the confidence level that I don't want to explain in detail. 
This will reduce, but not eliminate, the discontinuities over there. The qualitative effect is almost none in the 100-110 GeV region, which is excluded anyway. The 250 GeV island becomes slightly more allowed, and new islands near 290 GeV and 210 GeV become a bit viable. The lower bounds on the 500-GeV-like Higgs will drop by 50 GeV or so.

BTW I learned that Phil thought that the y-axis on the LHC and Tevatron graphs was a "confidence level" - it is (an upper limit on) a cross section - and he used an ad hoc formula that wouldn't seem to have a justification even if the y-axis were a confidence level. In effect, the formula neglected large excesses of events if there were any, which is why his peaks are so subdued. It is a bit of a coincidence that his graphs look so realistic. ;-) There are also possible bugs in my formulae and graphs, but at least they're based on an understanding of what the input graphs roughly mean when it comes to their axes. ;-)
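The linked notebook isn't reproduced here. As a purely illustrative sketch of the kind of combination involved (not the actual method in the notebook), one can treat each experiment's observed limit on the cross section, in units of the Standard Model prediction, as an independent Gaussian measurement and combine them by inverse-variance weighting; all numbers below are invented for illustration.

```python
import math

def combine_limits(limits):
    """Inverse-variance weighted combination of (value, sigma) pairs.

    Treats each experiment's observed limit on sigma/sigma_SM as an
    independent Gaussian measurement - a crude stand-in for a proper
    statistical combination.
    """
    weights = [1.0 / s**2 for _, s in limits]
    total_w = sum(weights)
    mean = sum(v * w for (v, _), w in zip(limits, weights)) / total_w
    return mean, math.sqrt(1.0 / total_w)

# Hypothetical observed limits (value, sigma) at one Higgs mass point,
# one entry per experiment (D0, CDF, ATLAS, CMS - numbers made up):
experiments = [(1.4, 0.5), (1.1, 0.4), (0.9, 0.6), (1.2, 0.45)]
mu, sigma = combine_limits(experiments)

# A mass point is excluded at ~95% CL when the combined band sits fully
# below 1 (the "sea level" in the graphs above).
print(mu, sigma, mu + 2 * sigma < 1)
```

The combined uncertainty is always smaller than any single experiment's, which is why a combination can exclude mass points that no individual experiment can.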
http://mathhelpforum.com/discrete-math/99605-very-simple-binary-relation-definition-needed.html
# Thread: Very simple Binary Relation definition needed

1. ## Very simple Binary Relation definition needed

Everywhere I look, a binary relation is defined as a set of ordered pairs containing elements from 2 sets. But then I get a problem that says: How many binary relations are there on the set {1, 2, 3}? This doesn't make sense to me. How can you have a relation with only 1 set?

2. The phrase "R is a binary relation on a set A" is commonly used to mean that R is a subset of the cartesian product A x A. That is, R is a collection of ordered pairs of elements of A.

3. Do you understand the following calculations? $\begin{gathered} X = \left\{ {1,2,3} \right\}\; \Rightarrow \;\left| X \right| = 3 \hfill \\ X \times X = \left\{ {\left( {1,1} \right),\left( {1,2} \right),\left( {1,3} \right),\left( {2,2} \right),\left( {2,1} \right),\left( {2,3} \right),\left( {3,3} \right),\left( {3,1} \right),\left( {3,2} \right)} \right\} \hfill \\ \left| {X \times X} \right| = 9\; \Rightarrow \;\left| {P(X \times X)} \right| = 2^9 \hfill \\ \end{gathered}$ Because a binary relation on $X$ is any subset of pairs in $X\times X$, there are $2^9$ possible binary relations on $X$. (Some authors say $2^9-1$ because of not liking an empty relation.)

4. Yes, that makes sense to me. I had actually tried 9 as an answer, and I was wrong. .... any idea why this was wrong?

5. I believe Plato has already answered that! Could you tell us why you thought it was 9?

6. I just took the cartesian product and found that there were 9 results. It was more of a guess than anything else. It took a few readings to figure this out a little more. May I present you with another example, and maybe you could help me figure out what I'm doing wrong? I am presented with the set {1,2}. So, the product is {(1,1), (1,2), (2,1), (2,2)}, and there would be 16 relations? I am then asked a series of questions such as "How many relations are reflexive? Symmetrical?" 
I know I need to set up a matrix showing all the possible relations, but... aren't they all possible? Wouldn't the matrix be all "1"s?

7. Think about what the definition is. A relation on a set is a subset of the Cartesian product. So you have to think about all the possible subsets of: $\{(1, 1), (1, 2), (2, 1), (2, 2)\}$ A reflexive relation is one which includes (1,1) and (2,2). (By looking hard at the definition of "reflexive", ask yourself: why?) A symmetric relation is one where, if (a, b) is in the relation, then (b, a) is also in that relation. So as there are only 16 possible relations, it's feasible to list them all out and examine them one by one for whether they have these properties. For example, $\{(1, 1), (1, 2), (2, 2)\}$ is reflexive, because it has (1, 1) and (2, 2), but not symmetric - because it has (1, 2) but not (2, 1). And so on. As I say, list them all out and look at them - it's an instructive exercise in its own right.

8. Originally Posted by JTG2003: I am then asked a series of questions such as "How many relations are reflexive? Symmetrical?"

Suppose that $|X|=n$; then there are $2^{n^2}$ binary relations on $X$. There are $2^{n^2-n}$ reflexive relations on $X$. There are $2^{\frac{n(n+1)}{2}}$ symmetric relations on $X$.
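For a small set these counts can be checked by brute force. A quick sketch (not from the thread) that enumerates every relation on {1, 2}, where $2^{n^2} = 16$:

```python
from itertools import combinations

def relations(universe):
    """All binary relations on `universe`: every subset of the Cartesian product."""
    pairs = [(a, b) for a in universe for b in universe]
    return [frozenset(c) for r in range(len(pairs) + 1)
            for c in combinations(pairs, r)]

def is_reflexive(rel, universe):
    # Reflexive: contains (a, a) for every element a.
    return all((a, a) in rel for a in universe)

def is_symmetric(rel):
    # Symmetric: whenever (a, b) is present, so is (b, a).
    return all((b, a) in rel for (a, b) in rel)

X = (1, 2)
rels = relations(X)
print(len(rels))                              # 2^(n^2)       = 16
print(sum(is_reflexive(r, X) for r in rels))  # 2^(n^2 - n)   = 4
print(sum(is_symmetric(r) for r in rels))     # 2^(n(n+1)/2)  = 8
```

The enumeration agrees with all three formulas for n = 2, and the same code confirms them for {1, 2, 3} (512 relations, 64 reflexive, 64 symmetric) if you change `X`.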
http://physics.stackexchange.com/questions/32998/total-noise-power-of-a-resistor-all-frequencies
# total noise power of a resistor (all frequencies)

Let's calculate the power generated by Johnson-Nyquist noise (and then immediately dissipated as heat) in a short-circuited resistor. I mean the total power at all frequencies, zero to infinity... $$(\text{Noise power at frequency }f) = \frac{V_{rms}^2}{R} = \frac{4hf}{e^{hf/k_BT}-1}df$$ $$(\text{Total noise power}) = \int_0^\infty \frac{4hf}{e^{hf/k_BT}-1}df$$ $$=\frac{4(k_BT)^2}{h}\int_0^\infty \frac{\frac{hf}{k_BT}}{e^{hf/k_BT}-1}d(\frac{hf}{k_BT})$$ $$=\frac{4(k_BT)^2}{h}\int_0^\infty \frac{x}{e^x-1}dx=\frac{4(k_BT)^2}{h}\frac{\pi^2}{6}$$ $$=\frac{\pi k_B^2}{3\hbar}T^2$$ i.e. temperature squared times a certain constant, 1.893E-12 W/K^2. Is there a name for this constant? Or any literature discussing its significance or meaning? Is there any intuitive way to understand why total blackbody radiation goes as temperature to the fourth power, but total Johnson noise goes only as temperature squared? -

I think you've just derived the Stefan-Boltzmann law for a one-dimensional system. The T^4 comes from three dimensions. The more dimensions the quanta can populate, the higher the power of T you get. -

Thanks, that's what I was looking for. Maybe there is no universal official name for the constant, but "one-dimensional analogue of the Stefan-Boltzmann constant" is pretty good. Since the oscillations in a wire or resistor are stuck in one dimension, there are many fewer high-energy modes, so extra temperature does not grow the total power as dramatically. That makes sense. –  Steve B Aug 2 '12 at 23:08

This related question has a nice discussion. The answer cites a "classic" 1946 paper by Robert Dicke (free link, official link). The discrepancy between 3D mode density and 1D mode density helps explain some facts in antenna theory, particularly the fact that antennas larger than 1 square wavelength have to have a narrow acceptance angle. –  Steve B Aug 2 '12 at 23:10
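Not part of the original thread: the closed-form constant above is easy to verify numerically. A minimal sketch using only the standard library (the integration step size and cutoff are arbitrary choices):

```python
import math

kB = 1.380649e-23       # Boltzmann constant, J/K
hbar = 1.054571817e-34  # reduced Planck constant, J*s
h = 2 * math.pi * hbar  # Planck constant, J*s

# Closed form derived above: integral of x/(e^x - 1) dx from 0 to inf is
# pi^2/6, so total power = (pi * kB^2 / (3 * hbar)) * T^2.
closed_form = math.pi * kB**2 / (3 * hbar)

# Cross-check the dimensionless integral with a simple trapezoid rule.
def integrand(x):
    return x / math.expm1(x) if x > 0 else 1.0  # limit as x -> 0 is 1

dx = 1e-3
numeric = dx * (0.5 * integrand(0.0)
                + sum(integrand(i * dx) for i in range(1, 50_001)))

# Assemble the per-kelvin^2 constant from the numeric integral:
numeric_const = 4 * kB**2 / h * numeric

print(closed_form)    # ~1.893e-12 W/K^2, matching the value in the question
print(numeric_const)  # agrees with the closed form
```

The truncation at x = 50 is harmless because the integrand decays like x e^{-x}; `math.expm1` avoids loss of precision near x = 0.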
http://www.cs.cmu.edu/afs/cs/project/jair/pub/volume28/clement07a-html/4_2Summary_Resource.html
Next: 5 Hierarchical Planning and Up: 4 Identifying Abstract Solutions Previous: 4.1 Threats on Summary

## 4.2 Summary Resource Usage Threats

Planners detect threats on resource constraints in different ways. If the planner reasons about partially ordered actions, it must consider which combinations of actions can overlap and together exceed (or fall below) the resource's maximum value (or minimum value). A polynomial algorithm does this for the IxTeT planner [Laborie & Ghallab, 1995]. Other planners that consider total order plans can more simply project the levels of the resource from the initial state through the plan, summing overlapping usages, to see if there are conflicts [e.g., Rabideau et al., 2000].

Finding conflicts involving summarized resource usages can work in the same way. For the partial order planner, the resultant usage of clusters of actions is tested using the PARALLEL-AND algorithm in Section 3.5. For the total order planner, the level of the resource is represented as a summarized usage, initially [, ], [, ], [, ] for a consumable resource with an initial level and [, ], [, ], [0, 0] for a non-consumable resource. Then, for each subinterval between start and end times of the schedule of tasks, the summary usage for each is computed using the PARALLEL-AND algorithm. Then the level of the resource is computed for each subinterval while propagating persistent usages using the SERIAL-AND algorithm.

We can decide the conditions defined in Section 4.1 in terms of the summary usage values resulting from invocations of PARALLEL-AND and SERIAL-AND in the propagation algorithm at the end of Section 3.5.2. The stronger condition is true if and only if there are no potential threats; these algorithms discover a threat if they ever compute an interval whose bounds can violate the resource's limits. The weaker condition is true if and only if there is a possible run with potentially no threats. 
SERIAL-AND discovers such a run if it returns a summary usage that stays within the resource's bounds.

Now that we have mechanisms for deriving summary information and evaluating plans based on their summarizations, we will discuss how to exploit them in a planning/coordination algorithm.
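The excerpt above lost some symbols to extraction, so the following is only an illustrative sketch of the flavor of summarized resource reasoning, not the paper's actual PARALLEL-AND/SERIAL-AND algorithms: model each summarized usage as an interval [lo, hi] bounding a task's possible net effect on a resource, sum intervals for tasks that may overlap, and propagate the level interval through a sequence to check bounds.

```python
# Illustrative only - a guess at the shape of the computation, NOT the
# algorithms from Section 3.5 of the paper.

def parallel_and(usages):
    """Bound the combined usage of possibly-overlapping tasks:
    in the worst case, their usage intervals simply add."""
    return (sum(u[0] for u in usages), sum(u[1] for u in usages))

def serial_and(level, usages):
    """Propagate a resource-level interval through sequential usages,
    returning the level interval after each step."""
    levels = []
    lo, hi = level
    for ulo, uhi in usages:
        lo, hi = lo + ulo, hi + uhi
        levels.append((lo, hi))
    return levels

def threatened(level, minimum, maximum):
    """A potential threat exists if the level interval can leave the bounds."""
    lo, hi = level
    return lo < minimum or hi > maximum

# Example: a consumable resource starting at exactly 10 units, bounds 0..10,
# consumed by two sequential tasks with uncertain usage.
steps = serial_and((10, 10), [(-3, -2), (-4, -1)])
print(steps)                                      # [(7, 8), (3, 7)]
print(any(threatened(s, 0, 10) for s in steps))   # False: no potential threat
```

The interval arithmetic here is deliberately coarse; the point is only that a plan is threat-free when no propagated level interval can escape the resource's bounds.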
http://physics.stackexchange.com/questions/86240/how-would-one-expect-a-massive-graviton-to-behave
# How would one expect a massive graviton to behave? Typically, adding a mass $m$ to a gauge boson causes the boson to only be able to travel over a finite distance, $L\sim m^{-1}$, limiting the range of the associated force. For example, photons become massive in superconductors and hence magnetic fields cannot penetrate very deep into superconductors. Should one expect the same behavior for a massive graviton? In the literature there are examples of massive gravity theories, such as the de Rham-Gabadadze-Tolley model (dRGT), which can give rise to self-accelerating universes due to a condensate of the graviton field (see here, for example). How does this phenomenon mesh with the usual reasoning that a mass limits the range of a gauge field? - I believe the technical term you're looking for is "kamehameha" haha sorry. just a joke. –  ahnbizcad Oct 3 '14 at 2:57 Before I start I should point out that it's not yet clear whether or not massive gravity works on a theoretical level. dRGT is a special theory, but it still has some fundamental problems that have not yet been resolved (such as superluminal propagation around nontrivial backgrounds). The Yukawa suppression is indeed true for massive gravitons around flat space. This just follows from the basic form of the relativistic wave equation around flat space, and doesn't require a fancy nonlinear completion. Writing $g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu}$ we have $$\square h + m^2 h = T$$ For a static, spherically symmetric source such as $T=M\delta^{(3)}(\vec{r})$, the solution to the above equation is $h=M e^{-mr}/4\pi r$. Now you might worry that I've been too quick because in real gravity $h$ has indices. However this doesn't change the form of the solution--the Yukawa suppression still works. However, it does constrain the form of the equation. 
Naively you would think a mass term in the equations of motion could contain any combination of $h_{\mu\nu}$ and $h \eta_{\mu\nu}$, but actually there is only one special combination that is allowed, the Fierz-Pauli tuning $h_{\mu\nu}-h \eta_{\mu\nu}$. If you follow your intuition about the Yukawa suppression and the CC a little further, you are led to what is called 'degravitation.' Roughly speaking, the idea is that you could have a large cosmological constant, but since gravity is Yukawa suppressed on very large scales, gravity doesn't see the cosmological constant. In other words, the CC is essentially a very long wavelength source, and the hope was you could have that wavelength be in the regime where the graviton propagator was suppressed. Degravitation hasn't been able to work in any specific examples, however. For example, in dRGT if you try to degravitate the CC, then you end up in conflict with solar system tests, because you end up not effectively screening an extra degree of freedom that massive gravity has over normal GR. (A massless spin 2 has 2 dofs, a massive spin 2 has 5 dofs--the point is that the helicity 0 mode likes to couple strongly to matter and you need a special 'screening mechanism' to get continuity with GR. If you try to degravitate a large CC, you end up making this screening mechanism very inefficient.) So instead people who work on massive gravity try to use a condensate of gravitons to source the acceleration. This is really a fancy way of saying that you can treat the mass term as an effective source in Einstein's equations $$G_{\mu\nu} = T_{\mu\nu} + m^2 T^{eff}_{\mu\nu}$$ In cosmology in particular, the $m^2 T^{eff}_{\mu\nu}$ can act like a cosmological constant term with a cosmological constant set by $m$ (recall that a real CC would look like $\Lambda g_{\mu\nu}$ on the RHS). However, when you think about cosmology in these terms, the Yukawa suppression isn't a good way to look at things, because you are far from flat space. 
There are subtleties here because there aren't any exactly homogeneous and isotropic solutions in massive gravity, but that's the basic idea. - Thank you, excellent answer and this is pretty much how I thought it worked. As for degravitation, was the original hope that you could take the enormous cosmological constant (CC), $\Lambda\sim M_{pl}^4$, and use the mass term as a high pass filter so that the effect of the CC is greatly reduced and we'd only end up with the relatively small amount of cosmological acceleration we observe today? –  user26866 Nov 12 '13 at 16:09 Yes that's exactly right, that's actually the language used in the original degravitation paper (degravitation is a high pass filter). –  Andrew Nov 13 '13 at 3:08
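The Yukawa suppression in the answer is easy to see numerically. A small sketch (not from the thread; M = m = 1 chosen arbitrarily for illustration):

```python
import math

def yukawa(M, m, r):
    """Static potential of a source of mass M for a messenger of mass m:
    h = M * exp(-m r) / (4 pi r), as in the answer above."""
    return M * math.exp(-m * r) / (4 * math.pi * r)

def massless(M, r):
    """The m -> 0 limit: the usual long-range 1/r potential."""
    return M / (4 * math.pi * r)

M, m = 1.0, 1.0
for r in (0.1, 1.0, 5.0, 10.0):
    # The ratio is exactly exp(-m r): negligible suppression well inside
    # the Compton wavelength 1/m, exponential cutoff far outside it.
    print(r, yukawa(M, m, r) / massless(M, r))
```

At r = 1/m the potential is already down by a factor e relative to the massless case, which is the precise sense in which the force has range L ~ 1/m.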
http://tex.stackexchange.com/questions/24639/how-to-write-a-command-to-file
# How to write a command to file I am trying to write to an auxiliary file using \newwrite\tempfile [...] \immediate\openout\tempfile=list.tex \immediate\write\tempfile{Text to write to file} \immediate\closeout\tempfile It works like a charm for plain text, but I need it to work with \input{ans\thesection-\arabic{enumi}} How do I write it to the file without expansion, thus allowing me to insert the entire file at the end of my document (in my answer section)? - See also the answers to the very similar question How to append data to a temporary file? –  Martin Scharrer Aug 1 '11 at 13:48 The simplest way to do this is to use \unexpanded, which requires e-TeX (i.e., a not-too-old version of TeX): \immediate\write\tempfile{\unexpanded{Text to write to file}} - @Mats: In this case use \noexpand on \input: \immediate\write\tempfile{\noexpand\input{ans\thesection-\arabic{enumi}}} –  Martin Scharrer Aug 1 '11 at 14:08 Either put \noexpand before a command which should not be expanded, or \unexpanded{...} around a longer text. - What you want is etex's \unexpanded macro. Although, if you're using it with \thesection, are you sure this is actually what you want? For example, look at the following: \documentclass{article} \newwrite\tempfile \begin{document} \section{First} \immediate\openout\tempfile=\jobname.tmp \immediate\write\tempfile{\unexpanded{Section \thesection}} \immediate\closeout\tempfile \section{Second} Something: \input{\jobname.tmp} \end{document} What this does is write \thesection to the temporary file and then input it back in later. As you'll see if you run it, it outputs "Something: Section 2". Is this really the behaviour you want? If you are writing "answers" then presumably you want them to refer back to the right section where the corresponding question was, in which case you will want \thesection to be expanded at the time of writing. There may still be stuff you want unexpanded, but think about what you want expanding and what you don't... 
There's a lot more information about writing to files in this question: What is the basic mechanism for writing something to an aux file? - It sounds like you might be interested in using the answers package, below is a MWE. It does all of the heavy lifting for you, and allows you to put anything you like in the solution files without having to escape any characters. It has a nice toggle feature that allows you to preview the answers next to the questions (see my MWE). \documentclass{article} \usepackage{answers} % solutions to problems done *beautifully* %\usepackage[nosolutionfiles]{answers} % use this line if you want to see the answers % in the document \newcounter{problem} \newenvironment{problem}{\refstepcounter{problem} {\bfseries\theproblem}.\ }{} % solution files \Opensolutionfile{shortsolutions} \Newassociation{shortsolution}{shortSoln}{shortsolutions} \begin{document} \begin{problem} Here's a question \begin{shortsolution} Here's the answer- can put anything in here: e.g $\frac{1}{3}$ \end{shortsolution} \end{problem} \begin{problem} Here's another question \begin{shortsolution} Here's another answer- can put figures, tables- anything you like! \end{shortsolution} \end{problem} \newpage % close the solutions files \Closesolutionfile{shortsolutions} % input the SHORT solutions file \IfFileExists{shortsolutions.tex}{\input{shortsolutions.tex}}{} \end{document} -
http://mathhelpforum.com/statistics/232876-simple-probability-question.html
1. ## Simple Probability Question I am a bit confused by the way this question is solved: From a well-shuffled pack of 52 cards, three cards are drawn at random. Find the probability of drawing an ace, a king and a jack. Solution given: There are 4 aces, 4 kings and 4 jacks, and their selection can be made in the following ways: 12C1 X 8C1 X 4C1 = 12 X 8 X 4. Total selections can be made = 52C3 = 52 X 51 X 50. Therefore required probability = $\frac{(12)(8)(4)}{ (52)(51)(50)}$ I don't understand why we are taking 12C1 X 8C1 X 4C1 = 12 X 8 X 4 instead of 4C1 X 4C1 X 4C1 = 4 x 4 x 4 for the numerator. Since we are selecting 1 ace from 4 aces, 1 king from 4 kings and 1 jack from 4 jacks, shouldn't we be taking 4C1 X 4C1 X 4C1 = 4 x 4 x 4 for the favourable events? Please advise on the above. 2. ## Re: Simple Probability Question Originally Posted by SheekhKebab: I am a bit confused by the way this question is solved: From a well-shuffled pack of 52 cards, three cards are drawn at random. Find the probability of drawing an ace, a king and a jack. Here's how I would do that: there are 52 cards - 4 aces, 4 kings, and 4 jacks. The probability the first card drawn is an ace is 4/52 = 1/13. There are then 51 cards left, four of which are kings. The probability that the second card you draw is a king is 4/51. There are then 50 cards left, 4 of which are jacks. The probability the third card you draw is a jack is 4/50 = 2/25. The probability of drawing "ace, king, jack" in that order is (1/13)(4/51)(2/25) = 8/(13*51*25). But if you look at "jack, ace, king" or any other specific order, you will see that while you have different fractions, you have the same numerators and the same denominators in different orders, so the same probability. There are 3! = 6 such orders, so the probability of drawing an ace, a king, and a jack is 6(8/(13*51*25)). Originally Posted by SheekhKebab: Solution given: 12C1 X 8C1 X 4C1 = 12 X 8 X 4 ... shouldn't we be taking 4C1 X 4C1 X 4C1 for the favourable events? The reason for "$^{12}C_1$" is that there are a total of 12 "aces, kings, and jacks" and you are drawing one of them. The reason for the "$^8C_1$" is that whichever of ace, king, or jack that is, there are then 8 cards of the remaining kinds you are looking for (if the first card drawn was a jack, there are 8 aces and kings left) and you want to draw one of them. The reason for the "$^4C_1$" is that whichever of ace, jack, or king the first two cards are, there are 4 cards of the remaining type and you want to draw 1 of them. 3. ## Re: Simple Probability Question Hi HallsofIvy, Thanks! That 3! I would have definitely factored in if I had solved the question completely, since the 3! would have come from the denominator of 52C3. So that was not my question. I was a bit confused about the way the selection was made in the numerator of the original solution. So, according to you, both solutions/approaches are correct? Should we prefer one approach over the other, since the original solution doesn't appear to be very convincing? 4. ## Re: Simple Probability Question Both solutions are correct, and both give the same answer. Neither is "more correct" than the other, though I would admit that my instinct is to solve it the way that Halls did. I tend to think first in terms of selecting cards in a particular order, then multiply by the number of ways that the order can be changed, which is what he did. But the book's solution is equally valid, and arguably more elegant - i.e. 
how many choices do I have for the first card, then how many choices are available for the 2nd card, then how many for the third card, with no need to go through an additional step of considering the specific order of cards selected. 5. ## Re: Simple Probability Question Hi ebaines, Thanks! But for the original solution I think there is an error in the denominator. 52C3 = $\frac{(52)(51)(50)}{3!}$, but that 3! is missing in the original solution. So we are not getting an answer of $\frac{16}{5525}$, which is the correct answer and which we will get if we follow the other approach. Please inform me whether I am correct or I am missing something! 6. ## Re: Simple Probability Question Both approaches yield the same result: the first is $\frac {12 \times 8 \times 4}{52 \times 51 \times 50} = 0.002896$ and HallsofIvy's approach: $\frac {6 \times 8}{13 \times 51 \times 25} = 0.002896$ Note that you can multiply the numerator and denominator of Halls' by 8 to get the same form as the first: $\frac {6 \times 8}{13 \times 51 \times 25} \times \frac 8 8 = \frac {12 \times 8 \times 4}{52 \times 51 \times 50} = 0.002896$ The first approach does not need to explicitly multiply by 3! because the fact that the three cards may be selected in any order is already included in the numerator by using 12 x 8 x 4. 7. ## Re: Simple Probability Question Hi ebaines, But the original solution mentions: Total selections can be made in 52C3 ways, which is equivalent to $\frac{(52)(51)(50)}{3!}$, which is the denominator. So where will the 3! go then? 8. ## Re: Simple Probability Question This part of your first post is incorrect: "Total selections can be made = 52C3= 52 X 51 X 50." It should read: "Total selections can be made = 52P3= 52 X 51 X 50." 9. ## Re: Simple Probability Question Thanks, and yes ebaines, that will take care of all the possible selections in every possible order. Therefore, there is an error in the book.
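Both counting arguments from the thread can be checked mechanically. Here is a minimal sketch (plain Python, not part of the original thread) that evaluates the ordered count 12 x 8 x 4 over 52P3 and the unordered count 4 x 4 x 4 over 52C3 as exact fractions:

```python
from fractions import Fraction
from math import comb, perm

# Ordered count: 12 choices for the first wanted card, then 8, then 4,
# divided by the number of ordered 3-card draws, 52P3 = 52*51*50.
p_ordered = Fraction(12 * 8 * 4, perm(52, 3))

# Unordered count: one ace, one king, one jack, divided by the number
# of unordered 3-card hands, 52C3.
p_unordered = Fraction(4 * 4 * 4, comb(52, 3))

# Both reduce to the same exact probability, 16/5525.
assert p_ordered == p_unordered == Fraction(16, 5525)
print(p_ordered)
```

Using exact fractions instead of floats makes the equality of the two approaches unambiguous, rather than a coincidence of rounding.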
https://support.sas.com/documentation/cdl/en/statug/66103/HTML/default/statug_lifetest_overview.htm
# The LIFETEST Procedure ## Overview: LIFETEST Procedure A common feature of lifetime or survival data is the presence of right-censored observations, due either to withdrawal of experimental units or to termination of the experiment. For such observations, you know only that the lifetime exceeded a given value; the exact lifetime remains unknown. Such data cannot be analyzed by ignoring the censored observations because, among other considerations, the longer-lived units are generally more likely to be censored. The analysis methodology must correctly use the censored observations in addition to the uncensored observations. Texts that discuss the survival analysis methodology include Collett (1994); Cox and Oakes (1984); Kalbfleisch and Prentice (1980); Klein and Moeschberger (1997); Lawless (1982); and Lee (1992). Users interested in the theory should consult Fleming and Harrington (1991) and Andersen et al. (1992). Usually, a first step in the analysis of survival data is the estimation of the distribution of the survival times. Survival times are often called failure times, and event times are uncensored survival times. The survival distribution function (SDF), also known as the survivor function, is used to describe the lifetimes of the population of interest. The SDF evaluated at t is the probability that an experimental unit from the population will have a lifetime that exceeds t—that is, $S(t) = \Pr(T > t)$, where $S(t)$ denotes the survivor function and T is the lifetime of a randomly selected experimental unit. The LIFETEST procedure can be used to compute nonparametric estimates of the survivor function, either by the product-limit method (also called the Kaplan-Meier method) or by the life-table method (also called the actuarial method). The life-table estimator is a grouped-data analog of the Kaplan-Meier estimator. The procedure can also compute the Breslow estimator or the Fleming-Harrington estimator, which are asymptotically equivalent alternatives to the Kaplan-Meier estimator.
Some functions closely related to the SDF are the cumulative distribution function (CDF), the probability density function (PDF), and the hazard function. The CDF, denoted $F(t)$, is defined as $F(t) = 1 - S(t)$ and is the probability that a lifetime does not exceed t. The PDF, denoted $f(t)$, is defined as the derivative of $F(t)$, and the hazard function, denoted $h(t)$, is defined as $h(t) = f(t)/S(t)$. If the life-table method is chosen, estimates of the probability density function can also be computed. Plots of these estimates can be produced on a graphical or line printer device, or through the Output Delivery System (ODS). An important task in the analysis of survival data is the comparison of survival curves. It is of interest to determine whether the underlying populations of k ($k \ge 2$) samples have identical survivor functions. PROC LIFETEST provides nonparametric k-sample tests based on weighted comparisons of the estimated hazard rate of the individual population under the null and alternative hypotheses. Corresponding to various weight functions, a variety of tests can be specified, including the log-rank test, Wilcoxon test, Tarone-Ware test, Peto-Peto test, modified Peto-Peto test, and the Fleming-Harrington family of tests. PROC LIFETEST also provides corresponding trend tests to detect ordered alternatives. Stratified tests can be specified to adjust for prognostic factors that affect the event rates in the various populations. A likelihood ratio test, based on an underlying exponential model, is also included to compare the survival curves of the samples. There are other prognostic variables, called covariates, that are thought to be related to the failure time. These covariates can also be used to construct statistics to test for association between the covariates and the lifetime variable. PROC LIFETEST can compute two such test statistics: censored data linear rank statistics based on the exponential scores and the Wilcoxon scores.
The corresponding tests are known as the log-rank test and the Wilcoxon test, respectively. These tests are computed by pooling over any defined strata, thus adjusting for the stratum variables. One change in SAS 9.2 and later is that the calculation of confidence limits for the quartiles of survival time is based on the transformation specified by the CONFTYPE= option. Another change is that the SURVIVAL statement in SAS 9.1 is folded into the PROC LIFETEST statement; that is, options that were in the SURVIVAL statement can now be specified in the PROC LIFETEST statement. The SURVIVAL statement is no longer needed and it is not documented.
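To illustrate the product-limit (Kaplan-Meier) method described above, here is a small plain-Python sketch (not PROC LIFETEST itself; the function name and data layout are invented for this example). At each distinct event time, the estimate of S(t) is multiplied by (1 - d/n), where d is the number of events at that time and n is the number of units still at risk; right-censored observations leave the risk set without changing the estimate:

```python
def kaplan_meier(times, censored):
    """Product-limit (Kaplan-Meier) estimate of the survivor function.

    times:    observed times (event or right-censoring times)
    censored: parallel flags; True means the observation is right-censored
    Returns a list of (event_time, estimated_S) pairs.
    """
    pairs = sorted(zip(times, censored))
    n_at_risk = len(pairs)
    s_hat = 1.0
    curve = []
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        # Count events (deaths) among all observations tied at time t.
        j = i
        deaths = 0
        while j < len(pairs) and pairs[j][0] == t:
            deaths += 0 if pairs[j][1] else 1
            j += 1
        if deaths:  # censoring times alone do not change the estimate
            s_hat *= 1.0 - deaths / n_at_risk
            curve.append((t, s_hat))
        n_at_risk -= j - i  # everyone observed at t leaves the risk set
        i = j
    return curve

# Three uncensored failures at t = 1, 2, 3: S steps down 2/3, 1/3, 0.
print(kaplan_meier([1, 2, 3], [False, False, False]))
```

With a censored observation, e.g. `kaplan_meier([1, 2, 2, 3], [False, True, False, False])`, the unit censored at t = 2 is removed from the risk set but contributes no drop in the curve, which is exactly how the procedure uses censored data without discarding it.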
https://davidegerosa.com/category/cqg/
# Davide Gerosa ## On the equal-mass limit of precessing black-hole binaries Davide Gerosa, Ulrich Sperhake, Jakub Vošmera. Classical and Quantum Gravity 34 (2017) 6, 064004. arXiv:1612.05263 [gr-qc]. Equal-mass binaries correspond to a discontinuous limit in the spin-precession equations. A new constant of motion pops up, which can be exploited to study the dynamics. This is a really neat calculation done with Jakub, a Cambridge undergraduate student. Also, my first paper at Caltech! ## Numerical simulations of stellar collapse in scalar-tensor theories of gravity Davide Gerosa, Ulrich Sperhake, Christian D. Ott. Classical and Quantum Gravity 33 (2016) 13, 135002. arXiv:1602.06952 [gr-qc]. Here we present 1+1 numerical-relativity simulations of stellar collapse in scalar-tensor theories, where gravity is mediated by the usual metric coupled to an additional scalar field. Bottom line: you can test General Relativity with supernova explosions! Supporting material available here. ## Tensor-multi-scalar theories: relativistic stars and 3+1 decomposition Michael Horbatsch, Hector O. Silva, Davide Gerosa, Paolo Pani, Emanuele Berti, Leonardo Gualtieri, Ulrich Sperhake. Classical and Quantum Gravity 32 (2015) 20, 204001. arXiv:1505.07462 [gr-qc]. What happens if you throw a scalar field into General Relativity? And if you throw in more than one? Here is a paper on the phenomenology of neutron stars in theories with more than one scalar field coupled to gravity. Featured in CQG+. Selected as IOPselect. ## Testing general relativity with present and future astrophysical observations Emanuele Berti, et al. (53 authors incl. Davide Gerosa). Classical and Quantum Gravity 32 (2015) 24, 243001. arXiv:1501.07274 [gr-qc]. A massive review on testing gravity, which came out of a nice meeting we had at the University of Mississippi in January 2014.
http://math.stackexchange.com/questions/142271/when-x-sy-t-almost-surely-for-every-st-implies-that-x-sy-t-for-every
# When $X_s<Y_t$ almost surely, for every $s<t$, implies that $X_s<Y_t$ for every $s<t$, almost surely? When I have shown, for $s\le t$ and for two continuous stochastic processes, the inequality $$X_s \le Y_t$$ P-a.s., how can I deduce that this holds P-a.s. simultaneously for all rational $s\le t$? Thank you for your help EDIT: According to Ilya's answer, I see that we have $$P(X_s\le Y_t\text{ simultaneously for all rationals }s\le t) = 1.$$ How could we use continuity of $X,Y$ to deduce $P(X_s\le Y_t,s\le t)=1$? Of course we take sequences of rationals; however, I mess up the details. So a detailed answer on how to do this would be appreciated. - Hint: there are countably many pairs of rationals $s \le t$. –  Robert Israel May 7 '12 at 16:35 I have updated my answer, using an outcome-wise approach. –  Ilya Jun 26 '12 at 15:44 I think, now it is very explicit. –  Ilya Jun 26 '12 at 16:15 If I got you correctly, you have $P(X_s\leq Y_t) = 1$ for all $s\leq t$. Recall that if $P(A_n) = 1$ for all $n\in \mathbb N$ then also $P(\bigcap_n A_n) = 1$, since $$P(\bigcap_n A_n) = 1-P(\bigcup_n A^c_n)\geq 1-\sum\limits_n P(A^c_n) =1.$$ Now take $s_n$ to be the $n$-th rational number that is less than or equal to $t$ and $A_n=\{X_{s_n}\leq Y_t\}$. You can even use continuity of $X,Y$ to show that $P(X_s\leq Y_t,s\leq t) = 1$. To show the latter, consider two sets: $$C = \{\omega\in \Omega: X_t(\omega),Y_t(\omega)\text{ are continuous }\}$$ $$D = \{\omega\in \Omega:X_s(\omega)\leq Y_t(\omega) \text{ simultaneously for all rational }s\leq t\}.$$ It is given to us that $\mathsf P(C) = 1$ and we have proved above that $\mathsf P(D) = 1$. As a result, $$\mathsf P(C\cap D) = 1-\mathsf P(C^c\cup D^c)\geq 1-\mathsf P(C^c)-\mathsf P(D^c) = 1.$$ Consider now any $\omega\in C\cap D$. 1. It holds for this $\omega$ that $X_s(\omega)\leq Y_t(\omega)$ for all rational $s\leq t$. Since both $X,Y$ are continuous at $\omega$, it follows that $X_s(\omega)\leq Y_t(\omega)$ for all $s\leq t$.
Indeed, if that were not true, i.e. if for some $s'\leq t$ you had $X_{s'}(\omega)>Y_{t}(\omega)$, then $X_s(\omega)>Y_t(\omega)$ in a neighborhood of $s'$, which cannot happen since that neighborhood contains at least one rational number $s$. 2. Thus, for any $\omega\in C\cap D$ the desired relation holds: $C\cap D \subseteq \{X_s\leq Y_t,s\leq t\}$, so that $$1 =\mathsf P(C\cap D) \leq \mathsf P\{X_s\leq Y_t,s\leq t\}.$$ - @ Ilya: So I agree we have $P(X_s\le Y_t \text{ simultaneously for all rational } s\le t)=1$. But how could I use continuity of $X,Y$ to deduce $P(X_s\le Y_t,s\le t)=1$? If you could explain this in a more detailed way, I will accept your answer. Thank you for your help. Sorry for the late response! –  user20869 Jun 25 '12 at 11:49 @hulik: You show that $\{X_s\leq Y_t, s\leq t, s\in\mathbb{Q}\}\subseteq \{X_s\leq Y_t,s\leq t\}$. Pick an $\omega$ in the left-hand set. Then $X_s(\omega)\leq Y_t(\omega)$ for all $s\leq t$ with $s\in\mathbb{Q}$. Now suppose that $u\leq t$ is arbitrary. Then we pick a sequence $(u_n)\subseteq \mathbb{Q}$ such that $u_n\to u$ for $n\to\infty$. Since $X_{u_n}(\omega)\leq Y_t(\omega)$ for all $n$, the continuity gives that $X_u(\omega)=\lim_{n\to\infty} X_{u_n}(\omega) \leq Y_t(\omega)$ and hence $\omega$ is in the right-hand set. –  Stefan Hansen Jun 25 '12 at 12:32 @ Stefan Hansen: If I understand you right, you want to show that $\{X_s\le Y_t,s\le t,s,t\in \mathbb{Q}\}=\{X_s\le Y_t,s\le t\}$ and the first set has measure 1. But why do you take just the set $\{X_s\le Y_t,s\le t,s\in \mathbb{Q}\}$? Thanks for your help –  user20869 Jun 25 '12 at 15:20 @ Ilya: Thanks for your updated answer. However, please, I would like to have a detailed answer of this. I totally agree that the intersection has probability one. But somewhere you need a careful approximation argument, as Stefan Hansen suggested. Sorry for nit-picking! But I have trouble showing these things in a detailed way.
–  user20869 Jun 26 '12 at 16:01 @hulik: please tell me - is anything unclear in the new version? –  Ilya Jun 27 '12 at 8:01 This follows from the fact that the complement of the event $[\forall s\leqslant t,\,X_s\leqslant Y_t]$ is the event $$\left[\exists s\leqslant t,\,X_s\gt Y_t\right]\ =\left[\exists n\in\mathbb N,\,\exists s\in\mathbb Q,\,\exists t\in\mathbb Q,\,s\leqslant t,\,X_s\geqslant Y_t+\frac1n\right],$$ hence $$\left[\exists s\leqslant t,\,X_s\gt Y_t\right]\ =\bigcup\limits_{s\leqslant t,\, s\in\mathbb Q,\,t\in \mathbb Q}\ \bigcup_{n\geqslant1}\ \left[X_s\geqslant Y_t+\frac1n\right].$$ Since $\mathrm P(X_s\leqslant Y_t)=1$ for every $s\leqslant t$, $\mathrm P(X_s\geqslant Y_t+\frac1n)=0$ for every $n\geqslant1$. The union on the RHS of the displayed identity above is countable, hence $\mathrm P(\exists s\leqslant t,\,X_s\gt Y_t)=0$. Considering the complement, one gets $$\mathrm P(\forall s\leqslant t,\,X_s\leqslant Y_t)=1.$$ - I do not see how I can use this to conclude that $P(X_s\le Y_t,s\le t)=1$. And shouldn't it be an intersection over $n$ instead of a union? –  user20869 Jun 26 '12 at 15:34 The union is correct. See Edit. –  Did Jun 26 '12 at 16:29
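The pathwise step in the answers above (check the inequality on a dense grid of "rationals", then extend by continuity) can be illustrated numerically. The sketch below is illustrative only and uses an invented example: X is a piecewise-linear random path sampled on a fine grid, and Y is its running maximum, so X_s <= Y_t holds for every s <= t by construction; verifying the inequality first on a coarse sub-grid and then on the full grid mimics passing from rational times to all times for continuous paths:

```python
import random

random.seed(0)
n = 400
# A continuous (piecewise-linear) random path X on n+1 grid points.
x = [0.0]
for _ in range(n):
    x.append(x[-1] + random.gauss(0.0, 0.05))

# Y_t = running maximum of X up to t, also continuous; X_s <= Y_t for s <= t.
y = []
running_max = float("-inf")
for value in x:
    running_max = max(running_max, value)
    y.append(running_max)

coarse = range(0, n + 1, 20)  # stands in for the countable "rational" times
fine = range(n + 1)           # stands in for all times

assert all(x[s] <= y[t] for s in coarse for t in coarse if s <= t)
assert all(x[s] <= y[t] for s in fine for t in fine if s <= t)
print("inequality holds on both the coarse and the fine grid")
```

Of course, a finite simulation cannot exhibit null sets; the measure-theoretic content (countable subadditivity over the pairs of rationals) is exactly the union bound in the answers above.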
http://gravity.wikia.com/wiki/Mantle_(geology)
# Mantle (geology) The mantle is a part of an astronomical object. The interior of the Earth, like that of the other terrestrial planets, is chemically divided into layers. The mantle is a highly viscous layer between the crust and the outer core. Earth's mantle is a rocky shell about 2,890 km (1,800 mi) thick that constitutes about 84 percent of Earth's volume. It is predominantly solid and overlies Earth's iron-rich hot core, which occupies about 15 percent of Earth's volume.[1] Past episodes of melting and volcanism at the shallower levels of the mantle have produced a thin crust of crystallized melt products near the surface, upon which we live.[2] The gases evolved during the melting of Earth's mantle have a large effect on the composition and abundance of Earth's atmosphere. Information about the structure and composition of the mantle comes either from geophysical investigation or from direct geoscientific analyses of Earth-mantle-derived xenoliths. ## Structure The mantle is divided into sections based upon results from seismology. These layers (and their depths) are the following: the upper mantle (base of the crust–410 km),[3] the transition zone (410–660 km), the lower mantle (660–2891 km), and at the bottom of the latter region the anomalous D" layer with a variable thickness (on average ~200 km thick).[2][4][5][6] The top of the mantle is defined by a sudden increase in seismic velocity, which was first noted by Andrija Mohorovičić in 1909; this boundary is now referred to as the "Mohorovičić discontinuity" or "Moho".[4][7] The uppermost mantle plus the overlying crust are relatively rigid and form the lithosphere, an irregular layer with a maximum thickness of perhaps 200 km. Below the lithosphere the upper mantle becomes notably more plastic in its rheology. In some regions below the lithosphere, the seismic velocity is reduced; this so-called low velocity zone (LVZ) extends down to a depth of several hundred km.
Inge Lehmann discovered a seismic discontinuity at about 220 km depth;[8] although this discontinuity has been found in other studies, it is not known whether the discontinuity is ubiquitous. The transition zone is an area of great complexity; it physically separates the upper and lower mantle.[6] Very little is known about the lower mantle apart from that it appears to be relatively seismically homogeneous. The D" layer at the core–mantle boundary separates the mantle from the core.[2][4] ## Characteristics The mantle differs substantially from the crust in its mechanical characteristics and its chemical composition. The distinction between crust and mantle is based on chemistry, rock types, rheology and seismic characteristics. The crust is, in fact, a product of mantle melting. Partial melting of mantle material is believed to cause incompatible elements to separate from the mantle rock, with less dense material floating upward through pore spaces, cracks, or fissures, to cool and freeze at the surface. Typical mantle rocks have a higher magnesium to iron ratio, and a smaller proportion of silicon and aluminium than the crust. This behavior is also predicted by experiments that partly melt rocks thought to be representative of Earth's mantle. Mantle rocks shallower than about 410 km depth consist mostly of olivine, pyroxenes, spinel-structure minerals, and garnet;[6] typical rock types are thought to be peridotite,[6] dunite (olivine-rich peridotite), and eclogite. Between about 400 km and 650 km depth, olivine is not stable and is replaced by high pressure polymorphs with approximately the same composition: one polymorph is wadsleyite (also called beta-spinel type), and the other is ringwoodite (a mineral with the gamma-spinel structure). Below about 650 km, all of the minerals of the upper mantle begin to become unstable.
The most abundant minerals present have structures (but not compositions) like that of the mineral perovskite, followed by the magnesium/iron oxide ferropericlase.[9] The changes in mineralogy at about 400 and 650 km yield distinctive signatures in seismic records of the Earth's interior, and like the Moho, are readily detected using seismic waves. These changes in mineralogy may influence mantle convection, as they result in density changes and they may absorb or release latent heat as well as depress or elevate the depth of the polymorphic phase transitions for regions of different temperatures. The changes in mineralogy with depth have been investigated by laboratory experiments that duplicate high mantle pressures, such as those using the diamond anvil.[10]

Composition of Earth's mantle in weight percent[11]

| Element | Amount | Compound | Amount |
|---------|--------|----------|--------|
| O       | 44.8   |          |        |
| Si      | 21.5   | SiO2     | 46     |
| Mg      | 22.8   | MgO      | 37.8   |
| Fe      | 5.8    | FeO      | 7.5    |
| Al      | 2.2    | Al2O3    | 4.2    |
| Ca      | 2.3    | CaO      | 3.2    |
| Na      | 0.3    | Na2O     | 0.4    |
| K       | 0.03   | K2O      | 0.04   |
| Sum     | 99.7   | Sum      | 99.1   |

The inner core is solid, the outer core is liquid, and the mantle solid/plastic. This is because of the relative melting points of the different layers (nickel-iron core, silicate crust and mantle) and the increase in temperature and pressure as one moves deeper into the Earth. At the surface both nickel-iron alloys and silicates are sufficiently cool to be solid. In the upper mantle, the silicates are generally solid (localised regions with small amounts of melt exist); however, as the upper mantle is both hot and under relatively little pressure, the rock in the upper mantle has a relatively low viscosity, i.e. it is relatively fluid. In contrast, the lower mantle is under tremendous pressure and therefore has a higher viscosity than the upper mantle. The metallic nickel-iron outer core is liquid despite the enormous pressure, as it has a melting point that is lower than that of the mantle silicates.
The inner core is solid due to the overwhelming pressure found at the center of the planet.[12] ## Temperature In the mantle, temperatures range from roughly 500 to 900 °C at the upper boundary with the crust to over 4,000 °C at the boundary with the core.[12] Although the higher temperatures far exceed the melting points of the mantle rocks at the surface (about 1200 °C for representative peridotite), the mantle is almost exclusively solid.[12] The enormous lithostatic pressure exerted on the mantle prevents melting, because the temperature at which melting begins (the solidus) increases with pressure. ## Movement Due to the temperature difference between the Earth's surface and outer core, and the ability of the crystalline rocks at high pressure and temperature to undergo slow, creeping, viscous-like deformation over millions of years, there is a convective material circulation in the mantle.[4] Hot material upwells, while cooler (and heavier) material sinks downward. Downward motion of material often occurs at convergent plate boundaries called subduction zones,[4] while upwelling of material can take the form of plumes. Locations on the surface that lie over plumes will often increase in elevation (due to the buoyancy of the hotter, less-dense plume beneath) and exhibit hot spot volcanism. The convection of the Earth's mantle is a chaotic process (in the sense of fluid dynamics), which is thought to be an integral part of the motion of plates. Plate motion should not be confused with the older term continental drift, which applies purely to the movement of the crustal components of the continents. The movements of the lithosphere and the underlying mantle are coupled, since descending lithosphere is an essential component of convection in the mantle. The observed continental drift is a complicated relationship between the forces causing oceanic lithosphere to sink and the movements within Earth's mantle.
Although there is a tendency toward larger viscosity at greater depth, this relation is far from linear and shows layers with dramatically decreased viscosity, in particular in the upper mantle and at the boundary with the core.[13] The mantle within about 200 km above the core-mantle boundary appears to have distinctly different seismic properties than the mantle at slightly shallower depths; this unusual mantle region just above the core is called D″ ("D double-prime"), a nomenclature introduced over 50 years ago by the geophysicist Keith Bullen.[14] D″ may consist of material from subducted slabs that descended and came to rest at the core-mantle boundary and/or from a new mineral polymorph discovered in perovskite called post-perovskite. Earthquakes at shallow depths are a result of stick-slip faulting; however, below about 50 km the hot, high-pressure conditions ought to inhibit further seismicity. The mantle is also considered to be viscous, and so incapable of brittle faulting. However, in subduction zones, earthquakes are observed down to 670 km. A number of mechanisms have been proposed to explain this phenomenon, including dehydration, thermal runaway, and phase change. The geothermal gradient can be lowered where cool material from the surface sinks downward, increasing the strength of the surrounding mantle and allowing earthquakes to occur down to depths of 400 km to 670 km. The pressure at the bottom of the mantle is ~136 GPa (1.4 million atm).[6] Pressure increases as one travels deeper into the mantle, since the material beneath has to support the weight of all the material above it. The entire mantle, however, is still thought to deform like a fluid on long timescales, with permanent plastic deformation accommodated by the movement of point, line, and/or planar defects through the solid crystals comprising the mantle.
Estimates for the viscosity of the upper mantle range between 10^19 and 10^24 Pa·s, depending on depth,[13] temperature, composition, state of stress, and numerous other factors. Thus, the upper mantle can only flow very slowly. However, when large forces are applied to the uppermost mantle it can become weaker, and this effect is thought to be important in allowing the formation of tectonic plate boundaries. ## Exploration Exploration of the mantle is generally conducted at the seabed rather than on land due to the relative thinness of the oceanic crust as compared to the significantly thicker continental crust. The first attempt at mantle exploration, known as Project Mohole, was abandoned in 1966 after repeated failures and cost overruns. The deepest penetration was approximately 180 m (590 ft). In 2005 the third-deepest oceanic borehole reached 1,416 metres (4,650 ft) below the sea floor from the ocean drilling vessel JOIDES Resolution. On 5 March 2007, a team of scientists on board the RRS James Cook embarked on a voyage to an area of the Atlantic seafloor where the mantle lies exposed without any crust covering, mid-way between the Cape Verde Islands and the Caribbean Sea. The exposed site lies approximately three kilometres beneath the ocean surface and covers thousands of square kilometres.[15][16] Another attempt to retrieve samples from the Earth's mantle was scheduled for later in 2007.[17] Part of the Chikyu Hakken mission, it was to use the Japanese vessel Chikyu to drill up to 7,000 m (23,000 ft) below the seabed. This is nearly three times as deep as preceding oceanic drillings.
A novel method of exploring the uppermost few hundred kilometres of the Earth was recently analysed, consisting of a small, dense, heat-generating probe which melts its way down through the crust and mantle while its position and progress are tracked by acoustic signals generated in the rocks.[18] The probe consists of an outer sphere of tungsten about 1 m in diameter, inside which is a cobalt-60 radioactive heat source. It was calculated that such a probe would reach the oceanic Moho in less than 6 months and attain minimum depths of well over 100 km in a few decades beneath both oceanic and continental lithosphere.[19] Exploration can also be aided through computer simulations of the evolution of the mantle. In 2009, a supercomputer application provided new insight into the distribution of mineral deposits, especially isotopes of iron, from when the mantle developed 4.5 billion years ago.[20] ## References 1. Robertson, Eugene (2007). "The interior of the earth". USGS. Retrieved on 2009-01-06. 2. "The structure of the Earth". Moorland School (2005). Retrieved on 2007-12-26. 3. The location of the base of the crust varies from approximately 10 to 70 kilometers. Oceanic crust is generally less than 10 kilometers thick. "Standard" continental crust is around 35 kilometers thick, and the large crustal root under the Tibetan Plateau is approximately 70 kilometers thick. 4. Alden, Andrew (2007). "Today's Mantle: a guided tour". About.com. Retrieved on 2007-12-25. 5. Earth cutaway (image). Retrieved 2007-12-25. 6. Burns, Roger George (1993). Mineralogical Applications of Crystal Field Theory. Cambridge University Press. p. 354. ISBN 0521430771. Retrieved on 26 December 2007. 7. "Istria on the Internet – Prominent Istrians – Andrija Mohorovicic" (2007). Retrieved on 2007-12-25. 8. Carlowicz, Michael (2005). "Inge Lehmann biography". American Geophysical Union, Washington, D.C. Retrieved on 2007-12-25. 9. Anderson, Don L.
(2007) New Theory of the Earth. Cambridge University Press. ISBN 978-0-521-84959-3, 0-521-84959-4. 10. Alden, Andrew. "The Big Squeeze: Into the Mantle". About.com. Retrieved on 2007-12-25. 11. [email protected]. Retrieved 2007-12-26. 12. Louie, J. (1996). "Earth's Interior". University of Nevada, Reno. Retrieved on 2007-12-24. 13. Mantle Viscosity and the Thickness of the Convective Downwellings. Retrieved on November 7, 2007. 14. Alden, Andrew. "The End of D-Double-Prime Time?". About.com. Retrieved on 2007-12-25. 15. Than, Ker (2007-03-01). "Scientists to study gash on Atlantic seafloor", Msnbc.com. Retrieved on 16 March 2008. "A team of scientists will embark on a voyage next week to study an "open wound" on the Atlantic seafloor where the Earth's deep interior lies exposed without any crust covering." 16. "Earth's Crust Missing In Mid-Atlantic", Science Daily (2007-03-02). Retrieved on 16 March 2008. "Cardiff University scientists will shortly set sail (March 5) to investigate a startling discovery in the depths of the Atlantic." 17. "Japan hopes to predict 'Big One' with journey to center of Earth", PhysOrg.com (2005-12-15). Retrieved on 16 March 2008. "An ambitious Japanese-led project to dig deeper into the Earth's surface than ever before will be a breakthrough in detecting earthquakes including Tokyo's dreaded "Big One," officials said Thursday." 18. Ojovan M.I., Gibb F.G.F., Poluektov P.P., Emets E.P. (2005). Probing of the interior layers of the Earth with self-sinking capsules. Atomic Energy, 99, 556–562. 19. Ojovan M.I., Gibb F.G.F. "Exploring the Earth's Crust and Mantle Using Self-Descending, Radiation-Heated, Probes and Acoustic Emission Monitoring". Chapter 7. In: Nuclear Waste Research: Siting, Technology and Treatment, ISBN 978-1-60456-184-5, Editor: Arnold P. Lattefer, Nova Science Publishers, Inc. 2008. 20. University of California - Davis (2009-06-15). Super-computer Provides First Glimpse Of Earth's Early Magma Interior.
ScienceDaily. Retrieved 2009-06-16 from http://www.sciencedaily.com/releases/2009/06/090615153118.htm.
http://www.commens.org/dictionary/entry/quote-harvard-lectures-logic-science-lecture-viii-forms-induction-and-hypothesis-3
# The Commens Dictionary

Quote from ‘Harvard Lectures on the Logic of Science. Lecture VIII: Forms of Induction and Hypothesis’

Quote: Hypothesis is to be explained in a similar manner to induction. Hypothesis is quite a different thing from induction and is usually so considered although I have not found any definition given of it which brings out the difference distinctly. But it will be acknowledged that a hypothesis is a categorical assertion of something we have not experienced. Now in induction there is nothing of this sort. [—] Hypothesis is in fact the inference of a minor proposition as in the following examples respecting light. We find that light gives certain peculiar fringes. Required an explanation of the fact. We reflect that ether waves would give the same fringes. We have therefore only to suppose that light is ether waves and the marvel is explained. [—] We have then three different kinds of inference. Deduction or inference à priori. Induction or inference à particularis, and Hypothesis or inference a posteriori.

Date: 1865
References: W 1:266-267
Citation: ‘Hypothesis [as a form of reasoning]’ (pub. 02.02.13-17:17). Quote in M. Bergman & S. Paavola (Eds.), The Commens Dictionary: Peirce's Terms in His Own Words. New Edition. Retrieved from http://www.commens.org/dictionary/entry/quote-harvard-lectures-logic-science-lecture-viii-forms-induction-and-hypothesis-3.
Posted: Feb 02, 2013, 17:17 by Sami Paavola
Last revised: Jan 07, 2014, 01:00 by Commens Admin
https://labs.tib.eu/arxiv/?author=Carlos%20Vergara-Cervantes
• ### The ${\it XMM}$ Cluster Survey: joint modelling of the $L_{\rm X}-T$ scaling relation for clusters and groups of galaxies (1805.03465)

May 9, 2018 astro-ph.CO

We characterize the X-ray luminosity–temperature ($L_{\rm X}-T$) relation using a sample of 353 clusters and groups of galaxies with temperatures in excess of 1 keV, spanning the redshift range $0.1 < z < 0.6$, the largest sample ever assembled for this purpose. All systems are part of the ${\it XMM-Newton}$ Cluster Survey (XCS), and have also been independently identified in Sloan Digital Sky Survey (SDSS) data using the redMaPPer algorithm. We allow for redshift evolution of the normalisation and intrinsic scatter of the $L_{\rm X}-T$ relation, as well as, for the first time, the possibility of a temperature-dependent change-point in the exponent of this relation. However, we do not find strong statistical support for deviations from the usual modelling of the $L_{\rm X}-T$ relation as a single power law, in which the normalisation evolves self-similarly and the scatter remains constant with time. Nevertheless, assuming {\it a priori} the existence of the type of deviations considered, faster evolution than the self-similar expectation for the normalisation of the $L_{\rm X}-T$ relation is favoured, as well as a decrease with redshift in the scatter about the $L_{\rm X}-T$ relation. Further, the preferred location for a change-point is then close to 2 keV, possibly marking the transition between the group and cluster regimes. Our results also indicate an increase in the power-law exponent of the $L_{\rm X}-T$ relation when moving from the group to the cluster regime, and faster evolution in the former with respect to the latter, driving the temperature-dependent change-point towards higher values with redshift.

• ### The Carbon Cycle as the Main Determinant of Glacial-Interglacial Transitions (1308.2709)

An intriguing problem in climate science is the existence of Earth's glacial cycles.
We show that it is possible to generate these periodic changes in climate by means of the Earth's carbon cycle as the main determining factor. The carbon exchange between the Ocean, the Continent and the Atmosphere is modeled by means of a three-dimensional Lotka–Volterra system, and the resulting atmospheric carbon cycle is used as the unique radiative forcing mechanism. It is shown that the carbon dioxide (CO$_{2}$) and temperature anomaly curves thus obtained have the same first-order structure as the 100 kyr glacial–interglacial cycles depicted by the Vostok ice core data, reproducing the asymmetries of rapid heating–slow cooling and short interglacial–long glacial ages.
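The kind of compartment model the abstract describes can be sketched numerically. The snippet below integrates a three-compartment Lotka-Volterra-type system with a hand-rolled RK4 stepper; the growth rates, interaction matrix (competitive form chosen here for boundedness), and initial reservoir sizes are all made-up placeholders, not the paper's fitted model:

```python
# Illustrative sketch only: a three-compartment Lotka-Volterra-type system
# for carbon exchange between ocean, continent and atmosphere.
# All coefficients below are hypothetical, not the paper's values.

def deriv(x, r, A):
    # dx_i/dt = x_i * (r_i - sum_j A[i][j] * x_j)
    return [x[i] * (r[i] - sum(A[i][j] * x[j] for j in range(3)))
            for i in range(3)]

def rk4_step(x, dt, r, A):
    # Classic fourth-order Runge-Kutta step.
    k1 = deriv(x, r, A)
    k2 = deriv([x[i] + 0.5 * dt * k1[i] for i in range(3)], r, A)
    k3 = deriv([x[i] + 0.5 * dt * k2[i] for i in range(3)], r, A)
    k4 = deriv([x[i] + dt * k3[i] for i in range(3)], r, A)
    return [x[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(3)]

r = [1.0, 0.6, 0.8]          # exchange/growth rates (made up)
A = [[1.0, 0.2, 0.3],        # interaction matrix (made up, competitive)
     [0.2, 1.0, 0.1],
     [0.1, 0.3, 1.0]]
x = [0.5, 0.5, 0.5]          # initial reservoir sizes (made up)
for _ in range(5000):        # integrate 50 time units with dt = 0.01
    x = rk4_step(x, 0.01, r, A)
```

With these competitive couplings the trajectories stay positive and bounded; reproducing the paper's oscillatory glacial-interglacial cycles would require different (predator-prey-style) coupling terms.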
https://www.catalyzex.com/paper/arxiv:2106.13513
# Littlestone Classes are Privately Online Learnable

Jun 25, 2021. Noah Golowich, Roi Livni.

We consider the problem of online classification under a privacy constraint. In this setting a learner observes sequentially a stream of labelled examples $(x_t, y_t)$, for $1 \leq t \leq T$, and returns at each iteration $t$ a hypothesis $h_t$ which is used to predict the label of each new example $x_t$. The learner's performance is measured by her regret against a known hypothesis class $\mathcal{H}$. We require that the algorithm satisfies the following privacy constraint: the sequence $h_1, \ldots, h_T$ of hypotheses output by the algorithm needs to be an $(\epsilon, \delta)$-differentially private function of the whole input sequence $(x_1, y_1), \ldots, (x_T, y_T)$. We provide the first non-trivial regret bound for the realizable setting. Specifically, we show that if the class $\mathcal{H}$ has constant Littlestone dimension then, given an oblivious sequence of labelled examples, there is a private learner that makes in expectation at most $O(\log T)$ mistakes -- comparable to the optimal mistake bound in the non-private case, up to a logarithmic factor. Moreover, for general values of the Littlestone dimension $d$, the same mistake bound holds but with a doubly-exponential in $d$ factor. A recent line of work has demonstrated a strong connection between classes that are online learnable and those that are differentially-private learnable. Our results strengthen this connection and show that an online learning algorithm can in fact be directly privatized (in the realizable setting). We also discuss an adaptive setting and provide a sublinear regret bound of $O(\sqrt{T})$.
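The $(\epsilon, \delta)$-differential-privacy constraint in the abstract can be illustrated with a much simpler, standard mechanism than the paper's learner: the Laplace mechanism, which releases a bounded-sensitivity statistic with $\epsilon$-DP (so $\delta = 0$). The mistake-count release below is a toy stand-in of my own, not the algorithm from the paper:

```python
import random

# Toy illustration of differential privacy only -- NOT the paper's learner.
# Changing one example in the input stream changes the 0/1 mistake sequence
# (for a fixed hypothesis sequence) in at most one position, so the count
# has sensitivity 1 and Laplace noise of scale 1/epsilon suffices.

def laplace_noise(scale):
    # A Laplace(0, scale) sample as the difference of two i.i.d. exponentials.
    return scale * (random.expovariate(1.0) - random.expovariate(1.0))

def private_mistake_count(mistakes, epsilon):
    """Release the learner's total number of online mistakes with epsilon-DP."""
    sensitivity = 1.0
    return sum(mistakes) + laplace_noise(sensitivity / epsilon)

noisy = private_mistake_count([1, 0, 1, 1], epsilon=1.0)
```

Smaller `epsilon` means stronger privacy and proportionally larger noise; the paper's contribution is achieving this kind of guarantee for the whole hypothesis sequence while keeping the mistake bound near-optimal.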
https://dspace.nwu.ac.za/handle/10394/1872/browse?value=multivariate+stable+distribution&type=subject
#### Goodness-of-fit tests for multivariate stable distributions based on the empirical characteristic function (Elsevier, 2015)

We consider goodness-of-fit testing for multivariate stable distributions. The proposed test statistics exploit a characterizing property of the characteristic function of these distributions and are consistent under ...
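As background for the truncated abstract above: the basic ingredient such tests are built from is presumably the empirical characteristic function of the sample, compared against the stable-law form. A minimal sketch of my own (not the authors' code):

```python
import cmath

# Empirical characteristic function of a d-dimensional sample:
#   phi_n(t) = (1/n) * sum_j exp(i * <t, X_j>)
# Goodness-of-fit statistics of the kind described typically integrate a
# weighted distance between phi_n and the hypothesized characteristic function.

def empirical_cf(data, t):
    """data: list of d-dimensional points; t: d-dimensional argument."""
    n = len(data)
    return sum(cmath.exp(1j * sum(ti * xi for ti, xi in zip(t, x)))
               for x in data) / n
```

Note that $|\hat\varphi_n(t)| \le 1$ for every $t$, and $\hat\varphi_n(0) = 1$ exactly, which is a handy sanity check on any implementation.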
https://gmatclub.com/forum/is-x-0-1-x-y-30-2-y-261877.html
# Is x > 0? (1) x + y = 30 (2) y^2 > 10

Jamil1992Mehedi (22 Mar 2018):

Is $$x > 0?$$

(1) $$x + y = 30$$
(2) $$y^2 > 10$$

Bunuel, Math Expert (22 Mar 2018):

Is $$x > 0?$$

(1) $$x + y = 30$$. No information about y. Not sufficient.

(2) $$y^2 > 10$$. Clearly not sufficient.

(1)+(2) If y = 30, then x = 0 and the answer is NO, but if y = 29, then x = 1 and the answer is YES. Not sufficient.

Answer: E.

Jamil1992Mehedi (22 Mar 2018):

Hi Bunuel sir, thank you for the OA. Would you elaborate on statement (2), y^2 > 10? Basically I need to know: 1. What does y^2 > 10 mean? 2. What is the range of y^2 > 10 on the number line? 3. Is it a single finite interval or not?

Bunuel, Math Expert (22 Mar 2018):

$$y^2 > 10$$;

Take the square root: $$|y| > \sqrt{10}$$ (recall that $$\sqrt{x^2}=|x|$$);

$$y<-\sqrt{10}$$ or $$y > \sqrt{10}$$.

So the solution set is two unbounded rays, not a single finite interval. For more, see the Inequalities chapter of the Ultimate GMAT Quantitative Megathread.

Math Revolution GMAT Instructor (24 Mar 2018):

Forget conventional ways of solving math questions. For DS problems, the VA (Variable Approach) method is the quickest and easiest way to find the answer without actually solving the problem. Remember that equal numbers of variables and independent equations ensure a solution.

Since we have 2 variables (x and y) and 0 equations, C is most likely to be the answer, so we can save time by considering conditions 1) and 2) together first.

Conditions 1) & 2): If x = 15, y = 15, the answer is 'yes'. If x = -10, y = 40, the answer is 'no'. Thus, both conditions together are not sufficient, and the answer is E.

Normally, in problems which require 2 equations (those in which the original conditions include 2 variables, or 3 variables and 1 equation, or 4 variables and 2 equations), each of conditions 1) and 2) provides an additional equation. In these problems, the two key possibilities are that C is the answer (with probability 70%) and E is the answer (with probability 25%); there is only a 5% chance that A, B or D is the answer. This occurs in common mistake types 3 and 4. Since C (both conditions together are sufficient) is the most likely answer, we save time by first checking whether conditions 1) and 2) are sufficient when taken together. There may of course be cases in which the answer is A, B, D or E, but if conditions 1) and 2) are NOT sufficient when taken together, the answer must be E.
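The sufficiency argument in the thread can be double-checked by brute force. This short script (mine, not from the thread) confirms that even with both statements imposed, x > 0 can go either way, so the answer is E:

```python
# Check that statements (1) x + y = 30 and (2) y^2 > 10, taken together,
# still allow both answers to "Is x > 0?".
cases = []
for y in [29, 30, 40]:        # each value satisfies statement (2): y^2 > 10
    assert y ** 2 > 10
    x = 30 - y                 # enforce statement (1): x + y = 30
    cases.append((x, y, x > 0))

answers = [ans for _, _, ans in cases]
# y = 29 gives x = 1 (yes); y = 30 gives x = 0 (no); y = 40 gives x = -10 (no).
```

Because the answers disagree, the combined statements are not sufficient.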
http://mathhelpforum.com/trigonometry/198839-unit-circle-help.html
# Math Help - Unit Circle Help

1. ## Unit Circle Help

Hi, I am currently tutoring yr 11 and 12. I had difficulty explaining how to determine unit circle values without the use of known formulas. For example:

tan(3*pi/2 - a) = cot(a) OR sin(pi/2 - b) = cos(b)

How can I explain it with just the unit circle - not the given equivalencies? Thanks

2. ## Re: Unit Circle Help

Originally Posted by ineedhelplz:
> Hi, I am currently tutoring yr 11 and 12. I had difficulty explaining how to determine unit circle values without the use of known formulas. For example: tan(3*pi/2 - a) = cot(a) OR sin(pi/2 - b) = cos(b). How can I explain it with just the unit circle - not the given equivalencies? Thanks

The second is easier to show using any right-angle triangle.

3. ## Re: Unit Circle Help

Using "radian" notation, we have that the sum of the 3 angles of any triangle is equal to π. If our triangle is a right triangle, then our "main angle" (the adjacent angle, formed by the hypotenuse and the x-axis) and the "other angle" (the opposite angle, formed by the hypotenuse and the vertical leg) sum to π/2.

If we "swap" the angles, we turn the vertical leg into a horizontal one, and the x-axis becomes the y-axis (imagine taking that same triangle, moving the point that was at the center of the unit circle to the outer radius, while moving the point that was on the circle radius to the origin, and then "rotating and flipping" to switch the axes). This means that the side that was the cosine is now the sine of the opposite angle, and the side that was the sine is now the cosine of the opposite angle. So:

sin(π/2 - θ) = cos(θ)
cos(π/2 - θ) = sin(θ)

4. ## Re: Unit Circle Help

Thank you very much for that, is there a visual aid you could please show me? Thanks!

5.
## Re: Unit Circle Help

I'm sure you can see (for a right triangle with hypotenuse h and side b adjacent to the angle θ; the original post included a diagram)

\displaystyle \begin{align*} \sin{\left(\frac{\pi}{2} - \theta\right)} &= \frac{b}{h} \\ \\ \cos{\theta} &= \frac{b}{h} \\ \\ \sin{\left(\frac{\pi}{2} - \theta\right)} &= \cos{\theta} \end{align*}

6. ## Re: Unit Circle Help

I understand that, thank you; however, what about the first example? tan(3pi/2 - a) or tan(3pi/2 + a)? I don't see a way to do that with a right-angled triangle. Wouldn't you need the unit circle?

7. ## Re: Unit Circle Help

The only way I see is to make equivalencies with the unit circle. For example tan(3pi/2 + a) = -tan(pi/2 - a). Is there any way to demonstrate this with only the unit circle? Thanks again
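For anyone wanting to double-check the identities discussed in this thread numerically (a quick sanity check, not a proof), the following script evaluates both sides at a few angles, avoiding points where tan and cot are undefined:

```python
import math

# sin(pi/2 - b) = cos(b)
for b in [0.2, 0.9, 1.4]:
    assert abs(math.sin(math.pi / 2 - b) - math.cos(b)) < 1e-9

# tan(3*pi/2 - a) = cot(a), since tan has period pi:
# tan(3*pi/2 - a) = tan(pi/2 - a) = cot(a)
for a in [0.3, 0.7, 1.1]:
    assert abs(math.tan(3 * math.pi / 2 - a) - 1 / math.tan(a)) < 1e-9

# tan(3*pi/2 + a) = -cot(a) = -tan(pi/2 - a)
for a in [0.3, 0.7]:
    assert abs(math.tan(3 * math.pi / 2 + a) + 1 / math.tan(a)) < 1e-9
```

The periodicity step (tan repeats every π, so 3π/2 can be replaced by π/2) is also the cleanest way to explain these reductions on the unit circle itself.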
https://www.educator.com/mathematics/ap-calculus-ab/hovasapian/example-problems-for-limits-at-infinity.php
INSTRUCTORS Raffi Hovasapian John Zhu Raffi Hovasapian Example Problems for Limits at Infinity Slide Duration: Section 1: Limits and Derivatives Overview & Slopes of Curves 42m 8s Intro 0:00 Overview & Slopes of Curves 0:21 Differential and Integral 0:22 Fundamental Theorem of Calculus 6:36 Differentiation or Taking the Derivative 14:24 What Does the Derivative Mean and How do We Find it? 15:18 Example: f'(x) 19:24 Example: f(x) = sin (x) 29:16 General Procedure for Finding the Derivative of f(x) 37:33 More on Slopes of Curves 50m 53s Intro 0:00 Slope of the Secant Line along a Curve 0:12 Slope of the Tangent Line to f(x) at a Particlar Point 0:13 Slope of the Secant Line along a Curve 2:59 Instantaneous Slope 6:51 Instantaneous Slope 6:52 Example: Distance, Time, Velocity 13:32 Instantaneous Slope and Average Slope 25:42 Slope & Rate of Change 29:55 Slope & Rate of Change 29:56 Example: Slope = 2 33:16 Example: Slope = 4/3 34:32 Example: Slope = 4 (m/s) 39:12 Example: Density = Mass / Volume 40:33 Average Slope, Average Rate of Change, Instantaneous Slope, and Instantaneous Rate of Change 47:46 Example Problems for Slopes of Curves 59m 12s Intro 0:00 Example I: Water Tank 0:13 Part A: Which is the Independent Variable and Which is the Dependent? 
2:00 Part B: Average Slope 3:18 Part C: Express These Slopes as Rates-of-Change 9:28 Part D: Instantaneous Slope 14:54 Example II: y = √(x-3) 28:26 Part A: Calculate the Slope of the Secant Line 30:39 Part B: Instantaneous Slope 41:26 Part C: Equation for the Tangent Line 43:59 Example III: Object in the Air 49:37 Part A: Average Velocity 50:37 Part B: Instantaneous Velocity 55:30 Desmos Tutorial 18m 43s Intro 0:00 Desmos Tutorial 1:42 Desmos Tutorial 1:43 Things You Must Learn To Do on Your Particular Calculator 2:39 Things You Must Learn To Do on Your Particular Calculator 2:40 Example I: y=sin x 4:54 Example II: y=x³ and y = d/(dx) (x³) 9:22 Example III: y = x² {-5 <= x <= 0} and y = cos x {0 < x < 6} 13:15 The Limit of a Function 51m 53s Intro 0:00 The Limit of a Function 0:14 The Limit of a Function 0:15 Graph: Limit of a Function 12:24 Table of Values 16:02 lim x→a f(x) Does not Say What Happens When x = a 20:05 Example I: f(x) = x² 24:34 Example II: f(x) = 7 27:05 Example III: f(x) = 4.5 30:33 Example IV: f(x) = 1/x 34:03 Example V: f(x) = 1/x² 36:43 The Limit of a Function, Cont. 38:16 Infinity and Negative Infinity 38:17 Does Not Exist 42:45 Summary 46:48 Example Problems for the Limit of a Function 24m 43s Intro 0:00 Example I: Explain in Words What the Following Symbols Mean 0:10 Example II: Find the Following Limit 5:21 Example III: Use the Graph to Find the Following Limits 7:35 Example IV: Use the Graph to Find the Following Limits 11:48 Example V: Sketch the Graph of a Function that Satisfies the Following Properties 15:25 Example VI: Find the Following Limit 18:44 Example VII: Find the Following Limit 20:06 Calculating Limits Mathematically 53m 48s Intro 0:00 Plug-in Procedure 0:09 Plug-in Procedure 0:10 Limit Laws 9:14 Limit Law 1 10:05 Limit Law 2 10:54 Limit Law 3 11:28 Limit Law 4 11:54 Limit Law 5 12:24 Limit Law 6 13:14 Limit Law 7 14:38 Plug-in Procedure, Cont. 16:35 Plug-in Procedure, Cont. 
16:36 Example I: Calculating Limits Mathematically 20:50 Example II: Calculating Limits Mathematically 27:37 Example III: Calculating Limits Mathematically 31:42 Example IV: Calculating Limits Mathematically 35:36 Example V: Calculating Limits Mathematically 40:58 Limits Theorem 44:45 Limits Theorem 1 44:46 Limits Theorem 2: Squeeze Theorem 46:34 Example VI: Calculating Limits Mathematically 49:26 Example Problems for Calculating Limits Mathematically 21m 22s Intro 0:00 Example I: Evaluate the Following Limit by Showing Each Application of a Limit Law 0:16 Example II: Evaluate the Following Limit 1:51 Example III: Evaluate the Following Limit 3:36 Example IV: Evaluate the Following Limit 8:56 Example V: Evaluate the Following Limit 11:19 Example VI: Calculating Limits Mathematically 13:19 Example VII: Calculating Limits Mathematically 14:59 Calculating Limits as x Goes to Infinity 50m 1s Intro 0:00 Limit as x Goes to Infinity 0:14 Limit as x Goes to Infinity 0:15 Let's Look at f(x) = 1 / (x-3) 1:04 Summary 9:34 Example I: Calculating Limits as x Goes to Infinity 12:16 Example II: Calculating Limits as x Goes to Infinity 21:22 Example III: Calculating Limits as x Goes to Infinity 24:10 Example IV: Calculating Limits as x Goes to Infinity 36:00 Example Problems for Limits at Infinity 36m 31s Intro 0:00 Example I: Calculating Limits as x Goes to Infinity 0:14 Example II: Calculating Limits as x Goes to Infinity 3:27 Example III: Calculating Limits as x Goes to Infinity 8:11 Example IV: Calculating Limits as x Goes to Infinity 14:20 Example V: Calculating Limits as x Goes to Infinity 20:07 Example VI: Calculating Limits as x Goes to Infinity 23:36 Continuity 53m Intro 0:00 Definition of Continuity 0:08 Definition of Continuity 0:09 Example: Not Continuous 3:52 Example: Continuous 4:58 Example: Not Continuous 5:52 Procedure for Finding Continuity 9:45 Law of Continuity 13:44 Law of Continuity 13:45 Example I: Determining Continuity on a Graph 15:55 Example II: Show 
Continuity & Determine the Interval Over Which the Function is Continuous 17:57 Example III: Is the Following Function Continuous at the Given Point? 22:42 Theorem for Composite Functions 25:28 Theorem for Composite Functions 25:29 Example IV: Is cos(x³ + ln x) Continuous at x=π/2? 27:00 Example V: What Value of A Will make the Following Function Continuous at Every Point of Its Domain? 34:04 Types of Discontinuity 39:18 Removable Discontinuity 39:33 Jump Discontinuity 40:06 Infinite Discontinuity 40:32 Intermediate Value Theorem 40:58 Intermediate Value Theorem: Hypothesis & Conclusion 40:59 Intermediate Value Theorem: Graphically 43:40 Example VI: Prove That the Following Function Has at Least One Real Root in the Interval [4,6] 47:46 Derivative I 40m 2s Intro 0:00 Derivative 0:09 Derivative 0:10 Example I: Find the Derivative of f(x)=x³ 2:20 Notations for the Derivative 7:32 Notations for the Derivative 7:33 Derivative & Rate of Change 11:14 Recall the Rate of Change 11:15 Instantaneous Rate of Change 17:04 Graphing f(x) and f'(x) 19:10 Example II: Find the Derivative of x⁴ - x² 24:00 Example III: Find the Derivative of f(x)=√x 30:51 Derivatives II 53m 45s Intro 0:00 Example I: Find the Derivative of (2+x)/(3-x) 0:18 Derivatives II 9:02 f(x) is Differentiable if f'(x) Exists 9:03 Recall: For a Limit to Exist, Both Left Hand and Right Hand Limits Must Equal to Each Other 17:19 Geometrically: Differentiability Means the Graph is Smooth 18:44 Example II: Show Analytically that f(x) = |x| is Nor Differentiable at x=0 20:53 Example II: For x > 0 23:53 Example II: For x < 0 25:36 Example II: What is f(0) and What is the lim |x| as x→0? 30:46 Differentiability & Continuity 34:22 Differentiability & Continuity 34:23 How Can a Function Not be Differentiable at a Point? 39:38 How Can a Function Not be Differentiable at a Point? 
39:39 Higher Derivatives 41:58 Higher Derivatives 41:59 Derivative Operator 45:12 Example III: Find (dy)/(dx) & (d²y)/(dx²) for y = x³ 49:29 More Example Problems for The Derivative 31m 38s Intro 0:00 Example I: Sketch f'(x) 0:10 Example II: Sketch f'(x) 2:14 Example III: Find the Derivative of the Following Function sing the Definition 3:49 Example IV: Determine f, f', and f'' on a Graph 12:43 Example V: Find an Equation for the Tangent Line to the Graph of the Following Function at the Given x-value 13:40 Example VI: Distance vs. Time 20:15 Example VII: Displacement, Velocity, and Acceleration 23:56 Example VIII: Graph the Displacement Function 28:20 Section 2: Differentiation Differentiation of Polynomials & Exponential Functions 47m 35s Intro 0:00 Differentiation of Polynomials & Exponential Functions 0:15 Derivative of a Function 0:16 Derivative of a Constant 2:35 Power Rule 3:08 If C is a Constant 4:19 Sum Rule 5:22 Exponential Functions 6:26 Example I: Differentiate 7:45 Example II: Differentiate 12:38 Example III: Differentiate 15:13 Example IV: Differentiate 16:20 Example V: Differentiate 19:19 Example VI: Find the Equation of the Tangent Line to a Function at a Given Point 12:18 Example VII: Find the First & Second Derivatives 25:59 Example VIII 27:47 Part A: Find the Velocity & Acceleration Functions as Functions of t 27:48 Part B: Find the Acceleration after 3 Seconds 30:12 Part C: Find the Acceleration when the Velocity is 0 30:53 Part D: Graph the Position, Velocity, & Acceleration Graphs 32:50 Example IX: Find a Cubic Function Whose Graph has Horizontal Tangents 34:53 Example X: Find a Point on a Graph 42:31 The Product, Power & Quotient Rules 47m 25s Intro 0:00 The Product, Power and Quotient Rules 0:19 Differentiate Functions 0:20 Product Rule 5:30 Quotient Rule 9:15 Power Rule 10:00 Example I: Product Rule 13:48 Example II: Quotient Rule 16:13 Example III: Power Rule 18:28 Example IV: Find dy/dx 19:57 Example V: Find dy/dx 24:53 Example VI: Find 
dy/dx 28:38 Example VII: Find an Equation for the Tangent to the Curve 34:54 Example VIII: Find d²y/dx² 38:08 Derivatives of the Trigonometric Functions 41m 8s Intro 0:00 Derivatives of the Trigonometric Functions 0:09 Let's Find the Derivative of f(x) = sin x 0:10 Important Limits to Know 4:59 d/dx (sin x) 6:06 d/dx (cos x) 6:38 d/dx (tan x) 6:50 d/dx (csc x) 7:02 d/dx (sec x) 7:15 d/dx (cot x) 7:27 Example I: Differentiate f(x) = x² - 4 cos x 7:56 Example II: Differentiate f(x) = x⁵ tan x 9:04 Example III: Differentiate f(x) = (cos x) / (3 + sin x) 10:56 Example IV: Differentiate f(x) = e^x / (tan x - sec x) 14:06 Example V: Differentiate f(x) = (csc x - 4) / (cot x) 15:37 Example VI: Find an Equation of the Tangent Line 21:48 Example VII: For What Values of x Does the Graph of the Function x + 3 cos x Have a Horizontal Tangent? 25:17 28:23 Example IX: Evaluate 33:22 Example X: Evaluate 36:38 The Chain Rule 24m 56s Intro 0:00 The Chain Rule 0:13 Recall the Composite Functions 0:14 Derivatives of Composite Functions 1:34 Example I: Identify f(x) and g(x) and Differentiate 6:41 Example II: Identify f(x) and g(x) and Differentiate 9:47 Example III: Differentiate 11:03 Example IV: Differentiate f(x) = -5 / (x² + 3)³ 12:15 Example V: Differentiate f(x) = cos(x² + c²) 14:35 Example VI: Differentiate f(x) = cos⁴x +c² 15:41 Example VII: Differentiate 17:03 Example VIII: Differentiate f(x) = sin(tan x²) 19:01 Example IX: Differentiate f(x) = sin(tan² x) 21:02 More Chain Rule Example Problems 25m 32s Intro 0:00 Example I: Differentiate f(x) = sin(cos(tanx)) 0:38 Example II: Find an Equation for the Line Tangent to the Given Curve at the Given Point 2:25 Example III: F(x) = f(g(x)), Find F' (6) 4:22 Example IV: Differentiate & Graph both the Function & the Derivative in the Same Window 5:35 Example V: Differentiate f(x) = ( (x-8)/(x+3) )⁴ 10:18 Example VI: Differentiate f(x) = sec²(12x) 12:28 Example VII: Differentiate 14:41 Example VIII: Differentiate 19:25 Example IX: 
Find an Expression for the Rate of Change of the Volume of the Balloon with Respect to Time 21:13 Implicit Differentiation 52m 31s Intro 0:00 Implicit Differentiation 0:09 Implicit Differentiation 0:10 Example I: Find (dy)/(dx) by both Implicit Differentiation and Solving Explicitly for y 12:15 Example II: Find (dy)/(dx) of x³ + x²y + 7y² = 14 19:18 Example III: Find (dy)/(dx) of x³y² + y³x² = 4x 21:43 Example IV: Find (dy)/(dx) of the Following Equation 24:13 Example V: Find (dy)/(dx) of 6sin x cos y = 1 29:00 Example VI: Find (dy)/(dx) of x² cos² y + y sin x = 2sin x cos y 31:02 Example VII: Find (dy)/(dx) of √(xy) = 7 + y²e^x 37:36 Example VIII: Find (dy)/(dx) of 4(x²+y²)² = 35(x²-y²) 41:03 Example IX: Find (d²y)/(dx²) of x² + y² = 25 44:05 Example X: Find (d²y)/(dx²) of sin x + cos y = sin(2x) 47:48 Section 3: Applications of the Derivative Linear Approximations & Differentials 47m 34s Intro 0:00 Linear Approximations & Differentials 0:09 Linear Approximations & Differentials 0:10 Example I: Linear Approximations & Differentials 11:27 Example II: Linear Approximations & Differentials 20:19 Differentials 30:32 Differentials 30:33 Example III: Linear Approximations & Differentials 34:09 Example IV: Linear Approximations & Differentials 35:57 Example V: Relative Error 38:46 Related Rates 45m 33s Intro 0:00 Related Rates 0:08 Strategy for Solving Related Rates Problems #1 0:09 Strategy for Solving Related Rates Problems #2 1:46 Strategy for Solving Related Rates Problems #3 2:06 Strategy for Solving Related Rates Problems #4 2:50 Strategy for Solving Related Rates Problems #5 3:38 Example I: Radius of a Balloon 5:15 12:52 Example III: Water Tank 19:08 Example IV: Distance between Two Cars 29:27 Example V: Line-of-Sight 36:20 More Related Rates Examples 37m 17s Intro 0:00 0:14 Example II: Particle 4:45 Example III: Water Level 10:28 Example IV: Clock 20:47 Example V: Distance between a House and a Plane 29:11 Maximum & Minimum Values of a Function 40m 44s Intro 0:00 
Maximum & Minimum Values of a Function, Part 1 0:23 Absolute Maximum 2:20 Absolute Minimum 2:52 Local Maximum 3:38 Local Minimum 4:26 Maximum & Minimum Values of a Function, Part 2 6:11 Function with Absolute Minimum but No Absolute Max, Local Max, and Local Min 7:18 Function with Local Max & Min but No Absolute Max & Min 8:48 Formal Definitions 10:43 Absolute Maximum 11:18 Absolute Minimum 12:57 Local Maximum 14:37 Local Minimum 16:25 Extreme Value Theorem 18:08 Theorem: f'(c) = 0 24:40 Critical Number (Critical Value) 26:14 Procedure for Finding the Critical Values of f(x) 28:32 Example I: Find the Critical Values of f(x) = x + sinx 29:51 Example II: What are the Absolute Max & Absolute Minimum of f(x) = x + 4 sinx on [0,2π] 35:31 Example Problems for Max & Min 40m 44s Intro 0:00 Example I: Identify Absolute and Local Max & Min on the Following Graph 0:11 Example II: Sketch the Graph of a Continuous Function 3:11 Example III: Sketch the Following Graphs 4:40 Example IV: Find the Critical Values of f(x) = 3x⁴ - 7x³ + 4x² 6:13 Example V: Find the Critical Values of f(x) = |2x - 5| 8:42 Example VI: Find the Critical Values 11:42 Example VII: Find the Critical Values of f(x) = cos²(2x) on [0,2π] 16:57 Example VIII: Find the Absolute Max & Min f(x) = 2sinx + 2cos x on [0,(π/3)] 20:08 Example IX: Find the Absolute Max & Min f(x) = (ln(2x)) / x on [1,3] 24:39 The Mean Value Theorem 25m 54s Intro 0:00 Rolle's Theorem 0:08 Rolle's Theorem: If & Then 0:09 Rolle's Theorem: Geometrically 2:06 There May Be More than 1 c Such That f'( c ) = 0 3:30 Example I: Rolle's Theorem 4:58 The Mean Value Theorem 9:12 The Mean Value Theorem: If & Then 9:13 The Mean Value Theorem: Geometrically 11:07 Example II: Mean Value Theorem 13:43 Example III: Mean Value Theorem 21:19 Using Derivatives to Graph Functions, Part I 25m 54s Intro 0:00 Using Derivatives to Graph Functions, Part I 0:12 Increasing/ Decreasing Test 0:13 Example I: Find the Intervals Over Which the Function is Increasing &
Decreasing 3:26 Example II: Find the Local Maxima & Minima of the Function 19:18 Example III: Find the Local Maxima & Minima of the Function 31:39 Using Derivatives to Graph Functions, Part II 44m 58s Intro 0:00 Using Derivatives to Graph Functions, Part II 0:13 Concave Up & Concave Down 0:14 What Does This Mean in Terms of the Derivative? 6:14 Point of Inflection 8:52 Example I: Graph the Function 13:18 Example II: Function x⁴ - 5x² 19:03 Intervals of Increase & Decrease 19:04 Local Maxes and Mins 25:01 Intervals of Concavity & X-Values for the Points of Inflection 29:18 Intervals of Concavity & Y-Values for the Points of Inflection 34:18 Graphing the Function 40:52 Example Problems I 49m 19s Intro 0:00 Example I: Intervals, Local Maxes & Mins 0:26 Example II: Intervals, Local Maxes & Mins 5:05 Example III: Intervals, Local Maxes & Mins, and Inflection Points 13:40 Example IV: Intervals, Local Maxes & Mins, Inflection Points, and Intervals of Concavity 23:02 Example V: Intervals, Local Maxes & Mins, Inflection Points, and Intervals of Concavity 34:36 Example Problems III 59m 1s Intro 0:00 Example I: Intervals, Local Maxes & Mins, Inflection Points, Intervals of Concavity, and Asymptotes 0:11 Example II: Intervals, Local Maxes & Mins, Inflection Points, Intervals of Concavity, and Asymptotes 21:24 Example III: Cubic Equation f(x) = Ax³ + Bx² + Cx + D 37:56 Example IV: Intervals, Local Maxes & Mins, Inflection Points, Intervals of Concavity, and Asymptotes 46:19 L'Hospital's Rule 30m 9s Intro 0:00 L'Hospital's Rule 0:19 Indeterminate Forms 0:20 L'Hospital's Rule 3:38 Example I: Evaluate the Following Limit Using L'Hospital's Rule 8:50 Example II: Evaluate the Following Limit Using L'Hospital's Rule 10:30 Indeterminate Products 11:54 Indeterminate Products 11:55 Example III: L'Hospital's Rule & Indeterminate Products 13:57 Indeterminate Differences 17:00 Indeterminate Differences 17:01 Example IV: L'Hospital's Rule & Indeterminate Differences 18:57 Indeterminate 
Powers 22:20 Indeterminate Powers 22:21 Example V: L'Hospital's Rule & Indeterminate Powers 25:13 Example Problems for L'Hospital's Rule 38m 14s Intro 0:00 Example I: Evaluate the Following Limit 0:17 Example II: Evaluate the Following Limit 2:45 Example III: Evaluate the Following Limit 6:54 Example IV: Evaluate the Following Limit 8:43 Example V: Evaluate the Following Limit 11:01 Example VI: Evaluate the Following Limit 14:48 Example VII: Evaluate the Following Limit 17:49 Example VIII: Evaluate the Following Limit 20:37 Example IX: Evaluate the Following Limit 25:16 Example X: Evaluate the Following Limit 32:44 Optimization Problems I 49m 59s Intro 0:00 Example I: Find the Dimensions of the Box that Gives the Greatest Volume 1:23 Fundamentals of Optimization Problems 18:08 Fundamental #1 18:33 Fundamental #2 19:09 Fundamental #3 19:19 Fundamental #4 20:59 Fundamental #5 21:55 Fundamental #6 23:44 Example II: Demonstrate that of All Rectangles with a Given Perimeter, the One with the Largest Area is a Square 24:36 Example III: Find the Points on the Ellipse 9x² + y² = 9 Farthest Away from the Point (1,0) 35:13 Example IV: Find the Dimensions of the Rectangle of Largest Area that can be Inscribed in a Circle of Given Radius R 43:10 Optimization Problems II 55m 10s Intro 0:00 Example I: Optimization Problem 0:13 Example II: Optimization Problem 17:34 Example III: Optimization Problem 35:06 Example IV: Revenue, Cost, and Profit 43:22 Newton's Method 30m 22s Intro 0:00 Newton's Method 0:45 Newton's Method 0:46 Example I: Find x2 and x3 13:18 Example II: Use Newton's Method to Approximate 15:48 Example III: Find the Root of the Following Equation to 6 Decimal Places 19:57 Example IV: Use Newton's Method to Find the Coordinates of the Inflection Point 23:11 Section 4: Integrals Antiderivatives 55m 26s Intro 0:00 Antiderivatives 0:23 Definition of an Antiderivative 0:24 Antiderivative Theorem 7:58 Function & Antiderivative 12:10 x^n 12:30 1/x 13:00 e^x 13:08 cos x 
13:18 sin x 14:01 sec² x 14:11 sec x tan x 14:18 1/√(1-x²) 14:26 1/(1+x²) 14:36 -1/√(1-x²) 14:45 Example I: Find the Most General Antiderivative for the Following Functions 15:07 Function 1: f(x) = x³ -6x² + 11x - 9 15:42 Function 2: f(x) = 14√(x) - 27 4√x 19:12 Function 3: f(x) = cos x - 14 sinx 20:53 Function 4: f(x) = (x⁵+2√x )/( x^(4/3) ) 22:10 Function 5: f(x) = (3e^x) - 2/(1+x²) 25:42 Example II: Given the Following, Find the Original Function f(x) 26:37 Function 1: f'(x) = 5x³ - 14x + 24, f(2) = 40 27:55 Function 2: f'(x) = 3 sinx + sec²x, f(π/6) = 5 30:34 Function 3: f''(x) = 8x - cos x, f(1.5) = 12.7, f'(1.5) = 4.2 32:54 Function 4: f''(x) = 5/(√x), f(2) = 15, f'(2) = 7 37:54 Example III: Falling Object 41:58 Problem 1: Find an Equation for the Height of the Ball after t Seconds 42:48 Problem 2: How Long Will It Take for the Ball to Strike the Ground? 48:30 Problem 3: What is the Velocity of the Ball as it Hits the Ground? 49:52 Problem 4: Initial Velocity of 6 m/s, How Long Does It Take to Reach the Ground? 50:46 The Area Under a Curve 51m 3s Intro 0:00 The Area Under a Curve 0:13 Approximate Using Rectangles 0:14 Let's Do This Again, Using 4 Different Rectangles 9:40 Approximate with Rectangles 16:10 Left Endpoint 18:08 Right Endpoint 25:34 Left Endpoint vs.
Right Endpoint 30:58 Number of Rectangles 34:08 True Area 37:36 True Area 37:37 Sigma Notation & Limits 43:32 When You Have to Explicitly Solve Something 47:56 Example Problems for Area Under a Curve 33m 7s Intro 0:00 Example I: Using Left Endpoint & Right Endpoint to Approximate Area Under a Curve 0:10 Example II: Using 5 Rectangles, Approximate the Area Under the Curve 11:32 Example III: Find the True Area by Evaluating the Limit Expression 16:07 Example IV: Find the True Area by Evaluating the Limit Expression 24:52 The Definite Integral 43m 19s Intro 0:00 The Definite Integral 0:08 Definition to Find the Area of a Curve 0:09 Definition of the Definite Integral 4:08 Symbol for Definite Integral 8:45 Regions Below the x-axis 15:18 Associating Definite Integral to a Function 19:38 Integrable Function 27:20 Evaluating the Definite Integral 29:26 Evaluating the Definite Integral 29:27 Properties of the Definite Integral 35:24 Properties of the Definite Integral 35:25 Example Problems for The Definite Integral 32m 14s Intro 0:00 Example I: Approximate the Following Definite Integral Using Midpoints & Sub-intervals 0:11 Example II: Express the Following Limit as a Definite Integral 5:28 Example III: Evaluate the Following Definite Integral Using the Definition 6:28 Example IV: Evaluate the Following Integral Using the Definition 17:06 Example V: Evaluate the Following Definite Integral by Using Areas 25:41 Example VI: Definite Integral 30:36 The Fundamental Theorem of Calculus 24m 17s Intro 0:00 The Fundamental Theorem of Calculus 0:17 Evaluating an Integral 0:18 Lim as x → ∞ 12:19 Taking the Derivative 14:06 Differentiation & Integration are Inverse Processes 15:04 1st Fundamental Theorem of Calculus 20:08 1st Fundamental Theorem of Calculus 20:09 2nd Fundamental Theorem of Calculus 22:30 2nd Fundamental Theorem of Calculus 22:31 Example Problems for the Fundamental Theorem 25m 21s Intro 0:00 Example I: Find the Derivative of the Following Function 0:17 Example II: 
Find the Derivative of the Following Function 1:40 Example III: Find the Derivative of the Following Function 2:32 Example IV: Find the Derivative of the Following Function 5:55 Example V: Evaluate the Following Integral 7:13 Example VI: Evaluate the Following Integral 9:46 Example VII: Evaluate the Following Integral 12:49 Example VIII: Evaluate the Following Integral 13:53 Example IX: Evaluate the Following Graph 15:24 Local Maxes and Mins for g(x) 15:25 Where Does g(x) Achieve Its Absolute Max on [0,8] 20:54 On What Intervals is g(x) Concave Up/Down? 22:20 Sketch a Graph of g(x) 24:34 More Example Problems, Including Net Change Applications 34m 22s Intro 0:00 Example I: Evaluate the Following Indefinite Integral 0:10 Example II: Evaluate the Following Definite Integral 0:59 Example III: Evaluate the Following Integral 2:59 Example IV: Velocity Function 7:46 Part A: Net Displacement 7:47 Part B: Total Distance Travelled 13:15 Example V: Linear Density Function 20:56 Example VI: Acceleration Function 25:10 Part A: Velocity Function at Time t 25:11 Part B: Total Distance Travelled During the Time Interval 28:38 Solving Integrals by Substitution 27m 20s Intro 0:00 Table of Integrals 0:35 Example I: Evaluate the Following Indefinite Integral 2:02 Example II: Evaluate the Following Indefinite Integral 7:27 Example III: Evaluate the Following Indefinite Integral 10:57 Example IV: Evaluate the Following Indefinite Integral 12:33 Example V: Evaluate the Following 14:28 Example VI: Evaluate the Following 16:00 Example VII: Evaluate the Following 19:01 Example VIII: Evaluate the Following 21:49 Example IX: Evaluate the Following 24:34 Section 5: Applications of Integration Areas Between Curves 34m 56s Intro 0:00 Areas Between Two Curves: Function of x 0:08 Graph 1: Area Between f(x) & g(x) 0:09 Graph 2: Area Between f(x) & g(x) 4:07 Is It Possible to Write as a Single Integral?
8:20 Area Between the Curves on [a,b] 9:24 Absolute Value 10:32 Formula for Areas Between Two Curves: Top Function - Bottom Function 17:03 Areas Between Curves: Function of y 17:49 What if We are Given Functions of y? 17:50 Formula for Areas Between Two Curves: Right Function - Left Function 21:48 Finding a & b 22:32 Example Problems for Areas Between Curves 42m 55s Intro 0:00 Instructions for the Example Problems 0:10 Example I: y = 7x - x² and y=x 0:37 Example II: x=y²-3, x=e^((1/2)y), y=-1, and y=2 6:25 Example III: y=(1/x), y=(1/x³), and x=4 12:25 Example IV: y=15-2x² and y=x²-5 15:52 Example V: x=(1/8)y³ and x=6-y² 20:20 Example VI: y=cos x, y=sin(2x), [0,π/2] 24:34 Example VII: y=2x², y=10x², 7x+2y=10 29:51 Example VIII: Velocity vs. Time 33:23 Part A: At 2.187 Minutes, Which Car is Further Ahead? 33:24 Part B: If We Shaded the Region between the Graphs from t=0 to t=2.187, What Would This Shaded Area Represent? 36:32 Part C: At 4 Minutes Which Car is Ahead? 37:11 Part D: At What Time Will the Cars be Side by Side? 37:50 Volumes I: Slices 34m 15s Intro 0:00 Volumes I: Slices 0:18 Rotate the Graph of y=√x about the x-axis 0:19 How can I use Integration to Find the Volume?
3:16 Slice the Solid Like a Loaf of Bread 5:06 Volumes Definition 8:56 Example I: Find the Volume of the Solid Obtained by Rotating the Region Bounded by the Given Functions about the Given Line of Rotation 12:18 Example II: Find the Volume of the Solid Obtained by Rotating the Region Bounded by the Given Functions about the Given Line of Rotation 19:05 Example III: Find the Volume of the Solid Obtained by Rotating the Region Bounded by the Given Functions about the Given Line of Rotation 25:28 Volumes II: Volumes by Washers 51m 43s Intro 0:00 Volumes II: Volumes by Washers 0:11 Rotating Region Bounded by y=x³ & y=x around the x-axis 0:12 Equation for Volumes by Washer 11:14 Process for Solving Volumes by Washer 13:40 Example I: Find the Volume of the Solid Obtained by Rotating the Region Bounded by the Following Functions around the Given Axis 15:58 Example II: Find the Volume of the Solid Obtained by Rotating the Region Bounded by the Following Functions around the Given Axis 25:07 Example III: Find the Volume of the Solid Obtained by Rotating the Region Bounded by the Following Functions around the Given Axis 34:20 Example IV: Find the Volume of the Solid Obtained by Rotating the Region Bounded by the Following Functions around the Given Axis 44:05 Volumes III: Solids That Are Not Solids-of-Revolution 49m 36s Intro 0:00 Solids That Are Not Solids-of-Revolution 0:11 Cross-Section Area Review 0:12 Cross-Sections That Are Not Solids-of-Revolution 7:36 Example I: Find the Volume of a Pyramid Whose Base is a Square of Side-length S, and Whose Height is H 10:54 Example II: Find the Volume of a Solid Whose Cross-sectional Areas Perpendicular to the Base are Equilateral Triangles 20:39 Example III: Find the Volume of a Pyramid Whose Base is an Equilateral Triangle of Side-Length A, and Whose Height is H 29:27 Example IV: Find the Volume of a Solid Whose Base is Given by the Equation 16x² + 4y² = 64 36:47 Example V: Find the Volume of a Solid Whose Base is the Region 
Bounded by the Functions y=3-x² and the x-axis 46:13 Volumes IV: Volumes By Cylindrical Shells 50m 2s Intro 0:00 Volumes by Cylindrical Shells 0:11 Find the Volume of the Following Region 0:12 Volumes by Cylindrical Shells: Integrating Along x 14:12 Volumes by Cylindrical Shells: Integrating Along y 14:40 Volumes by Cylindrical Shells Formulas 16:22 Example I: Using the Method of Cylindrical Shells, Find the Volume of the Solid 18:33 Example II: Using the Method of Cylindrical Shells, Find the Volume of the Solid 25:57 Example III: Using the Method of Cylindrical Shells, Find the Volume of the Solid 31:38 Example IV: Using the Method of Cylindrical Shells, Find the Volume of the Solid 38:44 Example V: Using the Method of Cylindrical Shells, Find the Volume of the Solid 44:03 The Average Value of a Function 32m 13s Intro 0:00 The Average Value of a Function 0:07 Average Value of f(x) 0:08 What if The Domain of f(x) is Not Finite? 2:23 Let's Calculate Average Value for f(x) = x² [2,5] 4:46 Mean Value Theorem for Integrals 9:25 Example I: Find the Average Value of the Given Function Over the Given Interval 14:06 Example II: Find the Average Value of the Given Function Over the Given Interval 18:25 Example III: Find the Number A Such that the Average Value of the Function f(x) = -4x² + 8x + 4 Equals 2 Over the Interval [-1,A] 24:04 Example IV: Find the Average Density of a Rod 27:47 Section 6: Techniques of Integration Integration by Parts 50m 32s Intro 0:00 Integration by Parts 0:08 The Product Rule for Differentiation 0:09 Integrating Both Sides Retains the Equality 0:52 Differential Notation 2:24 Example I: ∫ x cos x dx 5:41 Example II: ∫ x² sin(2x)dx 12:01 Example III: ∫ (e^x) cos x dx 18:19 Example IV: ∫ (sin^-1) (x) dx 23:42 Example V: ∫₁⁵ (lnx)² dx 28:25 Summary 32:31 Tabular Integration 35:08 Case 1 35:52 Example: ∫x³sinx dx 36:39 Case 2 40:28 Example: ∫e^(2x) sin 3x dx 41:14 Trigonometric Integrals I 24m 50s Intro 0:00 Example I: ∫ sin³ (x) dx 1:36 Example II: ∫
cos⁵(x)sin²(x)dx 4:36 Example III: ∫ sin⁴(x)dx 9:23 Summary for Evaluating Trigonometric Integrals of the Following Type: ∫ (sin^m) (x) (cos^p) (x) dx 15:59 #1: Power of sin is Odd 16:00 #2: Power of cos is Odd 16:41 #3: Powers of Both sin and cos are Odd 16:55 #4: Powers of Both sin and cos are Even 17:10 Example IV: ∫ tan⁴ (x) sec⁴ (x) dx 17:34 Example V: ∫ sec⁹(x) tan³(x) dx 20:55 Summary for Evaluating Trigonometric Integrals of the Following Type: ∫ (sec^m) (x) (tan^p) (x) dx 23:31 #1: Power of sec is Odd 23:32 #2: Power of tan is Odd 24:04 #3: Powers of sec is Odd and/or Power of tan is Even 24:18 Trigonometric Integrals II 22m 12s Intro 0:00 Trigonometric Integrals II 0:09 Recall: ∫tanx dx 0:10 Let's Find ∫secx dx 3:23 Example I: ∫ tan⁵ (x) dx 6:23 Example II: ∫ sec⁵ (x) dx 11:41 Summary: How to Deal with Integrals of Different Types 19:04 Identities to Deal with Integrals of Different Types 19:05 Example III: ∫cos(5x)sin(9x)dx 19:57 More Example Problems for Trigonometric Integrals 17m 22s Intro 0:00 Example I: ∫sin²(x)cos⁷(x)dx 0:14 Example II: ∫x sin²(x) dx 3:56 Example III: ∫csc⁴ (x/5)dx 8:39 Example IV: ∫( (1-tan²x)/(sec²x) ) dx 11:17 Example V: ∫ 1 / (sinx-1) dx 13:19 Integration by Partial Fractions I 55m 12s Intro 0:00 Integration by Partial Fractions I 0:11 Recall the Idea of Finding a Common Denominator 0:12 Decomposing a Rational Function to Its Partial Fractions 4:10 2 Types of Rational Function: Improper & Proper 5:16 Improper Rational Function 7:26 Improper Rational Function 7:27 Proper Rational Function 11:16 Proper Rational Function & Partial Fractions 11:17 Linear Factors 14:04 15:02 Case 1: G(x) is a Product of Distinct Linear Factors 17:10 Example I: Integration by Partial Fractions 20:33 Case 2: D(x) is a Product of Linear Factors 40:58 Example II: Integration by Partial Fractions 44:41 Integration by Partial Fractions II 42m 57s Intro 0:00 Case 3: D(x) Contains Irreducible Factors 0:09 Example I: Integration by Partial Fractions 5:19 
Example II: Integration by Partial Fractions 16:22 Case 4: D(x) has Repeated Irreducible Quadratic Factors 27:30 Example III: Integration by Partial Fractions 30:19 Section 7: Differential Equations Introduction to Differential Equations 46m 37s Intro 0:00 Introduction to Differential Equations 0:09 Overview 0:10 Differential Equations Involving Derivatives of y(x) 2:08 Differential Equations Involving Derivatives of y(x) and Function of y(x) 3:23 Equations for an Unknown Number 6:28 What are These Differential Equations Saying? 10:30 Verifying that a Function is a Solution of the Differential Equation 13:00 Verifying that a Function is a Solution of the Differential Equation 13:01 Verify that y(x) = 4e^x + 3x² + 6x + e^π is a Solution of this Differential Equation 17:20 General Solution 22:00 Particular Solution 24:36 Initial Value Problem 27:42 Example I: Verify that a Family of Functions is a Solution of the Differential Equation 32:24 Example II: For What Values of K Does the Function Satisfy the Differential Equation 36:07 Example III: Verify the Solution and Solve the Initial Value Problem 39:47 Separation of Variables 28m 8s Intro 0:00 Separation of Variables 0:28 Separation of Variables 0:29 Example I: Solve the Following Initial Value Problem 8:29 Example II: Solve the Following Initial Value Problem 13:46 Example III: Find an Equation of the Curve 18:48 Population Growth: The Standard & Logistic Equations 51m 7s Intro 0:00 Standard Growth Model 0:30 Definition of the Standard/Natural Growth Model 0:31 Initial Conditions 8:00 The General Solution 9:16 Example I: Standard Growth Model 10:45 Logistic Growth Model 18:33 Logistic Growth Model 18:34 Solving the Initial Value Problem 25:21 What Happens When t → ∞ 36:42 Example II: Solve the Following Initial Value Problem 41:50 Relative Growth Rate 46:56 Relative Growth Rate 46:57 Relative Growth Rate Version for the Standard model 49:04 Slope Fields 24m 37s Intro 0:00 Slope Fields 0:35 Slope Fields 0:36
Graphing the Slope Fields, Part 1 11:12 Graphing the Slope Fields, Part 2 15:37 Graphing the Slope Fields, Part 3 17:25 Steps to Solving Slope Field Problems 20:24 Example I: Draw or Generate the Slope Field of the Differential Equation y'=x cos y 22:38 Section 8: AP Practice Exam AP Practice Exam: Section 1, Part A No Calculator 45m 29s Intro 0:00 0:10 Problem #1 1:26 Problem #2 2:52 Problem #3 4:42 Problem #4 7:03 Problem #5 10:01 Problem #6 13:49 Problem #7 15:16 Problem #8 19:06 Problem #9 23:10 Problem #10 28:10 Problem #11 31:30 Problem #12 33:53 Problem #13 37:45 Problem #14 41:17 AP Practice Exam: Section 1, Part A No Calculator, cont. 41m 55s Intro 0:00 Problem #15 0:22 Problem #16 3:10 Problem #17 5:30 Problem #18 8:03 Problem #19 9:53 Problem #20 14:51 Problem #21 17:30 Problem #22 22:12 Problem #23 25:48 Problem #24 29:57 Problem #25 33:35 Problem #26 35:57 Problem #27 37:57 Problem #28 40:04 AP Practice Exam: Section I, Part B Calculator Allowed 58m 47s Intro 0:00 Problem #1 1:22 Problem #2 4:55 Problem #3 10:49 Problem #4 13:05 Problem #5 14:54 Problem #6 17:25 Problem #7 18:39 Problem #8 20:27 Problem #9 26:48 Problem #10 28:23 Problem #11 34:03 Problem #12 36:25 Problem #13 39:52 Problem #14 43:12 Problem #15 47:18 Problem #16 50:41 Problem #17 56:38 AP Practice Exam: Section II, Part A Calculator Allowed 25m 40s Intro 0:00 Problem #1: Part A 1:14 Problem #1: Part B 4:46 Problem #1: Part C 8:00 Problem #2: Part A 12:24 Problem #2: Part B 16:51 Problem #2: Part C 17:17 Problem #3: Part A 18:16 Problem #3: Part B 19:54 Problem #3: Part C 21:44 Problem #3: Part D 22:57 AP Practice Exam: Section II, Part B No Calculator 31m 20s Intro 0:00 Problem #4: Part A 1:35 Problem #4: Part B 5:54 Problem #4: Part C 8:50 Problem #4: Part D 9:40 Problem #5: Part A 11:26 Problem #5: Part B 13:11 Problem #5: Part C 15:07 Problem #5: Part D 19:57 Problem #6: Part A 22:01 Problem #6: Part B 25:34 Problem #6: Part C 28:54
### Example Problems for Limits at Infinity

Lecture Slides are screen-captured images of important points in the lecture. Students can download and print out these lecture slide images to do practice problems as well as take notes while watching the lecture.

• Intro 0:00
• Example I: Calculating Limits as x Goes to Infinity 0:14
• Example II: Calculating Limits as x Goes to Infinity 3:27
• Example III: Calculating Limits as x Goes to Infinity 8:11
• Example IV: Calculating Limits as x Goes to Infinity 14:20
• Example V: Calculating Limits as x Goes to Infinity 20:07
• Example VI: Calculating Limits as x Goes to Infinity 23:36

### Transcription: Example Problems for Limits at Infinity

Hello, welcome back to www.educator.com, and welcome back to AP Calculus. Today, I thought we would do some more example problems for limits at infinity, that is, as x goes to positive or negative infinity. Let us jump right in.

Evaluate the following limit: the limit as x goes to negative infinity of (x² + 6x − 12)/(x³ + x² + x + 4). Here we have a rational function, and we know how to deal with rational functions: we divide the top and bottom by the highest power of x in the denominator. But the first thing you do is basically plug in; you evaluate the limit to see what happens before you actually have to manipulate. In this
case, when we put negative infinity in, what we are going to end up with is, I will say, infinity over negative infinity. The x² term dominates the top and the x³ term dominates the bottom; a negative number squared is positive, and a negative number cubed is negative. We get ∞/−∞, which does not make sense.

So this is going to be the limit as x goes to negative infinity of (x² + 6x − 12)/x³ over (x³ + x² + x + 4)/x³, where x³ is the greatest power in the denominator. That is our manipulation. This gives us the limit as x approaches negative infinity of (1/x + 6/x² − 12/x³)/(1 + 1/x + 1/x² + 4/x³). Now, when we take the limit as x goes to negative infinity, every term with a power of x in its denominator goes to 0, and you are left with 0/1, which is an actual finite number, 0. Our limit is 0.

Again, it is always nice to confirm this with a graph. This is our function, with a little table of values down here. The table confirms that we start negative, the function crosses 0, and then it comes up and gets closer and closer to 0, which is the limit. As x gets really big, the function gets close to 0, which is what we just calculated analytically.

Evaluate the following limit: the limit as x goes to infinity of (x + 4)/√(8x³ + 1). There are a couple of things to notice; instead of just launching right in, you want to stop and ask yourself some questions. Here the radical is in the denominator, and it is a rational function, so we have to think about these things. The quantity 8x³ + 1 is under the radical sign, so it itself has to be greater than 0. Let us see what the conditions are here. Since 8x³ + 1 is in the denominator, we know that its value cannot equal 0, and since we cannot have the square root of a negative number,
8x³ + 1 has to be greater than 0. As far as the square root goes it could be greater than or equal to 0, but if it were equal to 0 you would have a 0 in the denominator; that is why we have only the strict relation, greater than, and not greater than or equal to, which normally we could use. Let us work this out first: 8x³ + 1 > 0 gives 8x³ > −1, so x³ > −1/8, which implies that x itself has to be greater than −1/2. That is our domain. Here we do not have to worry about going to negative infinity; all we have to worry about is x going to positive infinity, because x cannot go to negative infinity. We saved ourselves a little bit of work.

Our (x + 4)/√(8x³ + 1) is a rational function, so we divide by the greatest power of x in the denominator, which is √x³ = x^(3/2). The numerator becomes (x + 4)/x^(3/2) = 1/x^(1/2) + 4/x^(3/2), and the denominator becomes √(8x³ + 1)/√x³ = √(8 + 1/x³). Now we go ahead and take the limit: the limit as x goes to infinity of (1/x^(1/2) + 4/x^(3/2))/√(8 + 1/x³). As x goes to infinity, each of those terms goes to 0, and we end up with 0/√8, which is a finite number. Our limit is 0. The graph we get shows that as x gets really big, the function gets closer and closer to 0. There we go, we confirmed it graphically.

Evaluate the following limit: the limit of √(9x¹⁰ − x³)/(x⁵ + 100). The best way to handle this is, let us try this. Again, my way is not the only right way; it is just one way, and five different people might come up with five different ways of doing this limit. That is totally fine, that is
the beauty of this. Let us do the following. Take the 9x¹⁰ − x³: as x goes to infinity, only the 9x¹⁰ term is going to dominate, so I am just going to deal with that term. The same thing for the denominator: the x⁵ is the one that I am going to take. Basically, the limit I am going to take is the limit as x approaches infinity of √(9x¹⁰)/x⁵. Essentially, I have just said that the other terms do not matter; the limit is going to be essentially the same.

This equals the limit as x approaches infinity of |3x⁵|/x⁵, because √(9x¹⁰) = |3x⁵|; remember, the square root of something squared is the absolute value of that something. For x greater than 0, in other words x going to positive infinity, |3x⁵|/x⁵ is just 3x⁵/x⁵ = 3, and the limit as x approaches positive infinity of 3 is 3. Now for x less than 0, |3x⁵|/x⁵ is actually −3x⁵/x⁵ = −3, and the limit as x goes to negative infinity is −3. So here you are going to end up with two different asymptotes, 3 and −3.

We finished the problem, but now notice the difference between the following. When I take √(x¹⁰), I get |x⁵|, and the absolute value of x⁵ is either x⁵, when x is greater than 0, or −x⁵, when x is less than 0. The reason is that 5 is an odd power: depending on whether x is positive or negative, x⁵ itself is going to be a positive or negative number. If what is in here is positive, then it is going to go one way; if what is in here is negative, it is
going to go the other way.0785 However, if I take something like x⁸, this is going to give me,0788 we said that the square root of a thing is going to be absolute value of x⁴.0798 Here it does not matter.0804 If x is positive or negative, it is an even power.0806 An even power is always going to be positive number.0812 Therefore, this is just going to be x⁴ because it is an even power.0816 Be careful of that, you have to watch the powers.0823 When you pass from the square root of something, we said the square root of something is the absolute of something.0825 But the power itself is going to make a difference on whether you separate or whether you do not.0831 Let us take a look at the graph of the function that we just did.0838 We said that as x goes to positive infinity, the function approaches 3.0842 As x goes to negative infinity, the function approaches -3.0849 I did not draw out the horizontal asymptotes here.0854 I just want you to see that, but it is essentially what we did.0855 Evaluate the following limit, the limit as x approaches positive infinity of 16x² + x.0863 You might be tempted to do something like this.0875 Let me write this down.0877 We are tempted to say as x goes to positive infinity, this term is going to dominate, which is true.0879 We are tempted to say that we can treat this as √16x² which is equal to 4x.0898 We can just say that 4x - 4x is equal to 0.0909 The limit is x approaches this is just 0.0913 That is not the case.0917 The problem is the x term may contribute to such a point that, what you end up with is infinity – infinity.0919 This will go to infinity, this will go to infinity.0944 But infinity – infinity, we are not exactly sure about the rates at which this goes to infinity and this goes to negative infinity.0946 Because we are not sure about how fast that happens,0955 we do not know if it goes to 0 or if it goes to infinity, or if it goes to some other number in between.0957 This is an indeterminate form, infinity – 
infinity.0964 We have to handle it differently.0967 Let us deal with the function itself, before we actually take the limit.0970 Here when we put the infinity in, we get infinity - infinity which does not make sense.0974 We have to manipulate it.0979 We have 16x² + x under the radical, -4x.0982 I’m going to go ahead and rationalize this out.0989 I’m going to multiply by its conjugate.0991 16x² + x, this is going to be + 4x/ 16x² + x + 4x.0994 When I multiply this out, I end up with 16x² + x.1007 This is going to be -16x²/ √16x² + x/ +4x.1013 Those go away, leaving me with just x/ 16x² + x + 4x.1027 We have a rational function, even though we have a square root in the denominator,1040 let us go ahead and divide by the largest power in the denominator which is what we always do in the denominator.1043 The largest power in the denominator is essentially going to be the √x².1060 It is going to be x, but it is going to be √x².1065 What we have is the following.1069 We are going to have x/x, that is the numerator.1072 We are going to have 16x² + x, all under the radical, + 4x all under x, which is going to equal 1/ √16x² + 4x/ x².1075 This one, I’m going to treat x as √x².1115 I’m going to get 16x² + 4x/ x², + 4x/ x, I’m going to leave this as x.1121 This x for these two, because under the radical I’m just going to treat it as √x².1132 That ends up equaling 1/ √16, this is x.1141 + 1/ x and this is + 4.1155 There we go, now we can take the limit.1160 The limit as x approaches positive infinity of 1/ √16 + 1/ x + 4.1164 As x goes to infinity, the 1/x goes to 0.1175 We are left with 1/ 4 + 4.1181 √16 is 4, you will get 1/8.1184 Sure enough, that is what it looks like.1192 This is our asymptote, this is y = 1/8.1194 This is our origin, as x goes to positive infinity, the function itself gets closer and closer to 1/8.1198 That is the limit.1206 Evaluate the following limit as x goes to infinity, 1 – 5e ⁺x/ 1 + 3e ⁺x.1210 When we put x in, we are going to end up with 
-infinity/ infinity which is in indeterminate form.1220 We have to do something with it.1231 1 – 5e ⁺x/ 1 + 3e ⁺x, we can do the same thing that we did with rational functions.1235 This is going to be the same as, I’m going to divide everything by e ⁺x.1244 1 - 5e ⁺x, the top and the bottom, I mean, / 1 + 3e ⁺x/ e ⁺x.1249 What I end up with is 1/ e ⁺x - 5/ 1/ e ⁺x + 3.1258 I’m going to take the limit of that.1272 The limit as x goes to, I’m going to do positive infinity first.1274 1/ e ⁺x – 5, put the 1/ e ⁺x + 3.1283 As x goes to infinity, e ⁺x goes to infinity that means this thing goes to 0, this thing goes to 0.1291 I'm left with -5/3.1297 As x goes to positive infinity, my function actually goes to -5/3.1299 I have a horizontal asymptote at 5/3, -5/3.1306 Now for x going to negative infinity, I have the following.1311 The limit as x goes to negative infinity of 1 - 5e ⁺x/ 1 + 3e ⁺x.1318 x is a negative number, it is negative infinity.1332 e ⁺negative number is 1/ e ⁺positive number.1336 This is actually equivalent to the limit as x goes to positive infinity of 1 - 5/ e ⁺x.1339 It is e ⁻x is the same as e ⁻x is 1/ e ⁺x.1357 Because we are going to negative infinity, x is negative number.1366 Because it is a negative number, I can just drop it into the denominator and make it a positive number.1372 1 - 5 and then 1 + 3/ e ⁺x.1377 As x goes to infinity, this goes to 0, you are left with 1.1383 Sure enough, there you go.1392 As x goes to negative infinity, we approach y = 1.1394 As x goes to positive infinity, our function approaches y = -5/3.1401 That is it, just nice manipulation.1410 Let us see what we got.1417 Now the whole idea of a reachable finite numerical limit is as x gets closer and closer to a certain number or as x goes to infinity,1418 but f(x) gets closer and closer to a certain number like we just saw -5/3 or 1.1427 This latter number is the limit.1434 The question here is how big would x have to be, in order for the function f(x) = e ⁻x/25 + 21439 
to be less than a distance of 0.001 away from its limit?1449 Closer and closer, closer and closer means we can take it as close as we want.1455 In this case, the tolerance that I'm looking for is 0.001 away from its limit.1459 The first thing we want to do, what is the limit?1467 What is the limit as x goes to infinity of e ⁻x/ 25 + 2.1474 Let us just deal with positive infinity here.1489 This is the same as e ⁻x/25.1507 This is the same as the limit as x goes to positive infinity of 1/ e ⁺x /25 + 2.1512 As x goes to infinity, e ⁺x/25 goes to infinity.1532 This goes to 0.1537 The limit is actually 2.1542 The limit of this function as x goes to infinity is equal to 2.1544 I probably going to need more room.1557 Let me go ahead and go and work in red.1559 Now e ⁻x/25 is always greater than 0.1566 e ^- x/ 25 + 2 is always going to be greater than 2.1576 f(x) which is equal to e ^- x/25 + 2, we said that the limit of this function as x goes to infinity is 2.1592 But we said that the function is always greater than 2 which means that1602 the function is actually approaching 2 from above.1606 It actually looks like this, this is our graph and this is our asymptote at 2.1610 The function is doing this.1619 The limit is 2, that is this dash line right here.1622 We know that the function itself, because this is always greater than 0, the function itself e ⁻x/25 + 2 is always going to be greater than 2.1627 It is always going to be above it.1635 It is above it, it is getting closer to it from above.1637 That is what is happening here physically, getting closer to the 2.1640 That is happening from above.1645 We want to make this distance, that distance right there.1648 We will call it d, we want that distance to be less than 0.001.1651 Our question is asking how far out do we have to go?1659 What x value passed which x value will this distance?1663 This distance between the function and limit be less than 0.001, that is what this is asking.1668 Again, we said we will call 
that d.1680 D is equal to the function itself - the limit.1685 Here was the limit, here was the function, this is the distance right there.1694 We want that distance, that distance is f(x) – l.1698 Here is our origin, this is 0,0.1704 This number - this number gives me the distance between them.1708 It is f(x) – l.1712 We know what f(x) is, that is just e ⁻x/25 + 2.1714 We know what l is, it is -2.1719 These go away, we want this distance which is e ⁻x/25, we want it to be less than 0.001.1724 Now we can solve this equation for x.1737 I’m going to go ahead and take the natlog of both sides.1744 I have -x/25 is less than the natlog of 0.001.1750 I’m going to make this a little more clear here, 0.001.1765 -x is less than 25 × the natlog of 0.001, that means x is greater than -25 × the natlog of 0.001.1771 Whenever I do, the natlog of 0.001 is going to be a negative number.1790 Negative × a negative, when I put this in the calculator, I get x has to be greater than 172.069.1795 The whole idea of the limit is we want to get closer and closer and closer.1810 In this particular case, we specified what we meant by closer and closer.1813 I want it closer than 0.001, the function to be less than not far away from the limit.1817 I knew that the function was approaching it from above.1823 The distance between the function and limit, that is what I want it to be, less than 0.01.1827 The distance between the function and limit is the function - the limit.1832 This distance, that distance right there.1836 I set it and I solve for x.1839 As long as x is bigger than 172.69, f(x) - l is going to be less than 0.001.1841 In other words, the function is going to be less than 0.001 units away from its limit.1853 What if f actually approached it from below?1865 What if we have something like this?1868 Let us say again, this was our limit.1874 This time let us say that the function came from below.1877 Now this is f(x) and this is the origin.1880 The limit is above the function.1884 The 
distance that we are interested in is this distance.1889 The distance between the function and limit.1891 Here the distance is going to equal the length - the function.1894 The length is a bigger number.1902 We want it to be positive.1905 It is going to be l – f(x).1907 Now we combine f(x) - l and l - f(x) as the absolute value of f(x) – l.1910 This distance, and if we are coming from above, this distance,1927 they will be the same if we use the absolute value because distance is a positive number.1930 You cannot have a negative distance.1936 We just combine those two, when we give the definition of a limit by using the absolute value sign.1938 It is that absolute value sign that has confounded and intimidated the students for about 150 years now.1944 Our formal mathematical definition of the limit.1955 We are concerned with the formal mathematical definition of the limit.1975 I told you not to believe about that, I do not believe that these kind of definitions,1977 these precise definitions do not belong in this level.1984 By intuition, we mean the idea of how close can you get.1988 We speak of closer and closer and closer.1992 There is a way of describing that symbolically.1995 What do we mean by closer and closer, that is what I'm going to describe here.1997 Again, I just want you to see it because some of your classes will deal with it,2001 some of your classes would not deal with it.2004 But I wanted you to see the idea and where it actually came from.2006 Our formal mathematical definition of, when we say something like the limit as x approaches infinity of f(x) = l.2011 When we say that some function as x goes to infinity,2023 that the function actually approaches a finite limit, this symbol, here is what it means mathematically.2027 The formal definition is for any choice of a number that we will symbolize with ε,2035 which is going to be always greater than 0, there is an x value somewhere on the real line.2052 Such that the absolute value of f(x) - the 
limit is going to be less than this choice of ε,2065 whenever x is greater then x sub 0.2076 In the previous example, we found our x sub 0, that was our 172.2080 We found an x sub 0.2090 Our ε in that problem, we chose 0.001 as our ε.2093 We wanted to make the difference between the function and the limit less than 0.001.2100 We found an x sub 0 of 172.69.2104 Why do I keep writing 6, that is strange.2110 Any x value that is bigger than 172.69, we will make the difference between the function and the limit less than 0.001.2115 The precise general mathematical definition is, for any choice of the number ε greater than 0,2124 there exists an x sub 0 such that whenever x is bigger than x sub 0,2131 the difference between f(x) and its limit is going to be less than ε.2138 You can see why this stuff is confusing and why is it that it actually does not belong at this level.2142 Again, for those of you that go on in mathematics and taking analysis course, math majors mostly, this is what you will do.2147 You will go back and you actually work with epsilons, deltas, and x sub 0.2154 You will prove why certain things are the way they are.2159 At this level, we just want to be able to accept that those proofs had been done.2162 We want to be able to use it to solve problems.2166 We want to learn how to compute.2169 We want to use it as a tool.2171 We do not want to justify it.2173 Later, you can justify it, as a math major.2174 Now we just want to be able to use it.2177 This idea of closer and closer and closer to a limit is absolutely fine.2179 It is that intuition, if you want to do it.2185 Thank you so much for joining us here at www.educator.com.2188 We will see you next time, bye.2190 OR ### Start Learning Now Our free lessons will get you started (Adobe Flash® required).
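The limits worked out in this lesson are easy to double-check numerically. Here is a short sketch in Python (the lesson itself uses no software; the function names are mine, not from the lecture):

```python
import math

# f(x) = sqrt(16x^2 + x) - 4x approaches 1/8, not 0, as x -> +infinity
def f(x):
    return math.sqrt(16 * x**2 + x) - 4 * x

# g(x) = (1 - 5e^x)/(1 + 3e^x) has two different horizontal asymptotes
def g(x):
    return (1 - 5 * math.exp(x)) / (1 + 3 * math.exp(x))

for big in (10.0, 1e3, 1e6):
    print(f(big))               # values approaching 0.125

print(g(50.0), g(-50.0))        # about -1.6667 (right) and 1.0 (left)

# Tolerance question: e^(-x/25) < 0.001 exactly when x > -25 ln(0.001)
x0 = -25 * math.log(0.001)
print(x0)                       # about 172.69
print(math.exp(-(x0 + 1) / 25) < 0.001)   # inside the tolerance past x0
```

The threshold 172.69 from the lesson falls straight out of `-25 * math.log(0.001)`, and evaluating the exponential just past that point confirms the distance to the limit has dropped below 0.001.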
https://math.stackexchange.com/questions/2999056/counterexample-of-the-converse-of-jensens-inequality
# Counterexample of the converse of Jensen's inequality

Let $\phi$ be a convex function on $(-\infty, \infty)$, $f$ a Lebesgue integrable function over $[0,1]$, and $\phi\circ f$ also integrable over $[0,1]$. Then we have: $$\phi\Big(\int_{0}^{1} f(x)\,dx\Big)\leq\int_{0}^{1}\Big(\phi\circ f(x)\Big)\,dx.$$ I am looking for a counterexample to the converse of this statement. In other words, I am trying to find a $\phi$ which is convex on $\mathbb{R}$ and an $f$ which is Lebesgue integrable (on some set), satisfying $\phi\big(\int_{0}^{1} f(x)\,dx\big)\leq\int_{0}^{1}\big(\phi\circ f(x)\big)\,dx$, but with $\phi\circ f$ not Lebesgue integrable (on some set).

I tried to exploit the non-integrability of $\frac{1}{x}$. However, $\frac{1}{x}$ is not convex on the whole of $\mathbb{R}$, so I tried $\left|\frac{1}{x}\right|$ instead. So define $\phi:=\left|\frac{1}{x}\right|$. We know that $f(x)=x$ is Lebesgue integrable, and if we restrict our case to $\mathbb{R}^{+}$, then $\phi\circ f=\frac{1}{x}$, which is not Lebesgue integrable. Then we have $$\phi\Big(\int_{1}^{2} f(x)\,dx\Big)=\frac{2}{3}<\int_{1}^{2}\Big(\phi\circ f(x)\Big)\,dx=\log(2).$$ Is my argument correct? I feel that I am in a kind of self-contradiction, or that my attempt at a counterexample to the converse of Jensen's inequality was wrong from the beginning. Thank you so much for any ideas!

• Your argument just re-affirms Jensen's inequality, because $\phi \circ f$ is actually integrable on $[1,2]$ in the first place. It's not integrable on $[0,1]$, but in this case you just get that the right side is $\infty$ and Jensen's inequality still holds in this situation. About the only way to break it is to have $\phi \circ f$ be non-integrable for $\infty - \infty$ reasons (so that the inequality simply doesn't make sense), I think. – Ian Nov 15 '18 at 1:39
• @Ian So you mean I can use $\phi=\left|\frac{1}{x}\right|$ and $f(x)=x$ but just restrict the case to $[0,1]$?
– JacobsonRadical Nov 15 '18 at 1:42
• If you look at $[0,1]$, then the inequality still holds, with the right side being infinity. About the only way to really break it is to find a suitable $\phi,f$ so that the integral of $\phi \circ f$ doesn't exist at all. – Ian Nov 15 '18 at 2:52
• @Ian Oh! In fact, I need the inequality to hold. I just need $\phi\circ f$ to not be Lebesgue integrable, and as you pointed out, on $[0,1]$ the integral of $\phi\circ f$ is infinity, so $\phi\circ f$ is not Lebesgue integrable on $[0,1]$. – JacobsonRadical Nov 15 '18 at 3:03

In fact, I need to find a $\phi(x)$ which is convex on the whole of $\mathbb{R}$ and an $f(x)\in L^{1}(\mathbb{R})$ with $\phi\big(\int f\big)\leq\int\phi(f)$, but $\phi(f)\notin L^{1}(\mathbb{R})$. In my post above I used $f(x)=x$, but I realized that this $f(x)$ is not integrable over $\mathbb{R}$, so I modified it a little; here is a valid counterexample.

Consider $\phi(x)=\left|\frac{1}{x}\right|$ and $f(x)=x$ for $x\in[0,1]$, with $f(x)=0$ at all other $x$. It is clear that $\phi(x)$ is convex on $\mathbb{R}$ and $f(x)\in L^{1}(\mathbb{R})$. Now $\phi\big(\int_{0}^{1}f(x)\,dx\big)=2$, but $\int_{0}^{1}\phi(f(x))\,dx=\infty$. Thus we have $\phi\big(\int_{0}^{1}f(x)\,dx\big)<\int_{0}^{1}\phi(f(x))\,dx$, but $\phi(f(x))$ is not Lebesgue integrable over $[0,1]$ by definition, and thus it cannot be integrable over $\mathbb{R}$.
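Not part of the original thread, but the counterexample is easy to sanity-check numerically. Here is a minimal Python sketch (variable names are mine) using the closed form $\int_a^1 \frac{dx}{x} = -\ln a$ instead of numerical quadrature:

```python
import math

# phi(x) = |1/x|, f(x) = x on [0,1] (and 0 elsewhere), as in the answer above.
# Left side of Jensen's inequality is finite:
integral_f = 0.5                          # integral of x over [0, 1]
phi_of_integral = abs(1.0 / integral_f)   # phi(1/2) = 2.0

# Right side: integral of phi(f(x)) = 1/x over [a, 1] is -ln(a) in closed
# form, which grows without bound as a -> 0+, so phi∘f is not Lebesgue
# integrable on [0, 1].
for a in (1e-2, 1e-6, 1e-12):
    print(math.log(1.0 / a))   # 4.60..., 13.8..., 27.6... (unbounded)

print(phi_of_integral)         # 2.0
```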
http://www.reference.com/browse/Degree+(field+theory)
# Quantum field theory

In quantum field theory (QFT) the forces between particles are mediated by other particles. For instance, the electromagnetic force between two electrons is caused by an exchange of photons. But quantum field theory applies to all fundamental forces: intermediate vector bosons mediate the weak force, gluons mediate the strong force, and gravitons mediate the gravitational force. These force-carrying particles are virtual particles and, by definition, cannot be detected while carrying the force, because such detection would imply that the force is not being carried. In QFT photons are not thought of as 'little billiard balls'; they are considered to be field quanta, necessarily chunked ripples in a field that 'look like' particles. Fermions, like the electron, can also be described as ripples in a field, where each kind of fermion has its own field. In summary, the classical visualisation of "everything is particles and fields" resolves, in quantum field theory, into "everything is particles", which then resolves into "everything is fields". In the end, particles are regarded as excited states of a field (field quanta).

Quantum field theory provides a theoretical framework for constructing quantum mechanical models of systems classically described by fields, or of many-body systems. It is widely used in particle physics and condensed matter physics. Most theories in modern particle physics, including the Standard Model of elementary particles and their interactions, are formulated as relativistic quantum field theories. In condensed matter physics, quantum field theories are used in many circumstances, especially those where the number of particles is allowed to fluctuate; for example, in the BCS theory of superconductivity.

## History

Quantum field theory originated in the 1920s from the problem of creating a quantum mechanical theory of the electromagnetic field.
In 1926, Max Born, Pascual Jordan, and Werner Heisenberg constructed such a theory by expressing the field's internal degrees of freedom as an infinite set of harmonic oscillators and by employing the usual procedure for quantizing those oscillators (canonical quantization). This theory assumed that no electric charges or currents were present and today would be called a free field theory. The first reasonably complete theory of quantum electrodynamics, which included both the electromagnetic field and electrically charged matter (specifically, electrons) as quantum mechanical objects, was created by Paul Dirac in 1927. This quantum field theory could be used to model important processes such as the emission of a photon by an electron dropping into a quantum state of lower energy, a process in which the number of particles changes — one atom in the initial state becomes an atom plus a photon in the final state. It is now understood that the ability to describe such processes is one of the most important features of quantum field theory. It was evident from the beginning that a proper quantum treatment of the electromagnetic field had to somehow incorporate Einstein's relativity theory, which had after all grown out of the study of classical electromagnetism. This need to put together relativity and quantum mechanics was the second major motivation in the development of quantum field theory. Pascual Jordan and Wolfgang Pauli showed in 1928 that quantum fields could be made to behave in the way predicted by special relativity during coordinate transformations (specifically, they showed that the field commutators were Lorentz invariant), and in 1933 Niels Bohr and Leon Rosenfeld showed that this result could be interpreted as a limitation on the ability to measure fields at space-like separations, exactly as required by relativity. 
A further boost for quantum field theory came with the discovery of the Dirac equation, a single-particle equation obeying both relativity and quantum mechanics, when it was shown that several of its undesirable properties (such as negative-energy states) could be eliminated by reformulating the Dirac equation as a quantum field theory. This work was performed by Wendell Furry, Robert Oppenheimer, Vladimir Fock, and others. The third thread in the development of quantum field theory was the need to handle the statistics of many-particle systems consistently and with ease. In 1927, Jordan tried to extend the canonical quantization of fields to the many-body wavefunctions of identical particles, a procedure that is sometimes called second quantization. In 1928, Jordan and Eugene Wigner found that the quantum field describing electrons, or other fermions, had to be expanded using anti-commuting creation and annihilation operators due to the Pauli exclusion principle. This thread of development was incorporated into many-body theory, and strongly influenced condensed matter physics and nuclear physics. Despite its early successes, quantum field theory was plagued by several serious theoretical difficulties. Many seemingly-innocuous physical quantities, such as the energy shift of electron states due to the presence of the electromagnetic field, gave infinity — a nonsensical result — when computed using quantum field theory. This "divergence problem" was solved during the 1940s by Bethe, Tomonaga, Schwinger, Feynman, and Dyson, through the procedure known as renormalization. This phase of development culminated with the construction of the modern theory of quantum electrodynamics (QED). Beginning in the 1950s with the work of Yang and Mills, QED was generalized to a class of quantum field theories known as gauge theories. 
The 1960s and 1970s saw the formulation of a gauge theory now known as the Standard Model of particle physics, which describes all known elementary particles and the interactions between them. The weak interaction part of the standard model was formulated by Sheldon Glashow, with the Higgs mechanism added by Steven Weinberg and Abdus Salam. The theory was shown to be consistent by Gerardus 't Hooft and Martinus Veltman. Also during the 1970s, parallel developments in the study of phase transitions in condensed matter physics led Leo Kadanoff, Michael Fisher and Kenneth Wilson (extending work of Ernst Stueckelberg, Andre Peterman, Murray Gell-Mann and Francis Low) to a set of ideas and methods known as the renormalization group. By providing a better physical understanding of the renormalization procedure invented in the 1940s, the renormalization group sparked what has been called the "grand synthesis" of theoretical physics, uniting the quantum field theoretical techniques used in particle physics and condensed matter physics into a single theoretical framework. The study of quantum field theory is alive and flourishing, as are applications of this method to many physical problems. It remains one of the most vital areas of theoretical physics today, providing a common language to many branches of physics. ## Principles of quantum field theory ### Classical fields and quantum fields Quantum mechanics, in its most general formulation, is a theory of abstract operators (observables) acting on an abstract state space (Hilbert space), where the observables represent physically-observable quantities and the state space represents the possible states of the system under study. Furthermore, each observable corresponds, in a technical sense, to the classical idea of a degree of freedom. 
For instance, the fundamental observables associated with the motion of a single quantum mechanical particle are the position and momentum operators $\hat{x}$ and $\hat{p}$. Ordinary quantum mechanics deals with systems such as this, which possess a small set of degrees of freedom. (It is important to note, at this point, that this article does not use the word "particle" in the context of wave–particle duality. In quantum field theory, "particle" is a generic term for any discrete quantum mechanical entity, such as an electron, which can behave like classical particles or classical waves under different experimental conditions.)

A quantum field is a quantum mechanical system containing a large, and possibly infinite, number of degrees of freedom. This is not as exotic a situation as one might think. A classical field contains a set of degrees of freedom at each point of space; for instance, the classical electromagnetic field defines two vectors, the electric field and the magnetic field, that can in principle take on distinct values for each position $r$. When the field as a whole is considered as a quantum mechanical system, its observables form an infinite (in fact uncountable) set, because $r$ is continuous.

Furthermore, the degrees of freedom in a quantum field are arranged in "repeated" sets. For example, the degrees of freedom in an electromagnetic field can be grouped according to the position $r$, with exactly two vectors for each $r$. Note that $r$ is an ordinary number that "indexes" the observables; it is not to be confused with the position operator $\hat{x}$ encountered in ordinary quantum mechanics, which is an observable. (Thus, ordinary quantum mechanics is sometimes referred to as "zero-dimensional quantum field theory", because it contains only a single set of observables.)
It is also important to note that there is nothing special about $r$ because, as it turns out, there is generally more than one way of indexing the degrees of freedom in the field.

In the following sections, we will show how these ideas can be used to construct a quantum mechanical theory with the desired properties. We will begin by discussing single-particle quantum mechanics and the associated theory of many-particle quantum mechanics. Then, by finding a way to index the degrees of freedom in the many-particle problem, we will construct a quantum field and study its implications.

### Single-particle and many-particle quantum mechanics

In ordinary quantum mechanics, the time-dependent Schrödinger equation describing the time evolution of the quantum state of a single non-relativistic particle is

$$\left[\frac{|\mathbf{p}|^2}{2m} + V(\mathbf{r})\right] |\psi(t)\rangle = i \hbar \frac{\partial}{\partial t} |\psi(t)\rangle,$$

where $m$ is the particle's mass, $V$ is the applied potential, and $|\psi\rangle$ denotes the quantum state (we are using bra–ket notation).

We wish to consider how this problem generalizes to $N$ particles. There are two motivations for studying the many-particle problem. The first is a straightforward need in condensed matter physics, where typically the number of particles is on the order of Avogadro's number ($6.0221415 \times 10^{23}$). The second motivation for the many-particle problem arises from particle physics and the desire to incorporate the effects of special relativity. If one attempts to include the relativistic rest energy in the above equation, the result is either the Klein–Gordon equation or the Dirac equation. However, these equations have many unsatisfactory qualities; for instance, they possess energy eigenvalues which extend to $-\infty$, so that there seems to be no easy definition of a ground state.
It turns out that such inconsistencies arise from neglecting the possibility of dynamically creating or destroying particles, which is a crucial aspect of relativity. Einstein's famous mass–energy relation predicts that sufficiently massive particles can decay into several lighter particles, and sufficiently energetic particles can combine to form massive particles. For example, an electron and a positron can annihilate each other to create photons. Thus, a consistent relativistic quantum theory must be formulated as a many-particle theory. Furthermore, we will assume that the $N$ particles are indistinguishable. As described in the article on identical particles, this implies that the state of the entire system must be either symmetric (bosons) or antisymmetric (fermions) when the coordinates of its constituent particles are exchanged. These multi-particle states are rather complicated to write. For example, the general quantum state of a system of $N$ bosons is written as

$$|\phi_1 \cdots \phi_N \rangle = \sqrt{\frac{\prod_j N_j!}{N!}} \sum_{p \in S_N} |\phi_{p(1)}\rangle \cdots |\phi_{p(N)}\rangle,$$

where $|\phi_i\rangle$ are the single-particle states, $N_j$ is the number of particles occupying state $j$, and the sum is taken over all possible permutations $p$ acting on $N$ elements. In general, this is a sum of $N!$ ($N$ factorial) distinct terms, which quickly becomes unmanageable as $N$ increases. The way to simplify this problem is to turn it into a quantum field theory.

### Second quantization

In this section, we will describe a method for constructing a quantum field theory called second quantization. This basically involves choosing a way to index the quantum mechanical degrees of freedom in the space of multiple identical-particle states.
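The permutation sum in the symmetric boson state can be made concrete for a small system. The sketch below (an illustrative numerical aside, not part of the article) builds the normalized symmetric state of two bosons as an explicit vector and confirms it is unchanged under particle exchange:

```python
import itertools
import numpy as np

def symmetrize(single_particle_states):
    """Build the normalized symmetric (bosonic) N-particle state by
    summing the tensor product over all permutations of the particles."""
    N = len(single_particle_states)
    dim = len(single_particle_states[0])
    state = np.zeros(dim ** N)
    for p in itertools.permutations(range(N)):
        term = np.array([1.0])
        for i in p:
            term = np.kron(term, single_particle_states[i])
        state += term
    return state / np.linalg.norm(state)

# Two bosons: one in |phi_1>, one in |phi_2> (orthonormal basis vectors).
phi1 = np.array([1.0, 0.0])
phi2 = np.array([0.0, 1.0])
psi = symmetrize([phi1, phi2])
# psi is (|12> + |21>)/sqrt(2): the components for "particle 1 in state 1,
# particle 2 in state 2" and the exchanged configuration are equal.
```

For $N$ bosons the loop visits all $N!$ permutations, which is exactly why this representation becomes unmanageable and the occupation-number representation of the next section is preferred.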
It is based on the Hamiltonian formulation of quantum mechanics; several other approaches exist, such as the Feynman path integral, which uses a Lagrangian formulation. For an overview, see the article on quantization.

#### Second quantization of bosons

For simplicity, we will first discuss second quantization for bosons, which form perfectly symmetric quantum states. Let us denote the mutually orthogonal single-particle states by $|\phi_1\rangle, |\phi_2\rangle, |\phi_3\rangle,$ and so on. For example, the 3-particle state with one particle in state $|\phi_1\rangle$ and two in state $|\phi_2\rangle$ is

$$\frac{1}{\sqrt{3}} \left[ |\phi_1\rangle |\phi_2\rangle |\phi_2\rangle + |\phi_2\rangle |\phi_1\rangle |\phi_2\rangle + |\phi_2\rangle |\phi_2\rangle |\phi_1\rangle \right].$$

The first step in second quantization is to express such quantum states in terms of occupation numbers, by listing the number of particles occupying each of the single-particle states $|\phi_1\rangle, |\phi_2\rangle,$ etc. This is simply another way of labelling the states. For instance, the above 3-particle state is denoted as $|1, 2, 0, 0, 0, \cdots \rangle.$ The next step is to expand the $N$-particle state space to include the state spaces for all possible values of $N$. This extended state space, known as a Fock space, is composed of the state space of a system with no particles (the so-called vacuum state), plus the state space of a 1-particle system, plus the state space of a 2-particle system, and so forth. It is easy to see that there is a one-to-one correspondence between the occupation number representation and valid boson states in the Fock space. At this point, the quantum mechanical system has become a quantum field in the sense we described above. The field's elementary degrees of freedom are the occupation numbers, and each occupation number is indexed by a number $j$, indicating which of the single-particle states $|\phi_1\rangle, |\phi_2\rangle, \cdots, |\phi_j\rangle, \cdots$ it refers to.
The properties of this quantum field can be explored by defining creation and annihilation operators, which add and subtract particles. They are analogous to "ladder operators" in the quantum harmonic oscillator problem, which added and subtracted energy quanta. However, these operators literally create and annihilate particles of a given quantum state. The bosonic annihilation operator $a_2$ and creation operator $a_2^\dagger$ have the following effects:

$$a_2 | N_1, N_2, N_3, \cdots \rangle = \sqrt{N_2} \, | N_1, (N_2 - 1), N_3, \cdots \rangle,$$
$$a_2^\dagger | N_1, N_2, N_3, \cdots \rangle = \sqrt{N_2 + 1} \, | N_1, (N_2 + 1), N_3, \cdots \rangle.$$

It can be shown that these are operators in the usual quantum mechanical sense, i.e. linear operators acting on the Fock space. Furthermore, they are indeed Hermitian conjugates, which justifies the way we have written them. They can be shown to obey the commutation relation

$$[a_i, a_j] = 0, \qquad [a_i^\dagger, a_j^\dagger] = 0, \qquad [a_i, a_j^\dagger] = \delta_{ij},$$

where $\delta_{ij}$ stands for the Kronecker delta. These are precisely the relations obeyed by the ladder operators for an infinite set of independent quantum harmonic oscillators, one for each single-particle state. Adding or removing bosons from each state is therefore analogous to exciting or de-exciting a quantum of energy in a harmonic oscillator. The Hamiltonian of the quantum field (which, through the Schrödinger equation, determines its dynamics) can be written in terms of creation and annihilation operators. For instance, the Hamiltonian of a field of free (non-interacting) bosons is

$$H = \sum_k E_k \, a^\dagger_k \, a_k,$$

where $E_k$ is the energy of the $k$-th single-particle energy eigenstate. Note that $a_k^\dagger a_k |\cdots, N_k, \cdots \rangle = N_k |\cdots, N_k, \cdots \rangle$.

#### Second quantization of fermions

It turns out that a different definition of creation and annihilation must be used for describing fermions.
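The ladder-operator algebra is easy to check numerically on a Fock space truncated at a maximum occupation $n_{\max}$ (an illustrative sketch, not part of the article's formalism; the truncation spoils the commutator only in the highest-occupation state):

```python
import numpy as np

def annihilation(nmax):
    """Matrix of the bosonic annihilation operator a on the truncated
    Fock space spanned by |0>, |1>, ..., |nmax>."""
    a = np.zeros((nmax + 1, nmax + 1))
    for n in range(1, nmax + 1):
        a[n - 1, n] = np.sqrt(n)     # a|n> = sqrt(n) |n-1>
    return a

nmax = 20
a = annihilation(nmax)
adag = a.T                           # creation operator: Hermitian conjugate
number = adag @ a                    # a†a is the number operator: a†a|n> = n|n>
comm = a @ adag - adag @ a           # [a, a†], equals 1 except at the cutoff
```

The diagonal of `number` is exactly $0, 1, \dots, n_{\max}$, and `comm` reproduces the identity everywhere except the artificial last state, where the cutoff removes the $|n_{\max}+1\rangle$ contribution.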
According to the Pauli exclusion principle, fermions cannot share quantum states, so their occupation numbers $N_i$ can only take on the value 0 or 1. The fermionic annihilation operators $c$ and creation operators $c^\dagger$ are defined by

$$c_j | N_1, N_2, \cdots, N_j = 0, \cdots \rangle = 0,$$
$$c_j | N_1, N_2, \cdots, N_j = 1, \cdots \rangle = (-1)^{(N_1 + \cdots + N_{j-1})} | N_1, N_2, \cdots, N_j = 0, \cdots \rangle,$$
$$c_j^\dagger | N_1, N_2, \cdots, N_j = 0, \cdots \rangle = (-1)^{(N_1 + \cdots + N_{j-1})} | N_1, N_2, \cdots, N_j = 1, \cdots \rangle,$$
$$c_j^\dagger | N_1, N_2, \cdots, N_j = 1, \cdots \rangle = 0.$$

These obey an anticommutation relation:

$$\{c_i, c_j\} = 0, \qquad \{c_i^\dagger, c_j^\dagger\} = 0, \qquad \{c_i, c_j^\dagger\} = \delta_{ij}.$$

One may notice from this that applying a fermionic creation operator twice gives zero, so it is impossible for the particles to share single-particle states, in accordance with the exclusion principle.

#### Field operators

We have previously mentioned that there can be more than one way of indexing the degrees of freedom in a quantum field. Second quantization indexes the field by enumerating the single-particle quantum states. However, as we have discussed, it is more natural to think about a "field", such as the electromagnetic field, as a set of degrees of freedom indexed by position. To this end, we can define field operators that create or destroy a particle at a particular point in space. In particle physics, these operators turn out to be more convenient to work with, because they make it easier to formulate theories that satisfy the demands of relativity. Single-particle states are usually enumerated in terms of their momenta (as in the particle in a box problem). We can construct field operators by applying the Fourier transform to the creation and annihilation operators for these states.
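The fermionic sign rules act directly on occupation-number tuples, which makes them easy to implement and test (a toy sketch, not from the article; the helper names are ours):

```python
def c(j, occ):
    """Fermionic annihilation operator c_j acting on a basis state given
    as a tuple of occupation numbers. Returns (sign, new_occ), or None
    if the operator annihilates the state."""
    if occ[j] == 0:
        return None
    sign = (-1) ** sum(occ[:j])      # (-1)^(N_1 + ... + N_{j-1})
    new = list(occ)
    new[j] = 0
    return sign, tuple(new)

def cdag(j, occ):
    """Fermionic creation operator c_j^dagger."""
    if occ[j] == 1:
        return None                  # Pauli exclusion: (c_j^dagger)^2 = 0
    sign = (-1) ** sum(occ[:j])
    new = list(occ)
    new[j] = 1
    return sign, tuple(new)

# Creating the same fermion twice gives zero, as the exclusion principle demands:
state = (1, 0, 1)
once = cdag(1, state)                # picks up sign (-1)^1 from the filled mode 0
twice = cdag(1, once[1])             # None: the mode is already occupied
```

The `None` return on double creation is the operator statement $(c_j^\dagger)^2 = 0$, i.e. the exclusion principle built into the algebra.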
For example, the bosonic field annihilation operator $\phi(\mathbf{r})$ is

$$\phi(\mathbf{r}) \ \stackrel{\mathrm{def}}{=}\ \sum_{j} e^{i\mathbf{k}_j \cdot \mathbf{r}} \, a_{j}.$$

The bosonic field operators obey the commutation relation

$$[\phi(\mathbf{r}), \phi(\mathbf{r}')] = 0, \qquad [\phi^\dagger(\mathbf{r}), \phi^\dagger(\mathbf{r}')] = 0, \qquad [\phi(\mathbf{r}), \phi^\dagger(\mathbf{r}')] = \delta^3(\mathbf{r} - \mathbf{r}'),$$

where $\delta$ stands for the Dirac delta function. As before, the fermionic relations are the same, with the commutators replaced by anticommutators. It should be emphasized that the field operator is not the same thing as a single-particle wavefunction. The former is an operator acting on the Fock space, and the latter is just a scalar field. However, they are closely related, and are indeed commonly denoted with the same symbol. If we have a Hamiltonian with a space representation, say

$$H = -\frac{\hbar^2}{2m} \sum_i \nabla_i^2 + \sum_{i < j} U(|\mathbf{r}_i - \mathbf{r}_j|),$$

where the indices $i$ and $j$ run over all particles, then the field theory Hamiltonian is

$$H = -\frac{\hbar^2}{2m} \int d^3r \; \phi^\dagger(\mathbf{r}) \nabla^2 \phi(\mathbf{r}) + \int d^3r \int d^3r' \; \phi^\dagger(\mathbf{r}) \phi^\dagger(\mathbf{r}') \, U(|\mathbf{r} - \mathbf{r}'|) \, \phi(\mathbf{r}') \phi(\mathbf{r}).$$

This looks remarkably like an expression for the expectation value of the energy, with $\phi$ playing the role of the wavefunction. This relationship between the field operators and wavefunctions makes it very easy to formulate field theories starting from space-projected Hamiltonians.
### Implications of quantum field theory

#### Unification of fields and particles

The "second quantization" procedure that we have outlined in the previous section takes a set of single-particle quantum states as a starting point. Sometimes, it is impossible to define such single-particle states, and one must proceed directly to quantum field theory. For example, a quantum theory of the electromagnetic field must be a quantum field theory, because it is impossible (for various reasons) to define a wavefunction for a single photon. In such situations, the quantum field theory can be constructed by examining the mechanical properties of the classical field and guessing the corresponding quantum theory. The quantum field theories obtained in this way have the same properties as those obtained using second quantization, such as well-defined creation and annihilation operators obeying commutation or anticommutation relations. Quantum field theory thus provides a unified framework for describing "field-like" objects (such as the electromagnetic field, whose excitations are photons) and "particle-like" objects (such as electrons, which are treated as excitations of an underlying electron field).

#### Physical meaning of particle indistinguishability

The second quantization procedure relies crucially on the particles being identical. We would not have been able to construct a quantum field theory from a distinguishable many-particle system, because there would have been no way of separating and indexing the degrees of freedom. Many physicists prefer to take the converse interpretation, which is that quantum field theory explains what identical particles are. In ordinary quantum mechanics, there is not much theoretical motivation for using symmetric (bosonic) or antisymmetric (fermionic) states, and the need for such states is simply regarded as an empirical fact.
From the point of view of quantum field theory, particles are identical if and only if they are excitations of the same underlying quantum field. Thus, the question "why are all electrons identical?" arises from mistakenly regarding individual electrons as fundamental objects, when in fact it is only the electron field that is fundamental.

#### Particle conservation and non-conservation

During second quantization, we started with a Hamiltonian and state space describing a fixed number of particles ($N$), and ended with a Hamiltonian and state space for an arbitrary number of particles. Of course, in many common situations $N$ is an important and perfectly well-defined quantity, e.g. if we are describing a gas of atoms sealed in a box. From the point of view of quantum field theory, such situations are described by quantum states that are eigenstates of the number operator $\hat{N}$, which measures the total number of particles present. As with any quantum mechanical observable, $\hat{N}$ is conserved if it commutes with the Hamiltonian. In that case, the quantum state is trapped in the $N$-particle subspace of the total Fock space, and the situation could equally well be described by ordinary $N$-particle quantum mechanics. For example, we can see that the free-boson Hamiltonian described above conserves particle number. Whenever the Hamiltonian operates on a state, each particle destroyed by an annihilation operator $a_k$ is immediately put back by the creation operator $a_k^\dagger$. On the other hand, it is possible, and indeed common, to encounter quantum states that are not eigenstates of $\hat{N}$, which do not have well-defined particle numbers. Such states are difficult or impossible to handle using ordinary quantum mechanics, but they can be easily described in quantum field theory as quantum superpositions of states having different values of $N$.
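The conservation of particle number by the free-boson Hamiltonian can be verified directly on a small truncated two-mode Fock space (an illustrative numerical sketch; the mode energies below are arbitrary):

```python
import numpy as np

def ladder(nmax):
    """Truncated bosonic annihilation operator on |0>, ..., |nmax>."""
    a = np.zeros((nmax + 1, nmax + 1))
    for n in range(1, nmax + 1):
        a[n - 1, n] = np.sqrt(n)
    return a

nmax = 5
a = ladder(nmax)
I = np.eye(nmax + 1)

# Two boson modes acting on the tensor-product Fock space.
a1, a2 = np.kron(a, I), np.kron(I, a)
E1, E2 = 1.0, 2.3                        # arbitrary single-particle energies
H = E1 * a1.T @ a1 + E2 * a2.T @ a2      # free-boson Hamiltonian  sum_k E_k a†a
N = a1.T @ a1 + a2.T @ a2                # total number operator

commutator = H @ N - N @ H               # vanishes: particle number conserved
```

Because every term in $H$ destroys one boson and immediately recreates one, $[H, \hat{N}] = 0$ holds exactly, even on the truncated space.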
For example, suppose we have a bosonic field whose particles can be created or destroyed by interactions with a fermionic field. The Hamiltonian of the combined system would be given by the Hamiltonians of the free boson and free fermion fields, plus a "potential energy" term such as

$$H_I = \sum_{k,q} V_q \, (a_q + a_{-q}^\dagger) \, c_{k+q}^\dagger c_k,$$

where $a_q^\dagger$ and $a_q$ denote the bosonic creation and annihilation operators, $c_k^\dagger$ and $c_k$ denote the fermionic creation and annihilation operators, and $V_q$ is a parameter that describes the strength of the interaction. This "interaction term" describes processes in which a fermion in state $k$ either absorbs or emits a boson, thereby being kicked into a different eigenstate $k+q$. (In fact, this type of Hamiltonian is used to describe the interaction between conduction electrons and phonons in metals. The interaction between electrons and photons is treated in a similar way, but is a little more complicated because the role of spin must be taken into account.) One thing to notice here is that even if we start out with a fixed number of bosons, we will typically end up with a superposition of states with different numbers of bosons at later times. The number of fermions, however, is conserved in this case. In condensed matter physics, states with ill-defined particle numbers are particularly important for describing the various superfluids. Many of the defining characteristics of a superfluid arise from the notion that its quantum state is a superposition of states with different particle numbers.

### Axiomatic approaches

The preceding description of quantum field theory follows the spirit in which most physicists approach the subject. However, it is not mathematically rigorous. Over the past several decades, there have been many attempts to put quantum field theory on a firm mathematical footing by formulating a set of axioms for it.
These attempts fall into two broad classes. The first class of axioms, first proposed during the 1950s, includes the Wightman, Osterwalder–Schrader, and Haag–Kastler systems. They attempted to formalize the physicists' notion of an "operator-valued field" within the context of functional analysis, and enjoyed limited success. It was possible to prove that any quantum field theory satisfying these axioms satisfied certain general theorems, such as the spin-statistics theorem and the CPT theorem. Unfortunately, it proved extraordinarily difficult to show that any realistic field theory, including the Standard Model, satisfied these axioms. Most of the theories that could be treated with these analytic axioms were physically trivial, being restricted to low dimensions and lacking interesting dynamics. The construction of theories satisfying one of these sets of axioms falls in the field of constructive quantum field theory. Important work was done in this area in the 1970s by Segal, Glimm, Jaffe and others. During the 1980s, a second set of axioms based on geometric ideas was proposed. This line of investigation, which restricts its attention to a particular class of quantum field theories known as topological quantum field theories, is associated most closely with Michael Atiyah and Graeme Segal, and was notably expanded upon by Edward Witten, Richard Borcherds, and Maxim Kontsevich. However, most physically relevant quantum field theories, such as the Standard Model, are not topological quantum field theories; the quantum field theory of the fractional quantum Hall effect is a notable exception. The main impact of axiomatic topological quantum field theory has been on mathematics, with important applications in representation theory, algebraic topology, and differential geometry. Finding the proper axioms for quantum field theory is still an open and difficult problem in mathematics.
One of the Millennium Prize Problems, proving the existence of a mass gap in Yang–Mills theory, is linked to this issue.

## Phenomena associated with quantum field theory

In the previous part of the article, we described the most general properties of quantum field theories. Some of the quantum field theories studied in various fields of theoretical physics possess additional special properties, such as renormalizability, gauge symmetry, and supersymmetry. These are described in the following sections.

### Renormalization

Early in the history of quantum field theory, it was found that many seemingly innocuous calculations, such as the perturbative shift in the energy of an electron due to the presence of the electromagnetic field, give infinite results. The reason is that the perturbation theory for the shift in an energy involves a sum over all other energy levels, and there are infinitely many levels at short distances which each give a finite contribution. Many of these problems are related to failures in classical electrodynamics that were identified but unsolved in the 19th century, and they basically stem from the fact that many of the supposedly "intrinsic" properties of an electron are tied to the electromagnetic field which it carries around with it. The energy carried by a single electron (its self-energy) is not simply the bare value, but also includes the energy contained in its electromagnetic field, its attendant cloud of photons. The energy in a field of a spherical source diverges in both classical and quantum mechanics, but as discovered by Weisskopf, in quantum mechanics the divergence is much milder, going only as the logarithm of the radius of the sphere.
The solution to the problem, presciently suggested by Stueckelberg, independently by Bethe after the crucial experiment by Lamb, implemented at one loop by Schwinger, and systematically extended to all loops by Feynman and Dyson, with converging work by Tomonaga in isolated postwar Japan, is called renormalization. The technique of renormalization recognizes that the problem is essentially purely mathematical, that extremely short distances are at fault. In order to define a theory on a continuum, first place a cutoff on the fields, by postulating that quanta cannot have energies above some extremely high value. This has the effect of replacing continuous space by a structure where very short wavelengths do not exist, as on a lattice. Lattices break rotational symmetry, and one of the crucial contributions made by Feynman, Pauli and Villars, and modernized by 't Hooft and Veltman, is a symmetry preserving cutoff for perturbation theory. There is no known symmetrical cutoff outside of perturbation theory, so for rigorous or numerical work people often use an actual lattice. On a lattice, every quantity is finite but depends on the spacing. When taking the limit of zero spacing, we make sure that the physically-observable quantities like the observed electron mass stay fixed, which means that the constants in the Lagrangian defining the theory depend on the spacing. Hopefully, by allowing the constants to vary with the lattice spacing, all the results at long distances become insensitive to the lattice, defining a continuum limit. The renormalization procedure only works for a certain class of quantum field theories, called renormalizable quantum field theories. A theory is perturbatively renormalizable when the constants in the Lagrangian only diverge at worst as logarithms of the lattice spacing for very short spacings. 
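The logic of trading a cutoff-dependent bare quantity for cutoff-independent observables can be illustrated with a deliberately crude toy model (a logarithmically divergent sum standing in for a one-loop integral; this is an analogy of our own, not an actual QED calculation):

```python
import math

def bare_sum(cutoff):
    """Toy 'self-energy': a logarithmically divergent sum over modes,
    regulated by an explicit cutoff (a crude stand-in for the lattice)."""
    return sum(1.0 / n for n in range(1, cutoff + 1))

# The bare quantity grows without bound as the cutoff is removed ...
for cutoff in (10**3, 10**4, 10**5):
    print(cutoff, bare_sum(cutoff))          # grows like log(cutoff)

# ... but a "renormalized" quantity -- here, the change of the sum between
# two scales -- approaches a finite, cutoff-independent limit:
for cutoff in (10**3, 10**4, 10**5):
    print(bare_sum(2 * cutoff) - bare_sum(cutoff))   # tends to log(2)
```

Renormalization exploits exactly this structure: the divergence is absorbed into cutoff-dependent "bare" constants, while differences that correspond to observables stay finite as the cutoff is taken away.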
The continuum limit is then well defined in perturbation theory, and even if it is not fully well defined non-perturbatively, the problems only show up at distance scales which are exponentially small in the inverse coupling for weak couplings. The Standard Model of particle physics is perturbatively renormalizable, and so are its component theories (quantum electrodynamics/electroweak theory and quantum chromodynamics). Of the three components, quantum electrodynamics is believed to not have a continuum limit, while the asymptotically free SU(2) and SU(3) weak isospin and strong color interactions are nonperturbatively well defined. The renormalization group describes how renormalizable theories emerge as the long-distance, low-energy effective field theory for any given high-energy theory. Because of this, renormalizable theories are insensitive to the precise nature of the underlying high-energy, short-distance phenomena. This is a blessing because it allows physicists to formulate low-energy theories without knowing the details of high-energy phenomena. It is also a curse, because once a renormalizable theory like the Standard Model is found to work, it gives very few clues to higher-energy processes. The only way high-energy processes can be seen in the Standard Model is when they allow otherwise forbidden events, or if they predict quantitative relations between the coupling constants.

### Gauge freedom

A gauge theory is a theory that admits a symmetry with a local parameter. For example, in every quantum theory the global phase of the wave function is arbitrary and does not represent something physical. Consequently, the theory is invariant under a global change of phases (adding a constant to the phase of all wave functions, everywhere); this is a global symmetry.
In quantum electrodynamics, the theory is also invariant under a local change of phase; that is, one may shift the phase of all wave functions so that the shift may be different at every point in space-time. This is a local symmetry. However, in order for a well-defined derivative operator to exist, one must introduce a new field, the gauge field, which also transforms in order for the local change of variables (the phase in our example) not to affect the derivative. In quantum electrodynamics this gauge field is the electromagnetic field. The change of local gauge of variables is termed a gauge transformation. In quantum field theory the excitations of fields represent particles. The particle associated with excitations of the gauge field is the gauge boson, which is the photon in the case of quantum electrodynamics. The degrees of freedom in quantum field theory are local fluctuations of the fields. The existence of a gauge symmetry reduces the number of degrees of freedom, simply because some fluctuations of the fields can be transformed to zero by gauge transformations, so they are equivalent to having no fluctuations at all, and they therefore have no physical meaning. Such fluctuations are usually called "non-physical degrees of freedom" or gauge artifacts; usually some of them have a negative norm, making them inadequate for a consistent theory. Therefore, if a classical field theory has a gauge symmetry, its quantized version (i.e. the corresponding quantum field theory) must retain this symmetry as well; in other words, the gauge symmetry must not acquire a quantum anomaly. If a gauge symmetry is anomalous (i.e.
not kept in the quantum theory) then the theory is inconsistent: for example, in quantum electrodynamics, had there been a gauge anomaly, this would require the appearance of photons with longitudinal polarization and polarization in the time direction, the latter having a negative norm, rendering the theory inconsistent; another possibility would be for these photons to appear only in intermediate processes but not in the final products of any interaction, making the theory non-unitary and again inconsistent (see optical theorem). In general, the gauge transformations of a theory consist of several different transformations, which may not be commutative. These transformations are together described by a mathematical object known as a gauge group. Infinitesimal gauge transformations are the gauge group generators. Therefore, the number of gauge bosons is the group dimension (i.e. the number of generators forming a basis). All the fundamental interactions in nature are described by gauge theories. These are:

• quantum electrodynamics, whose gauge boson is the photon and whose gauge group is U(1);
• quantum chromodynamics, whose gauge bosons are the gluons and whose gauge group is SU(3);
• the electroweak theory, whose gauge group is SU(2) × U(1) and whose gauge bosons are the photon and the W and Z bosons;
• gravity, whose classical theory is general relativity, which can be regarded as the gauge theory of local space-time symmetries.

### Supersymmetry

Supersymmetry assumes that every fundamental fermion has a superpartner that is a boson, and vice versa. It was introduced in order to solve the so-called hierarchy problem, that is, to explain why particles not protected by any symmetry (like the Higgs boson) do not receive radiative corrections to their masses driving them to the larger scales (GUT, Planck...). It was soon realized that supersymmetry has other interesting properties: its gauged version is an extension of general relativity (supergravity), and it is a key ingredient for the consistency of string theory. The way supersymmetry protects the hierarchies is the following: since for every particle there is a superpartner with the same mass, any loop in a radiative correction is cancelled by the loop corresponding to its superpartner, rendering the theory UV finite.
Since no superpartners have yet been observed, if supersymmetry exists it must be broken (through a so-called soft term, which breaks supersymmetry without ruining its helpful features). The simplest models of this breaking require that the energy of the superpartners not be too high; in these cases, supersymmetry is expected to be observed by experiments at the Large Hadron Collider.

## Suggested reading for the layman

• Gribbin, John; Q is for Quantum: Particle Physics from A to Z, Weidenfeld & Nicolson (1998) [ISBN 0297817523]. Dictionary of all things quantum.
• Feynman, Richard; The Character of Physical Law
• Feynman, Richard; QED
• Wilczek, Frank; Quantum Field Theory, Review of Modern Physics 71 (1999) S85–S95. Review article written by a master of QCD, Nobel laureate 2003. Full text available at: hep-th/9803075
• Ryder, Lewis H.; Quantum Field Theory (Cambridge University Press, 1985) [ISBN 0-521-33859-X]. Introduction to relativistic QFT for particle physics.
• Yndurain, Francisco Jose; The Theory of Quark and Gluon Interactions (Springer, 2006), Fourth Edition [ISBN 978-3-540-33209-1]
• Yndurain, Francisco Jose; Quantum Chromodynamics (Springer-Verlag, 1983) [ISBN 3-540-11752-0]
• Zee, Anthony; Quantum Field Theory in a Nutshell, Princeton University Press (2003) [ISBN 0-691-01019-6]
• Peskin, M. and Schroeder, D.; An Introduction to Quantum Field Theory (Westview Press, 1995) [ISBN 0-201-50397-2]
• Weinberg, Steven; The Quantum Theory of Fields (3 volumes), Cambridge University Press (1995). A monumental treatise on QFT written by a leading expert, Nobel laureate 1979.
• Loudon, Rodney; The Quantum Theory of Light (Oxford University Press, 1983) [ISBN 0-19-851155-8]
• Greiner, Walter and Müller, Berndt (2000). Gauge Theory of Weak Interactions. Springer. ISBN 3-540-67672-4.
• Frampton, Paul H.; Gauge Field Theories, Frontiers in Physics, Addison-Wesley (1986); Second Edition, Wiley (2000).
• Gordon L. Kane (1987).
Modern Elementary Particle Physics. Perseus Books. ISBN 0-201-11749-5.
• Kleinert, Hagen; Multivalued Fields in Condensed Matter, Electrodynamics, and Gravitation, World Scientific (Singapore, 2008) (also available online)
https://gateoverflow.in/1614/gate2012-28
The bisection method is applied to compute a zero of the function $f(x) = x^4 - x^3 - x^2 - 4$ in the interval [1,9]. The method converges to a solution after ––––– iterations.

(A) 1 (B) 3 (C) 5 (D) 7

The bisection method is exactly like binary search on a list. In each iteration, we pick the midpoint of the current interval as the approximation of the root, decide whether the root lies in the left or the right sub-interval, and continue until we find the root or reach some error tolerance.

In the first iteration, our guess for the root is the midpoint of [1,9], i.e. 5. Now $f(5) > 0$, so we choose the left sub-interval [1,5] (as any value in the right sub-interval [5,9] would give an even more positive value of $f$). In the second iteration, we choose the midpoint of [1,5], i.e. 3, but again $f(3) > 0$, so we again choose the left sub-interval [1,3]. In the third iteration, we choose the midpoint of [1,3], i.e. 2, and now $f(2) = 0$.

So we found the root in 3 iterations, and the answer is option (B).
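The iteration count in the answer can be reproduced with a short sketch of the bisection method (an illustrative helper written for this answer, not standard library code):

```python
def bisection_iterations(f, lo, hi, tol=1e-12):
    """Run bisection on [lo, hi] and count iterations until the midpoint
    hits a root exactly (or the bracket shrinks below tol)."""
    iterations = 0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        iterations += 1
        if f(mid) == 0:
            return iterations, mid
        if f(lo) * f(mid) < 0:
            hi = mid              # root lies in the left sub-interval
        else:
            lo = mid              # root lies in the right sub-interval
    return iterations, (lo + hi) / 2.0

f = lambda x: x**4 - x**3 - x**2 - 4
iters, root = bisection_iterations(f, 1.0, 9.0)
# Midpoints visited: 5, 3, 2 -- the root x = 2 is found on the 3rd iteration.
```

Hitting the root exactly is a special feature of this question; in general bisection only narrows the bracket until the tolerance is met.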
https://motls.blogspot.com/2011/09/ten-new-things-that-science-has-learned.html?m=1
## Tuesday, September 13, 2011

### Ten new things that science has learned about matter

This blog entry is somewhat analogous to the text about Ten new things modern physics has learned about time.

1. Matter is made of atoms. This is the proposition that Richard Feynman would have chosen as the single most important insight of scientific research. The atomic theory, confirming guesses of some ancient Greek philosophers, explains why different materials have different properties; how matter stores heat in the microscopic motion of the atoms and other constituents (all thermal phenomena may be interpreted as statistical properties of large ensembles of atoms); why tiny particles exhibit Brownian motion; how chemical reactions proceed at the microscopic level; how living creatures store and reproduce their genetic information; and many other things. Atoms are not indivisible; they're composed of small nuclei and electrons that orbit them. Nuclei are made of protons and neutrons, and protons and neutrons are made of quarks.

2. Properties of materials depend on chemical composition as well as details of the bonds between atoms. Alchemists believed that only the relative concentration of different elements (materials composed of a single type of atom only) matters, and they were able to figure out that the mixing ratios for every compound were rational numbers in some units. However, diamond and graphite (both of which are pure carbon) are the simplest example showing that the character of the bonds between the atoms heavily influences the material properties as well. Gases are made of free, well-separated individual atoms; liquids are made of individual atoms whose density is however so large that they are in constant contact; solids are either amorphous materials, similar to "very slow liquids" (glass), or crystals (diamond, ice) where the atoms are arranged in regular lattices.
Gases in which the free particles are electrically charged (including electrons and ions, i.e. atoms with removed/added electrons) are also possible at high temperatures; they're called plasma.

3. **Matter is impenetrable because of a combination of Pauli's exclusion principle, Heisenberg's uncertainty principle, and Coulomb's electrostatic force.** Matter is made out of atoms, bound states of electrons and nuclei that electrically attract each other. However, the electrons can't orbit at arbitrarily short distances near the nuclei because that would violate the uncertainty principle (one would have a well-defined momentum as well as position, which the principle forbids, unless the average squared momentum were so large as to make the configuration energetically disfavored). A compromise between the kinetic and potential energy, fighting with each other according to the uncertainty principle, determines the size of the atoms. Atoms can't be squeezed much more densely than their natural size indicates because Pauli's exclusion principle guarantees that you can't squeeze more than one electron into the same state (e.g. into the same volume for an atom, into the energy ground state in this volume). White dwarfs maximize the density of "electron-degenerate matter": the Chandrasekhar limit determines the highest possible mass of stars arranged in this way. Neutron stars obey similar principles, but it's the neutrons, not electrons, that maximize their density in this case.

4. **Elementary particles that make up matter are either bosons or fermions.** Fermions have spin (internal angular momentum) equal to a half-integral multiple of $$\hbar$$, the reduced Planck constant; for bosons, it is an integer multiple. The spin-statistics theorem due to Pauli guarantees that half-integral spin requires the particles to obey Pauli's exclusion principle, which makes matter composed of fermions "impenetrable" – what we would normally call "matter".
The fermions' wave function flips its sign under the permutation of two identical particles. On the other hand, the bosons' wave function doesn't change its sign, which is why they don't obey Pauli's exclusion principle. Consequently, bosons like to overlap with their siblings and they are often interpreted as "particles of forces" (e.g. the photon is a particle of the electromagnetic force) which love to overlap with their siblings much like the electric and magnetic field lines. That's why lasers produce coherent light (lots of photons, which are examples of bosons) and why one can have Bose-Einstein condensates (out of atoms which behave as bosons).

5. **Elementary particles behave as quanta of waves and waves are made out of particles.** Niels Bohr's complementarity principle guarantees that the basic building blocks of matter behave both as particles and waves in different contexts. The more they behave as waves, the less they behave as particles, and vice versa. Such a unified picture reconciles the particle-like properties of matter and the wave-like properties (such as interference in double-slit experiments). Electrons (fermions) may be viewed as quanta of a Dirac field; photons are quanta (basic packages of energy) of the electromagnetic field. Equivalently, the electromagnetic field can't carry a continuously varying energy: the energy of an electromagnetic wave of frequency $$\omega$$ (above the vacuum energy) is equal to $$E=N\hbar\omega$$ where $$N$$ has to be an integer and may be interpreted as the number of photons.

6. **Sound is made of waves in the air or the environment, but there is no luminiferous aether.** Sounds and music result from vibrations of the air: "A" is 440 Hz (periods per second). Sound may propagate in the form of vibrations through other materials, too. On the other hand, electromagnetic waves and radiation don't require any atoms to be present: the electromagnetic field is a property of the vacuum itself.
Even in the vacuum, there is an electric vector and a magnetic vector at each point of space and at each moment of time. Consequently, there is no aether wind (observations that would allow us to "feel" that we are moving relative to the aether); as special relativity assumes/shows, the speed of light is always 299,792,458 m/s, regardless of the speed of the source or the speed of the observer.

7. **Inertial mass is equal to gravitational mass.** This so-called equivalence principle guarantees that all objects accelerate at the same rate in gravitational fields (e.g. on the Earth's surface, assuming it is in the vacuum, i.e. in the absence of friction forces), as we can observe. On the theoretical front, this property of the gravitational force is the basic insight behind Einstein's general theory of relativity, which explains gravity as a consequence of the curvature of spacetime.

8. **The total mass is conserved, but the total mass is the same thing as the total energy.** Whether you define the overall mass of an object as the inertial mass (the force you need to exert to achieve a unit acceleration) or the gravitational mass (the strength of the gravitational field around the object, as measured by the acceleration of other objects at a fixed distance), the overall mass is conserved. However, you must also include the mass of "pure energy" in the equation, according to Einstein's $$E=mc^2$$ obtained from the special theory of relativity, otherwise the conservation law would be violated: nuclear fission or fusion may convert about 0.1% or 1% of the mass into pure energy, respectively. This unified energy-mass conservation law exists because the laws of physics are time-translationally invariant (Emmy Noether's theorem) and it becomes vacuous or invalid in the context of cosmology (where the effective laws of physics or the background quickly evolve with time).

9. **The number of elementary particles isn't conserved.**
Indeed, one may create lots of new particles, typically coming in particle-antiparticle pairs, in particle collisions. Particle-antiparticle pairs may annihilate, too. The possibility of creating matter out of pure energy is the most characteristic prediction of quantum field theory. Quantum field theory also implies that there exists antimatter: for each particle, there exists an antiparticle with the same (positive) mass and the opposite signs of all charges (whose magnitudes are identical). Antimatter has to behave analogously to matter, especially if it is viewed in the mirror and observed backwards in time (in the latter case, the identical behavior of matter and antimatter is guaranteed by the CPT theorem).

10. **There exist heavier particle species which are relevant for shorter distance scales.** Most of the matter around us is composed of electrons, protons, and neutrons, or – using the more elementary description – electrons, up-quarks, and down-quarks (which are attracted by forces mediated by photons and gluons). However, there exist many other particle species similar to electrons – the so-called leptons – and many other quarks. Many of those particles are unstable, and therefore unimportant in the composition of stable materials. But even if heavier particles are stable, they are less important than the light ones because it is hard to create them and because their potential existence only affects phenomena at ever shorter distances. Elementary particles heavier than the Planck mass – $$10^{-8}$$ kilograms or so – also exist, and there are many of them. However, they may be interpreted as black hole microstates, and their description in terms of Einstein's general theory of relativity becomes more natural than their description in terms of quantum field theory. String/M-theory provides us with many detailed interpolations between the regular light particle species and the black holes – e.g. Kaluza-Klein modes, i.e.
particles moving in extra dimensions; excited string states and branes; and others.
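The photon-number quantization in item 5, $$E=N\hbar\omega$$, can be illustrated with a short numerical sketch. The specific figures (a 1 mW laser pointer at 532 nm) are illustrative assumptions, not taken from the post:

```python
# Photon-number quantization E = N * hbar * omega (item 5): an
# electromagnetic wave of angular frequency omega carries energy only
# in integer multiples of hbar*omega.  The 1 mW / 532 nm figures are
# assumed example values for a green laser pointer.
import math

h = 6.62607015e-34        # Planck constant, J*s (exact since SI 2019)
c = 299_792_458.0         # speed of light, m/s
wavelength = 532e-9       # m (assumed)
power = 1e-3              # W (assumed)

omega = 2 * math.pi * c / wavelength
hbar = h / (2 * math.pi)
E_photon = hbar * omega   # = h*c/wavelength, energy of one quantum

N_per_second = power / E_photon
print(f"energy per photon: {E_photon:.3e} J")
print(f"photons per second in a 1 mW beam: {N_per_second:.3e}")
```

Even at this tiny power, the number of quanta per second is so enormous that the classical, continuous-energy picture is an excellent approximation.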
http://math.stackexchange.com/questions/125702/the-number-of-solutions-to-an-nth-order-differential-equation
# The number of solutions to an $n^{th}$ order differential equation. For an $n$th order differential equation, why are there always $n$ solutions? Why exactly $n$, not $n - 1$, $n+1$, or infinitely many? Addendum by LePressentiment : This is motivated by P176 on Strang's Intro to Lin Alg, 4th Ed. An $n$th order differential equation, how many basis functions does it have? I'd guess $n$, because the differential equation has $n$ linearly independent solutions (predicated on Julián Aguirre's answer), all of which look to span the nullspace/solution space of the homogeneous ODE. - Well, technically it's not true that order $n$ differential equations have $n$ independent solutions. For example, the first-order DE $y'=\sqrt{|y|}$ has infinitely many independent solutions, even with the initial condition $y(0)=0$. If you put some reasonable restrictions on the DE then you get to the theorem Julian cites. –  Ryan Budney Mar 29 '12 at 18:15 The answer to why this is true comes down to the proof of the existence and uniqueness theorem for ordinary differential equations. The main condition is that the DE has to satisfy a Lipschitz inequality (which the square root function does not). The key part of the proof is an application of the mean value theorem. –  Ryan Budney Mar 29 '12 at 18:17 You can also view the existence and uniqueness theorem as an application of something called Gronwall's Inequality. en.wikipedia.org/wiki/Gronwall%27s_inequality This tells you that "nearby solutions stay nearby" in some precise sense, so if two solutions have the same initial condition, they have to stay the same. –  Ryan Budney Mar 29 '12 at 18:18 In general, a differential equation has an infinite number of solutions. The family of solutions of an equation of order $n$ depends on $n$ constants.
If the equation is a homogeneous linear equation of order $n$, then there exist $n$ linearly independent solutions $y_1,\dots,y_n$ such that the general solution is $$y=C_1y_1+\dots+C_ny_n,$$ where $C_1,\dots C_n$ are constants. Consider the $n$-th order linear homogeneous equation $$y^{(n)}+a_{n-1}(x)y^{(n-1)}+\dots+a_1(x)y'+a_0(x)y=0,$$ where the $a_i(x)$ are continuous functions on an interval, which, without loss of generality, we may assume contains $x=0$. The theory of existence and uniqueness proves that there are solutions $y_1,\dots,y_n$ such that $$y_1(0)=1,y_1'(0)=0,y_1''(0)=0,\dots,y_1^{(n-1)}(0)=0\\ y_2(0)=0,y_2'(0)=1,y_2''(0)=0,\dots,y_2^{(n-1)}(0)=0\\ y_3(0)=0,y_3'(0)=0,y_3''(0)=1,\dots,y_3^{(n-1)}(0)=0\\ \dots\\ y_n(0)=0,y_n'(0)=0,y_n''(0)=0,\dots,y_n^{(n-1)}(0)=1$$ These solutions are linearly independent, and form a basis of the space of solutions. You can check the details in almost any book on ODE's.
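This construction can be checked numerically. The sketch below uses a hypothetical 3rd-order example, $y''' + y' = 0$ (an assumed equation, not from the question), builds the three solutions with the unit initial conditions above via a classical Runge-Kutta integrator, and verifies linear independence through the Wronskian matrix:

```python
# Build the basis solutions y_1, y_2, y_3 of the sample 3rd-order linear
# ODE  y''' + y' = 0  (an assumed example) with the unit initial
# conditions from the answer, integrating by classical RK4, then check
# linear independence via the Wronskian matrix.
import numpy as np

def rhs(Y):
    y, yp, ypp = Y
    return np.array([yp, ypp, -yp])       # y''' = -y'

def rk4(Y0, x_end=1.0, n=1000):
    Y, h = np.array(Y0, float), x_end / n
    for _ in range(n):
        k1 = rhs(Y)
        k2 = rhs(Y + h/2*k1)
        k3 = rhs(Y + h/2*k2)
        k4 = rhs(Y + h*k3)
        Y = Y + h/6*(k1 + 2*k2 + 2*k3 + k4)
    return Y

# y_k has its k-th derivative equal to 1 at x=0 and the others 0
basis_at_1 = [rk4([1, 0, 0]), rk4([0, 1, 0]), rk4([0, 0, 1])]
W1 = np.column_stack(basis_at_1)          # Wronskian matrix at x = 1
print("det W(1) =", np.linalg.det(W1))    # nonzero => independent
```

By Abel's identity the Wronskian determinant of this equation is constant in $x$, so it equals its value at $0$, namely $\det I = 1$, confirming that the $n$ solutions stay linearly independent away from the initial point.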
https://www.physicsforums.com/threads/polynomial-over-finite-field.237297/
# Polynomial over finite field • Start date • #1 5 0 Hello, I've got a quite simple question, but I don't get it: Say we've got a finite field $$\mathbb{F}_q$$ and a polynomial $$f \in \mathbb{F}_q[X]$$. Let $$v$$ denote the number of distinct values of $$f$$. Then, I hope, it should be possible to prove that $$\deg f \geq 1 \Rightarrow v \geq \frac{q}{\deg f}$$. I'd appreciate any suggestions. Greetings, korollar • #2 Hurkyl Staff Emeritus Gold Member 14,950 19 Isn't this just a straightforward counting argument? Further hint: (how many inputs does it have?) Last edited: • #3 5 0 Sorry, I just don't see it. I have q inputs. I can try out some polynomials and fields of small degree. But doesn't it demand number-theoretic arguments to show this estimate? • #4 5 0 Ah, it's too easy. Each value can be assumed by f at most $$\deg f$$ times. So $$v \geq \frac{q}{\deg f}$$. But how can I show that even $$v \geq \frac{q-1}{\deg f} + 1$$ is true? • #5 10 0 Hint: Show that for $$\deg f \neq 1, q$$, $$\frac{q}{\deg f}$$ has a remainder. Then do $$\deg f = 1, q$$ as special cases. EDIT: Also note that the inequality, as stated, isn't true: consider $$f = x^{q+1}-x$$, which has only one value, 0, yet $$1 \not\geq 1+\frac{q-1}{q+1}$$. You need a floor around your fraction. Last edited: • #6 5 0 Sorry, I forgot to write that I only look at polynomials of degree $$< q$$. But I still need help. Say $$q = n \cdot \deg f + r$$ where $$r < \deg f$$. But in general $$r + \frac{\deg f -1}{\deg f} > 1$$. So I can't tell why the estimate is correct. • #7 10 0 You're right, I was assuming you meant to include a floor on the fraction. • #8 10 0 Wan et al.'s "Value Sets over Finite Fields" gives this result as their Corollary 2.4; their paper may be worth a look. If there's a simple counting argument, I don't see it. • #9 5 0 I'd love to have a look at it. You don't happen to have it at hand?
However - I found this inequality in a paper of Gomez-Calderon, where the notation is the following: $$\left[ \frac{q-1}{d} \right] + 1 \leq |V_f|$$ (d: degree of f, V_f: value set of f) And it says: $$[x]$$ denotes the greatest integer $$\leq x$$ - which makes it trivial. In my source there were no brackets. I assume the notation above is the correct one.
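The bound under discussion, $$|V_f| \geq \lfloor (q-1)/d \rfloor + 1$$, can be spot-checked by brute force over small prime fields. This is only a sketch: it tests prime $$q$$ and arbitrary sample degrees, not the general theorem:

```python
# Brute-force check of |V_f| >= floor((q-1)/d) + 1 for monic polynomials
# of degree d over the prime field F_q.  Prime q and small d only; the
# tested cases are illustrative choices, not a proof.
from itertools import product

def value_set_size(coeffs, q):
    """coeffs = (a_0, a_1, ..., a_d) of f = sum a_i x^i over F_q."""
    values = {sum(a * pow(x, i, q) for i, a in enumerate(coeffs)) % q
              for x in range(q)}
    return len(values)

for q in (5, 7, 11, 13):
    for d in (1, 2, 3):
        # all monic polynomials of degree d (leading coefficient 1)
        for lower in product(range(q), repeat=d):
            coeffs = tuple(lower) + (1,)
            v = value_set_size(coeffs, q)
            assert v >= (q - 1) // d + 1, (q, coeffs, v)
print("bound holds for all tested cases")
```

For example, $f=x^2$ over $\mathbb{F}_{13}$ attains exactly $\lfloor 12/2\rfloor + 1 = 7$ values (the six nonzero quadratic residues plus 0), so the bound is sharp there.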
http://nlp.stanford.edu/IR-book/html/htmledition/probability-estimates-in-theory-1.html
## Probability estimates in theory

For each term $t$, what would these numbers look like for the whole collection? The odds-ratio contingency table gives counts of documents in the collection, where $\text{df}_t$ is the number of documents that contain term $t$. Using this, $p_t = s/S$ and $u_t = (\text{df}_t - s)/(N - S)$, and

$$c_t = K(N, \text{df}_t, S, s) = \log \frac{s/(S-s)}{(\text{df}_t - s)/((N - \text{df}_t) - (S - s))} \qquad (74)$$

To avoid the possibility of zeroes (such as if every or no relevant document has a particular term) it is fairly standard to add $\frac{1}{2}$ to each of the quantities in the center 4 terms of the contingency table, and then to adjust the marginal counts (the totals) accordingly (so, the bottom right cell totals $N+2$). Then we have:

$$\hat{c}_t = K(N, \text{df}_t, S, s) = \log \frac{(s+\frac{1}{2})/(S-s+\frac{1}{2})}{(\text{df}_t - s + \frac{1}{2})/(N - \text{df}_t - S + s + \frac{1}{2})} \qquad (75)$$

Adding $\frac{1}{2}$ in this way is a simple form of smoothing. For trials with categorical outcomes (such as noting the presence or absence of a term), one way to estimate the probability of an event from data is simply to count the number of times an event occurred divided by the total number of trials. This is referred to as the relative frequency of the event. Estimating the probability as the relative frequency is the maximum likelihood estimate (or MLE), because this value makes the observed data maximally likely. However, if we simply use the MLE, then the probability given to events we happened to see is usually too high, whereas other events may be completely unseen and giving them as a probability estimate their relative frequency of 0 is both an underestimate, and normally breaks our models, since anything multiplied by 0 is 0. Simultaneously decreasing the estimated probability of seen events and increasing the probability of unseen events is referred to as smoothing. One simple way of smoothing is to add a number $\alpha$ to each of the observed counts. These pseudocounts correspond to the use of a uniform distribution over the vocabulary as a Bayesian prior, following Equation 59.
We initially assume a uniform distribution over events, where the size of $\alpha$ denotes the strength of our belief in uniformity, and we then update the probability based on observed events. Since our belief in uniformity is weak, we use $\alpha = \frac{1}{2}$. This is a form of maximum a posteriori (MAP) estimation, where we choose the most likely point value for probabilities based on the prior and the observed evidence, following Equation 59. We will further discuss methods of smoothing estimated counts to give probability models in Section 12.2.2; the simple method of adding $\frac{1}{2}$ to each observed count will do for now.
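The add-$\frac{1}{2}$ smoothing can be illustrated directly. The counts below are made-up illustrative numbers, not from the text:

```python
# MLE (relative frequency) vs. add-1/2 pseudocount smoothing for the
# per-term probabilities p_t and u_t.  The counts (N documents, S
# relevant, df_t containing the term, s relevant-and-containing) are
# made-up illustrative numbers.
import math

N, S = 1000, 50        # collection size, number of relevant documents
df_t, s = 100, 0       # term occurs in 100 docs, in 0 relevant docs

# Maximum likelihood estimates: break down when s = 0
p_t_mle = s / S                      # 0.0 -- claims the event is impossible
u_t_mle = (df_t - s) / (N - S)

# Add-1/2 smoothing (pseudocounts), matching the smoothed estimate above
p_t = (s + 0.5) / (S + 1)
u_t = (df_t - s + 0.5) / (N - S + 1)

# Log odds ratio c_t computed from the smoothed probabilities
c_t = math.log((p_t / (1 - p_t)) / (u_t / (1 - u_t)))
print(f"MLE p_t = {p_t_mle}, smoothed p_t = {p_t:.4f}, c_t = {c_t:.3f}")
```

With $s=0$ the MLE would zero out every product it enters; the pseudocounts keep $p_t$ small but nonzero, and the term correctly receives a negative retrieval weight $c_t$.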
https://reference.wolfram.com/language/ref/KolmogorovSmirnovTest.html
# KolmogorovSmirnovTest

KolmogorovSmirnovTest[data] tests whether data is normally distributed using the Kolmogorov–Smirnov test. KolmogorovSmirnovTest[data,dist] tests whether data is distributed according to dist using the Kolmogorov–Smirnov test. KolmogorovSmirnovTest[data,dist,"property"] returns the value of "property".

# Details and Options

• KolmogorovSmirnovTest performs the Kolmogorov–Smirnov goodness-of-fit test with null hypothesis that data was drawn from a population with distribution dist and alternative hypothesis that it was not. • By default, a probability value or p-value is returned. • A small p-value suggests that it is unlikely that the data came from dist. • The dist can be any symbolic distribution with numeric and symbolic parameters or a dataset. • The data can be univariate {x1,x2,…} or multivariate {{x1,y1,…},{x2,y2,…},…}. • The Kolmogorov–Smirnov test assumes that the data came from a continuous distribution. • The Kolmogorov–Smirnov test effectively uses a test statistic based on sup_x |F̂(x) − F(x)|, where F̂ is the empirical CDF of data and F is the CDF of dist. • For multivariate tests, the sum of the univariate marginal p-values is used and is assumed to follow a UniformSumDistribution under H0. • KolmogorovSmirnovTest[data,dist,"HypothesisTestData"] returns a HypothesisTestData object htd that can be used to extract additional test results and properties using the form htd["property"]. • KolmogorovSmirnovTest[data,dist,"property"] can be used to directly give the value of "property". • Properties related to the reporting of test results include: • "PValue": p-value "PValueTable": formatted version of "PValue" "ShortTestConclusion": a short description of the conclusion of a test "TestConclusion": a description of the conclusion of a test "TestData": test statistic and p-value "TestDataTable": formatted version of "TestData" "TestStatistic": test statistic "TestStatisticTable": formatted "TestStatistic" • The following properties are independent of which test is being performed.
• Properties related to the data distribution include: • "FittedDistribution": fitted distribution of data "FittedDistributionParameters": distribution parameters of data • The following options can be given: • Method (default Automatic): the method to use for computing p-values. SignificanceLevel (default 0.05): cutoff for diagnostics and reporting. • For a goodness-of-fit test, a cutoff α is chosen such that H0 is rejected only if p ≤ α. The value of α used for the "TestConclusion" and "ShortTestConclusion" properties is controlled by the SignificanceLevel option. By default, α is set to 0.05. • With the setting Method->"MonteCarlo", datasets of the same length as the input are generated under H0 using the fitted distribution. The EmpiricalDistribution from KolmogorovSmirnovTest[si,dist,"TestStatistic"] is then used to estimate the p-value.

# Examples

## Basic Examples(3)

Perform a Kolmogorov–Smirnov test for normality: Test the fit of some data to a particular distribution: Compare the distributions of two datasets: There is not sufficient evidence that the data may be samples from different distributions:

## Scope(9)

### Testing(6)

Perform a Kolmogorov–Smirnov test for normality: The p-value for the normal data is large compared to the p-value for the non-normal data: Test the goodness of fit to a particular distribution: Compare the distributions of two datasets: The two datasets do not have the same distribution: Test for multivariate normality: Test for goodness of fit to any multivariate distribution: Create a HypothesisTestData object for repeated property extraction: The properties available for extraction:

### Reporting(3)

Tabulate the results of the Kolmogorov–Smirnov test: The full test table: A p-value table: The test statistic: Retrieve the entries from a Kolmogorov–Smirnov test table for custom reporting: Report test conclusions using "ShortTestConclusion" and "TestConclusion": The conclusion may differ at a different significance level:

## Options(4)

### Method(3)

Use Monte Carlo-based methods or a computation formula: Set the number of samples to use for Monte Carlo-based methods: The Monte Carlo estimate converges to the true p-value with increasing samples: Set the random seed used in Monte Carlo-based methods: The seed affects the state of the generator and has some effect on the resulting p-value:

### SignificanceLevel(1)

Set the significance level used for "TestConclusion" and "ShortTestConclusion": By default, α = 0.05 is used:

## Applications(2)

A power curve for the Kolmogorov–Smirnov test: Visualize the approximate power curve: Estimate the power of the Kolmogorov–Smirnov test when the underlying distribution is a UniformDistribution[{-4,4}], the test size is 0.05, and the sample size is 12: A sample of 31 sheets of airplane glass was subjected to a constant stress until breakage. Investigate whether the data is drawn from a NormalDistribution or a GammaDistribution: Compare the quantile-quantile plots for the candidate distributions: The data appears to fit a GammaDistribution slightly better than a NormalDistribution:

## Properties & Relations(9)

By default, univariate data is compared to a NormalDistribution: The parameters have been estimated from the data: Multivariate data is compared to a MultinormalDistribution by default: The parameters of the test distribution are estimated from the data if not specified: Specified parameters are not estimated: Maximum-likelihood estimates are used for unspecified parameters of the test distribution: If the parameters are unknown, KolmogorovSmirnovTest applies a correction when possible: The parameters are estimated but no correction is applied: The fitted distribution is the same as before and the p-value is corrected: When parameters are estimated, Lilliefors' correction is used: Estimate the parameters prior to testing to perform the classical Kolmogorov–Smirnov test: Conceptually, the Kolmogorov–Smirnov test computes the maximum absolute difference between the empirical and theoretical CDFs:
Plot the CDFs, showing the maximum absolute difference: Independent marginal densities are assumed in tests for multivariate goodness of fit: The test statistic is identical when independence is assumed: The Kolmogorov–Smirnov test works with the values only when the input is a TimeSeries:

## Possible Issues(3)

The Kolmogorov–Smirnov test is not intended for discrete distributions: The test tends to be conservative: Use Monte Carlo methods or PearsonChiSquareTest in these cases: The Kolmogorov–Smirnov test is not valid for some distributions when parameters have been estimated from the data: Provide parameter values if they are known: Alternatively, use Monte Carlo methods to approximate the p-value: Ties in the data are ignored: Differences may be more apparent with larger numbers of ties:

## Neat Examples(1)

Compute the statistic when the null hypothesis is true: The test statistic given a particular alternative: Compare the distributions of the test statistics:

Wolfram Research (2010), KolmogorovSmirnovTest, Wolfram Language function, https://reference.wolfram.com/language/ref/KolmogorovSmirnovTest.html.
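The same test is available outside the Wolfram Language; a sketch using SciPy's `kstest` and `ks_2samp` (the data and parameters below are made-up examples). Note the caveat from the docs above: the plain KS p-value is only valid when the reference distribution is fully specified, not estimated from the same data:

```python
# Kolmogorov-Smirnov goodness-of-fit test, analogous to
# KolmogorovSmirnovTest[data, dist], sketched with SciPy instead of
# the Wolfram Language.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=3.0, size=200)

# Test against a fully specified distribution (parameters known,
# so no Lilliefors-style correction is needed):
stat, p = stats.kstest(data, "norm", args=(2.0, 3.0))
print(f"D = {stat:.4f}, p-value = {p:.4f}")

# Two-sample version, analogous to KolmogorovSmirnovTest[data1, data2]:
data2 = rng.normal(loc=2.0, scale=3.0, size=200)
stat2, p2 = stats.ks_2samp(data, data2)
print(f"two-sample D = {stat2:.4f}, p-value = {p2:.4f}")
```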
http://www.chegg.com/homework-help/questions-and-answers/musician-plays-string-guitar-fundamental-frequency-3300-hz-string-641-cm-long-mass-0275-g--q2669121
A musician plays a string on a guitar that has a fundamental frequency of 330.0 Hz. The string is 64.1 cm long and has a mass of 0.275 g. (a) What is the tension in the string? N (b) At what speed do the waves travel on the string? m/s (c) While the guitar string is still being plucked, another musician plays a slide whistle that is closed at one end and open at the other. He starts at a very high frequency and slowly lowers the frequency until beats, with a frequency of 5 Hz, are heard with the guitar. What is the fundamental frequency of the slide whistle with the slide in this position? Hz (d) How long is the open tube in the slide whistle for this frequency? m

Practice with a similar question from College Physics (3rd Edition): A musician plays a string on a guitar that has a fundamental frequency of 330.0 Hz. The string is 65.5 cm long and has a mass of 0.300 g. (a) What is the tension in the string? (b) At what speed do the waves travel on the string? (c) While the guitar string is still being plucked, another musician plays a slide whistle that is closed at one end and open at the other. He starts at a very high frequency and slowly lowers the frequency until beats, with a frequency of 5 Hz, are heard with the guitar. What is the fundamental frequency of the slide whistle with the slide in this position? (d) How long is the open tube in the slide whistle for this frequency?
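A worked sketch of parts (a) through (d), assuming the standard relations $v=\sqrt{T/\mu}$ and $f_1 = v/2L$ for a string fixed at both ends, $f_1 = v_s/4L$ for a closed-open tube, and a speed of sound of 343 m/s (which the problem does not specify):

```python
# Worked sketch for the guitar-string / slide-whistle problem.
# Assumptions: ideal string (f1 = v / 2L, v = sqrt(T/mu)), closed-open
# tube (f1 = v_sound / 4L), and v_sound = 343 m/s (not given above).
f1 = 330.0                  # fundamental frequency of the string, Hz
L = 0.641                   # string length, m
m = 0.275e-3                # string mass, kg
v_sound = 343.0             # assumed speed of sound in air, m/s

mu = m / L                  # linear mass density, kg/m
v = 2 * L * f1              # (b) wave speed on the string: f1 = v / (2L)
T = mu * v**2               # (a) tension: v = sqrt(T / mu)

# (c) Beats at 5 Hz while lowering from a high frequency, so the whistle
# sits 5 Hz above the string's 330 Hz when the beats are heard.
f_whistle = f1 + 5.0

# (d) Closed-open tube: f1 = v_sound / (4 * L_tube)
L_tube = v_sound / (4 * f_whistle)

print(f"(a) T = {T:.1f} N")
print(f"(b) v = {v:.1f} m/s")
print(f"(c) f_whistle = {f_whistle:.0f} Hz")
print(f"(d) L_tube = {L_tube:.3f} m")
```

This gives roughly 77 N of tension, a wave speed of about 423 m/s, a whistle fundamental of 335 Hz, and a tube length of about 0.26 m.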
https://www.physicsforums.com/threads/cant-identify-variable-in-qm-question.737127/
# Can't Identify Variable in QM Question 1. Feb 7, 2014 ### Jbar Hello, my question is a simple one: I am attempting to do problem 2.31 in Griffiths' QM book (latest edition). The question states, "The Dirac Delta function can be thought of as the limiting case of a rectangle of area 1, as the height goes to infinity and the width to zero. Show that the Delta-function well is a "weak" potential, in the sense that z0 -> 0. ..." What is z0 in this case? I've not seen it introduced anywhere in the book and there's no description for what it represents. I'm not sure if I'm missing something obvious but any help would be great. 2. Feb 7, 2014 ### Dick The meaning of z0 is specific to this problem and I don't have Griffith's book, but it almost certainly means that z0 is the width of the potential. Draft saved Draft deleted Similar Discussions: Can't Identify Variable in QM Question
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8300871253013611, "perplexity": 529.0848328217913}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825227.80/warc/CC-MAIN-20171022113105-20171022133105-00170.warc.gz"}
http://mathhelpforum.com/pre-calculus/135966-proving-cofunction-identity-using-subtraction-formula.html
# Thread: Proving cofunction identity using subtraction formula. 1. ## Proving cofunction identity using subtraction formula. Got this question in textbook assignments: Prove the cofunction identity using the addition and subtraction formulas. 20. cot (pi/2 - u) = tan u so to my knowledge, I turn it into: 1 / tan (pi/2 - u) and try to apply subtraction formula but then tan pi/2 isn't defined is it? because 1/0 isn't defined. Is this the case? [EDIT] I've also run into this now and don't want to spam the forums so I will post here, prove: sin(pi/2 - x) = sin(pi/2 + x) I start with RHS = sin(pi/2) cos x + cos (pi/2) sin x = (1) cos x + (0) sin x = cos x Haha this is probly also simple but alas it eludes me... 2. Hello, DannyMath! Your reasoning is correct ... We can't work it that way. Prove the cofunction identity using the addition and subtraction formulas. . . $20.\;\;\cot\left(\tfrac{\pi}{2}- u\right) \:=\: \tan u$ If you don't know the addition/subtraction formula for cotangent, we can re-invent it. $\cot(A \pm B) \;=\;\frac{1}{\tan(A+B)} \;=\;\frac{1\mp\tan A\tan B}{\tan A \pm \tan B} \;=\; \frac{1 \mp \dfrac{1}{\cot A}\,\dfrac{1}{\cot B}}{\dfrac{1}{\cot A} \pm\dfrac{1}{\cot B}}$ Multiply by $\frac{\cot A\cot B}{\cot A\cot B}\!:\quad\boxed{\cot(A \pm B) \;=\;\frac{\cot A\cot A \mp 1}{\cot B \pm \cot A}}$ We have: . $\cot\left(\tfrac{\pi}{2} - u\right) \;=\;\frac{\cot\frac{\pi}{2}\cot u + 1}{\cot u - \cot\frac{\pi}{2}} \;=\;\frac{0\cdot\cot u + 1}{\cot u - 0} \;=\;\frac{1}{\cot u} \;=\;\tan u$ 3. > 4. Ok I hesitate to ask but I understand everything until after $ \frac{1 \mp \dfrac{1}{\cot A}\,\dfrac{1}{\cot B}}{\dfrac{1}{\cot A} \pm\dfrac{1}{\cot B}} $ The steps between the above and $ \boxed{\cot(A \pm B) \;=\;\frac{\cot A\cot B \mp 1}{\cot B \pm \cot A}} $ have me stumped. I tried for an hour to work it out on paper and googled it. Does it have something to do with multiplying by its reciprocal? 5. 
Originally Posted by DannyMath Ok I hesitate to ask but I understand everything until after $ \frac{1 \mp \dfrac{1}{\cot A}\,\dfrac{1}{\cot B}}{\dfrac{1}{\cot A} \pm\dfrac{1}{\cot B}} $ Multiply both numerator and denominator by cot(A) cot(B) to get $\frac{cot(A)cot(B)\mp 1}{cot(B)+ cot(A)}$. The steps between the above and $ \boxed{\cot(A \pm B) \;=\;\frac{\cot A\cot B \mp 1}{\cot B \pm \cot A}} $ have me stumped. I tried for an hour to work it out on paper and googled it. Does it have something to do with multiplying by its reciprocal? 6. Ok I think I understand now. I was getting confused cause I know that when you multiply something by its reciprocal you get 1, but I'm starting to realize that's not what we're doing in this case haha. Thanks guys!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9258119463920593, "perplexity": 1150.0437748017232}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281162.88/warc/CC-MAIN-20170116095121-00314-ip-10-171-10-70.ec2.internal.warc.gz"}
http://mathoverflow.net/questions/68744/bounding-mutual-information-given-bounds-on-pointwise-mutual-information
# Bounding mutual information given bounds on pointwise mutual information Suppose I have two sets $X$ and $Y$ and a joint probability distribution over these sets $p(x,y)$. Let $p(x)$ and $p(y)$ denote the marginal distributions over $X$ and $Y$ respectively. The mutual information between $X$ and $Y$ is defined to be: $$I(X; Y) = \sum_{x,y}p(x,y)\cdot\log\left(\frac{p(x,y)}{p(x)p(y)}\right)$$ i.e. it is the average value of the pointwise mutual information pmi$(x,y) \equiv \log\left(\frac{p(x,y)}{p(x)p(y)}\right)$. Suppose I know upper and lower bounds on pmi$(x,y)$: i.e. I know that for all $x,y$ the following holds: $$-k \leq \log\left(\frac{p(x,y)}{p(x)p(y)}\right) \leq k$$ What upper bound does this imply on $I(X; Y)$. Of course it implies $I(X; Y) \leq k$, but I would like a tighter bound if possible. This seems plausible to me because p defines a probability distribution, and pmi$(x,y)$ cannot take its maximum value (or even be non-negative) for every value of $x$ and $y$. - Alas, that's almost all you can say for large $k$: take $X=Y$ of cardinality $e^k$ and consider the linear combination of the uniform distribution on the diagonal with the coefficient $1-e^{-k}$ and the uniform distribution on the entire product with the coefficient $e^{-k}$. You'll still get $k-ke^{-k}<k$ but I doubt it is the kind of improvement you are looking for. Or are you interested in the case $k\to 0$? –  fedja Jun 24 '11 at 17:49 Good example -- I guess for k >> 1, there is no substantially better upper bound. I wonder if something is possible if $p(x)$ is a uniform distribution over $X$ and $|X|$ >> $e^k$... –  Florian Jun 24 '11 at 18:49 What does one mean by $k>>1$ in mathematics? –  Ashok Aug 29 '11 at 8:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9856088161468506, "perplexity": 172.8258416044001}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997862553.92/warc/CC-MAIN-20140722025742-00136-ip-10-33-131-23.ec2.internal.warc.gz"}
https://projecteuclid.org/euclid.bbms/1553047234
## Bulletin of the Belgian Mathematical Society - Simon Stevin ### Lipsman mapping and dual topology of semidirect products Aymen Rahali #### Abstract We consider the semidirect product $G = K \ltimes V$ where $K$ is a connected compact Lie group acting by automorphisms on a finite dimensional real vector space $V$ equipped with an inner product $\langle,\rangle$. We denote by $\widehat{G}$ the unitary dual of $G$ (note that we identify each representation $\pi\in\widehat{G}$ to its classes $[\pi]$) and by $\mathfrak{g}^\ddag/G$ the space of admissible coadjoint orbits, where $\mathfrak{g}$ is the Lie algebra of $G.$ It was pointed out by Lipsman that the correspondence between $\mathfrak{g}^\ddag/G$ and $\widehat{G}$ is bijective. Under some assumption on $G,$ we prove that the Lipsman mapping \begin{eqnarray*} \Theta:\mathfrak{g}^\ddag/G &\longrightarrow&\widehat{G}\\ \mathcal{O}&\longmapsto&\pi_\mathcal{O} \end{eqnarray*} is a homeomorphism. #### Article information Source Bull. Belg. Math. Soc. Simon Stevin, Volume 26, Number 1 (2019), 149-160. Dates First available in Project Euclid: 20 March 2019 https://projecteuclid.org/euclid.bbms/1553047234 Digital Object Identifier doi:10.36045/bbms/1553047234 Mathematical Reviews number (MathSciNet) MR3934086 Zentralblatt MATH identifier 07060321 #### Citation Rahali, Aymen. Lipsman mapping and dual topology of semidirect products. Bull. Belg. Math. Soc. Simon Stevin 26 (2019), no. 1, 149--160. doi:10.36045/bbms/1553047234. https://projecteuclid.org/euclid.bbms/1553047234
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8031721711158752, "perplexity": 628.861595148451}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573908.70/warc/CC-MAIN-20190920071824-20190920093824-00205.warc.gz"}
http://math.stackexchange.com/questions/246006/understanding-mathematical-induction-for-divisibility/246014
# Understanding mathematical induction for divisibility I'm on my quest to understand mathematical induction proofs (beginners). First, thanks to How to use mathematical induction with inequalities? I kinda understood better the procedure, and practiced it with Is this induction procedure correct? ($2^n<n!$). It didn't turn out so bad :) So far I've been doing equalities and inequalities. Alright. When things seemed to get better now I'm asked to prove divisibility. My book has an exercise with the solution, but there is a part I don't get well: With $n\ge1$ prove: $n(n+1)(n+2)$ is divisible by 6. Assume $$\exists k[n(n+1)(n+2) = 6k]$$ For the inductive step and using distribution: $$(n+1)(n+2)(n+3) = n(n+1)(n+2)+3(n+1)(n+2)$$ $$=6k+3\cdot2k'$$ $$=6(k+k') = 6k''$$ Alright. I see where is that $6k$ coming from, but what about the $3\cdot2k'$? How does the $3(n+1)(n+2)$ we had become that? That's it for the example I didn't understand well. But I also tried doing another exercise by myself (and didn't manage to do it): Prove, with $n\ge1$: $10^n+3\cdot4^{n+2}+5$ is divisible by $9$. First, I prove it for $n+1$: To do so we need to show that $\exists x[10^1+3\cdot4^{1+2}+5=9x]$. It holds, because $(10^1+3\cdot4^{1+2}+5) = (10+3\cdot16+5) = (15+48) = 63 = 9 \cdot 7$ Now we assume $$\exists x[10^n+3\cdot4^{n+2}+5=9x]$$ We need to prove it for $n+1$: $$10^{n+1}+3\cdot4^{n+3} = 10^n\cdot10+3\cdot4^n\cdot4^3$$ $$= 10^n\cdot10+3\cdot4^{n+2}\cdot4$$ $$=(9x-3\cdot4^{n+2}-5)\cdot10+(3\cdot4^{n+2}\cdot4)$$ By this point I don't even know what I'm doing. I though that I could use $10^n$ with the inductive assumption and replace it with $(9x-3\cdot4^{n+2}-5)$, similar to what the book did before. But now the situation looks worse. Similarly to this question How to use mathematical induction with inequalities?, I seek to understand mathematical induction when applied to divisibility cases this time. 
It seems (for me) that all these cases (equalities, inequalities and divisibility) do have important differences at the moment of solving. - Simply, either $n+1$ or $n+2$ is even, consequently, so are their product. – Berci Nov 27 '12 at 22:44 Compare to this question, which is the same problem, a little disguised math.stackexchange.com/questions/211121/… – Hendrik Jan Nov 27 '12 at 23:40 There is a simpler theorem of this type and that brings us the $2k'$: With $n\ge 1$ prove that $n(n+1)$ is amultiple of $2$. Remark: You should now be able to prove with yet another induction that $\frac{(n+k)!}{(n-1)!}=n(n+1)\cdots (n+k)$ is a multiple of $k!$. Your other attempt is fine so far. Note that $$(9x-3\cdot 4^{n+2}-5)\cdot 10+(3\cdot 4^{n+2}\cdot4) = 90x-18\cdot 4^{n+2}-50\\=9\cdot(10x-2\cdot 4^{n+2}-5)-5.$$ - Question: In your last expression, there is a $-5$ hanging outside. Why is that okay to have? I mean, I though that the objective was to end up with something like $9x$ but now we have $9x-5$. Doesn't that cause problems? – Zol Tun Kul Nov 27 '12 at 23:22 That's probably because you started with $10^{n+1}+3\cdot 4^{n+3}$ instead of $10^{n+1}+3\cdot 4^{n+3}+5$. Thus the correct expression is divisible by $9$. – Hagen von Eitzen Nov 27 '12 at 23:29 Oh my... Damn it. You're right. – Zol Tun Kul Nov 27 '12 at 23:31 Do you agree that $\,a(b+c)=ab+ac\,$? Well, this is just what happened here: $$(n+1)(n+2)(n+3)=[(n+1)(n+2)]\cdot n+[(n+1)(n+2)]\cdot 3$$ which is exactly what's written there. Added: As for the other exercise: $$10^{n+1}+3\cdot 4^{n+3}+5=10\cdot 10^n+4\cdot 3\cdot 4^{n+2}+5=$$ $$=\left(10^n+3\cdot 4^{n+2}+5\right)+9\cdot 10^n+3\cdot 3\cdot 4^{n+2}$$ and the first parentheses above is divisible by 9 by the inductive hypotheses, whereas the rest is obviously divisible by 9 as well! -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9002956748008728, "perplexity": 373.6072544084332}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049275181.24/warc/CC-MAIN-20160524002115-00009-ip-10-185-217-139.ec2.internal.warc.gz"}
http://codeforces.com/blog/entry/44408
Please subscribe to the official Codeforces channel in Telegram via the link: https://t.me/codeforces_official. × ### Alex_2oo8's blog By Alex_2oo8, 3 years ago, translation, , ## 664A - Complicated GCD Author of the idea — GlebsHP We examine two cases: 1. a = b — the segment consists of a single number, hence the answer is a. 2. a < b — we have gcd(a, a + 1, a + 2, ..., b) = gcd(gcd(a, a + 1), a + 2, ..., b) = gcd(1, a + 2, ..., b) = 1. Code ## 663A - Rebus Author of the idea — gen First we check whether any solution exists at all. For that purpose, we calculate the number of positive (the first one and any other with the  +  sign) and negative elements (with the  -  sign) in the sum. Let them be pos and neg, respectively. Then the minimum value of the sum that can be possibly obtained is equal to min = (1 · pos - n · neg), as each positive number can be 1, but all negative can be  - n. Similarly, the maximum possible value is equal to max = (n · pos - 1 · neg). The solution therefore exists if and only if min ≤ n ≤ max. Now suppose a solution exists. Let's insert the numbers into the sum one by one from left to right. Suppose that we have determined the numbers for some prefix of the expression with the sum of S. Let the sign of the current unknown be sgn ( + 1 or  - 1) and there are some unknown numbers left to the right, excluding the examined unknown, among them pos_left positive and neg_left negative elements. Suppose that the current unknown number takes value x. How do we find out whether this leads to a solution? The answer is: in the same way we checked it in the beginning of the solution. Examine the smallest and the largest values of the total sum that we can get. These are equal to min_left = (S + sgn · x + pos_left - n · neg_left) and max_left = (S + sgn · x + n · pos_left - neg_left), respectively. Then we may set the current number to x, if min_left ≤ n ≤ max_left holds. 
To find the value of x, we can solve a system of inequalities, but it is easier simply to check all possible values from 1 to n. BONUS Let k be the number of unknowns in the rebus. Prove that the complexity of the described solution (implementation shown below) is O(k2 + n), not O(k · n). Code Author of the idea — Alex_2oo8 Consider the abbreviations that are given to the first Olympiads. The first 10 Olympiads (from year 1989 to year 1998) receive one-digit abbreviations (IAO'9, IAO'0, ..., IAO'8). The next 100 Olympiads (1999 - 2098) obtain two-digit abbreviations, because all one-digit abbreviations are already taken, but the last two digits of 100 consecutive integers are pairwise different. Similarly, the next 1000 Olympiads get three-digit abbreviations and so on. Now examine the inversed problem (extract the year from an abbreviation). Let the abbreviation have k digits, then we know that all Olympiads with abbreviations of lengths (k - 1), (k - 2), ..., 1 have passed before this one. The number of such Olympiads is 10k - 1 + 10k - 2 + ... + 101 = F and the current Olympiad was one of the 10k of the following. Therefore this Olympiad was held in years between (1989 + F) and (1989 + F + 10k - 1). As this segment consists of exactly 10k consecutive natural numbers, it contains a single number with a k-digit suffix that matches the current abbreviation. It is also the corresponding year. Code ## 662B - Graph Coloring Author of the problem — gen Examine the two choices for the final color separately, and pick the best option afterwards. Now suppose we want to color the edges red. Each vertex should be recolored at most once, since choosing a vertex two times changes nothing (even if the moves are not consecutive). Thus we need to split the vertices into two sets S and T, the vertices that are recolored and the vertices that are not affected, respectively. Let u and v be two vertices connected by a red edge. 
Then for the color to remain red, both u and v should belong to the same set (either S or T). On the other hand, if u and v are connected by a blue edge, then exactly one of the vertices should be recolored. In that case u and v should belong to different sets (one to S and the other to T). This problem reduces to 0-1 graph coloring, which can be solved by either DFS or BFS. As the graph may be disconnected, we need to process the components separately. If any component does not have a 0-1 coloring, there is no solution. Otherwise we need to add the smallest of the two partite sets of the 0-1 coloring of this component to S, as we require S to be of minimum size. Code ## 662A - Gambling Nim Author of the idea — GlebsHP It is known that the first player loses if and only if the xor-sum of all numbers is 0. Therefore the problem essentially asks to calculate the number of ways to arrange the cards in such a fashion that the xor-sum of the numbers on the upper sides of the cards is equal to zero. Let and . Suppose that the cards with indices j1, j2, ..., jk are faced with numbers of type b and all the others with numbers of type a. Then the xor-sum of this arrangement is equal to , that is, . Hence we want to find the number of subsets ci with xor-sum of S. Note that we can replace c1 with , as applying c1 is the same as applying . Thus we can freely replace {c1, c2} with and c2 with . This means that we can apply the following procedure to simplify the set of ci: 1. Pick cf with the most significant bit set to one 2. Replace each ci with the bit in that position set to one to 3. Remove cf from the set 4. Repeat steps 1-5 with the remaining set 5. Add cf back to the set After this procedure we get a set that contains k zeros and n - k numbers with the property that the positions of the most significant bit set to one strictly decrease. How do we check now whether it is possible to obtain a subset with xor-sum S? 
As we have at most one number with a one in the most significant bit, then it tells us whether we should include that number in the subset or not. Similarly we apply the same argument for all other bits. If we don't obtain a subset with the xor-sum equal to S, then there is no such subset at all. If we do get a subset with xor-sum S, then the total number of such subsets is equal to 2k, as for each of the n - k non-zero numbers we already know whether it must be include in such a subset or not, but any subset of k zeros doesn't change the xor-sum. In this case the probability of the second player winning the game is equal to , so the first player wins with probability . Code ## 662C - Binary Table Author of the idea — Alex_2oo8 First let's examine a slow solution that works in O(2n · m). Since each row can be either inverted or not, the set of options of how we can invert the rows may be encoded in a bitmask of length n, an integer from 0 to (2n - 1), where the i-th bit is equal to 1 if and only if we invert the i-th row. Each column also represents a bitmask of length n (the bits correspond to the values of that row in each of the n rows). Let the bitmask of the i-th column be coli, and the bitmask of the inverted rows be mask. After inverting the rows the i-th column will become . Suppose that contains ones. Then we can obtain either k or (n - k) ones in this column, depending on whether we invert the i-th column itself. It follows that for a fixed bitmask mask the minimum possible number of ones that can be obtained is equal to . Now we want to calculate this sum faster than O(m). Note that we are not interested in the value of the mask itself, but only in the number of ones it contains (from 0 to n). Therefore we may group the columns by the value of . Let dp[k][mask] be the number of such i that , then for a fixed bitmask mask we can calculate the sum in O(n). 
Therefore, we are now able to count the values of dp[k][mask] in time O(2n · n3) using the following recurrence: This is still a tad slow, but we can speed it up to O(2n · n2), for example, in a following fashion: BONUS Are you able to come up with an even faster solution? Code ## 662E - To Hack or not to Hack Author of the idea — Alex_2oo8 Observation number one — as you are the only participant who is able to hack, the total score of any other participant cannot exceed 9000 (3 problems for 3000 points). Hence hacking at least 90 solutions automatically guarantees the first place (the hacks alone increase the score by 9000 points). Now we are left with the problem where the number of hacks we make is at most 90. We can try each of the 63 possible score assignments for the problems in the end of the round. As we know the final score for each problem, we can calculate the maximum number of hacks we are allowed to make so the problem gets the assigned score. This is also the exact amount of hacks we will make in that problem. As we know the number of hacks we will make, we can calculate our final total score. Now there are at most 90 participants who we can possibly hack. We are interested only in those who are on top of us. By hacking we want to make their final score less than that of us. This problem can by solved by means of dynamic programming: dp[p][i][j][k] — the maximum number of participants among the top p, whom we can push below us by hacking first problem i times, second problem j times and third problem k times. The recurrence: we pick a subset of solutions of the current participant that we will hack, and if after these hacks we will push him below us, we update the corresponding dp state. For example, if it is enough to hack the first and the third problems, then dp[p + 1][i + 1][j][k + 1] = max(dp[p + 1][i + 1][j][k + 1], dp[p][i][j][k] + 1) BONUS Can you solve this problem if each hack gives only 10 points, not 100? 
Code • • +50 • » 3 years ago, # |   +16 The new feature for showing code is really awesome :D » 3 years ago, # | ← Rev. 3 →   +13 In problem To Hack or not to Hack, won't the official approach be too slow? since If I understand correctly, the dp part take O(90^4*2^3), and the enumeration part takes O(6^3), which, when multiplied, is very largeupd: after some rethinking the dp part actually takes O(90*(triple(a,b,c):(a+b+c<90))*8), which the number of triple is around 125580, so it still looks pretty large. • » » 3 years ago, # ^ |   +20 More or less formal approximation of the complexity: 90 hacks is an actual worst case only for scoring 3000 - 3000 - 3000, so, if we have fixed the scoring to be, say, 1000 - 500 - 1500 and we are still allowed to make at least 30 hacks overall, the dynamic programming won't take any time at all, as we are only processing participants that are initially above us, but with 30 hacks and considered scoring we are already taking the first place. So, for scoring 1000 - 500 - 1500 we can approximate the maximal number of hacks on each problem as 10 - 5 - 15 respectively. Thus the total complexity can be approximated as 30 · (5 + 10 + 15 + 20 + 25 + 30)3 · 7 = 243'101'250, in worst case we will have 30 participants with all 3 problems hackable (this way we have an additional factor of 7 for subset of problems that we will hack).In practice, it is impossible to design a testcase, where we will have exactly the maximal number of hacks allowed for all of 63 potential scorings. I have just tried some nearly worst cases and the maximal number of DP transitions (taking the solution from the editorial) I was able to achieve is 104'533'689 on the following test case: 24 0 0 0 -1 -1 -1 -1 -1 -1 ... -1 -1 -1 The reference solution takes approximately 200 ms on this test case. » 3 years ago, # |   0 IS There any other approach for problem Rebus ? if you someone know he would help me a lot? • » » 3 years ago, # ^ | ← Rev. 
2 →   0 Correct me if I'm wrong. I solved this problem by using binary search. First, at the left side, let's say the number of positive integers is A and that of negative B. Then we can now say that the sum of positive integers(counted A) equals n+abs(negative intergers). All numbers should be -n~n(except 0). So the right side of my form should be in range [n+B*1 ~ n+B*n]. If we suppose one value V in that range, it can be split into A segments. Let's give value (int)V/A to the first of left side. After then, we have only value V-(int)V/A and (A-1) segments. By trying them, we can now figure out if it is possible. First condition : if V/A is larger than n, =>No Second condition : value returned 0 but still have segments => NO Third condition : Have no segments but sill have V > 0 =>No By this, we can adjust V.But my question is, I set V just like n+b, n+b*2, n+b*3, ...n+b*n but accepted. Can anyone prove or disprove it's ok or not? » 3 years ago, # |   +5 Can Someone Explain 662A - Азартный Ним.Not able to get the editorial » 3 years ago, # |   0 Hi !for the Bonus of problem 'Rubus' I think it can be proved that the code given in the solution works in O(N + K) and not O(n + k^2)correct me if I am wrong thanks ! » 3 years ago, # |   +3 I notice that in the tags for 'E. Binary Table', there are 'divide and conquer', and 'fft', would somebody please provide some explanations on that? Thanks in advance. • » » 18 months ago, # ^ | ← Rev. 3 →   0 In fact, it can be solved with an algorithm similar to FFT. It's FWHT — Fast Walsh-Hadamard Transform — an algorithm to calculate the XOR convolution: c[x] = Sum[y = 0 .. 
2^n-1] a[y] * b[x XOR y], in O(2^n * n) time.Let a[x] be the number of x appeared in the columns, b[x] = min(popcount(x), n-popcount(x)), then the answer for mask is ans[mask] = Sum[x = 0 to 2^n-1] a[x] * b[x XOR mask], which is exactly the XOR convolution and can be calculated with FWHT in O(2^n * n) time.Perhaps the one who added the tag couldn't find this algorithm and chose a similar one. • » » » 17 months ago, # ^ |   +10 Thanks for that! » 2 years ago, # |   0 any one pleas prove me the complex for problem E To Hack or not to hack
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8626760840415955, "perplexity": 436.67188502473215}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376830305.92/warc/CC-MAIN-20181219005231-20181219031231-00360.warc.gz"}
https://www.arxiv-vanity.com/papers/1306.3537/
# Anisotropic Fermi Contour of (001) GaAs Electrons in Parallel Magnetic Fields D. Kamburov    M. A. Mueed    M. Shayegan    L. N. Pfeiffer    K. W. West    K. W. Baldwin    J. J. D. Lee Department of Electrical Engineering, Princeton University, Princeton, New Jersey 08544, USA    R. Winkler Department of Physics, Northern Illinois University, DeKalb, Illinois 60115, USA Materials Science Division, Argonne National Laboratory, Argonne, Illinois 60439, USA December 9, 2020 ###### Abstract We demonstrate a severe Fermi contour anisotropy induced by the application of a parallel magnetic field to high-mobility electrons confined to a 30-nm-wide (001) GaAs quantum well. We study commensurability oscillations, namely geometrical resonances of the electron orbits with a unidirectional, surface-strain-induced, periodic potential modulation, to directly probe the size of the Fermi contours along and perpendicular to the parallel field. Their areas are obtained from the Shubnikov-de Haas oscillations. Our experimental data agree semi-quantitatively with the results of parameter-free calculations of the Fermi contours but there are significant discrepancies. ###### pacs: An isotropic two-dimensional (2D) carrier system is characterized by a circular Fermi contour. In such a system, the application of a small perpendicular magnetic field leads to circular quasi-classical cyclotron orbits. If the layer of charged carriers is purely 2D, i.e., has zero thickness, the application of a parallel magnetic field () would not affect the shape of its Fermi contour and the cyclotron trajectories would remain circular. However, if the layer has a finite (non-zero) thickness, couples to the carriers’ out-of-plane motion and distorts the Fermi contours and the cyclotron orbits Smrcka.JP.1993 ; Ohtsuka.PB.1998 ; Oto.PE.2001 . Understanding this -induced Fermi contour anisotropy is important for devices whose operation relies on ballistic transport Potok.PRL.2002 . 
The anisotropy also emerges in the context of magnetic breakdown and Fermi contour disintegration in bilayer systems Harff.PRB.1997 ; Jungwirth.PRB.1997 . Here we demonstrate the ability to tune and measure the -induced Fermi contour anisotropy of electrons confined to a 30-nm-wide GaAs quantum well. Using geometrical resonances of cyclotron orbits with a periodic superlattice, the so-called commensurability oscillations (COs) Weiss.EP.1989 ; Winkler.PRL.1989 ; Gerhardts.PRL.1989 ; Beenakker.PRL.1989 ; Beton.PRB.1990 ; Peeters.PRB.1992 ; Mirlin.PRB.1998 , we directly probe the resulting distortions of the Fermi contour and the ballistic electron trajectories. Measuring Shubnikov-de Haas (SdH) oscillations allows us to determine the evolution of the Fermi contour areas with . Our results show that the Fermi contour distortion is significant and leads to a contour anisotropy of for  T in our sample. This is much higher than the previously reported anisotropy in GaAs/AlGaAs heterojunctions Smrcka.JP.1993 ; Ohtsuka.PB.1998 ; Oto.PE.2001 and stems from the larger thickness of the electron layer in our sample. In contrast with the -induced anisotropy in hole samples Kamburov.PRB.2012 , the electron anisotropy appears to be spin-independent. Comparison of our data with the results of numerical calculations reveals generally good agreement, although there are also significant disagreements. Figure 1 captures the key points of our study. In Fig. 1(a) we show the results of parameter-free calculations of the Fermi contours, combining the Kane Hamiltonian RWinkler.book.2003 with spin-density functional theory Attaccalite.PRL.2002 to take into account the exchange-correlation of the quasi-2D electrons in our sample. At  T, the Fermi contours of the two spin-subbands are circular and essentially identical. With the application of along the direction, both contours become elongated in the direction while shrinking along . 
The areas enclosed by the two contours also differ from each other as electrons are transferred from the minority- to the majority-spin subbands. In our study we measure surface-strain-induced COs Skuras.APL.1997 ; Endo.PRB.2000 ; Endo.PRB.2001 ; Endo.PRB.2005 ; Kamburov.PRB._2012 ; Kamburov.PRB.2012 , triggered by a periodic density modulation [Figs. 1(b) and (c)] to directly map the Fermi wave vectors in two perpendicular directions, and . The magnetoresistance of the modulated sections of our Hall bars exhibits minima at the electrostatic commensurability condition where Weiss.EP.1989 ; Winkler.PRL.1989 ; Gerhardts.PRL.1989 ; Beenakker.PRL.1989 ; Beton.PRB.1990 ; Peeters.PRB.1992 ; Mirlin.PRB.1998 ; an example is shown in Fig. 2. Here is the real-space cyclotron diameter along the modulation direction and is the period of the potential modulation ( is the Fermi wave-vector perpendicular to the modulation direction) Gunawan.PRL.2004 . The anisotropy of the cyclotron diameter and the Fermi contour can therefore be quantified directly from COs measured along the two perpendicular arms of the L-shaped Hall bar in Fig. 1(c). The COs for the arms along and yield along and , respectively. In our measurements, we also recorded SdH oscillations in the unpatterned (reference) part of the Hall bar to probe the area enclosed by each of the Fermi contours. We prepared strain-induced superlattice samples with a lattice period of nm and 2D electrons confined to a 30-nm-wide GaAs quantum well grown via molecular beam epitaxy on a (001) GaAs substrate. The superlattice is made of negative electron-beam resist and modulates the 2D potential through the piezoelectric effect in GaAs Skuras.APL.1997 ; Endo.PRB.2000 ; Endo.PRB.2001 ; Endo.PRB.2005 ; Kamburov.PRB._2012 ; Kamburov.PRB.2012 . The quantum well, located 135 nm under the surface, is flanked on each side by 95-nm-thick AlGaAs spacer layers and Si -doped layers.
The 2D electron density at 0.3 K is cm, and the mobility is cm/Vs. We passed current along the two Hall bar arms of the sample [Fig. 1(b)] and measured the longitudinal resistances simultaneously along both arms. The measurements were carried out by first applying a fixed, large magnetic field in the plane of the sample along . We then slowly rotated the sample around the [] axis to introduce a small magnetic field () perpendicular to the 2D plane Tutuc.PRL.2001 ; footnote1 . This induced COs and SdH oscillations in our sample. The magnitude of was extracted from the Hall resistance we measured in the reference region of the sample simultaneously with the resistances of the two patterned regions. We performed all experiments using low-frequency ( Hz) lock-in techniques in a He cryostat with a base temperature of K. The magnetoresistance data from the two perpendicular Hall bar arms are shown in Figs. 3(a) and (b). In each panel the bottom traces, taken in the absence of , exhibit clear COs. The Fourier transform (FT) spectra of these two traces are shown as the bottom curves in Figs. 3(c) and (d). Each of the FT spectra exhibits one peak whose position ( T) agrees with the commensurability frequency  T expected for a circular, spin-degenerate Fermi contour with Weiss.EP.1989 ; Winkler.PRL.1989 ; Gerhardts.PRL.1989 ; Beenakker.PRL.1989 ; Beton.PRB.1990 ; Peeters.PRB.1992 ; Mirlin.PRB.1998 . With increasing , the peak in the FTs for the [110] Hall bar data [Fig. 3(c)] moves to higher frequencies. In sharp contrast, the peak in the direction [Fig. 3(d)] moves to smaller frequencies as increases. Figure 4 summarizes the measured as a function of , normalized to its value at . Similarly, we plot the extreme values of the Fermi wave vectors predicted by our parameter-free calculations using the Kane Hamiltonian RWinkler.book.2003 . We include results from calculations that treat the exchange-correlation energy differently.
The calculation (red curves) ignores exchange-correlation completely while the calculation (blue curves) uses spin-density functional theory Attaccalite.PRL.2002 to take into account exchange-correlation in the 2D electron system that is partially spin-polarized because of . The evolution of the COs’ FT peaks with increasing is qualitatively consistent with the calculated Fermi contours. The agreement is quantitatively good but for the case the elongation deduced from the experimental data is smaller than the calculations predict. This discrepancy implies that the shape of the Fermi contour is less elongated. We do not know the source of this disagreement at the moment. We note that we have encountered a similar disagreement in our study of hole Fermi contours Kamburov.PRB.2012 . Despite this discrepancy, however, the overall agreement between the measured and calculated values of is remarkable, considering that there are no adjustable parameters in the calculations. The results of Fig. 4 clearly point to a severe distortion of the Fermi contours and the associated real-space ballistic electron trajectories in the presence of a moderately strong . Both calculations show that the extreme sizes of the contours for the two spin species remain very similar, explaining why the COs’ FT peaks show no splitting footnote2 . The COs data in Figs. 3 and 4 probe the electron Fermi contours in two specific directions in -space but give no information about their areas. To probe the areas enclosed by the Fermi contours, we measured the SdH oscillations in the unpatterned region of the sample [ in Fig. 1(b)]. Figure 5(a) shows the magnetoresistance traces at different . Their corresponding FTs are shown in Fig. 5(b). Up to  T, the FT of each trace has two peaks. The position of the stronger peak is very close to the value of  T expected for spin-unresolved SdH oscillations of electrons of density cm. The weaker peak at 11.6 T corresponds to spin-resolved oscillations [ T].
Starting at  T, the spin-unresolved peak at 5.8 T splits, with the upper peak corresponding to the area (electron density) of the majority-spin-subband and the lower peak to the minority-spin-subband. Figure 5(c) summarizes, as a function of , the measured SdH frequencies () normalized to the frequency  T at , and the results of our energy band calculations. Overall, there is good qualitative agreement between the measured and calculated Fermi contour areas. Quantitatively, however, the experimental results fall between the calculated values with and . The differences between the two calculations are visualized in the inset of Fig. 5(c). When , the system is less spin-polarized and the areas enclosed by the Fermi contours of the two spin species are similar. When , more charge is transferred from the minority- to the majority-spin species. The experimental data and the numerical calculations presented here shed light on the shape of the electron Fermi contours in the presence of . The Fermi contour distortions implied by our data are by far larger than the distortions ( at  T) expected or seen for 2D electrons confined to GaAs/AlGaAs heterojunctions Smrcka.JP.1993 ; Ohtsuka.PB.1998 ; Oto.PE.2001 ; Potok.PRL.2002 . This is mainly because of the larger thickness of the electron wave function in our 30-nm-wide quantum well sample. However, we emphasize that, besides the finite thickness of the carrier layer, other factors, such as the non-parabolicity of the energy bands and the spin-orbit interaction, also affect the distortion. For example, in 2D holes confined to a much narrower 17.5-nm-wide quantum well, the distortions are yet larger than the ones reported here. At  T, the Fermi contour anisotropy there is Kamburov.PRB.2012 , while the distortion we see here (Fig. 4) is only .
Furthermore, in contrast to the data presented here, the Fermi contour anisotropy exhibited by holes is very much spin-dependent: the majority-spin contour is much more elongated than the minority-spin contour. This strong spin-dependence stems from the much stronger spin-orbit interaction in 2D hole systems RWinkler.book.2003 . Finally, from the measured extremal (Fig. 4), it appears that the Fermi contours are less elongated than the calculations predict. Remarkably, there is a similar discrepancy between the calculated and measured for 2D hole samples Kamburov.PRB.2012 . ###### Acknowledgements. We acknowledge support through the DOE BES (DE-FG02-00-ER45841) for measurements, and the Moore and Keck Foundations and the NSF (ECCS-1001719, DMR-1305691, and MRSEC DMR-0819860) for sample fabrication and characterization. A portion of this work was performed at the National High Magnetic Field Laboratory which is supported by National Science Foundation Cooperative Agreement No. DMR-1157490, the State of Florida and the US Department of Energy. Work at Argonne was supported by DOE BES under Contract No. DE-AC02-06CH11357. We thank S. Hannahs, T. Murphy, and A. Suslov at NHMFL for valuable technical support during the measurements. We also express gratitude to Tokoyama Corporation for supplying the negative e-beam resist TEBN-1 used to make the samples. ## References • (1) L. Smrcka and T. Jungwirth, J. Phys.: Condens. Matter 6, 55 (1993). • (2) K. Ohtsuka, S. Takaoka, K. Oto, K. Murase, and K. Gamo, Physica B 249, 780 (1998). • (3) K. Oto, S. Takaoka, K. Murase, and K. Gamo, Physica E 11, 177 (2001). • (4) See, e.g., R. M. Potok, J. A. Folk, C. M. Marcus, and V. Umansky, Phys. Rev. Lett. 89, 266602 (2002). In transverse magnetic focusing experiments reported in this work, an anomalous shift of the focusing peaks’ positions is seen in Fig. 1 when T. This shift, which is not discussed by Potok et al., stems from the distortion of the Fermi contour with . • (5) N. E. 
Harff, J. A. Simmons, J. F. Klem, G. S. Boebinger, L. N. Pfeiffer, and K. W. West, Superlattices and Microstructures 20, 595 (1996). • (6) T. Jungwirth, T. S. Lay, L. Smrcka, and M. Shayegan, Phys. Rev. B 56, 1029 (1997). • (7) D. Weiss, K. von Klitzing, K. Ploog, and G. Weimann, Europhys. Lett. 8, 179 (1989). • (8) R. W. Winkler, J.P. Kotthaus, and K. Ploog, Phys. Rev. Lett. 62, 1177 (1989). • (9) R. R. Gerhardts, D. Weiss, and K. von Klitzing, Phys. Rev. Lett. 62, 1173 (1989). • (10) C. W. J. Beenakker, Phys. Rev. Lett. 62, 2020 (1989). • (11) P. H. Beton, E. S. Alves, P. C. Main, L. Eaves, M. W. Dellow, M. Henini, O. H. Hughes, S. P. Beaumont, and C. D. W. Wilkinson, Phys. Rev. B 42, 9229 (1990). • (12) F. M. Peeters and P. Vasilopoulos, Phys. Rev. B 46, 4667 (1992). • (13) A. D. Mirlin and P. Wolfle, Phys. Rev. B 58, 12986 (1998). • (14) D. Kamburov, M. Shayegan, R. Winkler, L. N. Pfeiffer, K. W. West, and K. W. Baldwin, Phys. Rev. B 86, 241302 (2012). • (15) N.W. Ashcroft and N.D. Mermin, Solid State Physics (Holt, Rinehart and Winston, Philadelphia, 1976), Chapter 12. • (16) R. Winkler, Spin-Orbit Coupling Effects in Two-Dimensional Electron and Hole Systems (Springer, Berlin, 2003). • (17) C. Attaccalite, S. Moroni, P. Gori-Giori, and G. B. Bachelet, Phys. Rev. Lett. 88, 256601 (2002). • (18) E. Skuras, A. R. Long, I. A. Larkin, J. H. Davies, and M. C. Holland, Appl. Phys. Lett. 70, 871 (1997). • (19) A. Endo, S. Katsumoto, and Y. Iye, Phys. Rev. B 62, 16761 (2000). • (20) A. Endo, M. Kawamura, S. Katsumoto, and Y. Iye, Phys. Rev. B 63, 113310 (2001). • (21) A. Endo and Y. Iye, Phys. Rev. B 72, 235303 (2005). • (22) D. Kamburov, H. Shapourian, M. Shayegan, L. N. Pfeiffer, K. W. West, K. W. Baldwin, and R. Winkler, Phys. Rev. B 85, 121305(R) (2012). • (23) O. Gunawan, Y. P. Shkolnikov, E. P. De Poortere, E. Tutuc, and M. Shayegan, Phys. Rev. Lett. 93, 246603 (2004). • (24) E. Tutuc, E. P. De Poortere, S. J. Papadakis, and M. Shayegan, Phys. Rev. Lett. 
86, 2858 (2001). • (25) Note that, when the total applied field is large compared to , the parallel component of the field, , remains essentially fixed and equal to as we rotate the sample and take data. • (26) In Fig. 4, for the calculated values, we have plotted corresponding to the maximum diameter of the Fermi contour along (see the small, open-headed, horizontal arrow in Fig. 4 inset). The peanut-shaped Fermi contour shown in the inset also has a slightly smaller maximal diameter in the same direction, corresponding to the “neck” of the peanut. In principle, one might expect COs associated with this also, but in our data [Fig. 3(d)] we observe only one frequency for the COs.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8138467073440552, "perplexity": 2846.573162726262}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178360107.7/warc/CC-MAIN-20210228024418-20210228054418-00602.warc.gz"}
https://www.physicsforums.com/threads/falling-back-on-the-lebesgue-measure-from-the-abstract-theory.187246/
# Falling back on the Lebesgue measure from the abstract theory? 1. Sep 26, 2007 ### quasar987 I am studying the abstract theory of measure and I was wondering how the Lebesgue case for real functions of a real variable arises. But I did not find it. In the original theory of Lebesgue, a function f:E-->R was said to be measurable if for every real constant b, the preimage of $]-\infty, b]$ by f was measurable. Let the collection of all measurable sets be denoted $$\mathcal{L}_{\mathbb{R}}$$ (the Lebesgue sigma-algebra). The pair $$(\mathbb{R},\mathcal{L}_{\mathbb{R}})$$ is a measurable space. In the abstract theory, we consider a function f between two measurable spaces: $$f:(X_1,\mathcal{T}_1)\rightarrow (X_2,\mathcal{T}_2)$$ and say that it is measurable if, given a family of subsets of X_2 $G_2$ that generates the sigma-algebra $\mathcal{T}_2$ (i.e. $\mathcal{T}(G_2)=\mathcal{T}_2$), we have $$f^{-1}(G_2)\subset \mathcal{T}_1$$ If I set X_1 = E a subset of R and X_2 = R, I am trying to find which sigma-algebras $\mathcal{T}_1, \mathcal{T}_2$ will make Lebesgue's definition and the abstract definition coincide. Obviously, we must take $\mathcal{T}_2=\mathcal{T}(\{\,]-\infty,b]:b\in\mathbb{R}\})=\mathcal{B}_{\mathbb{R}}$ (the Borel sigma-algebra). Now, if I were allowed to take $\mathcal{T}_1=\mathcal{L}_{\mathbb{R}}$ I would have succeeded, but $\mathcal{L}_{\mathbb{R}}$ is not a sigma-algebra on E. The next best thing is the trace of $\mathcal{L}_{\mathbb{R}}$ on E (aka maybe the induced sigma-algebra on E by $\mathcal{L}_{\mathbb{R}}$) defined by $\mathcal{L}_E=\{E\cap M:M\in \mathcal{L}_{\mathbb{R}}\}$. But this does not seem to work. I need to check now that we have the equivalence (for all b in R, the preimage of $]-\infty, b]$ by f is in $\mathcal{L}_{\mathbb{R}}$) <==> (for all b in R the preimage of $]-\infty, b]$ by f is in $\mathcal{L}_E$).
The ==> part is trivial but I don't know how to prove the <== part, and actually, I would think that it is not necessarily true, for instance if E is not a part of $\mathcal{L}_{\mathbb{R}}$. Any ideas??? 2. Sep 26, 2007 ### quasar987 I just noticed that the trace of a sigma-algebra on a set E is only defined if E is itself an element of the sigma-algebra, so that the problem I raise concerning the <== part does not exist.
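quasar987's closing observation can be illustrated on a finite example: whenever $E$ is itself an element of a sigma-algebra $\mathcal{T}$, the trace $\{E\cap M : M\in\mathcal{T}\}$ really is a sigma-algebra on $E$. A small Python sketch (the ground set and generators are arbitrary choices for illustration; in the finite case, closure under complements and pairwise unions suffices):

```python
from itertools import combinations

def generated_sigma_algebra(X, gens):
    """Close a family of subsets of the finite set X under complement
    and (finite) union; for finite X this is the generated sigma-algebra."""
    fam = {frozenset(), frozenset(X)} | {frozenset(g) for g in gens}
    changed = True
    while changed:
        changed = False
        for A in list(fam):
            comp = frozenset(X) - A
            if comp not in fam:
                fam.add(comp)
                changed = True
        for A, B in combinations(list(fam), 2):
            if A | B not in fam:
                fam.add(A | B)
                changed = True
    return fam

X = {1, 2, 3, 4}
T = generated_sigma_algebra(X, [{1}, {1, 2}])   # atoms {1}, {2}, {3, 4}
E = frozenset({1, 2})                           # E is an element of T
trace = {E & M for M in T}

# The trace is a sigma-algebra on E: closed under complement in E and union.
assert all(E - A in trace for A in trace)
assert all(A | B in trace for A in trace for B in trace)
```

Here the trace on $E=\{1,2\}$ comes out as the full power set of $E$, since both $\{1\}$ and $\{2\}$ are cut out by elements of $\mathcal{T}$.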
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9695630669593811, "perplexity": 299.5081874322006}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718309.70/warc/CC-MAIN-20161020183838-00361-ip-10-171-6-4.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/110985/number-of-elements-in-projective-special-linear-group-over-mathbbz-n-mathbb
# Number of elements in projective special linear group over $\mathbb{Z}/n\mathbb{Z}$ While reading a paper about the modular group $\Gamma = PSL_2(\mathbb{Z})$, I came upon the following sentence ($\Gamma(N)$ is the kernel of the canonical map $PSL_2(\mathbb{Z}) \rightarrow PSL_2(\mathbb{Z}/N\mathbb{Z})$, which is a surjection): "It is well-known that the index of $\Gamma(N)$ in $\Gamma$ is $\frac{N^3}{2} \prod_{p|N}(1-\frac{1}{p^2})$ for $N \geq 3$." Now, I was wondering, how do you calculate this? I understand that basically, this means you have to calculate the number of elements in $GL_2(\mathbb{Z}/N\mathbb{Z})$ and then divide by 2 times the number of invertible elements of $\mathbb{Z}/N\mathbb{Z}$, but there I'm stuck. I know that for $N$ prime, this holds. - Use the Chinese Remainder Theorem to reduce to the case that $N$ is a prime power and then see what you can do from there. –  Qiaochu Yuan Feb 19 '12 at 17:27 I had the same idea, but I'm having trouble finding a connection between $GL_2(\mathbb{Z}/N\mathbb{Z})$ and $GL_2(\mathbb{Z}/p^n\mathbb{Z})$, with $p^n|N$. And I'm sorry to say, but I can't even calculate the number of elements in $GL_2(\mathbb{Z}/p^n \mathbb{Z})$. –  KevinDL Feb 19 '12 at 17:34 $\text{GL}_2(\mathbb{Z}/N\mathbb{Z})$ is the direct product of the groups $\text{GL}_2(\mathbb{Z}/p^n\mathbb{Z})$ where $p^n$ is the greatest power of $p$ that divides $N$. Again, this follows from CRT. –  Qiaochu Yuan Feb 19 '12 at 17:35 Okay, I think I see that, thanks. I still need help in the case of an arbitrary prime power. –  KevinDL Feb 19 '12 at 17:42 You need to count the number of $2 \times 2$ integer matrices with entries in $\{0,1, \dots p^{n}-1 \}$ whose determinants are not divisible by $p.$ –  Geoff Robinson Feb 19 '12 at 18:32
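The quoted index formula equals the order of $PSL_2(\mathbb{Z}/N\mathbb{Z})$, so it can be sanity-checked by brute force for small $N$: count the $2\times 2$ matrices mod $N$ with determinant $1$, and divide by $2$ (for $N \geq 3$, $-I \neq I$ in $SL_2(\mathbb{Z}/N\mathbb{Z})$). A small Python check (not from the original thread, just an illustration):

```python
from itertools import product

def sl2_order(n):
    """Brute-force |SL_2(Z/nZ)|: 2x2 matrices mod n with det = 1."""
    return sum(1 for a, b, c, d in product(range(n), repeat=4)
               if (a * d - b * c) % n == 1)

def index_formula(n):
    """N^3/2 * prod_{p | N} (1 - 1/p^2), computed exactly over the integers."""
    num, den = n**3, 2
    m, p = n, 2
    while p * p <= m:
        if m % p == 0:
            num *= p * p - 1
            den *= p * p
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:                      # leftover prime factor of n
        num *= m * m - 1
        den *= m * m
    assert num % den == 0
    return num // den

# |PSL_2(Z/NZ)| = |SL_2(Z/NZ)| / 2 for N >= 3
for n in range(3, 9):
    assert sl2_order(n) // 2 == index_formula(n)
```

For instance, $N = 6$ gives index $72 = 144/2$, consistent with the CRT decomposition $SL_2(\mathbb{Z}/6\mathbb{Z}) \cong SL_2(\mathbb{Z}/2\mathbb{Z}) \times SL_2(\mathbb{Z}/3\mathbb{Z})$ of orders $6$ and $24$.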
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8982285261154175, "perplexity": 101.772531511136}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246642037.57/warc/CC-MAIN-20150417045722-00285-ip-10-235-10-82.ec2.internal.warc.gz"}
http://mathoverflow.net/questions/151916/are-infinite-planar-graphs-still-4-colorable/151917
# Are infinite planar graphs still 4-colorable? Imagine you have a finite number of "sites" $S$ in the positive quadrant of the integer lattice $\mathbb{Z}^2$, and from each site $s \in S$, one connects $s$ to every lattice point to which it has a clear line of sight, in the sense used in my earlier question: No other lattice point lies along that line-of-sight. This creates a (highly) nonplanar graph; here $S=\{(0,0),(5,2),(3,7),(11,6)\}$: Now, for every pair of edges that properly cross in this graph, delete the longer edge, retaining the shorter edge. In the case of ties, give preference to the earlier site, in an initial sorting of the sites. The result is a planar graph, because all edge crossings have been removed: Q1. Is this graph $4$-colorable? Some nodes of this graph (at least those on the convex hull) have a (countably) infinite degree. More generally, Q2. Is every infinite planar graph $4$-colorable? Which types of "infinite planar graphs" are $4$-colorable? The context here is that I am considering a type of "lattice visibility Voronoi diagram." One can ask many specific questions of this structure, but I'll confine myself to the $4$-coloring question, which may have broader interest. - When you say four coloring, I think of political maps and coloring regions. Do you mean assigning colors to faces, edges or vertices? –  The Masked Avenger Dec 15 '13 at 2:25 I meant: 4-coloring the vertices so that no two adjacent vertices are assigned the same color. Sorry for the lack of clarity. –  Joseph O'Rourke Dec 15 '13 at 2:27 BTW the de Bruijn-Erdos theorem (see Johnston's answer) reduces the other famous 4-color conjecture to the finite case, i.e., when we join 2 points of the plane iff their distance is exactly 1, the graph so obtained is 4-chromatic. –  Péter Komjáth Dec 15 '13 at 15:55 The answer to both questions is "yes", by the De Bruijn–Erdős theorem. - Thanks, Nathaniel! 
Does this still hold even if the number of nodes is uncountable, or the degree of a node is uncountable? (Obviously conditions beyond my particular graph.) –  Joseph O'Rourke Dec 15 '13 at 2:28 @JosephO'Rourke Yes, this holds regardless of the size of the graph. The De Bruijn–Erdős theorem is a particular instance of what in combinatorics we call a compactness argument or Rado's selection principle, and its truth can be seen as a consequence of the topological compactness of (arbitrary) products of finite spaces. In the countable case, a standard argument invokes König's lemma, an idea very useful in Ramsey theory. In the uncountable case, the argument can be recast as a consequence of the compactness of propositional logic. –  Andres Caicedo Dec 15 '13 at 4:30 Do the answers remain "yes" in ZF? –  Timothy Chow Dec 16 '13 at 1:33 @TimothyChow No, the answer changes. I'm expanding this into an answer. –  Andres Caicedo Dec 16 '13 at 2:03 @TimothyChow For Q1, it remains "yes", there is an explicit coloring. I expanded this into an answer. –  Jan Kyncl Dec 22 '13 at 19:16 As the other answer indicates, the yes answer to your question is known as the De Bruijn-Erdős theorem. This holds regardless of the size of the graph. The De Bruijn–Erdős theorem is a particular instance of what in combinatorics we call a compactness argument or Rado's selection principle, and its truth can be seen as a consequence of the topological compactness of (arbitrary) products of finite spaces. In the countable case, a standard argument invokes König's lemma, an idea very useful in Ramsey theory (see here for an example). In the uncountable case, the argument can be recast as a consequence of the compactness of propositional logic (see here for an example of how propositional compactness is used in these arguments). The answer to whether infinite graphs have the same chromatic number as their (large) finite subgraphs changes if we omit choice. 
For example, Shelah and Soifer considered the graph $G=(\mathbb R^2,E)$, where $s\mathrel{E}t$ for $s,t\in\mathbb R^2$, iff $$s-t-\eta\in\mathbb Q^2$$ where $$\eta\in\{(\sqrt2,0),(0,\sqrt2),(\sqrt2,\sqrt2),(-\sqrt2,\sqrt2)\}.$$ They proved in Saharon Shelah and Alexander Soifer. Chromatic number of the plane & its relatives. III: Its Future. Geombinatorics, XIII (1), (2003), 41–46. that the chromatic number of $G$ is $4$ under choice, and uncountable if all sets of reals are Lebesgue measurable. This is treated in detail in Soifer's book, Alexander Soifer. The Mathematical Coloring Book, Mathematics of Coloring and the Colorful Life of Its Creators. Springer, New York, 2009. Closely related, Falconer proved in Kenneth Falconer. The realization of distances in measurable subsets covering $\mathbb R^n$, Journal of Combinatorial Theory Series A, 31 (2), (1981), 184–189 that the chromatic number of the plane (the least number of colors needed so any two points at distance one from each other have distinct colors) is at least $5$ if we require each color to be measurable. On the other hand, it is a famous open problem to determine the chromatic number of the plane (assuming choice). What we currently know is that the chromatic number is between $4$ and $7$, and that the distance-$1$ graph of any $12$ points in $\mathbb R^2$ is $4$-colorable, see Dan Pritikin. All unit-distance graphs of order $6197$ are $6$-colorable. Journal of Combinatorial Theory Series B, 73 (2), (1998), 159–163. - Andres, I was aware of the Shelah-Soifer result when I posted my question about ZF, but unless I'm missing something, this doesn't directly answer the question about whether the answers to Q1 and Q2 are still "yes" in ZF. The unit-distance graph of the plane isn't planar. –  Timothy Chow Dec 16 '13 at 3:43 @TimothyChow Ah, yes. Good point, thanks. I do not know for planar graphs. All the examples I have end up not being planar. I'll have to think about this.
–  Andres Caicedo Dec 16 '13 at 4:01 Doesn't the result about measurable sets imply that, assuming choice, the chromatic number is at most 5? –  Will Sawin Dec 22 '13 at 15:27 @WillSawin Ah, that was a typo. Thanks. –  Andres Caicedo Dec 22 '13 at 16:38 Regarding Q1: The graph is a subgraph of the visibility graph of the integer lattice. Every sublattice $x+2\mathbb{Z} \times 2\mathbb{Z}$ is an independent set in the visibility graph, and the integer lattice can be decomposed into four such sublattices (according to the parity of coordinates). This gives a proper $4$-coloring. -
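Jan Kyncl's parity coloring can be checked mechanically: two distinct lattice points are mutually visible exactly when their coordinate differences have gcd $1$, so the differences are never both even, and the four parity classes $(x \bmod 2, y \bmod 2)$ form independent sets. A small Python verification on a finite patch of the lattice (an illustration, not part of the original answers):

```python
from math import gcd
from itertools import product

def visible(p, q):
    """Distinct lattice points see each other iff no lattice point lies
    strictly between them, i.e. gcd(|dx|, |dy|) == 1."""
    return gcd(abs(q[0] - p[0]), abs(q[1] - p[1])) == 1

def color(p):
    """Parity class of a lattice point: one of four colors."""
    return (p[0] % 2, p[1] % 2)

# Every visible pair in a 10x10 patch gets two different colors,
# so the parity classes properly 4-color the visibility graph.
pts = list(product(range(10), repeat=2))
for p in pts:
    for q in pts:
        if p != q and visible(p, q):
            assert color(p) != color(q)
```

Since the graph in Q1 is a subgraph of this visibility graph, the same four colors work for it directly, with no appeal to compactness.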
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8573324680328369, "perplexity": 456.20323871319573}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447548622.138/warc/CC-MAIN-20141224185908-00084-ip-10-231-17-201.ec2.internal.warc.gz"}
https://www.assignmentexpert.com/homework-answers/mathematics/geometry/question-5791
# Answer to Question #5791 in Geometry for maria prettymaths Question #5791 Expert sir please explain integration by parts rules. Integration by parts is a heuristic rather than a purely mechanical process for solving integrals; given a single function to integrate, the typical strategy is to carefully separate it into a product of two functions ƒ(x)g(x) such that the integral produced by the integration by parts formula is easier to evaluate than the original one. The following form is useful in illustrating the best strategy to take: ∫ f·g dx = f·∫g dx − ∫(f′·∫g dx) dx. Note that on the right-hand side, ƒ is differentiated and g is integrated; consequently it is useful to choose ƒ as a function that simplifies when differentiated, and/or to choose g as a function that simplifies when integrated. As a simple example, consider ∫ ln(x)/x² dx. Since the derivative of ln x is 1/x, we make this part of ƒ; since the anti-derivative of 1/x² is −1/x, we make this part of g. The formula now yields: ∫ ln(x)/x² dx = −ln(x)/x − ∫ (1/x)(−1/x) dx. The remaining integral of −1/x² can be completed with the power rule and is 1/x.
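As a numeric sanity check (not part of the original answer), the antiderivative found above, F(x) = −ln(x)/x − 1/x, should match a direct numerical integration of ln(x)/x² over any interval, here [1, e] with the midpoint rule:

```python
import math

def F(x):
    """Antiderivative of ln(x)/x**2 obtained by parts: -ln(x)/x - 1/x."""
    return -math.log(x) / x - 1 / x

# Composite midpoint rule for the integrand on [1, e].
a, b, n = 1.0, math.e, 10000
h = (b - a) / n
mids = (a + (i + 0.5) * h for i in range(n))
total = h * sum(math.log(x) / x**2 for x in mids)

# Fundamental theorem of calculus: the two answers agree.
assert abs(total - (F(b) - F(a))) < 1e-6
```

The exact value here is F(e) − F(1) = 1 − 2/e, so the check confirms the sign bookkeeping in the worked example.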
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9086570143699646, "perplexity": 1396.9968303382461}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867220.74/warc/CC-MAIN-20180525215613-20180525235613-00150.warc.gz"}
http://mathhelpforum.com/advanced-applied-math/79900-mechanics-moments-question.html
# Thread: Mechanics moments question =)

1. ## Mechanics moments question =)

Basically I get the answer but I don't understand it, lol: A mirror of mass 24 kg is supported by two screws 0.8 m apart. One of the screws breaks, so someone applies a force F to it, 1.2 m below the screws, to keep it in the same position. Find the force required. I'll draw a picture =) Oh, and there's a mass on here which I forgot to draw, sorry. It acts from the centre with force 24g.

Ok, so my working went like this: 1.2F = 0.4 x 24g, and I got the correct answer of 78.4 N. Then I managed to confuse myself. I took moments at the broken screw... but why is one distance taken horizontal (0.4, half of 0.8) whilst the other is vertical (1.2)? Shouldn't you take both vertical or both horizontal, i.e. (1.2 and 0.6) or (0.8 and 0.4)? Thanks =) Hopefully I explained that ok.

2. You are taking moments about the un-broken screw. The moment is the perpendicular distance between the line of action of the force and the point about which you are taking moments, times the force (also taking account of the sense of the moment). CB

3. Oooooo! I'm so stupid! Thank you very much!
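The moment balance from the thread can be checked in a few lines (values as given above; g = 9.8 m/s² is the value implied by the answer 78.4 N):

```python
g = 9.8                # m/s^2, the value used in the thread
mass = 24.0            # kg
weight_arm = 0.8 / 2   # horizontal distance from the pivot screw to the weight, m
force_arm = 1.2        # vertical distance from the screws to the applied force F, m

# Taking moments about the unbroken screw: F * 1.2 = 24g * 0.4
F = mass * g * weight_arm / force_arm
print(round(F, 1))  # 78.4
```

Each moment uses the perpendicular distance from the pivot to that force's line of action, which is why the weight uses the horizontal 0.4 m while the horizontal force F uses the vertical 1.2 m.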
http://mathhelpforum.com/advanced-algebra/106657-group-theory-question-involving-isomorphism-cyclic-groups.html
# Math Help - A Group Theory question involving isomorphism and cyclic groups

1. ## A Group Theory question involving isomorphism and cyclic groups

Show that if G is a group of order 4, then either G is isomorphic to the cyclic group Z4 of order 4, or x² = 1 for all x in G.

I have got this far... For two groups to be isomorphic they must have the same order, be cyclic and be abelian.
Case 1: G is cyclic (and abelian), therefore isomorphic to the cyclic group Z4.
Case 2: G is not cyclic (not abelian)... therefore x² = 1 for all x in G.
I don't understand the connection between the group not being cyclic or abelian and the condition x² = 1. Please help, thanks.

2. If G is cyclic => G is isomorphic to Z4.
If G is not cyclic => no element has order 4 => every element has order 1 or 2 (by Lagrange's theorem element orders divide 4, and with 4 excluded only 1 and 2 remain), hence x² = 1 for all x in G.
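The dichotomy can be verified by brute force on the two groups of order 4 (a small illustration I am adding, representing Z4 as integers mod 4 and the non-cyclic case as the Klein four-group Z2 x Z2):

```python
from itertools import product

# Z4 under addition mod 4 (cyclic)
z4 = list(range(4))
def z4_op(a, b):
    return (a + b) % 4

# Z2 x Z2, the Klein four-group (the non-cyclic group of order 4)
klein = list(product([0, 1], repeat=2))
def klein_op(a, b):
    return ((a[0] + b[0]) % 2, (a[1] + b[1]) % 2)

def order(x, op, identity):
    # order of x: smallest k >= 1 with x composed with itself k times = identity
    k, y = 1, x
    while y != identity:
        y = op(y, x)
        k += 1
    return k

print(any(order(x, z4_op, 0) == 4 for x in z4))      # Z4 has an element of order 4
print(all(klein_op(x, x) == (0, 0) for x in klein))  # in the non-cyclic group, x^2 = 1 for all x
```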
http://www.statemaster.com/encyclopedia/Neutralino
Encyclopedia > Neutralino

In particle physics, the neutralino is a hypothetical particle and part of the doubling of the menagerie of particles predicted by supersymmetric theories. Since the superpartners of the Z boson (zino), the photon (photino) and the neutral Higgs (higgsino) have the same quantum numbers, they mix to form eigenstates of the mass operator, called neutralinos. The lightest neutralino is thought to be stable unless the gravitino is lighter or R-parity is not conserved. Virtually undetectable, it participates only in weak and gravitational interactions. If the neutralino is stable it will always escape a particle detector. Therefore its production inside the detector, through a known initial state (e.g. proton-proton collision), should be accompanied by large missing energy and momentum from the visible final-state particles. This is an important signature to discriminate it from Standard Model backgrounds.

Of the weakly interacting massive particles (WIMPs) under consideration, a lightest neutralino of 30-5000 GeV is the leading candidate for cold dark matter. The exact behavior of this particle will depend on the dominant constituent: the higgsino, the photino or the zino.

The standard symbol for neutralinos is $\tilde{\chi}^0_i$, where i runs from 1 to 4.
https://www.statisticsviews.com/details/journalArticle/10771995/Error-bounds-and-asymptotic-expansions-for-toeplitz-product-functionals-of-unbou.html
# Journal of Time Series Analysis

## Error bounds and asymptotic expansions for Toeplitz product functionals of unbounded spectra

### Journal Article

Abstract. This paper establishes error orders for integral limit approximations to traces of powers (to the pth order) of products of Toeplitz matrices. Such products arise frequently in the analysis of stationary time series and in the development of asymptotic expansions. The elements of the matrices are Fourier transforms of functions which we allow to be bounded, unbounded, or even to vanish on [−π, π], thereby including important cases such as the spectral functions of fractional processes. Error rates are also given in the case in which the matrix product involves inverse matrices. The rates are sharp up to an arbitrarily small ɛ > 0. The results improve on the o(1) rates obtained in earlier work for analogous products. For the p = 1 case, an explicit second-order asymptotic expansion is found for a quadratic functional of the autocovariance sequences of stationary long-memory time series. The order of magnitude of the second term in this expansion is shown to depend on the long-memory parameters. It is demonstrated that the pole in the first-order approximation is removed by the second-order term, which provides a substantially improved approximation to the original functional.
http://www.iapjournals.ac.cn/aas/en/article/doi/10.1007/s00376-021-0352-3
Impact Factor: 3.158

# Correlation Structures between Satellite All-Sky Infrared Brightness Temperatures and the Atmospheric State at Storm Scales

doi: 10.1007/s00376-021-0352-3

This study explores the structures of the correlations between infrared (IR) brightness temperatures (BTs) from the three water vapor channels of the Advanced Baseline Imager (ABI) onboard the GOES-16 satellite and the atmospheric state. Ensemble-based data assimilation techniques such as the ensemble Kalman filter (EnKF) rely on correlations to propagate innovations of BTs to increments of model state variables. Because the three water vapor channels are sensitive to moisture in different layers of the troposphere, the heights of the strongest correlations between these channels and moisture in clear-sky regions are closely related to the peaks of their respective weighting functions. In cloudy regions, the strongest correlations appear at the cloud tops of deep clouds, and ice hydrometeors generally have stronger correlations with BT than liquid hydrometeors. The magnitudes of the correlations decrease from the peak value in a column with both vertical and horizontal distance. Just how the correlations decrease depends on both the cloud scenes and the cloud structures, as well as the model variables. Horizontal correlations between BTs and moisture, as well as hydrometeors, in fully cloudy regions decrease to almost 0 at about 30 km. The horizontal correlations with atmospheric state variables in clear-sky regions are broader, maintaining non-zero values out to ~100 km. The results in this study provide information on the proper choice of cut-off radii in horizontal and vertical localization schemes for the assimilation of BTs. They also provide insights on the most efficient and effective use of the different water vapor channels.
• Figure 1.  (a) Simulated ABI channel 14 BTs obtained by applying the CRTM to the ensemble mean fields, (b) ensemble mean CTP, (c) pressures of lowest altitude cloud tops of any ensemble member, (d) pressures of highest altitude cloud tops of any ensemble member, (e) standard deviation of CTPs, and (f) probability over all members of cloud somewhere in the vertical column of the EnKF prior of the CH10 experiment at 2040 UTC 12 June 2017. We did not include the observations in the figure because the EnKF analysis cloud-affected radiances are almost identical to them. Figure 2.  Time averages of the vertical correlations between the BTs and (a) U, (b) V, (c) T, and (d) Qv for clear-sky scenes. Figure 3.  (a), (c), (e) Normalized averaged weighting functions and (b), (d), (f) PDFs of the peaks of the weighting functions of all the ensemble members for the three water vapor channels of ABI in (a), (b) clear-sky scenes, (c), (d) high altitude cloud regions, and (e), (f) low altitude cloud regions. Figure 4.  Time averages of the PDFs of the correlations between the BTs and the clear-sky scene BTs at vertical levels of (a) 300 hPa, (b) 500 hPa, and (c) 700 hPa. The narrow peaks in the PDF close to correlation values of −1, especially noticeable in (a) and (b), are the result of a deleterious impact of the boundary conditions along the southwestern corner of the model domain. Figure 5.  
Time averages of the vertical correlations between the BTs and (a) U, (b) V, (c) T, (d) Qv, (e) Qc, (f) Qi, (g) Qr, (h) Qs, and (i) Qg for the high and low altitude cloud regions. The PDF of CTP for all member columns within either of the two cloud structure regions is illustrated by the same black line in all subpanels. Figure 6.  Time averages of the vertical correlations between the BTs and (a) Qv, (b) Qc, (c) Qi, (d) Qr, (e) Qs, and (f) Qg for partly cloudy scenes and mixed altitude cloud regions. The PDFs of CTP across all members are also illustrated for the partly cloudy scenes (dashed black lines) and the mixed altitude cloud regions (solid black lines). Figure 7.  Time averages of the layer-mean correlations between the BTs and Qv with respect to horizontal distance for vertical levels from 100 hPa to 800 hPa in 50 hPa steps. The time averages are over all 21 EnKF data assimilation cycles for (a), (d) channel 8, (b), (e) channel 9, and (c), (f) channel 10 for (a), (b), (c) the clear-sky scenes, and (d), (e), (f) the fully cloudy scenes. Figure 8.  Time averages of the layer-mean correlations between the BTs and Qv with respect to horizontal distance. The time averages are over all 21 EnKF data assimilation cycles at 200 hPa, 300 hPa, and 450 hPa for (a), (d) channel 8, (b), (e) channel 9, and (c), (f) channel 10 for (a), (b), (c) the clear-sky scenes, and (d), (e), (f) the fully cloudy scenes. Correlations for horizontal distances longer than 300 km are omitted due to very small sample sizes of model grids at these distances. Figure 9.  Averages over all 21 EnKF data assimilation cycles and all vertical levels of the absolute values of the layer-mean correlations between the BTs and Qv with respect to horizontal distance for (a), (b) the clear-sky scenes, and (c), (d) the fully cloudy scenes for the (a), (c) CH10 and (b), (d) CH10-5KM EnKF experiments. 
Correlations for horizontal distances longer than 300 km are omitted due to very small sample sizes of model grids at these distances. Figure 10.  Averages over all 21 EnKF data assimilation cycles and all vertical levels of the absolute values of the layer-mean correlations between the BTs and hydrometeors with respect to horizontal distance for (a) channel 8, (b) channel 9, and (c) channel 10 for the fully cloudy scenes. Correlations for horizontal distances longer than 300 km are omitted due to very small sample sizes of model grids at these distances. Figure 11.  Similar to Fig. 7, but for correlations between the BTs and T. Figure 12.  Averages over all 21 EnKF data assimilation cycles and all vertical levels of the absolute values of the layer-mean correlations between the BTs and T, U, and V with respect to horizontal distance for (a), (d) channel 8, (b), (e) channel 9, and (c), (f) channel 10 for (a), (b), (c) clear-sky scenes and (d), (e), (f) fully cloudy scenes. Correlations for horizontal distances longer than 300 km are omitted due to very small sample sizes of model grids at these distances.

## Manuscript History

Manuscript received: 17 October 2020
Manuscript revised: 03 December 2020
Manuscript accepted: 06 January 2021

Corresponding author: Yunji ZHANG, [email protected]
• Center for Advanced Data Assimilation and Predictability Techniques, and Department of Meteorology and Atmospheric Science, The Pennsylvania State University, University Park, Pennsylvania 16802, USA
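In practice, the cut-off radii suggested by these correlation structures are applied through a tapering function. As a rough illustration (my own sketch of a standard approach, not code from the paper, and the 15 km half-width is an illustrative choice matching the ~30 km decorrelation scale reported for cloudy scenes), the widely used Gaspari-Cohn fifth-order polynomial gives a weight that falls smoothly from 1 to exactly 0 at twice its half-width c:

```python
def gaspari_cohn(r, c):
    # Fifth-order piecewise-polynomial localization weight (Gaspari-Cohn).
    # r: separation distance, c: half-width; the weight is exactly 0 beyond 2c.
    x = abs(r) / c
    if x <= 1.0:
        return (((-0.25 * x + 0.5) * x + 0.625) * x - 5.0 / 3.0) * x ** 2 + 1.0
    if x <= 2.0:
        return ((((x / 12.0 - 0.5) * x + 0.625) * x + 5.0 / 3.0) * x - 5.0) * x \
               + 4.0 - 2.0 / (3.0 * x)
    return 0.0

# A ~30 km decorrelation scale corresponds to a half-width of about 15 km:
for r_km in (0.0, 10.0, 20.0, 30.0):
    print(r_km, gaspari_cohn(r_km, 15.0))
```

In an EnKF, this weight multiplies the ensemble-estimated covariance between an observation and each model grid point, damping the spurious long-range correlations that small ensembles produce.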
http://physics.stackexchange.com/questions/64917/how-many-measurements-should-be-done/64918
# How many measurements should be done? [closed]

I am measuring the time of a computer operation. The operation should take roughly the same time each time I measure it. How many times should I measure it to get a good average and standard deviation?

Reposted at stats.stackexchange.com/questions/59318/… –  Bogdan0x400 May 17 at 17:04
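One common way to answer this kind of question (offered here as a sketch, not as the thread's accepted answer): take a small pilot run, estimate the standard deviation from it, and then size the full experiment so the confidence interval on the mean is as tight as you need:

```python
import math
import statistics

def required_n(pilot_times, margin, z=1.96):
    # Number of measurements needed so that the ~95% confidence half-width
    # z * s / sqrt(n) on the mean is at most `margin`, where s is the
    # sample standard deviation estimated from a pilot run.
    s = statistics.stdev(pilot_times)
    return math.ceil((z * s / margin) ** 2)

# hypothetical pilot timings in milliseconds (made-up values)
pilot = [10.2, 9.8, 10.5, 10.1, 9.9, 10.3]
print(required_n(pilot, margin=0.1))
```

This assumes roughly independent, identically distributed timings; real benchmark timings are often skewed by warm-up and background load, so the pilot should be taken under the same conditions as the full run.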
https://learn.careers360.com/ncert/ncert-solutions-class-8-maths-chapter-13-direct-and-inverse-proportions/
# NCERT Solutions for Class 8 Maths Chapter 13 - Direct and Inverse Proportions

## NCERT Solutions for Class 8 Maths Chapter 13 Direct and Inverse Proportions

In this chapter, we learn how to deal with problems based on direct and inverse proportions. The NCERT Solutions for Class 8 Maths Direct and Inverse Proportions are prepared and explained by experts. Chapter 13 Direct and Inverse Proportions contains 2 exercises with 21 questions. The first exercise covers questions on direct proportion, and the second covers questions on inverse proportion. In our daily life, we come across many situations where a change in one quantity results in a change in another quantity. In mathematics, two quantities are said to be proportional if their values are related in such a way that a change in one results in a corresponding change in the other. For a better understanding of this chapter, NCERT provides solved examples, exercises, and daily-life activities at the end of every concept. In NCERT Class 8 Maths Chapter 13 Direct and Inverse Proportions we learn about two types of proportion, direct proportion and inverse proportion, and study their properties and applications. A brief description of both follows.

## What is Direct Proportion?

Two values are directly proportional if an increase/decrease in one value results in a corresponding increase/decrease in the other value. Let's take an example:

Example: Sunil goes to the stationery shop to buy two pencils. The two pencils cost him Rs. 4. Next week Sunil again goes to the stationery shop, this time to buy 6 pencils. The six pencils cost him Rs. 12. In this situation, the increase in the number of pencils causes a corresponding increase in the total cost. It means the number of pencils and the total cost are directly proportional.

## What is Inverse Proportion?
Two values are inversely proportional if an increase in one value results in a corresponding decrease in the other value. Example: if the distance is fixed, speed and time are inversely proportional to each other.

$Speed=\frac{Distance}{Time}$

$Speed \propto \frac{1}{Time}$        (if the distance is fixed)

## Topics of NCERT Class 8 Maths Chapter 13 Direct and Inverse Proportions

• 13.1 Introduction
• 13.2 Direct Proportion
• 13.3 Inverse Proportion

## NCERT Solutions For Class 8 Maths: Chapter-wise

Chapter 1 · Chapter 2 Linear Equations in One Variable · Chapter 3 Understanding Quadrilaterals · Chapter 4 Practical Geometry · Chapter 5 Data Handling · Chapter 6 Squares and Square Roots · Chapter 7 Cubes and Cube Roots · Chapter 8 Comparing Quantities · Chapter 9 Algebraic Expressions and Identities · Chapter 10 Visualizing Solid Shapes · Chapter 11 Mensuration · Chapter 12 Exponents and Powers · Chapter 14 Factorization · Chapter 15 Introduction to Graphs · Chapter 16 Playing with Numbers
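Both kinds of proportion can be illustrated in a few lines (the pencil numbers come from the example above; the 120 km distance is a made-up value for illustration):

```python
# Direct proportion: cost / number of pencils stays constant.
cost_2 = 4                 # Rs. for 2 pencils (from the example)
unit_price = cost_2 / 2    # Rs. per pencil
cost_6 = unit_price * 6
print(cost_6)              # matches the Rs. 12 in the example

# Inverse proportion: for a fixed distance, speed * time stays constant.
distance = 120             # km, an assumed value
time_at_30 = distance / 30 # hours at 30 km/h
time_at_60 = distance / 60 # hours at 60 km/h
print(time_at_30, time_at_60)  # doubling the speed halves the time
```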
http://mathhelpforum.com/differential-geometry/195281-limsup-liminf-simply-explained.html
# Math Help - Limsup, liminf simply explained

1. ## Limsup, liminf simply explained

I don't really understand these concepts or why they are important. Can someone explain them to me? Thanks!

2. ## Re: Limsup, liminf simply explained

There are a few equivalent ways to define these concepts. Let $\{a_n\}_{n=1}^{\infty}$ be a sequence of real numbers. For each positive integer n, define the number $b_n := \sup \{a_k : k \ge n\}$. Then we define $\limsup_{n \to \infty} a_n := \lim_{n \to \infty} b_n$. There are a few immediate consequences of this definition:

1. If $m \le n$ then $\{a_k : k \ge n\} \subset \{a_k : k \ge m\}$. So $b_m \ge b_n$, hence the sequence $\{b_n\}_{n=1}^{\infty}$ is a decreasing sequence. So $\lim_{n \to \infty} b_n$ always exists, since a monotonic sequence of real numbers always has a limit in the extended reals. Similarly, we can define $c_n := \inf \{a_k : k \ge n\}$ and define $\liminf_{n \to \infty} a_n := \lim_{n \to \infty} c_n$. Then a similar analysis shows that the sequence $\{c_n\}_{n=1}^{\infty}$ is a monotonically increasing sequence, and so its limit always exists in the extended reals.

2. $\lim_{n \to \infty} a_n$ exists if and only if $\limsup_{n \to \infty} a_n = \liminf_{n \to \infty} a_n$, in which case the limit is the same.

3. $\liminf_{n \to \infty} a_n \le \limsup_{n \to \infty} a_n$.

4. If $B = \limsup_{n \to \infty} a_n$ and $C = \liminf_{n \to \infty} a_n$, then for every $\epsilon > 0$ we have $a_n > B + \epsilon$ for only finitely many n, and $a_n < C - \epsilon$ for only finitely many n.

5. There exist subsequences of $\{a_n\}_{n=1}^{\infty}$ which converge to B and C.

Now, why do we care? For starters, the limsup and liminf always exist. Compare this with the limit, which need not exist for any particular sequence. For example, the sequence $a_n = (-1)^n$ has no limit. The limsup and liminf provide tools which can be used to establish some properties for the sequence.
An elementary example of this is the root test from the analysis of infinite series.

Root test: Given a sequence $\{a_n\}_{n=1}^{\infty}$ of real numbers, set

$A := \limsup_{n \to \infty} (|a_n|)^{\frac{1}{n}}$

Then:

- If A < 1, the series $\sum a_n$ converges.
- If A > 1, the series diverges.
- If A = 1, there is no conclusion.

Note that in the statement of the theorem there is no question of the existence of A. You might have encountered this theorem before with the limsup replaced by an ordinary limit. The same result holds, so long as that limit exists. By stating the test with a limsup, there is no need to assume the stronger hypothesis that the limit exists.

Analysis is often concerned with estimations and approximations, and the limsup and liminf provide a convenient method for constructing bounds on the end behavior of a sequence.
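The tail suprema $b_n$ and infima $c_n$ from the definition above can be watched converging numerically. A minimal Python sketch (the sequence $a_n = (-1)^n(1+1/n)$ is an illustrative choice, not from the thread):

```python
# Watch b_n = sup{a_k : k >= n} decrease and c_n = inf{a_k : k >= n} increase
# for the divergent sequence a_n = (-1)^n * (1 + 1/n), whose limsup is 1
# and liminf is -1. Finite tails a(n)..a(N) approximate the true sup/inf.

def tail_sup_inf(a, n, N):
    """Approximate (b_n, c_n) using the finite tail a(n), ..., a(N)."""
    tail = [a(k) for k in range(n, N + 1)]
    return max(tail), min(tail)

a = lambda k: (-1) ** k * (1 + 1 / k)

# b_n shrinks toward limsup = 1; c_n rises toward liminf = -1.
for n in (10, 100, 1000):
    b_n, c_n = tail_sup_inf(a, n, 10 * n)
    print(n, round(b_n, 5), round(c_n, 5))
```

For this sequence the tail sup is attained at the first even index $\ge n$ and the tail inf at the first odd index, so the monotone squeeze toward 1 and -1 is visible directly.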
http://mathhelpforum.com/differential-geometry/144954-convergence-sequence-proof.html
# Math Help - Convergence sequence proof

1. ## Convergence sequence proof

How would I go about proving this: Let $(a_n)$ be a sequence convergent to the limit $a \in \mathbb{R}$. Prove that the sequence of absolute values $(b_n)$ with $b_n = |a_n|$ is convergent to the limit $|a|$.

2. Originally Posted by Stylis10

How would I go about proving this: Let $(a_n)$ be a sequence convergent to the limit $a \in \mathbb{R}$. Prove that the sequence of absolute values $(b_n)$ with $b_n = |a_n|$ is convergent to the limit $|a|$.

Hint: $\big|b_n-|a|\big| = \big||a_n|-|a|\big|\leq |a_n-a|\longrightarrow 0$
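The hint is just the reverse triangle inequality, and it can be sanity-checked numerically. A quick Python sketch (the particular sequence is an arbitrary example, not from the thread):

```python
# Check the hint ||a_n| - |a|| <= |a_n - a| term by term, and watch the
# left-hand side go to 0, for the sample sequence a_n = -2 + (-1)^n / n -> -2.
# (Sequence chosen for illustration; the inequality holds for any reals.)

a_limit = -2.0
seq = [-2 + (-1) ** n / n for n in range(1, 200)]

for a_n in seq:
    lhs = abs(abs(a_n) - abs(a_limit))   # | |a_n| - |a| |
    rhs = abs(a_n - a_limit)             # | a_n - a |
    assert lhs <= rhs + 1e-15            # reverse triangle inequality

# rhs -> 0 by convergence, so the squeeze forces |a_n| -> |a| = 2.
print(abs(abs(seq[-1]) - abs(a_limit)))
```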
https://www.physicsforums.com/threads/radiative-transfer-optically-thin-cloud.186640/
# Homework Help: Radiative Transfer (Optically Thin Cloud)

1. Sep 24, 2007

### cepheid

Staff Emeritus

1. The problem statement, all variables and given/known data

An optically thin cloud at temperature T radiates power $P_{\nu}$ per unit volume. Find an expression for the cloud's brightness $I_{\nu}$ as a function of distance from the centre of the cloud in the case where:

(a) the cloud is a cube of side d
(b) the cloud is a sphere of radius R
(c) How would your answers change if the cloud were optically thick?

2. Relevant equations

First of all, terminology varies wildly, but astronomical terminology is being used here, i.e. "brightness" refers to the radiometric quantity that measures the rate (with time) at which energy arrives per unit of perpendicular area, per frequency band, and per unit solid angle (subtended presumably by the source), as measured in $\textrm{W} \cdot \textrm{m}^{-2} \cdot \textrm{Hz}^{-1} \cdot \textrm{sr}^{-1}$.

The relevant equation given is the radiative transfer equation, which describes how this "brightness" varies with distance from the centre of the source, one term being a loss due to absorption, and the other term being a gain due to emission:

$$\frac{dI_{\nu}(s)}{ds} = -\alpha_{\nu}(s)I_{\nu}(s) + j_{\nu}$$

where $\alpha_{\nu}(s)$ is the absorption coefficient, and $j_{\nu}$ is the rate of change of brightness with distance due to emission, i.e. the energy radiated per unit time, per unit volume, per unit solid angle. The optical depth $\tau_{\nu}(s)$ is defined by:

$$\tau_{\nu}(s) = \int_{s_0}^s \alpha_{\nu}(s^\prime)\, ds^\prime$$

Optically thin means $\tau_{\nu} \ll 1$.

3. The attempt at a solution

I wasn't 100% sure how to proceed, but my first thought was that maybe the relationship between $j_{\nu}$ and $P_{\nu}$ just depends on the shape of the cloud.
Furthermore, we're given an equation that (from what I understand) is true under any circumstances, so I set about trying to solve the ODE using the method of integrating factors. Let

$$\phi(s) = \exp\left(\int_{s_0}^s \alpha_{\nu}(s^\prime)\, ds^\prime\right) = e^{\tau_{\nu}(s)}$$

Then,

$$e^{\tau_{\nu}}\frac{dI_{\nu}}{ds} + e^{\tau_{\nu}}\alpha_{\nu}I_{\nu} = e^{\tau_{\nu}}j_{\nu}$$

$$e^{\tau_{\nu}}\frac{dI_{\nu}}{ds} + \frac{d}{ds}\left(e^{\tau_{\nu}}\right)I_{\nu} = e^{\tau_{\nu}}j_{\nu}$$

$$\frac{d}{ds}\left(e^{\tau_{\nu}}I_{\nu}\right) = e^{\tau_{\nu}}j_{\nu}$$

$$e^{\tau_{\nu}}I_{\nu}= \int e^{\tau_{\nu}}j_{\nu} \, ds$$

Now this is where I am stuck (i.e. I don't know what to do with this, or whether I'm on the right track).

Last edited: Sep 24, 2007

2. Sep 24, 2007

### cepheid

Staff Emeritus

Any thoughts, guys?

3. Sep 24, 2007

### JeffKoch

If it's optically thin, how important is the absorption term in the radiative transfer equation?
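JeffKoch's hint can be made quantitative. As an illustrative sketch (assuming, beyond what the thread states, constant $\alpha_\nu$ and $j_\nu$ and $I_\nu(0)=0$), the integrating-factor solution reduces to $I = (j/\alpha)(1 - e^{-\alpha s})$, which for $\tau = \alpha s \ll 1$ is just $I \approx j s$; the absorption term drops out:

```python
# Illustrative sketch (assumptions beyond the thread: constant alpha_nu and
# j_nu, and I_nu(0) = 0). The integrating-factor solution then becomes
#     I(s) = (j/alpha) * (1 - exp(-alpha*s)),
# and when tau = alpha*s << 1 (optically thin) this is just I(s) ~ j*s:
# absorption is negligible, which is JeffKoch's point.

import math

def brightness(j, alpha, s):
    """Exact solution of dI/ds = -alpha*I + j with I(0) = 0 (constants)."""
    return (j / alpha) * (1.0 - math.exp(-alpha * s))

j_nu, alpha_nu, s = 2.0, 1e-4, 10.0     # tau = 1e-3, optically thin
exact = brightness(j_nu, alpha_nu, s)
thin = j_nu * s                          # keep only the emission term
print(exact, thin)                       # nearly identical
```

In the opposite, optically thick limit $\tau \gg 1$, the same formula saturates at $I \to j/\alpha$, which is the qualitative change part (c) of the problem asks about.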
http://papers.neurips.cc/paper/6468-one-vs-each-approximation-to-softmax-for-scalable-estimation-of-probabilities
# NIPS Proceedings

## One-vs-Each Approximation to Softmax for Scalable Estimation of Probabilities

### Abstract

The softmax representation of probabilities for categorical variables plays a prominent role in modern machine learning with numerous applications in areas such as large scale classification, neural language modeling and recommendation systems. However, softmax estimation is very expensive for large scale inference because of the high cost associated with computing the normalizing constant. Here, we introduce an efficient approximation to softmax probabilities which takes the form of a rigorous lower bound on the exact probability. This bound is expressed as a product over pairwise probabilities and it leads to scalable estimation based on stochastic optimization. It allows us to perform doubly stochastic estimation by subsampling both training instances and class labels. We show that the new bound has interesting theoretical properties and we demonstrate its use in classification problems.
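As a hedged reading of the abstract, the "product over pairwise probabilities" suggests a bound of the form $\mathrm{softmax}_k(f) \ge \prod_{j\neq k}\sigma(f_k - f_j)$. The sketch below checks numerically that such a product of pairwise sigmoids never exceeds the exact softmax (the score vector $f$ is made up for illustration):

```python
# Hedged reconstruction from the abstract: a bound of the form
#     softmax_k(f) >= prod_{j != k} sigma(f_k - f_j),
# a product of pairwise ("one-vs-each") probabilities that needs no full
# normalizing constant, so class labels can be subsampled. Scores are made up.

import math

def softmax_prob(f, k):
    """Exact softmax probability of class k (max-shifted for stability)."""
    m = max(f)
    z = sum(math.exp(x - m) for x in f)
    return math.exp(f[k] - m) / z

def one_vs_each_bound(f, k):
    """Product of pairwise sigmoids sigma(f_k - f_j) over j != k."""
    sigmoid = lambda t: 1.0 / (1.0 + math.exp(-t))
    prod = 1.0
    for j, f_j in enumerate(f):
        if j != k:
            prod *= sigmoid(f[k] - f_j)
    return prod

f = [1.2, -0.3, 0.5, 2.0]
for k in range(len(f)):
    assert one_vs_each_bound(f, k) <= softmax_prob(f, k) + 1e-12
    print(k, round(softmax_prob(f, k), 4), round(one_vs_each_bound(f, k), 4))
```

Note that each pairwise factor only involves two scores, which is what makes subsampling of classes possible during stochastic optimization; in the binary case the bound is tight.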
https://www.macinchem.org/blog/files/2da1ccb6d514f7bbeff656c653cb859f-244.php
# VMware

Now that I have a new MacBook Pro, I've decided to install VMware so that I can test more of the open-source scientific applications under Linux; any that look interesting, I'll look at compiling under Mac OS X.

VMware Fusion (Mac)
https://www.physicsforums.com/threads/schrodinger-equation-and-energy-quantization.252765/
# Schrodinger Equation and Energy Quantization

1. Aug 30, 2008

### quantumlaser

I'm a bit embarrassed to ask this (that's why I'm asking here and not asking one of my professors): as a grad student in physics I've had a good deal of quantum mechanics, but one thing I haven't fully understood yet is the mechanism in the Schrodinger equation that forces eigenvalue quantization (energy, angular momentum, etc.) in bound state solutions. For quantum well potentials, energy quantization is forced by the boundary conditions, but in the harmonic oscillator and hydrogen atom potentials, the eigenvalue quantization seems to just pop out as a consequence of the math. It seems there should be some deeper explanation as to why the bound state eigenvalues are quantized. Could someone enlighten me on this?

2. Aug 31, 2008

### malawi_glenn

What is wrong with math? A particle's equation of motion is described by the Schrödinger equation, and as a consequence of that, eigenvalues for bound states are discrete. And what is a "quantum well", and why is a "quantum well" different from "the harmonic oscillator and hydrogen atom potentials"?

The discrete states come from, as you say, the boundary conditions which you impose on your solution, and this you do for ALL potentials, no matter what they look like. (well = potential) And this B.C. has to do with the definition of a bound state: that its wavefunction goes to zero for large radii, and, in spherical systems, that the wavefunction is periodic, etc. Of course, the VERY definition of a bound state is E < 0, but you also impose the B.C., since otherwise the probability of finding the particle outside the well is very large (infinite), and that is not so nice.

Last edited: Aug 31, 2008

3. Aug 31, 2008

### quantumlaser

By "quantum well" I mean a well potential, i.e. square wells, spherical wells, etc. I just thought that there might be something deeper in eigenvalue quantization than just boundary conditions.
I guess I'm just looking for something deeper where there is nothing. My original impetus for posting this was trying to explain quantization to my friends outside of physics, and I didn't really have a suitable answer for why energy is quantized other than "it's in the math."

4. Aug 31, 2008

### malawi_glenn

The discrete values for square wells are also due to the mathematics of the solution to the boundary conditions... the same thing holds for all wells, remember. Explaining quantum physics and special relativity without math is quite impossible, since those branches are non-intuitive in our ordinary-life language and reasoning, which developed over thousands of years without encountering either high velocities or very, very tiny objects. Look at math as the language of physics, and also at how math and physics have developed hand in hand over the centuries; maybe then you'll stop thinking "it's just math" and similar.

5. Aug 31, 2008

### Marty

If you think about it, the quantization of energy really does not come from the Schroedinger equation. It's just a differential equation, which we solve by separation of variables, automatically leading to an orthogonal set of basis states. How is this different from the Bessel equation, which we might use to solve for the vibration modes of a circular membrane stretched across a drum? Do we then say that the energy levels of the drum are quantized because of the Bessel equation?

6. Aug 31, 2008

### ZapperZ

Staff Emeritus

Hmm... I'm not sure I understand this. The quantization in both the harmonic oscillator and the hydrogen atom ARE due to the boundary conditions. The harmonic oscillator potential imposes such a boundary condition for the harmonic oscillator, and the central force potential imposes the boundary condition for the hydrogen atom. So there are boundary conditions in both cases. When you remove any form of bound states (i.e.
your boundary conditions are at infinity), then you get the continuous states, as you get in a free-particle scenario, for example.

Zz.

7. Aug 31, 2008

### Staff: Mentor

In both cases (a vibrating circular drumhead and a circular quantum well), we get solutions that are standing-wave modes with discrete frequencies, determined by the boundary conditions. In the circular drumhead, the energy is determined by the amplitude of the wave, which can have a continuous range of values. In the circular quantum well, the energy is determined by the frequency of the wave, so it has discrete values.

8. Aug 31, 2008

### Marty

But this doesn't explain why the hydrogen atom can only have certain fixed energy levels, for two different reasons:

1. The circular drumhead can vibrate in mixed modes with combinations of different frequencies. So why can't the hydrogen atom vibrate in mixed modes with different combinations of energy?
2. The modes of the circular drum can be excited to any desired amplitude. Why not the modes of the hydrogen atom?

I'm not asking these questions because I expect answers. I'm asking them because I want to point out that you simply can't explain things away by saying "it comes out of the equations".

9. Aug 31, 2008

### Dr Transport

The discretization of the energy eigenvalues is a result of the conditions placed on the wavefunctions. When you solve the SE for the radial component, the general form of the wave function is a hypergeometric function, or sometimes a confluent hypergeometric function. The coefficients are required to be integer values, leading to quantization of the energy levels. See Schiff, Merzbacher, Messiah or any host of QM texts.

10. Aug 31, 2008

### malawi_glenn

Marty, what if I or someone else can show you that the discretization of energy levels in an atom follows from the quantum mechanics formalism?
The language of physics is math; Dirac predicted the existence of antiparticles just from the solution of the Dirac equation, and so on. And if you know your atomic physics, a hydrogen atom doesn't vibrate... And as jtbell said, the energy of an electron in a hydrogen atom is determined by the frequency of its wavefunction, and the frequencies are discrete, hence discrete energy levels. So what you really want to ask is why an electron in a hydrogen atom can't take any wave function that is a superposition of the eigenfunctions; the other things you mentioned are not applicable to the hydrogen atom.

11. Aug 31, 2008

### Marty

malawi_glenn already said that there's no essential difference between the discrete boundary conditions of the square well and the more or less extended boundary conditions of the hydrogen atom. Yes, we know the eigenvalues are discrete. But so are the eigenvalues of the circular drum. You haven't explained why the energy is quantized in one case and not the other.

12. Aug 31, 2008

### malawi_glenn

Marty, it is a trivial exercise in intro QM to solve the Schrödinger eq. and find bound states and show that the energies are quantized. Do you want us to show you all this? Otherwise see the sources Dr Transport posted; those are really good QM textbooks. This was the first hit on Google: http://musr.physics.ubc.ca/~jess/p200/sq_well/sq_well.html ("Equation (11) implies restrictions on the allowed values of E"). Tell me/us if you want some middle steps or whatever else you need to understand how the Schrödinger eq. plus boundary conditions leads to discrete energy levels for the bound states.

Also, the fundamental reason why there is a difference is that the physical systems are different... quite trivial.

Last edited: Aug 31, 2008

13. Aug 31, 2008

### Staff: Mentor

It can, under the right circumstances.
For example, during a transition from one energy eigenstate to another, the atom is briefly in a superposition (linear combination) of the two states:

$$\psi = a(t) \psi_1 + b(t) \psi_2$$

As the transition proceeds, a(t) decreases from 1 to 0 and b(t) increases from 0 to 1. Also, some people study "Rydberg atoms", which are hydrogen (or other) atoms in highly excited states, with large values of n and energies just below zero (i.e. almost ionized). If I remember correctly, they actually produce states that are superpositions of energy eigenstates with close-together values of n, to form a localized wave-packet that travels around the nucleus in a classical-like orbit. In other words, they're working in the boundary zone between "quantum-like" behavior and "classical-like" behavior.

The amplitude of the QM wave function is fixed by the requirement that the total probability of the electron being found somewhere equals 1. The absolute amplitude of the quantum wave function (taken as a whole) actually doesn't have any physical significance. It can be anything, or can be left unspecified. It's merely a convenience to "normalize" it so the integral of $\psi^*\psi$ over all space equals 1. What really matters are the relative values of $\psi^*\psi$ at different locations, because that determines the relative probabilities of the particle being at those locations (e.g. twice as likely to be at point 1 as at point 2).

Last edited: Aug 31, 2008

14. Aug 31, 2008

### nrqed

It's still the boundary conditions that force quantization upon us. Consider the harmonic oscillator. This time, the boundary conditions are that the wavefunction must go to zero fast enough at spatial infinity to make the wavefunction square integrable. Let's say you were to solve the equation graphically, adjusting the value of the parameter E to whatever value you please and observing the behavior of the wavefunction.
What you would observe is that for most values of E the wavefunction is not normalizable. It might approach zero for a while, but then start to diverge and shoot off to an infinite value. As you approached one of the quantized values of the energy of the harmonic oscillator, you would see the tail of the wavefunction at spatial infinity get closer and closer to zero, but it would still eventually shoot off to infinity. Only if you hit exactly one of the special values of the energy would the wavefunction die off and never shoot off to infinity. Then the wavefunction is normalizable.

Hope this helps

15. Aug 31, 2008

### Marty

Yes, this is a good point. But you haven't quite explained why the superposition of two states cannot persist indefinitely, in which case the energy would have an intermediate value. Then you seem to be arguing that ultimately the quantization of energy is a consequence of the quantization of charge, because if you had half an electron orbiting a hydrogen nucleus, the energy would be different than in the case of a whole electron. I'm not sure this argument is conclusive.

Consider two hydrogen atoms, or rather two nuclei, located one centimeter apart. If you write the Schroedinger equation for this arrangement, there should be a ground state solution that looks very much like the expected ground states for two independent atoms. (Actually there would be two degenerate solutions, one odd and one even. But that's not my point.) If you "fill" this ground state with a single electron, wouldn't you have half an electron in the vicinity of each nucleus? And then what becomes of energy quantization?

16. Aug 31, 2008

### Avodyne

It would persist indefinitely if the atom were not coupled to the quantized radiation field. In general, if you have a completely isolated system with a time-independent Hamiltonian, ANY state can be written as a superposition of energy eigenstates. For such a superposition, the energy does not have a definite value.

17.
Aug 31, 2008

### Dr Transport

Let's think about the hydrogen atom. From my well-worn copy of Schiff, pg. 92, when you determine the series solution for the radial portion of the wave function, "you must terminate the series, thus the highest power of $\rho$ in $L$ is $\rho^{n'} (n' \ge 0)$; we must choose the parameter $\lambda$ to be a positive integer $n$....." In other words, for the series to terminate you must pick an integer, thus leading to quantization of the energy.

18. Aug 31, 2008

### Staff: Mentor

A superposition state does not have an intermediate value of the energy. It has an indeterminate value of the energy until it is measured, at which point the energy becomes one of the two values corresponding to the states that make up the superposition. The "choice" is random, with probabilities that depend on the relative weights of the two states in the superposition.

19. Sep 1, 2008

### Marty

I don't know how you distinguish between what I call an intermediate value and what you call an indeterminate value. Experimentally, that is.

20. Sep 1, 2008

### malawi_glenn

Recall some postulates of quantum mechanics: a measurement of an observable makes the wavefunction collapse into one of the eigenstates of the observable, and the outcome of a measurement gives you ONE of the eigenvalues of the observable. I.e., suppose:

$$\Psi _E = a\psi _{E1} + b\psi _{E2} + d\psi _{E4}$$

where $a,b,d$ are complex numbers. A measurement can give you $E_1$ with probability $|a|^2$, $E_2$ with probability $|b|^2$, etc. You never get a mixture, and hence you'll only get discrete energy values. Another postulate of QM is that it is meaningless to ask what properties (such as energy) a state has before a measurement, so talking about the energy of the wavefunction $\Psi _E$ is meaningless.

Last edited: Sep 1, 2008
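nrqed's graphical "shooting" description (post 14) can be reproduced in a few lines. A rough Python sketch for the harmonic oscillator in units $\hbar = m = \omega = 1$, so the true eigenvalues are $E_n = n + 1/2$; the integration scheme and parameters are illustrative choices, not from the thread:

```python
# Numerical version of the shooting picture: integrate the rescaled
# Schrodinger equation psi'' = (x^2 - 2E) psi outward from x = 0.
# For a generic trial E the tail blows up like exp(+x^2/2); only near a
# true eigenvalue E_n = n + 1/2 does the tail stay small.
# (Illustrative sketch; scheme and parameters are my own choices.)

def tail_amplitude(E, x_max=5.0, n_steps=20000, even=True):
    """Integrate psi'' = (x^2 - 2E) psi from x = 0 and return |psi(x_max)|.
    A large value signals that the trial energy E is not an eigenvalue."""
    dx = x_max / n_steps
    # even states: psi(0) = 1, psi'(0) = 0; odd states: psi(0) = 0, psi'(0) = 1
    psi, dpsi = (1.0, 0.0) if even else (0.0, 1.0)
    x = 0.0
    for _ in range(n_steps):
        dpsi += (x * x - 2.0 * E) * psi * dx   # semi-implicit Euler step
        psi += dpsi * dx
        x += dx
    return abs(psi)

# The tail is markedly smaller at the true eigenvalue E = 0.5 than just off it.
print(tail_amplitude(0.45), tail_amplitude(0.50), tail_amplitude(0.55))
```

A root-finder applied to the sign changes of the tail recovers the eigenvalues, which is exactly the sense in which the square-integrability boundary condition, not anything special about the differential equation itself, selects the discrete spectrum.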
https://asmedigitalcollection.asme.org/ebooks/book/222/chapter-abstract/2930752/A-Non-Newtonian-Fluid-Flow-in-a-Pipe?redirectedFrom=fulltext
# Case Studies in Fluid Mechanics with Sensitivities to Governing Variables

By M. Kemal Atesmen

ISBN: 9781119524717
No. of Pages: 198
Publisher: ASME Press
Publication date: 2019

In this chapter we will consider time-independent non-Newtonian fluids flowing in a pipe in a fully developed state and in a laminar flow regime. Non-Newtonian fluids are studied in rheology, which is the science of deformation and fluid flow. In Newtonian fluids like water, air, milk, and so on, the shear stress applied to a fluid element is proportional to the shear rate, namely the local velocity gradients, where the proportionality constant is the fluid's viscosity μ [N·s·m⁻²], which is constant at a given temperature. On the other hand, for non-Newtonian fluids, the fluid's viscosity is not a...

This content is only available via PDF.
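The abstract cuts off at the paywall, so the chapter's specific model is not visible here; as an assumption (not confirmed by the chapter), a standard time-independent non-Newtonian model is the power-law (Ostwald–de Waele) fluid, $\tau = K\dot\gamma^n$, whose apparent viscosity depends on the shear rate:

```python
# Assumption, not from the chapter: the power-law (Ostwald-de Waele) model
#     tau = K * gamma_dot**n
# gives an apparent viscosity tau/gamma_dot = K * gamma_dot**(n-1) that
# varies with shear rate, unlike the constant Newtonian viscosity mu above.

def apparent_viscosity(K, n, gamma_dot):
    """Apparent viscosity of a power-law fluid (K: consistency index,
    n: flow behavior index, gamma_dot: shear rate)."""
    return K * gamma_dot ** (n - 1)

# Shear-thinning example (n < 1): viscosity drops as the shear rate grows.
K, n = 0.5, 0.6                      # made-up illustrative parameters
for gd in (0.1, 1.0, 10.0):
    print(gd, apparent_viscosity(K, n, gd))

# Newtonian special case n = 1 recovers a constant viscosity.
assert apparent_viscosity(0.001, 1.0, 123.0) == 0.001
```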
https://totallydisconnected.wordpress.com/2016/09/13/w-local-spaces-are-amazing/
# w-local spaces are amazing Let $X$ be a spectral space.  Following Bhatt-Scholze, say $X$ is w-local if the subset $X^c$ of closed points is closed and if every connected component of $X$ has a unique closed point.  This implies that the natural composite map $X^c \to X \to \pi_0(X)$ is a homeomorphism (cf. Lemma 2.1.4 of BS). For the purposes of this post, a w-local adic space is a qcqs analytic adic space whose underlying spectral topological space is w-local.  These are very clean sorts of spaces: in particular, each connected component of such a space is of the form $\mathrm{Spa}(K,K^+)$, where $K$ is a nonarchimedean field and $K^+$ is an open and bounded valuation subring of $K$, and therefore has a unique closed point and a unique generic point. I’ve been slowly internalizing the philosophy that w-local affinoid perfectoid spaces have a lot of amazing properties.  Here I want to record an example of this sort of thing. Given a perfectoid space $\mathcal{X}$ together with a subset $T \subseteq |\mathcal{X}|$, let’s say $T$ is perfectoid (resp. affinoid perfectoid) if there is a pair $(\mathcal{T},f)$ where $\mathcal{T}$ is a perfectoid space (resp. affinoid perfectoid space) and $f: \mathcal{T} \to \mathcal{X}$ is a map of adic spaces identifying $|\mathcal{T}|$ homeomorphically with $T$ and which is universal for maps of perfectoid spaces $\mathcal{Y} \to \mathcal{X}$ which factor through $T$ on topological spaces. Note that if the pair $(\mathcal{T},f)$ exists, it’s unique up to unique isomorphism. Theorem. Let $\mathcal{X}$ be a w-local affinoid perfectoid space. Then any subset $T$ of $X = |\mathcal{X}|$ which is closed and generalizing, or which is quasicompact open, is affinoid perfectoid. Proof when $T$ is closed and generalizing. 
The key point here is that the map $\gamma: X \to \pi_0(X)$ defines a bijection between closed generalizing subsets of $X$ and closed subsets of the (profinite) space $\pi_0(X)$, by taking preimages of the latter or images of the former. To check that this is true, note that if $T$ is closed and generalizing, then its intersection with a connected component $X'$ of $X$ being nonempty implies (since $T$ is generalizing) that $T \cap X'$ contains the unique rank one point of $X'$. But then $T \cap X'$ contains all specializations of that point (since $T \cap X'$ is closed in $X'$), so $T \cap X' = X'$, so any given connected component of $X$ is either disjoint from $T$ or contained entirely in $T$.  This implies that $T$ can be read off from which closed points of $X$ it contains.  Finally, one easily checks that $\gamma(T)$ is closed in $\pi_0(X)$, since $\pi_0(X)$ is profinite and $\gamma(T)$ is quasicompact.  Therefore $T = \gamma^{-1}(\gamma(T))$.

Returning to the matter at hand, write $\gamma(T)$ as a cofiltered intersection of qc opens $V_i \subset \pi_0(X)$, $i \in I$. But qc opens in $\pi_0(X)$ are the same as open-closed subsets, so each $V_i$ pulls back to an open-closed subset $U_i \subset X$, and it's easy to check that any such $U_i$ comes from a unique rational subset $\mathcal{U}_i \subset \mathcal{X}$.  Then $\mathcal{T} := \lim_{\leftarrow i \in I} \mathcal{U}_i$ is the perfectoid space we seek.

Proof when $T$ is quasicompact open. First we prove the result when $X$ is connected, i.e. when $\mathcal{X} = \mathrm{Spa}(K,K^+)$ as above.  We claim that in fact $T$ is a rational subset of $X$. When $T$ is empty, this is true in many stupid ways, so we can assume $T$ is nonempty. Since $T$ is a qc open, we can find finitely many nonempty rational subsets $\mathcal{W}_i = \mathrm{Spa}(K,K^{+}_{(i)}) \subset \mathcal{X}$ such that $T = \cup_i |\mathcal{W}_i|$.
But the $\mathcal{W}_i$‘s are totally ordered, since any finite set of open bounded valuation subrings of $K$ is totally ordered by inclusion (in the opposite direction), so $T = |\mathcal{W}|$ where $\mathcal{W}$ is the largest $\mathcal{W}_i$.

Now we turn to the general case. For each point $x \in \pi_0(X)$, we've proved that $T \cap \gamma^{-1}(x)$ is a rational subset (possibly empty) of the fiber $\gamma^{-1}(x)$.  Since $\gamma^{-1}(x) = \varprojlim_{V_x} \gamma^{-1}(V_x)$, where $V_x$ ranges over the qc open subsets of $\pi_0(X)$ containing $x$, and each $\gamma^{-1}(V_x)$ is the topological space of a rational subset $\mathcal{U}_x$ of $\mathcal{X}$, it's now easy to check* that for every $x$ and for some small $V_x$ as above, there is a rational subset $\mathcal{T}_x \subset \mathcal{U}_x$ such that $|\mathcal{T}_x| = U_x \cap T$.

Choose such a $V_x$ for each point in $\pi_0(X)$.  Since $\pi_0(X) = \cup_x V_x$, we can choose finitely many $x$‘s $\{x_i\}_{i\in I}$ such that the $V_{x_i}$‘s give a covering of $\pi_0(X)$.  Since each of these subsets is open-closed in $\pi_0(X)$, we can refine this covering to a covering of $\pi_0(X)$ by finitely many pairwise-disjoint open-closed subsets $W_j, j \in J$ where $W_j \subseteq V_{x_{i(j)}}$ for all $j$ and for some (choice of) $i(j) \in I$. Then $\gamma^{-1}(W_j)$ again comes from a rational subset $\mathcal{S}_j$ of $\mathcal{X}$, so the intersection $|\mathcal{T}_{x_{i(j)}}| \cap \gamma^{-1}(W_j)$ comes from the rational subset $\mathcal{T}_j := \mathcal{T}_{x_{i(j)}} \times_{\mathcal{U}_{x_{i(j)}}} \mathcal{S}_j$ of $X$, and since $|\mathcal{T}_j| = T \cap \gamma^{-1}(W_j)$ by design, we (finally) have that $\mathcal{T} = \coprod_{j} \mathcal{T}_j \subset \coprod_{j} \mathcal{S}_j = \mathcal{X}$ is affinoid perfectoid. Whew!
$\square$ *Here we’re using the “standard” facts that if $X_i$ is a cofiltered inverse system of affinoid perfectoid spaces with limit $X$, then $|X| = \lim_{\leftarrow i} |X_i|$, and any rational subset $W \subset X$ is the preimage of some rational subset $W_i \subset X_i$, and moreover if we have two such pairs $(i,W_i)$ and $(j,W_j)$ with the $W_{\bullet}$’s both pulling back to $W$ then they pull back to the same rational subset of $X_k$ for some large $k \geq i,j$.

Let $T$ be a subset of a spectral space $X$; according to the incredible Lemma recorded in Tag 0A31 in the Stacks Project, the following are equivalent:

• $T$ is generalizing and pro-constructible;
• $T$ is generalizing and quasicompact;
• $T$ is an intersection of quasicompact open subsets of $X$.

Moreover, if $T$ has one of these equivalent properties, $T$ is spectral. (Johan tells me this lemma is “basically due to Gabber”.) Combining this result with the Theorem above, and using the fact that the category of affinoid perfectoid spaces has all small limits, we get the following disgustingly general statement.

Theorem. Let $\mathcal{X}$ be a w-local affinoid perfectoid space. Then any generalizing quasicompact subset $T \subset |\mathcal{X}|$ is affinoid perfectoid.

By an easy gluing argument, this implies even more generally (!) that if $T \subset |\mathcal{X}|$ is a subset such that every point $t\in T$ has a qc open neighborhood $U_t$ in $|\mathcal{X}|$ such that $T \cap U_t$ is quasicompact and generalizing, then $T$ is perfectoid (not necessarily affinoid perfectoid).  This condition* holds, for example, if $T$ is locally closed and generalizing; in that situation, I’d managed to prove that $T$ is perfectoid back in May (by a somewhat clumsy argument, cf. Section 2.7 of this thing if you’re curious) after Peter told me it was so.  But the argument here gives a lot more.

*Johan’s opinion of this condition: “I have no words for how nasty this is.”
http://blogs.ethz.ch/kowalski/2013/10/24/james-maynard-auteur-du-theoreme-de-lannee/
# James Maynard, auteur du théorème de l’année How many times in a year is an analytic number theorist supposed to faint from admiration? We’ve learnt of the full three prime Vinogradov Theorem by Helfgott, then of Zhang’s proof of the bounded gap property for primes. Now, from Oberwolfach, comes the equally (or even more) amazing news that James Maynard has announced a proof of the bounded gap property that manages not only to ask merely for the Bombieri-Vinogradov theorem in terms of information concerning the distribution of primes in arithmetic progressions, but also obtains a gap smaller than 700 (in fact, even better when using optimal narrow k-tuples), where the efforts of the Polymath8 project only lead to 4680, using quite a bit of machinery. (The preprint should be available soon, from what I understand, and thus a full independent verification of these results.) Two remarks, one serious, one not (the reader can guess which is which): (1) Again, from friends in Oberwolfach (teaching kept me, alas, from being able to attend the conference), I heard that Maynard’s method leads to the bounded gap property (with increasing bounds on the gaps) using as input any positive exponent of distribution for primes in arithmetic progressions (where Bombieri-Vinogradov means exponent 1/2; incidentally, this also means that the Generalized Riemann Hypothesis is strong enough to get bounded gaps, which did not follow from Zhang’s work). From the point of view of modern proofs, there is essentially no difference between positive exponent of distribution and exponent 1/2, since either property would be proved using the large sieve inequality and the Siegel-Walfisz theorem, and it makes little sense to prove a weaker large sieve inequality than the one that gives exponent 1/2. Question: could one conceivably even dispense with the large sieve inequality, i.e., prove the bounded gap property only using the Siegel-Walfisz theorem? 
This is a bit of a rhetorical question, since the large sieve is nowadays rather easy, but maybe the following formulation is of some interest: do we know an example of an increasing sequence of integers $n_k$, not sparse, not weird, that satisfies the Siegel-Walfisz property, but has unbounded gaps, i.e., $\liminf (n_{k+1}-n_k)=+\infty$? (2) There are still a bit more than two months to go before the end of the year; will a bright PhD student rise to the challenge, and prove the twin prime conjecture? [P.S. Borgesian readers will understand the title of this post, although a Spanish version might have been more appropriate…] ### Kowalski I have been a professor of mathematics at ETH Zürich since 2008. ## 7 thoughts on “James Maynard, auteur du théorème de l’année” 1. Greg says: To my understanding from hearing James speak at Oberwolfach: he expects that his method will end up only requiring any positive exponent of distribution, but he did not announce that he had a proof of that yet. Another remarkable consequence of his work is that assuming the Elliott-Halberstam conjecture, one actually gets double gaps p_{n+2} - p_n bounded by 700 infinitely often; no previous method could achieve that under any reasonable hypothesis, I believe. 2. Gil Kalai says: Dear Emmanuel, Does Maynard’s method deal (or is it expected to deal) with a large number of primes (more than two) in bounded intervals? 3. Dear Gil Kalai, my understanding (from what I heard of Maynard’s Oberwolfach talk) is that he can obtain a large number of primes in bounded intervals. Basically (or hopefully), with a new way of weighting the translates of a tuple, he can show that the average number of primes in such a tuple is greater than 100, say. For the same reason he can make do with any positive level of distribution: the average number of primes in the translated tuple is the available level of distribution times a large number, say a million. 4. Dear Gergely, many thanks. This is very impressive. 5. j.
says: In Spanish: James Maynard, autor del teorema del año.
https://forum.allaboutcircuits.com/threads/amplifier-feedback-fraction-and-critical-frequency.148527/
# Amplifier feedback fraction and critical frequency #### ardtuy Joined Mar 22, 2018 5 I am unsure about a question stated as: "An amplifier has an open loop gain of 2000 with an upper corner frequency of 500Hz. Determine the required feedback fraction to give a midband gain of 120 and determine the new corner frequency." I found β = 1/500Hz = 0.002 Then the closed loop bandwidth of the amplifier by BWcl = BWol(1+βAol(mid)) = 2.5kHz Then to find the new feedback fraction I used β = 1/120 = 0.0083 I'm not sure if I should have kept the closed loop bandwidth found before and got 2.5kHz/(1+0.0083(120)) = 2.252kHz for the new corner frequency. Is this wrong? Joined Mar 10, 2018 4,057 GBW is a constant, so calculating the new corner is trivial. GBW = Aol x Fcorner beta is G related Regards, Dana. #### ardtuy Joined Mar 22, 2018 5 GBW is a constant, so calculating the new corner is trivial. GBW = Aol x Fcorner beta is G related Regards, Dana. Thank you Dana I was going in the wrong direction. The correct method using the gain bandwidth product was right in front of me and I just didn't put the pieces together.
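The arithmetic behind Dana's gain-bandwidth answer can be checked numerically; this is a sketch using only the values given in the problem statement:

```python
# Known values from the problem statement.
Aol = 2000.0   # open-loop midband gain
f_ol = 500.0   # open-loop upper corner frequency, Hz
Acl = 120.0    # required midband closed-loop gain

# Feedback fraction from Acl = Aol / (1 + beta * Aol);
# note this is not simply 1/Acl, though it is close when beta*Aol >> 1.
beta = (Aol / Acl - 1.0) / Aol        # ~0.00783

# Gain-bandwidth product is constant: Aol * f_ol = Acl * f_cl.
f_cl = (Aol * f_ol) / Acl             # ~8333 Hz

print(beta, f_cl)
```

So the new corner frequency comes straight from the constant GBW (1 MHz here), rather than from reusing the closed-loop bandwidth computed with a different feedback fraction.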
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Under_Construction_Map%3A_Physical_Chemistry_for_the_Chemical_and_Biological_Scientists_(Chang)/8%3A_Electrolyte_Solutions/8.1%3A_Electrolyte_Solution_Nomenclature
# 8.1: Electrolyte Solution Nomenclature

Biological and many chemical systems are aqueous. The rates of most biochemical reactions are dependent upon the concentration of the ions in the system. Most biological systems involve the presence of electrolytes, or substances that when dissolved in a solvent (usually water) will produce a system that conducts electricity.

### Introduction

Solutions found in nature, therefore, do not behave ideally, meaning it is necessary to describe this ‘non-ideal’ behavior with a new mathematical formula. Consider an electrolyte solution:

$$\mathrm{M}_{z+}\mathrm{X}_{z-} \longrightarrow z_+\mathrm{M}^{(z_-)+}+z_-\mathrm{X}^{(z_+)-}$$

$$\mu=z_+ \mu_+ + z_- \mu_-$$

$$= z_+ \left( \mu_+^\mathrm{o} + RT \ln m_+ \right) + z_- \left( \mu_-^\mathrm{o} + RT \ln m_- \right)$$

$$= \left( z_+ \mu_+^\mathrm{o} + z_- \mu_-^\mathrm{o} \right) + RT \ln \left( m_+^{z_+} m_-^{z_-} \right)$$

$$= \left( z_+ \mu_+^\mathrm{o} + z_- \mu_-^\mathrm{o} \right) + RT \ln \left( ( \gamma_+ m_+ )^{z_+} ( \gamma_- m_- )^{z_-} \right)$$

$$= \left( z_+ \mu_+^\mathrm{o} + z_- \mu_-^\mathrm{o} \right) + RT \ln \left( a_+^{z_+} a_-^{z_-} \right)$$

$$= \left( z_+ \mu_+^\mathrm{o} + z_- \mu_-^\mathrm{o} \right) + RT \ln a$$

where

- $$\mu_+^\mathrm{o}$$ and $$\mu_-^\mathrm{o}$$: standard state chemical potentials for their respective electrolyte species
- $$\mu_+$$ and $$\mu_-$$: chemical potentials for their respective electrolyte species
- $$a$$: electrolyte activity
- $$R$$: gas constant, 8.314 J/(mol K)
- $$T$$: temperature (K) of the electrolyte solution
- $$z_+$$: cationic charge of the electrolyte for $$\gamma_\pm$$
- $$z_-$$: anionic charge of the electrolyte for $$\gamma_\pm$$
- $$m$$: molality of the electrolyte solution

and

- $$a = a_\pm^z$$
- $$a_\pm = \left(a_+^{z_+} a_-^{z_-}\right)^{1/z}$$: mean ionic activity
- $$z = z_+ + z_-$$
- $$\gamma_\pm = \left(\gamma_+^{z_+} \gamma_-^{z_-}\right)^{1/z}$$: mean ionic activity coefficient
- $$a_\pm = \gamma_\pm m_\pm$$
- $$m_\pm = \left(m_+^{z_+} m_-^{z_-}\right)^{1/z}$$: mean ionic molality

### Example

$$\mathrm{Mg}\mathrm{Cl}_{2} \longrightarrow (1)\mathrm{Mg}^{(2)+} + (2) \mathrm{Cl}^{(1)-}$$

$$\mu=\left( 1 \right) \mu_\mathrm{Mg^{2+}} + \left( 2 \right) \mu_\mathrm{Cl^{1-}}$$

$$= 1 \left( \mu_\mathrm{Mg^{2+}}^\mathrm{o} + RT \ln m_\mathrm{Mg^{2+}} \right) + 2 \left( \mu_\mathrm{Cl^{1-}}^\mathrm{o} + RT \ln m_\mathrm{Cl^{1-}} \right)$$

$$= \left( 1 \mu_\mathrm{Mg^{2+}}^\mathrm{o} + 2 \mu_\mathrm{Cl^{1-}}^\mathrm{o} \right) + RT \ln \left( m_\mathrm{Mg^{2+}}^{1} m_\mathrm{Cl^{1-}}^{2} \right)$$

$$= \left( 1 \mu_\mathrm{Mg^{2+}}^\mathrm{o} + 2 \mu_\mathrm{Cl^{1-}}^\mathrm{o} \right) + RT \ln \left( ( \gamma_\mathrm{Mg^{2+}} m_\mathrm{Mg^{2+}} )^{1} ( \gamma_\mathrm{Cl^{1-}} m_\mathrm{Cl^{1-}} )^{2} \right)$$

$$= \left( 1 \mu_\mathrm{Mg^{2+}}^\mathrm{o} + 2 \mu_\mathrm{Cl^{1-}}^\mathrm{o} \right) + RT \ln \left( a_\mathrm{Mg^{2+}}^{1} a_\mathrm{Cl^{1-}}^{2} \right)$$

$$= \left( 1 \mu_\mathrm{Mg^{2+}}^\mathrm{o} + 2 \mu_\mathrm{Cl^{1-}}^\mathrm{o} \right) + RT \ln a$$
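A quick numeric illustration of the mean ionic molality formula for the MgCl2 example; the 0.10 mol/kg concentration is an assumed value chosen only for illustration (here the exponents are the stoichiometric numbers of ions released per formula unit):

```python
# Illustrative molality of MgCl2; any value works the same way.
m = 0.10                   # mol/kg of dissolved MgCl2
nu_plus, nu_minus = 1, 2   # ions per formula unit: 1 Mg2+, 2 Cl-

m_plus = nu_plus * m       # 0.10 mol/kg of Mg2+
m_minus = nu_minus * m     # 0.20 mol/kg of Cl-
z = nu_plus + nu_minus     # 3

# Mean ionic molality, m± = (m+^ν+ * m-^ν-)^(1/z):
m_pm = (m_plus ** nu_plus * m_minus ** nu_minus) ** (1.0 / z)
print(m_pm)   # ~0.1587 mol/kg
```

The mean ionic activity and activity coefficient combine in exactly the same geometric-mean fashion once $\gamma_+$ and $\gamma_-$ are known.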
https://www.jiskha.com/questions/1069250/a-butane-lighter-had-a-mass-of-24-643-grams-before-lighting-and-a-mass-of-24-592-grams
# Chemistry A butane lighter had a mass of 24.643 grams before lighting and a mass of 24.592 grams after. How many mL of CO2 were produced? Assume STP. asked by Heidi 1. 2C4H10 + 13O2 ==> 8CO2 + 10H2O 24.643 initial mass -24.592 after using -------- 00.051g mass butane used. mols butane = grams/molar mass = estimated 0.00088 but you need to get a more accurate figure. Using the coefficients in the balanced equation, convert mols butane to mols CO2. That's estimated 0.00088 x (8 mols CO2/2 mols C4H10) = ? Then use PV = nRT and solve for L; convert to mL. posted by DrBob222 ## Similar Questions 1. ### Chem, could someone check my work please. I did this experiment where I filled a graduated cylinder with water. The I put a stopper on it and put it under water and removed the stopper. The with a modified lighter (that wouldn't release sparks) I added butane gas. I got asked by Lena on December 12, 2008 2. ### Chem I did this experiment where I filled a graduated cylinder with water. The I put a stopper on it and put it under water and removed the stopper. The with a modified lighter (that wouldn't release sparks) I added butane gas. I got asked by Lena on December 11, 2008 3. ### chemistry A lighter contains 3.59 g of butane. How many moles of butane are present at STP? Thoughts: So, correct me if I'm wrong however is it appropriate just to calculate its molar mass, and then use that molar mass and the given mass to asked by Gerry on January 10, 2016 4. ### Chemistry Ok, I'm trying my best here, I'm learning, and I'm thinking I'm on a roll, until I can't get the right answer again..... First.. the butane question Calculate the mass of Butane needed to produce 82.7g Carbon dioxide. 2C4H10 +13 asked by Jennie on March 2, 2015 5. ### chemistry In a butane lighter, 9.2 g of butane combines with 32.9 g of oxygen to form 27.8 g carbon dioxide and how many grams of water? asked by Anonymous on September 22, 2014 1.
### Chemistry In a butane lighter, 9.8 g of butane combines with 35.1 g of oxygen to form 29.6 g carbon dioxide and how many grams of water? asked by Amy on September 14, 2018 2. ### chemistry In a butane lighter, 9.5 of butane combines with 34.0 of oxygen to form 28.7 carbon dioxide and how many grams of water? asked by peanut on January 22, 2012 3. ### math/chem I need to find the mass of butane gas and I only know the following things (you cannot use the molar mass of butane): volume = 80 mL pressure = 98.23 KPa temp.= 298K So I found the number of mols and I got 0.003171814 mols. of asked by Lena on December 12, 2008 4. ### Chem I need help I don't get this 1. Since the gas in your graduated cylinder is a mixture of butane and water vapor, you must determine the partial pressure of the butane, Pbutane, alone. To do this, consult a reference and record the asked by |-/ on March 6, 2017 5. ### Physics Two small spheres of mass 451 g and 695 g are suspended from the ceiling at the same point by massless strings of equal length 10.9 m . The lighter sphere is pulled aside through an angle of 65 degrees from the vertical and let asked by amanda and leah on January 16, 2012 More Similar Questions
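DrBob222's outline in the thread above can be carried through numerically; this sketch assumes the usual textbook values of 58.12 g/mol for butane and 22.414 L/mol molar volume at STP:

```python
mass_before = 24.643   # g, lighter before use
mass_after = 24.592    # g, lighter after use
M_butane = 58.12       # g/mol for C4H10 (assumed textbook value)
V_molar_stp = 22.414   # L/mol for an ideal gas at STP

mass_used = mass_before - mass_after        # 0.051 g of butane burned
mol_butane = mass_used / M_butane           # ~8.8e-4 mol
mol_co2 = mol_butane * 8 / 2                # 2 C4H10 -> 8 CO2
mL_co2 = mol_co2 * V_molar_stp * 1000       # litres -> millilitres

print(mL_co2)   # ~79 mL of CO2 at STP
```

Using the molar volume directly is equivalent to DrBob222's "use PV = nRT" step at STP conditions.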
http://www.gradesaver.com/to-build-a-fire/q-and-a/in-the-paragraph-where-the-man-reflects-on-old-timers-and-men-who-are-men-what-do-you-think-of-his-mentality-and-beliefs-104441
# in the paragraph where the man reflects on "old-timers" and "men who are men" what do you think of his mentality and beliefs? From To Build a Fire
https://www.physicsforums.com/threads/harmonic-motion-help.77340/
# Harmonic Motion Help 1. May 30, 2005 ### jaymay I am having trouble with this problem: A particle of mass 4.00 kg is attached to a spring with a force constant of 100 N/m. It is oscillating on a horizontal frictionless surface with an amplitude of 2.00 m. A 6.00-kg object is dropped vertically on top of the 4.00-kg object as it passes through its equilibrium point. The two objects stick together. (A) By how much does the amplitude of the vibrating system change as a result of the collision. (B) By how much does the period change? (C)By how much does the energy change? (D) Account for the change in energy? This is pretty much a plug-in problem, but my main question is how to solve for the new amplitude and energy after the collision. I am going over the formulas and it seems like the amplitude is in the formula for energy and energy is part of the formula for amplitude, how can I solve for one or the other when they both change after the collision? Can someone give me a clue? 2. May 30, 2005 ### OlderDan Ignore the spring forces during collision and treat the collision in terms of conservation of momentum. The vertical component will not be a factor; the normal force will stop the vertical motion. Horizontal momentum will be conserved. Calculate the kinetic energy after the collision. Use that to find the maximum displacement of the spring. Use the new mass to find the new period/frequency. 3. May 30, 2005 ### HallsofIvy Staff Emeritus I don't see a "collision". You are not told the height from which the new mass is dropped and it is, anyway, vertical, which will not affect horizontal motion. The only thing that happens is that the mass suddenly changes from 4 to 10 kg. 4. May 30, 2005 ### Staff: Mentor While the vertical motion doesn't matter, the horizontal motion does. Treat this as an inelastic collision between the moving 4 kg particle and the stationary 6 kg object. 5. May 30, 2005 ### jaymay Thanks for all the help. 
I was able to solve the problem as you all suggested by treating it as an inelastic collision. I solved for the energy of the system before the collision, and from that energy I got the initial velocity. With the initial velocity I found the final velocity by treating it as an inelastic collision, and once I had the final velocity I could solve for the new amplitude; the rest was just plug-in. Thanks again for all the help.
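The solution path described in the thread can be run end to end; a sketch using only standard SHM and momentum formulas and the values from the problem:

```python
import math

k = 100.0    # N/m, spring constant
m1 = 4.0     # kg, oscillating mass
m2 = 6.0     # kg, dropped mass
A1 = 2.0     # m, initial amplitude

E1 = 0.5 * k * A1**2                         # 200 J, energy before
v1 = math.sqrt(2 * E1 / m1)                  # 10 m/s at equilibrium
v2 = m1 * v1 / (m1 + m2)                     # 4 m/s, perfectly inelastic collision
E2 = 0.5 * (m1 + m2) * v2**2                 # 80 J, energy after
A2 = math.sqrt(2 * E2 / k)                   # ~1.26 m, new amplitude

T1 = 2 * math.pi * math.sqrt(m1 / k)         # ~1.26 s
T2 = 2 * math.pi * math.sqrt((m1 + m2) / k)  # ~1.99 s

print(A2 - A1, T2 - T1, E2 - E1)
```

The 120 J "missing" energy is dissipated in the perfectly inelastic collision (part D of the question).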
https://natural-language-understanding.fandom.com/wiki/Rule_extraction
TODO: Nickel et al. (2016)[1] "The basis for the Semantic Web is Description Logic and [109, 110, 111] describe approaches for logic-oriented machine learning approaches in this context. Also to mention are data mining approaches for knowledge graphs as described in [112, 113, 114]."

## Extracting rules from a neural network

Sushil et al. (2018)[2] experimented with extracting rules from a small neural network trained on the 20 newsgroups dataset. The extracted rule set achieves a macro-average F1 of 0.8 (why not micro-average?), while the source neural net gets 0.82. They release the source code and analyze examples.

## Statistical approaches

From Yang et al. (2015)[3]: "The key problem of extracting Horn rules like the aforementioned example is how to effectively explore the search space. Traditional rule mining approaches directly operate on the KB graph – they search for possible rules (i.e. closed-paths in the graph) by pruning rules with low statistical significance and relevance (Schoenmackers et al., 2010). These approaches often fail on large KB graphs due to scalability issues."

## Embedding-based approaches

### Extracting Horn rules

Yang et al. (2015)[3] use embeddings to extract Horn rules of length 2: $B_1(a,b) \wedge B_2(b,c) \Rightarrow H(a,c)$. For a pair of relations, they create a composed representation (adding the two relation vectors, or multiplying if they are matrices). The composed representation should be "similar" to the representation of H (Euclidean distance for vectors, Frobenius norm for matrices). The algorithm only needs to iterate over (a subset of) relation combinations instead of nodes in the graph, so it runs much faster than traditional statistical approaches. Yang et al. also demonstrated that it is more accurate than AMIE (Galárraga et al., 2013[4]).

## Evaluation

From Yang et al.
(2015)[3]: "We consider precision as the evaluation metric, which is the ratio of predictions that are in the test (unseen) data to all the generated unseen predictions. Note that this is an estimation, since a prediction is not necessarily “incorrect” if it is not seen. Galárraga et al. (2013) suggested to identify incorrect predictions based on the functional property of relations. However, we find that most relations in our datasets are not functional."

## References

1. Nickel, M., Murphy, K., Tresp, V., & Gabrilovich, E. (2016). A review of relational machine learning for knowledge graphs. Proceedings of the IEEE, 104(1), 11–33. doi:10.1109/JPROC.2015.2483592
2. Sushil, M., Šuster, S., & Daelemans, W. (2018). Rule induction for global explanation of trained models. Retrieved from http://arxiv.org/abs/1808.09744
3. Yang, B., Yih, W., He, X., Gao, J., & Deng, L. (2015). Embedding Entities and Relations for Learning and Inference in Knowledge Bases. ICLR 2015. Retrieved from http://arxiv.org/abs/1412.6575
4. Galárraga, L. A., Preda, N., & Suchanek, F. M. (2013). Mining Rules to Align Knowledge Bases. Proceedings of the 2013 Workshop on Automated Knowledge Base Construction, 43–48. doi:10.1145/2509558.2509566
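The embedding-based rule search described in this article (compose two relation embeddings and compare with the head relation) can be sketched with toy data; the relation names and embeddings below are invented purely for illustration, with one near-exact rule planted so the search has something to find:

```python
import math
import random

random.seed(0)
DIM = 8

def rand_vec():
    return [random.gauss(0, 1) for _ in range(DIM)]

# Toy relation embeddings (vector case: composition = addition).
emb = {r: rand_vec() for r in ["born_in", "city_of", "nationality"]}
# Plant a near-exact rule: born_in(a,b) AND city_of(b,c) => nationality(a,c)
emb["nationality"] = [a + b + 0.01 * random.gauss(0, 1)
                      for a, b in zip(emb["born_in"], emb["city_of"])]

def dist(u, v):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(u, v)))

# Score every candidate rule B1 AND B2 => H by how close emb[B1] + emb[B2]
# is to emb[H]; a small distance marks a promising rule.
scores = []
for b1 in emb:
    for b2 in emb:
        for h in emb:
            if h in (b1, b2):
                continue
            comp = [x + y for x, y in zip(emb[b1], emb[b2])]
            scores.append((dist(comp, emb[h]),
                           f"{b1}(a,b) AND {b2}(b,c) => {h}(a,c)"))

best_dist, best_rule = min(scores)
print(best_rule, best_dist)
```

The point of the approach is visible even at toy scale: the loop runs over relation combinations only, never over the (potentially huge) entity graph.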
https://tex.stackexchange.com/questions/linked/116506
19 questions linked to/from How are big operators defined? 718 views ### How to create an expectation symbol that behave similarly to \sum? [duplicate] Introduction This question is already asked here. If you want you could consider this one a duplicate. However, even if the proposed solution satisfied the one that posted the question, it doesn't ... 58 views ### A large “#” symbol [duplicate] I´m trying to typeset a large # symbol. I tried \newcommand*{\con}{\scalebox{1.5}{\ensuremath{\#}}} but i would also like to add subscripts in math mode so that they appear underneath the symbol. I ... 43 views ### Displaystyle/limits for custom operators [duplicate] I want to write this: ${\text{\huge N}}\limits_{i<k}$ and have it produce output comparable to this: $\sum\limits_{i<k}$ That is, I want a sum with limits underneath it, but I want it ... 32 views ### Create my own symbols such as Bigotimes [duplicate] I am using the symbol $\Box$ as a binary operator and would like to create a unary operator symbol starting from that. My goal is to have a symbol that behaves like $\Bigotimes$ compared to $\otimes$, ... 11k views ### How to create my own math operator with limits? How can i write my own math operator with limits? I want it to look like: \sum\limits_{e=1}^{m} but with a capital A (if possible bigger than the normal text) instead of the sum. Thanks for the help!... 100k views ### How to make math symbols bigger? Is there a way to make math symbols bigger? Reason: I've used \sfrac{q}{m}, and those symbols appear far to small, so I would like to make them a bit bigger. edit: I just thought that \sfrac{q}{m} ... 2k views ### How can I define a big plus operator that works like \bigcup? I want to define a "\bigplus" operator that is a big + symbol that changes size according to environment and has limits, just like \bigcup works. I've read about defining a "\bigtimes" (How can I get ... 
2k views ### Writing X as a symbol with limits I wish to write X as product like $X_{n=1}^k$. How to write it? For example, we write $\sum \limits _{n=1}^k$ 974 views ### How to control the size of math symbols in an equation? I've been struggling with this task for some time now. Basically I want to make this: Into something like this: I've tried \mathlarger from relsize package, but it simply didn't work (no effect at ... 505 views ### What wrapping should I use to create a new symbol? It is a truth universally acknowledged, that a mathematician in possession of a good theorem must be in want of notation.1 It is as true, but perhaps less universal, that unicode doesn't contain ... 252 views 374 views ### Using any symbol for “\limits” I want to use different symbols instead of Sigma for \sum and the like. I tried \mathop{\Lambda}\limits^n_{i=1} But the symbol is small compared to \sum Then I tried the relsize package \mathop{\...
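Most of the questions listed above reduce to the same recipe: amsmath's starred \DeclareMathOperator* defines an operator whose sub/superscripts are set as limits in display style, just like \sum. A minimal sketch (the operator name \bigN, the letter N, and the scaling factor are arbitrary choices for illustration):

```latex
\documentclass{article}
\usepackage{amsmath}
\usepackage{graphicx} % for \scalebox

% The * form makes _{...} and ^{...} behave as limits in display style,
% exactly as they do for \sum.
\DeclareMathOperator*{\bigN}{\scalebox{1.6}{$\mathrm{N}$}}

\begin{document}
\[
  \bigN_{i<k} x_i \qquad \sum_{i<k} x_i
\]
\end{document}
```

For a symbol that also scales with \big/\Big delimiters or participates in spacing as a genuine "big operator", the mathtools/scalerel-based answers in the linked questions go further, but the sketch above covers the common case of limits underneath a large symbol.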
https://www.geteasysolution.com/166.666666666666667_as_a_fraction
# 166.666666666666667 as a fraction ## 166.666666666666667 as a fraction - solution and the full explanation with calculations. Below you can find the full step by step solution for your problem. We hope it will be very helpful for you and that it will help you to understand the solving process. If it's not what you are looking for, type your number into the box below and see the solution. ## What is 166.666666666666667 as a fraction? To write 166.666666666666667 as a fraction you have to write 166.666666666666667 as the numerator and put 1 as the denominator. Now you multiply the numerator and denominator by 10 until the numerator is a whole number. Since there are 15 digits after the decimal point, this takes 15 steps: 166.666666666666667 = 166.666666666666667/1 = 1666.66666666666667/10 = 16666.6666666666667/100 = … = 166666666666666667/1000000000000000 And finally we have: 166.666666666666667 as a fraction equals 166666666666666667/1000000000000000. The numerator is odd and not divisible by 5, so this fraction is already in lowest terms. (Note that the value is very close to, but not exactly equal to, 500/3 = 166.666….)
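The same conversion can be done exactly with Python's standard-library `fractions` module, which accepts a decimal string and normalizes to lowest terms:

```python
from fractions import Fraction

# Build the fraction exactly from the decimal string: the decimal point
# is shifted 15 places, giving digits / 10**15.
f = Fraction("166.666666666666667")

# The numerator is odd and not divisible by 5, so it is already reduced.
print(f)         # 166666666666666667/1000000000000000
print(float(f))  # the closest double to 166.666666666666667
```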
https://www.scm.com/highlights/parametrizing-gfn1-xtb-for-hybrid-perovskites/
# Parametrizing GFN1-xTB for hybrid perovskites Halide perovskites are promising materials for optoelectronic applications in solar cell devices due to their excellent optoelectronic performance. However, they suffer from several dynamical degradation problems, which are difficult to characterize. Atomistic simulations can provide valuable insights, although the high computational cost of first-principles methods such as DFT makes it challenging to model dynamical processes in large perovskite systems. To overcome these limitations, researchers from Eindhoven University of Technology (TU/e) have refined GFN1-xTB parameters to accurately describe the structural, energetic, and dynamical properties of inorganic halide perovskites. In a recent study, the semi-empirical density functional tight binding method, GFN1-xTB, has been refined using ParAMS to improve the performance of computing properties of perovskites containing Cs, Pb, I, and Br atoms. A training set based on DFT calculations has been generated to train a set of parameters of the GFN1-xTB Hamiltonian. The performance of the refined parameters has been benchmarked against experiments and DFT calculations, showing an accurate description of the phase transition of these halide perovskites. The study shows that the phase stability is strongly correlated to the displacement of ions in the perovskites. In the orthorhombic phase, the directional movement of the Cs cations increases their distance to the surrounding halides, which can trigger decomposition to the nonperovskite phase. However, once enough thermal energy is available, the increased Cs−halide distance can be compensated by increased halide fluctuations, resulting in a transition to the phase-stable tetragonal or cubic phases. Furthermore, it is shown that the mixing of halides increases halide displacement, thus decreasing the phase transition temperatures and therefore improving the phase stability of the perovskites.
https://cosmicreflections.skythisweek.info/2017/11/15/average-orbital-distance/
# Average Orbital Distance If a planet is orbiting the Sun with a semi-major axis, a, and orbital eccentricity, e, it is often stated that the average distance of the planet from the Sun is simply a.  This is only true for circular orbits (e = 0), where the planet maintains a constant distance from the Sun, and that distance is a. Let's imagine a hypothetical planet much like the Earth that has a perfectly circular orbit around the Sun with a = 1.0 AU and e = 0.  It is easy to see in this case that at all times, the planet will be exactly 1.0 AU from the Sun. If, however, the planet orbits the Sun in an elliptical orbit with a = 1 AU and e > 0, we find that the planet orbits more slowly when it is farther from the Sun than when it is nearer the Sun.  So, you'd expect the time-averaged distance to be greater than 1.0 AU.  This is indeed the case: averaging over time, the mean distance is $$\bar{r} = a\left(1 + \frac{e^2}{2}\right).$$ The Earth's current osculating orbital elements give us: a = 0.999998 AU and e = 0.016694 Earth's average distance from the Sun is thus: $$\bar{r} = 0.999998\left(1 + \frac{0.016694^2}{2}\right) \approx 1.000137 \textrm{ AU}.$$ Mercury, the innermost planet, has the most eccentric orbit of all the major planets: a = 0.387098 AU and e = 0.205638 Mercury's average distance from the Sun is thus: $$\bar{r} = 0.387098\left(1 + \frac{0.205638^2}{2}\right) \approx 0.395283 \textrm{ AU}.$$
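Assuming simple two-body Keplerian motion, the time average can also be checked numerically: sample the mean anomaly M uniformly (M is uniform in time), solve Kepler's equation for the eccentric anomaly E, and average r = a(1 − e cos E). A minimal Python sketch:

```python
import math

def avg_distance(a, e, n=50_000):
    """Time-averaged orbital distance: average r = a(1 - e*cos(E)) over
    mean anomalies M sampled uniformly in [0, 2*pi), solving Kepler's
    equation E - e*sin(E) = M by Newton's method."""
    total = 0.0
    for k in range(n):
        M = 2 * math.pi * (k + 0.5) / n
        E = M
        for _ in range(8):  # Newton iterations; converges quickly for e < 1
            E -= (E - e * math.sin(E) - M) / (1 - e * math.cos(E))
        total += a * (1 - e * math.cos(E))
    return total / n

for name, a, e in [("Earth", 0.999998, 0.016694),
                   ("Mercury", 0.387098, 0.205638)]:
    # both columns agree: the numerical average matches a(1 + e^2/2)
    print(name, round(avg_distance(a, e), 6), round(a * (1 + e**2 / 2), 6))
```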
http://dlmf.nist.gov/33.18
§33.18 Limiting Forms for Large $\ell$ As $\ell\to\infty$ with $\epsilon$ and $r$ ($\neq 0$) fixed, 33.18.1 $$f(\epsilon,\ell;r) \sim \frac{(2r)^{\ell+1}}{(2\ell+1)!}, \qquad h(\epsilon,\ell;r) \sim \frac{(2\ell)!}{\pi(2r)^{\ell}}.$$
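As a small consistency check on these limiting forms (an illustration, not part of the DLMF), note that their product telescopes: $f\,h \sim \frac{(2r)^{\ell+1}(2\ell)!}{(2\ell+1)!\,\pi(2r)^{\ell}} = \frac{2r}{\pi(2\ell+1)}$. The snippet below evaluates only the right-hand sides of the asymptotic forms above:

```python
import math

def f_asym(ell, r):
    # (2r)^(ell+1) / (2*ell + 1)!
    return (2 * r) ** (ell + 1) / math.factorial(2 * ell + 1)

def h_asym(ell, r):
    # (2*ell)! / (pi * (2r)^ell)
    return math.factorial(2 * ell) / (math.pi * (2 * r) ** ell)

ell, r = 12, 0.7
product = f_asym(ell, r) * h_asym(ell, r)
simplified = 2 * r / (math.pi * (2 * ell + 1))
# the factorials cancel except for a single (2*ell + 1) factor
assert math.isclose(product, simplified, rel_tol=1e-12)
```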
https://www.mathemania.com/lesson/trigonometric-form-complex-numbers/
# Trigonometric form of complex numbers Let $M(x,y)$ be the point in the complex plane associated with the complex number $z = x + yi$. We can determine the position of the point $M$ (and thus of the complex number $z$) using the numbers $r$ and $\varphi$, where $r = |z| = \sqrt{x^2+y^2}$ (the distance from the point $M$ to the origin) and $\varphi \in [0, 2\pi)$ is the angle between the segment $\overline{OM}$ and the positive part of the real axis. The number $r$ is called the modulus of the complex number and the angle $\varphi$ is called the argument of the complex number, denoted by $\varphi = \arg(z)$. Then $$\cos \varphi = \frac{x}{r} \Rightarrow x = r \cos \varphi$$ and $$\sin \varphi = \frac{y}{r} \Rightarrow y = r \sin \varphi.$$ Substituting into the expression $z = x + yi$, we obtain the trigonometric form of the complex number: $$z = r (\cos \varphi + i \sin \varphi).$$ If a complex number is given in the algebraic form $z = x+yi$, then we determine $r$ and $\varphi$ from the equations: $$r = \sqrt{x^2 + y^2},$$ $$\tan \varphi = \frac{y}{x}, \quad x \neq 0.$$ The last equation gives two solutions for the angle $\varphi \in [0, 2\pi)$; we choose the angle depending on the quadrant in which the number $z$ is located. Example 1: Write the complex numbers $z$ and $\overline{z}$ in trigonometric form, where $z = \frac{1}{2} - \frac{\sqrt{3}}{2}i$. Solution: We need to determine the numbers $r$ and $\varphi$. $$r = |z| = \left|\frac{1}{2} - \frac{\sqrt{3}}{2}i\right| = \sqrt{\left( \frac{1}{2} \right)^2 + \left( -\frac{\sqrt{3}}{2} \right)^2} = \sqrt{\frac{1}{4} + \frac{3}{4}} = 1.$$ $$\tan \varphi = \frac{-\frac{\sqrt{3}}{2}}{\frac{1}{2}} = -\sqrt{3},$$ that is, $\varphi = \frac{2\pi}{3}$ or $\varphi = \frac{5\pi}{3}$. Since the number $z = \frac{1}{2} - \frac{\sqrt{3}}{2}i$ is located in the fourth quadrant, it follows that $\varphi = \frac{5\pi}{3}$. 
The complex number $z = \frac{1}{2} - \frac{\sqrt{3}}{2}i$ therefore has the trigonometric form: $$z = \cos \frac{5 \pi}{3} + i \sin \frac{5\pi}{3}.$$ Complex conjugates have the same modulus, so for $\overline{z} = \frac{1}{2} + \frac{\sqrt{3}}{2}i$ we also have $r = 1$. $\overline{z}$ is located in the first quadrant, so we have: $$\tan \varphi = \frac{\frac{\sqrt{3}}{2}}{\frac{1}{2}} = \sqrt{3} \Rightarrow \varphi = \frac{\pi}{3}.$$ Finally, the complex number $\overline{z} = \frac{1}{2} + \frac{\sqrt{3}}{2}i$ has the following trigonometric form: $$\overline{z} = \cos \frac{\pi}{3} + i \sin \frac{\pi}{3}.$$ Example 2: Write the complex number $z$ in trigonometric form: $$z = -\cos \frac{\pi}{5} + i \sin \frac{\pi}{5}.$$ Solution: The function cosine is negative in the second and third quadrants, and sine is positive in the first and second quadrants. This means that the given complex number $z$ is located in the second quadrant. $$r = \sqrt{\left( -\cos \frac{\pi}{5}\right)^2 + \left( \sin\frac{\pi}{5} \right)^2} = \sqrt{1} = 1$$ Now we have: $$\tan \varphi = \frac{\sin \frac{\pi}{5}}{-\cos \frac{\pi}{5}} = -\tan \frac{\pi}{5}.$$ That is, $\varphi = -\frac{\pi}{5}$ or $\varphi = \frac{4\pi}{5}$. We know that the complex number $z$ is located in the second quadrant, which means that $\varphi = \frac{4\pi}{5}$. Now we can write the given complex number $z$ in trigonometric form: $$z = \cos \frac{4 \pi}{5} + i \sin \frac{4 \pi}{5}.$$
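The conversion in Example 1 can be checked numerically with Python's standard-library `cmath` module, which returns the modulus and argument directly (shifting the argument into $[0, 2\pi)$ to match the convention used above):

```python
import cmath
import math

z = 0.5 - (math.sqrt(3) / 2) * 1j   # the z from Example 1

r, phi = cmath.polar(z)             # cmath returns phi in (-pi, pi]
if phi < 0:
    phi += 2 * math.pi              # shift into [0, 2*pi)

print(r)                            # 1.0 (up to rounding)
print(phi, 5 * math.pi / 3)         # both approximately 5.23598...
```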
https://linearalgebras.com/solution-abstract-algebra-exercise-2-2-8.html
If you find any mistakes, please make a comment! Thank you. ## Compute the order of a stabilizer in Sym(n) Solution to Abstract Algebra by Dummit & Foote, 3rd edition, Chapter 2.2, Exercise 2.2.8 Let $G = S_n$ and fix $i \in \{ 1, 2, \ldots, n \}$. Prove that $\mathsf{stab}_G(i)$ is a subgroup of $G$, and find $|\mathsf{stab}_G(i)|$. Solution: Note that $\mathsf{stab}_G(i)$ is not empty, since $1 \in \mathsf{stab}_G(i)$. Now suppose $\sigma$, $\tau \in \mathsf{stab}_G(i)$; since $\tau(i) = i$ implies $\tau^{-1}(i) = i$, we have $$(\sigma \circ \tau^{-1})(i) = \sigma(\tau^{-1}(i)) = \sigma(i) = i,$$ so that $\sigma \circ \tau^{-1} \in \mathsf{stab}_G(i)$. By the subgroup criterion, $\mathsf{stab}_G(i) \leq G$. Now every permutation that fixes $i$ is a permutation of the remaining $n-1$ elements of the set $\{ 1, 2, \ldots, n \}$, and there are $(n-1)!$ such permutations. Thus $|\mathsf{stab}_G(i)| = (n-1)!$. #### Linearity This website is supposed to help you study Linear Algebra. Please only read these solutions after thinking about the problems carefully. Do not just copy these solutions.
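The count $|\mathsf{stab}_G(i)| = (n-1)!$ can be verified by brute force for small $n$ (a quick Python illustration, not part of the exercise): enumerate all of $S_n$ and count the permutations fixing a chosen point.

```python
from itertools import permutations
from math import factorial

def stab_size(n, i):
    """Number of permutations of {1, ..., n} that fix the point i."""
    return sum(1 for p in permutations(range(1, n + 1)) if p[i - 1] == i)

# |stab(i)| = (n-1)! for every choice of the fixed point i
for n in range(1, 8):
    for i in range(1, n + 1):
        assert stab_size(n, i) == factorial(n - 1)
print("verified for n = 1..7")
```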
https://www.nag.com/numeric/nl/nagdoc_latest/clhtml/g13/g13ndc.html
NAG CL Interface: g13ndc (cp_binary) 1 Purpose g13ndc detects change points in a univariate time series, that is, the time points at which some feature of the data, for example the mean, changes. Change points are detected using binary segmentation using one of a provided set of cost functions. 2 Specification #include <nag.h> void g13ndc (Nag_TS_ChangeType ctype, Integer n, const double y[], double beta, Integer minss, const double param[], Integer mdepth, Integer *ntau, Integer tau[], double sparam[], NagError *fail) The function may be called by the names: g13ndc or nag_tsa_cp_binary. 3 Description Let $y_{1:n}=\{y_j : j=1,2,\dots,n\}$ denote a series of data and $\tau=\{\tau_i : i=1,2,\dots,m\}$ denote a set of $m$ ordered (strictly monotonically increasing) indices known as change points, with $1\le\tau_i\le n$ and $\tau_m=n$. For ease of notation we also define $\tau_0=0$. The $m$ change points, $\tau$, split the data into $m$ segments, with the $i$th segment being of length $n_i$ and containing $y_{\tau_{i-1}+1:\tau_i}$. Given a cost function, $C(y_{\tau_{i-1}+1:\tau_i})$, g13ndc gives an approximate solution to $$\underset{m,\tau}{\text{minimize}} \; \sum_{i=1}^{m} \left( C(y_{\tau_{i-1}+1:\tau_i}) + \beta \right)$$ where $\beta$ is a penalty term used to control the number of change points. The solution is obtained in an iterative manner as follows: 1. Set $u=1$, $w=n$ and $k=0$. 2. Set $k=k+1$. If $k>K$, where $K$ is a user-supplied control parameter, then terminate the process for this segment. 3. Find $v$ that minimizes $C(y_{u:v}) + C(y_{v+1:w})$. 4. Test $$C(y_{u:v}) + C(y_{v+1:w}) + \beta < C(y_{u:w}) \tag{1}$$ 5. If inequality (1) is false then the process is terminated for this segment. 6. If inequality (1) is true, then $v$ is added to the set of change points, and the segment is split into two subsegments, $y_{u:v}$ and $y_{v+1:w}$. 
The whole process is repeated from step 2 independently on each subsegment, with the relevant changes to the definition of $u$ and $w$ (i.e., $w$ is set to $v$ when processing the left-hand subsegment and $u$ is set to $v+1$ when processing the right-hand subsegment). The change points are ordered to give $\tau$. g13ndc supplies four families of cost function. Each cost function assumes that the series, $y$, comes from some distribution, $D(\Theta)$. The parameter space, $\Theta = \{\theta, \varphi\}$, is subdivided into $\theta$, containing those parameters allowed to differ in each segment, and $\varphi$, those parameters treated as constant across all segments. All four cost functions can then be described in terms of the likelihood function, $L$, and are given by: $$C(y_{(\tau_{i-1}+1):\tau_i}) = -2 \log L(\hat{\theta}_i, \varphi \,|\, y_{(\tau_{i-1}+1):\tau_i})$$ where $\hat{\theta}_i$ is the maximum likelihood estimate of $\theta$ within the $i$th segment. Four distributions are available: the Normal, Gamma, Exponential and Poisson distributions. 
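The binary segmentation scheme above can be sketched in a few lines. The following Python illustration (a sketch, not the NAG implementation) uses the Normal mean-change cost with the variance fixed at $\sigma^2 = 1$, i.e. $C(y_{u:w}) = \sum (y_j - \bar{y})^2$, prefix sums for O(1) segment costs, and a BIC-style penalty:

```python
import math

def binary_segmentation(y, beta, minss=2):
    """Sketch of binary segmentation with the Normal mean-change cost
    (variance fixed at 1): C(segment) = sum of squared deviations from
    the segment mean. Returns the change points, ending with n."""
    n = len(y)
    p1 = [0.0] * (n + 1)   # prefix sums of y
    p2 = [0.0] * (n + 1)   # prefix sums of y^2
    for j, v in enumerate(y):
        p1[j + 1] = p1[j] + v
        p2[j + 1] = p2[j] + v * v

    def cost(u, w):        # cost of the half-open segment y[u:w]
        s = p1[w] - p1[u]
        return (p2[w] - p2[u]) - s * s / (w - u)

    taus = []

    def split(u, w):
        best_v, best = None, cost(u, w) - beta
        for v in range(u + minss, w - minss + 1):
            c = cost(u, v) + cost(v, w)
            if c < best:   # test (1): C(left) + C(right) + beta < C(all)
                best_v, best = v, c
        if best_v is not None:
            split(u, best_v)
            taus.append(best_v)
            split(best_v, w)

    split(0, n)
    return taus + [n]

# One mean shift at t = 20, BIC penalty beta = log(n)
y = [0.0] * 20 + [5.0] * 20
print(binary_segmentation(y, beta=math.log(40)))   # -> [20, 40]
```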
Letting $$S_i = \sum_{j=\tau_{i-1}+1}^{\tau_i} y_j$$ the log-likelihoods and cost functions for the four distributions, and the available subdivisions of the parameter space, are: • Normal distribution: $\Theta=\{\mu,\sigma^2\}$ $$-2\log L = \sum_{i=1}^{m} \sum_{j=\tau_{i-1}+1}^{\tau_i} \left( \log(2\pi) + \log(\sigma_i^2) + \frac{(y_j-\mu_i)^2}{\sigma_i^2} \right)$$ • Mean changes: $\theta=\{\mu\}$ $$C(y_{\tau_{i-1}+1:\tau_i}) = \sum_{j=\tau_{i-1}+1}^{\tau_i} \frac{(y_j-n_i^{-1}S_i)^2}{\sigma^2}$$ • Variance changes: $\theta=\{\sigma^2\}$ $$C(y_{\tau_{i-1}+1:\tau_i}) = n_i \left( \log\left( \sum_{j=\tau_{i-1}+1}^{\tau_i} (y_j-\mu)^2 \right) - \log n_i \right)$$ • Both mean and variance change: $\theta=\{\mu,\sigma^2\}$ $$C(y_{\tau_{i-1}+1:\tau_i}) = n_i \left( \log\left( \sum_{j=\tau_{i-1}+1}^{\tau_i} (y_j-n_i^{-1}S_i)^2 \right) - \log n_i \right)$$ • Gamma distribution: $\Theta=\{a,b\}$ $$-2\log L = 2 \sum_{i=1}^{m} \sum_{j=\tau_{i-1}+1}^{\tau_i} \left( \log\Gamma(a_i) + a_i\log b_i + (1-a_i)\log y_j + \frac{y_j}{b_i} \right)$$ • Scale changes: $\theta=\{b\}$ $$C(y_{\tau_{i-1}+1:\tau_i}) = 2 a n_i \left( \log S_i - \log(a n_i) \right)$$ • Exponential distribution: $\Theta=\{\lambda\}$ $$-2\log L = 2 \sum_{i=1}^{m} \sum_{j=\tau_{i-1}+1}^{\tau_i} \left( \log\lambda_i + \frac{y_j}{\lambda_i} \right)$$ • Mean changes: $\theta=\{\lambda\}$ $$C(y_{\tau_{i-1}+1:\tau_i}) = 2 n_i \left( \log S_i - \log n_i \right)$$ • Poisson distribution: $\Theta=\{\lambda\}$ $$-2\log L = 2 \sum_{i=1}^{m} \sum_{j=\tau_{i-1}+1}^{\tau_i} \left( \lambda_i - \lfloor y_j+0.5\rfloor \log\lambda_i + \log\Gamma(\lfloor y_j+0.5\rfloor + 1) \right)$$ • Mean changes: $\theta=\{\lambda\}$ $$C(y_{\tau_{i-1}+1:\tau_i}) = 2 S_i \left( \log n_i - \log S_i \right)$$ When calculating $S_i$ for the Poisson distribution, the sum is calculated over $\lfloor y_j+0.5\rfloor$ rather than $y_j$. 4 References Chen J and Gupta A K (2010) Parametric Statistical Change Point Analysis With Applications to Genetics, Medicine and Finance (Second Edition) Birkhäuser West D H D (1979) Updating mean and variance estimates: An improved method Comm. ACM 22 532–555 5 Arguments 1: ctype — Nag_TS_ChangeType Input On entry: a flag indicating the assumed distribution of the data and the type of change point being looked for. ctype = Nag_NormalMean: data from a Normal distribution, looking for changes in the mean, $\mu$. 
${\mathbf{ctype}}=\mathrm{Nag_NormalStd}$ Data from a Normal distribution, looking for changes in the standard deviation $\sigma$. ${\mathbf{ctype}}=\mathrm{Nag_NormalMeanStd}$ Data from a Normal distribution, looking for changes in the mean, $\mu$ and standard deviation $\sigma$. ${\mathbf{ctype}}=\mathrm{Nag_GammaScale}$ Data from a Gamma distribution, looking for changes in the scale parameter $b$. ${\mathbf{ctype}}=\mathrm{Nag_ExponentialLambda}$ Data from an exponential distribution, looking for changes in $\lambda$. ${\mathbf{ctype}}=\mathrm{Nag_PoissonLambda}$ Data from a Poisson distribution, looking for changes in $\lambda$. Constraint: ${\mathbf{ctype}}=\mathrm{Nag_NormalMean}$, $\mathrm{Nag_NormalStd}$, $\mathrm{Nag_NormalMeanStd}$, $\mathrm{Nag_GammaScale}$, $\mathrm{Nag_ExponentialLambda}$ or $\mathrm{Nag_PoissonLambda}$. 2: $\mathbf{n}$Integer Input On entry: $n$, the length of the time series. Constraint: ${\mathbf{n}}\ge 2$. 3: $\mathbf{y}\left[{\mathbf{n}}\right]$const double Input On entry: $y$, the time series. If ${\mathbf{ctype}}=\mathrm{Nag_PoissonLambda}$, that is the data is assumed to come from a Poisson distribution, $⌊y+0.5⌋$ is used in all calculations. Constraints: • if ${\mathbf{ctype}}=\mathrm{Nag_GammaScale}$, $\mathrm{Nag_ExponentialLambda}$ or $\mathrm{Nag_PoissonLambda}$, ${\mathbf{y}}\left[\mathit{i}-1\right]\ge 0$, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$; • if ${\mathbf{ctype}}=\mathrm{Nag_PoissonLambda}$, each value of y must be representable as an integer; • if ${\mathbf{ctype}}\ne \mathrm{Nag_PoissonLambda}$, each value of y must be small enough such that${{\mathbf{y}}\left[\mathit{i}-1\right]}^{2}$, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$, can be calculated without incurring overflow. 4: $\mathbf{beta}$double Input On entry: $\beta$, the penalty term. 
There are a number of standard ways of setting $\beta$, including: • SIC or BIC: $\beta = p \log(n)$ • AIC: $\beta = 2p$ • Hannan-Quinn: $\beta = 2p \log(\log(n))$ where $p$ is the number of parameters being treated as estimated in each segment. This is usually set to $2$ when ctype = Nag_NormalMeanStd and $1$ otherwise. If no penalty is required then set $\beta = 0$. Generally, the smaller the value of $\beta$, the larger the number of suggested change points. 5: minss — Integer Input On entry: the minimum distance between two change points, that is ${\tau_i}-{\tau_{i-1}}\ge {\mathbf{minss}}$. Constraint: ${\mathbf{minss}}\ge 2$. 6: param[1] — const double Input On entry: $\varphi$, values for the parameters that will be treated as fixed. If ctype = Nag_GammaScale then param must be supplied, otherwise param may be NULL. If supplied, then: when ctype = Nag_NormalMean, ${\mathbf{param}}\left[0\right]=\sigma$, the standard deviation of the normal distribution (if not supplied then $\sigma$ is estimated from the full input data); when ctype = Nag_NormalStd, ${\mathbf{param}}\left[0\right]=\mu$, the mean of the normal distribution (if not supplied then $\mu$ is estimated from the full input data); when ctype = Nag_GammaScale, ${\mathbf{param}}\left[0\right]$ must hold the shape, $a$, for the gamma distribution. Otherwise param is not referenced. Constraint: if ctype = Nag_NormalMean or Nag_GammaScale, ${\mathbf{param}}\left[0\right]>0.0$. 7: mdepth — Integer Input On entry: $K$, the maximum depth for the iterative process, which in turn puts an upper limit on the number of change points with $m\le {2}^{K}$. 
If $K\le 0$ then no limit is put on the depth of the iterative process and no upper limit is put on the number of change points, other than that inherent in the length of the series and the value of minss. 8: $\mathbf{ntau}$Integer * Output On exit: $m$, the number of change points detected. 9: $\mathbf{tau}\left[\mathit{dim}\right]$Integer Output Note: the dimension, dim, of the array tau must be at least • $\mathrm{min}\phantom{\rule{0.125em}{0ex}}\left(⌈\frac{{\mathbf{n}}}{{\mathbf{minss}}}⌉,{2}^{{\mathbf{mdepth}}}\right)$, when ${\mathbf{mdepth}}>0$; • $⌈\frac{{\mathbf{n}}}{{\mathbf{minss}}}⌉$, otherwise. On exit: the first $m$ elements of tau hold the location of the change points. The $i$th segment is defined by ${y}_{\left({\tau }_{i-1}+1\right)}$ to ${y}_{{\tau }_{i}}$, where ${\tau }_{0}=0$ and ${\tau }_{i}={\mathbf{tau}}\left[i-1\right],1\le i\le m$. The remainder of tau is used as workspace. 10: $\mathbf{sparam}\left[2×{\mathbf{n}}\right]$double Output On exit: the estimated values of the distribution parameters in each segment ${\mathbf{ctype}}=\mathrm{Nag_NormalMean}$, $\mathrm{Nag_NormalStd}$ or $\mathrm{Nag_NormalMeanStd}$ ${\mathbf{sparam}}\left[2i-2\right]={\mu }_{i}$ and ${\mathbf{sparam}}\left[2i-1\right]={\sigma }_{i}$ for $i=1,2,\dots ,m$, where ${\mu }_{i}$ and ${\sigma }_{i}$ is the mean and standard deviation, respectively, of the values of $y$ in the $i$th segment. It should be noted that ${\sigma }_{i}={\sigma }_{j}$ when ${\mathbf{ctype}}=\mathrm{Nag_NormalMean}$ and ${\mu }_{i}={\mu }_{j}$ when ${\mathbf{ctype}}=\mathrm{Nag_NormalStd}$, for all $i$ and $j$. ${\mathbf{ctype}}=\mathrm{Nag_GammaScale}$ ${\mathbf{sparam}}\left[2i-2\right]={a}_{i}$ and ${\mathbf{sparam}}\left[2i-1\right]={b}_{i}$ for $i=1,2,\dots ,m$, where ${a}_{i}$ and ${b}_{i}$ are the shape and scale parameters, respectively, for the values of $y$ in the $i$th segment. It should be noted that ${a}_{i}={\mathbf{param}}\left[0\right]$ for all $i$. 
${\mathbf{ctype}}=\mathrm{Nag_ExponentialLambda}$ or $\mathrm{Nag_PoissonLambda}$: ${\mathbf{sparam}}\left[i-1\right]={\lambda}_{i}$ for $i=1,2,\dots,m$, where ${\lambda}_{i}$ is the mean of the values of $y$ in the $i$th segment. The remainder of sparam is used as workspace. 11: fail — NagError * Input/Output The NAG error argument (see Section 7 in the Introduction to the NAG Library CL Interface). 6 Error Indicators and Warnings NE_ALLOC_FAIL: Dynamic memory allocation failed. See Section 3.1.2 in the Introduction to the NAG Library CL Interface for further information. NE_BAD_PARAM: On entry, argument $⟨\mathit{\text{value}}⟩$ had an illegal value. NE_INT: On entry, ${\mathbf{minss}}=⟨\mathit{\text{value}}⟩$. Constraint: ${\mathbf{minss}}\ge 2$. On entry, ${\mathbf{n}}=⟨\mathit{\text{value}}⟩$. Constraint: ${\mathbf{n}}\ge 2$. NE_INTERNAL_ERROR: An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance. See Section 7.5 in the Introduction to the NAG Library CL Interface for further information. NE_NO_LICENCE: Your licence key may have expired or may not have been installed correctly. See Section 8 in the Introduction to the NAG Library CL Interface for further information. NE_REAL: On entry, ${\mathbf{ctype}}=⟨\mathit{\text{value}}⟩$ and ${\mathbf{param}}\left[0\right]=⟨\mathit{\text{value}}⟩$. Constraint: if ${\mathbf{ctype}}=\mathrm{Nag_NormalMean}$ or $\mathrm{Nag_GammaScale}$ and param has been supplied, then ${\mathbf{param}}\left[0\right]>0.0$. NE_REAL_ARRAY: On entry, ${\mathbf{ctype}}=⟨\mathit{\text{value}}⟩$ and ${\mathbf{y}}\left[⟨\mathit{\text{value}}⟩\right]=⟨\mathit{\text{value}}⟩$. Constraint: if ${\mathbf{ctype}}=\mathrm{Nag_GammaScale}$, $\mathrm{Nag_ExponentialLambda}$ or $\mathrm{Nag_PoissonLambda}$ then ${\mathbf{y}}\left[\mathit{i}-1\right]\ge 0.0$, for $\mathit{i}=1,2,\dots,{\mathbf{n}}$. 
On entry, ${\mathbf{y}}\left[⟨\mathit{\text{value}}⟩\right]=⟨\mathit{\text{value}}⟩$, which is too large. NW_TRUNCATED: To avoid overflow some truncation occurred when calculating the cost function, $C$. All output is returned as normal. To avoid overflow some truncation occurred when calculating the parameter estimates returned in sparam. All output is returned as normal. 7 Accuracy The calculation of means and sums of squares about the mean during the evaluation of the cost functions is based on the one pass algorithm of West (1979) and is believed to be stable. 8 Parallelism and Performance g13ndc is threaded by NAG for parallel execution in multithreaded implementations of the NAG Library. Please consult the X06 Chapter Introduction for information on how to control and interrogate the OpenMP environment used within this function. Please also consult the Users' Note for your implementation for any additional implementation-specific information. 9 Further Comments None. 10 Example This example identifies changes in the mean, under the assumption that the data is normally distributed, for a simulated dataset with $100$ observations. A BIC penalty is used, that is $\beta = \log n \approx 4.6$, the minimum segment size is set to $2$ and the variance is fixed at $1$ across the whole input series. 10.1 Program Text: Program Text (g13ndce.c) 10.2 Program Data: Program Data (g13ndce.d) 10.3 Program Results: Program Results (g13ndce.r) This example plot shows the original data series, the estimated change points and the estimated mean in each of the identified segments.
https://fr.mathworks.com/help/deeplearning/ug/train-fast-style-transfer-network.html
# Train Fast Style Transfer Network This example shows how to train a network to transfer the style of an image to a second image. It is based on the architecture defined in [1]. This example is similar to Neural Style Transfer Using Deep Learning, but it works faster once you have trained the network on a style image S. This is because, to obtain the stylized image Y, you only need to do a forward pass of the input image X through the network. Find a high-level diagram of the training algorithm below. It uses three images to calculate the loss: the input image X, the transformed image Y and the style image S. Note that the loss function uses the pretrained network VGG-16 to extract features from the images. You can find its implementation and mathematical definition in the Style Transfer Loss section of this example. Download and extract the COCO 2014 train images and captions from http://cocodataset.org/#download by clicking "2014 Train images". Save the data in the folder specified by `imageFolder`. Extract the images into `imageFolder`. The COCO 2014 data set was collected by the Coco Consortium. Create directories to store the COCO data set. ```imageFolder = fullfile(tempdir,"coco"); if ~exist(imageFolder,'dir') mkdir(imageFolder); end``` Create an image datastore containing the COCO images. `imds = imageDatastore(imageFolder,'IncludeSubfolders',true);` Training can take a long time to run. If you want to decrease the training time at the cost of accuracy of the resulting network, then select a subset of the image datastore by setting `fraction` to a smaller value. ```fraction = 1; numObservations = numel(imds.Files); imds = subset(imds,1:floor(numObservations*fraction));``` To resize the images and convert them all to RGB, create an augmented image datastore. `augimds = augmentedImageDatastore([256 256],imds,'ColorPreprocessing',"gray2rgb");` Load the style image and resize it to 256-by-256 pixels. ```styleImage = imread('starryNight.jpg'); styleImage = imresize(styleImage,[256 256]);``` Display the chosen style image. 
```
figure
imshow(styleImage)
title("Style Image")
```

### Define Image Transformer Network

Define the image transformer network. This is an image-to-image network. The network consists of three parts:

1. The first part of the network takes as input an RGB image of size [256x256x3] and downsamples it to a feature map of size [64x64x128].
2. The second part of the network consists of five identical residual blocks defined in the supporting function `residualBlock`.
3. The third and final part of the network upsamples the feature map to the original size of the image and returns the transformed image. This last part uses the `upsampleLayer`, which is a custom layer attached to this example as a supporting file.

```
layers = [
    % First part.
    imageInputLayer([256 256 3], 'Name', 'input', 'Normalization','none')

    convolution2dLayer([9 9], 32, 'Padding','same','Name', 'conv1')
    groupNormalizationLayer('channel-wise','Name','norm1')
    reluLayer('Name', 'relu1')

    convolution2dLayer([3 3], 64, 'Stride', 2,'Padding','same','Name', 'conv2')
    groupNormalizationLayer('channel-wise' ,'Name','norm2')
    reluLayer('Name', 'relu2')

    convolution2dLayer([3 3], 128, 'Stride', 2, 'Padding','same','Name', 'conv3')
    groupNormalizationLayer('channel-wise' ,'Name','norm3')
    reluLayer('Name', 'relu3')

    % Second part.
    residualBlock("1")
    residualBlock("2")
    residualBlock("3")
    residualBlock("4")
    residualBlock("5")

    % Third part.
    upsampleLayer('up1')
    convolution2dLayer([3 3], 64, 'Stride', 1, 'Padding','same','Name', 'upconv1')
    groupNormalizationLayer('channel-wise' ,'Name','norm6')
    reluLayer('Name', 'relu5')

    upsampleLayer('up2')
    convolution2dLayer([3 3], 32, 'Stride', 1, 'Padding','same','Name', 'upconv2')
    groupNormalizationLayer('channel-wise' ,'Name','norm7')
    reluLayer('Name', 'relu6')

    convolution2dLayer(9,3,'Padding','same','Name','conv_out')];

lgraph = layerGraph(layers);
```

Add missing connections in residual blocks.
```
lgraph = connectLayers(lgraph,"relu3","add1/in2");
lgraph = connectLayers(lgraph,"add1","add2/in2");
lgraph = connectLayers(lgraph,"add2","add3/in2");
lgraph = connectLayers(lgraph,"add3","add4/in2");
lgraph = connectLayers(lgraph,"add4","add5/in2");
```

Visualize the image transformer network in a plot.

```
figure
plot(lgraph)
title("Transform Network")
```

Create a `dlnetwork` object from the layer graph.

```
dlnetTransform = dlnetwork(lgraph);
```

### Style Loss Network

This example uses a pretrained VGG-16 deep neural network to extract the features of the content and style images at different layers. These multilayer features are used to compute respective content and style losses.

To get a pretrained VGG-16 network, use the `vgg16` function. If you do not have the required support packages installed, then the software provides a download link.

```
netLoss = vgg16;
```

To extract the features necessary to calculate the loss, you need the first 24 layers only. Extract them and convert to a layer graph.

```
lossLayers = netLoss.Layers(1:24);
lgraph = layerGraph(lossLayers);
```

Convert to a `dlnetwork`.

```
dlnetLoss = dlnetwork(lgraph);
```

### Define the Loss Function and Gram Matrix

Create the `styleTransferLoss` function defined in the Style Transfer Loss section of this example.

The function `styleTransferLoss` takes as input the loss network `dlnetLoss`, a mini-batch of input images `dlX`, a mini-batch of transformed images `dlY`, an array containing the Gram matrices of the style image `dlSGram`, the weight associated with the content loss `contentWeight`, and the weight associated with the style loss `styleWeight`. The function returns the total loss `loss` and the individual components: the content loss `lossContent` and the style loss `lossStyle`.

The `styleTransferLoss` function uses the supporting function `createGramMatrix` in the computation of the style loss.
The `createGramMatrix` function takes as an input the features extracted by the loss network and returns a stylistic representation for each image in a mini-batch. You can find the implementation and mathematical definition of the Gram matrix in the section Gram Matrix.

### Define the Model Gradients Function

Create the function `modelGradients`, listed in the Model Gradients Function section of the example. This function takes as input the loss network `dlnetLoss`, the image transformer network `dlnetTransform`, a mini-batch of input images `dlX`, an array containing the Gram matrices of the style image `dlSGram`, the weight associated with the content loss `contentWeight`, and the weight associated with the style loss `styleWeight`. The function returns the `gradients` of the loss with respect to the learnable parameters of the image transformer, the state of the image transformer network, the transformed images `dlY`, the total loss `loss`, the loss associated with the content `lossContent`, and the loss associated with the style `lossStyle`.

### Specify Training Options

Train with a mini-batch size of 4 for 2 epochs as in [1].

```
numEpochs = 2;
miniBatchSize = 4;
```

Set the read size of the augmented image datastore to the mini-batch size.

```
augimds.MiniBatchSize = miniBatchSize;
```

Specify the options for Adam optimization. Specify a learn rate of 0.001 with a gradient decay factor of 0.9 and a squared gradient decay factor of 0.999.

```
learnRate = 0.001;
gradientDecayFactor = 0.9;
squaredGradientDecayFactor = 0.999;
```

Train on a GPU if one is available. Using a GPU requires Parallel Computing Toolbox™ and a supported GPU device. For information on supported devices, see GPU Support by Release (Parallel Computing Toolbox).

```
executionEnvironment = "auto";
```

Specify the weight given to the style loss and the one given to the content loss in the calculation of the total loss.
Note that, in order to find a good balance between content and style loss, you might need to experiment with different combinations of weights.

```
weightContent = 1e-4;
weightStyle = 3e-8;
```

Choose the plot frequency of the training progress. This specifies how many iterations there are between each plot update.

```
plotFrequency = 10;
```

### Train Model

In order to be able to compute the loss during training, calculate the Gram matrices for the style image.

Convert the style image to `dlarray`.

```
dlS = dlarray(single(styleImage),'SSC');
```

In order to calculate the Gram matrix, feed the style image to the VGG-16 network and extract the activations at four different layers.

```
[dlSActivations1,dlSActivations2,dlSActivations3,dlSActivations4] = forward(dlnetLoss,dlS, ...
    'Outputs',["relu1_2" "relu2_2" "relu3_3" "relu4_3"]);
```

Calculate the Gram matrix for each set of activations using the supporting function `createGramMatrix`.

```
dlSGram{1} = createGramMatrix(dlSActivations1);
dlSGram{2} = createGramMatrix(dlSActivations2);
dlSGram{3} = createGramMatrix(dlSActivations3);
dlSGram{4} = createGramMatrix(dlSActivations4);
```

The training plots consist of two figures:

1. A figure showing a plot of the losses during training
2. A figure containing an input and an output image of the image transformer network

Initialize the training plots. You can check the details of the initialization in the supporting function `initializeStyleTransferPlots`. This function returns: the axis `ax1` where you plot the loss, the axis `ax2` where you plot the validation images, the animated line `lineLossContent` which contains the content loss, the animated line `lineLossStyle` which contains the style loss, and the animated line `lineLossTotal` which contains the total loss.

```
[ax1,ax2,lineLossContent,lineLossStyle,lineLossTotal] = initializeStyleTransferPlots();
```

Initialize the average gradient and average squared gradient of the Adam optimizer.

```
averageGrad = [];
averageSqGrad = [];
```

Calculate the total number of training iterations.
```
numIterations = floor(augimds.NumObservations*numEpochs/miniBatchSize);
```

Initialize the iteration number and the timer before training.

```
iteration = 0;
start = tic;
```

Train the model. This could take a long time to run.

```
% Loop over epochs.
for i = 1:numEpochs
    
    % Reset and shuffle datastore.
    reset(augimds);
    augimds = shuffle(augimds);
    
    % Loop over mini-batches.
    while hasdata(augimds)
        
        iteration = iteration + 1;
        
        % Read mini-batch of data.
        data = read(augimds);
        
        % Ignore last partial mini-batch of epoch.
        if size(data,1) < miniBatchSize
            continue
        end
        
        % Extract the images from data store into a cell array.
        images = data{:,1};
        
        % Concatenate the images along the 4th dimension.
        X = cat(4,images{:});
        X = single(X);
        
        % Convert mini-batch of data to dlarray and specify the dimension labels
        % 'SSCB' (spatial, spatial, channel, batch).
        dlX = dlarray(X, 'SSCB');
        
        % If training on a GPU, then convert data to gpuArray.
        if (executionEnvironment == "auto" && canUseGPU) || executionEnvironment == "gpu"
            dlX = gpuArray(dlX);
        end
        
        % Evaluate the model gradients and the network state using
        % dlfeval and the modelGradients function listed at the end of the
        % example.
        [gradients,state,dlY,loss,lossContent,lossStyle] = dlfeval(@modelGradients, ...
            dlnetLoss,dlnetTransform,dlX,dlSGram,weightContent,weightStyle);
        dlnetTransform.State = state;
        
        % Update the network parameters.
        [dlnetTransform,averageGrad,averageSqGrad] = ...
            adamupdate(dlnetTransform,gradients,averageGrad,averageSqGrad,iteration,...
            learnRate, gradientDecayFactor, squaredGradientDecayFactor);
        
        % Every plotFrequency iterations, plot the training progress.
        if mod(iteration,plotFrequency) == 0
            addpoints(lineLossTotal,iteration,double(gather(extractdata(loss))))
            addpoints(lineLossContent,iteration,double(gather(extractdata(lossContent))))
            addpoints(lineLossStyle,iteration,double(gather(extractdata(lossStyle))))
            
            % Use the first image of the mini-batch as a validation image.
            dlV = dlX(:,:,:,1);
            
            % Use the transformed validation image computed previously.
            dlVY = dlY(:,:,:,1);
            
            % To use the function imshow, convert to uint8.
            validationImage = uint8(gather(extractdata(dlV)));
            transformedValidationImage = uint8(gather(extractdata(dlVY)));
            
            % Plot the input image and the output image and increase size
            imshow(imtile({validationImage,transformedValidationImage}),'Parent',ax2);
        end
        
        % Display time elapsed since start of training and training completion percentage.
        D = duration(0,0,toc(start),'Format','hh:mm:ss');
        completionPercentage = round(iteration/numIterations*100,2);
        title(ax1,"Epoch: " + i + ", Iteration: " + iteration + " of " + numIterations + " (" + completionPercentage + "%)" + ", Elapsed: " + string(D))
        drawnow
    end
end
```

### Stylize an Image

Once training has finished, you can use the image transformer on any image of your choice.

Load the image you would like to transform.

```
imFilename = 'peppers.png';
im = imread(imFilename);
```

Resize the input image to the input dimensions of the image transformer.

```
im = imresize(im,[256,256]);
```

Convert it to `dlarray`.

```
dlX = dlarray(single(im),'SSCB');
```

To use the GPU, convert to `gpuArray` if one is available.

```
if canUseGPU
    dlX = gpuArray(dlX);
end
```

To apply the style to the image, forward pass it to the image transformer using the function `predict`.

```
dlY = predict(dlnetTransform,dlX);
```

Rescale the image into the range [0 255]. First, use the function `tanh` to rescale `dlY` to the range [-1 1]. Then, shift and scale the output to rescale into the [0 255] range.

```
Y = 255*(tanh(dlY)+1)/2;
```

Prepare `Y` for plotting. Use the function `extractdata` to extract the data from the `dlarray`. Use the function `gather` to transfer `Y` from the GPU to the local workspace.

```
Y = uint8(gather(extractdata(Y)));
```

Show the input image (left) next to the stylized image (right).
```
figure
m = imtile({im,Y});
imshow(m)
```

### Model Gradients Function

The function `modelGradients` takes as input the loss network `dlnetLoss`, the image transformer network `dlnetTransform`, a mini-batch of input images `dlX`, an array containing the Gram matrices of the style image `dlSGram`, the weight associated with the content loss `contentWeight`, and the weight associated with the style loss `styleWeight`. It returns the `gradients` of the loss with respect to the learnable parameters of the image transformer, the state of the image transformer network, the transformed images `dlY`, the total loss `loss`, the loss associated with the content `lossContent`, and the loss associated with the style `lossStyle`.

```
function [gradients,state,dlY,loss,lossContent,lossStyle] = ...
    modelGradients(dlnetLoss,dlnetTransform,dlX,dlSGram,contentWeight,styleWeight)

[dlY,state] = forward(dlnetTransform,dlX);
dlY = 255*(tanh(dlY)+1)/2;

[loss,lossContent,lossStyle] = styleTransferLoss(dlnetLoss,dlY,dlX,dlSGram,contentWeight,styleWeight);

gradients = dlgradient(loss,dlnetTransform.Learnables);

end
```

### Style Transfer Loss

The function `styleTransferLoss` takes as input the loss network `dlnetLoss`, a mini-batch of input images `dlX`, a mini-batch of transformed images `dlY`, an array containing the Gram matrices of the style image `dlSGram`, and the weights associated with the content and style, `contentWeight` and `styleWeight`, respectively. It returns the total loss `loss` and the individual components: the content loss `lossContent` and the style loss `lossStyle`.

The content loss is a measure of how much difference in spatial structure there is between the input image `X` and the output image `Y`. On the other hand, the style loss tells you how much difference in stylistic appearance there is between the style image `S` and the output image `Y`. The graph below explains the algorithm that `styleTransferLoss` implements to calculate the total loss.
First, the function passes the input images `X`, the transformed images `Y`, and the style image `S` to the pretrained network VGG-16. This pretrained network extracts several features from these images. The algorithm then calculates the content loss by using the spatial features of the input image X and of the output image Y. Moreover, it calculates the style loss by using the stylistic features of the output image Y and of the style image S. Finally, it obtains the total loss by adding the content and style losses.

#### Content Loss

For each image in the mini-batch, the content loss function compares the features of the original image and of the transformed image output by the layer `relu3_3`. In particular, it calculates the mean square error between the activations and returns the average loss for the mini-batch:

$$\text{lossContent} = \frac{1}{N}\sum_{n=1}^{N} \operatorname{mean}\!\left( \left[ \varphi(X_n) - \varphi(Y_n) \right]^2 \right),$$

where $X$ contains the input images, $Y$ contains the transformed images, $N$ is the mini-batch size, and $\varphi(\cdot)$ represents the activations extracted at layer `relu3_3`.

#### Style Loss

To calculate the style loss, for each single image in the mini-batch:

1. Extract the activations at the layers `relu1_2`, `relu2_2`, `relu3_3` and `relu4_3`.
2. For each of the four activations $\varphi_j$ compute the Gram matrix $G(\varphi_j)$.
3. Calculate the squared difference between the corresponding Gram matrices.
4. Add up the four outputs for each layer $j$ from the previous step.
To obtain the style loss for the whole mini-batch, compute the average of the style loss for each image $n$ in the mini-batch:

$$\text{lossStyle} = \frac{1}{N}\sum_{n=1}^{N}\sum_{j=1}^{4} \left[ G\bigl(\varphi_j(Y_n)\bigr) - G\bigl(\varphi_j(S)\bigr) \right]^2,$$

where $j$ is the index of the layer and $G(\cdot)$ is the Gram matrix.

#### Total Loss

The total loss is a weighted sum of the two components, $\text{loss} = \text{weightContent} \cdot \text{lossContent} + \text{weightStyle} \cdot \text{lossStyle}$, as computed at the end of the following function.

```
function [loss,lossContent,lossStyle] = styleTransferLoss(dlnetLoss,dlY,dlX, ...
    dlSGram,weightContent,weightStyle)

% Extract activations.
dlYActivations = cell(1,4);
[dlYActivations{1},dlYActivations{2},dlYActivations{3},dlYActivations{4}] = ...
    forward(dlnetLoss,dlY,'Outputs',["relu1_2" "relu2_2" "relu3_3" "relu4_3"]);

dlXActivations = forward(dlnetLoss,dlX,'Outputs','relu3_3');

% Calculate the mean square error between activations.
lossContent = mean((dlYActivations{3} - dlXActivations).^2,'all');

% Add up the losses for all the four activations.
lossStyle = 0;
for j = 1:4
    G = createGramMatrix(dlYActivations{j});
    lossStyle = lossStyle + sum((G - dlSGram{j}).^2,'all');
end

% Average the loss over the mini-batch.
miniBatchSize = size(dlX,4);
lossStyle = lossStyle/miniBatchSize;

% Apply weights.
lossContent = weightContent * lossContent;
lossStyle = weightStyle * lossStyle;

% Calculate the total loss.
loss = lossContent + lossStyle;

end
```

### Residual Block

The `residualBlock` function returns an array of six layers. It consists of convolution layers, instance normalization layers, a ReLU layer, and an addition layer. Note that `groupNormalizationLayer('channel-wise')` is simply an instance normalization layer.
```
function layers = residualBlock(name)

layers = [
    convolution2dLayer([3 3], 128, 'Stride', 1,'Padding','same','Name', "convRes"+name+"_1")
    groupNormalizationLayer('channel-wise','Name',"normRes"+name+"_1")
    reluLayer('Name', "reluRes"+name+"_1")
    convolution2dLayer([3 3], 128, 'Stride', 1,'Padding','same', 'Name', "convRes"+name+"_2")
    groupNormalizationLayer('channel-wise','Name',"normRes"+name+"_2")
    additionLayer(2,'Name',"add"+name)];

end
```

### Gram Matrix

The function `createGramMatrix` takes as an input the activations of a single layer and returns a stylistic representation for each image in a mini-batch. The input is a feature map of size [H, W, C, N], where H is the height, W is the width, C is the number of channels, and N is the mini-batch size. The function outputs an array `G` of size [C,C,N]. Each subarray `G(:,:,k)` is the Gram matrix corresponding to the $k$-th image in the mini-batch. Each entry $G(i,j,k)$ of the Gram matrix represents the correlation between channels $c_i$ and $c_j$, because each entry in channel $c_i$ multiplies the entry in the corresponding position in channel $c_j$:

$$G(i,j,k) = \frac{1}{C \times H \times W} \sum_{h=1}^{H}\sum_{w=1}^{W} \varphi_k(h,w,c_i)\,\varphi_k(h,w,c_j),$$

where $\varphi_k$ are the activations for the $k$-th image in the mini-batch.

The Gram matrix contains information about which features activate together but has no information about where the features occur in the image. This is because the summation over height and width loses the information about the spatial structure. The loss function uses this matrix as a stylistic representation of the image.

```
function G = createGramMatrix(activations)

[h,w,numChannels] = size(activations,1:3);
features = reshape(activations,h*w,numChannels,[]);
featuresT = permute(features,[2 1 3]);
G = dlmtimes(featuresT,features) / (h*w*numChannels);

end
```

### References
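Because each entry $G(i,j,k)$ sums the same products as $G(j,i,k)$, every Gram matrix slice is symmetric. As a quick sanity check (a hypothetical sketch, not part of the original example; it uses plain arrays instead of `dlarray` and an explicit loop instead of `dlmtimes` for clarity):

```
% Sanity-check sketch: the Gram matrix slices are symmetric by construction.
activations = rand(4,4,3,2);              % H=4, W=4, C=3 channels, N=2 images
[h,w,numChannels,n] = size(activations);
features = reshape(activations,h*w,numChannels,n);
G = zeros(numChannels,numChannels,n);
for k = 1:n
    F = features(:,:,k);                  % (H*W)-by-C feature matrix
    G(:,:,k) = (F.'*F) / (h*w*numChannels);
end
% Maximum asymmetry across all slices; should be zero up to round-off.
max(abs(G - permute(G,[2 1 3])),[],'all')
```

This mirrors what `createGramMatrix` computes, one image at a time, without requiring Deep Learning Toolbox data types.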
1. Johnson, Justin, Alexandre Alahi, and Li Fei-Fei. "Perceptual Losses for Real-Time Style Transfer and Super-Resolution." *European Conference on Computer Vision*. Springer, Cham, 2016.
http://motls.blogspot.com/2008/10/physics-nobel-prize-nambu-kobayashi.html
## Tuesday, October 07, 2008 ... /////

### Physics Nobel Prize: Nambu, Kobayashi, Maskawa!

After a 30-minute delay, the 2008 physics Nobel prize was awarded to Yoichiro Nambu (1/2), Makoto Kobayashi (1/4), and Toshihide Maskawa (1/4).

Yoichiro Nambu joins a long sequence of string theorists who have won the prestigious award: the average string theorist's chance to win the award exceeds 0.1%. For co-founders of heterotic strings, it jumps to 25% and it is over 33% for fathers of string theory, as we will see. ;-) David Gross is one of his colleagues in this elite group. And I am not even mentioning many Nobel-prize-winning strong supporters of string theory such as Gell-Mann, Weinberg, or Smoot.

**Nambu: string theory, color, and broken symmetry**

Nambu, a Japanese-born American, is often described as one of the "fathers of string theory": the other two are Susskind and Nielsen. (Veneziano had the right formula but could see no strings.) Together with Goto, Nambu understood that the action of the string is proportional to the proper area of the worldsheet, analogously to the proper time of a particle's worldline.

I find it stunning that not a single media outlet or blog besides TRF mentions who Nambu actually is. It's as if the word "relativity" were not mentioned in 1921 when Einstein picked up his prize for the photoelectric effect. Well, newspapers and blogs are mostly piles of sh*t (except for Scientific American, which happens to reprint a nice detailed 1995 story about the seer Nambu: and Nambu was the kind of seer whom sub-par craftsmen like Lee Smolin could not even see as a seer: a real one).

But because of Alfred Nobel's limited respect for pure theorists (recall his wife's lover but don't forget that Nobel never married), the Nobel prize is formally given to Nambu for a comparably famous discovery that is less often associated with his name, namely for his explanation of the importance of spontaneous symmetry breaking in the subatomic world. See, e.g.,
Nambu and Jona-Lasinio's paper (which has 2,500 citations). Well, Jona-Lasinio is also alive but he is not a co-father of string theory. I am subtly kidding, of course. There are also other reasons. What are they?

Nambu is, equally importantly, a co-author of the Nambu-Goldstone bosons, of which the paper mentioned above is a particular example. For every spontaneously broken continuous [approximate] symmetry, you find one species of a [nearly] massless scalar. This massless scalar is analogous to the "phase of the collective wave function" known from superconductivity. One can prove that this is what happens in field theory. The Nambu-Goldstone bosons can also be "eaten up" by gauge bosons in spontaneously broken gauge theories, to provide the third (longitudinal) polarization of the massive gauge bosons: that's the Higgs mechanism, but the 2008 prize is not going here. In the context of QCD, the relevant nearly massless scalars are the mesons (such as pions) that arise from the chiral symmetry breaking.

Besides strings and spontaneous symmetry breaking, Nambu is also the forefather of "color" as a new kind of charge in strong interactions whose dynamics dominate QCD. His stringy work mostly focused on the interpretation of the theory in the context of strong interactions, see e.g. this paper, which is why his relativistic picture of the string (or a fluxtube based on the Nielsen-Olesen vortex) is also our standard qualitative explanation of confinement in QCD. Recently, it was his namesake who was working on cosmology (hat tip: anonymous). ;-)

Needless to say, he clearly deserves the award.

**The CKM matrix**

Kobayashi and Maskawa, who are both Japanese, receive the Nobel prize for the CKM matrix governing the quark masses, especially for the realization that a broken CP-symmetry in the quark sector requires at least three generations (but three are enough). By the way, their paper is the third most cited particle physics paper as of today.
Recall that "C" in "CKM" stands for "Cabibbo", after Nicola Cabibbo who wrote down the analogous matrix for two generations that is determined by a single angle, the Cabibbo angle. The general unitary matrix in U(3) - the case of three generations - that maps lower-quark mass eigenstates to the isospin partners of the upper-quark mass eigenstates has 9 parameters. However, 6-1 = 5 of them are phases that can be absorbed to the normalization of the six eigenvectors (one overall change of all six phases doesn't change the CKM matrix). You're still left with 9-5 = 4 nontrivial parameters of the CKM matrix which is more than 3 parameters of an SO(3) matrix: in general, the CKM matrix must be allowed to be complex and the additional phase not included in O(3) is breaking the CP-symmetry because the mass terms in the Lagrangian are "inherently" complex while the CP-symmetry is linked to complex conjugation. Yes, if you wrote the three previous paragraphs before they did, with a few obvious formulae around, you could probably be half a million bucks richer today. ;-) But it was hard to use the SU(3) matrices in this way because, as the Nobel committee correctly mentions, the relationships between maths and physics were lousy in the 1970s. You know, what's revolutionary here is not the mathematical exercise itself but the correct sequence of physical arguments that use these non-quite-trivial yet not-quite-hard mathematical insights to explain an aspect of the Universe. The breaking of the CP-symmetry by the phase in the CKM matrix is the only experimentally confirmed breaking of the CP-symmetry which is a potential paradox because we know another possible source of the breaking - the QCD theta-angle (the coefficient of the trace of F wedge F in QCD). The question why the latter is small is referred to as the strong CP-problem. 
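For the record, the same counting works for a general number $N$ of generations; the following is my summary of the standard textbook exercise, not part of the original post:

```latex
% A unitary N x N mixing matrix has N^2 real parameters:
%   N(N-1)/2 rotation angles (the orthogonal part) and
%   N(N+1)/2 phases.
% Rephasing the 2N quark fields removes 2N - 1 phases
% (one overall phase change leaves the matrix invariant), so
\[
  \#\,\text{physical phases}
    \;=\; \frac{N(N+1)}{2} - (2N-1)
    \;=\; \frac{(N-1)(N-2)}{2}.
\]
% N = 2: 0 phases -- only the Cabibbo angle, hence no CP violation;
% N = 3: 3 angles + 1 phase = 4 parameters, reproducing 9 - 5 = 4 above.
```

The formula makes explicit why three generations are the minimum for a CP-violating phase.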
Helpfully enough, the CP-violation (by the CKM matrix) is advertised by nobelprize.org as the source of the matter-antimatter asymmetry of the early Universe (well, there should be some stronger additional source because the CKM matrix doesn't seem enough - but it is the only known/established proof of the concept as of today). We wouldn't be here without that breaking. That's clearly one of the reasons why the Japanese contributions are more important than Cabibbo's original 2x2 matrix according to the committee. Obviously, Cabibbo's work was important (to lead people to study different eigenvectors etc. in the flavor space) but the committee has to decide in some way and none of them can be perfect for everyone.

**Summary**

It's a good pick! Needless to say, all three names were debated as possible candidates in particle physics circles. I think that Lars Brink, a string theorist on the committee, may be given credit for the good choice. Incidentally, Kobayashi said he was surprised but Maskawa said that he predicted that he would win this year because he found a pattern. ;-) However, he's not too happy about that - too much noise. However, it's great that Nambu won. (Hat tip: Willie Soon!)

#### snail feedback (4):

My comment is that the theory was pure betting, more than 30 years ago, and no serious book of Particle Physics has ever quoted the KM theory. The Nobel Foundation has its opinions, of course, as it had about the ridiculous theory of the Yukawa meson. I am afraid that you can buy their sympathy, if you pay enough. The payoff is chauvinist satisfaction, as Japanese scientists are showing.

The conservation of CP in strong interactions could be explained by the violation of the symmetry PT, but not CPT. The origin of the violation of PT is the invariance of Lorentz, but CPT implies the conservation of the group of Poincaré, or Lorentz's with a complex factor.
The violation of PT is associated to the antichronous violation of Lorentz, but there appears the orthochronous invariance of Lorentz with time with real factors; and with the imaginary part, with the time rotation, the speed of light appears as constant, but is a variable in the complex group of Poincaré. The particles are a vibration of only one (frequency) of each "particle" (particles plus reversal PT) in the spacetime (curvatures of spacetimes with only one frequency); then particles and antiparticles as reversion of PT are only one entity, they are "holes" in the strings. The violation of CP implies the existence of antiparticles as energy in relativistic motions, which implies the variations of space and time; this is asymmetry. The asymmetry in the transformation of energy into mass and vice versa implies the existence of antiparticles as locally bundled energy in the relativistic equations; then antiparticles don't exist in the universe. CP places the relativistic transformations as asymmetric in space and time, and CP is broken to the spacetime. The time and space are not linear with the variations of velocities; thus appears the constant factor k as metrics of the curvatures of spacetimes that are variables.
https://math.stackexchange.com/questions/2669128/showing-mathbbf-q-times-is-cyclic-using-character-theory
# Showing $\mathbb{F}_q^{\times}$ is cyclic using Character theory

I was wondering if there is a way to prove that the multiplicative group of a finite field is cyclic by looking at the character table of such a group. In particular, I was wondering if there is a way to avoid direct mention of the classification of finite abelian groups (I am aware of the standard proofs from root counting of polynomials etc.).

My initial thoughts were as follows: Suppose we somehow manage to find the character table of $\mathbb{F}_q^{\times}$. We know this group is abelian, so compare it to the character tables of other abelian groups. Over $\mathbb{C}$ the character table of a finite abelian group uniquely distinguishes the group, so if we were to find the character table for $\mathbb{F}_q^{\times}$ we would be done. However, finding the character table seems like the hard thing to do; we would have to use the field structure somehow. So instead maybe we should look for representations over characteristic $p$ rather than $\mathbb{C}$, but I do not know anything about such representations.

I know somewhere we may be indirectly using the classification of abelian groups in these results, but I think it would be an instructive thing to see it all link up still. Any thoughts would be appreciated.

• If you use an algebraic closure $K$ of $\Bbb{F}_q$ as the range of the characters, then the group of characters $Hom(\Bbb{F}_q^*,K^*)$ is generated by the identity mapping. In light of this I don't think you will make any headway looking at the $K$-valued characters. – Jyrki Lahtonen Feb 27 '18 at 17:14

• You might be able to locate a suitable prime ideal $\mathfrak{p}$ of $R=\Bbb{Z}[\zeta]$, $\zeta=e^{2\pi i/(q-1)}$, and show that $R/\mathfrak{p}$ is a field of $q$ elements and all the powers of $\zeta$ are in distinct cosets. But, that doesn't look very natural to me.
– Jyrki Lahtonen Feb 27 '18 at 17:20 • I guess that you are looking for a way to get around the fact that the proof of the isomorphism $\hat G\simeq G$, $G$ a finite abelian group, relies on the structure theorem. The problem is that there is no natural isomorphism here. For the characters of the additive group $(\Bbb{F}_q,+)$ we can take advantage of the fact that the non-degenerate bilinear trace form $(x,y)=tr(xy)$ gives a somewhat naturally available isomorphism between the additive group and its dual. – Jyrki Lahtonen Feb 27 '18 at 17:23
https://projecteuclid.org/euclid.die/1356039062
## Differential and Integral Equations

### A priori estimates for infinitely degenerate quasilinear equations

#### Abstract

We prove a priori bounds for derivatives of solutions $w$ of a class of quasilinear equations of the form \begin{equation*} \mathrm {div} \mathcal{A} ( x,w ) \nabla w+\vec{\gamma} ( x,w ) \cdot \nabla w+f ( x,w ) =0, \end{equation*} where $x \! = \! ( x_{1},\dots ,x_{n} )$, and where $f$, $\vec{\gamma} = ( \gamma^{i} ) _{1\leq i\leq n}$ and $\mathcal{A}= ( a_{ij} ) _{1\leq i,j\leq n}$ are $\mathcal{C}^{\infty }$. The rank of the square symmetric matrix $\mathcal{A}$ is allowed to degenerate, as all but one eigenvalue of $\mathcal{A}$ are permitted to vanish to infinite order. We estimate derivatives of $w$ of arbitrarily high order in terms of just $w$ and its first derivatives. These estimates will be applied in a subsequent work to establish existence, uniqueness and regularity of weak solutions of the Dirichlet problem.

#### Article information

Source: Differential Integral Equations, Volume 21, Number 1-2 (2008), 131-200.

Dates: First available in Project Euclid: 20 December 2012
https://www.physicsforums.com/threads/higher-order-derivatives-rules.929716/
# I Higher order derivatives rules

1. Oct 26, 2017

### LauwranceGilbert

mod: moved from homework

Does anyone know why and when this equation holds? I have searched online but cannot find the reason or the rules for the higher-order derivatives.

Last edited by a moderator: Oct 26, 2017

2. Oct 26, 2017

### Staff: Mentor

Can you provide some context for where you found this equation and what you were investigating? It looks like ordinary power rules applied to derivatives, i.e., the derivative chain rule: https://en.wikipedia.org/wiki/Chain_rule

There's a section further into the article talking about generalizations of the rule. Here's a presentation where they use something like this for trig derivatives: https://www.cs.drexel.edu/classes/Calculus/MATH121_Fall02/lecture14.pdf

Last edited: Oct 26, 2017

3. Oct 26, 2017

### Ray Vickson

The first equality is easy: the $(4M+4)$th derivative of $F$ is just the 4th derivative of the $(4M)$th derivative. The second equality is false in general, because there are many counterexamples. The $(4M+4)$th derivative is usually not a constant (here $(-4)^M$) times the 4th derivative.
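The equation image itself did not survive extraction, but one function with exactly this behavior is F(x) = eˣ sin x, whose fourth derivative is −4F, so its (4M+4)th derivative really is (−4)ᴹ times its 4th derivative. The sketch below (sympy; treating eˣ sin x as a guess at the function in the thread) verifies this, and shows that the pattern fails for a generic function, as Ray Vickson notes:

```python
import sympy as sp

x = sp.symbols('x')

# F(x) = e^x sin(x) satisfies F'''' = -4F, so F^(4M+4) = (-4)^M F^(4)
# holds for this particular F (a guess at the thread's function).
F = sp.exp(x) * sp.sin(x)
fourth_F = sp.diff(F, x, 4)
assert sp.simplify(fourth_F + 4 * F) == 0

# The pattern is not generic: for G = sin(x) the 8th derivative is G
# itself, not -4 times the 4th derivative.
G = sp.sin(x)
eighth = sp.diff(G, x, 8)   # sin(x)
fourth = sp.diff(G, x, 4)   # sin(x)
```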
http://mathoverflow.net/questions?page=1987&sort=hot
All Questions

1k views
Should there be a true model of set theory? As I understand it, there is a program in set theory to produce an ultimate, canonical model of set theory which, among other things, positively answers the Continuum Hypothesis an …

169 views
I've been drawn to a problem that requires ascertaining the existence of fixed points in the following recurrence relation, any ideas would be much appreciated. I seek necessary c …

970 views
What is the Euler characteristic of a Hilbert scheme of points of a singular algebraic curve? Let $X$ be a smooth surface of genus $g$ and $S^nX$ its nth symmetric product (that is, the quotient of $X \times ... \times X$ by the symmetric group $S_n$). There is a well know …

497 views
Examples of $G_\delta$ sets Recall that a subset A of a metric space X is a $G_\delta$ subset if it can be written as a countable intersection of open sets. This notion is related to the Baire category theore …

748 views
Is completeness of a field an algebraic property? Pretty straightforward: if a field has a metric in which it is complete, can it have a metric in which it is not complete? By metric I mean field norm, of course.

437 views
Stably free module not finitely generated is free Hi. I have read that stably free modules that are not finitely generated are free; this is proved in M.R. Gabel, Stably free projectives over commutative rings, Thesis, Brandeis Univ., Wal …

402 views
In classical logic plus ZF, the field of real numbers admits infinitely many isomorphic realizations as a numeral system --- as the radix varies. The intuitionistic status of thes …
https://www.britannica.com/topic/probability-theory/Brownian-motion-process
## Brownian motion process

The most important stochastic process is the Brownian motion or Wiener process. It was first discussed by Louis Bachelier (1900), who was interested in modeling fluctuations in prices in financial markets, and by Albert Einstein (1905), who gave a mathematical model for the irregular motion of colloidal particles first observed by the Scottish botanist Robert Brown in 1827. The first mathematically rigorous treatment of this model was given by Wiener (1923). Einstein’s results led to an early, dramatic confirmation of the molecular theory of matter in the French physicist Jean Perrin’s experiments to determine Avogadro’s number, for which Perrin was awarded a Nobel Prize in 1926. Today somewhat different models for physical Brownian motion are deemed more appropriate than Einstein’s, but the original mathematical model continues to play a central role in the theory and application of stochastic processes.

Let B(t) denote the displacement (in one dimension for simplicity) of a colloidally suspended particle, which is buffeted by the numerous much smaller molecules of the medium in which it is suspended. This displacement will be obtained as a limit of a random walk occurring in discrete time as the number of steps becomes infinitely large and the size of each individual step infinitesimally small. Assume that at times kδ, k = 1, 2,…, the colloidal particle is displaced a distance hXk, where X1, X2,… are +1 or −1 according as the outcomes of tossing a fair coin are heads or tails. By time t the particle has taken m steps, where m is the largest integer ≤ t/δ, and its displacement from its original position is Bm(t) = h(X1 +⋯+ Xm). The expected value of Bm(t) is 0, and its variance is h²m, or approximately h²t/δ. Now suppose that δ → 0, and at the same time h → 0 in such a way that the variance of Bm(1) converges to some positive constant, σ². This means that m becomes infinitely large, and h is approximately σ(t/m)^(1/2).
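The scaling just described can be checked numerically. The following sketch (plain Python, with the arbitrary choice σ = 1, so h² = δ) simulates the scaled random walk Bm(t) many times and compares its sample variance with σ²t:

```python
import random

# Simulate the scaled random walk B_m(t) = h(X_1 + ... + X_m) described
# above, with h^2/delta = sigma^2 = 1 (an arbitrary choice), and check
# that the sample variance of B_m(t) is close to sigma^2 * t.
random.seed(0)

def walk(t, delta):
    h = delta ** 0.5                   # h = sigma * delta^(1/2), sigma = 1
    m = int(t / delta)                 # number of steps taken by time t
    return h * sum(random.choice((-1, 1)) for _ in range(m))

t, delta, n = 2.0, 0.01, 4000
samples = [walk(t, delta) for _ in range(n)]
mean = sum(samples) / n
var = sum((s - mean) ** 2 for s in samples) / n   # should be near t = 2
```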
It follows from the central limit theorem (equation (12)) that lim P{Bm(t) ≤ x} = G(x/(σt^(1/2))), where G(x) is the standard normal cumulative distribution function defined just below equation (12). The Brownian motion process B(t) can be defined to be the limit in a certain technical sense of the Bm(t) as δ → 0 and h → 0 with h²/δ → σ².

The process B(t) has many other properties, which in principle are all inherited from the approximating random walk Bm(t). For example, if (s1, t1) and (s2, t2) are disjoint intervals, the increments B(t1) − B(s1) and B(t2) − B(s2) are independent random variables that are normally distributed with expectation 0 and variances equal to σ²(t1 − s1) and σ²(t2 − s2), respectively.

Einstein took a different approach and derived various properties of the process B(t) by showing that its probability density function, g(x, t), satisfies the diffusion equation ∂g/∂t = D ∂²g/∂x², where D = σ²/2. The important implication of Einstein’s theory for subsequent experimental research was that he identified the diffusion constant D in terms of certain measurable properties of the particle (its radius) and of the medium (its viscosity and temperature), which allowed one to make predictions and hence to confirm or reject the hypothesized existence of the unseen molecules that were assumed to be the cause of the irregular Brownian motion. Because of the beautiful blend of mathematical and physical reasoning involved, a brief summary of the successor to Einstein’s model is given below.

Unlike the Poisson process, it is impossible to “draw” a picture of the path of a particle undergoing mathematical Brownian motion. Wiener (1923) showed that the functions B(t) are continuous, as one expects, but nowhere differentiable. Thus, a particle undergoing mathematical Brownian motion does not have a well-defined velocity, and the curve y = B(t) does not have a well-defined tangent at any value of t.
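Einstein's diffusion equation can also be verified symbolically: the density of B(t) is a Gaussian with mean 0 and variance σ²t = 2Dt, and a short sympy sketch confirms that this heat kernel satisfies ∂g/∂t = D ∂²g/∂x²:

```python
import sympy as sp

# Check symbolically that the Gaussian density of B(t) (mean 0, variance
# 2*D*t, i.e. sigma^2 * t with D = sigma^2/2) satisfies Einstein's
# diffusion equation dg/dt = D * d^2 g / dx^2.
x, t, D = sp.symbols('x t D', positive=True)
g = sp.exp(-x**2 / (4 * D * t)) / sp.sqrt(4 * sp.pi * D * t)
residual = sp.simplify(sp.diff(g, t) - D * sp.diff(g, x, 2))
```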
To see why this might be so, recall that the derivative of B(t), if it exists, is the limit as h → 0 of the ratio [B(t + h) − B(t)]/h. Since B(t + h) − B(t) is normally distributed with mean 0 and standard deviation h^(1/2)σ, in very rough terms B(t + h) − B(t) can be expected to equal some multiple (positive or negative) of h^(1/2). But the limit as h → 0 of h^(1/2)/h = 1/h^(1/2) is infinite. A related fact that illustrates the extreme irregularity of B(t) is that in every interval of time, no matter how small, a particle undergoing mathematical Brownian motion travels an infinite distance. Although these properties contradict the commonsense idea of a function—and indeed it is quite difficult to write down explicitly a single example of a continuous, nowhere-differentiable function—they turn out to be typical of a large class of stochastic processes, called diffusion processes, of which Brownian motion is the most prominent member. Especially notable contributions to the mathematical theory of Brownian motion and diffusion processes were made by Paul Lévy and William Feller during the years 1930–60.

A more sophisticated description of physical Brownian motion can be built on a simple application of Newton’s second law: F = ma. Let V(t) denote the velocity of a colloidal particle of mass m. It is assumed that

m dV(t) = −f V(t) dt + dA(t).   (18)

The quantity f retarding the movement of the particle is due to friction caused by the surrounding medium. The term dA(t) is the contribution of the very frequent collisions of the particle with unseen molecules of the medium. It is assumed that f can be determined by classical fluid mechanics, in which the molecules making up the surrounding medium are so many and so small that the medium can be considered smooth and homogeneous. Then by Stokes’s law, for a spherical particle in a gas, f = 6πaη, where a is the radius of the particle and η the coefficient of viscosity of the medium.
Hypotheses concerning A(t) are less specific, because the molecules making up the surrounding medium cannot be observed directly. For example, it is assumed that, for t ≠ s, the infinitesimal random increments dA(t) = A(t + dt) − A(t) and A(s + ds) − A(s) caused by collisions of the particle with molecules of the surrounding medium are independent random variables having distributions with mean 0 and unknown variances σ² dt and σ² ds and that dA(t) is independent of dV(s) for s < t.

The differential equation (18) has the solution

V(t) = e^(−βt)V(0) + (1/m)∫₀ᵗ e^(−β(t−u)) dA(u),   (19)

where β = f/m. From this equation and the assumed properties of A(t), it follows that E[V²(t)] → σ²/(2mf) as t → ∞. Now assume that, in accordance with the principle of equipartition of energy, the steady-state average kinetic energy of the particle, m lim_(t→∞) E[V²(t)]/2, equals the average kinetic energy of the molecules of the medium. According to the kinetic theory of an ideal gas, this is RT/2N, where R is the ideal gas constant, T is the temperature of the gas in kelvins, and N is Avogadro’s number, the number of molecules in one gram molecular weight of the gas. It follows that the unknown value of σ² can be determined: σ² = 2RTf/N. If one also assumes that the functions V(t) are continuous, which is certainly reasonable from physical considerations, it follows by mathematical analysis that A(t) is a Brownian motion process as defined above. This conclusion poses questions about the meaning of the initial equation (18), because for mathematical Brownian motion the term dA(t) does not exist in the usual sense of a derivative. Some additional mathematical analysis shows that the stochastic differential equation (18) and its solution equation (19) have a precise mathematical interpretation. The process V(t) is called the Ornstein-Uhlenbeck process, after the physicists Leonard Salomon Ornstein and George Eugene Uhlenbeck.
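The limit E[V²(t)] → σ²/(2mf) can be checked by direct simulation of equation (18). A minimal Euler–Maruyama sketch (all parameter values below are arbitrary illustrations, not physical constants):

```python
import random, math

# Euler-Maruyama sketch of the Langevin dynamics described above:
#   m dV = -f V dt + dA(t),   dA ~ Normal(0, sigma^2 dt).
# The parameter values are arbitrary illustrations, not physical.
random.seed(1)
m, f, sigma = 1.0, 2.0, 1.5
dt, steps, burn_in = 0.001, 400_000, 100_000

v, second_moment = 0.0, 0.0
for i in range(steps):
    dA = random.gauss(0.0, sigma * math.sqrt(dt))
    v += (-f * v * dt + dA) / m
    if i >= burn_in:
        second_moment += v * v

empirical = second_moment / (steps - burn_in)
predicted = sigma**2 / (2 * m * f)   # = 0.5625, the stated limit of E[V^2(t)]
```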
The logical outgrowth of these attempts to differentiate and integrate with respect to a Brownian motion process is the Ito (named for the Japanese mathematician Itō Kiyosi) stochastic calculus, which plays an important role in the modern theory of stochastic processes. The displacement at time t of the particle whose velocity is given by equation (19) is

X(t) − X(0) = (1 − e^(−βt))V(0)/β + A(t)/f − (1/f)∫₀ᵗ e^(−β(t−u)) dA(u).

For t large compared with 1/β, the first and third terms in this expression are small compared with the second. Hence, X(t) − X(0) is approximately equal to A(t)/f, and the mean square displacement, E{[X(t) − X(0)]²}, is approximately σ²t/f² = RTt/(3πaηN). These final conclusions are consistent with Einstein’s model, although here they arise as an approximation to the model obtained from equation (19). Since it is primarily the conclusions that have observational consequences, there are essentially no new experimental implications. However, the analysis arising directly out of Newton’s second law, which yields a process having a well-defined velocity at each point, seems more satisfactory theoretically than Einstein’s original model.

## Stochastic processes

A stochastic process is a family of random variables X(t) indexed by a parameter t, which usually takes values in the discrete set Τ = {0, 1, 2,…} or the continuous set Τ = [0, +∞). In many cases t represents time, and X(t) is a random variable observed at time t. Examples are the Poisson process, the Brownian motion process, and the Ornstein-Uhlenbeck process described in the preceding section. Considered as a totality, the family of random variables {X(t), t ∊ Τ} constitutes a “random function.”

## Stationary processes

The mathematical theory of stochastic processes attempts to define classes of processes for which a unified theory can be developed. The most important classes are stationary processes and Markov processes.
A stochastic process is called stationary if, for all n, t1 < t2 <⋯< tn, and h > 0, the joint distribution of X(t1 + h),…, X(tn + h) does not depend on h. This means that in effect there is no origin on the time axis; the stochastic behaviour of a stationary process is the same no matter when the process is observed. A sequence of independent identically distributed random variables is an example of a stationary process. A rather different example is defined as follows: U(0) is uniformly distributed on [0, 1]; for each t = 1, 2,…, U(t) = 2U(t − 1) if U(t − 1) ≤ 1/2, and U(t) = 2U(t − 1) − 1 if U(t − 1) > 1/2. The marginal distributions of U(t), t = 0, 1,… are uniformly distributed on [0, 1], but, in contrast to the case of independent identically distributed random variables, the entire sequence can be predicted from knowledge of U(0).

A third example of a stationary process is

X(t) = Σj cj[Yj cos(θjt) + Zj sin(θjt)],

where the Ys and Zs are independent normally distributed random variables with mean 0 and unit variance, and the cs and θs are constants. Processes of this kind can be useful in modeling seasonal or approximately periodic phenomena.

A remarkable generalization of the strong law of large numbers is the ergodic theorem: if X(t), t = 0, 1,… for the discrete case or 0 ≤ t < ∞ for the continuous case, is a stationary process such that E[X(0)] is finite, then with probability 1 the average

(1/s)Σ_(t=0)^(s−1) X(t), or (1/s)∫₀ˢ X(t) dt

if t is continuous, converges to a limit as s → ∞. In the special case that t is discrete and the Xs are independent and identically distributed, the strong law of large numbers is also applicable and shows that the limit must equal E{X(0)}. However, the example that X(0) is an arbitrary random variable and X(t) ≡ X(0) for all t > 0 shows that this cannot be true in general. The limit does equal E{X(0)} under an additional rather technical assumption to the effect that there is no subset of the state space, having probability strictly between 0 and 1, in which the process can get stuck and never escape.
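The doubling-map example U(t) above can be run exactly with rational arithmetic (floating point would collapse to 0 after about 53 doublings). For the particular start U(0) = 1/3 the orbit alternates between 1/3 and 2/3, so the time average is exactly 1/2, matching the ergodic theorem:

```python
from fractions import Fraction

# Exact-arithmetic run of the doubling-map example above:
#   U(t) = 2U(t-1) if U(t-1) <= 1/2, else 2U(t-1) - 1.
# Floats would collapse to 0 after ~53 steps, so rationals are used.
def doubling_orbit(u0, n):
    u, orbit = u0, []
    for _ in range(n):
        orbit.append(u)
        u = 2 * u if u <= Fraction(1, 2) else 2 * u - 1
    return orbit

# For U(0) = 1/3 the orbit is 1/3, 2/3, 1/3, 2/3, ...; the time average
# over an even number of steps is exactly 1/2.
orbit = doubling_orbit(Fraction(1, 3), 1000)
time_average = sum(orbit, Fraction(0)) / len(orbit)
```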
This assumption is not fulfilled by the example X(t) ≡ X(0) for all t, which gets stuck immediately at its initial value. It is satisfied by the sequence U(t) defined above, so by the ergodic theorem the average of these variables converges to 1/2 with probability 1. The ergodic theorem was first conjectured by the American chemist J. Willard Gibbs in the early 1900s in the context of statistical mechanics and was proved in a corrected, abstract formulation by the American mathematician George David Birkhoff in 1931.
http://mathhelpforum.com/algebra/130737-equation-line.html
Math Help - Equation of a line...

1. Equation of a line...

The given figure represents the lines y = x + 1 and y = √3x − 1. Write down the angles which the lines make with the positive direction of the x-axis. Hence determine θ.

2. Originally Posted by snigdha
The given figure represents the lines y = x + 1 and y = √3x − 1. Write down the angles which the lines make with the positive direction of the x-axis. Hence determine θ.

Notice where the lines cross the x-axis. $y=x+1$ crosses at $x=-1$ and $y=\sqrt{3}x-1$ crosses at $x=\frac{1}{\sqrt{3}}$. Since the lines form a triangle with the x-axis, you can find the length of the side opposite $\theta$ using the x-intercepts. So just find the point where the lines intersect by solving $x+1=\sqrt{3}x-1$. Once you find the point, you should be able to find the lengths of the sides.

3. The slopes of the lines are the coefficients of the 'x' term. What are the slopes of the hypotenuse of a 45,45,90 triangle? Of a 60,30,90 triangle?

4. Originally Posted by snigdha
The given figure represents the lines y = x + 1 and y = √3x − 1. Write down the angles which the lines make with the positive direction of the x-axis. Hence determine θ.

Hi snigdha,

The line whose equation is $y = x + 1$ has a slope of 1. The acute angle formed with the positive direction of the x-axis (which is inside the triangle) is $\tan^{-1}(1)={\color{red}45^{\circ}}$.

The line whose equation is $y=\sqrt{3}x-1$ has a slope of $\sqrt{3}$. The acute angle formed with the positive direction of the x-axis is $\tan^{-1}(\sqrt{3})=60^{\circ}$. Now we're interested in the obtuse angle inside the triangle, so we determine the supplement of $60^{\circ}$, which is ${\color{red}120^{\circ}}$.

Now you have two angles of the triangle and can determine the value of ${\color{red}\Theta}$.
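The angle chase in the thread is easy to confirm numerically; the triangle's interior angles are 45°, 120°, and θ:

```python
import math

# Angle chase from the thread: the lines y = x + 1 and y = sqrt(3)x - 1
# meet the positive x-axis at 45 and 60 degrees; inside the triangle the
# second angle appears as its supplement, 120 degrees, so theta = 15.
a1 = math.degrees(math.atan(1.0))              # slope 1       -> 45 degrees
a2 = math.degrees(math.atan(math.sqrt(3.0)))   # slope sqrt(3) -> 60 degrees
theta = 180.0 - a1 - (180.0 - a2)              # angle sum of the triangle
```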
http://tex.stackexchange.com/users/4071/lorem-ipsum?tab=activity&sort=all&page=3
Lorem Ipsum
Reputation 731

Jul 29 comment Increasing indentation in an enumerate @JLDiaz Ah, I see what you mean (he means). Hmmm... it might work in this case, but I think I do have some cases where the items extend below the column too (items 5,6 for example), so it'll be the same issue again, but roles reversed.
Jul 29 comment Increasing indentation in an enumerate @percusse I don't see how that would align 3 & 4 with 1 & 2... or am I missing something/misunderstanding you? I'm fine with anything that aligns the last two items with the first two. The alignment of the figure in the right column is not that important.
Jul 29 comment Increasing indentation in an enumerate @percusse Points 3 and 4 talk about a figure in the right column, but 1–4 are all part of the same theme, so I don't want to split the list. I'm open to other solutions... this was the first thing that popped into my mind.
Jul 29 asked Increasing indentation in an enumerate
Jun 5 awarded Nice Question
Jun 4 comment Inserting a proof-reader's remark in a presentation title @percusse Your answer solves my immediate need and the rest is mere idle curiosity... I don't feel comfortable making someone dig up the internals just to satisfy a stray thought (and I don't even understand the internals). Thanks though :)
Jun 4 accepted Inserting a proof-reader's remark in a presentation title
Jun 4 comment Inserting a proof-reader's remark in a presentation title @percusse Just thinking about it, it might still be possible if one were to use the optional short title with \title[short]{long}, since the short one is what gets set to footlines...
Jun 4 comment Inserting a proof-reader's remark in a presentation title @percusse Ah, very good point about \title directing the text to footlines. I am fine with this, since I'll only be adding it after the title is finalized (and in only one location).
I'll wait a bit longer to see if there are other answers before accepting yours :)
Jun 4 awarded Commentator
Jun 4 comment Inserting a proof-reader's remark in a presentation title This is nice, and I like the font :) Would it be possible to do the positioning a bit more relatively? I guess the use of some absolute units is unavoidable, but something like "Lorem ipsum sit amet", with being invisible to the title command, but tikz is aware of it and then you can do the shifts from the position. You don't have to, if it's hard/complicated; your answer is more than sufficient for my purposes :)
Jun 4 asked Inserting a proof-reader's remark in a presentation title
Apr 25 awarded Nice Answer
Mar 9 awarded Yearling
Nov 11 comment How to add some visual style and pizzazz to course notes? Those crazy margins give me a headache.
Oct 30 comment How do you create .+? @Jonas Ah, thanks for confirming. In any case, this is a diversion. The OP probably had something else in mind for its use...
Oct 30 comment How do you create .+? @Jonas I don't have MATLAB right now and haven't tried it out, but I believe it should give an error. 1.+2 will work, because it treats the . as if it were a decimal, but a.+b should give an error. But if it doesn't, I'd be curious...
Oct 29 comment How do you create .+? Can you show me where .+ is used in MATLAB?
Oct 3 comment How do I define a new command in algorithmicx Ah, thanks :) I tried defining \algorithmicinput with \algnewcommand like you have, but my mistake was to follow it up with \newcommand without \item, which only messed up the layout. Now I realize what I should've done.
Oct 3 awarded Scholar
https://jamaevidence.mhmedical.com/content.aspx?bookid=847&sectionid=69031475
## Introduction

For every treatment, there is a true, underlying effect that any individual experiment can only estimate (see Chapter 6, Why Study Results Mislead: Bias and Random Error). Investigators use statistical methods to advance their understanding of this true effect. This chapter explores the logic underlying one approach to statistical inquiry: hypothesis testing. Readers interested in how to teach the concepts reviewed in this chapter to clinical learners may be interested in an interactive script we have developed for this purpose.1

The hypothesis-testing approach to statistical exploration is to begin with what is called a null hypothesis and try to disprove that hypothesis. Typically, the null hypothesis states that there is no difference between the interventions being compared. To start our discussion, we will focus on dichotomous (yes/no) outcomes, such as dead or alive, or hospitalized or not hospitalized.

For instance, in a comparison of vasodilator treatment in 804 men with heart failure, investigators compared the proportion of enalapril-treated patients who died with the proportion of patients who received a combination of hydralazine and nitrates who died.2 We start with the assumption that the treatments are equally effective, and we adhere to this position unless the results make it untenable. We could state the null hypothesis in the vasodilator trial more formally as follows: the true difference in the proportion of patients surviving between those treated with enalapril and those treated with hydralazine and nitrates is 0.

In this hypothesis-testing framework, the statistical analysis addresses the question of whether the observed data are consistent with the null hypothesis. Even if the treatment truly has no positive or negative effect on the outcome (ie, the effect size is 0), the results observed will rarely agree exactly with the null hypothesis. For instance, even if a treatment has no true effect on mortality, seldom will we see exactly the same proportion of deaths in treatment and control groups. As the results diverge farther and farther from the finding of "no difference," however, the null hypothesis that there is no true difference between the treatments becomes progressively less credible. If the difference between results of the treatment and control groups becomes large enough, we abandon belief in the null hypothesis. We further develop the underlying logic by describing the role of chance in clinical research.

## The Role of Chance

In Chapter 6, Why Study Results Mislead: Bias and Random Error, we considered a balanced coin with which the true probability of obtaining either heads or tails in any individual coin toss is 0.5. We noted that if we tossed such a coin 10 times, we would not be surprised if we did not see exactly 5 heads and 5 tails. Occasionally, we would get results quite divergent from the 5:5 split, such as 8:2 or even 9:1. Furthermore, very infrequently, the 10 ...
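The coin-toss intuition above can be made quantitative with the binomial distribution. As a rough sketch (the chapter itself gives no code; the function name is ours), the probability of each heads/tails split in 10 tosses of a fair coin is:

```python
from math import comb

# Probability of k heads in n tosses of a fair coin (binomial distribution).
def prob_heads(k: int, n: int = 10, p: float = 0.5) -> float:
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# A 5:5 split is the single most likely outcome, but far from certain.
print(round(prob_heads(5), 4))   # 0.2461
# Divergent splits such as 8:2 are uncommon, and 10:0 is rare.
print(round(prob_heads(8), 4))   # 0.0439
print(round(prob_heads(10), 6))  # 0.000977
```

This is the quantitative version of the chapter's point: results divergent from "no difference" do occur by chance, but they become progressively less probable.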
https://www.varsitytutors.com/gre_math-help/how-to-divide-negative-numbers
GRE Math : How to Divide Negative Numbers

Example Questions

Example Question #1 : How To Divide Negative Numbers

Find the value of .

Explanation: To solve for , divide each side of the equation by -2. is the same as , which is POSITIVE.

Example Question #11 : Negative Numbers

What is ?

45

Explanation: A negative number divided by a negative number always results in a positive number. divided by equals . Since the answer is positive, the answer cannot be or any other negative number.

Example Question #1 : Negative Numbers

Solve for :

Explanation: Subtract from both sides: , or

Next, subtract from both sides: , or

Then, divide both sides by :

Recall that division of a negative by a negative gives you a positive, therefore: or

Example Question #1 : Negative Numbers

Solve for :

Explanation: To solve this equation, you need to isolate the variable on one side. We can accomplish this by dividing by on both sides:

Anytime you divide, if the signs are the same (i.e. two positive, or two negative), you'll get a positive result. If the signs are opposite (i.e. one positive, one negative), then you get a negative. Both of the numbers here are negative, so we will have a positive result:

Solve for :
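The sign rule the explanations rely on (same signs give a positive quotient, opposite signs a negative one) can be illustrated with a short sketch; the numbers here are illustrative, not taken from the stripped-out problems:

```python
# Sign rules for division: same signs -> positive quotient,
# opposite signs -> negative quotient.
print(-45 / -5)  # 9.0  (negative / negative -> positive)
print(-45 / 5)   # -9.0 (negative / positive -> negative)
print(45 / -5)   # -9.0 (positive / negative -> negative)

# Solving -2x = 18 by dividing each side of the equation by -2:
x = 18 / -2
print(x)         # -9.0
```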
https://papers.nips.cc/paper/2019/hash/21fe5b8ba755eeaece7a450849876228-Abstract.html
#### Authors

Rinu Boney, Norman Di Palo, Mathias Berglund, Alexander Ilin, Juho Kannala, Antti Rasmus, Harri Valpola

#### Abstract

Trajectory optimization using a learned model of the environment is one of the core elements of model-based reinforcement learning. This procedure often suffers from exploiting inaccuracies of the learned model. We propose to regularize trajectory optimization by means of a denoising autoencoder that is trained on the same trajectories as the model of the environment. We show that the proposed regularization leads to improved planning with both gradient-based and gradient-free optimizers. We also demonstrate that using regularized trajectory optimization leads to rapid initial learning in a set of popular motor control tasks, which suggests that the proposed approach can be a useful tool for improving sample efficiency.
https://link.springer.com/article/10.1007%2Fs00222-011-0317-8
Inventiones mathematicae, Volume 186, Issue 1, pp 191–236

# $$\mathcal{C}^{2}$$ surface diffeomorphisms have symbolic extensions

Article DOI: 10.1007/s00222-011-0317-8

Burguet, D. Invent. Math. (2011) 186: 191. doi:10.1007/s00222-011-0317-8

## Abstract

We prove that $$\mathcal{C}^{2}$$ surface diffeomorphisms have symbolic extensions, i.e. topological extensions which are subshifts over a finite alphabet. Following the strategy of Downarowicz and Maass (Invent. Math. 176:617–636, 2009) we bound the local entropy of ergodic measures in terms of Lyapunov exponents. This is done by reparametrizing Bowen balls by contracting maps, in an approach combining hyperbolic theory and Yomdin's theory.
https://physicstravelguide.com/advanced_tools/spinors
# Spinors

## Intuitive

A spinor is a mathematical object similar to a vector. However, while a vector points in some spatial direction, like, for example, in the direction of the north pole, a spinor points in a direction in an internal space. A curious property of a spinor is that if you rotate it by 360° it isn't the same but gets a minus sign. Only after a rotation by 720° is a spinor the same again. In contrast, a vector is completely unchanged if you rotate it by 360°. This crazy property can be illustrated as shown, for example, here:

## Concrete

A Dirac spinor field $\Psi$ and its conjugate $\overline\Psi$ are equivalent to two left-handed Weyl spinors $\chi$ and $\tilde\chi$ and their right-handed conjugates $\chi^\dagger$ and $\tilde\chi^\dagger$; $\chi$ and $\chi^\dagger$ describe the left-chiral fermion and the right-chiral antifermion (e.g. $e^-_L$ and $e^+_R$), while $\tilde\chi$ and $\tilde\chi^\dagger$ describe the left-chiral antifermion and the right-chiral fermion (e.g. $e^+_L$ and $e^-_R$).

Things to take care of:

Representing the u as vectors is a heuristic oversimplification though, and in fact is not really correct, as operations like spinor addition work a little differently than vector addition. (See Winter 3.) However, temporarily visualizing them as such can aid in our understanding of how they and spin behave, relative to the at-rest coordinate system, for varying particle velocities. (page 99 in Student Friendly Quantum Field Theory, by R. Klauber)

Reference 3 is Winter, Rolf G., Quantum Physics, Wadsworth (1979), Chap. 9.

## Abstract

Spinors arise as mathematical objects when we study the representations of the Lorentz group. The objects that transform under the $(\frac{1}{2},0)$ or $(0,\frac{1}{2})$ representation of the Lorentz group are called Weyl spinors; objects that transform under the (reducible) $(\frac{1}{2},0) \oplus (0,\frac{1}{2})$ representation are called Dirac spinors.
"Spinor representations are the square root of a principal fiber bundle."

## Why is it interesting?

Spinors are the appropriate mathematical objects to describe particles with spin 1/2, like, for example, electrons.

"One could say that a spinor is the most basic sort of mathematical object that can be Lorentz-transformed." An introduction to spinors by Andrew M. Steane

## FAQ

Why is there no classical theory of spinors?

[V]ia the Pauli exclusion principle, fermions cannot occupy the same state within the same macro system. So, whereas photons (bosons) can occupy the same state and a lot of them can therefore reinforce one another to produce a macroscopic electromagnetic field, spinors (fermions) cannot do so. In other words, we have no classical macroscopic spinor fields to sense, interact with, and study experimentally. And thus, we have no classical theory of spinors. Student Friendly Quantum Field Theory by Klauber
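The 360°/720° property described in the Intuitive section can be checked numerically: under a rotation by angle θ about the z-axis, a spin component $s_z = \pm 1/2$ acquires the phase $e^{-i\theta s_z}$, which is −1 at θ = 2π and +1 only at θ = 4π. A minimal sketch (the function name is ours):

```python
import cmath

def spinor_phase(theta: float, sz: float = 0.5) -> complex:
    """Phase acquired by a spin-sz component under rotation by theta about z."""
    return cmath.exp(-1j * theta * sz)

# A 360-degree rotation multiplies the spinor by -1:
print(spinor_phase(2 * cmath.pi))  # approximately (-1+0j)
# Only after a 720-degree rotation is the spinor unchanged:
print(spinor_phase(4 * cmath.pi))  # approximately (1+0j)
```

For an integer-spin object such as a vector component (sz = 1), the same formula gives +1 already at θ = 2π, which is the contrast the text draws.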
https://www.geeksforgeeks.org/change-in-state-of-motion/?ref=lbp
# Change in State of Motion

Last Updated : 08 Dec, 2021

In science, a push or pull on an object is called a force. Force arises from the interaction between two objects, and it has both magnitude and direction. The strength of a force is expressed by its magnitude. A force can bring about a change in the direction or state of motion of a body.

Characteristics of forces:

• When two forces act in the same direction, the net resultant force on an object is the sum of the two forces.
• When two forces act in opposite directions, the net resultant force is the difference between the two forces.
• The magnitude of a force describes its strength, and a force always has a direction in which it is applied.
• The effects of a force may change when its magnitude or direction changes.
• The effect of more than one force applied to an object is found by evaluating the net force acting on that object.
• If two forces of equal magnitude (strength) act on an object in opposite directions, the net force acting on the object is zero.
• Force can produce different effects on an object's position, size and shape.
• F = m × a, where F = force, m = mass of the object and a = acceleration.
• Newton (N) is the SI unit of force.

Force can change the state of motion of an object:

• An object moving at a certain speed in a particular direction is said to be in motion.
• An object at rest is not changing its position with respect to an observation point.
• When an object starts moving, its position changes with respect to an observation point.
• A force is necessary to set an object in motion, i.e. to move it from one place to another.
• A force applied to an object can also change its speed, bring it to rest, or change the direction of its motion.
• It may produce a change in the speed of the motion, a change in the direction of motion, or a combination of these effects.
• Force can change the state of motion of an object.
• Without the application of a force, an object cannot move by itself or change its state of motion on its own.
• This change of state of motion will not occur every time with every kind of object. For instance, if someone pushes a very heavy object such as a wall, it will not move at all.

Force can change the shape of an object: The shape of an object can be altered if some force is applied to it. Depending upon the magnitude of the applied force and the rigidity of the object, the effect on its shape and size can be observed.

Push: A force exerted away from the body is called a push, e.g. hitting a ball, kicking a football.

Pull: A force exerted towards the body is called a pull, e.g. drawing a bucket of water from a well, playing tug of war.

Force:

• A push or a pull is a force.
• The interaction between objects can change the state of the objects.
• A force can change the state of an object from rest to motion or vice versa.
• Two or more objects must interact with each other for a force to come into play.

Net force:

• The resultant of all the forces acting on an object is known as the net force.
• The body's acceleration is along the direction of the net force.

Vector:

• Vector quantities have both magnitude and direction, e.g. velocity, displacement, weight, momentum, force, acceleration, etc.
• Vectors are used to find the resultant force acting on an object.
• When several forces act on a body, they can be resolved into one component, called the net force acting on the object. Vectors are also used when the force acts at an angle to the horizontal.
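The rules above for combining forces along a line, together with F = m × a, can be sketched as follows (the numbers are illustrative, not from the text):

```python
def net_force(forces):
    """Sum of 1-D forces; positive and negative signs encode direction."""
    return sum(forces)

def acceleration(forces, mass):
    """Newton's second law: a = F_net / m."""
    return net_force(forces) / mass

# Forces in the same direction add; opposite directions subtract.
print(net_force([10.0, 5.0]))    # 15.0 N
print(net_force([10.0, -5.0]))   # 5.0 N
# Equal and opposite forces give zero net force: no change in motion.
print(net_force([10.0, -10.0]))  # 0.0
# a = F/m for a 2 kg object under a 5 N net force:
print(acceleration([10.0, -5.0], 2.0))  # 2.5 m/s^2
```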
Application of Force:

• A force is an effort that changes the state of an object at rest or in motion.
• It can change an object's velocity and direction.
• The shape of an object can also be altered by a force.

### State of Motion

The state of motion of an object is defined by its velocity – the speed with a direction. Thus, inertia could be redefined as follows: inertia is the tendency of an object to oppose changes in its velocity. An object at rest has zero velocity, and (in the absence of an unbalanced force) will remain with zero velocity; it will not change its state of motion (i.e., velocity). Objects oppose changes in their velocity.

### Contact Force

Touch or contact is necessary for the majority of our daily actions, e.g. lifting, pulling, kicking, pushing. Forces that require touch or contact to be applied are known as contact forces, e.g. muscular forces and frictional forces.

• Muscular force: The force that comes into play because of the action of muscles is called muscular force. For example:
  • Human beings use muscular force in order to walk.
  • The expansion and contraction of the lungs are due to muscular force.
  • Movement of food along the food pipe.
• Frictional force: This force is exerted by a surface on an object whenever the object moves over that surface.

Characteristics of the force of friction:

• The force of friction always acts in the direction opposite to the motion of the object.
• It leads to the generation of heat when two surfaces are in contact with each other. For example, heat is produced as a result of friction between our hands when we rub them together.
• Frictional force also leads to wear and tear of the surfaces of objects that are in contact with each other. For example, the soles of shoes often get worn out due to the friction force that acts between them and the ground as we walk.
• This force opposes the relative motion between two surfaces.
• It acts between the surfaces of the two bodies in contact.

Air resistance: An object experiences a force called air resistance whenever it moves or flies through the air.

### Non-contact forces

These forces do not need contact; they have their effect without touch. Examples: magnetic force, electrostatic force, gravitational force.

• Magnetic force:
  • The force of attraction or repulsion between two magnetic bodies due to their poles is known as magnetic force.
  • The force exerted by any magnetic object is called magnetic force.
  • Like magnetic poles always repel each other, that is, they push each other away.
  • Opposite magnetic poles always attract each other, that is, they pull each other towards themselves.
• Gravitational force:
  • The attractive force that a body experiences towards the centre of the earth is called the force of gravity due to the earth.
  • Every object attracts, i.e. exerts a force on, every other object; this is a property of the universe.
  • The force that acts upon all the objects present on or near the Earth's surface is also called the force of gravity, or gravity.
  • Gravity is a property shown by every object present in space, not only the earth. Hence, all the planets, the moons, and even the sun have a gravitational force of their own.

Electrostatic force: Electrostatic force is the force of attraction or repulsion experienced by a charged body from another charged body in the same neighborhood.

Nuclear forces:

• The nuclear force acts among all the particles in the nucleus, i.e., between two protons, between two neutrons, and between a neutron and a proton.
• In all cases, it is an attractive force.
• By overcoming the enormous repulsive force between positive protons, this force keeps the nucleus intact.

### Acceleration

Acceleration is defined as the rate of change of velocity with respect to time.
Acceleration is a vector quantity because it has both magnitude and direction. It is the second derivative of position with respect to time, or the first derivative of velocity with respect to time.

Instantaneous acceleration: Instantaneous acceleration is defined as the ratio of the change in velocity during a given time interval as the time interval goes to zero.

Acceleration formula: The acceleration formula is given as:

Acceleration = (final velocity – initial velocity)/time = (change in velocity)/time

a = (vf – vi)/t = Δv/t

Where,
a is the acceleration in m.s-2
vf is the final velocity in m.s-1
vi is the initial velocity in m.s-1
t is the time interval in s
Δv is the change in velocity in m.s-1

Unit of acceleration: The SI unit of acceleration is m/s2.

Uniform and non-uniform acceleration: Acceleration without a change in speed is possible in uniform circular motion, where the speed remains constant but, since the direction is changing, the velocity changes, and therefore the body is said to be accelerated.

Average acceleration: The average acceleration over a period of time is defined as the total change in velocity in the given interval divided by the total time taken for the change. For a given interval of time, it is denoted as ā. Mathematically,

ā = (v2 – v1)/(t2 – t1)

Where v2 and v1 are the instantaneous velocities at times t2 and t1, and ā is the average acceleration.

### Deceleration

You must have noticed that we often slow down our bikes during heavy traffic when more bikes are obstructing us. A decrease in speed as the body moves away from the starting point is defined as deceleration. Deceleration is the opposite of acceleration. It is expressed as

Deceleration = (final velocity – initial velocity)/(time taken)

Deceleration is also known as negative acceleration. Hence, it is denoted by (–a).
If starting velocity, final velocity and time taken are given, the deceleration formula is

a = (v – u)/t

with a negative result because the velocity is decreasing. If initial velocity, final velocity and distance travelled are given, the deceleration is found from

a = (v2 – u2)/(2s)

Where v = final velocity, u = initial velocity, t = time taken, s = distance covered. The deceleration formula is used to calculate the deceleration of a given body in motion. It is expressed in m/s2.

### Sample Problems

Question 1: A boy weighing 56 kgf stands on a platform of dimensions 3.5 cm × 1.5 cm. What pressure in pascal does he exert?

Solution:

Force = Weight = 56 kgf = 56 × 10 N = 560 N (taking g ≈ 10 m/s2)

Area = 3.5 cm × 1.5 cm = 5.25 cm2 = 5.25 × 10-4 m2

Pressure = Force/Area = 560/(5.25 × 10-4) ≈ 1.07 × 106 Pa

Question 2: A wheel of diameter 4 m can be rotated about an axis passing through its centre by a moment of force equal to 5.0 N m. What minimum force must be applied on its rim?

Solution:

Diameter = 4 m, therefore ⊥ distance = 2 m

Moment of force = Force × ⊥ distance

5.0 N m = F × 2 m

F = 2.5 N

Question 3: The moment of a force of 60 N about a point is 3 N m. Find the perpendicular distance of the force from that point.

Solution:

Force applied = 60 N

Moment of force = Force × ⊥ distance

3 = 60 × ⊥ distance

⊥ distance = 3/60 = 1/20 m = 5 cm

Question 4: Find the thrust required to exert a pressure of 40000 pascals on an area of 0.006 m2.

Solution:

Pressure = Force/Area, so Force = Pressure × Area

F = 40000 × 0.006 = 240 N

Question 5: A car moving with a uniform velocity of 55 km/h is brought to rest in travelling a distance of 2.5 m. Compute the deceleration produced by the brakes.

Solution:

Given: Initial velocity u = 55 km/h ≈ 15.28 m/s, final velocity v = 0, distance covered s = 2.5 m

We know that v2 = u2 + 2as

Deceleration a = (v2 – u2)/(2s) = –(15.28)2/(2 × 2.5) ≈ –46.7 m/s2

Question 6: A toy automobile accelerates from 2 m/s to 6 m/s in 4 s. What is its acceleration?
Solution:

Given: Initial velocity u = 2 m/s, final velocity v = 6 m/s, time taken t = 4 s.

The acceleration is given by a = (v – u)/t = (6 – 2)/4 = 1 m/s2

Question 7: From a bridge, a stone is released into the river. It takes 5 s for the stone to reach the river's water surface. Calculate the height of the bridge from the water level.

Solution:

Because the stone was at rest, initial velocity u = 0

Time taken t = 5 s

Acceleration due to gravity a = g = 9.8 m/s2

Distance covered by stone = height of bridge = s

The distance covered is given by s = ut + (1/2)at2

Therefore, s = 0 + (1/2) × 9.8 × 52 = 122.5 m
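The acceleration and deceleration formulas used in the sample problems can be collected into a short sketch; the first call reuses the toy-automobile values from Question 6, and the braking numbers are illustrative, not from the text:

```python
def acceleration(u: float, v: float, t: float) -> float:
    """a = (v - u) / t: from initial velocity u to final velocity v in time t."""
    return (v - u) / t

def deceleration_from_distance(u: float, v: float, s: float) -> float:
    """a = (v**2 - u**2) / (2*s), from v^2 = u^2 + 2as; negative when slowing."""
    return (v**2 - u**2) / (2 * s)

# Question 6: 2 m/s to 6 m/s in 4 s.
print(acceleration(2, 6, 4))                  # 1.0 m/s^2
# Illustrative braking example: 20 m/s to rest over 50 m.
print(deceleration_from_distance(20, 0, 50))  # -4.0 m/s^2
```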
http://mathhelpforum.com/statistics/204514-maths.html
1. ## Maths

Hi,

For each Australian dollar, Amy receives 80 United States cents, i.e. $A1 = US80c. She wants $US600 for a trip; how much should she receive in Australian dollars?

A) $480  B) $800  C) $750  D) $720

2. ## Re: Maths

For each Australian dollar, Amy receives 4/5 U.S. dollars. Letting x be the number of Australian dollars, we can multiply this by 4/5 and equate to 600 to find the number of Australian dollars that is equivalent to 600 U.S. dollars.

$\frac{4}{5}\cdot x=600$

Now solve for x.

3. ## Re: Maths

What is the best way of finding x? Guess? Trial and error?

4. ## Re: Maths

No, solve the equation. If you multiply both sides by 5/4, you will be left with x on the left, and on the right you have 5/4 times 600, which is the answer to the question.

5. ## Re: Maths

Thanks. The only way I could figure out doing it was multiplying A), B), C), D) until one gave me 600.
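The equation from the thread, (4/5)·x = 600, can be solved directly by multiplying both sides by 5/4 rather than testing each answer choice; a quick sketch:

```python
from fractions import Fraction

# (4/5) * x = 600  ->  x = 600 * (5/4)
rate = Fraction(4, 5)   # US dollars received per Australian dollar
us_needed = 600
aud = us_needed / rate  # multiply both sides by the reciprocal, 5/4

print(aud)         # 750, answer C
# Check: converting 750 AUD at 80 US cents per dollar gives back 600 USD.
print(aud * rate)  # 600
```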
http://web.mit.edu/16.unified/www/FALL/thermodynamics/notes/node49.html
# 6.6 Entropy and Unavailable Energy (Lost Work by Another Name)

Consider a system consisting of a heat reservoir at $T_2$ in surroundings (the atmosphere) at $T_0$. The surroundings are equivalent to a second reservoir at $T_0$. For an amount of heat, $Q$, transferred from the reservoir, the maximum work we could derive is $Q$ times the thermal efficiency of a Carnot cycle operated between these two temperatures:

$$W_{max} = Q\left(1 - \frac{T_0}{T_2}\right). \qquad (6.7)$$

Only part of the heat transferred can be turned into work; in other words, only part of the heat energy is available to be used as work.

Suppose we transferred the same amount of heat from the reservoir directly to another reservoir at a temperature $T_1 < T_2$. The maximum work available from the quantity of heat, $Q$, before the transfer to the reservoir at $T_1$ is

$$W_{max,T_2} = Q\left(1 - \frac{T_0}{T_2}\right).$$

The maximum amount of work available after the transfer to the reservoir at $T_1$ is

$$W_{max,T_1} = Q\left(1 - \frac{T_0}{T_1}\right).$$

There is an amount of energy that could have been converted to work prior to the irreversible heat transfer process, of magnitude $E'$:

$$E' = Q\left(\frac{T_0}{T_1} - \frac{T_0}{T_2}\right) = T_0\left(\frac{Q}{T_1} - \frac{Q}{T_2}\right).$$

However, $Q/T_1$ is the entropy gain of the reservoir at $T_1$ and $(-Q/T_2)$ is the entropy decrease of the reservoir at $T_2$. The amount of energy, $E'$, that could have been converted to work (but now cannot be) can therefore be written in terms of entropy changes and the temperature of the surroundings as

$$E' = T_0\left(\Delta S_{T_1} + \Delta S_{T_2}\right) = T_0\,\Delta S_{total}.$$

The situation just described is a special case of an important principle concerning entropy changes, irreversibility and the loss of capability to do work. We thus now develop it in a more general fashion, considering an arbitrary system undergoing an irreversible state change, which transfers heat to the surroundings (for example the atmosphere), which can be assumed to be at constant temperature, $T_0$. The change in internal energy of the system during the state change is $\Delta U = Q - W$. The change in entropy of the surroundings is (with $Q$ the heat transfer to the system)

$$\Delta S_{surroundings} = -\frac{Q}{T_0}.$$

Now consider restoring the system to the initial state by a reversible process. To do this we need to do work, $W_{rev}$, on the system and extract from the system a quantity of heat, $Q_{rev}$. (We did this, for example, in "undoing" the free expansion process.) The change in internal energy is

$$\Delta U_{rev} = W_{rev} - Q_{rev}$$

(with the quantities $Q_{rev}$ and $W_{rev}$ both regarded, in this example, as positive for work done by the surroundings and heat given to the surroundings). In this reversible process, the entropy of the surroundings is changed by

$$\Delta S_{surroundings,rev} = \frac{Q_{rev}}{T_0}.$$

For the combined changes (the irreversible state change and the reversible state change back to the initial state), the energy change is zero because the energy is a function of state,

$$\Delta U + \Delta U_{rev} = (Q - W) + (W_{rev} - Q_{rev}) = 0.$$

Thus,

$$Q_{rev} - Q = W_{rev} - W.$$

For the system, the overall entropy change for the combined process is zero, because the entropy is a function of state,

$$\Delta S_{system} = 0.$$

The total entropy change is thus only reflected in the entropy change of the surroundings:

$$\Delta S_{total} = \Delta S_{surroundings}.$$

The surroundings can be considered a constant temperature heat reservoir and their entropy change is given by

$$\Delta S_{total} = \frac{Q_{rev} - Q}{T_0}.$$

We also know that the total entropy change, for system plus surroundings, is

$$\Delta S_{total} = \Delta S_{system} + \Delta S_{surroundings}.$$

The total entropy change is associated only with the irreversible process and is related to the work in the two processes by

$$\Delta S_{total} = \frac{W_{rev} - W}{T_0}.$$

The quantity $W_{rev} - W$ represents the extra work required to restore the system to the original state. If the process were reversible, we would not have needed any extra work to do this. It represents a quantity of work that is now unavailable because of the irreversibility. The quantity $W_{rev}$ can also be interpreted as the work that the system would have done if the original process were reversible. From either of these perspectives we can identify $(W_{rev} - W)$ as the quantity we denoted previously as $W_{lost}$, representing lost work. The lost work in any irreversible process can therefore be related to the total entropy change (system plus surroundings) and the temperature of the surroundings by

$$W_{lost} = T_0\,\Delta S_{total}.$$

To summarize the results of the above arguments for processes where heat can be exchanged with the surroundings at $T_0$:

1. $W_{rev} - W$ represents the difference between work we actually obtained and work that would be done during a reversible state change. It is the extra work that would be needed to restore the system to its initial state.
2. For a reversible process, $W_{rev} = W$; $\Delta S_{total} = 0$.
3. For an irreversible process, $W_{rev} > W$; $\Delta S_{total} > 0$.
4. $T_0\,\Delta S_{total}$ is the energy that becomes unavailable for work during an irreversible process.

Muddy Points

Is $W_{lost}$ path dependent? (MP 6.11)

Are $Q_{rev}$ and $W_{rev}$ the $Q$ and $W$ going from the final state back to the initial state? (MP 6.12)
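The two-reservoir argument can be checked numerically: the work made unavailable by transferring heat Q from a reservoir at T2 to one at T1 equals T0 times the total entropy change. A sketch with illustrative temperatures (not values from the text):

```python
def max_work(Q: float, T_res: float, T0: float) -> float:
    """Carnot-limited work from heat Q drawn from a reservoir at T_res,
    rejecting heat to surroundings at T0."""
    return Q * (1 - T0 / T_res)

# Illustrative values: Q = 1000 J, T2 = 600 K, T1 = 400 K, T0 = 300 K.
Q, T2, T1, T0 = 1000.0, 600.0, 400.0, 300.0

# Energy no longer convertible to work after the irreversible transfer:
E_lost = max_work(Q, T2, T0) - max_work(Q, T1, T0)
# Total entropy change: gain Q/T1 at the cold reservoir, loss Q/T2 at the hot one.
dS_total = Q / T1 - Q / T2

print(E_lost)                   # 250.0 J
print(round(T0 * dS_total, 9))  # 250.0 J, matching E' = T0 * dS_total
```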
http://mymathforum.com/algebra/340281-solve-pressure-applied.html
My Math Forum Solve For Pressure Applied Algebra Pre-Algebra and Basic Algebra Math Forum April 30th, 2017, 08:24 PM #1 Member   Joined: Jan 2017 From: US Posts: 86 Thanks: 5 Solve For Pressure Applied The volume V of a gas varies inversely as the pressure P on it. If the volume is 250 cm$\displaystyle ^3$ under a pressure of 35 kg/cm$\displaystyle ^2$, solve for the pressure applied to have a volume of 180 cm$\displaystyle ^3$. Be sure to define your variables, find your constant variable, construct your variation equation, and then solve. May 2nd, 2017, 01:23 AM #2 Senior Member   Joined: Apr 2014 From: Glasgow Posts: 2,049 Thanks: 680 Math Focus: Physics, mathematical modelling, numerical and computational solutions The question states the steps you need to take to solve it. 1. Define your variables 2. Find your constant variable 3. Construct your variation equation 4. Solve it The question also hints at the things that are going to be variables in your equation, volume and pressure, and how they are related to each other. Therefore, have a go and let us know how it goes Thanks from Indigo28 May 3rd, 2017, 07:05 PM #3 Member   Joined: Jan 2017 From: US Posts: 86 Thanks: 5 V= 250 cm$\displaystyle ^3$ when P=35 kg/cm$\displaystyle ^2$ Correct me if I'm wrong, but since the volume depends on the pressure applied, the constant variable would be V=250cm$\displaystyle ^3$, right? So: y=250cm$\displaystyle ^3$ y=kx when p=35 kg/cm$\displaystyle ^2$, v=250cm$\displaystyle ^3$ 35kg/cm$\displaystyle ^2$=k*250cm$\displaystyle ^3$ So the equation would be 35kg/cm$\displaystyle ^2$=250cm$\displaystyle ^3$*x. Am I at all on the right track here?
May 4th, 2017, 02:10 AM   #4 Senior Member Joined: Apr 2014 From: Glasgow Posts: 2,049 Thanks: 680 Math Focus: Physics, mathematical modelling, numerical and computational solutions Quote: Originally Posted by Indigo28 V= 250 cm$\displaystyle ^3$ when P=35 kg/cm$\displaystyle ^2$ Correct me if I'm wrong, but since the volume depends on the pressure applied, the constant variable would be V=250cm$\displaystyle ^3$, right? Nope, the volume variable, V, varies with the pressure variable, P, so if you change the pressure, you change the volume... Here's some further information that might help: There are different names for different quantities in equations depending on their behaviour. - Variables: These are quantities that can change and are usually things that describe the state of a system at a particular point in time. - Independent variables: These are particular variables that do not depend on anything. Usually they are things like space and time, which you characterise everything else in terms of. Most calculations are described as 1D, 2D, 3D, etc. based on the number of independent variables used (hence some people describing 4D as referring to 3 space variables and 1 time variable). - Dependent variables: These are particular variables which have dependencies on other things. They have relationships between each other which describe how they vary. The relationships can be described using proportionalities or functions/formulae. - Constants: These are quantities that do not change. Therefore, they usually describe the system as a whole or how its processes are working. Variables that just happen to stay the same for some sort of process are sometimes called constants. - Parameters: These are constants that are required to solve something at the beginning of a calculation. They are often called input parameters. Now some information on relationships between variables. Consider two variables, $\displaystyle a$ and $\displaystyle b$...
- Proportional This means that if you increase one thing, you increase the other (so if you double one quantity, you double the other). Written down, this is $\displaystyle a \propto b$ where the $\displaystyle \propto$ symbol just means "proportional to". You can turn any proportionality into an equation by multiplying the right-hand side by a constant (most people use k) and swapping the $\displaystyle \propto$ with an equals sign. So you end up with $\displaystyle a = kb$ If you plot $\displaystyle a$ as a function of $\displaystyle b$ you get a straight line on a plot going through the origin, so this relationship is often called a "linear relationship". - Inversely proportional This means that if you increase one thing, you decrease the other (so if you double one quantity, you halve the other). Written down, this is $\displaystyle a \propto \frac{1}{b}$ Making an equation from this looks like $\displaystyle a = \frac{k}{b}$ In the question there is a statement: "volume of a gas varies inversely as the pressure". That means that both the volume and the pressure are variables and the relationship between those two variables is inverse proportionality (i.e. you halve one thing, you double the other). Written down, this looks like $\displaystyle V \propto \frac{1}{P}$ Turning this into an equation gives $\displaystyle V = \frac{k}{P}$ where k is the constant. You can then substitute the initial volume and pressure parameters to find k. Once you have k, you can use that together with the target volume to get the pressure required. Last edited by Benit13; May 4th, 2017 at 02:15 AM.
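The steps just described can be carried out in a few lines; a minimal sketch in Python (the numbers are the ones given in the question):

```python
# V varies inversely as P:  V = k / P, so k = V * P is the constant.
V1, P1 = 250.0, 35.0      # initial volume (cm^3) and pressure (kg/cm^2)
V2 = 180.0                # target volume (cm^3)

k = V1 * P1               # constant of variation: 250 * 35 = 8750
P2 = k / V2               # pressure giving the target volume

print(f"k = {k:.0f}, required pressure = {P2:.2f} kg/cm^2")  # about 48.61
```

So a pressure of roughly 48.6 kg/cm^2 compresses the gas to 180 cm^3.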
http://math.stackexchange.com/questions/221016/if-every-subset-of-m-is-clopen-then-show-that-any-function-f-m-rightarrow/221017
# If every subset of $M$ is clopen, then show that any function $f: M \rightarrow N$ is continuous, where $M$ and $N$ are metric spaces Here is the reasoning I think fleshes things out more. Some say that the fact follows obviously from the definitions (which may be true), but students often want an answer that explains in some level of detail. What do you think of the following answer? Does it explain enough, or are there areas that need more elaboration? Any comments are appreciated. - Let $f: M \rightarrow N$, where $M$ and $N$ are metric spaces (or topological spaces). By the topological definition of continuity, $f$ is continuous iff the pre-image of each open (closed) set in $N$ is also open (closed) in $M$. Now pick any open subset $X \subseteq N$. The pre-image of $X$ under $f$ is a subset of $M$, so it must be open, since we are given that every subset of $M$ is open (being clopen). QED.
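A tightened write-up of the same argument might read as follows (a sketch; the discrete-metric remark is an added illustration, not part of the original question):

```latex
\textbf{Claim.} If every subset of $M$ is clopen (e.g.\ $M$ with the
discrete metric), then every function $f\colon M \to N$ is continuous.

\textbf{Proof.} Let $U \subseteq N$ be open. Then $f^{-1}(U)$ is a subset
of $M$, and by hypothesis every subset of $M$ is open; in particular
$f^{-1}(U)$ is open. Since the preimage of every open set is open, $f$ is
continuous. \qed
```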
https://raweb.inria.fr/rapportsactivite/RA2016/deducteam/uid8.html
Section: Research Program Models of computation The idea of Deduction modulo is that computation plays a major role in the foundations of mathematics. This led us to investigate the role played by computation in other sciences, in particular in physics. Some of this work can be seen as a continuation of Gandy's [31], on the fact that the physical Church-Turing thesis is a consequence of three principles of physics: two well-known ones, the homogeneity of space and time and the existence of a bound on the velocity of information, and one more speculative, the existence of a bound on the density of information. This led us to develop physically oriented models of computation.
https://www.studyadda.com/question-bank/fractions_q7/3383/283470
# What is the sum of the shaded parts of the given figures? A)  $1\frac{2}{3}$ B)  $1\frac{1}{4}$ C)  $1\frac{3}{4}$ D)  $2\frac{1}{4}$ For figure I: total number of equal parts = 4, number of shaded parts = 3, $\therefore$ shaded fraction $=\frac{3}{4}$. For figure II: total number of equal parts = 16, number of shaded parts = 8, $\therefore$ shaded fraction $=\frac{8}{16}=\frac{1}{2}$. So, the required sum $=\frac{3}{4}+\frac{1}{2}=\frac{3+2}{4}=\frac{5}{4}=1\frac{1}{4}$, which is option B.
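The same sum can be checked with exact rational arithmetic; a quick sketch using Python's standard `fractions` module:

```python
from fractions import Fraction

f1 = Fraction(3, 4)    # figure I: 3 of 4 equal parts shaded
f2 = Fraction(8, 16)   # figure II: 8 of 16 parts shaded; reduces to 1/2

total = f1 + f2        # exact rational arithmetic
print(total)           # 5/4, i.e. the mixed number 1 1/4 (option B)
```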
http://math.stackexchange.com/users/23290/harald-hanche-olsen?tab=activity&sort=all&page=4
Harald Hanche-Olsen Reputation 22,944 Top tag Next privilege 25,000 Rep. Mar 10 awarded Informed Mar 9 comment Let $p_n$ be the $n$th prime, for any integer $n$, prove that: $p_n+p_{n+1}\geq{p_{n+2}}$ @DDaren \sim will come out as $\sim$. Mar 9 comment Can $n^4 + n^3 + n^2 + n + 1$ for $n \in \mathbb{N} \backslash \{ 0,3\}$ yield a perfect square number? @Silenttiffy Because we're looking for a perfect square, meaning the square of an integer, so $n^2+n/2+x$ has to be an integer. Given that $n$ is an integer, that means $2x$ must be an integer. Mar 7 comment Can $n^4 + n^3 + n^2 + n + 1$ for $n \in \mathbb{N} \backslash \{ 0,3\}$ yield a perfect square number? @Silenttiffy Try $x=0$: $(n^2+n/2)^2=n^4+n^3+n^2/4 < n^4+n^3+n^2+n+1$. Mar 5 answered Deduce that if $A$ is a subset of $C$, then $\sup A\leq \sup C$. Mar 5 comment Deduce that if $A$ is a subset of $C$, then $\sup A\leq \sup C$. To your last question: Yes, that is possible. But it has no bearing on the main question. Mar 4 comment Simplify the function of x To me, this answer looks right if the question were about the limit of $(1+x)(1+x^2)(1+x^4)\cdots(1+x^{2^n})$. What am I missing? Mar 4 comment Simplify the function of x Convolution is something like this: $f*g(x)=\int_{-\infty}^\infty f(t)g(x-t)\,dt$. Mar 4 comment Simplify the function of x But please, don't use $*$ for multiplication. That is the symbol for convolution. (I fixed it for you.) Mar 4 revised Simplify the function of x Don't use * for multiplication Mar 1 comment What does omega limit sets have with invariant sets? That invariant sets only exists in two dimensions, is news to me. I think your perception that this is so is an artifact of your textbook, which (I guess) does all the theory in two dimensions first. Feb 27 comment Proving that if a sequence converges weakly, then their set of norms is bounded. Uniform boundedness principle, a.k.a. Banach–Steinhaus? Feb 26 comment Why is the ring of integers initial in Ring?
@MarianoSuárez-Alvarez Ah, that makes sense. Sorry I didn't catch on. Feb 26 answered Why is the ring of integers initial in Ring? Feb 26 comment Why is the ring of integers initial in Ring? @MarianoSuárez-Alvarez Surely, any ring is a $\mathbb{Z}$-algebra, and the dot must be multiplication by a scalar in that algebra? Feb 26 comment Series Convergence $\sum_{n=1}^{\infty} \frac{1}{n} \left(\frac{2n+2}{2n+4}\right)^n$ @LudovicoL Yes, since $(1-1/k)^{k-2}\to e^{-1}$ when $k\to\infty$, and the harmonic series diverges. Feb 26 answered Series Convergence $\sum_{n=1}^{\infty} \frac{1}{n} \left(\frac{2n+2}{2n+4}\right)^n$ Feb 25 comment Mathematical induction for inequalities That's it, indeed. The sum of the middle positive term and the one negative term is easily simplified, so start there. You can get fancy and use the AM-HM inequality, or you can get the needed inequality by hand. Feb 25 comment Differential Geometry-Wedge product There are many equivalent ways of organizing the definitions that go into that, and the answer to your question depends on which one of them has been used. Feb 25 answered Mathematical induction for inequalities
https://weeklymathematics.com/2019/02/16/counterexamples-in-topology-and-algebra-i/
# Counterexamples in Topology and Algebra-I This is a part of a series of posts which will deal with different types of counterexamples in topology and algebra. So, what exactly is a counterexample in mathematics? It must be an example which disproves a conjecture, but here I view it more broadly. A counterexample is simply an interesting example which deviates from the common results associated with the various 'common' spaces and objects (e.g. Euclidean space and the real line) which most students encounter. The mathematical intuition, so to speak, is a truly fascinating aspect of studying mathematics. While the ability to condense a block full of mathematical jargon and symbols into a single precise 'picture', or to perform the opposite, that is, delineating mysterious mathematical intuition into pages of rigorous proof and exposition, may sound quite romantic, there is quite often the chance of failing to translate intuition to reality: the counterexample. Topology is a notorious example of this unfortunate possibility, especially in instances of generalizing finite-dimensional results. But the mathematician must persevere, transcending spatial imagery while still grounding his work, in some manner or another, on the basic axioms of mathematics and the Euclidean space. Starting from the well-known Hausdorff condition and the distinction between the box and product topology, I often question whether my incessant quest for abstraction is leading me astray, either to the swamp of unscrupulous simplistic intuition or to a bleak cave of incoherent definitions and theory where I forever sit, merely content with knowing but not understanding, the daring step to rebuild my intuition a lost dream. Terence Tao has a rather interesting take on the topic. In his blog, he discussed how mathematicians learn and evolve from their naive mathematical thinking but finally revert back to it, albeit with new insight and wisdom which they previously lacked: a refined intuition of sorts.
Counterexamples are a great way to properly 'train' our intuition and unearth our mathematical insecurities. Enough rambling, here are some pretty cool counter-examples to think of. ## THE DELETED COMB SPACE A topological space $X$ is connected if it is not possible to write it as the union of two non-empty disjoint open subsets of $X$. A topological space $X$ is path-connected if for every pair of points $x,y$ in $X$, there exists a continuous function/path $p:[0,1] \mapsto X$ such that $p(0)=x,p(1)=y$. Now, it is easy to see that a path-connected space is always connected, but the converse need not be true. The best example of the failure of the converse is the Topologist's sine curve defined as $X=\{(x,\sin(\frac{1}{x}))|x>0\} \cup \{(0,0)\}$. The Deleted Comb Space is another amazing example of this failure. It is defined as $D=([0,1] \times \{0\}) \cup \bigcup\limits_{n=1}^{\infty} (\{\frac{1}{n}\} \times [0,1]) \cup \{(0,1)\}$ Proving that it is connected is pretty simple. Now, to prove that $D$ is not path-connected, we'll show that no path connects $(0,1)$ to any other point of $D$. First, we assume that there exists a path $p:[0,1] \mapsto D$ such that $p(0)=(0,1)$. Then, we show that $p(t)=(0,1)$ for all $t \in [0,1]$, which means that the path can't 'move past' $(0,1)$. Consider $A=\{t \in [0,1]\,|\, p(t)=(0,1) \}$. The proof is complete when we show that $A$ is both closed and open (since $A$ is non-empty, as $0 \in A$, and the only non-empty clopen subset of the connected space $[0,1]$ is $[0,1]$ itself). $A$ is closed since $A=p^{-1}(\{(0,1)\})$ is the pre-image of a closed set under a continuous map. Choose some $t_{0} \in A$. By the continuity of $p$, there exists some $\delta >0$ such that $B(t_{0};\delta) \subset p^{-1}(V)$, where $V$ is an open neighbourhood of $(0,1)$ in $D$ (under the subspace topology) chosen so that $V$ doesn't intersect the X-axis. Now, the projection function $\pi_{1}:\mathbb{R}^{2} \mapsto \mathbb{R}$ defined by $\pi_{1}(x,y)=x$ is continuous.
Since the composition of two continuous functions is also continuous, the function $f:B(t_{0};\delta) \mapsto \mathbb{R}$ given by $f(t)=\pi_{1}(p(t))$ is continuous. Let $I=B(t_{0};\delta) \cap [0,1]$. $I$ is clearly connected. So, $f(I)$ is also connected. Since $V$ misses the X-axis, every point of $p(I)$ lies either on one of the teeth or at $(0,1)$, so $f(I) \subset X$ where $X=\{0\} \cup \bigcup\limits_{n=1}^{\infty} \{\frac{1}{n}\}$. Now $f(I)$ is a connected subset of $\mathbb{R}$, and it contains $0$, since $t_{0} \in I$ and $f(t_{0})=\pi_{1}(p(t_{0}))=\pi_{1}(0,1)=0$. If $f(I)$ also contained some point $\frac{1}{n}$, then, being a connected subset of $\mathbb{R}$, it would contain the whole interval $[0,\frac{1}{n}]$, which is not contained in $X$. Therefore $f(I)=\{0\}$, i.e. every point of $p(I)$ has first coordinate $0$. Since $V$ avoids the X-axis, the only point of $D$ with first coordinate $0$ that $p(I)$ can contain is $(0,1)$. So, $I \subset A$. This means that $A$ is open, as for every $t_{0} \in A$ there exists a neighbourhood $I$ of $t_{0}$ in $[0,1]$ contained in $A$. The only non-empty clopen subset of $[0,1]$ is $[0,1]$ itself. So, $A=[0,1]$ and the function $p$ is a constant function, which contradicts the assumption that $p$ is a path from $(0,1)$ to some other point of $D$.
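In summary, the whole argument hinges on one line (a sketch in the notation already introduced):

```latex
A = p^{-1}\bigl(\{(0,1)\}\bigr) \text{ is non-empty } (0 \in A),
\text{ closed (continuity of } p\text{), and open (the projection argument)}
\;\Longrightarrow\; A = [0,1] \text{ by connectedness of } [0,1].
```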
https://issuecounsel.com/argument/tidal-energy-generates-very-little-energy-comparatively/
# Argument: Tidal energy generates very little energy comparatively ## Support Ocean Energy Council: Tidal Energy – "there is only one major tidal generating station in operation. This is a 240 megawatt (1 megawatt = 1 MW = 1 million watts) station at the mouth of the La Rance river estuary on the northern coast of France (a large coal or nuclear power plant generates about 1,000 MW of electricity). The La Rance generating station has been in operation since 1966 and has been a very reliable source of electricity for France." "PROS AND CONS OF TIDAL ENERGY USE". Energy Consumers Edge – The World Energy Council Ocean Current Report states that the total electrical power available from tidal energy use is about 450 GW of installed capacity. The report is a bit confusing to read and appears to mix tidal information with other ocean-current information. Still, it's worth reading. That 450 GW figure seems to be compatible with the data on the WEC Tidal Energy page. If we apply the .27 average load factor for tidal energy use, we can expect it to deliver about 450 GW x 24 hrs x 365 days x .27 LF = 1064 TWh (terawatt-hours) annually, or a little over 6% of global electrical demand. James Nash. "The Power of Tidal Energy". ArticleBase. 22 Aug. 2008 – This energy of tidal waves is harnessed by trapping the water so that it is used to turn turbines. The energy so produced is released through tidal barrages found in either direction. However, implementation of tidal power technology worldwide generally proves to have little potential because of its environmental constraints. Another reason attributed to the low potential of tidal wave energy is that it produces electricity in bursts, at intervals of perhaps six hours, rather than steadily; these limits of tidal energy applications hinder its use.
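The back-of-the-envelope conversion above is easy to verify (the inputs are the figures quoted from the WEC report):

```python
installed_GW = 450            # WEC estimate of installed tidal capacity, GW
load_factor = 0.27            # average load factor quoted for tidal energy
hours_per_year = 24 * 365     # 8760 hours

# GW * hours = GWh; divide by 1000 to express the result in TWh
annual_TWh = installed_GW * hours_per_year * load_factor / 1000
print(f"annual output ~ {annual_TWh:.0f} TWh")   # ~1064 TWh, as quoted
```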
https://www.physicsforums.com/threads/central-force-motion.297544/
# Homework Help: Central Force motion 1. Mar 5, 2009 ### roeb 1. The problem statement, all variables and given/known data Consider the motion of a particle in the central force F(r) = -kr. Solve for the particle's location as a function of time, r(t) and theta(t). 2. Relevant equations 3. The attempt at a solution $$E = \frac{1}{2} m r'^2 + \frac{L^2}{2mr^2} + U(r)$$ I know U(r) = 1/2 k r^2 $$\frac{dr}{dt} = \sqrt{\frac{2}{m}(E-U(r)) - \frac{L^2}{m^2 r^2}}$$ (where L is the angular momentum). Plugging in U(r), I get a really nasty integral: $$dt = \frac{r \, dr}{\sqrt{2E r^2/m - k r^4/m - L^2/m^2}}$$ According to my professor I can use a trig sub to solve this, but I am not getting anywhere. Is there some sort of relationship that I am missing? I know it's supposed to be an ellipse but I can't seem to get any sort of substitution to work. Last edited: Mar 5, 2009 2. Mar 5, 2009 ### weejee This integral is such a mess. I think the easiest way is to solve this problem in Cartesian coordinates and convert the result to polar coordinates.
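weejee's suggestion works because F = -kr is an isotropic harmonic oscillator: in Cartesian coordinates the equations decouple into m x'' = -k x and m y'' = -k y, so (with axes chosen along the principal axes of the orbit) x(t) = A cos(wt), y(t) = B sin(wt) with w = sqrt(k/m), and r(t), theta(t) follow by conversion. A quick numerical sanity check of this claim (the values of m, k, A, B below are arbitrary):

```python
import math

m, k = 2.0, 8.0
w = math.sqrt(k / m)               # omega = sqrt(k/m) = 2.0
A, B = 3.0, 1.0                    # amplitudes fixed by E and L

def position(t):
    """General orbit of m x'' = -k x, m y'' = -k y (axes along the ellipse)."""
    return A * math.cos(w * t), B * math.sin(w * t)

def polar(t):
    """Convert the Cartesian solution to r(t), theta(t)."""
    x, y = position(t)
    return math.hypot(x, y), math.atan2(y, x)

# Verify m x'' = -k x with a centred finite difference at a few times
h = 1e-4
for t in (0.3, 1.1, 2.7):
    x0, _ = position(t)
    xm, _ = position(t - h)
    xp, _ = position(t + h)
    ax = (xp - 2 * x0 + xm) / h**2
    assert abs(m * ax + k * x0) < 1e-5     # equation of motion holds

r0, th0 = polar(0.0)
print(r0, th0)   # starts on the major axis: r = A, theta = 0
```

Note the orbit is an ellipse centred on the force centre (not on a focus, as in the Kepler problem), with r(t)^2 = A^2 cos^2(wt) + B^2 sin^2(wt).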
http://mathhelpforum.com/differential-equations/140707-true.html
# Math Help - Is this true? 1. ## Is this true? If y1 = e^(ax)*Cos(Bx) is a solution of a certain homogeneous second order linear equation with constant coefficients, then is y2 = e^(ax)*Sin(Bx) also a solution of this equation? 2. Originally Posted by DCU If y1 = e^(ax)*Cos(Bx) is a solution of a certain homogeneous second order linear equation with constant coefficients then y2 = e^(ax)*Sin(Bx) is also a solution of this equation? Saying that $e^{ax}\cos(Bx)$ is a solution of a certain homogeneous second order linear equation with constant coefficients implies that a + Bi is a root of the characteristic equation. IF the differential equation (and therefore the characteristic equation) has real coefficients, then a - Bi must also be a root of the characteristic equation, and so $e^{ax}\sin(Bx)$ is an independent solution to the differential equation. However, if any of the coefficients is a non-real complex number, that is not necessarily true. 3. So it's true if the coefficients are real numbers but not necessarily if they aren't?
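For what it's worth, the real-coefficient case is easy to test numerically: with characteristic roots a ± Bi the equation is y'' - 2a y' + (a^2 + B^2) y = 0, and both e^(ax)Cos(Bx) and e^(ax)Sin(Bx) should make the residual vanish (the sample values of a and B below are arbitrary):

```python
import math

a, B = 0.5, 3.0     # characteristic roots a +/- Bi, coefficients real
# The corresponding equation is  y'' - 2a y' + (a^2 + B^2) y = 0.

def residual(y, x, h=1e-4):
    """y'' - 2a y' + (a^2 + B^2) y, via centred finite differences."""
    d1 = (y(x + h) - y(x - h)) / (2 * h)
    d2 = (y(x + h) - 2 * y(x) + y(x - h)) / h**2
    return d2 - 2 * a * d1 + (a**2 + B**2) * y(x)

y1 = lambda x: math.exp(a * x) * math.cos(B * x)
y2 = lambda x: math.exp(a * x) * math.sin(B * x)

for x in (0.0, 0.7, 2.0):
    assert abs(residual(y1, x)) < 1e-4   # y1 solves the equation
    assert abs(residual(y2, x)) < 1e-4   # ...and so does y2
```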
http://en.wikipedia.org/wiki/Mahlo_cardinal
Mahlo cardinal In mathematics, a Mahlo cardinal is a certain kind of large cardinal number. Mahlo cardinals were first described by Paul Mahlo (1911, 1912, 1913). As with all large cardinals, none of these varieties of Mahlo cardinals can be proved to exist by ZFC (assuming ZFC is consistent). A cardinal number κ is called Mahlo if κ is inaccessible and the set U = {λ < κ: λ is inaccessible} is stationary in κ. A cardinal κ is called weakly Mahlo if κ is weakly inaccessible and the set of weakly inaccessible cardinals less than κ is stationary in κ. Minimal condition sufficient for a Mahlo cardinal • If κ is a limit ordinal and the set of regular ordinals less than κ is stationary in κ, then κ is weakly Mahlo. The main difficulty in proving this is to show that κ is regular. We will suppose that it is not regular and construct a club set which gives us a μ such that: μ = cf(μ) < cf(κ) < μ < κ which is a contradiction. If κ were not regular, then cf(κ) < κ. We could choose a strictly increasing and continuous cf(κ)-sequence which begins with cf(κ)+1 and has κ as its limit. The limits of that sequence would be club in κ. So there must be a regular μ among those limits. So μ is a limit of an initial subsequence of the cf(κ)-sequence. Thus its cofinality is less than the cofinality of κ and greater than it at the same time; which is a contradiction. Thus the assumption that κ is not regular must be false, i.e. κ is regular. No stationary set can exist below $\aleph_0$ with the required property because {2,3,4,...} is club in ω but contains no regular ordinals; so κ is uncountable. And it is a regular limit of regular cardinals; so it is weakly inaccessible. Then one uses the set of uncountable limit cardinals below κ as a club set to show that the stationary set may be assumed to consist of weak inaccessibles. • If κ is weakly Mahlo and also a strong limit, then κ is Mahlo. κ is weakly inaccessible and a strong limit, so it is strongly inaccessible. 
We show that the set of uncountable strong limit cardinals below κ is club in κ. Let μ₀ be the larger of the threshold and ω₁. For each finite n, let μₙ₊₁ = 2^(μₙ), which is less than κ because κ is a strong limit cardinal. Then the limit of the μₙ is a strong limit cardinal and is less than κ by its regularity. The limits of uncountable strong limit cardinals are also uncountable strong limit cardinals, so the set of them is club in κ. Intersect that club set with the stationary set of weakly inaccessible cardinals less than κ to get a stationary set of strongly inaccessible cardinals less than κ.

Example: showing that Mahlo cardinals are hyper-inaccessible

Suppose κ is Mahlo. We proceed by transfinite induction on α to show that κ is α-inaccessible for every α ≤ κ. Since κ is Mahlo, κ is inaccessible, and thus 0-inaccessible, which is the same thing.

If κ is α-inaccessible, then there are β-inaccessibles (for β < α) arbitrarily close to κ. Consider the set of simultaneous limits of such β-inaccessibles which are larger than some threshold but less than κ. It is unbounded in κ (imagine rotating through the β-inaccessibles for β < α ω times, choosing a larger cardinal each time, then taking the limit, which is less than κ by regularity; this is what fails if α ≥ κ). It is closed, so it is club in κ. So, by κ's Mahlo-ness, it contains an inaccessible. That inaccessible is actually an α-inaccessible. So κ is (α+1)-inaccessible.

If λ ≤ κ is a limit ordinal and κ is α-inaccessible for all α < λ, then every β < λ is also less than some α < λ, so this case is trivial. In particular, κ is κ-inaccessible and thus hyper-inaccessible.

To show that κ is a limit of hyper-inaccessibles and thus 1-hyper-inaccessible, we need to show that the diagonal set of cardinals μ < κ which are α-inaccessible for every α < μ is club in κ. Choose a 0-inaccessible above the threshold and call it α₀. Then pick an α₀-inaccessible and call it α₁.
Keep repeating this, taking limits at limit stages, until you reach a fixed point; call it μ. Then μ has the required property (being a simultaneous limit of α-inaccessibles for all α < μ) and is less than κ by regularity. Limits of such cardinals also have the property, so the set of them is club in κ. By the Mahlo-ness of κ, there is an inaccessible in this set, and it is hyper-inaccessible. So κ is 1-hyper-inaccessible. We can intersect this same club set with the stationary set less than κ to get a stationary set of hyper-inaccessibles less than κ. The rest of the proof that κ is α-hyper-inaccessible mimics the proof that it is α-inaccessible. So κ is hyper-hyper-inaccessible, etc.

α-Mahlo, hyper-Mahlo and greatly Mahlo cardinals

A cardinal κ is α-Mahlo for some ordinal α if and only if κ is Mahlo and for every ordinal β < α, the set of β-Mahlo cardinals below κ is stationary in κ. We can define "hyper-Mahlo", "α-hyper-Mahlo", "weakly α-Mahlo", "weakly hyper-Mahlo", "weakly α-hyper-Mahlo", etc. by analogy with the definitions for inaccessibles.

A cardinal κ is greatly Mahlo or κ⁺-Mahlo if and only if it is inaccessible and there is a normal (i.e. nontrivial and closed under diagonal intersections) κ-complete filter on the power set of κ that is closed under the Mahlo operation, which maps the set of ordinals S to {α ∈ S : α has uncountable cofinality and S ∩ α is stationary in α}.

The properties of being inaccessible, Mahlo, weakly Mahlo, α-Mahlo, greatly Mahlo, etc. are preserved if we replace the universe by an inner model.

The Mahlo operation

If X is a class of ordinals, then we can form a new class of ordinals M(X) consisting of the ordinals α of uncountable cofinality such that α ∩ X is stationary in α. This operation M is called the Mahlo operation. It can be used to define Mahlo cardinals: for example, if X is the class of regular cardinals, then M(X) is the class of weakly Mahlo cardinals.
The condition that α has uncountable cofinality ensures that the closed unbounded subsets of α are closed under intersection and so form a filter; in practice the elements of X often already have uncountable cofinality, in which case this condition is redundant. Some authors add the condition that α is in X, which in practice usually makes little difference, as it is often automatically satisfied.

For a fixed regular uncountable cardinal κ, the Mahlo operation induces an operation on the Boolean algebra of all subsets of κ modulo the non-stationary ideal.

The Mahlo operation can be iterated transfinitely as follows:

• M^0(X) = X
• M^(α+1)(X) = M(M^α(X))
• If α is a limit ordinal, then M^α(X) is the intersection of M^β(X) for β < α

These iterated Mahlo operations produce the classes of α-Mahlo cardinals starting with the class of strongly inaccessible cardinals. It is also possible to diagonalize this process by defining

• M^Δ(X) is the set of ordinals α that are in M^β(X) for all β < α.

And of course this diagonalization process can be iterated too. The diagonalized Mahlo operation produces the hyper-Mahlo cardinals, and so on.
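The transfinite iteration and the resulting hierarchy can be collected in one display; this is only a restatement of the definitions given in the text above, in LaTeX notation:

```latex
\begin{align*}
M(X) &= \{\alpha : \operatorname{cf}(\alpha) > \omega \text{ and } X \cap \alpha \text{ is stationary in } \alpha\},\\
M^{0}(X) &= X, \qquad
M^{\alpha+1}(X) = M\bigl(M^{\alpha}(X)\bigr), \qquad
M^{\lambda}(X) = \bigcap_{\beta<\lambda} M^{\beta}(X) \quad (\lambda \text{ limit}),\\
\kappa \text{ is } \alpha\text{-Mahlo} &\iff
\kappa \in M^{\alpha}\bigl(\{\lambda : \lambda \text{ is strongly inaccessible}\}\bigr).
\end{align*}
```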
http://www.fvihob.co/taylor-polynomial-ln-x.html
# taylor polynomial ln x

You can try graphing your own Taylor polynomials by just typing in a function for f and setting c. Remember not to set nmax > 6 if the function's higher-order derivatives get more complex. You might try f(x) = ln(x), or try a polynomial function, like f(x) = x² − x.

"so as you see i don't get ln 2." Anood, the Taylor series expression for f(x) at x = a has n-th term f⁽ⁿ⁾(a)(x−a)ⁿ/n!, where f⁽ⁿ⁾(a) is the n-th derivative of f(x) at x = a if n ≥ 1 and f⁽⁰⁾(a) is f(a). The series you developed for ln(x) at a = 2 is correct for n ≥ 1, but what about the first term? The first term (n = 0) is f(2) = ln 2.

Find the Maclaurin series expansion for f = sin(x)/x. The default truncation order is 6. The Taylor series approximation of this expression does not have a fifth-degree term, so taylor approximates this expression with the fourth-degree polynomial.

We look at various Taylor polynomials:
$$T_1(x)=f(a)+f'(a)(x-a)$$
$$T_2(x)=f(a)+f'(a)(x-a)+\frac{f''(a)}{2!}(x-a)^2$$
$$T_3(x)=f(a)+f'(a)(x-a)+\frac{f''(a)}{2!}(x-a)^2+\frac{f'''(a)}{3!}(x-a)^3$$

Python: Approximating ln(x) using Taylor Series. Asked 4 years, 5 months ago. "I'm trying to build an approximation for ln(1.9) within ten digits of accuracy (so .641853861). I'm using a simple…"

MATH1901 Quizzes, Quiz 8: Find the coefficient of xⁿ in the Taylor polynomial of degree n.

3. Consider the function h(x) = eˣ. Compute the 5th-degree Taylor polynomial of h(x) centered at 0. How could you use this to approximate e? Answer: We need to know the first 5 derivatives of eˣ. But the derivative of eˣ is itself, so h⁽ⁿ⁾(x) = dⁿ/dxⁿ(eˣ) = eˣ.

10 The Taylor Series and Its Applications: f(x) ≈ ∑_{j=0}^{n} f⁽ʲ⁾(a)(x−a)ʲ/j! (10.9). Example 10.1: Finding the Taylor expansion of a polynomial function is pointless in that we already have the expansion.
Nevertheless, such an exercise is quite useful in terms of illustrating…

Note that the 1st-degree Taylor polynomial is just the tangent line to f at x = a: T₁(x) = f(a) + f'(a)(x − a). This is often called the linear approximation to f near a, i.e. the tangent line to the graph. Taylor polynomials can be viewed…

The nth-degree Taylor polynomial for f(x) = ln(x) about x = 1 is… For each of the functions on the following pages: a. Find the indicated Taylor polynomial approximations. b. Graph each Taylor polynomial approximation in the ZDecimal viewing window along with…

Tip: Technically, you could go on forever with iterations of the Taylor polynomial, but usually five or six iterations is sufficient for a good approximation. Maclaurin Series Overview: A Maclaurin series is a special case of a Taylor series, where "a" is centered around x = 0.

9.2 Taylor Polynomials: Taylor Polynomials and Approximations. Polynomial functions can be used to approximate other elementary functions such as sin x, eˣ, and ln x. Example 1: Find the equation of the tangent line for f(x) = sin x at x = 0, then use it to approximate sin 0.2.

Taylor Polynomials, Approximating Functions Near a Specified Point: Suppose that you are interested in the values of some function f(x) for x near some fixed point x₀. The function is too complicated to work with directly. So you wish to work instead with some…

(Figure: a graph of f(x) = eˣ and the approximating Taylor polynomial(s) of degree 2 at x = 5.)

2) By hand activity: Using the concavity of the graph above at x = 0, will the sign of the…

cos x = 1 − x²/2! + x⁴/4! − ⋯ + (−1)ⁿ x²ⁿ/(2n)! + ⋯ = ∑(−1)ⁿ x²ⁿ/(2n)! (all real x)

Taylor Polynomials II, Part 6: Summary. Geometric polynomials, with each term x times the preceding one, are also Taylor polynomials for some function of x. What function?
What is the interval of convergence for this sequence of Taylor polynomials? How can…

Describe the procedure for finding a Taylor polynomial of a given order for a function. Explain the meaning and significance of Taylor's theorem… (Calculus Volume 2, 6.3 Taylor and Maclaurin Series)

Find the Taylor series expansion of any function around a point using this online calculator. Input the function you want to expand in Taylor series. Variable; Around the Point a = (default a = 0); Maximum Power of the Expansion.

Find the second Taylor polynomial T₂(x) for the function f(x) = ln(x) based at b = 1. Let a be a real number such that 0 < … Use this error bound to find the largest value of…

30/10/2013: Generally speaking it is a difficult problem to determine whether some Taylor approximation to a function is an over- or underestimate. Your example fits this pattern. The Taylor series for f(x) = x ln(x) at x = 1 is (x − 1), plus an alternating series of terms that…

6. Taylor polynomials and Taylor series. These lecture notes present my interpretation of Ruth Lawrence's lecture notes (in Hebrew). 6.1 Preliminaries. 6.1.1 Polynomials. A polynomial of degree n is a function of the form p(x) = bₙxⁿ + bₙ₋₁xⁿ⁻¹ + ⋯ + b₁x + b₀.

6 Taylor Polynomials: The textbook covers Taylor polynomials as a part of its treatment of infinite series (Chapter 10). We are spending only a short time on infinite series (the next unit, Unit 7) and will therefore learn Taylor polynomials with a more direct, hands-on…

Use the fourth Taylor polynomial of f(x) = ln(1+x) to approximate $\int_{0.1}^{0.2} \frac{\ln(x+1)}{x}\,dx$. asked by sofi on May 16, 2012.

Calculus: Hi~ Thank you for your help!
I was trying to work on a problem about Taylor series, but I don't think I'm approaching the problem…

TAYLOR AND MACLAURIN SERIES: Note that cos(x) is an even function in the sense that cos(−x) = cos(x), and this is reflected in its power series expansion, which involves only even powers of x. The radius of convergence in this case is also R = ∞. Example 3.

2. (Taylor polynomials) (a) Write down the Taylor polynomials Pₙ(x) of degree n = 0, 1, 2, 3 for the function f(x) = ln x about the point x = 1. (b) Plot the polynomials Pₙ(x) and the function f(x) on the interval [0, 3]. Solution Preview: Please see the attached file for…

1/4/2020: Wait, what about functions like the natural logarithm (ln) or e to the power x? What if we had a simple expression through which we could approximate the value of these non-polynomial functions…

Figure 4: A plot of f(x) = eˣ and its 5th-degree Maclaurin polynomial p₅(x). Example 2.2: Finding and using Taylor polynomials. 1. Find the nth Taylor polynomial of y = ln x centered at x = 1. 2. Use p₆(x) to approximate the value of ln 1.5. 3. Use p₆(x) to approximate…

Use binomial series to find the Taylor series about 0 for the function f(x) = (1+x)^(−3/5), giving all terms up to the one in x⁴. Then use this series and the Taylor series for sin x to find the quartic Taylor polynomial about 0 for the… asked by Jay on April 24, 2016.

20/11/2013: (a) Approximate f by a Taylor polynomial with degree n at the number a. T₂(x) = __?__ (b) Use Taylor's Inequality to estimate the accuracy of the approximation f ≈ Tₙ(x) when x…

Taylor and Maclaurin series are like polynomials, except that there are infinitely many terms. Read on to find out what you need to know for the AP test!
Lecture 25, Section 11.5: Taylor Polynomials in x; Taylor Series in x. Jiwen He. 1 Taylor Polynomials. 1.1 Taylor Polynomials. The nth Taylor polynomial at 0 for a function f is Pₙ(x) = f(0) + f'(0)x + f''(0)x²/2! + ⋯ + f⁽ⁿ⁾(0)xⁿ/n!; Pₙ is the polynomial that has the same value as f at 0 and the same first n…

29. An enthusiastic math student, having discovered that ln x = (x − 1) − (x − 1)²/2 + ⋯

Taylor Polynomials Question: A broker offers you bonds at 90% of their face value. When you cash them in later at their full face value, what percentage profit…

Expansions Which Have Logarithm-Based Equivalents

Tₙ(x) is a polynomial called the nth-degree Taylor polynomial for f(x) centered at x = a. Example: Find the first, second, and third degree Taylor polynomials for f(x) = eˣ centered at x = 0. The Maclaurin series for f(x) = eˣ is eˣ = ∑ₙ₌₀ xⁿ/n! = 1 + x + x²/2 + x³/3! + ⋯

COMPUTING TAYLOR POLYNOMIALS AND TAYLOR SERIES, Joseph Breen: In this note, we'll get some practice computing Taylor polynomials and Taylor series of functions. Just to review, the nth-degree Taylor polynomial of f at a is the polynomial Tₙ(x) = f(a) + f'(a)(x − a) + ⋯ + f⁽ⁿ⁾(a)(x − a)ⁿ/n!.

The polynomial, which is centered at x = 1, is tangent to f(x) = eˣ at x = 1 and has the same concavity as f(x) = eˣ at that point. 24.3.1: Find the second-order Taylor polynomial centered at 1 for the function f(x) = ln x. Graph this polynomial together with f(x)…

Taylor Polynomials and Infinite Series, Calculus and Its Applications, 14th ed., Larry J. Goldstein, David C. Lay, David I. Schneider.
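Several of the snippets above revolve around the same example: the Taylor polynomial of ln(x) centered at x = 1, used to approximate values like ln 1.5 and ln 1.9. A minimal sketch in plain Python (the function name `taylor_ln` is my own, not from any of the quoted sources):

```python
import math

def taylor_ln(x, n):
    """Degree-n Taylor polynomial of ln(x) centered at x = 1:
    sum over k = 1..n of (-1)**(k+1) * (x-1)**k / k."""
    return sum((-1) ** (k + 1) * (x - 1) ** k / k for k in range(1, n + 1))

# p6 approximates ln 1.5, as in the "Use p6(x) to approximate ln 1.5" exercise.
print(taylor_ln(1.5, 6))   # close to math.log(1.5) = 0.4054651...

# The series at x = 1.9 converges slowly, but with enough terms it reaches
# ten-digit accuracy (cf. the "approximate ln(1.9) within ten digits" snippet).
print(abs(taylor_ln(1.9, 300) - math.log(1.9)) < 1e-10)
```

The remainder of the alternating series is bounded by its first omitted term, which is why six terms already land within about 10⁻³ of ln 1.5.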
https://www.physicsforums.com/threads/improper-integral-limit.622668/
# Improper integral limit

1. Jul 22, 2012

### marellasunny

$$\int_0^1 \frac{1}{\sqrt{x}}\,\mathrm{d}x = \lim_{\varepsilon \to 0^+}\int_\varepsilon^1 \frac{1}{\sqrt{x}}\,\mathrm{d}x$$

My question is about the usage of $0^+$ in the limit. (I evaluated the integrals and arrived at the part where I substitute the upper and lower limits.) Did the author deliberately choose to use $\lim_{\varepsilon \to 0^+}$ instead of $0$ or $0^-$ so that no imaginary numbers arise from the expression $2\sqrt{x}$? Or is there another reason? Thanks.

2. Jul 22, 2012

### DonAntonio

I don't know what author you're talking about, but taking that limit is what has to be done simply by the definition of an improper integral when one of the limits is a point of unboundedness for the function...

DonAntonio

3. Jul 22, 2012

### marellasunny

Yes, I understand this case. But what if I had a case of a function best described by the limit → 0⁻? Won't I have a problem when I substitute 0⁻ into the square root? I can't exactly describe the function; I mean, for some arbitrary function with the variable x under the square root, having to apply the limit → 0⁻: won't this give rise to an imaginary number?

4. Jul 22, 2012

### Number Nine

Taking the limit from below would result in the expression being undefined.

5. Jul 22, 2012

### DonAntonio

I'm not completely sure I follow you, but the function in the integral is defined only for positive real numbers: whatever you want to do with it will have to comply with this restriction. Thus, there is no meaning to the expression $$\lim_{x\to 0^-}\sqrt x$$ as it assumes the existence of square roots of negative numbers within the real numbers, which is absurd.

DonAntonio
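The thread's point can be checked directly: for each ε > 0 the proper integral of 1/√x over [ε, 1] equals 2 − 2√ε, which tends to 2 as ε → 0⁺, while for ε < 0 the antiderivative 2√x is not even real. A quick sketch in plain Python (the helper name `tail_integral` is mine):

```python
import math

def tail_integral(eps):
    """Exact value of the proper integral of 1/sqrt(x) over [eps, 1],
    evaluated from the antiderivative 2*sqrt(x)."""
    return 2 * math.sqrt(1) - 2 * math.sqrt(eps)

# The values increase toward the improper integral's value, 2, as eps -> 0+.
for eps in (1e-2, 1e-4, 1e-8):
    print(eps, tail_integral(eps))

# For eps < 0 the same formula would require the square root of a negative
# number, which is exactly why the limit must be taken from the right.
```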
http://math.stackexchange.com/questions/214730/fourier-transform-of-a-piecewise-function
Fourier transform of a piecewise function

I am trying to find the Fourier transform of $$f(x)=Ae^{-\alpha|x|}$$ where $\alpha>0$. $f(x)$ is an even function defined piecewise over the intervals $(-\infty, 0]$ and $[0, \infty)$. The corresponding figure is shown. My only question is: should I integrate over each interval separately and add the results, or is there some other method? What I should get is $$F(k)= \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{0}Ae^{\alpha x}e^{-ikx}dx + \frac{1}{\sqrt{2\pi}}\int_{0}^{\infty}Ae^{-\alpha x}e^{-ikx}dx$$ Is my expression for $F(k)$ correct? - What you are doing is correct. –  Mhenni Benghorbal Oct 16 '12 at 8:02 Perhaps you can compute these integrals? –  AD. Oct 16 '12 at 14:20 Your expression is correct. Further, set $x=-y$ in the first integral and observe that $$\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{0}Ae^{\alpha x}e^{-ikx}dx = \frac{1}{\sqrt{2\pi}}\int_0^{\infty}Ae^{-\alpha y}e^{iky}dy= \left( \frac{1}{\sqrt{2\pi}}\int_{0}^{\infty}Ae^{-\alpha x}e^{-ikx}dx\right)^*,$$ where $(\cdot)^*$ denotes the complex conjugate.
http://math.stackexchange.com/questions/120767/prove-frac1-pi-int-pi-pifx-cos-2nx-geq-0
# Prove $\frac{1}{\pi}\int^{\pi}_{-\pi}f(x)\cos 2nx\,dx \geq 0$

Assume $f(x)$ is convex on $[-\pi,\pi]$ and $f'(x)$ is bounded. Prove: $$\frac{1}{\pi}\int^{\pi}_{-\pi}f(x) \cos 2nx\,dx \geq 0$$ - What is the source of the problems you have been posting? Are these homework or are you doing self study and plugging through some problem book? –  Aryabhata Mar 16 '12 at 2:39 @Aryabhata It's truly some questions in a problem book. –  89085731 Mar 16 '12 at 2:41 Then please mention the source (including the problem number). btw, did you modify the problem statement? –  Aryabhata Mar 16 '12 at 2:42 Also, this question is very similar: math.stackexchange.com/questions/120748/sib-2009-problem-2. –  Aryabhata Mar 16 '12 at 2:42 Why no $dx$ in the integration? –  FiniteA Mar 16 '12 at 8:49 I think I've made an answer. Please correct it if anything is wrong. Integrating by parts, $$\int^{\pi}_{-\pi}f(x) \cos 2nx\,dx=-\frac{1}{2n}\int_{-\pi}^{\pi}f'(x)\sin 2nx\,dx,$$ so it suffices to prove $\int_{-\pi}^{\pi}f'(x)\sin 2nx\,dx \leq 0$. Divide the interval into $4n$ parts, so the integral becomes $$\sum_{k=0}^{4n-1} \int_{-\pi+\frac{k\pi}{2n}}^{-\pi+\frac{(k+1)\pi}{2n}}f'(x)\sin 2nx\,dx.$$ On each subinterval $\sin 2nx$ does not change sign, so by the integral mean value theorem the sum becomes $$\frac{1}{n}\sum_{i} f'(\xi_i)(-1)^{i-1},$$ since $\int |\sin 2nx|\,dx = \frac{1}{n}$ over each subinterval. Since $f(x)$ is convex, $f'(x)$ is increasing. Thus $f'(\xi_i)-f'(\xi_{i+1})\leq 0$, and grouping the terms in consecutive pairs shows the sum is $\leq 0$.
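The claim can be sanity-checked numerically on a concrete convex function, e.g. $f(x)=x^2$, for which a standard computation gives $\frac{1}{\pi}\int_{-\pi}^{\pi}x^2\cos 2nx\,dx = \frac{1}{n^2} > 0$. A small sketch in plain Python (the Simpson helper `avg_cos_integral` is my own, not part of the question):

```python
import math

def avg_cos_integral(f, n, N=20000):
    """Composite Simpson approximation of
    (1/pi) * integral over [-pi, pi] of f(x) * cos(2*n*x)."""
    h = 2 * math.pi / N
    def g(x):
        return f(x) * math.cos(2 * n * x)
    s = g(-math.pi) + g(math.pi)
    for j in range(1, N):
        s += (4 if j % 2 else 2) * g(-math.pi + j * h)
    return s * h / 3 / math.pi

# For the convex function f(x) = x**2 the values should be close to 1/n**2,
# and in particular positive, consistent with the inequality being proved.
for n in (1, 2, 3):
    print(n, avg_cos_integral(lambda x: x * x, n))
```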