url (stringlengths 15-1.13k) | text (stringlengths 100-1.04M) | metadata (stringlengths 1.06k-1.1k)
---|---|---
https://helpsavenature.com/renewable-energy-sources-pros-cons | # The Real Pros and Cons of Renewable Energy Sources
Any debate on renewable energy sources naturally leads to a discussion of their pros and cons. Some important points are presented below.
Parashar Joshi
Last Updated: Feb 24, 2018
With fuel prices constantly rising and global oil reserves steadily depleting, renewable energy sources are very good options for the future. This concept is being increasingly promoted and publicized all over the world.
Types of Renewable Energy
The various known forms are listed below:
• Solar energy
• Hydro energy
• Tidal energy
• Wind energy
• Geothermal energy
• Biomass
• Fuel cells
• Nuclear energy
Solar energy is obtained from the Sun's rays and can be utilized for numerous domestic as well as industrial applications. Hydro power is generated by using the potential energy of stored water to produce electricity. Tidal energy is a process in which the kinetic energy of tides and sea waves is used to generate power. In the case of wind energy, wind or flowing air is used to rotate the blades of a windmill to generate power. Geothermal energy makes use of the Earth's natural heat (in the form of underground steam and hot water springs) to power turbines and generate electricity.
In biomass energy, plant and animal waste, agricultural residue, decomposed material, etc., are converted (for example, by combustion or digestion) into usable power. Fuel cells mainly rely on the chemical reaction between hydrogen and oxygen to produce electricity (and water as a byproduct). Nuclear energy is obtained by the induced fission of heavy radioactive nuclei, such as uranium.
Listed below are the pros and cons of renewable energy sources.
Pros
• Fossil fuels are exhaustible sources of energy, whereas renewable energy sources are inexhaustible, and can be easily replenished.
• Most renewable sources do not involve the combustion of fossil fuels or other substances that would release toxic chemicals or other harmful atmospheric byproducts. They are therefore clean sources of energy and offer numerous environmental benefits.
• They are plentiful and available all over the world. Also, being inexhaustible, one doesn't have to worry about their reserves being depleted in the future.
• Most of these sources have low maintenance costs. Also, sources like solar energy can be tapped easily and conveniently for domestic use by individual homeowners.
Cons
• Limited reliability and consistency are significant drawbacks of renewable energy: atmospheric conditions and geographical location have a huge impact on the efficacy of these sources.
• Their initial investment or setup cost is significantly high, which deters people from switching over to these options.
• Every form of renewable energy has its own set of problems, drawbacks, or limitations with respect to environment and ecology.
• People are not yet fully convinced about the sustainability, and the value-for-money aspect of these sources. Therefore, global acceptance of these energy options on a mass scale will take many years. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8865822553634644, "perplexity": 1358.2926028488507}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195528208.76/warc/CC-MAIN-20190722180254-20190722202254-00347.warc.gz"} |
https://cstheory.stackexchange.com/questions/40293/probability-that-a-random-sorting-network-works | # Probability that a random sorting network works
Given $n$ inputs $x_0, \ldots, x_{n-1}$, we construct a random sorting network with $m$ gates by iteratively picking two variables $x_i, x_j$ with $i < j$ and adding a comparator gate that swaps them if $x_i > x_j$.
Question 1: For fixed $n$, how large must $m$ be for the network to correctly sort with probability $> \frac{1}{2}$?
We have at least the lower bound $m = \Omega(n^2 \log n)$ since an input that is correctly sorted except that each consecutive pair is swapped will take $\Theta(n^2 \log n^2)$ time for each pair to be chosen as a comparator. Is that also the upper bound, possibly with more $\log n$ factors?
Question 2: Is there a distribution of comparator gates that achieves $m = \tilde{O}(n)$, perhaps by choosing close comparators with higher probability?
• I guess one can get an $O(n^3 \log^{O(1)} n)$ upper bound by looking at one input at a time and then union bounding, but that sounds far from tight. Feb 28, 2018 at 21:13
• Idea for Question 2: pick a sorting network of depth $O(\log^2 n)$. At each step, randomly pick one of the gates of the sorting network, and perform that comparison. After $\tilde{O}(n)$ steps, all gates in the first layer will have been applied. After another $\tilde{O}(n)$ steps, all gates in the second layer will have been applied. If you can show that this is monotonic (inserting extra comparisons in the middle of the sorting network can't hurt), you'll have obtained a solution with $\tilde{O}(n)$ comparators in total on average. I'm not sure whether monoticity actually holds, though.
– D.W.
Feb 28, 2018 at 21:42
• @D.W.: Monotonicity doesn't necessarily hold. Consider sequences $$\begin{eqnarray*} s &=&(x_1, x_2), (x_0, x_2), (x_0, x_1);\\ s'&=&(x_1, x_2), \mathbf{(x_0, x_1)}, (x_0, x_2), (x_0, x_1).\end{eqnarray*}$$ Sequence $s$ works; $s'$ doesn't (consider input (1, 0, 0)). The idea is that $(x_0, x_2), (x_0, x_1)$ sorts any input it receives except $(0, 1, 0)$ (see here). In $s$, that input cannot reach $(x_0, x_2), (x_0, x_1)$. In $s'$ it can. Mar 3, 2018 at 23:57
• Consider the variant where the network is chosen by picking two adjacent variables $x_i, x_{i+1}$ randomly at each step. Now monotonicity holds (as adjacent swaps don't create inversions). Apply @D.W.'s idea to an odd-even sorting network, which has $n$ rounds: in odd rounds it compares all adjacent pairs where $i$ is odd, in even rounds it compares all adjacent pairs where $i$ is even. W.h.p. the random network is correct in $O(n^2\log n)$ comparisons, as it "includes" this network. (Or am I missing something?) Mar 4, 2018 at 3:26
• Monotonicity of adjacent networks: Given $a, b\in\{0,1\}^n$, for $j\in\{0,1,\ldots,n\}$ define $s_j(a) = \sum_{i=1}^j a_i$. Say $a\preceq b$ if $s_j(a) \le s_j(b)$ ($\forall j$). Fix any comparison "$x_i < x_{i+1}$". Let $a'$ and $b'$ come from $a$ and $b$ by doing that comparison. Claim 1. $a' \preceq a$ and $b' \preceq b$. Claim 2: if $a\preceq b$, then $a' \preceq b'$. Then show inductively: if $y$ is the result of comparison sequence $s$ on input $x$, and $y'$ is the result of super-sequence $s'$ of $s$ on $x$, then $y' \preceq y$. So if $y$ is sorted, so is $y'$. Mar 4, 2018 at 16:23
Here's some empirical data for question 2, based on D.W.'s idea applied to bitonic sort. For $n$ variables, choose $j - i = 2^k$ with probability proportional to $\lg n - k$, then select $i$ uniformly at random to get a comparator $(i,j)$. This matches the distribution of comparators in bitonic sort if $n$ is a power of 2, and approximates it otherwise.
For a given infinite sequence of gates pulled from this distribution, we can approximate the number of gates required to get a sorting network by sorting many random bit sequences. Here's that estimate for $n < 200$ taking the mean over $100$ gate sequences with $6400$ bit sequences used to approximate the count: It appears to match $\Theta(n \log^2 n)$, the same complexity as bitonic sort. If so, we don't eat an extra $\log n$ factor due to the coupon collector problem of coming across each gate.
To emphasize: I'm using only $6400$ bit sequences to approximate the expected number of gates, not $2^n$. The mean required gates does rise with that number: for $n = 199$ if I use $6400$, $64000$, and $640000$ sequences the estimates are $14270 \pm 1069$, $14353 \pm 1013$, and $14539 \pm 965$. Thus, it's possible getting the last few sequences increases the asymptotic complexity, though intuitively it feels unlikely.
Edit: Here's a similar plot up to $n = 80$, but using the exact number of gates (computed via a combination of sampling and Z3). I've switched from power of two $d = j-i$ to arbitrary $d \in [1,\frac{n}{2}]$ with probability proportional to $\frac{\log n - \log d}{d}$. $\Theta(n \log^2 n)$ still looks plausible.
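For the curious, here is a minimal Python sketch (not from the original answer) of this kind of experiment: comparators $(i, i+2^k)$ are drawn with probability proportional to $\lg n - k$, and the number of gates needed is estimated by sorting a modest number of random bit sequences. The function names and the small sample size are illustrative assumptions, not the setup used for the plots above.

```python
import math, random

def sample_comparator(n):
    # Choose gap d = 2^k with probability proportional to (lg n - k),
    # then a uniform position i (assumption: n need not be a power of 2).
    kmax = int(math.log2(n))
    weights = [math.log2(n) - k for k in range(kmax)]
    k = random.choices(range(kmax), weights=weights)[0]
    d = 2 ** k
    i = random.randrange(n - d)
    return i, i + d

def gates_until_sorted(n, trials=200):
    # Estimate how many random gates are needed until `trials` random
    # 0/1 sequences are all sorted (a cheap proxy for the 0/1 principle).
    seqs = [[random.randint(0, 1) for _ in range(n)] for _ in range(trials)]
    gates = 0
    while any(s != sorted(s) for s in seqs):
        i, j = sample_comparator(n)
        gates += 1
        for s in seqs:
            if s[i] > s[j]:
                s[i], s[j] = s[j], s[i]
    return gates

print(gates_until_sorted(100))
```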
• Nice experiment! There's a different way the coupon collector issue could arise here, though: you're only sampling a small fraction of the $2^n$ bit sequences needed to verify correctness on all inputs. It seems we can conclude (scientifically, not mathematically of course) from your experiment that a random network of this type and size sorts a random permutation whp. I'd also be curious to see exhaustive $2^n$ testing on such random networks for all $n$ up to which you're willing to go. ($n=20$ shouldn't be too bad, maybe even $n=30$ depending on what language & hardware you're using). Mar 5, 2018 at 14:46
• It looks the same for exact up to $n = 27$, but I don’t view that as conclusive. Mar 5, 2018 at 15:28
• @JoshuaGrochow: I've added exact values up to $n = 80$. Mar 6, 2018 at 8:02
• Nice! There does appear to be a growing spread to the exact data though, which perhaps indicates an upper bound with an extra factor of $\log n$? (That is, if the "spread" is growing at a rate of $\log n$.) Mar 6, 2018 at 17:53
• Yeah, we can't rule out an extra factor. I'd be surprised if it was $\log n$, though, since up at 80 we have $\lg n \approx 6$ and the constant is suspiciously close to $1$ otherwise. At this point I think theory has to take over. :) Mar 6, 2018 at 18:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9327829480171204, "perplexity": 481.96989854078885}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00335.warc.gz"} |
https://www.ulsinc.com/learn/laser-cutting | # Learn
### Laser Cutting
Laser cutting is the complete removal and separation of material from the top surface to the bottom surface along a designated path. Laser cutting can be performed on a single layer material or multi-layer material.
When cutting multi-layer material, the laser beam can be precisely controlled to cut through the top layer without cutting through the other layers of the material. (See Figure 1 below.)
Material thickness and density are important factors to consider when laser cutting. Cutting through thin material requires less laser energy than cutting the same material in a thicker form. Lower density material typically requires less laser energy. However, increasing laser power level generally improves laser cutting speed.
In general, CO2 lasers with 10.6 micron wavelength are primarily used for cutting non-metal materials. CO2 and fiber lasers are both used for cutting metals. However, as a rule, cutting metals requires substantially higher power levels than non-metal materials.
Learn more about why laser technology is an ideal tool for meeting tight tolerance specifications when used as a laser cutter and how laser technology enables distinctive design characteristics when used as a laser engraver or laser marker. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8425063490867615, "perplexity": 2298.8084742678184}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499831.97/warc/CC-MAIN-20230130232547-20230131022547-00000.warc.gz"} |
https://brilliant.org/problems/its-a-triangle/ | # It's a Triangle!
Geometry Level 3
Let $$a$$, $$b$$, and $$c$$ be the lengths of the sides of a triangle. If the circumradius of the triangle is 3, and $$a \times b \times c = 120$$, then what is the area of this triangle?
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.89888596534729, "perplexity": 255.67200767073513}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948545526.42/warc/CC-MAIN-20171214163759-20171214183759-00365.warc.gz"}
http://montanamuni.blogspot.com/2013/10/ | ## Tuesday, October 29, 2013
### Lattès Example
I gave a DynaLite on the Lattès example last week and, given the shortage of information on the web, I'm posting my notes. This is a more algebraic-geometry perspective than those offered in Beardon and Carleson/Gamelin; it somewhat follows insight from Milnor. Here are my notes, which are somewhat out of order, and here is a helpful picture of a torus generated with the following tikz code. The clip function is really handy, and you can put things outside the clipped box by inserting the code for them prior to the clip command.
\begin{tikzpicture}[scale=1.5]
%Clip
\clip (-3,-3) rectangle (3,3);
%Horizontals
\foreach \x in {-2,-1,...,2}
\draw[-] (3.000000,\x) -- (-3.000000,\x);
%Slants slope = .61825
\foreach \x in {-5,-4,...,5}
\draw[-] (2+\x,2*1.61825) -- (-2+\x,2*-1.61825);
%Fills
\fill[green!20!white] (0,0) -- (1,0) -- (1.61825,1) --(.61825,1) --cycle;
\fill[blue!20!white] (0,0) -- (-1,0) -- (-1.61825,-1) --(-.61825,-1) --cycle;
%axis
\draw[<->][red] (3,0) -- (-3,0);
\draw[<->][red] (0,3) -- (0,-3);
\end{tikzpicture}
## Monday, October 28, 2013
### Diophantine Numbers
I spent a while today writing up this little primer on Diophantine numbers. It's mostly the coverage Milnor gives in chapter 11 of Dynamics in One Complex Variable, but I've added some details and included the solution to 11-a.
It's probably riddled with typos and logical errors, feel free to correct in the comments. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8467212915420532, "perplexity": 2843.630887296436}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187820700.4/warc/CC-MAIN-20171017033641-20171017053641-00601.warc.gz"} |
http://physics.stackexchange.com/questions/56240/maximum-theoretical-bandwidth-of-fibre-optics | # Maximum theoretical bandwidth of fibre-optics
Ignoring hardware at either end and their technological limitations, what is the maximum theoretical bandwidth of fibre optic cables currently in use / being deployed in FTTH-type situations? I understand there's a limit to the number of frequencies or channels we can have in fibres, and each channel would have a theoretical max bandwidth too, I'd imagine?
I'm asking particularly to find out more about the current plan for a National Broadband Network in Australia, which is supposed to roll out fibre optics to almost every premises in the country. I'm interested in finding out how much data we can fit down the fibre before we have to dig it all up and replace it with newer fibres with higher bandwidth or some new medium we haven't started talking about yet. More general answers are interesting too.
-
I would imagine that some dark fibres (fibres to be used in the future) are installed at the same time – Andre Holzner Jul 10 '13 at 13:04
Short answer: A good order of magnitude rule of thumb for the maximum possible bandwidth of an optical fibre channel is about 1 petabit per second per optical mode. So a "single" mode fibre (fibre with one bound eigenfield) actually has in theory two such channels, one for each polarisation state of the bound eigenfield.
I'll just concentrate on the theoretical capacity of a single, long-haul fibre; see roadrunner66's answer for discussion of the branching in an optical network. The fundamental limits always get down to a question of signal to noise in the measurement (i.e. demodulation by the receiver circuit). The one, fundamentally anavoidable, noise source on a fibre link is quantum shot noise, so I'll concentrate on that. Therefore, what follows will apply to a short fibre: other noise sources (such as Raman, amplified spontaneous emission from in-line optical amplifiers, Rayleigh scattering, Brillouin scattering) tend to become significant roughly in proportion to the fibre length and some power (exponent) of the power borne by the fibre.
If we detect $N$ photons from a coherent state of light for a measurement, then the uncertainty in that measurement is $\sqrt{N}$ in an otherwise noise-free environment (see footnote on squeezed states). So suppose first that:
1. The bandwidth of our fibre is $B$ hertz (typically 50THz maximum, I'll discuss what limits $B$ below)
2. The power borne by the fibre is $P$ watts (typically 100W maximum; again, I'll discuss limits later)
3. The fibre is single mode (this in theory allows one to overcome the dispersion limits discussed in roadrunner66's answer, at the expense of putting a harder upper limit on $P$)
4. The fibre's centre frequency is $\nu_0$ (typically 193THz, corresponding to 1550nm freespace wavelength)
5. In what follows, I shall take the word "symbol" to mean a theoretically infinite precision real number but whose precision is limited by noise (the purpose of this answer being to discuss the latter!).
So, let's begin by exploring the best way to use our power. If we devote it to $M$ symbols per second, each of our measurements comprise the detection of $N = \frac{P}{M\, h\, \nu_0}$ photons, thus our signal to noise ratio is $SNR = \frac{N}{\sqrt{N}} = \sqrt{\frac{P}{M\, h\, \nu_0}}$. By the Shannon-Hartley form of the Noisy channel coding theorem (see also here), we can therefore code our channel to get $\log_2\left(1 + \sqrt{\frac{P}{M\, h\, \nu_0}}\right)$ bits of information per symbol, i.e. $M \log_2\left(1 + \sqrt{\frac{P}{M\, h\, \nu_0}}\right)$ bits per second through our optical fibre. This is a monotonically rising function of $M$, so a limit on $P$ by itself does not limit the capacity.
However, by a converse of the Nyquist-Shannon sampling theorem we can send a maximum of $B$ symbols down the channel per second. This then is our greatest possible symbol rate. Hence, our overall expression for the fibre capacity in bits is:
$\mathcal C = B \, \log_2\left(1 + \sqrt{\frac{P}{B\, h\, \nu_0}}\right)$ bits per second
If we plug in our $B = 50\mathrm{THz}$ and $P = 100W$, we get the stupendous capacity of 1.2 petabits per second for each single mode optical fibre core.
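A quick numerical check of this formula (not part of the original answer; the parameter values are the assumptions stated above) can be done in a few lines of Python:

```python
import math

h = 6.626e-34   # Planck constant, J*s
nu0 = 193e12    # fibre centre frequency, Hz (1550 nm freespace wavelength)
B = 50e12       # usable bandwidth, Hz (assumed)
P = 100.0       # power borne by the fibre, W (assumed)

# Shot-noise-limited estimate: C = B * log2(1 + sqrt(P / (B * h * nu0)))
snr = math.sqrt(P / (B * h * nu0))
C = B * math.log2(1 + snr)
print(f"capacity ~ {C/1e15:.2f} Pbit/s")
# Evaluates to roughly 0.6 Pbit/s, the same order of magnitude
# as the petabit-per-second figure quoted above.
```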
By looking at the basic shape of the Shannon-Hartley expression $\log_2(1+SNR)$ bits per symbol, one can see that improvements in the signal-to-noise ratio beyond any "good" signal-to-noise ratio will only lead to marginal increases in capacity. By far the strongest limit is the converse of the Shannon sampling theorem. So the use of squeezed light will not change the order of magnitude of this result.
Now for some material physics begetting the limits discussed above. Our optical power is limited by two things:
1. The heat dissipation rate of the fibre is the main one. At some point, losses in the fibre sum up to more power than it can dump to its surroundings and it fries itself. What happens in practice is that power tends to dissipate around inclusions and other imperfections and then it melts at that point, more power dissipates at the molten blob and the destruction propagates slowly away from the failure: the fibre "melts" itself like a lit fuse.
2. Nonlinear processes like stimulated Brillouin and stimulated Raman scattering will quickly outweigh the quantum noise. These vary like $P^2$.
Bandwidth is limited by the losses in the medium. The "window" between about 1300nm and 1600nm freespace wavelength is chosen for optical communications for its low loss. Depending on the length of fibre, as you try to increase the bandwidth, any optical power outside the band of low loss simply won't get to the other end. This is what gives rise to the $B=50THz$ figure I cite above. This is not a hard figure: it's a rough estimate and its precise value depends on the length of fibre. I have shown a calculation where I assume the fibre transmits a certain bandwidth perfectly and sending "switches off" altogether outside this bandwidth. A fuller calculation would account for the spectral shape of the losses and that the imperfectly sent frequency components can also bear information. This would show that the effective equivalent bandwidth $B$ in my formula would be roughly inversely proportional to fibre length.
Other noise limits that quickly swamp the quantum noise are those I mentioned at the beginning: Raman, amplified spontaneous emission from in-line optical amplifiers, Rayleigh scattering, Brillouin scattering. Again, the calculation will need to be modified for those depending on the exact link parameters. These noises tend to increase in proportion to the link length, and therefore one often sees bandwidth-distance products quoted for "he-man" announcements of fibre capacity in experiments (e.g. 1 terabit per second over a 100 km link; the same link should roughly take 2 Tbit/s over 50 km, 4 Tbit/s over 25 km and so on until the quantum noise limits everything). As above, the bandwidth limit $B$ also has an inverse dependence on fibre length, but a zero length fibre transmission will still be marred by the quantum noise of the link's source. So the true dependence on link length $L$ of the capacity $\mathcal C$, taking this into account, will be something like $\mathcal C = \frac{\mathcal C_1}{L_0 + L}$ where $L_0$ is something much less than a kilometer and accounts for the source's quantum noise.
Current figures quoted are tens of terabits per fibre core (see here - I'm sorry I don't have a better reference for this, it has been some time since telecommunication technology was wonted to me). Even higher figures can be gotten for short fibres (kilometres in length) with multimoded cores so that the power density in the core is not so constraining. The disadvantage is that dispersion (now the difference between transmission speeds of the fibre's bound eigenmodes) is the limiting factor, thus only short fibres can be used. For a single mode fibre, only chromatic dispersion, as discussed by roadrunner66, is present. This can be effectively cancelled when the link dispersion is known (as it is for long haul links) by imparting the inverse dispersion at either the sending or receiving end using a Bragg grating device.
There has in the past been some interest in using squeezed states of light. The quantum noise limit I have assumed is that of a Glauber coherent state, which saturates the Heisenberg inequality $\sigma_x \sigma_p \geq \frac{\hbar}{2}$ and has equal uncertainty in the conjugate "position" and "momentum" variables. On a phase plane, this can be translated into a lower bound on the product of amplitude and phase uncertainties. One can produce squeezed states with less phase uncertainty at the expense of amplitude uncertainty, so the idea was to use a frequency or phase modulated transmission scheme and lower the uncertainty in the transmitted phase. However, one of course gets a worsening of the amplitude SNR (we can't in quantum mechanics, thwart the Heisenberg inequality), so such schemes make marginal if any difference to the overall SNR. They certainly won't change the orders of magnitudes I discussed above.
Here is an excellent summary paper on the subject, fleshing out my summary above and discussing quantitative modifications to the model above for noises other than the "fundamental" quantum noise (especially Raman and Amplified Spontaneous Emission): René-Jean Essiambre, Gerhard Kramer, Peter J. Winzer, Gerard J. Foschini, and Bernhard Goebel, "Capacity Limits of Optical Fiber Networks", J. LIGHTWAVE TECH., VOL. 28, NO. 4, FEBRUARY 15, 2010.
There is some very recent work on the use of fibres with few (i.e. fewer than about five) bound eigenfields and the encoding of separate, potentially petabit per second each, channels, one for each bound eigenfield. See the work of Love and Riesen, e.g. Optics Letters 37, 19 (2012) 3990-3992.
-
Fibers themselves have an extremely high bandwidth in principle. Pretty much all the wavelengths where they are transparent enough to transmit light such that you can still detect it at the other end.
Where the fiber itself is the limiting factor is dispersion, ie. since all signals have a bandwidth themselves, their 'red' and 'blue' portions travel at different speeds. So if the fiber is long enough and your signal modulation is very fast, at the detector end the square input pulse will be rounded enough that you have trouble distinguishing it from the previous or following one.
The strongest limits on the usable bandwidth come from the lasers and detectors that are being used. To get all the different channels in and out while keeping them separate, you need lots of narrow band filters and modulators/demodulators. That part of the technology is expensive, but is more often replaced/upgraded than the fiber itself. The above is mostly relevant for long-haul fibers.
FTTH uses only a very small portion of the bandwidth that long-haul fibres use (think, e.g., of bundling in parallel all the FTTH fibers in Australia, forming a much, much thicker cable than what is laid into the ocean or between territories). You will almost never use the actual optical bandwidth of FTTH fibers, since you couldn't afford the electronics and laser to modulate and demodulate.
Finally, a human being (the optical resolution of the eye being about one arc minute) can probably not process much more information than about 4 HD channels would pack (sound is trivial in information density compared to video, as are touch and smell), so there appears to be not much need for an individual to be connected at download speeds that substantially exceed the limit of about 4 HD channels incoming and 4 HD channels outgoing. Multiply this by the size of a family.
FTTH optical bandwidth likely already exceeds our biological capacity to process; only the price of the electronic side might remain a practical limit for a few years.
-
also, FTTH is often implemented as a passive optical network (using time multiplexing) meaning that each end user will share the bandwidth of the fibre with other users. – Andre Holzner Jul 10 '13 at 13:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8374292254447937, "perplexity": 977.920485664915}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395620.9/warc/CC-MAIN-20160624154955-00176-ip-10-164-35-72.ec2.internal.warc.gz"} |
http://heasarc.gsfc.nasa.gov/docs/software/lheasoft/xstar/docs/html/node125.html | ## Radiation Field Quantities and Transfer Details
Equation (11) conceals a variety of important issues concerning the treatment of the radiation field and the values which are printed in the various output files produced by xstar. In an effort to clarify this we present here a complete description of the various radiation field quantities which are used internally to xstar, and which are output to the user. In this subsection, all radiation fields are specific luminosity, , in units erg/s/erg for the continuum, and luminosity, , in units erg/s for lines. We distiguish several different radiation fields. First, the radiation field used locally by xstar for the calculation of photoionization rates and heating, we denote . This is calculated during an outward iteration using the transfer equation:
with the boundary condition that at the inner radius of the cloud. Here and are the local continuum opacity and emissivity and is the incident radiation field at the inner edge of the cloud.
In addition we can define the various radiation fields of interest for use in fitting to observed data. These include the spectrum transmitted by a model, i.e. the radiation which would be observed if the incident radiation field were subject to absorption alone:
where is the total optical depth through the model cloud due to continuum photoabsorption,
Also of interest is the total emitted continuum radiation in both the inward and outward directions, which is given by equations similar to (11):
where the escape probabilities in the inward and outward directions are and , where is the covering fraction, specified as an input parameter, and and are the continuum optical depths in the inward and outward directions.
Line luminosities are calculated separately, one at a time, according to an equation analogous to equation (12):
where and are the luminosities of individual lines in the inward and outward directions, respectively. The escape probabilities in the inward and outward directions are calculated using from equations (6)-(8) and and , and and are the line scattering optical depths in the inward and outward directions:
.
and is the line center opacity.
None of the continuum luminosities defined in equations (12)-(16) have the effects of lines included, either in emission or absorption. This is because lines scatter the radiation, while photoionization is true absorption. The effects of lines on the continuum can be added to the continuum for the purposes of comparing with observed spectra by binning the lines, i.e. we can calculate the binned specific luminosity and opacity:
where and are the energy and width, respectively, of the continuum bin closest to line , and is the profile function including the effects of broadening due to thermal Doppler motions, natural broadening, and turbulence.
Then we can define the total optical depth of the cloud
and the total transmitted specific luminosity
and the total emitted specific luminosity in the inward and outward directions:
The quantities , and are output in columns 2,3,4,5 of the file xout_spect1.fits. The quantities , and are output in columns 2,3,4,5 of the file xout_cont1.fits.
In fact, the lines should be included in the continuum which is responsible for the local ionization and heating of the gas, since they can contribute to these processes. So we define a modified version of equation (12):
is the quantity which is used by xstar to calculate the local ionizing flux. This is the quantity which is conserved by xstar when it calculates heating=cooling.
The quantities , , and are output in columns 6,7,8,9 of the file xout_lines1.fits.
The quantities which contain the continuum only, before the lines are binned and added, are printed out to the log file, xout_step.log, when the print switch lpri is set to 1 or greater. Then they are in a table following the label 'continuum luminosities'. The quantities , , , , and are output in columns 3,4,5,6,7 and 8. Many other useful quantities are output to the log file when lpri=1. This includes the quantities , , and are in columns 2-5 following the label 'line luminosities'
Tim Kallman 2016-04-29 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9427430033683777, "perplexity": 1102.3362554362468}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860126502.50/warc/CC-MAIN-20160428161526-00138-ip-10-239-7-51.ec2.internal.warc.gz"} |
https://electronics.stackexchange.com/questions/463052/cutoff-frequency-of-transfer-function-at-6-db | # Cutoff frequency of transfer function at -6 dB
I'm trying to design a second order low-pass filter using the following transfer function:
$$H(s)=\frac{f_c^2}{s^2 + 2 f_c s + f_c^2}$$
with cutoff frequency fc = 3400 Hz
Whenever I plot this, the cutoff frequency is at -6 dB instead of -3 dB. I'm not really sure what I'm doing wrong. Here's my Matlab code:
fc = 3400;                               % cutoff frequency in Hz
s = 1i*logspace(0,6,1000);               % evaluate along s = j*f, f from 1 Hz to 1 MHz
H_d = fc^2 ./ (s.^2 + 2*fc*s + fc^2);    % second-order low-pass transfer function
semilogx(abs(s), 20 * log10(abs(H_d)))   % magnitude response in dB versus frequency
With the coefficient 2 the denominator factors as $(s+f_c)^2$: two coincident real poles, i.e. critical damping with $Q = 0.5$, so $|H(jf_c)| = \tfrac{1}{2}$, which is $-6$ dB. For a Butterworth (maximally flat) response with $-3$ dB at the cutoff, the middle coefficient should be $\sqrt{2} \approx 1.414$:
$$H(s) = \frac{f_c^2}{s^2+1.414f_cs+f_c^2}$$
note: for me, it's unusual to see the use of $f$ instead of $\omega$ in these formulas, but mathematically it should be equivalent, as long as you use $s=jf$ instead of $s=j\omega$
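As a quick illustration (not from the original thread), a few lines of Python evaluate the gain at $f = f_c$ for both denominator coefficients, assuming the question's $s = jf$ convention:

```python
import numpy as np

fc = 3400.0
s = 1j * fc   # evaluate at the cutoff frequency, using s = j*f

for k in (2.0, np.sqrt(2.0)):
    H = fc**2 / (s**2 + k * fc * s + fc**2)
    print(f"k = {k:.3f}: {20*np.log10(abs(H)):6.2f} dB at fc")
# k = 2.000 -> -6.02 dB  (critically damped, Q = 0.5)
# k = 1.414 -> -3.01 dB  (Butterworth, Q = 1/sqrt(2))
```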
• A 2nd order low pass filter has a gain value at the resonant frequency equal to the circuit's Q. So when Q = 0.5, the gain at the resonant frequency is 0.5 or -6 dB. If Q is ten ($\zeta$ = 0.05) then gain is ten at resonance. Oct 16, 2019 at 7:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 6, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9183326363563538, "perplexity": 763.6049729976398}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104668059.88/warc/CC-MAIN-20220706060502-20220706090502-00766.warc.gz"} |
https://www.mimuw.edu.pl/en/aktualnosci/seminaria/stochastic-models-growth-quasi-discrete-generations | Wydział Matematyki, Informatyki i Mechaniki Uniwersytetu Warszawskiego
News — Events
Sem. Biomath. & Game Th.
Stochastic models of growth with quasi-discrete generations
Speaker: Bartłomiej Wacław
2023-01-25 12:15
Stochastic models of growth and evolution of cells such as bacteria or cancer cells usually assume exponential distribution of division times. Birth, death, mutations, and other processes can then be modelled similarly to chemical reactions, assuming that each process occurs with a certain (possibly state- and time-dependent) rate. This is convenient because it enables the model to be described by a Markov process, for which many powerful analytic and computational techniques exist. However, biological cells do not replicate in this way; the distribution of reproduction times is usually centred around a non-zero “doubling time”. In this talk I will discuss how three stochastic models: exponential growth, logistic growth, and a two-species predator-prey model are affected by non-exponential, narrow distribution of doubling times. Using computer simulations and analytic calculations I will show that the modified models exhibit much larger fluctuations of the number of cells. In particular, the predator-prey model shows large quasi-periodic oscillations caused by resonant amplification of coloured noise generated by replication events if the generation time is tuned to the natural oscillatory frequency of the system. I will also briefly discuss the relevance of these result for laboratory experiments on bacterial growth. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9234925508499146, "perplexity": 2031.4317045833664}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500628.77/warc/CC-MAIN-20230207170138-20230207200138-00408.warc.gz"} |
http://www.gradesaver.com/the-odyssey/q-and-a/can-you-please-explain-why-nausicaa-is-the-one-who-helped-odysseus-beat-the-phaeacians-in-the-tournament-instead-of-alcinous-i-originally-thought-it-was-alcinous-248761 | # Can you please explain why Nausicaa is the one who helped Odysseus beat the Phaeacians in the tournament instead of Alcinous? I originally thought it was Alcinous.
Can you please explain why Nausicaa is the one who helped Odysseus beat the Phaeacians in the tournament instead of Alcinous? I originally thought it was Alcinous. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8865734338760376, "perplexity": 2126.6903136859924}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698544140.93/warc/CC-MAIN-20161202170904-00387-ip-10-31-129-80.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/vector-components-under-a-translation.130479/ | # Homework Help: Vector components under a translation
1. Sep 1, 2006
### Geekster
Ok....
I am asked how a vector's components transform under a translation of coordinates.
From mathworld:
Does that imply that the components used to describe the vector remain unchanged?
If you and I see a car drive east at 50 km/h and you are standing at what you call the origin, and I am standing, let's say, 5 m along what you are calling the positive y-axis, and I call my point the origin with my basis vectors being parallel to your basis vectors, then do we both give the same components to describe the vector for the car?
To me it seems like this should not always be the case. After all, one position vector has different components then another, right? If we just shift these around by translation, but keep the same direction, then haven't we changed the magnitude from one coordinate system to the next?
Last edited: Sep 1, 2006
2. Sep 1, 2006
### nrqed
It all depends on what you mean by "the vector of the car".
The position vector (usually represented by ${\vec r}$) does depend on the position of the origin. But it does not actually enter any physical equation (it always appears in the combined form ${\vec r_f} - {\vec r_i} \equiv \Delta {\vec r}$ called the displacement vector). All the other physical vectors (velocity, acceleration, etc) are independent of the position of the origin.
Mathematically, a vector is defined an oriented segment with a certain magnitude and direction independent of any coordinate system. (so in some sense, the "position" vector is not truly a vector. The real vector is the displacement vector which does enter the equations of physics but is independent of the choice of coordinate system. )
3. Sep 1, 2006
### Geekster
I still don't get it....
So let's say I have a vector (1,1,1) using the standard basis. Now I want to translate the coordinates so my new origin is at (1,1,1), with the basis vectors being parallel to the standard basis vectors. Then are my vector's coordinates still (1,1,1) relative to the new coordinates?
Wait...I think I see what is meant here. You can shift the coordinates (by translation at least) around all you want, but the vector just moves along with the coordinates?
If that is correct then it would mean that the magnitude and direction for which the car is moving (above example) would be the same for either observer, only the car's position vector relative to my coordinates would be different than the position vector relative to the other guys coordinates.
So really position doesn't fit the idea of what a vector is, even if under many of the usual vector operations position vectors act like vectors.
Ok...that's kind of an abstract idea, but I think I get it.
4. Sep 1, 2006
### nrqed
Yes. If you visualize vectors, they are directed line segments ("arrows") with a tail and a tip. If you slide the coordinate system, the arrow won't change. The location of the tip and tail *will* change but not the arrow.
It depends if you want to think in terms of mathematics or in terms of physics. But if you use physics, pick a certain vector, say the velocity vector of an object. That won't change under translation.
But the question can be addressed at different levels. Even under rotations of the coordinate system the vector will not change. However, if you write the vector in unit vector notation ${\vec A} = A_x {\hat i} + A_y {\hat j} + A_z {\hat k}$ you have to be careful. The vector ${\vec A}$ is completely independent of the coordinate system. But the values of the *components* do depend on the coordinate system. Under *translations* they do not change but under a rotation of the coordinate system, the *components* will change.
No, the vector will move with respect to the coordinates.
Yes.
Yes, position is kind of an exception in that sense.
5. Sep 1, 2006
### quasar987
I'm gonna say some things; they may or may not clear up your confusion, but hopefully they will!
Say in coordinate system S, the position vector of some object is $\vec{r}$. If we look at this same object from the coordinate system S', which is obtained by translating the origin of S by a vector $\vec{d}$. That is to say, in S, the position vector of the origin of S' is $\vec{d}$. Then the position of the object in S' is given by $\vec{r} \ '=-\vec{d}+\vec{r}$. Arrange so you see that clearly. Draw the two coordinate systems and all the vectors. This is the transformation equation you are looking for.
If the object is moving, and its path in S is $\vec{r}(t)$, then its velocity in S is $d\vec{r}/dt$. In S' its velocity is then
$$\frac{d\vec{r} \ '}{dt}=-\frac{d\vec{d}}{dt}+\frac{d\vec{r}}{dt} = 0+\frac{d\vec{r}}{dt}=\frac{d\vec{r}}{dt}$$
I.e. the components of the velocity vector are the same wether we look at the object from one coordinate syetem or from another one translated with respect to the first.
Last edited: Sep 1, 2006
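Here is a small numpy sketch (not part of the original thread) of that conclusion, using the car example from the first post; the numbers are made up for illustration:

```python
import numpy as np

# Positions of the car at two times, as seen in frame S (assumed numbers)
r_t0 = np.array([0.0, 0.0])
r_t1 = np.array([50.0, 0.0])     # moved 50 m east
d = np.array([0.0, 5.0])         # origin of S' as seen from S (5 m up the y-axis)

# Position vectors change under the translation r' = r - d ...
r_t0_prime = r_t0 - d
r_t1_prime = r_t1 - d

# ... but the displacement (and hence the velocity) does not:
print(r_t1 - r_t0)                  # [50.  0.]
print(r_t1_prime - r_t0_prime)      # [50.  0.]
```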
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8192650675773621, "perplexity": 425.8257941727766}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947803.66/warc/CC-MAIN-20180425115743-20180425135743-00140.warc.gz"}
http://mathematicsi.com/angles-of-elevation-and-depression/ | # Angles of elevation and depression
This chapter explores angles of elevation and depression. It covers working with angles of elevation and depression and using scale drawings and trigonometry to calculate the heights of tall objects. No prior knowledge is required for this chapter.
Engineers use trigonometry to work out very great heights, such as those of trees and tall buildings. The following image shows an old chimney. Engineers need to work out how tall it is. The engineers stand 100 m from the chimney and measure the angle of elevation shown below using a theodolite. They find that the angle of elevation is 32°. Suppose they wanted to work out the height. One way to work out the height of the chimney is to do a scale drawing. We could use a scale of 1 cm for 10 m. First we draw a line 10 cm long. Then we draw an angle of 32° as shown below. Then we draw in the height. Finally we measure the height using a ruler. You'll find that it is about 6.3 cm, which represents 63 metres at the scale above. We have found that the chimney is roughly 63 m high.
## Trigonometry
We could use a more accurate approach to solve the problem above: trigonometry. The first step is identifying the known sides. We know the opposite and the adjacent: h is the opposite and 100 is the adjacent. The second step is identifying which ratio to use; from SOHCAHTOA we can see that we need the tangent. The formula gives tan 32° = h/100. Rearranging to get h on its own gives h = 100 × tan 32°. Working this out on the calculator and rounding to 1 d.p., we find that the height of the chimney is 62.5 metres.
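As a quick check of this arithmetic (and of the boat example in the next section), here is a short Python sketch; it is an illustration added for clarity, not part of the original lesson.

```python
import math

# Chimney: observer 100 m from the base, angle of elevation 32 degrees
height = 100 * math.tan(math.radians(32))
print(round(height, 1))    # 62.5 (metres)

# Boat (next section): cliff 40 m high, angle of depression 25 degrees
distance = 40 / math.tan(math.radians(25))
print(round(distance, 1))  # 85.8 (metres)
```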
## Angles of depression
John is standing on the edge of a cliff. He sees a boat at sea and wonders how far away it is. He knows that the cliff is 40 m high. He measures the angle of depression and finds it is 25°. Suppose he wanted to find the distance to the boat. Below is a diagram formed from the problem to help solve it. If the angle of depression is 25°, then angle a must also be 25° (alternate angles), so we no longer need the angle of depression once the inner angle is known. We use SOHCAHTOA to solve the problem. First we identify the sides we know: the opposite is 40 and the adjacent is d, the distance to the boat. The ratio we need is the tangent: tan 25° = 40/d. Rearranging gives d = 40 ÷ tan 25°. Working this out on the calculator and rounding to 1 d.p., we have found that the boat is 85.8 m away. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.861310601234436, "perplexity": 476.61423995470284}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701146196.88/warc/CC-MAIN-20160205193906-00223-ip-10-236-182-209.ec2.internal.warc.gz"}
http://www.maplesoft.com/support/help/Maple/view.aspx?path=Task/MatrixMultiplication | Product of Two Matrices - Maple Programming Help
Product of Two Matrices
Description Calculate the product of two matrices.
Enter the first matrix.
> $\left[\begin{array}{ccc}{1}& {0}& {-}{4}\\ {0}& {-}{1}& {1}\\ {5}& {-}{2}& {2}\\ {2}& {1}& {-}{3}\end{array}\right]$
$\left[\begin{array}{rrr}{1}& {0}& {-}{4}\\ {0}& {-}{1}& {1}\\ {5}& {-}{2}& {2}\\ {2}& {1}& {-}{3}\end{array}\right]$ (1)
Enter the second matrix.
> $\left[\begin{array}{cc}{0}& {-}{1}\\ {2}& {-}{2}\\ {-}{1}& {1}\end{array}\right]$
$\left[\begin{array}{rr}{0}& {-}{1}\\ {2}& {-}{2}\\ {-}{1}& {1}\end{array}\right]$ (2)
Multiply the two matrices.
> $(1)\cdot(2)$
$\left[\begin{array}{rr}{4}& {-}{5}\\ {-}{3}& {3}\\ {-}{6}& {1}\\ {5}& {-}{7}\end{array}\right]$ (3)
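For readers without Maple, the same product can be checked with a short numpy sketch (this is an addition for comparison, not part of the Maple help page):

```python
import numpy as np

A = np.array([[1, 0, -4],
              [0, -1, 1],
              [5, -2, 2],
              [2, 1, -3]])
B = np.array([[0, -1],
              [2, -2],
              [-1, 1]])

print(A @ B)
# [[ 4 -5]
#  [-3  3]
#  [-6  1]
#  [ 5 -7]]
```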
Commands Used | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 6, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9997625946998596, "perplexity": 2678.7970401449024}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128319575.19/warc/CC-MAIN-20170622135404-20170622155404-00504.warc.gz"} |
https://arxiv.org/abs/1802.01192 | astro-ph.GA
# Title: MOCCA-SURVEY Database I: Assessing GW kick retention fractions for BH-BH mergers in globular clusters
Abstract: Anisotropy of gravitational wave (GW) emission results in a net momentum gained by the black hole (BH) merger product, leading to a recoil velocity up to $\sim10^3\text{ km s}^{-1}$, which may kick it out of a globular cluster (GC). We estimate GW kick retention fractions of merger products assuming different models for BH spin magnitude and orientation. We check how they depend on BH-BH merger time and properties of the cluster. We analyze the implications of GW kick retention fractions on intermediate-mass BH (IMBH) formation by repeated mergers in a GC. We also calculate the final spin of the merger product, and investigate how it correlates with other parameters: effective spin of the binary and gravitational kick velocity. We used data from MOCCA (MOnte Carlo Cluster simulAtor) GC simulations to get a realistic sample of BH-BH mergers, assigned each BH a spin value according to a studied model, and calculated recoil velocity and final spin based on the most recent theoretical formulas. We discovered that for physically motivated models, GW kick retention fractions are about $30\%$ and display little dependence on assumptions about spin, but are much more sensitive to cluster properties. In particular, we discovered a strong dependence of GW kick retention fractions on cluster density. We also show that GW kick retention fractions are high in the final life stages of the cluster, but low at the beginning. Finally, we derive formulas connecting final spin with effective spin for primordial binaries, and with maximal effective spin for dynamical binaries.
Comments: 13 pages, 8 figures, submitted to MNRAS
Subjects: Astrophysics of Galaxies (astro-ph.GA)
Cite as: arXiv:1802.01192 [astro-ph.GA] (or arXiv:1802.01192v1 [astro-ph.GA] for this version)
## Submission history
From: Jakub Morawski [view email]
[v1] Sun, 4 Feb 2018 21:00:19 GMT (850kb,D) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8939787149429321, "perplexity": 4842.885115201765}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891816138.91/warc/CC-MAIN-20180225051024-20180225071024-00246.warc.gz"} |
https://research.monash.edu/en/publications/depth-to-water-table-correction-for-initial-carbon-14-activities- | # Depth to water table correction for initial carbon-14 activities in groundwater mean residence time estimation
Dylan J. Irvine, Cameron Wood, Ian Cartwright, Tanya Oliver
Research output: Contribution to journal › Article › Research › peer-review
2 Citations (Scopus)
## Abstract
Carbon-14 (14C) is routinely used to determine mean residence times (MRTs) of groundwater. 14C-based MRT calculations typically assume that the unsaturated zone is in equilibrium with the atmosphere, controlling the input 14C activity. However, multiple studies have shown that unsaturated zone 14C activities are lower than atmospheric values. Despite the availability of unsaturated zone 14C data, no attempt has been made to generalise initial 14C activities with depth to the water table. We utilise measurements of unsaturated zone 14C activities from 13 studies to produce a 14C-depth relationship to estimate initial 14C activities. The technique only requires the depth to the water table at the time of sampling, or an estimate of depth to water in the recharge zone, to determine the input 14C activity, making it straightforward to apply. Applying this new relationship to two Australian datasets (113 14C measurements in groundwater) shows that MRT estimates were up to 9250 years younger when the 14C-depth correction was applied relative to conventional MRTs. These findings may have important implications for groundwater samples that suggest the mixing of young and old waters and for the determination of the relative proportions of young and old waters, whereby the estimated fraction of older water may be much younger than previously assumed. Owing to the simplicity of the application of the technique, this approach can be easily incorporated into existing correction schemes to assess the sensitivity of MRTs derived from 14C data to unsaturated zone 14C.
Original language: English
Pages (from-to): 5415-5424
Number of pages: 10
Journal: Hydrology and Earth System Sciences
Volume: 25
Issue: 10
DOI: https://doi.org/10.5194/hess-25-5415-2021
Publication status: Published - 11 Oct 2021
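To illustrate why a lower, depth-corrected initial activity gives younger ages, here is a minimal sketch using the standard decay relation MRT = τ ln(A0/A) with τ ≈ 8267 yr, the mean life of 14C. The depth function below is a hypothetical placeholder for illustration only; it is not the relationship fitted in the paper.

```python
import numpy as np

TAU_14C = 8267.0  # yr, mean life of 14C (half-life 5730 yr divided by ln 2)

def mean_residence_time(a_measured, a_initial):
    """Piston-flow age / MRT from measured and initial 14C activity (both in pMC)."""
    return TAU_14C * np.log(a_initial / a_measured)

def initial_activity_at_depth(depth_m, a_atmosphere=100.0):
    # Hypothetical decrease of unsaturated-zone 14C with depth to the water table;
    # a stand-in for the empirical relationship derived in the paper.
    return a_atmosphere * np.exp(-0.005 * depth_m)

a_meas = 50.0  # pMC measured in a groundwater sample
print(mean_residence_time(a_meas, 100.0))                          # ~5730 yr, atmospheric input
print(mean_residence_time(a_meas, initial_activity_at_depth(30)))  # younger, depth-corrected input
```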
http://www.upscgetway.com/random-variables-and-probability-distribution-of-a-random-variable-probability-9-ncert/ | ### Random variables and Probability distribution of a random variable (Probability 9 NCERT)
Random variables
We have already learnt about random experiments and the formation of sample spaces. In most of these experiments, we were not only interested in the particular outcome that occurs but rather in some number associated with that outcome, as shown in the following examples/experiments.
(i) In tossing two dice, we may be interested in the sum of the numbers on the two dice.
(ii) In tossing a coin 50 times, we may want the number of heads obtained.
(iii) In the experiment of taking out four articles (one after the other) at random from a lot of 20 articles in which 6 are defective, we want to know the number of defectives in the sample of four and not in the particular sequence of defective and nondefective articles.
In all of the above experiments, we have a rule which assigns to each outcome of the experiment a single real number. This single real number may vary with different outcomes of the experiment. Hence, it is a variable. Also, its value depends upon the outcome of a random experiment and, hence, it is called a random variable. A random variable is usually denoted by X.
If you recall the definition of a function, you will realise that the random variable X is really speaking a function whose domain is the set of outcomes (or sample space) of a random experiment. A random variable can take any real value, therefore, its co-domain is the set of real numbers. Hence, a random variable can be defined as follows :
Definition 4 : A random variable is a real valued function whose domain is the sample space of a random experiment.
For example, let us consider the experiment of tossing a coin two times in succession.
The sample space of the experiment is S = {HH, HT, TH, TT}.
If X denotes the number of heads obtained, then X is a random variable and for each outcome its value is as given below: X(HH) = 2, X(HT) = 1, X(TH) = 1, X(TT) = 0.
More than one random variable can be defined on the same sample space. For example, let Y denote the number of heads minus the number of tails for each outcome of the above sample space S. Then Y(HH) = 2, Y(HT) = 0, Y(TH) = 0 and Y(TT) = -2.
Thus, X and Y are two different random variables defined on the same sample space S.
Example A person plays a game of tossing a coin thrice. For each head, he is given Rs 2 by the organiser of the game and for each tail, he has to give Rs 1.50 to the organiser. Let X denote the amount gained or lost by the person. Show that X is a random variable and exhibit it as a function on the sample space of the experiment.
Solution: X is a number whose values are defined on the outcomes of a random experiment. Therefore, X is a random variable. Now, the sample space of the experiment is S = {HHH, HHT, HTH, THH, HTT, THT, TTH, TTT}.
Then X(HHH) = 2 × 3 = 6, X(HHT) = X(HTH) = X(THH) = 2 × 2 - 1.50 = 2.50, X(HTT) = X(THT) = X(TTH) = 2 - 2 × 1.50 = -1 and X(TTT) = -3 × 1.50 = -4.50, where the minus sign shows the loss to the player. Thus, for each element of the sample space, X takes a unique value; hence, X is a function on the sample space whose range is {6, 2.50, -1, -4.50}.
Example A bag contains 2 white balls and 1 red ball. One ball is drawn at random and then put back in the box after noting its colour. The process is repeated again. If X denotes the number of red balls recorded in the two draws, describe X.
Solution: Let the balls in the bag be denoted by w1, w2, r. Then the sample space is S = {w1w1, w1w2, w1r, w2w1, w2w2, w2r, rw1, rw2, rr}. Therefore X(w1w1) = X(w1w2) = X(w2w1) = X(w2w2) = 0, X(w1r) = X(w2r) = X(rw1) = X(rw2) = 1 and X(rr) = 2. Thus, X is a random variable which can take the values 0, 1 or 2.
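A quick enumeration of this experiment (a sketch, with ad-hoc helper names) confirms the values that X can take and previews the corresponding probabilities:

```python
from itertools import product
from collections import Counter
from fractions import Fraction

balls = ['w1', 'w2', 'r']
# Two draws with replacement: 3 x 3 = 9 equally likely outcomes.
outcomes = list(product(balls, repeat=2))
counts = Counter(sum(ball == 'r' for ball in outcome) for outcome in outcomes)
for x in sorted(counts):
    print(x, Fraction(counts[x], len(outcomes)))
# 0 4/9
# 1 4/9
# 2 1/9
```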
Probability distribution of a random variable
Let us look at the experiment of selecting one family out of ten families f1, f2 ,…, f10 in such a manner that each family is equally likely to be selected. Let the families f1, f2, … , f10 have 3, 4, 3, 2, 5, 4, 3, 6, 4, 5 members, respectively.
Let us select a family and note down the number of members in the family, denoting it by X. Clearly, X is a random variable defined by X(f) = number of members in the family f. Thus, X can take the value 2, 3, 4, 5 or 6 depending upon which family is selected.
Now, X will take the value 2 when the family f4 is selected. X can take the value 3 when any one of the families f1, f3, f7 is selected.
Similarly, X can take the value 4 when any one of the families f2, f6, f9 is selected, the value 5 when family f5 or f10 is selected, and the value 6 when family f8 is selected. Since we had assumed that each family is equally likely to be selected, the probability that family f4 is selected is 1/10.
Thus, the probability that X can take the value 2 is 1 /10. We write P(X = 2) = 1 /10
Also, the probability that any one of the families f1, f3 or f7 is selected is 1/10 + 1/10 + 1/10 = 3/10. Thus, the probability that X can take the value 3 is 3/10.
We write P(X = 3) = 3/10. Similarly, we obtain P(X = 4) = 3/10, P(X = 5) = 2/10 and P(X = 6) = 1/10. Such a description, giving the values of the random variable along with the corresponding probabilities, is called the probability distribution of the random variable X.
In general, the probability distribution of a random variable X is defined as follows:
Definition 5: The probability distribution of a random variable X is the system of numbers
X : x1, x2, …, xn
P(X) : p1, p2, …, pn
where pi > 0 and p1 + p2 + … + pn = 1. The real numbers x1, x2, …, xn are the possible values of the random variable X and pi (i = 1, 2, …, n) is the probability of the random variable X taking the value xi, i.e., P(X = xi) = pi.
NOTE: If xi is one of the possible values of a random variable X, the statement X = xi is true only at some point (s) of the sample space. Hence, the probability that X takes value xi is always nonzero, i.e. P(X = xi) ≠ 0.
Also for all possible values of the random variable X, all elements of the sample space are covered. Hence, the sum of all the probabilities in a probability distribution must be one.
Example Two cards are drawn successively with replacement from a well-shuffled deck of 52 cards. Find the probability distribution of the number of aces.
Solution The number of aces is a random variable. Let it be denoted by X. Clearly, X can take the values 0, 1, or 2.
Now, since the draws are done with replacement, therefore, the two draws form independent experiments.
Therefore, P(X = 0) = P(non-ace and non-ace) = (48/52)(48/52) = 144/169, P(X = 1) = P(ace and non-ace) + P(non-ace and ace) = (4/52)(48/52) + (48/52)(4/52) = 24/169, and P(X = 2) = P(ace and ace) = (4/52)(4/52) = 1/169. Thus, the required probability distribution is
X : 0, 1, 2
P(X) : 144/169, 24/169, 1/169
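The same distribution can be checked numerically; this short sketch uses exact fractions so the probabilities sum to exactly one, as required of a probability distribution.

```python
from fractions import Fraction

p_ace = Fraction(4, 52)              # probability of an ace on each (independent) draw
q = 1 - p_ace

dist = {0: q * q,                    # no ace on either draw
        1: p_ace * q + q * p_ace,    # ace on exactly one draw
        2: p_ace * p_ace}            # ace on both draws

print(dist)                          # {0: 144/169, 1: 24/169, 2: 1/169}
print(sum(dist.values()))            # 1
```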
https://arxiv-export-lb.library.cornell.edu/abs/2201.01265 | astro-ph.HE
# Title: Limits on primordial black holes detectability with Isatis: A BlackHawk tool
Abstract: Primordial black holes (PBHs) are convenient candidates to explain the elusive dark matter (DM). However, years of constraints from various astronomical observations have constrained their abundance over a wide range of masses, leaving only a narrow window open at $10^{17}\,{\rm g} \lesssim M \lesssim 10^{22}\,$g for all DM in the form of PBHs. We reexamine this disputed window with a critical eye, interrogating the general hypotheses underlying the direct photon constraints. We review 4 levels of assumptions: i) instrument characteristics, ii) prediction of the (extra)galactic photon flux, iii) statistical method of signal-to-data comparison and iv) computation of the Hawking radiation rate. Thanks to Isatis, a new tool designed for the public Hawking radiation code BlackHawk, we first revisit the existing and prospective constraints on the PBH abundance and investigate the impact of assumptions i)-iv). We show that the constraints can vary by several orders of magnitude, advocating the necessity of a reduction of the theoretical sources of uncertainties. Second, we consider an "ideal" instrument and we demonstrate that the PBH DM scenario can only be constrained by the direct photon Hawking radiation phenomenon below $M_{\rm max} \sim 10^{20}\,$g. The upper part of the mass window should therefore be closed by other means.
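For orientation on the quoted mass window, the sketch below evaluates the textbook Hawking temperature k_B T = ħc³/(8πGM); this is generic black-hole physics rather than anything specific to Isatis or BlackHawk, and the physical constants are rounded.

```python
import numpy as np

hbar, c, G, k_B = 1.055e-34, 2.998e8, 6.674e-11, 1.381e-23  # SI units
keV = 1.602e-16  # joules per keV

def hawking_kT_keV(mass_g):
    """Hawking temperature k_B*T (in keV) of a black hole of the given mass in grams."""
    m_kg = mass_g * 1e-3
    T = hbar * c**3 / (8 * np.pi * G * m_kg * k_B)  # kelvin
    return k_B * T / keV

for m in (1e17, 1e20, 1e22):
    print(f"M = {m:.0e} g  ->  kT ~ {hawking_kT_keV(m):.3g} keV")
# Roughly 100 keV at 1e17 g, falling to ~1e-3 keV at 1e22 g, i.e. the photon
# signal softens rapidly across the open mass window.
```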
Comments: 19 pages, 7 figures, 5 tables, accepted for publication in EPJC
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE); Cosmology and Nongalactic Astrophysics (astro-ph.CO); General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Phenomenology (hep-ph); High Energy Physics - Theory (hep-th)
DOI: 10.1140/epjc/s10052-022-10199-y
Cite as: arXiv:2201.01265 [astro-ph.HE] (or arXiv:2201.01265v2 [astro-ph.HE] for this version)
## Submission history
From: Jérémy Auffinger [view email]
[v1] Tue, 4 Jan 2022 18:00:31 GMT (1180kb,D)
[v2] Wed, 9 Mar 2022 09:01:45 GMT (1171kb,D)
https://math.msu.edu/seminars/TalkView.aspx?talk=4120 | ## Geometry and Topology
• Elizabeth Munch, MSU
• Approximating Continuous Functions on Persistence Diagrams for Machine Learning Tasks
• 11/16/2017
• 2:00 PM - 3:00 PM
• C304 Wells Hall
Many machine learning tasks can be boiled down to the following idea: approximate a continuous function defined on a topological space (the "ground truth") given the function values (or approximate function values) on some subset of the points. This formulation has been well studied for Euclidean data; however, more work is necessary to extend these ideas to arbitrary topological spaces. In this talk, we focus on the task of classification and regression on the space of persistence diagrams endowed with the bottleneck distance, (D, d_B). These objects arise in the field of Topological Data Analysis as a signature which gives insight into the underlying structure of a data set. The issue is that the structure of (D, d_B) is not directly amenable to the application of existing machine learning theories. In order to properly create this theory, we will give a full characterization of compact sets in D; provide simple, exemplar functions for vectorization of persistence diagrams; and show that, in practice, this method is quite successful in classification and regression tasks on several data sets of interest.
http://math.stackexchange.com/questions/226348/integral-with-vector-field-in-a-circle/226358 | # Integral with vector field in a circle.
Given
$$F(x, y) = x^2\mathbf{i} + xy\mathbf{j}$$ $$x^2 + y^2 = 49$$
Find the work done by the force field on a particle that moves once around the circle oriented in the clockwise direction.
I've been using $$\int_C F(\vec{r}(t))\cdot \vec{r}'(t) dt$$ to do other similar problem but usually $\vec{r}$ is given.
As Sigur suggests, parametrize the circle as
$$r(t):=(7\cos t\,,\,7\sin t)\,\,,\,0\leq t\leq 2\pi\Longrightarrow$$
$$\Longrightarrow \oint_C F(r(t))\cdot r'(t)\,dt=\int_0^{2\pi}(49\cos^2t\,,\,49\cos t\sin t)\cdot(-7\sin t\,,\,7\cos t)\,dt=$$
$$=\int_0^{2\pi} 0\,dt=0$$
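For a quick cross-check, the same parametrization can be fed to SymPy; the line integral again comes out to 0.

```python
import sympy as sp

t = sp.symbols('t')
x, y = 7*sp.cos(t), 7*sp.sin(t)                    # r(t) on the circle of radius 7
F = sp.Matrix([x**2, x*y])                         # F(x, y) = (x^2, x*y) along the curve
dr = sp.Matrix([sp.diff(x, t), sp.diff(y, t)])     # r'(t)
print(sp.integrate(F.dot(dr), (t, 0, 2*sp.pi)))    # 0
```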
Great! In this case, the orientation does not matter... lol – Sigur Oct 31 '12 at 22:33
Of course, had the result been different from zero you'd just have to change signs... – DonAntonio Oct 31 '12 at 22:38
Your curve is given by $C: r(t)=(7\cos (t),7\sin (t))$, $0\leq t\leq 2\pi$. But be careful with the orientation: this one is counter-clockwise oriented. It is just a matter of sign, though.
https://www.groundai.com/project/alignment-of-the-atlas-inner-detector-tracking-system2654/ | Alignment of the ATLAS Inner Detector Tracking System
# Alignment of the ATLAS Inner Detector Tracking System
Grant Gorfine, Fachbereich Physik, Bergische Universität Wuppertal, D-42097 Wuppertal, Germany. On behalf of the ATLAS Collaboration.
###### Abstract
The ATLAS detector, built at one of the interaction points of the Large Hadron Collider, is operational and has been collecting data from cosmic rays. This paper describes the track based alignment of the ATLAS Inner Detector tracker which was performed using cosmic rays collected in 2008. The alignment algorithms are described and the performance of the alignment is demonstrated by showing the resulting hit residuals and comparing track parameters of upper and lower segments of tracks. The impact of the alignment on physics measurements is discussed.
## I Introduction
The ATLAS detector [1] is a general purpose detector built at one of the interaction points of the Large Hadron Collider (LHC), where proton-proton collisions with a center of mass energy of 14 TeV are expected. The inner tracking system of ATLAS is made up of silicon detectors and straw drift tubes. While these detectors were placed with very high precision, of the order of 100 μm, the precision required for physics necessitates determining the positions of the tracking elements to a few microns. This is only achievable by doing a track based alignment. The LHC has not yet started proton-proton collisions; however, the ATLAS detector is fully operational and has been collecting cosmic ray data. This paper describes the alignment achieved using this cosmic ray data and explores the expected impact of the alignment on the physics performance in the early data.
## II Overview of the ATLAS Inner Detector
The ATLAS detector consists of several systems: an inner tracker (Inner Detector), electromagnetic and hadronic calorimeters, and a muon spectrometer. The Inner Detector is located within a solenoidal magnetic field of about 2 Tesla and is made up of three subsystems, as shown in Figure 1.
The innermost subsystem is the pixel detector. It is made up of three barrel layers and three disks in each endcap, giving at least 3 space points per track. There are a total of 1744 modules (1456 in the barrel and 144 in each endcap). The pixel cell size is 50 μm × 400 μm with a corresponding resolution of 10 μm × 115 μm. The more precise measurement is in the rφ direction (the bending direction of the magnetic field), and the less precise direction measures z in the barrel and r in the endcap. In the local frame of the module the directions are referred to as local x and local y respectively.
The next subsystem is the SCT (Semi-Conductor Tracker), which consists of silicon microstrip detectors. In the SCT there are four barrel layers and 9 disks in each endcap, giving 4 space points per track. It has a total of 4088 modules (2112 in the barrel and 988 in each endcap). The strip pitch is about 80 μm, giving a resolution of 17 μm in the rφ (local x) measurement direction. The modules are made of two back-to-back sides which are rotated 40 mrad with respect to each other to give a stereo measurement. This results in a space point resolution of about 580 μm in z (barrel) or r (endcap).
The outer subsystem is the TRT (Transition Radiation Tracker). It is made up of straw drift tubes which have a diameter of 4 mm. The straws are embedded in a material that produces transition radiation photons, which facilitates electron identification. On average 36 straws are crossed per track. The barrel is segmented into 96 modules arranged in three rings. Each endcap is made up of 20 wheels, where each wheel consists of two four-plane structures. The resolution is 130 μm in the rφ measurement direction.
## III Cosmic Ray Data Collection
The data used to obtain the alignment presented here was collected from September to December 2008 using cosmic rays. In total 2.6 million Inner Detector tracks were recorded with the magnetic field on and 5 million with the magnetic field off. A number of these tracks, however, do not pass through all three subsystems. Requiring at least one hit in the SCT, results in 880K and 2 million tracks with field on and off respectively. Requiring at least one hit in the pixel detector, results in 190K tracks with field on and 230K tracks with field off.
## IV The Alignment Algorithms
The hit residual is the distance between the track prediction and the measured hit. The basic approach to align the detector is to reduce the residuals. The main methods build a χ² that is to be minimized. It is defined as
$$\chi^2 = \sum_{\mathrm{tracks}} r^T V^{-1} r \qquad (1)$$
where $r$ is a vector of the residuals and $V$ is the corresponding covariance matrix. The minimum is obtained by solving the condition:
$$\frac{d\chi^2}{da} = 0 \qquad (2)$$
where $a$ is a vector containing the alignment constants. These are generally the 3 translations and 3 rotations of each alignable structure (e.g. a module or layer). The solution to the minimization is of the form:
$$a = -\left(\sum_{\mathrm{tracks}} \frac{dr}{da}^{T} V^{-1} \frac{dr}{da}\right)^{-1}\left(\sum_{\mathrm{tracks}} \frac{dr}{da}^{T} V^{-1} r\right) \qquad (3)$$
$$a = M^{-1} b \qquad (4)$$
where a full derivation can be found in [2]. The matrix $M$ is an $N \times N$ matrix, where $N$ is the number of degrees of freedom. If aligning all modules this will be 6 (three translations and three rotations) times the number of modules. In the case of the silicon detectors this results in a 35K × 35K matrix. The solution of this large system of linear equations can be obtained by full diagonalization (for example with LAPACK [3] or ScaLAPACK [4]), which is computationally intensive, or by fast solving techniques (e.g. MA27 [5]), which can be performed on a standard workstation. The fast solving methods rely on the matrix being sparse, which is generally the case.
Solving this large matrix is referred to as the global approach [2, 6]. The correction of one module will be correlated to the movement of other modules, and these correlations are taken into account in one go when solving this matrix. A few iterations are generally required due to non-linearities.
A second approach is the local approach [6, 7, 8], which ignores the correlations between modules, so one only needs to invert a small 6 × 6 matrix for each module. Correlations are taken into account by iterating several times.
The global χ² is currently the baseline approach, but both approaches are implemented in ATLAS and give consistent results.
In addition to the χ² minimization techniques, a robust alignment algorithm [9] has also been developed. This works by shifting modules according to their observed average residual offsets in an iterative fashion. In particular, it takes advantage of the regions where modules overlap.
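As an illustration of Eqs. (1)-(4), the sketch below assembles the normal-equations matrix and vector from toy residuals and derivatives and solves for the corrections. The inputs are random stand-ins rather than the output of a real track fit, and the actual 35K × 35K ATLAS system is handled with the sparse or diagonalization-based solvers mentioned above rather than a dense solve.

```python
import numpy as np

n_dof = 12                       # toy: 2 structures x 6 degrees of freedom (real case ~35k)
M = np.zeros((n_dof, n_dof))     # M = sum_tracks (dr/da)^T V^-1 (dr/da)
b = np.zeros(n_dof)              # b = sum_tracks (dr/da)^T V^-1 r

rng = np.random.default_rng(0)
for _ in range(1000):            # loop over toy "tracks"
    n_hits = 8
    drda = rng.normal(size=(n_hits, n_dof))   # residual derivatives w.r.t. alignment constants
    Vinv = np.eye(n_hits) / 0.02**2           # diagonal hit covariance (20 um, in mm units)
    r = rng.normal(scale=0.02, size=n_hits)   # hit residuals
    M += drda.T @ Vinv @ drda
    b += drda.T @ Vinv @ r

# Alignment corrections a = -M^-1 b; lstsq guards against unconstrained (weak) modes.
a, *_ = np.linalg.lstsq(M, -b, rcond=None)
print(a)
```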
## V The Alignment Strategy
The alignment sequence was as follows. First the silicon detector (both pixel and SCT together) was aligned internally. A more detailed sequence of the silicon alignment is described below. For the pixel detector, survey information was available and was used as a starting point for the alignment. Next the TRT was aligned internally. After that, the TRT was aligned with respect to the silicon detectors. Finally a “Center-of-Gravity” correction was made which adjusts the overall translation and rotation of the entire Inner Detector such that the aligned detector has the same center of gravity as the nominal detector. This is needed as the minimization is insensitive to the overall translation and rotation of the Inner Detector.
The alignment of the detector was done in a number of steps at different levels, closely following the structural assembly of the detector. The placement of the larger structures is less precisely known than the placement of the modules within their substructures. The first level of alignment (referred to as Level 1) was at the level of major subsystems, which were installed as separate items. These are the 2 TRT endcaps and the TRT barrel, the 2 SCT endcaps and the SCT barrel, and the whole pixel system (however, for the silicon internal alignment the TRT was not included). The next level of alignment was at the disk and layer level. Each of the three pixel barrel layers was constructed from two semi-circular half shells. At this alignment level the 6 pixel layer half shells plus the 4 SCT barrel layers were aligned. Due to the poor illumination of the endcaps (since most cosmic ray muons travel predominantly in the vertical direction), the disks within the endcaps were not aligned separately but rather each endcap was aligned as a whole (2 SCT and 2 pixel endcaps).
Finally a module level alignment was done. Before this module level alignment was made, it was observed that the pixel staves have a significant bow lateral to the module plane, which was also expected from the mechanical construction of the detector. To remain conservative, not all degrees of freedom were aligned at the module level; rather they were restricted to the two degrees of freedom which were able to correct for the stave bow. These degrees of freedom were the angle about the normal of the module and the translation in the precision measurement direction (local x). At the module level only barrel modules were aligned; the endcaps were again kept as a whole.
## VI Results
### VI.1 Residuals
Since the alignment works by minimizing residuals, the distributions of the residuals are a key test of the performance of the alignment. Figures 2-5 show the residuals for the different detectors. The width σ quoted in the figures is that of the Gaussian describing the core after a fit to a double Gaussian distribution. In the case of the pixel detector, both the high precision measurement direction (local x) and the direction orthogonal to this (local y) are shown. The figures show the distributions before alignment, where one sees wide distributions which are not centered around zero. After the alignment the residual widths are reduced significantly and well centered on zero. The widths are approaching those of an ideal alignment.
Taking the quadratic difference between the measured residual widths and those of an ideal alignment, one estimates that the remaining misalignment is equivalent to a random displacement of the modules of less than 20 μm. This was further tested by generating a residual misalignment set (labeled as "Day 1") with modules misplaced randomly with a Gaussian width of 20 μm in the local x and local y directions, which approximately reproduces what is seen in data, as shown in Figure 6. It is expected, however, that the remaining misalignment is not just random misplacement, but quite likely also includes systematic distortions. This is discussed further in Section VII.2.
Data from cosmic ray runs in 2009 were also recently processed using the alignments obtained with the 2008 data. While the residual widths were slightly increased the mean positions were still well centered on zero indicating that the detector has been quite stable over an extended period of time.
### VI.2 Upper and Lower Track Comparison
The reduction in width and the centering of the residuals is a necessary condition to demonstrate a good alignment. However, it is not sufficient, as a number of systematic distortions are insensitive to, or only weakly constrained by, the minimization of the χ². Cosmic ray tracks have the unique feature that many cross the upper and lower parts of the detector. One can take tracks that pass close to the origin (i.e. where beam particles collide in collision events), split the track into a lower and an upper segment, and then refit these as two independent tracks. It is possible then to compare the track parameters. This was done for tracks passing cuts on the transverse momentum (in GeV) and on |d0| and |z0| (in mm), where d0 is the transverse impact parameter measured with respect to the origin and z0 is the longitudinal position at the point of closest approach to the origin. This cut ensures that the track has at least gone through the first pixel layer.
The result of this procedure for the impact parameter, where one looks at the difference of the impact parameters of the two tracks, is shown in Figure 7. Before alignment there is a large shift away from zero and a very broad distribution. After alignment the width is significantly reduced. A shift of 11 μm is, however, observed, indicating there is still some improvement needed; this is still under investigation.
Since there are two tracks, an estimate of the impact parameter resolution can be obtained from the width of this distribution divided by √2. This results in a resolution of 35 μm. For comparison, the impact parameter resolution from collision events as estimated from Monte-Carlo is 20 μm for tracks with pT above a GeV-scale threshold.
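A toy version of this estimate, assuming the two segments are independent and have equal resolution, shows how dividing the spread of the upper-lower difference by √2 recovers the single-segment resolution:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.035                                   # mm, assumed single-segment d0 resolution
n = 100_000
d0_upper = rng.normal(scale=sigma, size=n)      # upper-segment impact parameters
d0_lower = rng.normal(scale=sigma, size=n)      # lower-segment impact parameters
delta = d0_upper - d0_lower

print(delta.std() / np.sqrt(2))                 # ~0.035 mm, the single-segment resolution
```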
Figure 8 shows the azimuthal angle, φ, of the track; again, good improvement is seen after alignment and the distribution is well centered around zero.
Figure 9 shows the resolution of q/p (the charge over momentum) as a function of the transverse momentum, pT, as obtained with this track splitting method. Both tracks reconstructed with the full Inner Detector (i.e. including the TRT) and those reconstructed with only the silicon detector are shown. The TRT, due to its large lever arm, is seen to significantly improve the resolution, especially at large momentum. Comparing tracks reconstructed with the full Inner Detector using perfectly aligned Monte-Carlo and tracks reconstructed in data, one can see that at low momentum the agreement is very good. At low momentum multiple scattering dominates and, as expected, the alignment is not as crucial for these tracks. However, for high momentum tracks there is less multiple scattering and the impact of the remaining residual misalignment can be seen.
## VII Prospects for Collision Data Taking
### VII.1 Expectations for First Collision Data
As was mentioned above, the residual distributions from the cosmic ray data can be reproduced in Monte-Carlo by introducing random residual misalignments of the order of 20 μm (see Figure 6). The random residual misalignment set used in simulation was referred to as "Day 1", being an estimate of what is achievable on the first day of collisions given a starting alignment obtained solely from the cosmic ray data. While it is expected that we will rapidly improve the alignment with collision data, it is interesting to investigate the effects that this "Day 1" alignment has on physics observables. Therefore some physics channels were simulated with this misalignment set. In addition, a set was produced with a smaller amount of misalignment as an estimate of the alignment that might be achievable after a few months of running on collision data; it is labeled as the "Day 100" alignment. It should be cautioned, however, that there are many uncertainties in what will actually be achieved on this timescale. Table 1 shows the size of misalignment included in these two sets.
The impact on the mass resolution, for events reconstructed using only the Inner Detector with the "Day 1" and "Day 100" alignment sets, is shown in Figure 10 [10]. As can be seen in the figure, the "Day 1" scenario shows a large degradation, with a contribution to the mass resolution (added in quadrature) of 2.2 GeV. For the "Day 100" set there is still some degradation; however, it is much reduced, with a contribution from misalignment of about 1 GeV.
The impact on observables of interest to physics was also investigated [10] by studying the effect on the mass resolution of the channels shown in Figure 11. While there is some degradation in the "Day 1" sample, the dominance of lower-momentum tracks in these samples results in a larger contribution from multiple scattering and a smaller one from misalignment. The effects of misalignment from the "Day 100" set are reduced even further, to an almost insignificant amount.
A study of the impact of misalignment on b-tagging was made with alignment sets different from those described above. The results of this study [11] are summarized in Figure 12, which shows the rejection of light jets (jets originating from gluons, up, down and strange quarks) for a fixed b-tagging efficiency. The light jet rejection is defined as the inverse of the efficiency of a light jet being tagged as a b-jet. Different b-tagging algorithms were used. The IP2D algorithm is based on the transverse impact parameter significance (the impact parameter divided by its error). The IP3D algorithm combines the transverse impact parameter significance with the longitudinal impact parameter significance. The SV1 algorithm uses properties of secondary vertices such as the vertex mass and the number of tracks used to make the vertex. The IP3D-SV1 tagger combines the information from the IP3D and SV1 taggers. In this study four alignment sets were considered. For the first set, Random10, modules were displaced and rotated randomly with a distribution with a width of about 10 μm. Layers and disks and the whole pixel detector were also displaced by a similar amount. Only misalignment in the pixel subsystem was considered. Taking into account the misplacements at several levels, the overall misalignment is considered to be comparable to the "Day 1" set above. The second set, Random5, introduces misalignment about half that size. The third set is the case of a perfectly aligned detector. The fourth set, Aligned, was produced using a simulation where large misalignments, typical of what is expected when building the detector, were introduced and the actual ATLAS alignment algorithms were then run with a mixture of collision-like Monte-Carlo and cosmic ray Monte-Carlo. For the Random10 set the degradation for the IP3D-SV1 algorithm is significant (about 50%), but one is still able to achieve reasonable rejection rates. For the more realistic alignment case (Aligned) the degradation is only marginal, about 15%. The impact parameter based taggers appear to be more sensitive to misalignment than the secondary vertex based taggers.
### VII.2 Systematic Distortions
While random misalignments cause some degradation to the physics performance, it is also likely there will be systematic distortions which could be difficult for the alignment to completely eliminate. Any deformation which still allows helical tracks to be reconstructed will leave the residuals unchanged; the minimization of the χ² is not sensitive to such deformations. These deformations are also referred to as weak modes. One such deformation is a twist of the detector, where the detector is systematically rotated as a function of the global z coordinate. Another is a curl deformation, where each subsequent layer is rotated progressively by larger amounts as a function of the radius. This particular deformation creates a momentum bias: tracks with one charge get a larger momentum and tracks with the opposite charge get a smaller momentum. Such a deformation was introduced into an alignment set and the effect on the reconstructed mass in simulated events was investigated. The size of the curl introduced was such that the outermost silicon layer was shifted by around 300 μm. This set is labeled as "Curl Large". In addition, the ATLAS alignment algorithms were run starting with a geometry with this curl misalignment. It was seen that the alignment procedures were able to remove much of the deformation, indicating that the deformation was not a perfect weak mode. The alignment set obtained after running the alignment is labeled as "Curl Small". The "Curl Large" set is seen to both degrade the resolution and introduce a bias in the mass. The "Curl Small" set shows improvement but still some degradation with respect to the perfect alignment; the mass bias, however, is mostly removed. When running the alignment to produce the "Curl Small" set, only collision-like Monte-Carlo was used. The addition of events from cosmic rays is expected to improve the alignment further.
## VIII Conclusion
The ATLAS Inner Detector has successfully taken a large amount of cosmic ray data, which has allowed a first full Inner Detector alignment to be achieved using the standard ATLAS alignment algorithms. The residuals obtained after alignment show much reduced widths and are well centered on zero. The resulting alignment is equivalent to less than a 20 μm residual misalignment in the barrel. The results from comparing track parameters of upper and lower segments also show good performance. The alignment is already at a level where it is possible to analyse low-momentum physics channels, such as those discussed above, and the alignment is expected to rapidly improve with collision data. The tackling of systematic deformations is still expected to be a challenge, although combining collision data with cosmic ray data is expected to help.
## References
• (1) The ATLAS Collaboration, G. Aad, et al., The ATLAS Experiment at the CERN Large Hadron Collider, 2008 JINST 3 S08003.
• (2) P. Brückman, A. Hicheur, S. Haywood, Global approach to the Alignment of the ATLAS Silicon Tracking Detectors, ATL-INDET-PUB-2005-002 (2005).
• (3) E. Anderson, et al., LAPACK Users’ Guide, Philadelphia PA, Society for Industrial and Applied Mathematics, 1999.
• (4) L.S. Blackford, et al., ScaLAPACK Users’ Guide, Philadelphia PA, Society for Industrial and Applied Mathematics, 1997.
• (5) I.S. Duff and J.K. Reid, Rep. AERE R10533, HMSO, London, 1982.
• (6) A. Bocci, W. Hulsbergen, TRT Alignment for SR1 Cosmics and Beyond, ATL-INDET-PUB-2007-009 (2007).
• (7) R. Härtel, diploma thesis, TU München, 2005.
• (8) T. Göttfert, diploma thesis, Universität Würzburg, 2006.
• (9) F. Heinemann, Track Based Alignment of the ATLAS Silicon Detectors with the Robust Alignment Algorithm, ATL-INDET-PUB-2007-011 (2007).
• (10) The ATLAS Collaboration, The Impact of Inner Detector Misalignments on Selected Physics, ATL-PHYS-PUB-2009-080 (2009).
• (11) The ATLAS Collaboration, Expected Performance of the ATLAS Detector, Trigger and Physics, CERN-OPEN-2008-020, Geneva, 2008.
http://www.algebra.com/tutors/your-answers.mpl?userid=solver91311&from=14610 | Algebra -> Tutoring on algebra.com -> See tutors' answers!
# Recent problems solved by 'solver91311'
Linear-equations/176737: How do I use Gauss-Jordan elimination to solve this linear system and find all solutions? x + y - 2z = 5, 2x + 3y + 4z = 2. So far, I have placed each of the numbers into an augmented matrix. I do not know how to proceed from here.
Answer by solver91311: You can't solve this system for a unique solution: you have three variables but only two equations. Go find your other equation and we'll talk.
Linear-equations/176742: I am a bit overwhelmed by the complexity of this question. Please help me to understand and answer the question. If matrix A is transformed into matrix B by means of an elementary row operation, is there an elementary row operation that transforms B into A?
Answer by solver91311: There are three elementary row operations: 1. Switching the position of two rows; obviously, you can just switch them back again. 2. Multiplying each element of a row by a non-zero constant; just multiply that row by the reciprocal of that constant. 3. Replacing a row with the sum of that row and a multiple of another row; just subtract the multiple again. So, the answer to your question is yes.
Probability-and-statistics/176732: Can someone double check my work please. Two dice are rolled. Find the probability that the score on the dice is 8. 8 = (1,7)(6,2)(4,4)(3,5)(5,3)(4,4)(2,6)(7,1); 8/36 = 2/9
Answer by solver91311: First of all, how did you get (1, 7) or (7, 1)? DO NOT take that pair of dice to Vegas -- you might get your thumbs broken by some big casino security goon. Second of all, you counted (4,4) twice; (4,4) is a single outcome. So there are 5 ways to make an 8: (6,2), (5,3), (4,4), (3,5), (2,6), and the probability of rolling 8 is 5/36.
Probability-and-statistics/176736: If a coin is tossed three times, in how many different ways can the sequence of heads and tails appear?
Answer by solver91311: 2 ways for the first toss times 2 ways for the second toss times 2 ways for the third toss = 8 ways.
Probability-and-statistics/176735: Can someone double check my work please. Two dice are rolled. Find the odds that the score on the dice is either 6 or 12. 6 = (1,5)(2,4)(3,3)(3,3)(4,2)(5,1) = 6/36 = 1/6; 12 = (6,6)(6,6) = 2/36 = 1/18. Did I do this right?
Answer by solver91311: Not quite. There are 5 ways to make a 6: (1, 5), (2, 4), (3, 3), (4, 2), and (5, 1). There is 1 way to make a 12: (6, 6). There are a total of 36 different results: 6 ways one die can come up times 6 ways the other die can come up. Since you want the probability of rolling a 6 OR a 12, you have a total of 6 different successes (5 ways to make 6 plus 1 way to make 12) out of 36 possible outcomes, so the probability is 6/36 = 1/6.
Points-lines-and-rays/176733: Can someone double check my work please. Find the equation of the line passing through the two points (2,9) and (0,6). slope = (y2 - y1)/(x2 - x1); slope = (6 + 9)/(0 + 2); slope = 15/2. Did I do this right?
Answer by solver91311: Not quite. slope = (y2 - y1)/(x2 - x1), with (x1, y1) = (2, 9) and (x2, y2) = (0, 6). So slope = (6 - 9)/(0 - 2) = 3/2 (you had + here instead of -).
Equations/176729: Maggie makes a $9.25 purchase at a bookstore in Reno with a $20 bill. The store has no bills and gives her the change in quarters and fifty-cent pieces. There are 30 coins in all. How many of each kind are there? Please help me! Thank you!!
Answer by solver91311: Let q be the number of quarters and h the number of fifty-cent pieces. Given: q + h = 30. The value of each quarter is 25 cents, so the value of all her quarters is 25q cents; similarly the value of all her 50-cent pieces is 50h cents. The total value of her change is 20 - 9.25 = 10.75 dollars, which is the same as 1075 cents. The value of the quarters plus the value of the 50-cent pieces must equal the total value of the change, so 25q + 50h = 1075. Solve the first equation for q: q = 30 - h. Substitute this expression for q into the second equation: 25(30 - h) + 50h = 1075, that is 750 + 25h = 1075, so h = 13. So there were 13 half-dollars, and therefore 30 - 13 = 17 quarters. Check: 17(25) + 13(50) = 425 + 650 = 1075 cents = $10.75. Checks.
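The same two-equation system can be cross-checked numerically; this NumPy sketch simply repeats the elimination above.

```python
import numpy as np

# q + h = 30 coins; 25q + 50h = 1075 cents of change ($20 - $9.25).
A = np.array([[1.0, 1.0],
              [25.0, 50.0]])
rhs = np.array([30.0, 1075.0])
q, h = np.linalg.solve(A, rhs)
print(q, h)   # 17.0 quarters, 13.0 half-dollars
```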
Divisibility_and_Prime_Numbers/176730: Use two unit multipliers to convert 360 yards to inches.
Answer by solver91311: Use 3 ft/yard and 12 inches/foot: 360 yd × (3 ft / 1 yd) × (12 in / 1 ft) = 12,960 inches.
real-numbers/176727: A train traveled m miles at a speed of s mph. A bus following the same route traveled 5 mph slower. How much longer did the bus take than the train to make this trip? Write the answer as a single rational expression.
Answer by solver91311: Use distance = rate times time, d = rt, so t = d/r. Here d = m, the speed of the train is s, and the speed of the bus is s - 5. For the train: t = m/s. For the bus: t = m/(s - 5). To find out how much longer the bus took, subtract the train's time from the bus' time: m/(s - 5) - m/s = [ms - m(s - 5)]/[s(s - 5)] = 5m/[s(s - 5)].
Rational-functions/176712: How do you solve this problem? Given: f(x) = x + 2 and g(x) = x^2 + 2. Problem: f(g(1)). How do I do this?
Answer by solver91311: For x in g(x) substitute 1 and do the arithmetic. Take this value for g(1) and substitute it for x in f(x) and do the arithmetic. Alternatively, though I think it is a bit more work, you could substitute the expression for g(x) for the x in f(x), giving you an expression for (f◦g)(x); then you can substitute 1 for x in the derived expression. In general, given a function f defined by f(x) = [some expression in x], you evaluate f(a) by substituting a for x in the expression. It doesn't matter whether a is a number, some other variable, or another function definition or composite of functions. Remember that f or g or h or some other letter is a function, while f(x) or g(x) etc. is the value of the function at x; f(a), then, is the value of the function at a. Typically, things like f(x) are used to describe the general rule for the specific function, whereas f(a) is used to examine the value of a particular function at a specific point, namely a.
Numbers_Word_Problems/176685: 15 more than the square of a number is 8 times the number1 solutions Answer 131788 by solver91311(16897) on 2009-01-14 18:44:33 (Show Source): You can put this solution on YOUR website!Although you don't specifically say, I assume that you want to determine the number. The number: x. The square of the number: x^2. 15 more than the square of the number: x^2 + 15. 8 times the number: 8x. 15 more than the square of the number is (=) 8 times the number: x^2 + 15 = 8x. Put the quadratic into standard form and solve by factoring: x^2 - 8x + 15 = 0. Hint: -3 + (-5) = -8 and -3 X -5 = 15
Inequalities/176686: how to solve this x-1<31 solutions Answer 131786 by solver91311(16897) on 2009-01-14 18:39:13 (Show Source): You can put this solution on YOUR website!Add 1 to both sides.
Exponents-negative-and-fractional/176692: This question is from textbook Can you please help me. the directions are evaluate the exponential expression. Write as a fraction in the simplest form. 14. 4(4-2) the -2 this is an exponent negative 2 thank you1 solutions Answer 131784 by solver91311(16897) on 2009-01-14 18:38:01 (Show Source): You can put this solution on YOUR website!A negative exponent means the reciprocal: 4^(-2) = 1/4^2 = 1/16, so 4(4^(-2)) = 4/16 = 1/4.
Functions/176709: f(x) = 3x -8 for {-2, 3, 6}1 solutions Answer 131783 by solver91311(16897) on 2009-01-14 18:35:25 (Show Source): You can put this solution on YOUR website!Just substitute the given values for x and do the arithmetic.
Functions/176696: evaluate f(x)=3x - 8 for {-2,3,6}. Which set of numbers is the domain and which set of numbers is the range? 1 solutions Answer 131781 by solver91311(16897) on 2009-01-14 18:34:12 (Show Source): You can put this solution on YOUR website!Substitute each of the values in the set for x in the function and then do the arithmetic. The values in the given set are elements of the domain and the values in your set of results are elements in the range.
Graphs/176707: This question is from textbook Elementary and Intermediate Algebra Slope-Intercept Form Write an equation for each line. Use slope-intercept form if possible. 8. Points are (0, 3) and (2, -1) 10. Points are (0, -1) and (4, 1)1 solutions Answer 131779 by solver91311(16897) on 2009-01-14 18:31:45 (Show Source): You can put this solution on YOUR website!Given two points, (x_1, y_1) and (x_2, y_2), you can write the equation of the line using the two-point form of the equation of a line: y - y_1 = [(y_2 - y_1)/(x_2 - x_1)](x - x_1). Just insert the values from the two ordered pairs and do the arithmetic. Once you have simplified things, solve the equation for y (that is, manipulate the equation so that y is by itself on the left and everything else is on the right) to put the equation in slope-intercept form.
Complex_Numbers/176697: The imaginary number i is defined such that i^2 = –1. What does i + i^2 + i^3 + .... + i^23 equal? The answer is supposed to be -1, but I thought it should be -i according to another source. Is the answer key wrong?1 solutions Answer 131777 by solver91311(16897) on 2009-01-14 18:28:03 (Show Source): You can put this solution on YOUR website!i^1 = i, i^2 = -1, i^3 = -i, i^4 = 1, and so on repeating the sequence every 4 increments of the exponent. The sum of the first 4 is: i + (-1) + (-i) + 1 = 0, so the sum of each subsequent 4 must also be zero. Use integer division: 23 ÷ 4 is 5 with a remainder of 3. So in your string of 23 terms there are 5 sets of 4 terms each of which sum to zero, so it is only the last three terms that make any difference: i^21 + i^22 + i^23 = i + (-1) + (-i) = -1. And that is your sum. It is your 'other source' that is incorrect.
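The same arithmetic can be checked mechanically (an illustrative sketch only); building each power of i by repeated multiplication keeps every term exact:

```python
# Sum i + i^2 + ... + i^23 using Python's complex type (1j is i).
term, total = 1 + 0j, 0 + 0j
for _ in range(23):
    term *= 1j          # next power of i
    total += term
print(total)            # (-1+0j)
```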
Functions/176699: f(x) = -x + 3 for -4 and 41 solutions Answer 131772 by solver91311(16897) on 2009-01-14 18:17:09 (Show Source): You can put this solution on YOUR website!Just substitute the given values for x and do the arithmetic.
Functions/176701: g(x) = -2x + 5 for 0 and 31 solutions Answer 131771 by solver91311(16897) on 2009-01-14 18:16:45 (Show Source): You can put this solution on YOUR website!Just substitute the given values for x and do the arithmetic.
Functions/176702: f(x) = -3x - 1 for -1 and 71 solutions Answer 131770 by solver91311(16897) on 2009-01-14 18:16:21 (Show Source): You can put this solution on YOUR website!Just substitute the given values for x and do the arithmetic.
Functions/176704: h(x) = 1.5x - 2 for -2 and 41 solutions Answer 131768 by solver91311(16897) on 2009-01-14 18:15:52 (Show Source): You can put this solution on YOUR website!Just substitute the given values for x and do the arithmetic.
Functions/176705: f(x) = 6 - x for 0 and 6 1 solutions Answer 131767 by solver91311(16897) on 2009-01-14 18:15:19 (Show Source): You can put this solution on YOUR website!Just substitute the given values for x and do the arithmetic.
Numbers_Word_Problems/176693: Farmer Billy Bob has 210m of fencing to enclose his pig pen on all 4 sides. What dimensions should his pig pen be to enclose the largest possible area?1 solutions Answer 131766 by solver91311(16897) on 2009-01-14 18:14:12 (Show Source): You can put this solution on YOUR website!The area of a rectangle is given by the length times the width, and the perimeter is given by 2 times the length plus 2 times the width: A = LW and P = 2L + 2W. We want to maximize A subject to the constraint that 2L + 2W = 210. 2L + 2W = 210 → L + W = 105, so solve for either L or W: L = 105 - W. Now substitute this expression for L into the Area function: A = (105 - W)W → A = -W^2 + 105W. Now we have a second degree polynomial with a negative lead coefficient, so the graph is a concave down parabola. The vertex of a concave down parabola is a maximum, so we need to find the value of the W coordinate for the vertex. The vertex of a parabola expressed in ax^2 + bx + c has an x-coordinate of -b/(2a). In this problem we have a = -1 and b = 105, so the W-coordinate is at -105/(2(-1)) = 52.5. Hence, for the maximum area, the width must be 52.5 meters. So if W = 52.5 then L = 105 - 52.5 = 52.5, and the pig pen needs to be a square 52.5 meters on each side. As you may expect, it is true in general that a rectangle with maximum area for a given perimeter is a square. For extra credit, prove it.
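A brute-force sweep over candidate widths (an illustrative check, with an arbitrary 0.1 m step) lands on the same vertex:

```python
# Area W*(105 - W) for widths from 0 to 105 m in 0.1 m steps; report the best.
best = max((w / 10 * (105 - w / 10), w / 10) for w in range(0, 1051))
print(best)   # (2756.25, 52.5) -> maximum area at W = 52.5 m
```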
Equations/176690: Please help me solve this equation: Thanks 1 solutions Answer 131759 by solver91311(16897) on 2009-01-14 17:48:18 (Show Source): You can put this solution on YOUR website!Multiply both sides by -9
Numbers_Word_Problems/176694: Thank you for helping me but can you please go faster because i need now ok thanks 1 solutions Answer 131758 by solver91311(16897) on 2009-01-14 17:39:45 (Show Source): You can put this solution on YOUR website!Exactly what is it you need? If you don't ask a question, you won't get an answer.
Numbers_Word_Problems/176691: The area of a triangle is 126cm^2, and the altitude is 3 less than twice the base. Find the length of the base. 1 solutions Answer 131757 by solver91311(16897) on 2009-01-14 17:38:32 (Show Source): You can put this solution on YOUR website!The area of a triangle is given by A = (1/2)bh. Let the measure of the base be b, then we are given that the altitude is 3 less than twice the base, so h = 2b - 3, and we are given that the area of this particular triangle is 126 square centimeters. Therefore: (1/2)b(2b - 3) = 126, which in standard form is 2b^2 - 3b - 252 = 0. Solve the quadratic by ordinary means. You can factor, complete the square, or use the quadratic formula -- all three methods work. You will get two roots as you should expect, except that one of them will be < 0. This absurd result for the measure of a side of a triangle is an extraneous root caused by squaring the variable in the process of solving the problem. Exclude the negative root. The positive root is your answer.
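For reference, the two roots can be pulled out numerically (an illustrative sketch, not the tutor's method):

```python
import numpy as np

# Roots of 2b^2 - 3b - 252 = 0; only the positive one is a valid base length.
roots = np.roots([2, -3, -252])
print(sorted(roots))                               # [-10.5, 12.0]
b = max(roots)
print(b, 2 * b - 3, 0.5 * b * (2 * b - 3))         # 12.0 21.0 126.0
```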
Rational-functions/176666: This question is from textbook McDougal Littell Algebra 2 How do you multiply and divide rational expressions?1 solutions Answer 131756 by solver91311(16897) on 2009-01-14 17:15:40 (Show Source): You can put this solution on YOUR website!Just like any other fractions: (a/b)(c/d) = (ac)/(bd), regardless of whether a, b, c, and/or d are numbers or polynomial expressions. Of course, the result may need to be reduced to lowest terms. Likewise (a/b) ÷ (c/d) = (a/b)(d/c) = (ad)/(bc). Again, the result may need to be reduced to lowest terms. Lastly, you must exclude values of a variable that would cause any denominator to equal zero. For example: an expression with x - 2 in a denominator is undefined for x = 2.
real-numbers/176681: h=16t+251 solutions Answer 131754 by solver91311(16897) on 2009-01-14 17:05:44 (Show Source): You can put this solution on YOUR website!Exactly what is it you would like to be able to do with this rather handsome little 2-variable linear equation?
Polynomials-and-rational-expressions/176682: The length of a rectangular swimming pool is 2x-1m + the width is x+2 meters. Write a polynomial A(z represents the area) Could somebody please help me do this problem? I have tried and do not know how to do it. Thanks in advance. Judy1 solutions Answer 131753 by solver91311(16897) on 2009-01-14 17:04:36 (Show Source): You can put this solution on YOUR website!The area of a rectangle is the length multiplied times the width. So just say: A(x) = (2x - 1)(x + 2). Then use FOIL to multiply the binomials to obtain the desired polynomial expression.
Systems-of-equations/176631: Ramon mixes 12 liters of 8% acid solution with a 20% acid solution, which results in a 60% acid solution. Find the number of liters of 20% acid solution in the new mixture.1 solutions Answer 131703 by solver91311(16897) on 2009-01-14 12:28:20 (Show Source): You can put this solution on YOUR website!The stated result is impossible. The mixture solution must be a smaller concentration than at least one of the constituents. Now if you meant "...12 liters of 80% acid..." you would have a solvable problem. Write back with the correct problem statement.
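A few throwaway lines of arithmetic make the point concrete, and show that if the intended figure really was 80% rather than 8% (an assumption, as the tutor suggests), the stated 60% result is reachable:

```python
# Mixing an 8% solution with a 20% solution always gives something between 8% and 20%.
mixes = [(0.08 * 12 + 0.20 * x) / (12 + x) for x in range(1, 1001)]
print(max(mixes))                                   # creeps toward 0.20, never 0.60

# With 80% instead of 8%, a 60% mixture is possible:
for x in range(1, 101):
    if abs((0.80 * 12 + 0.20 * x) / (12 + x) - 0.60) < 1e-9:
        print(x)                                    # 6 liters of the 20% solution
```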
Points-lines-and-rays/176638: This question is from textbook College Mathematics I How do I Find the distance between the pair of points (-1,-4) and (-5, -7)1 solutions Answer 131701 by solver91311(16897) on 2009-01-14 12:23:11 (Show Source): You can put this solution on YOUR website!Use the distance formula: d = sqrt((x_2 - x_1)^2 + (y_2 - y_1)^2). Just plug in the coordinate values from your two points and do the arithmetic. Watch your signs carefully. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8604317903518677, "perplexity": 4397.4849990647945}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704658856/warc/CC-MAIN-20130516114418-00077-ip-10-60-113-184.ec2.internal.warc.gz"}
https://briankoberlein.com/post/holmes-mystery/ | # A Holmes Mystery
24 October 2013
In October 2007, comet Holmes experienced rapid brightening. By itself this isn’t a huge deal. As comets move closer to the Sun, volatiles (gases and such) can warm and expand, and these can vent out of the comet rapidly, causing the comet to brighten suddenly. Sometimes this out-gassing can cause a comet to fragment, but often it is simply evidence of an active comet.
Comet Holmes was discovered in 1892 by Edwin Holmes (hence the name), and orbits the Sun at a distance between Mars and Jupiter (though tilted a bit from the planetary plane). It is normally rather dim, with a typical apparent magnitude of 17. Holmes wouldn’t have noticed it if it weren’t for the fact that it had brightened to magnitude 5.
In 2007 Comet Holmes gained notoriety when it suddenly brightened from magnitude 17 to magnitude 3 in about two days. This made it visible to the naked eye under clear dark skies. The coma (the cloud-like feature surrounding the icy/rocky nucleus) expanded by a factor of four by the end of October, and this once faint object began to look like a traditional comet. You can see this change over time in a composite series of photos showing Comet Holmes from October 2007 to March 2008.
The exact cause of this particular brightening is not entirely clear. The brightening was unusual in just how intense it was. It was so bright that some suggested the comet was struck by a meteoroid, though that would be unlikely. A paper in Astronomy and Astrophysics calculated that the comet lost about 3% of its mass in the event.1 Since the comet didn't fragment during such a tremendous release of material (Comet Holmes remains intact to this day), the authors suggested a covering of dust coated the comet, under which was a layer of ice. When the ice layer sublimed (went directly from ice to gas), the resulting expansion of gas and dust caused the rapid brightening.
Spectral analysis of Comet Holmes during the outburst, as presented in another paper, found not only water but also ethane, methanol and acetylene, which are all rather common organic molecules in space.2 The distribution of the various volatile materials is consistent with either the dust-layer process or outgassing from a pocket. So we aren't entirely sure what caused the sudden brightening.
1. Altenhoff, W. J., et al. “Why did Comet 17P/Holmes burst out?-Nucleus splitting or delayed sublimation?.” Astronomy & Astrophysics 495.3 (2009): 975-978. ↩︎
2. Russo, N. Dello, et al. “The volatile composition of Comet 17P/Holmes after its extraordinary outburst.” The Astrophysical Journal 680.1 (2008): 793. ↩︎ | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.860297679901123, "perplexity": 2853.1531974418003}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107876500.43/warc/CC-MAIN-20201021122208-20201021152208-00529.warc.gz"} |
https://mcapitalglobal.com/lw24besx/e9a06c-steely-dan----glamour-profession | Simulated annealing (SA) is a probabilistic technique for approximating the global optimum of a given function, introduced for combinatorial problems by Kirkpatrick, S.; Gelatt, C. D.; and Vecchi, M. P., "Optimization by Simulated Annealing," Science 220, 671-680, 1983. It is used to find approximate solutions to optimization problems whose high complexity rules out exhaustive search and exact mathematical optimization methods. The name comes from annealing in metallurgy: a material is heated and then cooled in a controlled way, and as the metal cools its new structure becomes fixed, so it retains its newly obtained properties.
At each step the algorithm considers a randomly chosen neighbour s* of the current state s (the call neighbour(s) generates such a state, and random(0, 1) returns a value in the range [0, 1], uniformly at random) and decides probabilistically whether to move to it. Moves that improve the objective are always accepted. Moves that worsen it by an amount ΔE are accepted with probability exp(-ΔE/T), in analogy with thermodynamics, where the probability of an increase in energy of magnitude δE at temperature t is exp(-δE/kt). The temperature T is lowered according to an annealing schedule from an initial positive value towards zero: while T is large, many "bad" trades are accepted and a large part of the solution space is accessed; as T tends to zero, uphill moves become increasingly unlikely and the search settles into a near-optimal state. In the travelling salesman problem, for example, a neighbouring tour can be produced by reversing the order of a set of consecutive cities (for n = 20 cities there are n(n - 1)/2 = 190 pairs of cities), and trades that do not lower the mileage are occasionally accepted so that the solver can "explore" more of the space of possible tours. Restarting from the best solution found so far (setting s and e back to sbest and ebest) is a common refinement.
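A minimal sketch of that accept-or-reject loop follows; the geometric cooling rate, step count and toy objective are arbitrary illustrative choices, not prescriptions:

```python
import math
import random

def simulated_annealing(cost, neighbour, state, t0=1.0, cooling=0.995, steps=20000):
    """Accept improving moves always; accept worsening moves with probability exp(-dE/T)."""
    best, t = state, t0
    for _ in range(steps):
        candidate = neighbour(state)
        d_e = cost(candidate) - cost(state)
        if d_e <= 0 or random.random() < math.exp(-d_e / t):
            state = candidate
            if cost(state) < cost(best):
                best = state
        t *= cooling                      # annealing schedule: lower the temperature
    return best

# Toy use: a one-dimensional function with many local minima.
f = lambda x: x * x + 10 * math.sin(5 * x)
print(simulated_annealing(f, lambda x: x + random.uniform(-0.5, 0.5), 5.0))
```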
The annealing parameters (initial temperature, cooling rate, number of steps) cannot be determined beforehand and should be empirically adjusted for each problem, since the relaxation time depends on the problem instance. Compared with other global optimization techniques such as genetic algorithms, tabu search and neural networks, SA's main advantages are the relative ease of implementation and the ability to provide reasonably good solutions for many combinatorial problems, which is why it has been applied to tasks ranging from circuit placement (where overlapping circuits are merely penalized in the score function) to the design of electromagnetic devices, multi-objective (Pareto) optimization and portfolio optimization, where capital is allocated between assets to maximize risk-adjusted return. A deterministic variant of the acceptance rule that eliminates exponentiation and random number generation was later popularized as "threshold accepting" by Dueck, G. and Scheuer, T., "Threshold Accepting: A General Purpose Optimization Algorithm Appearing Superior to Simulated Annealing." | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8892021775245667, "perplexity": 1521.817422838768}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300849.28/warc/CC-MAIN-20220118122602-20220118152602-00321.warc.gz"}
https://en.wikibooks.org/wiki/LaTeX/Labels_and_Cross-referencing | # LaTeX/Labels and Cross-referencing
## Introduction
In LaTeX you can easily reference almost anything that is numbered (sections, figures, formulas), and LaTeX will take care of numbering, updating it whenever necessary. The commands to be used do not depend on what you are referencing, and they are:
\label{marker}
you give the object you want to reference a marker, you can see it like a name.
\ref{marker}
you can reference the object you have marked before. This prints the number that was assigned to the object.
\pageref{marker}
It will print the number of the page where the object is.
LaTeX will calculate the right numbering for the objects in the document; the marker you have used to label the object will not be shown anywhere in the document. Then LaTeX will replace the string "\ref{marker}" with the right number that was assigned to the object. If you reference a marker that does not exist, the compilation of the document will be successful but LaTeX will return a warning:
LaTeX Warning: There were undefined references.
and it will replace "\ref{unknown-marker}" with "??" (so it will be easy to find in the document).
As you may have noticed reading how it works, it is a two-step process: first the compiler has to store the labels with the right number to be used for referencing, then it has to replace the \ref with the right number. That is why, when you use references, you have to compile your document twice to see the proper output. If you compile it only once, LaTeX will use the older information it collected in previous compilations (that might be outdated), but the compiler will inform you printing on the screen at the end of the compilation:
LaTeX Warning: Label(s) may have changed. Rerun to get cross-references right.
Using the command \pageref{} you can help the reader to find the referenced object by providing also the page number where it can be found. You could write something like:
See figure~\ref{fig:test} on page~\pageref{fig:test}.
Since you can use exactly the same commands to reference almost anything, you might get a bit confused after you have introduced a lot of references. It is common practice among LaTeX users to add a few letters to the label to describe what you are referencing. Some packages, such as fancyref, rely on this meta information. Here is an example:
ch: chapter
sec: section
subsec: subsection
fig: figure
tab: table
eq: equation
lst: code listing
itm: enumerated list item
alg: algorithm
app: appendix subsection
Following this convention, the label of a figure will look like \label{fig:my_figure}, etc. You are not obligated to use these prefixes. You can use any string as an argument of \label{...}, but these prefixes become increasingly useful as your document grows in size.
Another suggestion: try to avoid using numbers within labels. You are better off describing what the object is about. This way, if you change the order of the objects, you will not have to rename all your labels and their references.
If you want to be able to see the markers you are using in the output document as well, you can use the showkeys package; this can be very useful while developing your document. For more information see the Packages section.
## Examples
Here are some practical examples, but you will notice that they are all the same because they all use the same commands.
### Sections
\section{Greetings}
\label{sec:greetings}
Hello!
\section{Referencing}
I greeted in section~\ref{sec:greetings}.
You could place the label anywhere in the section; however, in order to avoid confusion, it is better to place it immediately after the beginning of the section. Note how the marker starts with sec:, as suggested before. The label is then referenced in a different section. The tilde (~) indicates a non-breaking space.
### Pictures
You can reference a picture by inserting it in the figure floating environment.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{gull}
\caption{Close-up of a gull}
\label{fig:gull}
\end{figure}
Figure \ref{fig:gull} shows a photograph of a gull.
When a label is declared within a float environment, the \ref{...} will return the respective fig/table number, but it must occur after the caption. When declared outside, it will give the section number. To be completely safe, the label for any picture or table can go within the \caption{} command, as in
\caption{Close-up of a gull\label{fig:gull}}
See the Floats, Figures and Captions section for more about the figure and related environments.
#### Fixing wrong labels
The command \label must appear after (or inside) \caption. Otherwise, it will pick up the current section or list number instead of what you intended.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{gull}
\caption{Close-up of a gull} \label{fig:gull}
\end{figure}
#### Issues with links to tables and figures handled by hyperref
In case you use the package hyperref to create a PDF, the links to tables or figures will point to the caption of the table or figure, which is always below the table or figure itself[1]. Therefore the table or figure will not be visible, if it is above the pointer and one has to scroll up in order to see it. If you want the link point to the top of the image you can give the option hypcap to the caption package:
\usepackage{caption} % hypcap is true by default so [hypcap=true] is optional in \usepackage[hypcap=true]{caption}
### Formulae
Here is an example showing how to reference formulae:
\begin{equation} \label{eq:solve}
x^2 - 5 x + 6 = 0
\end{equation}
\begin{equation}
x_1 = \frac{5 + \sqrt{25 - 4 \times 6}}{2} = 3
\end{equation}
\begin{equation}
x_2 = \frac{5 - \sqrt{25 - 4 \times 6}}{2} = 2
\end{equation}
and so we have solved equation~\ref{eq:solve}
As you can see, the label is placed soon after the beginning of the math mode. In order to reference a formula, you have to use an environment that adds numbers. Most of the times you will be using the equation environment; that is the best choice for one-line formulae, whether you are using amsmath or not. Note also the eq: prefix in the label.
#### eqref
The amsmath package adds a new command for referencing formulae; it is \eqref{}. It works exactly like \ref{}, but it adds parentheses so that, instead of printing a plain number as 5, it will print (5). This can be useful to help the reader distinguish between formulae and other things, without the need to repeat the word "formula" before any reference. Its output can be changed as desired; for more information see the amsmath documentation.
#### tag
The \tag{eqnno} command is used to manually set equation numbers where eqnno is the arbitrary text string you want to appear in the document. It is normally better to use labels, but sometimes hard-coded equation numbers might offer a useful work-around. This may for instance be useful if you want to repeat an equation that is used before, e.g. \tag{\ref{eqn:before}}.
#### numberwithin
The amsmath package adds the \numberwithin{countera}{counterb} command which replaces the simple countera by a more sophisticated counterb.countera. For example \numberwithin{equation}{section} in the preamble will prepend the section number to all equation numbers.
#### cases
The cases package adds the \numcases and the \subnumcases commands, which produce multi-case equations with a separate equation number and a separate equation number plus a letter, respectively, for each case.
## The varioref package
The varioref package introduces a new command called \vref{}. This command is used exactly like the basic \ref, but it has a different output according to the context. If the object to be referenced is in the same page, it works just like \ref; if the object is far away it will print something like "5 on page 25", i.e. it adds the page number automatically. If the object is close, it can use more refined sentences like "on the next page" or "on the facing page" automatically, according to the context and the document class.
This command has to be used very carefully. It outputs more than one word, so it may happen its output falls on two different pages. In this case, the algorithm can get confused and cause a loop. Let's make an example. You label an object on page 23 and the \vref output happens to stay between page 23 and 24. If it were on page 23, it would print like the basic ref, if it were on page 24, it would print "on the previous page", but it is on both, and this may cause some strange errors at compiling time that are very hard to be fixed. You could think that this happens very rarely; unfortunately, if you write a long document it is not uncommon to have hundreds of references, so situations like these are likely to happen. One way to avoid problems during development is to use the standard ref all the time, and convert it to vref when the document is close to its final version, and then making adjustments to fix possible problems.
## The hyperref package
### autoref
The hyperref package introduces another useful command; \autoref{}. This command creates a reference with additional text corresponding to the target's type, all of which will be a hyperlink. For example, the command \autoref{sec:intro} would create a hyperlink to the \label{sec:intro} command, wherever it is. Assuming that this label is pointing to a section, the hyperlink would contain the text "section 3.4", or similar (the full list of default names can be found here). Note that, while there's an \autoref* command that produces an unlinked prefix (useful if the label is on the same page as the reference), no alternative \Autoref command is defined to produce capitalized versions (useful, for instance, when starting sentences); but since the capitalization of autoref names was chosen by the package author, you can customize the prefixed text by redefining \typeautorefname to the prefix you want, as in:
\def\sectionautorefname{Section}
This renaming trick can, of course, be used for other purposes as well.
• If you would like a hyperlink reference, but do not want the predefined text that \autoref{} provides, you can do this with a command such as \hyperref[sec:intro]{Appendix~\ref*{sec:intro}}. Note that you can disable the creation of hyperlinks in hyperref, and just use these commands for automatic text.
• Keep in mind that the \label must be placed inside an environment with a counter, such as a table or a figure. Otherwise, not only the number will refer to the current section, as mentioned above, but the name will refer to the previous environment with a counter. For example, if you put a label after closing a figure, the label will still say "figure n", on which n is the current section number.
### nameref
The hyperref package also automatically includes the nameref package, and a similarly named command. It is similar to \autoref{}, but inserts text corresponding to the section name, for example.
Input:
\section{MyFirstSection} \label{sec:marker}
\section{MySecondSection}
In section~\nameref{sec:marker} we defined...
Output:
In section MyFirstSection we defined...
### Anchor manual positioning
When you define a \label outside a figure, a table, or other floating objects, the label points to the current section. In some cases, this behavior is not what you'd like and you'd prefer the generated link to point to the line where the \label is defined. This can be achieved with the command \phantomsection as in this example:
%The link location will be placed on the line below.
\phantomsection
\label{the_label}
## The cleveref package
The cleveref package introduces the new command \cref{} which includes the type of referenced object like \autoref{} does. The alternate \labelcref{} command works more like standard \ref{}. References to pages are handled by the \cpageref{} command.
The \crefrange{}{} and \cpagerefrange{} commands expect a start and end label in either order and provide a natural language (babel enabled) range. If labels are enumerated as a comma-separated list with the usual \cref{} command, it will sort them and group into ranges automatically.
The format can be specified in the preamble.
## Interpackage interactions for varioref, hyperref, and cleveref
Because varioref, hyperref, and cleveref redefine the same commands, they can produce unexpected results when their \usepackage commands appear in the preamble in the wrong order. For example, using hyperref, varioref, then cleveref can cause \vref{} to fail as though the marker were undefined.[2] The following order generally seems to work:
1. varioref
2. hyperref
3. cleveref[2] | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8756587505340576, "perplexity": 1327.5903888246326}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00469-ip-10-171-10-70.ec2.internal.warc.gz"} |
https://mathstodon.xyz/users/Limitcycle/statuses/101010039872732877 | I love it when elegant mathematical formulas help you solve programming problems seamlessly.
Nth Fib number in constant time? Yes please!
@Limitcycle Uh, there is no algorithm which computes the nth fibonacci number in constant time. The best algorithm is $O(\log n)$ for smallnums.
@axiom not algorithm, formula.
@Limitcycle I don't understand what you mean by "constant time" then.
@axiom it is O(1) as the size of n is independent of the time it takes for the program to return the nth Fibonacci number. If n=1 or n=10,000, it will return the answer in the same amount of time.
@axiom the drawback however is that the formula uses the Golden ratio, which is infinitely long, so after some n the program fails in returning the correct nth Fib number due to the computers finite precision
@Limitcycle Hmm, I guess I believe that since it is possible to do float exponentiation in constant time...
@Limitcycle I suspect though that this method is much worse in both practical runtime and range of inputs than the simple matrix-squaring algorithm.
@axiom you would probably be right. The formula (and thus the program) is given by defining a function Fib(n) that returns $(\phi^n - (-\phi)^{-n})/\sqrt{5}$, where $\phi$ is the golden ratio, and the returned number is the nth Fib number
@Limitcycle one small improvement is to not bother with the $-\phi^n$ term since you can just round instead
@Limitcycle er $(-\phi)^{-n}$
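A small sketch of the rounded closed-form version being discussed (an illustration, not from the thread); it is exact only while double precision can represent phi**n / sqrt(5) accurately, roughly n up to ~70:

```python
import math

PHI = (1 + math.sqrt(5)) / 2

def fib(n):
    # Binet's formula with the small (-phi)^(-n) term dropped and rounding instead.
    return round(PHI ** n / math.sqrt(5))

print([fib(n) for n in range(1, 11)])   # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```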
Use $ and $ for inline LaTeX, and $ and $ for display mode. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.900760293006897, "perplexity": 755.9264842146406}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578517558.8/warc/CC-MAIN-20190418101243-20190418123243-00341.warc.gz"} |
https://www.physicsforums.com/threads/what-is-the-motivation-of-technicolor.361260/ | # What is the motivation of Technicolor?
1. Dec 7, 2009
### susyTC
Now I am reading about Technicolor. Unfortunately, I am not yet clear on what the exact motivation for Technicolor is. Please help me to clarify this. Thanks in advance.
2. Dec 8, 2009
Staff Emeritus
There is a QCD component to the W mass. Chiral symmetry breaking gives you a W mass term of perhaps 20 MeV.
The idea of technicolor is to replicate QCD at a high enough scale so that this techni-QCD is responsible for all of the W mass. If you just scale up QCD, you get a scale for this new interaction in the TeV ballpark.
3. Dec 8, 2009
### tom.stoer
But of course with this scaling to higher energies you are no longer able to explain why other particles stay light
Similar Discussions: What is the motivation of Technicolor? | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8716471791267395, "perplexity": 1591.0915222352417}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891546.92/warc/CC-MAIN-20180122232843-20180123012843-00304.warc.gz"} |
https://www.physicsforums.com/threads/expectation-value-of-composite-system.777323/ | # Expectation Value of Composite System
1. Oct 21, 2014
### Sci
1. The problem statement, all variables and given/known data
System of 2 particles with spin 1/2. Let
$$\vert + \rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \\ \vert - \rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$$
singlet state $$\vert \Phi \rangle = \frac{1}{\sqrt{2}} \Big( \vert + \rangle \otimes \vert - \rangle - \vert - \rangle \otimes \vert + \rangle \Big)$$
observables:
$$(2 \vec{a} \vec{S}^1) \otimes 1 \\ (2 \vec{a} \vec{S}^1) \otimes (2 \vec{b} \vec{S}^2)$$
for arbitrary a, b
2. Relevant equations
$$S_x^i= \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix}$$
and similar for S_y and S_z
3. The attempt at a solution
I have to calculate
$$\langle \Phi \vert(2 \vec{a} \vec{S}^1) \otimes 1 \vert \Phi \rangle$$
in the first task. Does the tensor product notation of Phi mean that particle A is in state + and particle B is in state -, or the other way round?
Does the 1 in the observable mean that the state of B is simply ignored in the measurement?
So does the first case simplify to
$$\langle + \vert a_1 \hat S_x +a_2 \hat S_y +a_3 \hat S_z \vert + \rangle - \langle - \vert a_1 \hat S_x +a_2 \hat S_y +a_3 \hat S_z \vert - \rangle$$
and the expectation value becomes zero, as expected for the singlet state
2. Oct 21, 2014
### Orodruin
Staff Emeritus
The state $|+\rangle \otimes |-\rangle$ is the state with the first particle in the + state and the second particle in the - state. The singlet state $|\Phi\rangle$ that you have is a linear combination of $|+\rangle \otimes |-\rangle$ and $|-\rangle \otimes |+\rangle$.
In general:
$$(\hat A\otimes \hat B)(|a\rangle \otimes |b\rangle) = (\hat A |a\rangle) \otimes (\hat B |b\rangle).$$
3. Oct 22, 2014
### Sci
Thank you!
I am still confused about the basic rules
$$(\langle + \vert \otimes \langle - \vert )( \vec{S} \vec{a} \otimes 1 ) (\vert + \rangle \otimes \vert - \rangle ) \\ -(\langle + \vert \otimes \langle - \vert )( \vec{S} \vec{a} \otimes 1 ) (\vert - \rangle \otimes \vert + \rangle )\\ -(\langle - \vert \otimes \langle + \vert )( \vec{S} \vec{a} \otimes 1 ) (\vert + \rangle \otimes \vert - \rangle )\\ +(\langle - \vert \otimes \langle + \vert )( \vec{S} \vec{a} \otimes 1 ) (\vert - \rangle \otimes \vert + \rangle )$$
$$(\langle + \vert \otimes \langle - \vert )( \vec{S} \vec{a}\vert + \rangle \otimes 1\vert - \rangle ) + others$$
can I do the following step;
$$(\langle + \vert \vec{S} \vec{a}\vert + \rangle \otimes \langle - \vert 1 \vert - \rangle) + others$$
the tensor product doesn't make sense here...
Last edited: Oct 22, 2014
4. Oct 22, 2014
### Orodruin
Staff Emeritus
When taking the inner product between two states, you simply get the product of the inner products on the separate spaces:
$$(\langle a'|\otimes\langle b'|)(|a\rangle\otimes |b\rangle) = \langle a'|a\rangle \langle b'|b\rangle$$
There is no need to keep the tensor product and as you say it makes little sense to do so.
Edit: that said, you are otherwise on the right track.
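The zero expectation value found above is easy to confirm numerically; the sketch below is purely illustrative, using the conventions of post #1 (|+> = (0, 1), |-> = (1, 0), S_x as given there) and assuming S_y and S_z are the corresponding Pauli matrices, with an arbitrary vector a:

```python
import numpy as np

plus = np.array([0.0, 1.0])
minus = np.array([1.0, 0.0])
phi = (np.kron(plus, minus) - np.kron(minus, plus)) / np.sqrt(2)   # singlet state

Sx = np.array([[0, 1], [1, 0]], dtype=complex)
Sy = np.array([[0, -1j], [1j, 0]])
Sz = np.array([[1, 0], [0, -1]], dtype=complex)

a = np.array([0.3, -1.2, 0.7])                       # arbitrary direction
op = np.kron(2 * (a[0] * Sx + a[1] * Sy + a[2] * Sz), np.eye(2))   # (2 a.S) ⊗ 1

print(np.vdot(phi, op @ phi))                        # 0 (up to rounding), as expected
```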
Similar Discussions: Expectation Value of Composite System | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9523970484733582, "perplexity": 1063.872134871148}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891980.75/warc/CC-MAIN-20180123151545-20180123171545-00658.warc.gz"} |
http://www.dmrsignsystems.com/crownbet-punters-rtg/25ef1d-number-of-spectral-lines-formula-derivation | The function that describes how the power of a signal is distributed over the various frequencies in the frequency domain is called the power spectral density (PSD); the PSD is the Fourier transform of the signal's autocorrelation (the similarity between observations).
The wavelengths of the spectral series of hydrogen and other single-electron atoms are calculated by the Rydberg formula, 1/λ = R Z^2 (1/n'^2 - 1/n^2), where R is the Rydberg constant (1.09737 × 10^7 m^-1), Z is the atomic number (Z = 1 for hydrogen), n' is the lower energy level and n is the upper energy level. The resulting lines range from the far infrared to the ultraviolet, and they are the consequence of electron transitions between the energy levels of Bohr's model of the atom, in which the angular momentum of the orbiting electron is quantized; the empirical groundwork leading up to the Balmer formula was laid between 1869 and 1882. For the Balmer lines, n' = 2 and n can be any whole number between 3 and infinity. The number of spectral lines produced when an electron drops from orbit n2 to orbit n1 is (n2 - n1)(n2 - n1 + 1)/2. Starting from n2 = 6, stopping at the visible-spectrum level n1 = 2 gives (6 - 2)(6 - 2 + 1)/2 = 10 possible lines, while letting the electron reach the ground state n1 = 1 gives (6 - 1)(6 - 1 + 1)/2 = 15 lines. Closely related is Moseley's law, an empirical law concerning the characteristic X-rays emitted by atoms, discovered and published by Henry Moseley in 1913-1914; until Moseley's work, the atomic number was merely an element's place in the periodic table and was not known to correspond to any measurable physical quantity.
Between observations ) Z = 1 number of spectral lines formula derivation Transform of Auto-Correlation ( Similarity between observations ) is wavelength! Narrow ($ \Delta \nu \ll \nu $) emission or absorption in... For the Balmer lines, \ ( n_1 =2\ ) and \ ( n_1 ). The form of a rectangular pulse is the lower energy level λ is the wavelength of light λ is Fourier. Whole number between 3 and infinity energy level λ is the lower energy level λ is wavelength. Of Bohr ’ s model and the wavelengths of the hydrogen: 1869 - 1882 to observe 15 lines of! 10 spectral lines range from the far infra-red to ultra-violet regions absorbed or emitted.... ) can be any whole number between 3 and infinity Fourier Transform of Auto-Correlation Similarity... N_1 =2\ ) and \ ( n_2\ ) can be any whole between! Of light$ ) emission or absorption features in the form of a rectangular pulse lines are narrow $... Latter came lines are narrow ($ \Delta \nu \ll \nu $) emission or absorption in. Energy difference between the various levels of Bohr ’ s model and wavelengths. \ ( n_2\ ) can be any whole number between 3 and infinity formula for latter came spectrum,,! 3 and infinity keywords: Angular momentum, hydrogen spectrum, orbit, quantization,,! Series of single-electron atoms like hydrogen have Z = 1 recapitulation of ’! Formula relates to the formula for latter came, quantization, radiation, wavelength relates to the difference. 1.09737 * 10 7 m-1 ) emission or absorption features in the spectra of gaseous sources the wavelengths of hydrogen... Bohr ’ s Nuclear model of the spectral series is calculated by Rydberg formula 10 spectral lines are (. Up to the formula: 1869 - 1882 Transform of Auto-Correlation ( Similarity between observations ) energy difference the!$ \Delta \nu \ll \nu $) emission or absorption features in the spectra of gaseous sources have Z 1... Various levels of Bohr ’ s derivation is given in this paper atoms like hydrogen have Z = 1 10! Level λ is the Fourier Transform of Auto-Correlation ( Similarity between observations ): 1869 - 1882 formula: -! Number between 3 and infinity radiation, wavelength for the Balmer lines, \ ( n_1 =2\ ) \... Lines are narrow ($ \Delta \nu \ll \nu $) emission or absorption features in the of! ) and \ ( n_2\ ) can be any whole number between 3 and infinity n ’ the... Level λ is the Fourier Transform of Auto-Correlation ( Similarity between observations ) * 10 7 )... ) emission or absorption features in the form of a rectangular pulse =2\ ) and \ ( n_2\ can... Introduction 1.1 Rutherford ’ s model and the wavelengths of the hydrogen form of a rectangular pulse m-1. Is given in this paper recapitulation of Bohr ’ s derivation is in. For the Balmer lines, \ ( n_2\ ) can be any whole number between and... Wavelength of light Similarity between observations ), hydrogen spectrum, orbit, quantization, radiation,.. ( 1.09737 * 10 7 m-1 ) the spectra of gaseous sources Bohr ’ s model the! Number between 3 and infinity \ ( n_1 =2\ ) and \ n_2\. Like hydrogen have Z = 1 R is the wavelength of light the wavelength of light Nuclear... The spectra of gaseous sources is calculated by Rydberg formula to the formula: 1869 -.... Given in this paper 15 lines given in this paper observe 15 lines for latter came recapitulation of ’!$ ) emission or absorption features in the form of a rectangular pulse leading up to the formula for came! It is in the spectra of gaseous sources Angular momentum, hydrogen spectrum, orbit, quantization, radiation wavelength. 
In the form of a rectangular pulse ( Similarity between observations ) various levels Bohr! Energy difference between the various levels of Bohr ’ s model and the wavelengths of absorbed or emitted.. Observe 15 lines ( $\Delta \nu \ll \nu$ ) emission or absorption in. Model and the wavelengths of absorbed or emitted photons psd is the Fourier Transform of Auto-Correlation number of spectral lines formula derivation. Transform of Auto-Correlation ( Similarity between observations ) ( n_2\ ) can be any whole number 3! Or emitted photons this paper of gaseous sources between the various levels of ’. \ ( n_2\ ) can be any whole number between 3 and infinity 1.1 ’... The formula for latter came I know how the formula for latter.. Of single-electron atoms like hydrogen have Z = 1, \ ( n_1 =2\ ) and \ n_1. Rydberg formula relates to the energy difference between the various levels of Bohr ’ derivation... Are narrow ( $\Delta \nu \ll \nu$ ) emission or absorption in. Gaseous sources n ’ is the Fourier Transform of Auto-Correlation ( Similarity between observations ) emitted photons $! S Nuclear model of the hydrogen transitions and hence 10 spectral lines are narrow$. And the wavelengths of the hydrogen ( n_1 =2\ ) and \ ( n_1 =2\ ) \. Is the Fourier Transform of Auto-Correlation ( Similarity between observations ) and (! Absorption features in the spectra of gaseous sources atoms like hydrogen have Z = 1 range from the far to! Note- I know how the formula for latter came 1.1 Rutherford ’ s derivation is in... But theoreticall one is supposed to observe 15 lines psd is the lower level! It is in the form of a rectangular pulse difference between the various levels of Bohr ’ s is... = 1 up to the formula: 1869 - 1882 where, R is the wavelength of light of transitions., R is the wavelength of light Bohr ’ s model and the of! Line line derivation is given in this paper 10 7 m-1 ) \ n_1. Keywords: Angular momentum, hydrogen spectrum, orbit, quantization, radiation, wavelength leading up to energy... | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9590180516242981, "perplexity": 1150.9666806705266}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988837.67/warc/CC-MAIN-20210508031423-20210508061423-00070.warc.gz"} |
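To make the counting and wavelength formulas above concrete, here is a small sketch. It uses the Rydberg constant quoted in the text; the choice of starting level n = 6 is only an assumption, picked because it reproduces the 10-versus-15 line counts mentioned in the question.

```python
# Sketch: counting hydrogen spectral lines and computing Balmer wavelengths
# from the Rydberg formula. R is the constant quoted above.

R = 1.09737e7  # Rydberg constant, m^-1

def num_lines(n2, n1):
    """Number of distinct lines for transitions from level n2 down to level n1."""
    return (n2 - n1) * (n2 - n1 + 1) // 2

def wavelength(n_upper, n_lower, Z=1):
    """Wavelength (metres) of the photon emitted in a transition n_upper -> n_lower."""
    inv_lambda = R * Z**2 * (1.0 / n_lower**2 - 1.0 / n_upper**2)
    return 1.0 / inv_lambda

print(num_lines(6, 2))   # 10 lines ending on n = 2 (the visible, Balmer-type lines)
print(num_lines(6, 1))   # 15 lines in total down to the ground state
for n in range(3, 7):    # first few Balmer wavelengths, in nanometres (H-alpha ~656 nm)
    print(n, round(wavelength(n, 2) * 1e9, 1))
```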
https://socratic.org/questions/how-do-you-set-up-an-integral-from-the-length-of-the-curve-y-sqrtx-1-x-2
# How do you set up an integral for the length of the curve y=sqrtx, 1<=x<=2?
Jul 27, 2017
arc length is:
$L = \int_{1}^{2} \sqrt{1 + \frac{1}{4 x}} \, \mathrm{d}x$
#### Explanation:
The arc length of a curve $y = f \left(x\right)$ over an interval $\left[a , b\right]$ is given by:
$L = \int_{a}^{b} \sqrt{1 + \left(\frac{\mathrm{d}y}{\mathrm{d}x}\right)^{2}} \, \mathrm{d}x$
So for the given function:
$y = \sqrt{x}$
Then differentiating wrt $x$ we get
$\frac{\mathrm{dy}}{\mathrm{dx}} = \frac{1}{2 \sqrt{x}}$
So then the arc length is:
$L = \int_{1}^{2} \sqrt{1 + \left(\frac{1}{2 \sqrt{x}}\right)^{2}} \, \mathrm{d}x$
$\;\;\; = \int_{1}^{2} \sqrt{1 + \frac{1}{4 x}} \, \mathrm{d}x$
NB:
If we evaluate this integral we get:
$L = 1.08306427952 \ldots$
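A quick numerical cross-check of that value, assuming NumPy and SciPy are available:

```python
# Sketch: numerically confirming the arc length of y = sqrt(x) on [1, 2].
import numpy as np
from scipy.integrate import quad

integrand = lambda x: np.sqrt(1.0 + 1.0 / (4.0 * x))
L, err = quad(integrand, 1.0, 2.0)
print(L)  # ~1.0830642795...
```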
https://www.physicsforums.com/threads/electric-potential-energy-among-multiple-charges.210601/ | # Electric Potential Energy Among Multiple Charges
1. Jan 23, 2008
### h4llw4x0r
1. The problem statement, all variables and given/known data
Four point charges, fixed in place, form a square with side length d. (See image)
The particle with charge q is now released and given a quick push; as a result, it acquires speed v. Eventually, this particle ends up at the center of the original square and is momentarily at rest. If the mass of this particle is m, what was its initial speed v?
Express your answer in terms of q, d, m, and appropriate constants. Use k instead of 1/4πe (where e = epsilon). The numeric coefficient should be a decimal with three significant figures.
2. Relevant equations
- Electric Potential Energy equation: U = k((Q1*Q2)/r)
- Relative Kinematics Equations
3. The attempt at a solution
I figured to solve the problem I would need to find the potential energy of the particle with charge q due to each of the other three charged particles and then use the principle of superposition. However, I didn't get the right solution when I worked it out... here's how I was trying to work out the potential energy equation:
U,initial = k(((Q1*Q2)/r1,2)+((Q1*Q3)/r1,3)+((Q1*Q4)/r1,4)+((Q2*Q3)/r2,3)+...)
Can someone help me with my logic here? Thanks =)
Last edited: Jan 23, 2008
2. Jan 23, 2008
### Shooting Star
The total PE of q1 due to the other three is what is required. You don't have to do (q2*q3) etc.
Total initial E = total final E. The initial E has got both PE and KE.
3. Feb 9, 2008
### babydimples
I got this question too when I was working on my assignment
I was wondering what this part of the question meant
What is the contribution U2q to the electric potential energy of the system, due to interactions involving the charge 2q?
4. Feb 10, 2008
### Shooting Star
The contribution of U2q would be the sum total of the PE due to interaction with the other charges, that is, ∑k(2q)Qi/dist(2q,Qi), where Qi denotes the other three charges.
5. Feb 10, 2008
### babydimples
I thought that was to calculate the total potential energy of the system. Now do you do the same for each charge and add it all together?
6. Feb 10, 2008
### babydimples
Never mind, i got the total potential energy. I just have one quick question.
What would be the kinetic energy of charge 2q at a very large distance from the other charges?
Would the kinetic energy of the charge 2q be the same as the potential energy of the same charge initially, due to conservation of energy?
7. Feb 10, 2008
### Shooting Star
For the total PE of the system, each pair is to be considered once only.
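A small sketch of that bookkeeping: the total potential energy is the sum of k·Qi·Qj/r over each unique pair, counted once. The square configuration at the bottom is only a hypothetical example (the thread's actual charge layout is in the missing image), so the number it prints is illustrative only.

```python
# Sketch: total electrostatic potential energy of a set of point charges,
# summing k*Qi*Qj/r_ij over each unique pair exactly once.
from itertools import combinations
import math

k = 8.99e9  # Coulomb constant, N m^2 / C^2

def total_potential_energy(charges, positions):
    """charges: list of Q_i in coulombs; positions: list of (x, y) in metres."""
    U = 0.0
    for i, j in combinations(range(len(charges)), 2):
        r = math.dist(positions[i], positions[j])
        U += k * charges[i] * charges[j] / r
    return U

# Hypothetical example only: charge q on three corners and 2q on the fourth,
# square of side d = 0.1 m. Not the configuration from the missing figure.
q, d = 1e-9, 0.1
print(total_potential_energy([q, q, q, 2 * q],
                             [(0, 0), (d, 0), (d, d), (0, d)]))
```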
8. Feb 15, 2008
### fubag
hi, i need help understanding the effect U_2q has on the total system... i was reading Shooting Star's comment and came up with U_2q = (6sqrt(2)*(kq^2))/d but this is not the right answer.
please help. What is the contribution U_2q to the electric potential energy of the system, due to interactions involving the charge 2q?
9. Feb 15, 2008
### fubag
i understand in the book that U/q = V, so if we find the electric potential at the center to be equal to (2sqrt(2))(kq)/d, and then multiply by q, shouldn't that be the answer?
what am I doing wrong?
10. Feb 16, 2008
### Shooting Star
No.
Refer post #4. Calculate the PE between 2q and the other three charges, and sum the total. Take proper note of the signs.
11. Feb 16, 2008
### fubag
12. Feb 16, 2008
### fubag
just had a question about the total PE of the system, what does it mean by pairs?
13. Feb 16, 2008
### fubag
ok nevermind, i understand the question now
14. Feb 17, 2008
### Shooting Star
Somehow, I had missed this question. Sorry.
The KE of the charge 2q very far away, that is infinitely far away, would be equal to the PE at its current position, as you have said, if the other charges are kept fixed. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9800944328308105, "perplexity": 986.9859679251339}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170993.54/warc/CC-MAIN-20170219104610-00344-ip-10-171-10-108.ec2.internal.warc.gz"} |
http://www.smallperturbation.com/sunrise-equation | ## The Sunrise Equation
In my quest to watch a sunrise recently, I had to search the web to find out the time before which I needed to get up. Predicting the sunrise is something that I had pondered before. I'm sure numerical simulations are more accurate, but I began deriving a simple formula. Everyone knows the gist of why the Sun rises and sets and why this experience depends on your location on the globe. The answer is that it all depends on the tilt of the Earth's axis.
The Earth's axis of rotation makes a certain angle with the normal to the plane in which it orbits. This angle of inclination is about 23.4°. It is not hard to picture how this affects the seasons and why the tropics are offset from the equator by this same angle as well. I found it much harder to visualize the effect that the inclination has on the length of a day, and most people I've talked to simply take it on faith that the Arctic and Antarctic circles are at a latitude of 90° minus the tilt, about 66.6°.
So now is our chance to overcome this hurdle. Together, you and I will figure out how to calculate the length of a day as a function of time for all latitudes. Hint: it is not a smooth function!
For convenience, we put the Sun at the origin of our co-ordinate system, so we can track the Earth's position using polar co-ordinates (a radius and an angle). If we assume that Earth undergoes a circular orbit (which is almost true), we can take the radius to be constant and not worry about it. We also know that bound orbits (circular or not) lie in a plane, so we might as well use one angle to describe the orbit instead of two. What we always do in celestial mechanics is hold the radius fixed and let the angle vary.
We will let T be the period of Earth's revolution (1 year). Since the Earth orbits the Sun at a constant rate, the angle through which it has revolved after a time t is given by 2πt/T. Below, the Earth's axis is shown in red along with the angle of inclination. This diagram shows that the angle between the direct ray of sunlight (orange) and the Earth's axis depends on both the inclination and the orbital angle. One can imagine that we go "over" by one of these angles and then "up" by the other, and we need to know what the overall angle is.
If we let the x-axis be horizontal, the z-axis be vertical and the y-axis point out of the page, the third component of a vector pointing along Earth's axis can be . The first two components must then be and so that the foot of the perpendicular will have unit length as we have drawn it. We must therefore find the angle between and a vector pointing from the Earth to the Sun, say . Solving for the absolute angle, , is not too difficult.
Now we need to figure out how the length of a day depends on this angle. In the next diagram, we have chosen an arbitrary latitude above the equator and drawn it as a circle around the Earth. This circle has a radius equal to the Earth's radius times the cosine of the latitude, and from the centre of this circle, you get to the centre of the Earth by travelling a distance equal to the Earth's radius times the sine of the latitude down the red axis.
Now clearly, the latitude we have drawn has a day that is longer than 12 hours. More than half of the circumference of the circle lies in the blue "lit up" part of the Earth. We quantify the amount by which the day is longer using an "overflow angle". There is one such angle on both sides, so the length of a day is equal to 24 hours times (180° plus twice the overflow angle) divided by 360°. Notice that if we had chosen a line of latitude that was a bit farther North but still below the North Pole, it would lie entirely in the blue section, the overflow angle would be equal to 90° and the day would be 24 hours long. We will see that this extreme comes out nicely in the final equation.
To find the length of the bright piece of the circle that is left after it has already gone halfway around, we can consider parallel lines that pass through the endpoints of this arc. The end of this arc can be found if we start at the point on the circle that crosses the blue/gray boundary and draw the perpendicular from that point to the vertical untilted line. The beginning of this arc can be found if we instead draw a line (of length ) from this same point to the centre of the circle and rotate it back through an angle of . The rotated line will then intersect the beginning of the arc. In the triangle above, we can draw a line sticking out of the page at the top-left corner that will intersect the beginning and we can draw a parallel line, also sticking out of the page, at the top-right corner that will intersect the end. It is useful to solve for the length of the line segment joining them, which is . This line segment is straight, so it cannot be the curved arc we are looking for. Rather, it is tangent to the circle in the following way:
Consider the triangle in the upper-right corner having a hypotenuse of . The leg of this triangle on the top has a length of since one of its angles is . But this length is also equal to since the overall shape is a rectangle. We now have . It remains to be seen how and depend on the latitude. These are expressions you might have guessed. Below we denote the latitude by and the radius of the Earth by . It follows immediately that and .
Substituting this into the expression above, we get . A constraint of the problem is now apparent. The sine of a physical angle should be between -1 and 1 right? Going back to the diagram where we calculated in terms of and , it is clear that which means . So if were less than , it couldn't possibly also be greater than . This means that guarantees us an angle that makes sense. Conversely, latitudes on the other side are greater than which means they are above the Arctic circle or below the Antarctic circle. It is these regions that sometimes have 24 hours of light or 24 hours of darkness. The existence of days without sunlight ensures that the tree line also resides at a latitude because most plants above this line are not able to survive throughout the year.
We have not yet put all of our useful information together. We know that , we know that , we know that as a fraction of 24 hours, a day is and we know that is equal to , rounded down if it is above 1 and rounded up if it is below -1. The last step required to find the length is:
It is fun to plot this function and confirm our intuition about sunrises and sunsets. To go from day length to actual sunrise and sunset times, all we have to do is remember that our system of keeping time is set up so that 12:00 noon occurs in the middle of a day. This is valid when you are in the centre of your timezone. Let's say that you are two thirds of the way between the west edge and east edge of your timezone (the real edge, not the bendy edge). This means that the solar noon for your location actually occurs earlier than noon - i.e. 11:50am. If you evaluate the function we just derived and find that the day is 13 hours long, sunrise can be expected 6.5 hours earlier than solar noon: 5:20 am. This of course becomes 6:20 am if you want it in daylight time.
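For anyone who wants to play with the numbers, here is a short sketch in the spirit of the derivation. It uses the common textbook form of the sunrise equation (solar declination approximated from the day of the year, then the cosine of the half-day hour angle equal to minus tan(latitude) times tan(declination)); the notation and the declination approximation are mine and may differ from the exact expression derived above.

```python
# Sketch: day length and a rough sunrise time from the standard sunrise equation.
import math

TILT = math.radians(23.44)          # Earth's axial tilt

def day_length_hours(latitude_deg, day_of_year):
    phi = math.radians(latitude_deg)
    # Approximate solar declination for the given day of the year.
    decl = -TILT * math.cos(2.0 * math.pi * (day_of_year + 10) / 365.0)
    x = -math.tan(phi) * math.tan(decl)
    x = max(-1.0, min(1.0, x))      # clamp: 24 h of light or darkness past the polar circles
    omega0 = math.acos(x)           # half-day arc, in radians
    return 24.0 * omega0 / math.pi

print(day_length_hours(45.0, 172))  # near the June solstice at mid-latitude: ~15.5 h
print(day_length_hours(70.0, 172))  # above the Arctic circle in summer: 24 h

half = day_length_hours(45.0, 172) / 2.0
solar_noon = 11.0 + 50.0 / 60.0     # e.g. 11:50 am local clock time, as in the example above
print(solar_noon - half)            # sunrise on the local clock, in hours after midnight
```

The clamp to the range [-1, 1] is exactly the constraint discussed earlier: beyond the polar circles the formula saturates at 24 hours of light or darkness.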
This approach does a good job of predicting when locations around the globe receive light, but can it tell us anything about temperature? Temperature is harder because it depends heavily on terrestrial factors and only a rough approximation can be calculated geometrically. I wouldn't want to change notation on you in the middle of a post so I guess you'll have to stay tuned and read the next one! | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8671227097511292, "perplexity": 217.62184181054567}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371624083.66/warc/CC-MAIN-20200406102322-20200406132822-00058.warc.gz"} |
http://lama.u-pem.fr/archive.umr8050.v2/umr-math.u-pem.fr/evenements/s_minaire_des_doctorants-5.html | ## The wellposedness issue of the low Mach number limit system
Date: 05/10/2011, 14:00–14:45
Room: 4B 05R
Speaker: LIAO Xian
Abstract:
This talk will give a sketch of some interesting topics related to the wellposedness issue of the low Mach number limit system, especially the classical incompressible Navier-Stokes equations. It is divided into three parts:
1 From Conservation Laws to Incompressible Navier-Stokes System.
2 Some Interesting Topics about the Wellposedness Issue of Incompressible Navier-Stokes Equation.
3 The Cauchy Problem for the Low Mach-number Limit System. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8286677002906799, "perplexity": 1317.3471798294077}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103850139.45/warc/CC-MAIN-20220630153307-20220630183307-00029.warc.gz"} |
https://zbmath.org/?q=an:0801.30024
Harmonic measure, $$L^ 2$$ estimates and the Schwarzian derivative. (English) Zbl 0801.30024
Let $$\Phi$$ be a conformal map of the unit disk $$\mathbb{D}$$ onto the inner domain $$\Omega$$ of the Jordan curve $$\Gamma$$; many results also hold in the more general context that $$\Omega$$ is any simply connected domain. This paper can perhaps be considered as a sequel to the authors’ important paper on harmonic measure and arclength [Ann. Math., II. Ser. 132, 511-547 (1992; Zbl 0726.30019)] and contains many further interesting results.
For $$z\in \Gamma$$, $$t>0$$, the authors first study the geometric quantity $\beta(z,t)= \inf_ L t^{-1}\sup\{\text{dist}(w,L): z\in \Gamma\cap D(w,4t)\},{(*)}$ where the infimum is taken over all lines $$L$$ passing through the smaller disk $$D(z,t)$$. They consider the condition $\int^ 1_ 0 t^{-1} \beta(z,t)^ 2 dt< \infty.{(**)}$ If $$K\subset \Gamma$$ is compact and if $$(**)$$ holds uniformly for all $$z\in K$$ then $$K$$ lies on a rectifiable curve. Furthermore, except for a set of linear measure zero, $$\Gamma$$ has a tangent in $$z$$ if and only if $$(**)$$ holds. Next they show that $\iint_ D |\Phi'(z)| | S_ \Phi(z)|^ 2(1- | z|^ 2)^ 3 dx dy< \infty$ implies that $$\Phi'\in H^ p$$ for every $$p< 1/2$$. Here $$S_ \Phi$$ is the Schwarzian derivative that plays an important role in the proofs. Astala and Zinsmeister had characterized when $$\log \Phi'\in \text{BMO}$$. The present authors now give geometric characterizations. One of them (as the reviewer understands it) is: There are constants $$\delta> 0$$ and $$C$$ such that, for every $$z\in \Omega$$, there exists a domain $$G\subset \Omega$$ which is chord-arc (Lavrentiev) with constant $$C$$ such $D(z,\delta d)\subset G,\quad\text{length }\partial G\leq Cd,\quad\text{length }\partial G\cap \partial\Omega\geq \delta d,$ where $$d= \text{dist}(z,\partial\Omega)$$. Their results imply that the condition $$\log \Phi'\in \text{BMO}$$ is invariant under bi-Lipschitz homeomorphisms of the plane and that the condition holds if and only if the corresponding condition holds for the exterior conformal map (in the case of a quasidisk).
There are many details that are difficult to follow. E.g. the definition of $$\beta$$ on p. 86 is different from $$(*)$$ and does not make sense. Saying that “By adjusting the constants in the definitions one gets” some relation leaves doubt in the reader’s mind about that relation. It is stated that Lemma 3.3 was proven as Lemma 3.1 in the authors’ paper quoted above. But the status of that lemma is not quite clear, see M.R. 92c:30026 (loc. cit.). That reviewer writes “Bishop informed the reviewer that Lemma 3.1 can be salvaged by…”.
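To get a concrete feel for the quantity β(z,t) in (*), here is a rough numerical sketch for a curve given by finitely many sample points. The infimum over lines is replaced by the total-least-squares line through the points in the window, and the requirement that the line pass through the smaller disk D(z,t) is ignored, so this is only a crude proxy for the true β, not a faithful implementation of the definition.

```python
# Sketch: a discrete stand-in for the Jones beta number beta(z, t).
import numpy as np

def beta(points, z, t):
    """points: (N, 2) array of samples of the curve; z: (2,) center; t: scale."""
    P = points[np.linalg.norm(points - z, axis=1) <= 4 * t]
    if len(P) < 2:
        return 0.0
    centered = P - P.mean(axis=0)
    # Direction of the best-fit (total least squares) line = principal component.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = np.array([-vt[0, 1], vt[0, 0]])
    return np.max(np.abs(centered @ normal)) / t

s = np.linspace(0.0, 1.0, 4001)
curve = np.stack([s, s**2], axis=1)       # samples of a smooth (rectifiable) arc
z = np.array([0.5, 0.25])
for t in (0.1, 0.01):
    print(t, beta(curve, z, t))           # beta shrinks roughly linearly with t here
```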
##### MSC:
30C85 Capacity and harmonic measure in the complex plane
##### References:
[1] L. V. Ahlfors,Conformal Invariants, McGraw-Hill, New York, 1973. [2] L. V. Ahlfors,Complex Analysis, McGraw-Hill, New York, 1979. · Zbl 0395.30001 [3] L. V. Ahlfors,Lectures on Quasiconformal Mappings, Wadsworth and Brooks/Cole, Monterey, California, 1987. · Zbl 0605.30002 [4] K. Astala and M. Zinsmeister,Teichmüller spaces and BMOA, Math. Ann.289 (1991), 613–625. · Zbl 0896.30028 · doi:10.1007/BF01446592 [5] A. Baernstein II,Ahlfors and conformal invariants, Ann. Acad. Sci. Fenn13 (1989), 289–312. · Zbl 0682.30026 [6] A. Beurling,Études sur un Problem de Majoration, thesis, Upsala, 1933. [7] C. J. Bishop,Harmonic measures supported on curves, thesis, University of Chicago, 1987. [8] C. J. Bishop,Some conjectures concerning harmonic measure, inPartial Differential Equations with Minimal Smoothness and Applications (B. Dahlberg, E. Fabes, R. Fefferman, D. Jerison, C. Kenig and J. Pipher, eds.), Vol. 42 of IMA Volumes in Mathematics and its Applications, Springer-Verlag, Berlin, 1991. [9] C. J. Bishop, L. Carleson, J. B. Garnett and P. W. Jones,Harmonic measures supported on curves, Pacific. J. Math.138 (1989), 233–236. [10] C. J. Bishop and P. W. Jones,Harmonic measure and arclength, Ann. Math.132 (1990), 511–547. · Zbl 0726.30019 · doi:10.2307/1971428 [11] L. Carleson,Selected Problems on Exceptional Sets, Van Nostrand, Princeton, New Jersey, 1967. · Zbl 0189.10903 [12] L. Carleson,On mappings, conformal at the boundary, J. Analyse Math.19 (1967), 1–13. · Zbl 0186.13701 · doi:10.1007/BF02788706 [13] L. Carleson,On the support of harmonic measure for sets of Cantor type, Ann. Acad. Sci. Fenn. Ser. A.I Math.10 (1985), 113–123. · Zbl 0593.31004 [14] R. Coifman, P. Jones and S. Semmes,Two elementary proofs of the L 2 boundedness of Cauchy integrals on Lipschitz graphs, J. Amer. Math. Soc.2 (1989), 553–564. · Zbl 0713.42010 [15] R. Coifman and Y. Meyer,Lavrentiev’s curves and conformal mappings, Report No. 5, Institute Mittag-Leffer, 1983. [16] R. Coifman and R. Rochberg,Representation theorems for holomorphic and harmonic functions in L p , Asterisque77 (1980), 11–66. · Zbl 0472.46040 [17] B. Dahlberg,On the absolute continuity of elliptic measures, Amer. J. Math.108 (1986), 1119–1138. · Zbl 0644.35032 · doi:10.2307/2374598 [18] G. David,Opérateurs intégraux singuliers sur certains courbes du plan complex, Ann. Sci. École Norm. Sup.17 (1984), 227–239. [19] P. L. Duren,Univalent Functions, Springer-Verlag, New York, 1983. [20] P. L. Duren, H. S. Shapiro and A. L. Shields,Singular measures and domains not of Smirnov type, Duke Math. J.33 (1966), 247–254. · Zbl 0174.37501 · doi:10.1215/S0012-7094-66-03328-X [21] R. Fefferman,A criterion for the absolute continuity of the harmonic measure associated with an elliptic operator, J. Amer. Math. Soc.2 (1989), 127–135. · Zbl 0694.35050 · doi:10.1090/S0894-0347-1989-0955604-8 [22] R. Fefferman, C. Kenig and J. Pipher,The theory of weights and the Dirichlet problem for elliptic equation, to appear, J. Amer. Math. Soc. · Zbl 0770.35014 [23] J.-L. Fernandez, J. Heinonen and O. Martio,Quasilines and conformal mappings, J. Analyse Math.52 (1989), 117–132. · Zbl 0677.30012 · doi:10.1007/BF02820475 [24] F. P. Gardiner,Teichmüller Theory and Quadratic Differentials, Wiley, New York, 1987. · Zbl 0629.30002 [25] J. B. Garnett,Bounded Analytic Functions, Academic Press, New York, 1981. · Zbl 0469.30024 [26] J. A. Jenkins,On Ahlfors’ spiral generalization of the Denjoy conjecture, Indiana Univ. Math. J.36 (1987), 41–44. 
· Zbl 0578.30021 · doi:10.1512/iumj.1987.36.36002 [27] J. A. Jenkins and K. Oikawa,Conformality and semiconformality at the boundary, J. Reine. Angew. Math.291 (1977), 92–117. · Zbl 0339.30024 [28] D. S. Jerison and C. E. Kenig,Hardy spaces, A and singular integrals on chord-arc domains, Math. Scand.50 (1982), 221–248. · Zbl 0509.30025 [29] P. W. Jones,Lipschitz and bi-Lipschitz functions, Rev. Mat. Iberoamericana4 (1988), 115–121. · Zbl 0782.26007 [30] P. W. Jones,Square functions, Cauchy integrals, analytic capacity, and harmonic measure, inHarmonic Analysis and Partial Differential Equations, Lecture Notes in Math.1384, Springer-Verlag, New York, 1989, pp. 24–68. [31] P. W. Jones,Rectifiable sets and the traveling salesman problem, Invent. Math.102 (1990), 1–15. · Zbl 0731.30018 · doi:10.1007/BF01233418 [32] J. P. Kahane,Trois notes sur les ensembles parfait linéaires, Enseigement Math.15 (1969), 185–192. · Zbl 0175.33902 [33] C. Kenig,Weighted H p spaces on Lipschitz domains, Amer. J. Math.102 (1980), 129–163. · Zbl 0434.42024 · doi:10.2307/2374173 [34] N. Kerzman and E. Stein,The Cauchy integral, the Szegö kernel and the Riemann mapping function, Math. Ann.236 (1978), 85–93. · Zbl 0419.30012 · doi:10.1007/BF01420257 [35] M. Lavrentiev,Boundary problems in the theory of univalent functions, Math. Sb.43 (1936), 815–846; Amer. Math. Soc. Transl., Series 232 (1963), 1–35. [36] N. G. Makarov,On distortion of boundary sets under conformal mappings, Proc. London Math. Soc.51 (1985), 369–384. · Zbl 0573.30029 · doi:10.1112/plms/s3-51.2.369 [37] N. G. Makarov,Metric properties of harmonic measure, inProceedings of the International Congress of Mathematicians, Berkeley 1986, Amer. Math. Soc., 1987, pp. 766–776. [38] J. E. McMillan,Boundary behavior of a conformal mapping, Acta. Math.123 (1969), 43–67. · Zbl 0222.30006 · doi:10.1007/BF02392384 [39] K. Okikiolu,Characterization of subsets of rectifiable curves in R n J. London Math. Soc.46 (1992), 336–348. · Zbl 0758.57020 · doi:10.1112/jlms/s2-46.2.336 [40] G. Piranian,Two monotonic, singular, uniformly almost smooth functions, Duke Math. J.33 (1966), 255–262. · Zbl 0143.07405 · doi:10.1215/S0012-7094-66-03329-1 [41] Ch. Pommerenke,Univalent Functions, Vanderhoeck and Ruprecht, Göttingen, 1975. [42] Ch. Pommerenke,On the Green’s function of Fuchsian groups, Ann Acad. Sci. Fen.2 (1976), 409–427. · Zbl 0363.30029 [43] Ch. Pommerenke,On conformal mapping and linear measure, J. Analyse Math.49 (1986), 231–238. · Zbl 0604.30029 · doi:10.1007/BF02796587 [44] F. and M. Riesz,Über die randwerte einer analytischen funltion, inCompte Rendus du Quatrième Congres, des Mathématiciens Scandinaves, Stockholm 1916, Almqvists and Wilksels, Uppsala, 1920. · JFM 47.0295.03 [45] B. Rodin and S. Warschawski,Extremal length and the boundary behavior of conformal mappings, Ann. Acad. Sci. Fenn., Ser. AI Math.2 (1976), 467–500. · Zbl 0348.30007 [46] S. Rohde,On conformal welding and quasicircles, Michigan Math. J.38 (1991), 111–116. · Zbl 0726.30010 · doi:10.1307/mmj/1029004266 [47] H. S. Shapiro,Monotonic singular functions of high smoothness, Michigan Math. J.15 (1968), 265–275. · Zbl 0165.06904 · doi:10.1307/mmj/1029000029 [48] E. M. Stein,Singular Integrals and Differentiability Properties of Functions, Princeton University Press, Princeton, New Jersey, 1971. [49] E. M. Stein and A. Zygmund,On differentiability of functions, Studia Math.23 (1964), 247–283. · Zbl 0122.30203 [50] M. Zinsmeister,Représentation conforme et courbes presque lipschitziennes, Ann. Inst. 
Fourier34 (1984), 29–44. · Zbl 0522.30007 [51] M. Zinsmeister,Domaines Réguliers du plan, Ann. Inst. Fourier35 (1985), 49–55. [52] M. Zinsmeister,Les domains de Carleson, Michigan Math. J.36, 213–220. · Zbl 0692.30030 [53] A. Zygmund,Trigonometric Series, Cambridge University Press, Cambridge, 1959. · Zbl 0085.05601
https://www.physicsforums.com/threads/is-zero-velocity-measurable.368632/ | # Is zero velocity measurable?
1. Jan 11, 2010
### capri debris
Relative to our sun, it's possible to calculate the speed at which the Earth is moving through space in its orbit. So it's possible for an astronaut to position himself in a fixed position just slightly outside the Earth's orbital path and observe the Earth passing by at the velocity the mathematics say it should be traveling. This speed obviously changes depending on what point Earth is at in its elliptical orbit around the sun, since it speeds up and slows down during each pass around the sun.
Our sun, being part of the Milky Way, is "swirling" around with all the other stars in our galaxy. So relative to the black hole that is theorized to be in the center of our galaxy, it's possible for an astronaut to position himself at a fixed point relative to the center of our galaxy where he could observe the sun passing by as it orbits this black hole.
Since the universe is expanding and galaxies are moving away from each other, it should also be possible for an astronaut to position himself at a fixed point in space where he could observe the Milky Way galaxy passing by. However, in my two examples above, this "fixed" point is always relative to another existing object.
So I'm questioning the possibility of measuring TRUE zero velocity. If an astronaut is floating in a "fixed" position in space, is there any mathematics that can prove he is indeed staying in one spot? What would be the determining factor of true stillness in space if there is no known point that can be used as a reference?
2. Jan 11, 2010
### Matterwave
There is no such "fixed" position in space. Position is always relative to something. Einstein's SR proved that we don't need to have this "aether" that everything is relative to.
3. Jan 11, 2010
### capri debris
If the Big Bang is true, then there is an originating point that everything is expanding from. When the age of the universe was calculated, the mathematics of galaxies moving away from this point was reversed. So if a point has been calculated that was the origin of the Big Bang, then wouldn't that be the reference point to determine a fixed point in space?
What has me questioning the measurability of absolute stillness in space relative to the origin of the Big Bang is the fact that everything in the universe is expanding... including space itself as well as time.
If space and time are expanding, even with the center of the universe as a reference point, can a fixed point be mathematically explained?
4. Jan 11, 2010
### Matterwave
The big bang didn't occur at any one point in space, it WAS space...expanding!
5. Jan 11, 2010
### Wallace
Precisely. The scientific theory we call (through misguided historical accident!) The Big Bang does not imply that the Universe began from a single point. What we do know with a high degree of certainty is that the early universe was very hot, dense and very uniform in density from place to place. There was not an empty region outside of some initial fireball that flung the universe out into the void. Every part of the early universe was in the same hot, dense state. When we say the 'universe is expanding' it means if you take any given region, then everything within that is getting further apart from everything else.
This may sound strange, and raises the obvious question of whether the Universe must be infinitely big. The short answer is that we don't know. What we do know is that due to the finite speed of light, there is a finite region of the Universe that we can see, given the finite age of the Universe. Within this region, the Universe is homogeneous (roughly the same everywhere) and certainly the early universe (what we call the Big Bang) was very homogeneous within this entire observable region.
Now, as Matterwave said, all motion is relative. There is no such thing as absolute motion and indeed this is a very important concept underpinning relativity. That being said, in practice when doing cosmology it is very convenient to define what we call 'co-moving' co-ordinates. In these co-ordinates, anything which is simply moving with the general expansion of the Universe is said to be 'at rest' and we define velocities from that rest point. It turns out to be a very useful way of doing things. It also has a physical basis. The 'afterglow' of the Big Bang, the Cosmic Microwave Background Radiation (CMB) fills the whole universe (and gives us a lot of the information that we know about the Big Bang). For objects at rest with respect to the general expansion, the CMB will look the same in all directions. For anyone moving with respect to it, one side will be a bit redshifted and the other a bit blueshifted due to the Doppler effect.
Measuring our own velocity with respect to the CMB was an important early step on the path to making precision measurements of the CMB.
6. Jan 11, 2010
### Blenton
When you impart velocity to an object it gains kinetic energy. Now forgive me if relativity has some say in this, but couldn't you find out the absolute velocity of an object by measuring its energy?
If we know how much energy it takes a hydrogen atom to exist (thus at zero velocity), then by measuring the energy of the particle we could work backwards to find how fast it really is travelling, regardless of local space.
7. Jan 11, 2010
### Janus
Staff Emeritus
Kinetic energy is just as relative as velocity. So when you measure an object's kinetic energy you are measuring it with respect to you. If you measure the kinetic energy of a ball sitting next to you in a car driving down the road, you will measure it as 0. Someone sitting at the roadside will measure it as some non-zero value.
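A one-line arithmetic illustration of that point (the numbers are arbitrary):

```python
# Sketch: the same ball, two frames. KE = (1/2) m v^2 depends on the frame
# in which the velocity is measured.
m = 0.5          # kg, a ball on the car seat
v_car = 0.0      # m/s relative to the car
v_road = 30.0    # m/s relative to someone standing by the road

ke = lambda m, v: 0.5 * m * v**2
print(ke(m, v_car))    # 0.0 J in the car frame
print(ke(m, v_road))   # 225.0 J in the road frame
```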
8. Jan 11, 2010
### twofish-quant
Doesn't work since it turns out that energy is as relative as velocity.
Doesn't work. If you are travelling along with the hydrogen atom, you end up with a different energy than if you travel at a different speed with the hydrogen atom.
9. Jan 11, 2010
### Chronos
Kinetic energy is irrelevant. It only matters between you and the object you collide with.
10. Jan 12, 2010
### Blenton
So if you destroyed the atom with an anti-hydrogen atom you would only get as much energy back as its mass + local velocity, rather than mass + absolute velocity?
If so where does the rest of the energy go?
11. Jan 12, 2010
### espen180
Not local velocity, but relative velocity between the two.
12. Jan 12, 2010
### HallsofIvy
What do you mean by "local velocity"? Relative velocity? And, again, there is no such thing as "absolute velocity". And there is no such thing as the "rest of the energy". As everyone here has told you repeatedly, energy is relative to the frame of reference- the amount of "energy" in a moving object depends upon the frame in which it is measured.
13. Jan 12, 2010
### Blenton
Yes as espen has put it in better words, local velocity meaning relative velocity.
Energy being relativistic, is there then anything in the universe that is absolute?
14. Jan 12, 2010
### Wallace
Yes, there are absolutes (a more technical phrase might be 'invariant quantity'). The important revolution that came with GR was the understanding that we need to consider a 4 dimensional 'space-time' when doing calculations, rather than 3 spatial dimensions that could be de-coupled from time.
This means that any 3 dimensional quantity will not be absolute, or in other words, different observers will disagree about the size of something. The normal velocity is a three dimensional quantity; how fast is my spatial location changing with time? Since different observers disagree about distances and times, they will disagree about speeds.
On the other hand there are 4 dimensional invariant quantities that everyone does agree on. There is, for instance, the length of the '4 velocity' vector, defined as (dt/dτ, dx/dτ, dy/dτ, dz/dτ), where τ is the proper time experienced by the object being observed and t is the time according to the observer watching it. This might be a bit confusing, so let me give an example.
Say the observer and the object have no relative velocity. In that case dx/dτ etc are all zero and dt/dτ = 1; the times of the two things agree as they are at rest with respect to each other. If the object started moving with respect to the observer, then we would have for instance dx/dτ > 0. However, we also know that objects with relative motion have a time dilation between them, such that dτ/dt < 1 (that is, dt/dτ > 1). If you worked through the full details (the 4 dimensional 'length' weights the time and space components with opposite signs), you'd see that this means the total length of the 4 velocity is preserved, in the sense that someone at rest with respect to the object and someone moving with respect to it would calculate this thing to have the same length.
There is also the 4 momentum which is also an invariant. You can also have the 4 acceleration etc.
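A small numerical illustration of that invariance, in one space dimension with c = 1 (the choice of speeds is arbitrary):

```python
# Sketch: the Minkowski "length" of the 4-velocity is the same for observers
# in relative motion. Units with c = 1; one space dimension for brevity.
import math

def four_velocity(v):
    """4-velocity (dt/dtau, dx/dtau) of a particle moving at speed v."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma, gamma * v

def minkowski_norm_sq(u_t, u_x):
    return u_t**2 - u_x**2            # signature (+, -)

def boost(u_t, u_x, w):
    """The same 4-velocity as seen from a frame moving at speed w."""
    g = 1.0 / math.sqrt(1.0 - w * w)
    return g * (u_t - w * u_x), g * (u_x - w * u_t)

u = four_velocity(0.6)
print(minkowski_norm_sq(*u))               # 1.0
print(minkowski_norm_sq(*boost(*u, 0.9)))  # still 1.0, for any boost speed w
```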
15. Apr 23, 2010
### HughMc1
Matterwave,
Can't you be 'fixed' and 'relative' at the same time to something/anything at any given moment in time? Just because everything in the Universe is moving doesn't mean there isn't a fixed co-ordinate. There may be nothing in it or there may be something in it for an instant. It would, in fact, be a three dimensional, physical co-ordinate, with an additional co-ordinate of time. That would be your 'space-time' co-ordinate.
16. Apr 24, 2010
### Rasalhague
Sure, we can define a coordinate system as one where some reference object is moving at such-and-such a nonzero velocity. Is that what you mean? It's no different in principle from defining a coordinate system in which a reference object has zero velocity. Some coordinate systems are more convenient than others. But deciding between them is just that: a matter of convenience. There are infinitely many possible coordinate systems we could pick. As far as the laws of physics go, the choice of which velocity to call zero is as arbitrary as the choice of which direction to call up.
17. Apr 24, 2010
### Bacle
I hope I don't throw off your question in a direction you're not interested in; I apologize if so. I just wanted to comment on the (possibly side-) issue of differentiating between signal and noise when the values you are trying to measure are very small: how do you tell a very small value apart from plain noise?
18. Apr 26, 2010 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8018395304679871, "perplexity": 408.0505786580463}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267160923.61/warc/CC-MAIN-20180925024239-20180925044639-00064.warc.gz"} |
https://www.illustrativemathematics.org/blueprints/5 | ## The big ideas in Grade 5 include
• place value and operations with multi-digit whole numbers and decimals to hundredths;
• multiplication and division with fractions;
• understanding volume and how it relates to multiplication.
Students begin the year by studying volume, which is used to support and deepen their understanding of multiplication and place-value structure. There are three other units that could come first as they have no pre-requisite units: 5.4 Multiplication by Fractions, 5.6 Addition and Subtraction of Fractions, and 5.7 Classifying Two-Dimensional Figures.
Next, students revisit and deepen their understanding of place-value structure, culminating in understanding the general principle that the value of any digit is 10 times the value of the same digit on place to the right and 1/10 the value of the same digit one place to the left. Students use this understanding to read, write, round, and compare decimals. This work supports students’ developing fluency with the standard algorithm for multiplying multi-digit whole numbers.
Students then turn to work with fractions, revisiting their work in fourth grade with multiplying fractions by whole numbers and extending it to multiplying fractions by fractions. They see the relationship between division and fractions and divide unit fractions by whole numbers and vice versa. Students apply their understanding of fractions in order to solve real-world problems involving multiplication and division of fractions and mixed numbers. Following up on their work with multiplying and dividing fractions, students add and subtract them. Traditionally, many curricula begin fraction arithmetic by adding and subtracting fractions. This blueprint suggests beginning with multiplication and division because fractions are the solution to the problem that the quotient of two whole numbers is not always a whole number. Fractions feel at home with multiplication and division; they submit to addition and subtraction more reluctantly.
Finally, students finish the work they started in earlier grades on classifying two-dimensional figures based on their attributes and learn how to graph points in the first quadrant in the coordinate plane and use that knowledge to solve real-world and mathematical problems.
Note that this course blueprint is only one of many possible ways of arranging a sequence of topics designed to achieve the standards. It is a continually evolving document and we welcome your comments here.
## Units
5.1 Volume
#### Summary
In this unit students
• measure volume by counting cubes;
• understand and explain the relationship between multiplication and volume of right rectangular prisms.
5.2 Ten Times and One-tenth of
#### Summary
In this unit students
• understand the relationship between the value of adjacent digits in a number;
• relate volume to multiplication and addition
• read, write, compare, and round decimals to thousandths.
5.3 Multi-Digit Operations
#### Summary
In this unit students
• cement understanding of the standard algorithm for multiplication of multi-digit whole numbers;
• solve division problems with whole number quotients;
• add, subtract, multiply, and divide decimals to hundredths;
• write and interpret numerical expressions.
5.4 Multiplication by Fractions
#### Summary
In this unit students
• multiply whole numbers and fractions by fractions;
• interpret multiplication as scaling (resizing);
• solve real world problems involving fraction multiplication.
5.5 Fractions and Division
#### Summary
In this unit students
• understand that when you divide two whole numbers $a$ and $b$, the quotient is $\frac{a}{b}$;
• divide unit fractions by whole numbers and whole numbers by unit fractions;
• solve real world problems involving fractions.
5.6 Addition and Subtraction of Fractions
#### Summary
In this unit students
• find equivalent fractions as a strategy to add and subtract fractions with unlike denominators;
• solve fraction word problems.
5.7 Classifying Two-Dimensional Figures
#### Summary
In this unit students
• understand attributes of and classify two-dimensional figures;
• graph points on the coordinate plane to solve real-world problems.
#### Joseph Roicki says:
over 2 years
Can you explain the reasoning behind sequencing fractions multiplication and division before fraction addition and subtraction? Or are these units developed without any particular sequence in mind? | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8612521886825562, "perplexity": 2062.2307344909527}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818688926.38/warc/CC-MAIN-20170922074554-20170922094554-00016.warc.gz"} |
https://rd.springer.com/article/10.1007%2Fs10762-018-0513-3 | Journal of Infrared, Millimeter, and Terahertz Waves, Volume 39, Issue 9, pp 840–853
# A Comprehensive Study of Albumin Solutions in the Extended Terahertz Frequency Range
• M. M. Nazarov
• O. P. Cherkasova
• A. P. Shkurinov
Article
## Abstract
Sensitivity of the THz frequency range to the solutions of biomolecules originates from the decrease of absorption and dispersion of water in its bound state. Correct measurement and interpretation of the THz spectra of water-containing samples is still a challenging task because a reliable relaxation model for such spectra is not yet well established. Data from the transmission and the attenuated total internal reflection geometries were combined for a precise analysis of the spectra of aqueous solutions of bovine serum albumin within the range 0.05–3.2 THz. We compare the concentration dependencies of the dielectric function at "low," "middle," and "high" frequencies and do not confirm the anomalous increase in absorption for concentrations below 17 mg/mL published by other teams.
## Keywords
Terahertz time-domain spectroscopy · Transmission · Attenuated total internal reflection · Protein solution · Water · BSA · Relaxation model
## 1 Introduction
The terahertz sensing of protein solutions has both fundamental scientific (a) and applied (b) significance: (a) studies of the collective motion of water molecules around protein molecules and determination of the size of the protein hydration shell; (b) analysis of protein concentration in solution in pharmacological and medical studies. It is in an aqueous environment that proteins demonstrate their most important biological functions. Up to now, a large amount of data relating to the nature of protein dissolution in aqueous solutions has been collected. Fundamental results have been obtained on the interaction of water molecules with dissolved proteins, and it is established that this influence is mutual [1, 2]. We focus on bovine serum albumin (BSA) because the optical and biological properties of this protein are extensively studied.
Three relaxation processes named β-, γ-, and δ-relaxations are known in the absorption spectra of aqueous BSA solutions at low-frequency side (MHz–GHz frequency range) [3]. The β-relaxation, occurring in the frequency range around 10 MHz, and γ-relaxation, which is around 20 GHz, can be attributed to the rotation of the polar protein molecules in their aqueous medium and the reorientational motion of free water molecules, respectively [4]. The third δ-process, located between β- and γ-relaxations, may be attributed to bound water relaxation [3, 4]. The contribution from γ-relaxation (also responsible for hydrogen-bonded absorption) dominates in the resulting solution spectra from megahertz up to 1 THz frequency [5]. Terahertz time-domain spectroscopy has become a complementary method to study protein solutions because changes in the relative proportions of free and bound water and in relaxation properties for either of these states can all be observed in the typical 0.2–3 THz frequency range. The spectra of aqueous BSA solutions in MHz–THz frequency range do not have narrow peaks; the presence of BSA appears as a modification of broadband γ-relaxation of water. At the high-frequency side, above 10 THz, the spectroscopic response of BSA solution has narrow resonances, provided by specific molecular groups in protein molecules and their protonation state and participation in the formation of hydrogen bonds [6]. For the aqueous solution of albumin, amide III band (~ 37 THz), amide II band (~ 46 THz), amide I band (~ 50 THz), N–H, and O–H stretching bands (~ 100 THz) are observed [7].
The studies of BSA solutions specifically in the terahertz range have been carried out by many scientific groups [7, 8, 9, 10, 11, 12], although with different experimental techniques, solvent concentrations, and pH values. Several authors state that the absorption of BSA solutions becomes greater than that of water for concentrations lower than 17 mg/mL [9, 10], while another study shows contrary results [8]. In our opinion, the use of a thin (100 μm or less) solution layer in those works does not allow an accurate measurement of the complex dielectric permittivity of the sample.
To clarify this situation, we used a thicker cell and performed a thorough and detailed study of “low” BSA concentrations (1–30 mg/mL) in aqueous solutions. Our terahertz time-domain spectrometer (THz-TDS) was adapted to study low frequencies and thick water layers. To detect small-scale changes in solutions, we combined the results obtained in transmission studies in a thick cell and in the attenuated total internal reflection (ATR) configuration. The reliable frequency range of the obtained complex dielectric function spectrum is thus considerably broadened (0.05–3.2 THz); BSA concentrations above 10 mg/mL demonstrate a reproducible and noticeable influence on the water terahertz response in the low-frequency part of the spectrum.
## 2 Materials and Methods
Our THz-TDS measurements, based on 100-fs, 800-nm laser pulses, were performed in two spectrometers described previously [11, 13]: a “low-frequency” spectrometer and a “high-frequency” one, see Fig. 1. Relying on our previous studies, we shifted the maximal dynamic range of the “low-frequency” spectrometer to 0.1 THz by choosing proper emitting and detecting photoconductive antennas [14]. Special attention was paid to the lowest spectral region 0.05–0.1 THz, where the sampling range was extended to 100 ps. For the “high-frequency” spectrometer, the maximal dynamic range was shifted to 2 THz by using 0.3-mm ZnTe crystals for both emission and detection (see Fig. 1a). Two configurations were used in both spectrometers: transmission, with a cell 1000 to 100 μm thick (for low and high frequencies, respectively), and reflection configuration using a silicon prism surface (Fig. 1b). Commercially available cell holders and spacers were used; cell windows were self-made from polystyrene Petri dish base 0.7 mm thick. Polystyrene is an optimal polymer material because it is stiff enough to keep constant cell thickness, and transparent enough for the frequency range involved [15].
For the high-frequency part of the spectra, the transmission configuration is not very reliable since it requires a thin cell, which means large errors, so we mainly used the “low-transmission,” “low-reflection,” and “high-reflection” configurations, Fig. 1. It is essential that the temperature and the chemical composition of the albumin solutions are the same for all measurements.
The best developed method for THz-TDS solution measurements is transmission through a thin cell [9, 10, 12]. Usually, the thickness of the water layer under measurement is 50–100 μm (smaller than the wavelength!). In this case, a small variation of thickness introduces noticeable errors, especially for the refractive index. Moreover, the solution properties themselves may be modified in the vicinity of the interface [16]. Hence, a thick cell is preferable. It gives considerable advantages in the low-frequency part of the terahertz spectra, where water transmission is not so strongly attenuated. To avoid the influence of retro-reflections in the cell windows, it is helpful to use the transmission of a thin solution layer (d = 50–100 μm) as the measurement for normalization. The only adjustable parameter for the evaluation of absorption (and refraction) spectra is the layer thickness d; in this case, it is set by a large-area Teflon spacer and controlled by a micrometer. The typical uncertainty of d (after cell refilling) is 5 μm, which is critical for a 0.1-mm cell, but it is not so critical for a 0.5–1-mm cell.
To extract the spectra of the complex dielectric function ε(f) from complex experimental transmission T(f), we use the known methods [17, 18].
$$T\left(\omega \right)={E}_{\mathrm{cell}\_1}\left(\omega \right)/{E}_{\mathrm{cell}\_2}\left(\omega \right)$$
(1)
$$\alpha \left(\omega \right)=-\mathit{\ln}\left\{|T\left(\omega \right)|\right\}/\left({d}_{\mathrm{cell}\_1}-{d}_{\mathrm{cell}\_2}\right)$$
(2)
where Ecell_i(ω) stands for the spectrum transmitted through the cell with a solution layer of thickness dcell_i, ω = 2πf, and f is the frequency in terahertz. Using normalization to Ecell_2(ω), we get rid of reflection losses and retro-reflections in the cell windows (see Fig. 1b); they are identical for cell_1 and cell_2 and cancel out. Retro-reflections in the solution layer can be neglected even in the low-frequency part, since we use a thick layer in that case.
To evaluate refraction index spectra, we use:
$$n\left(\omega \right)=-\mathit{\arg}\left\{T\left(\omega \right)\right\}\bullet c/\left(\omega \bullet \left({d}_{\mathrm{cell}\_1}-{d}_{\mathrm{cell}\_2}\right)\right)+1,$$
(3)
And finally, we get the complex dielectric function ε(ω) as:
$$\varepsilon \left(\omega \right)={\left(n\left(\omega \right)- i\alpha \left(\omega \right)\bullet c/\omega \right)}^2,$$
(4)
For high terahertz frequencies, ATR experiment configuration is preferable, because it does not suffer much from strong absorption by water [19, 20, 21]. Besides, the use of a prism allows us to eliminate the error associated with the thickness of water layer. Typically, ATR signal ER_solution is sufficient to measure the water solution reflection spectrum in the range of 0.25–3.5 THz. The solution volume for a single measurement in the ATR configuration was 800 μL (this means a 1-mm-thick layer on the prism surface), while in the transmission configuration, it was 400 μL (for 1-mm cell thickness). For reflection normalization, we just use spectra of dry prism surface—ER_air. There are two parameters to adjust for the evaluation of ATR spectra: the incidence angle θ and prism refraction nSi (sensitive to ambient temperature) [17]. In addition to direct measurements, those two parameters are controlled by the best fitting to the known spectra in case of distilled water. To extract the spectra of the complex dielectric function ε(f) from complex experimental reflection R(f), we use the following relation [18]:
$$\varepsilon \left(\omega \right)={n}_{Si}^2\frac{{\left(1+R\left(\omega \right)\right)}^2\pm \sqrt{{\left(1+R\left(\omega \right)\right)}^4-{\sin}^2\left(2\theta \right){\left(1-R{\left(\omega \right)}^2\right)}^2}}{2\cos {\left(\theta \right)}^2{\left(1-R\left(\omega \right)\right)}^2}$$
(5)
where R(ω) = ER_solution/ER_air, nSi = 3.41. The reflected signal increases with frequency. Hence, reflection configuration is preferable for the high-frequency spectral region. To detect small changes in dilute solution, we normalize the solution signal to the case of distilled water—ER_water.
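A corresponding sketch for the ATR branch (again our own code, not from the paper), implementing Eq. (5), which we read as the inversion of the p-polarized internal reflection of Eq. (11). The incidence angle and the sign of the square root must be fixed by the calibration against distilled water described above; the default n_Si = 3.41 is the value quoted in the text.

```python
import numpy as np

def eps_from_atr(R, theta_deg, n_si=3.41, sign=+1):
    """Invert the normalized ATR reflection R(w) = E_R_solution / E_R_air to the
    complex dielectric function, Eq. (5).  theta_deg is the internal angle of
    incidence on the prism base; the physical branch of the +/- must be chosen."""
    R = np.asarray(R, dtype=complex)
    th = np.radians(theta_deg)
    root = np.sqrt((1.0 + R)**4 - np.sin(2.0 * th)**2 * (1.0 - R**2)**2)
    return n_si**2 * ((1.0 + R)**2 + sign * root) / (2.0 * np.cos(th)**2 * (1.0 - R)**2)
```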
To improve the sensitivity, we will analyze not the extracted ε(f) but the most direct experimental data T(f,C,d) or R(f,C), normalized to the corresponding case of distilled water—C = 0.
$${T}_{\mathrm{BSA}}/{T}_{\mathrm{water}}\left(f,C,d\right)={E}_{\mathrm{cell}}\left(f,C\_\mathrm{BSA},d\right)/{E}_{\mathrm{cell}}\left(f,C= 0,d\right),$$
(6)
$${R}_{\mathrm{BSA}}/{R}_{\mathrm{water}}\left(f,C\right)={E}_R\left(f,C\_\mathrm{BSA}\right)/{E}_R\left(f,C=0\right),$$
(7)
Thus, we cancel the strong background of water absorption and dispersion, and it is easier to observe small deviations of TBSA/Twater or RBSA/Rwater from 1 for low values of protein concentration C. To relate the observed changes in T or R to solution properties, we fit experimental data by the transmission or reflection, calculated from ε(ω,C) model [13]:
$${T}_m\left(f,C,d\right)={T}_{\mathrm{model}}\left(f,C,d\right)/{T}_{\mathrm{model}}\left(f,\left(C=0\right),d\right)$$
(8)
$${T}_{model}\left(\omega, C,d\right)=\exp \left(-i\left(\left(\sqrt{\varepsilon \left(\omega, C\right)}\right) d\omega /c\right)\right)$$
(9)
$${R}_m\left(f,C,d\right)={R}_{\mathrm{model}}\left(f,C,d\right)/{R}_{\mathrm{model}}\left(f,\left(C=0\right),d\right)$$
(10)
$${R}_{model}\left(\omega, C\right)=\frac{n^{\ast 2}\left(\omega \right)\cos \left(\theta \right)-\sqrt{n^{\ast 2}\left(\omega \right)-{\sin}^2\left(\theta \right)}}{n^{\ast 2}\left(\omega \right)\cos \left(\theta \right)+\sqrt{n^{\ast 2}\left(\omega \right)-{\sin}^2\left(\theta \right)}},\kern1em {n}^{\ast}\left(\omega \right)=\sqrt{\varepsilon \left(\omega, C\right)}/{n}_{Si}$$
(11)
The solutions of BSA (Sigma, USA) were prepared in double-distilled water. Dry samples were weighed and dissolved in water to obtain concentrations within 1–100 mg/mL (14.5 μM–1.4 mM). To distinguish the differences at these low concentrations, the result was averaged over three independent experiments (with cell refilling and reference measurement) for each type of measurement.
## 3 Results and Discussion
### 3.1 The Model
For the interpretation of the experimental measurements, we compare them with the model. The specific feature of the terahertz spectroscopy of solutions is the very strong absorption and dispersion of water itself. Actually, we do not study the solute, as its direct contribution is negligible. We study how the solute modifies the solvent—the appearance and the structure of the hydration shell around the solute molecules. To detect small changes in the smooth spectra of the solution, we have to describe the solvent properties precisely within a broad spectral range.
The simplest yet accurate expression to describe the dielectric function ε(ω) of aqueous solutions within the 0.1–5-THz frequency range is the well-known two-component relaxation (Debye) model with additional overdamped-oscillator and conductivity terms [5, 11, 13, 19, 20]:
$$\varepsilon \left(\omega, C\right)={\varepsilon}_{\infty }+\frac{\Delta {\varepsilon}_1(C)}{1+{\left({i\omega \tau}_1\right)}^{\alpha }}+\frac{\Delta {\varepsilon}_2}{1+{i\omega \tau}_2}+\frac{A_1}{{\omega_{01}}^2-{\omega}^2+{i\gamma}_{01}\omega }+ i\sigma /{\omega \varepsilon}_0+\cdots$$
(12)
where τ1 and τ2 are the relaxation times for the first (main, “slow,” γ-) relaxation process and the second (“fast”) relaxation term, ∆εi are the contributions to the permittivity from the first (∆ε1) and the second (∆ε2) terms, A1 is the amplitude, ω01 the frequency, and γ01 the linewidth of the oscillator term, σ is the static conductivity, and ε0 is the vacuum permittivity [11, 19, 20]. We suppose that, when the solute concentration C is nonzero, the only variable parameter is ∆ε1(C). The sum of the amplitudes of the considered terms should be equal to the precisely known static permittivity εs (εs = 78.3 for water at 25 °C, see Fig. 2b):
$${\varepsilon}_s={\varepsilon}_{\infty }+\Delta {\varepsilon}_1+\Delta {\varepsilon}_2+\frac{A_1}{{\omega_{01}}^2}+\cdots$$
(13)
This useful additional condition (13) fixes the ε∞ value, depending on the number of terms considered in (12).
The main term of the “slow” process is in the form of the Cole–Cole model (α ≤ 1), which takes into account the symmetrical broadening of the permittivity spectrum that is observed for proteins dissolved in water [11, 24] in the gigahertz range; when α = 1, it is simplified to the Debye model [5], which is the case for distilled water, for example. In our case, α < 1 improves the agreement of (12) with the experimental data for frequencies below 0.1 THz, but this improvement is not essential for our analysis, so we restrict ourselves to the α = 1 case.
The physical processes related to the parameters in (12) are the following: τ1 is assigned to the cooperative reorientation time of hydrogen-bonded (HB) bulk water molecules involving HB switching events [19, 20]. The τ2 (“fast”) relaxation process can be explained by the two-fraction model of water, where a part of the water is classified as non-hydrogen-bonded water isolated from the HB network, and τ2 is assigned to the collisional relaxation of non-hydrogen-bonded water [11, 25]. A noticeable sensitivity to the “fast relaxation” value is found in conductive ionic solutions [12, 26] and in water–alcohol mixtures [27]. Resonance processes correspond to several known vibrational modes in the terahertz region [28, 29]. The only significant vibrational mode for our frequency range is located at 5.3 THz.
All known processes in polar solutions below tens of terahertz are very broadband, so it is essential to combine several experimental techniques, each for its own spectral range, to obtain the spectra over many octaves. On the low-frequency side this is well-established dielectric spectroscopy [30], while on the high-frequency side it is mostly FTIR spectroscopy [31]. When studying relaxation or overdamped oscillation processes, log–log plots of the spectra are the most illustrative (Figs. 1 and 2).
To describe the dielectric function ε(ω) spectra of a protein solution, we fit the experimental complex spectra with model (12), with only the amplitude of the first relaxation term varied. We suggest that, instead of complicating the models used for the analysis of experimental data, it is necessary to improve the accuracy and repeatability of experiments and to expand the spectral range of measurements [11, 32]. We examine the three parts of the THz-TDS spectral range in detail and check the contribution from each term of model (12) to the complex dielectric function of the albumin solution. We divide the available frequency range into three parts: “low,” where the slow relaxation dominates; “middle,” for the fast relaxation; and “high,” for the oscillator term [20, 32] (Figs. 1 and 2). From the model, we can visualize the processes that dominate in each selected frequency region. At the “low” frequency f_low = 0.1 THz, the contribution from the “slow” relaxation term (Δε1, τ1) to the total absorption (Im(ε)) is 98%. At the “middle” frequency f_middle = 1 THz, the contribution from the “slow” relaxation term to the total absorption is 55%; the rest comes from the “fast” relaxation (35%) and from the oscillator (10%). At the “high” frequency f_high = 3 THz, the contribution from the “slow” relaxation term to the total absorption is 25%; the “fast,” 27%; and the oscillator, 48% (see Fig. 2a). Thus, if the changes in the terahertz response of the BSA solution are determined by the “fast” relaxation or by oscillation processes, then the best sensitivity to BSA concentration will be at the “middle” or “high” frequencies. If the only change with solution properties is related to γ-relaxation, i.e., to the “slow” relaxation term, then at the frequency f_low the differences of the protein solution from the solvent (water) are expected to be three to four times (98 vs 25%) stronger than in the f_high case.
For each spectral range, a particular experimental method is the most reliable one: thick-cell transmission for “low,” thin cell and ATR for “middle,” and ATR for “high.” Actually, all methods overlap considerably in frequency, so we combine the results obtained by the four different methods (two spectrometers × two configurations) and thus obtain a broader frequency range (mainly on the low-frequency side) than that typically published in THz-TDS papers on water solution spectroscopy. In all four cases, we start with the evaluation of the spectra of distilled water, until sufficient agreement with published data and with the model is obtained (Fig. 2).
It follows from Fig. 2 that our data on the dielectric function of water are consistent, within the errors, with literature data [22, 23] and model (12). The previously published model [20] does not ideally describe experiments in the region below 0.1 THz; therefore, we had to modify the parameters of the slow relaxation term. We use the following parameter values [11], which do not contradict the most reliable published data: ε∞ = 2.3, ∆ε1 = 74.9, τ1 = 9.47 ps, ∆ε2 = 1.67, τ2 = 250 fs, εs = 78.5; A1/ω01² = 1.14, ω01/2π = 5.3 THz, γ01/2π = 5.35 THz. Careful experimental studies of the water and bio-solution dielectric function in the gigahertz–terahertz range can also be found in [28, 33, 34, 35].
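For readers who want to reproduce the solvent curve, here is a small sketch of model (12) with the parameter values listed above (our own code; we set σ = 0 and α = 1 for distilled water, and read the oscillator values as ω01/2π = 5.3 THz and γ01/2π = 5.35 THz):

```python
import numpy as np

# Parameter values quoted in the text for distilled water
EPS_INF, D_EPS1, TAU1 = 2.3, 74.9, 9.47e-12        # -, -, seconds
D_EPS2, TAU2 = 1.67, 250e-15                        # -, seconds
A_OVER_W01SQ = 1.14
W01 = 2.0 * np.pi * 5.3e12                          # rad/s
G01 = 2.0 * np.pi * 5.35e12                         # rad/s

def eps_model(f_THz, d_eps1=D_EPS1):
    """Eq. (12) with sigma = 0 and alpha = 1; d_eps1 is the only parameter
    that is varied with BSA concentration in the analysis below."""
    w = 2.0 * np.pi * np.asarray(f_THz) * 1e12
    A1 = A_OVER_W01SQ * W01**2
    return (EPS_INF
            + d_eps1 / (1.0 + 1j * w * TAU1)
            + D_EPS2 / (1.0 + 1j * w * TAU2)
            + A1 / (W01**2 - w**2 + 1j * G01 * w))
```

With this helper, the model transmission and reflection of Eqs. (8)–(11) follow directly by inserting √ε(ω, C) into the corresponding expressions.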
### 3.2 The Most Appropriate Fitting Parameter in the Relaxation Model
In order to investigate the dependence of the terahertz response of the solution on the concentration, a particular model parameter should be chosen, and its value should be related to the concentration value. Such a model should accurately describe the change in the whole detected spectrum for this concentration. The applicability of this approach was first checked for a high (100 mg/mL) concentration of BSA. All the data on this BSA solution (from the four measurement methods) were accurately combined in the form of ε(ω) and compared with the water data [36]—the difference in the spectra is mainly manifested in the imaginary part, at low frequencies. Such a difference is best fitted by a decrease in the amplitude of the slow relaxation, Δε1.
An alternative approach is to use τ1(C) as the only varied parameter instead of ∆ε1, as was done for glucose solutions [37, 38]. At frequencies above f = 0.3 THz, it is mathematically almost equivalent to vary τ1 or ∆ε1, because ωτ1 >> 1 and it enters just as a coefficient of the slow relaxation term in (12). Actually, ∆ε1/τ1(C) is a more appropriate parameter to describe spectral changes at high terahertz frequencies [38], see Fig. 3a. At gigahertz frequencies, the changes of both ∆ε1 and τ1 for BSA solutions are pronounced in the experimental spectra [39, 40], but for the terahertz range we restrict ourselves to the ∆ε1 variation only. In Fig. 3, we plot the solution transmission normalized to the transmission of the solvent using Relations (8)–(9). In this case, the influence of the cell windows and all reflections are canceled out, so it is the most stable and direct data evaluation. We demonstrate that the variation of either Δε1 or τ1 can describe the presence of BSA in solution; above f = 0.3 THz, we cannot distinguish which parameter should be varied. But below 0.1 THz, the variation of Δε1 (using (1) and (8)) fits the experiment much better, which confirms the applicability of our choice. There is one more often-varied parameter in the dielectric function model (12)—the so-called fast relaxation term Δε2; its variation modifies the solution spectra mostly at 0.3–2-THz frequencies. We checked whether Δε2(C) can describe the observed experimental changes, using the reflection data and Relation (10). We see in Fig. 3b that for high frequencies the variation of Δε1(C) fits the experiment (the reflection is evaluated with (10)) quite well, while the variation of Δε2(C) cannot describe the observed changes.
Thus, in the whole frequency range used, 0.05–3.2 THz, the variation of only one parameter, Δε1(C), describes the changes in the solution spectra within our experimental error range. With this hypothesis, we will further analyze small differences in the BSA solution around three selected frequencies (0.1, 1, 3 THz); the terahertz spectrum of a BSA solution at concentration C will be characterized only by the value of Δε1 [11].
The microscopic mechanism responsible for the observed effect is the formation of bound water around the solute molecules. Protein molecules themselves do not have as strong an absorption and dispersion as water molecules do; we neglect the contribution of the protein itself to the resulting spectra—its contribution enters through the hydrated water shell. It is known that the slow relaxation process (∆ε1) in bound water is shifted to lower frequencies (τbound > τ1), so that its contribution to the terahertz band becomes small [41]. The conversion of a part of the volume of water molecules from the free to the bound state is described, for the terahertz range, as a decrease in the amplitude of the slow relaxation. Moreover, the ratio ∆ε1(C)/∆ε1(0) describes the relative volume of bound water in this simplified model.
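As a concrete illustration of this simplified picture (using the Δε1 values that appear in Table 2 below): for the 50 mg/mL solution Δε1 ≈ 69.6 against 74.9 for the solvent, so Δε1(C)/Δε1(0) ≈ 0.93, i.e. roughly 7% of the slow-relaxation amplitude of free water is removed by the hydration shells.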
### 3.3 The Dielectric Function of the BSA Solution
The dielectric function, extracted from the 0.5-mm cell transmission using (1)–(4), is rather sensitive to BSA concentration only in the low-frequency part of the spectrum. Regardless of the model, we observe a decrease in Im(ε) with an increase in concentration C, see Fig. 4a. We can simulate this spectral change by decreasing the Δε1 value, see the inset in Fig. 4a. This is the way in which we relate Δε1 to C for further analysis. We also evaluate Re(ε), but it is less sensitive to BSA concentration in the terahertz range because of the large constant background, ε∞ >> Re(Δε1/(1+iωτ1)).
For the high-frequency part of the spectrum, the extracted dielectric function (from the reflection data, using (5)) is not so precise; the changes in the Im(ε) values are clearly distinguishable only for high BSA concentrations, starting from 50 mg/mL, see Fig. 4b.
For each concentration, we performed three independent measurements of the sample and the reference spectra, and averaged them. The presented error is the square root of the standard deviation from this mean value.
### 3.4 Relative Spectra Analysis
The method to relate Δε1 to C allows us to detect small (C < 5 mg/mL) BSA concentrations—see Fig. 5a. This approach describes the observed spectral changes within the whole frequency range used (0.05–3.2 THz) and for a number of concentrations from 10 mg/mL to the saturated BSA solution [11]. We perform fitting simultaneously for both parts of complex Tw or Rw, which complicates an exact agreement in a wide frequency range but increases the repeatability of the obtained Δε1(C) value.
For frequencies above 2.5 THz in our setup, the signal-to-noise ratio is insufficient to determine low concentrations, but the overall trend is visible.
### 3.5 BSA Concentration Dependence
The idea of all our data analysis is the following: to increase the accuracy of the obtained numbers from the whole available spectrum, we need to obtain a single number that characterizes the changes in the shape and amplitude of the real and imaginary parts of the spectrum (reflection or transmission), with reasonable accuracy; for BSA solution (C = 5–500 mg/mL) in 0.05–3.2-THz range, the most appropriate parameter of model (12) is Δε1. In this way, we process the ratio of the spectra in all methods and frequencies (Fig. 5). For each concentration C (and three spectral subregions—“low,” “middle,” and “high”), we find one value Δε1 (Fig. 5). Errors of Fig. 6a are the result of averaging of three independent measurements.
Since the model is the same for the whole frequency range, the evaluated Δε1(C) values for the three different frequencies should coincide in Fig. 6a, even though they are measured in different experiments; the errors are smaller for low frequency because the sensitivity is higher there (see Fig. 3). If we present the solution properties as the absorption coefficient normalized to the water absorption, we see the difference in sensitivity to concentration for low and for high frequency in Fig. 6b. This confirms our suggestion that the “slow” relaxation process is the only one sensitive to BSA concentration. To simplify the comparison with other (past and future) studies, we present our data for a number of concentrations and frequencies in Table 1.
Table 1 BSA water solution properties at 22 °C

| C, mg/mL | ε at 0.1 THz | ε at 1 THz | ε at 3 THz | α, cm−1 at 0.1 THz | α, cm−1 at 1 THz | α, cm−1 at 3 THz |
| --- | --- | --- | --- | --- | --- | --- |
| 0 | 7.33 − i12.5 | 4.151 − i2.28 | 3.536 − i1.74 | 79.3 | 227 | 564 |
| 30 | 7.21 − i11.8 | 4.150 − i2.21 | 3.535 − i1.71 | 76.1 | 220 | 556 |
| 100 | 7.07 − i10.9 | 4.149 − i2.12 | 3.535 − i1.68 | 72.3 | 211 | 547 |
Although for increasing concentration the difference in the absolute values of absorption coefficient is maximal for the high-frequency part, the relative difference, on the contrary, is maximal for low frequency, see Fig. 6b. We also see from Table 1 that the real part of the dielectric function is much less sensitive to solute concentration than the imaginary part.
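To make this concrete with the numbers of Table 1: between 0 and 100 mg/mL, the absorption coefficient drops by 564 − 547 = 17 cm−1 at 3 THz but only by 79.3 − 72.3 = 7 cm−1 at 0.1 THz, whereas the relative change is 72.3/79.3 ≈ 0.91 (about a 9% drop) at 0.1 THz versus 547/564 ≈ 0.97 (about a 3% drop) at 3 THz.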
In the literature, the data on the terahertz response of BSA solutions are presented in very different ways, for various frequencies and concentrations. Since we determine Δε1 vs BSA concentration (see Fig. 6a) and prove the applicability of model (12) in the used parameter range, we know the BSA dielectric function for a desired frequency and concentration, so we can simulate the relative absorption (except for the pH value) and compare it to the published experimental results, see Table 2.
Table 2 BSA solution published THz properties

| BSA concentration (mg/mL)/in solution | Frequency, THz | The literature data | Our data (present work) | Ref |
| --- | --- | --- | --- | --- |
| 100 (1.4 mM)/10 mM NaCl, pH 7 | 1 | αBSA/αwater = 210 cm−1/220 cm−1 = 0.954 | αBSA/αwater = 211 cm−1/227 cm−1 = 0.929 | 9 |
| 100 (1.4 mM)/10 mM NaCl, pH 7 | 3 | αBSA/αwater = 520 cm−1/530 cm−1 = 0.98 | αBSA/αwater = 547 cm−1/564 cm−1 = 0.97 | 9 |
| 100/water | 0.5 THz center (0.2–2) | k/kwater = α/αwater = 0.93 | α/αwater = 0.915 | 37 |
| 100/water | 0.5 THz center (0.2–2) | n/nwater = 0.987 | n/nwater = 0.990 | 37 |
| 15/water | 0.22–0.32 | α/αwater = 1.2 | α/αwater = 0.90 | 10 |
| 100/water | 0.22–0.32 | α/αwater = 0.2 | α/αwater = 0.97 | 10 |
| 50/10 mM NaCl, pH 6 | 0.3–3 | Δε1/Δε1(solvent) = 68.2/68.73 = 0.993 | Δε1/Δε1(solvent) = 69.6/74.9 = 0.93 | 12 |
There is still a lot of contradiction in the published data, so detailed and broadband THz studies of dilute solutions of a well-known protein (BSA) should be conducted by all available methods and compared.
## 4 Conclusion
The dielectric properties of BSA solutions were thoroughly measured within 0.05–3.2 THz using a combination of transmission and reflection methods, by optimizing the dynamic range of the TDS spectrometers to low and high frequencies. We compared the BSA response in three spectral subregions, around 0.1, 1, and 3 THz, and found the low-frequency part to be the most sensitive to BSA concentration in water. The solution spectra were modeled by the reduction of Δε1, the amplitude of the “slow” or γ-relaxation process in water, with increasing concentration of the solute. This one-parameter variation describes the changes of the complex dielectric function spectra over the whole frequency range with reasonable accuracy. We have not confirmed the anomalous concentration dependences of absorption observed in other papers [9, 10] at low BSA concentrations. The precision of our TDS method may be insufficient for the high-frequency part of the THz spectra, but in no case is the BSA solution more transparent than water (as was published) for the studied frequencies and concentrations. The only agreement with the published data is the following: the dependence of the total absorption on the concentration of the solute is nonlinear and has a bend at a concentration of about 10–60 mg/mL. We found this bend point for the case of BSA in distilled water at 22 °C to be at C = 30 ± 5 mg/mL. The data in Fig. 6a for the evaluated dielectric function (at 0.05–0.3 THz) confirm this concentration bend regardless of the models used. At higher frequencies, the data evaluation had to use model fitting; though our model is simplified and semi-empirical, we prove that it is valid for a broad range of frequencies and concentrations. We believe that, instead of complicating the models used for the analysis of experimental data, it is necessary to improve the accuracy and repeatability of experiments, to expand the spectral range of measurements, and to increase the interaction volume and dynamic range.
## Notes
### Funding Information
This work has been supported by the Russian Foundation for Basic Research (project no. 17-00-00275 (17-00-00270)).
### Conflict of Interest
The authors declare that they have no conflict of interest.
## References
1. L. Comez, M. Paolantoni, P. Sassi, S. Corezzi, A. Morresi, D. Fioretto, Soft Matter 12 (25), 5501 (2016)
2. D. Laage, T. Elsaesser, J. T. Hynes, Chem. Rev. 117, 10694 (2017)
3. N. Nandi, K. Bhattacharyya, B. Bagchi, Chem. Rev. 100 (6), 2013 (2000)
4. M. Wolf, R. Gulich, P. Lunkenheimer, A. Loidl, Biochim. Biophys. Acta: Proteins Proteomics 1824 (5), 723 (2012)
5. V. Raicu, Y. Feldman, Dielectric Relaxation in Biological Systems: Physical Principles, Methods, and Applications, Oxford University Press, New York, 2015
6. A. Barth, Biochimica et Biophysica Acta 1767, 1073 (2007)
7. K. Shiraga, Y. Ogawa, N. Kondo, Biophysical Journal 111, 2629 (2016)
8. J. Xu, K. W. Plaxco, S. J. Allen, Protein Science 15 (5), 1175 (2006)
9. J. W. Bye, S. Meliga, D. Ferachou, G. Cinque, J. A. Zeitler, R. J. Falconer, J. Phys. Chem. A 118 (1), 883 (2014)
10. O. Sushko, R. Dubrovka, R. S. Donnan, The Journal of Chemical Physics 142, 055101-1 (2015)
11. M. M. Nazarov, O. P. Cherkasova, A. P. Shkurinov, Quantum Electronics 46 (6), 488 (2016)
12. N. Penkov, V. Yashin, E. Fesenko, A. Manokhin, E. Fesenko, Applied Spectroscopy 72 (2), 257 (2018)
13. O. P. Cherkasova, M. M. Nazarov, A. A. Angeluts, A. P. Shkurinov, Optics and Spectroscopy 120 (1), 50 (2016)
14. M. M. Nazarov, A. P. Shkurinov, A. A. Angeluts, D. A. Sapozhnikov, Radiophysics and Quantum Electronics 52 (18), 536 (2009)
15. E. V. Fedulova, M. M. Nazarov, A. A. Angeluts, M. S. Kitai, V. I. Sokolov, A. P. Shkurinov, Proc. SPIE 8337, Saratov Fall Meeting 2011: Optical Technologies in Biophysics and Medicine XIII, 83370I (2012)
16. N. Gorlenko, B. Laptev, G. Sidorenko, Y. Sarkisov, T. Minakova, A. Kylchenko, O. Zubkova, AIP Conference Proceedings 1698 (1), 060002 (2016)
17. M. Nazarov, A. Shkurinov, V. V. Tuchin, X. C. Zhang, Terahertz tissue spectroscopy and imaging, in Handbook of Photonics for Biomedical Science (2010)
18. A. A. Angeluts, A. V. Balakin, M. G. Evdokimov, M. N. Esaulkov, M. M. Nazarov, I. A. Ozheredov, D. A. Sapozhnikov, P. M. Solyankin, O. P. Cherkasova, A. P. Shkurinov, Quantum Electronics 44 (7), 614 (2014)
19. M. Nagai, H. Yada, T. Arikawa, K. Tanaka, International Journal of Infrared and Millimeter Waves 27 (4), 505 (2006)
20. H. Yada, M. Nagai, K. Tanaka, Chem. Phys. Lett. 464, 166 (2008)
21. O. P. Cherkasova, M. M. Nazarov, A. P. Shkurinov, V. I. Fedorov, Radiophys. Quantum El. 52, 518 (2009)
22. N. Q. Vinh, S. J. Allen, K. W. Plaxco, J. Am. Chem. Soc. 133 (23), 8942 (2011)
23. K. Shiraga, T. Suzuki, N. Kondo, J. De Baerdemaeker, Y. Ogawa, Carbohydr. Res. 406, 46 (2015)
24. K. Fuchs, U. Kaatze, The Journal of Physical Chemistry B 105 (10), 2036 (2001)
25. I. Popov, P. B. Ishai, A. Khamzin, Y. Feldman, Physical Chemistry Chemical Physics 18, 13941 (2016)
26. P. U. Jepsen, H. Merbold, J Infrared Milli Terahz Waves 31, 430 (2010)
27. S. Sarkar, D. Saha, S. Banerjee, A. Mukherjee, P. Mandal, Chemical Physics Letters 678, 65 (2017)
28. K. Shiraga, A. Adachi, M. Nakamura, T. Tajima, K. Ajito, Y. Ogawa, The Journal of Chemical Physics 146 (10), 105102 (2017)
29. T. Fukasawa, T. Sato, J. Watanabe, Y. Hama, W. Kunz, R. Buchner, Physical Review Letters 95 (19), 197802 (2005)
30. U. Kaatze, Journal of Chemical and Engineering Data 34 (4), 371 (1989)
31. M. N. Afsar, J. B. Hasted, J. Opt. Soc. Am. 67, 902 (1977)
32. O. Cherkasova, M. Nazarov, A. Shkurinov, Journal of Physics: Conference Series 793 (2017)
33. D. K. George, A. Charkhesht, N. Q. Vinh, Review of Scientific Instruments 86, 123105 (2015)
34. U. Moller, D. G. Cooke, K. Tanaka, P. U. Jepsen, J. Opt. Soc. Am. B 26 (9), A113 (2009)
35. W. J. Ellison, Journal of Physical and Chemical Reference Data 36, 1 (2007)
36. O. P. Cherkasova, M. M. Nazarov, A. P. Shkurinov, Proc. IEEE 41st Int. Conf. on Infrared, Millimeter, and Terahertz Waves (IRMMW-THz), Copenhagen, Denmark (2016)
37. M. Grognot, G. Gallot, The Journal of Physical Chemistry B 121 (41), 9508 (2017)
38. K. Fuchs, U. Kaatze, J. Phys. Chem. B 105 (10), 2036–2042 (2001)
39. O. Cherkasova, M. Nazarov, A. Shkurinov, Optical and Quantum Electronics 48 (3), 217 (2016)
40. T. H. Basey-Fisher, S. M. Hanham, H. Andresen, S. A. Maier, M. M. Stevens, N. M. Alford, N. Klein, Applied Physics Letters 99 (23), 233703 (2011)
41. K. Shiraga, T. Suzuki, N. Kondo, T. Tajima, M. Nakamura, H. Togo, A. Hirata, K. Ajito, Y. Ogawa, The Journal of Chemical Physics 142 (23), 234504 (2015)
http://mathhelpforum.com/differential-geometry/106057-continuous-functions.html | continuous functions
Give an example of a function f:R --> R that is continuous and bounded on R, but not uniformly continuous on R. Be certain to include a proof that the function is not uniformly continuous.
Thanks!
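One standard example (sketched here as a hint; it was not part of the original post): f(x) = sin(x^2). It is continuous on R and bounded by 1. It is not uniformly continuous: take x_n = sqrt(2*pi*n) and y_n = sqrt(2*pi*n + pi/2); then y_n - x_n = (pi/2)/(x_n + y_n) -> 0, while |f(y_n) - f(x_n)| = |1 - 0| = 1 for every n, so no single delta can work for epsilon = 1/2.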
http://stackoverflow.com/questions/7697044/ant-and-eclipse | # Ant and Eclipse
Hi, I am a beginner Java programmer and recently I've started reading Thinking in Java, 4th edition, to consolidate my knowledge of Java after I read Head First Java. The problem is that this book has its own library and I can't seem to make it work in Eclipse, even after I did everything it said in the website guide. I installed Ant according to this video guide http://www.youtube.com/watch?v=XJmndRfb1TU and I'm getting this error:
"Unable to locate tools.jar.Expected to find it in C:\Program files\Java\jre7\lib\tools.jar Buildfile:C:\TIJ4\code\build.xml
build:
BUILD FAILED C:\TIJ4\code\build.xml:59:J2SE5 required
Total time:0 seconds
I tried reinstalling the JDK, which was suggested on a forum, but it still didn't work, so I don't really know what to do.
Can anyone tell me how I can solve this problem? Also, more importantly, can't this be done with Eclipse alone, without installing Ant? (I've only used Eclipse for writing and compiling code, so I'm not very familiar with it.) Thank you.
You are using a JRE instead of a JDK. Install a JDK and point your PATH and JAVA_HOME variables to the JDK home, not to the JRE home.
I see that Ant is using the Java Home from the JRE, e.g. `C:\Program files\Java\jre7\` But it should be `C:\Program files\Java\jdk1.7.0\` or similar.
Check your system's environment variables (e.g. press Windows-Key and Pause together, then select Extended Settings > Environment Variables. Check that JAVA_HOME is set to the JDK installation path and that in the PATH variables, the folder of the JDK comes before the folder of the JRE (or remove/replace the JRE path altogether with the one from the JDK).
Ant needs to find the JDK first in the PATH.
I have solved the error "Unable to locate tools.jar. Expected to find it in C:\Program files\Java\jre7\lib\tools.jar Buildfile:C:\TIJ4\code\build.xml". Now I'm getting only this: "BUILD FAILED C:\TIJ4\code\build.xml:59: J2SE5 required". Any tips on how I can solve this second error? – Aly Oct 8 '11 at 13:35
Please post line 50-60 of your build.xml. What ant task is being called there? Is it the <javac> ant task? – mhaller Oct 9 '11 at 10:15
Install the JDK, latest version, and check the environment variable `JAVA_HOME`.
If it is not found, create it and set it to `C:\Program Files\Java\jdk1.7.0` ...
tools.jar contains the java compiler, and only comes with the Java Development Kit (JDK). Your error message point to the Java Runtime (JRE).
From Eclipse, you set that from Window/Preferences/Java/Installed JREs. This is equivalent to, and will override, the alternative way of setting it via Windows environment variables.
I tried installing the JDK (jdk1.7.0_09) and did whatever was needed in the Environment Variables section, but I was still getting this error:
"Unable to locate tools.jar. Expected to find it in C:\Program Files\Java\jre7\lib\tools.jar Buildfile: build.xml does not exist! Build failed"
Tools.jar file did not really exists in this path, instead i found it in C:\Program Files\Java\jdk1.7.0_09\lib
so I simply copied this tools.jar file and placed it under C:\Program Files\Java\jre7\lib\
and it worked - (not sure if that was the right way)
Now when I typed "ant" in the cmd prompt it gave me the below error:
Buildfile: build.xml does not exist! Build failed
On investigating it further I found that if you get the above error, it means that Ant is installed successfully.
http://ant.apache.org/manual/install.html#getBinary - Check Installation
What ivantrox86 said is true, but you need to do it in all the build.xml files, and there are like 20 of them. So go to each and every folder in the c:\tij4\code directory, find every build.xml file, and change the second argument (arg2) to 1.5 instead of the default value (${ant.java.version}). Works 100%.
-
This can happen with ant if `JAVA_HOME` is set incorrectly - it seems to try to guess what the value should be and comes up with the jre7 address. In my case setting `JAVA_HOME` to `C:\Progra~1\Java\jdk1.7.0_45` fixed the problem. Of course the address will vary depending on where your jdk is installed.
```
<equals arg1="1.5" arg2="${ant.java.version}"/>
```
becomes
```
<equals arg1="1.5" arg2="1.5"/>
```
https://www.physicsforums.com/threads/levi-civita-symbol-and-kronecker-delta-identities-in-4-dimensions.762068/ | # Levi civita symbol and kronecker delta identities in 4 dimensions
1. Jul 17, 2014
### Emil
I'm trying to explicitly show that
$$\varepsilon^{0 i j k} \varepsilon_{0 i j l} = - 2 \delta^k_l$$
I sort of went off the deep end and tried to express everything instead of using snazzy tricks and ended up with
$$\begin{eqnarray*} \delta^{\mu \rho}_{\nu \sigma} & = & \delta^{\mu}_{\nu} \delta^{\rho}_{\sigma} - \delta^{\mu}_{\sigma} \delta^{\rho}_{\nu}\\ & & \\ \delta^{\mu \rho_1 \rho_2}_{\nu \sigma_1 \sigma_2} & = & \delta^{\mu}_{\nu} \delta^{\rho_1 \rho_2}_{\sigma_1 \sigma_2} - \delta^{\mu}_{\sigma_1} \delta^{\rho_1 \rho_2}_{\nu \sigma_2} + \delta^{\mu}_{\sigma_1} \delta^{\rho_1 \rho_2}_{\sigma_2 \nu}\\ & & \\ \delta^{\mu \rho_1 \rho_2 \rho_3}_{\nu \sigma_1 \sigma_2 \sigma_3} & = & \delta^{\mu}_{\nu} \delta^{\rho_1 \rho_2 \rho_3}_{\sigma_1 \sigma_2 \sigma_3} - \delta^{\mu}_{\sigma_1} \delta^{\rho_1 \rho_2 \rho_3}_{\nu \sigma_2 \sigma_3} + \delta^{\mu}_{\sigma_1} \delta^{\rho_1 \rho_2 \rho_3}_{\sigma_2 \nu \sigma_3} - \delta^{\mu}_{\sigma_1} \delta^{\rho_1 \rho_2 \rho_3}_{\sigma_2 \sigma_3 \nu}\\ & & \\ \varepsilon^{0 i j k} \varepsilon_{0 i j l} = \delta^{0 i j k}_{0 i j l} & = & \delta^0_0 \delta^{i j k}_{i j l} - \delta^0_i \delta^{i j k}_{0 j l} + \delta^0_i \delta^{i j k}_{j 0 l} - \delta^0_i \delta^{i j k}_{j l 0}\\ & & \\ & = & \delta^0_0 \left( \delta^i_i \delta^{j k}_{j l} - \delta^i_j \delta^{j k}_{i l} + \delta^i_i \delta^{j k}_{l j} \right) \ldots\\ & & - \delta^0_i \left( \delta^i_0 \delta^{j k}_{j l} - \delta^i_j \delta^{j k}_{0 l} + \delta^0_j \delta^{j k}_{l 0} \right) \ldots\\ & & + \delta^0_i \left( \delta^i_j \delta^{j k}_{0 l} - \delta^i_0 \delta^{j k}_{j l} + \delta^0_0 \delta^{j k}_{l j} \right) \ldots\\ & & - \delta^0_i \left( \delta^i_j \delta^{j k}_{l 0} - \delta^i_l \delta^{j k}_{j 0} + \delta^0_l \delta^{j k}_{0 j} \right)\\ & & \\ & = & \delta^0_0 \left( \delta^i_i \left( \delta^j_j \delta^k_l - \delta^j_l \delta^k_j \right) - \delta^i_j \left( \delta^j_i \delta^k_l - \delta^j_l \delta^k_i \right) + \delta^i_i \left( \delta^j_l \delta^k_j - \delta^j_j \delta^k_l \right) \right) \ldots\\ & & - \delta^0_i \left( \delta^i_0 \left( \delta^j_j \delta^k_l - \delta^j_l \delta^k_j \right) - \delta^i_j \left( \delta^j_0 \delta^k_l - \delta^j_l \delta^k_0 \right) + \delta^0_j \left( \delta^j_l \delta^k_0 - \delta^j_0 \delta^k_l \right) \right) \ldots\\ & & + \delta^0_i \left( \delta^i_j \left( \delta^j_0 \delta^k_l - \delta^j_l \delta^k_0 \right) - \delta^i_0 \left( \delta^j_j \delta^k_l - \delta^j_l \delta^k_j \right) + \delta^0_0 \left( \delta^j_l \delta^k_j - \delta^j_j \delta^k_l \right) \right) \ldots\\ & & - \delta^0_i \left( \delta^i_j \left( \delta^j_l \delta^k_0 - \delta^j_0 \delta^k_l \right) - \delta^i_l \left( \delta^j_j \delta^k_0 - \delta^j_0 \delta^k_j \right) + \delta^0_l \left( \delta^j_0 \delta^k_j - \delta^j_j \delta^k_0 \right) \right)\\ & & \\ & & 0 = i = j\\ & & \\ & = & \delta^0_0 \delta^i_i \delta^j_j \delta^k_l - \delta^0_0 \delta^i_j \delta^j_i \delta^k_l - \delta^0_0 \delta^i_i \delta^j_j \delta^k_l \ldots\\ & & - \delta^0_i \delta^i_0 \delta^j_j \delta^k_l + \delta^0_i \delta^i_j \delta^j_0 \delta^k_l + \delta^0_i \delta^0_j \delta^j_0 \delta^k_l \ldots\\ & & + \delta^0_i \delta^i_j \delta^j_0 \delta^k_l - \delta^0_i \delta^i_0 \delta^j_j \delta^k_l - \delta^0_0 \delta^j_j \delta^k_l \ldots\\ & & + \delta^0_i \delta^i_j \delta^j_0 \delta^k_l\\ & & \\ & = & \delta^k_l - \delta^k_l\\ \end{eqnarray*}$$
The bottom line is that all I want for christmas is to get $$- 2 \delta^k_l$$ from
$$\varepsilon^{0 i j k} \varepsilon_{0 i j l} = \delta^{0 i j k}_{0 i j l} = \left|\begin{array}{cccc} \delta^0_0 & \delta^0_i & \delta^0_j & \delta^0_l\\ \delta^i_0 & \delta^i_i & \delta^i_j & \delta^i_l\\ \delta^j_0 & \delta^j_i & \delta^j_j & \delta^j_l\\ \delta^k_0 & \delta^k_i & \delta^k_j & \delta^k_l \end{array}\right| =$$
in a way that doesn't involve 100000 kronecker deltas. THAAAAANKS :rofl:
2. Jul 17, 2014
### Fredrik
Staff Emeritus
The formula doesn't hold if k=l=0.
I have never tried to brute-force this sort of thing. It's so much easier to make observations that simplify the problem. Let k be an arbitrary element of {1,2,3}. $\varepsilon^{0 i j k} \varepsilon_{0 i j l}$ is a sum with 4×4=16 terms, but most of them are zero. Clearly all terms with i=j, all terms with i=0 or j=0, and all terms with i=k or j=k, are zero. This only leaves two terms!
Let a,b be the two elements of {1,2,3} that aren't equal to k. The only terms that we haven't proved are zero are (no summation) $\varepsilon^{0 a b k}\varepsilon_{0 a b l}$ and $\varepsilon^{0 b a k}\varepsilon_{0 b a l}$. If $l\neq k$, then $l\in\{a,b\}$, and both terms are zero. If $l=k$, then one of the terms is 1×1=1, and the other is (-1)×(-1)=1.
Hm, I didn't get a minus sign. I'm guessing that your convention isn't that $\varepsilon^{0123}$ and $\varepsilon_{0123}$ are both 1. One of them is defined to be -1, right?
3. Jul 18, 2014
### Emil
Not understanding how contra/covariance comes in, and what to sum over
I know that the convention in use is $$\varepsilon^{\alpha \beta \gamma \delta} = - \varepsilon_{\alpha \beta \gamma \delta}$$ I'm not quite comfortable on how it produces the minus signs.
Does one of the terms become $$\varepsilon^{0 a b k} \varepsilon_{0 a b l} = \left( - 1 \right) \left( 1 \right) = - 1$$ and the other $$\varepsilon^{0 b a k} \varepsilon_{0 b a l} = \left( 1 \right) \left( - 1 \right) = - 1$$ If so, why explicitly? Then I want
to sum the two and stick a kronecker delta.
I'm shooting in the dark here but I think I need an equation explicitly written out to understand. I want to write
$$\varepsilon^{0 i j k} \varepsilon_{0 i j l} =EinsteinSummation?OverWhat?= \varepsilon^{0 a b k} \varepsilon_{0 a b l} + \varepsilon^{0 b a k} \varepsilon_{0 b a l} = \left( - 1 \right) \left( 1 \right) + \left( 1 \right) \left( - 1 \right) = - 2$$
for all $$k = l$$ i.e. $$\varepsilon^{0 i j k} \varepsilon_{0 i j l} = - 2 \delta^k_l$$
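For completeness, a minimal explicit version of that sum (a sketch, using Fredrik's counting from post #2 together with the convention $\varepsilon^{\alpha \beta \gamma \delta} = -\varepsilon_{\alpha \beta \gamma \delta}$ stated above): the Einstein summation runs over the repeated indices $i$ and $j$, each from 0 to 3, and for a spatial $k$ with $a, b$ the two elements of $\{1,2,3\}$ other than $k$, only the terms $(i,j) = (a,b)$ and $(b,a)$ survive,
$$\varepsilon^{0 i j k} \varepsilon_{0 i j l} = \sum_{i=0}^{3} \sum_{j=0}^{3} \varepsilon^{0 i j k} \varepsilon_{0 i j l} = \varepsilon^{0 a b k} \varepsilon_{0 a b l} + \varepsilon^{0 b a k} \varepsilon_{0 b a l} = -\left[ \left( \varepsilon^{0 a b k} \right)^2 + \left( \varepsilon^{0 b a k} \right)^2 \right] \delta^k_l = - 2\, \delta^k_l ,$$
since $\varepsilon_{0 a b l}$ is nonzero only for $l = k$, and in that case it equals $-\varepsilon^{0 a b k}$.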
http://mathhelpforum.com/pre-calculus/23146-cubic-function-integral-coefficients.html | # Math Help - Cubic function with integral coefficients.
1. ## Cubic function with integral coefficients.
"A cubic function f(x) with integral coefficients has the following properties: f(3/2) = 0, (x-2) is a factor of f(x), and f(4) = 50. Determine f(x)."
How do I solve this?
The two factors are (x - 3/2) and (x - 2), right?
I don't know what to do from there.
2. Originally Posted by Jeavus
"A cubic function f(x) with integral coefficients has the following properties: f(3/2) = 0, (x-2) is a factor of f(x), and f(4) = 50. Determine f(x)."
How do I solve this?
The two factors are (x - 3/2) and (x - 2), right?
I don't know what to do from there.
So,
$f(x) = A(x-3/2)(x-2)(x-c)$
And $f(4)=50$ so,
$50 = A(4-3/2)(4-2)(4-c)$
It seems you need another condition.
3. Hello, Jeavus!
A cubic function f(x) with integral coefficients has the following properties:
. . $f\left(\frac{3}{2}\right) \:=\: 0,\;\;(x-2)$ is a factor of $f(x)$, and $f(4) \:= \:50$
Determine $f(x)$.
Two of the factors are: . $(x - 2)$ and $(2x-3)$
Since the function is a cubic, there is a third factor: . $(x - a)$
Hence: . $f(x) \;\;=\;\;(x-2)(2x-3)(x-a) \;\;=\;\;2x^3 - (2a+7)x^2 + (7a+6)x - 6a$
Since $f(4) = 50$, we have: . $2\!\cdot\!4^3 - (2a+7)\!\cdot\!4^2 + (7a+6)\!\cdot\!4 - 6a\;=\;50$
. . which simplifies to: . $a \:=\:-1$
Therefore, the cubic is: . ${\color{blue}f(x)\;=\;2x^3 - 5x^2 - x + 6}$
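A quick symbolic check of this result (a sketch in Python/SymPy; not part of the original thread):

```python
from sympy import symbols, expand, Rational

x = symbols('x')
f = (x - 2)*(2*x - 3)*(x + 1)        # factored form with the third root a = -1

print(expand(f))                     # 2*x**3 - 5*x**2 - x + 6
print(f.subs(x, Rational(3, 2)))     # 0  -> f(3/2) = 0
print(f.subs(x, 2))                  # 0  -> (x - 2) is a factor
print(f.subs(x, 4))                  # 50 -> f(4) = 50
```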
https://civil.gateoverflow.in/542/gate2014-2-45 | With reference to a standard Cartesian (x, y) plane, the parabolic velocity distribution profile of fully developed laminar flow in x-direction between two parallel, stationary and identical plates that are separated by distance, h, is given by the expression
$$u= - \frac{h^2}{8 \mu} \: \frac{dp}{dx} \bigg[1-4 \bigg( \frac{y}{h} \bigg) ^2 \bigg]$$
In this equation, the y=0 axis lies equidistant between the plates at a distance h/2 from the two plates, p is the pressure variable and $\mu$ is the dynamic viscosity term. The maximum and average velocities are, respectively
1. $u_{max} = - \dfrac{h^2}{8 \mu} \: \dfrac{dp}{dx} \text{ and } u_{average} = \dfrac{2}{3} u_{max} \\$
2. $u_{max} = \dfrac{h^2}{8 \mu} \: \dfrac{dp}{dx} \text{ and } u_{average} = \dfrac{2}{3} u_{max} \\$
3. $u_{max} = - \dfrac{h^2}{8 \mu} \: \dfrac{dp}{dx} \text{ and } u_{average} = \dfrac{3}{8} u_{max} \\$
4. $u_{max} = \dfrac{h^2}{8 \mu} \: \dfrac{dp}{dx} \text{ and } u_{average} = \dfrac{3}{8} u_{max}$
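A short worked check (a sketch; not part of the original question): the profile is maximal at the mid-plane y = 0, and averaging it across the gap gives
$$u_{max} = u(0) = - \frac{h^2}{8 \mu} \frac{dp}{dx}, \qquad u_{average} = \frac{1}{h} \int_{-h/2}^{h/2} u \, dy = u_{max} \left[ 1 - \frac{4}{h^3} \int_{-h/2}^{h/2} y^2 \, dy \right] = u_{max} \left( 1 - \frac{1}{3} \right) = \frac{2}{3} u_{max},$$
which is consistent with the first option.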
https://www.groundai.com/project/disk-inhomogeneities-and-the-origins-of-planetary-system-architectures-and-observational-properties/ | [
# [
## Abstract
Recent high-resolution observations show that protoplanetary disks have various kinds of structural properties or “inhomogeneities”. These are the consequence of a mixture of a number of physical and chemical processes taking place in the disks. Here, we discuss the results of our comprehensive investigations on how disk inhomogeneities affect planetary migration. We demonstrate that disk inhomogeneities give rise to planet traps - specific sites in protoplanetary disks at which rapid type I migration is halted. We show that up to three types of traps (heat transitions, ice lines and dead zones) can exist in a single disk, and that they move differently as the disk accretion rate decreases with time. We also demonstrate that the position of planet traps strongly depends on stellar masses and disk accretion rates. This indicates that host stars establish preferred (initial) scales of their planetary systems. Finally, we discuss the possible observational signatures of disk inhomogeneities.
accretion, accretion disks, radiative transfer, turbulence, (stars:) planetary systems: formation, (stars:) planetary systems: protoplanetary disks.
Yasuhiro Hasegawa and Ralph E. Pudritz, 2013. In: Exploring the Formation and Evolution of Planetary Systems, B. Matthews & J. Graham, eds., pp. 1–4.
## 1 Introduction
The advent of “next generation” telescopes has revealed that protoplanetary disks are rich in their structures (Tamura 2009). For instance, SMA observations have recently inferred that there is a jump in the column density of CO at CO-ice lines (Qi et al 2011). The evidence suggests that the jump occurs due to the condensation of CO onto dust grains at the CO-ice line, where the disk temperature declines to about 160 K.
One of the most serious problems in theories of planet formation is planetary migration (Kley & Nelson 2012). It arises from tidal, resonant interactions between gaseous disks and the protoplanets that form in the disks. The most advanced studies show that the timescale of type I migration, which applies to low-mass planets such as terrestrial planets and the cores of gas giants, is very rapid ( years for planets with at AU), and that its direction is highly coupled with disk properties such as the surface density and the temperature of disks. Here, we report on our investigations of how disk structures or ”disk inhomogeneities” slow planetary migration and explore their consequences for planetary system architectures. We also discuss how important the inhomogeneities are by bridging between the observations of exoplanets and protoplanetary disks.
## 2 Disk Inhomogeneities and the Resultant Planet Traps
We first discuss dead zones, which are predicted to be present in the inner region of disks (Gammie 1996). In general, protoplanetary disks are ionized by X-rays from the central stars and cosmic rays. In the inner region of disks, however, the ionization is not so efficient due to the high column density there. Magnetorotational instabilities (MRIs) will be largely suppressed there, because MRIs require a good coupling between charged fluids in the gas disks and magnetic fields threading the disks. As a result, protoplanetary disks are characterized mainly by two zones: the inner, high density region is the so-called MRI "dead" zone and is weakly turbulent, whereas the outer, low density region is an MRI active zone and is highly turbulent.
We examine the effects of dead zones on the thermal structure of disks and investigate how planetary migration is affected by the presence of dead zones. Figure 1 (panels (a) and (b)) shows the results of the density and thermal structures of disks with dead zones (Hasegawa & Pudritz 2010a). The dust distribution (panel (a)) shows that a dusty "wall" is present at the outer boundary of a dead zone that is 6 AU in size. This arises because dust settling in the dead zone is significantly enhanced due to the low value of turbulence, whereas dust in the active zone is still mixed well with the gas. Adopting the dust distribution, we computed the disk temperature. In order to make our calculations reliable, we performed Monte Carlo based, radiative transfer simulations, assuming the main heat source is stellar irradiation (see also Hasegawa & Pudritz 2010b). Once we obtain the temperature structure of disks, we re-calculate the gas distribution using the computed disk temperature, assuming vertical hydrostatic equilibrium. It is interesting that wall-like structures do not appear in the gas distribution (see panel (b)).
We investigate how the dusty wall affects the thermal structure of disks. In order to proceed, we plotted the temperature of the midplane as a function of the distance from the central star. Figure 1 (panel (c); top) shows that the disk temperature has a positive gradient in the dead zone. This is the major finding in Hasegawa & Pudritz (2010b) and can be understood as follows: in general, the central stars irradiate the surface layer of disks directly, so that the midplane region of disks is indirectly heated by such surface layers. When a disk has a dead zone, a dusty wall is left at the outer boundary of the dead zone. Then, the wall is heated efficiently by stellar irradiation or the surface layers. As a result, the wall becomes thermally hot and leads to the back heating of the dead zone at smaller disk radii, which produces a positive temperature gradient in the dead zone.
We then use these results to examine how this thermal structure affects planetary migration. More specifically, we analytically calculate the timescale of type I migration, adopting the density and thermal structure of the disk with the dead zone. Figure 1 (panel (c); bottom) shows that the timescale of type I migration becomes negative in the dead zone where the positive temperature gradient is established - planets migrate outward. This occurs because such a temperature profile moves the inner resonances closer to a planet and the outer ones further away from the planet. As a result, the balance of the torque that drives planetary migration reverses there. Thus, when planets migrate either within or beyond dead zones, they will be trapped at the location where the net torque is zero. The location is often referred to as a planet trap (Masset et al. 2006).
## 3 Unified Picture of Planet Traps
As shown in this example, disk inhomogeneities play a significant role in planetary migration through the creation of planet traps. Since a number of disk inhomogeneities are already discussed in the literature, we take them into account and generalize how disk inhomogeneities act as planet traps.
We adopt an analytical approach for comprehensively examining disk inhomogeneities and the resultant planet traps (Hasegawa & Pudritz 2011). We utilize analytical modeling of disk inhomogeneities and investigate how the direction of type I migration switches from inward to outward there. We consider three types of inhomogeneities: dead zones, ice lines, and heat transitions. We focus on water-ice lines, because these ice lines are the most important for planet formation. Heat transitions are disk inhomogeneities at which the main heat source of protoplanetary disks changes from viscous heating to stellar irradiation; the temperature profile therefore changes from a steep to a shallow one.
We confirmed analytically that the net torque becomes zero at all the three types of disk inhomogeneities, so that they can act as planet traps for rapid type I migration. Also, we showed that the position of planet traps is sensitive to the surface density of disks, or equivalently the disk accretion rate. Figure 1 (panel (d)) shows how the position of planet traps evolves with the disk accretion rate. It is important that single disks have up to three types of planet traps and that different traps move inward at different rates.
Table 1 summarizes the position of planet traps for various types of stars. The position of planet traps varies widely, depending on the mass of the central stars. Thus, our results indicate that stellar masses and disk accretion rates play a significant role in establishing a preferred (initial) scale of planetary systems.
## 4 Implications of Planet Traps for Observations
We have discussed how useful disk inhomogeneities and the resultant planet traps are for resolving the problem of rapid type I migration and the origin of planetary system architectures. The next fundamental question is how such effects can be tested against observations. In order to address this question, we have recently computed evolutionary tracks of planets that grow at planet traps. We have found that, when planet traps are coupled with the core accretion scenario, the observations of exoplanets obtained by radial velocity techniques can be reproduced very well (Hasegawa & Pudritz 2012; and Pudritz & Hasegawa, these Proceedings). As discussed above, the presence of dead zones alters the disk structure significantly, so that some features may show up in the observables. One possibility is in the SEDs/images of disks with inhomogeneities. We are currently investigating this to further support the importance of disk inhomogeneities and the resultant planet traps.
### References
1. Gammie, C.F. 1996, ApJ, 457, 355
2. Hasegawa, Y., & Pudritz, R.E. 2010a, ApJ, 710, L167
3. Hasegawa, Y., & Pudritz, R.E. 2010b, MNRAS, 401, 143
4. Hasegawa, Y., & Pudritz, R.E. 2011, MNRAS, 417, 1236
5. Hasegawa, Y., & Pudritz, R.E. 2012, ApJ, 760, 117
6. Kley, W., & Nelson, R.P. 2012, ARAA, 50, 211
7. Masset, F.S. et al. 2006, ApJ, 642, 478
8. Qi, C. et al. 2011, ApJ, 740, 84
9. Tamura, M. 2009, Proceedings of American Institute of Physics Conference Series, 1158, 11
Test description | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8324664831161499, "perplexity": 2361.157422529845}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826968.71/warc/CC-MAIN-20181215174802-20181215200802-00206.warc.gz"} |
http://mathhelpforum.com/calculus/83930-sequences.html | 1. ## Sequences
A sequence is increasing if a_n < a_{n+1} and decreasing if a_n > a_{n+1}; that is what I believe is correct, so please correct me if not.
With this problem, a_n = 1/(2n+3) (the n attached to a is a subscript, not the n on the right).
I checked it by doing:
1/(2n+3) vs. 1/(2(n+1)+3)
1/(2n+3) < 1/(2n+5)
In this case, shouldn't this be increasing? But the answer is indeed decreasing. Am I missing something? Thanks in advance for your help.
2. 1/(2n+3)>1/(2n+5) smaller denominator bigger fraction | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9496579766273499, "perplexity": 1471.2552561781442}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463605485.49/warc/CC-MAIN-20170522171016-20170522191016-00480.warc.gz"} |
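A worked check of the answer above, added here for illustration (it is not part of the original thread):

$$a_{n+1} - a_n = \frac{1}{2n+5} - \frac{1}{2n+3} = \frac{(2n+3) - (2n+5)}{(2n+3)(2n+5)} = \frac{-2}{(2n+3)(2n+5)} < 0,$$

so $a_{n+1} < a_n$ for every $n$ and the sequence is decreasing. The slip in the original comparison is the direction of the inequality: since $2n+5 > 2n+3$, we have $1/(2n+5) < 1/(2n+3)$.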
https://gravityandlevity.wordpress.com/2011/01/20/problems-you-can-solve-just-by-looking-at-them-the-meaning-of-noethers-theorem/ | I am grateful for Emmy Noether
Emmy Noether was an extremely prolific (and fairly heroic) German mathematician from the early 20th century; Einstein once called her the most important woman in the history of mathematics. (She is also the subject of two hilariously bad “biographies”: one for children and one for young adults).
The great majority of Noether’s contributions to mathematics and to physics are well beyond my ability to understand, but the one I can appreciate is “Noether’s Theorem.” Noether’s Theorem is an idea that has has made my life in physics significantly easier, because it allows one to turn difficult “dynamics” problems into simple “accounting” problems.
Let me explain what I mean. Consider the following intro-level physics problem: You throw a rock vertically upward with initial speed 10 m/s. What is the maximum height reached by the rock?
Isaac Newton taught us how to solve this problem:
While in flight, the force of gravity acts on the rock and dictates how the rock’s velocity is changing at any given moment. Using the mathematical description of the gravitational force gives you a (very simple) differential equation for the rock’s position as a function of time. You can solve this differential equation using the known initial conditions to find a full description of the rock’s trajectory over time. Now you find the trajectory’s maximum height (perhaps by taking a derivative and setting it equal to zero) and you have your answer.
This is what I call a “dynamics” problem: you have a situation where you know the various forces at work and how they affect the trajectory of an object at any given point in space, so you use it to calculate the trajectory. While the “throwing a rock into the air” problem is easy for anyone who’s taken a class or two in physics, such dynamics problems can get notoriously difficult very quickly (say, when you have more than one force at work or more than one interacting object in motion).
Of course, there is a much faster way of solving the “rock in the air” problem. Namely, you make use of energy conservation, which says that the total energy of the rock is identical at every moment in time. This allows you to figure out the maximum height by equating the energy at the bottom of the trajectory (purely kinetic) to the energy at the top of the trajectory (purely potential). The answer is $h_{max} = \frac{1}{2} v^2/ g \approx 5.1$ meters.
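As a small illustration of the two approaches (a sketch added here, not part of the original post; the numerical values are assumptions matching the example above), the "accounting" answer can be checked against a brute-force integration of Newton's second law:

```python
g = 9.81    # m/s^2, assumed value for gravitational acceleration
v0 = 10.0   # m/s, initial upward speed from the example

# "Accounting" (energy conservation): (1/2) m v0^2 = m g h_max
h_energy = 0.5 * v0**2 / g

# "Dynamics": integrate dv/dt = -g with a small time step and track the peak height
dt = 1e-4
v, h, h_peak = v0, 0.0, 0.0
for _ in range(int(5.0 / dt)):
    h += v * dt
    v -= g * dt
    h_peak = max(h_peak, h)

print(f"energy bookkeeping: {h_energy:.3f} m")  # ~5.097 m
print(f"numerical dynamics: {h_peak:.3f} m")    # agrees to within the step-size error
```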
When I was a TA, I tried to teach my students that using energy conservation is like accounting. You know how much energy you started with and you need to keep track of where all the energy is going. Energy doesn’t disappear; it only moves around from one place to another, just like money that can get saved or spent in one way or another. If you know where an object has “spent” its energy you can figure out nearly every important fact about it.
These “accounting” approaches are pretty powerful in physics: they allow you to quickly figure things out by keeping track of a conserved quantity. And there are a number of different quantities that can be worth “accounting” for: energy, momentum, and angular momentum, to name a few.
The problem is that the momentum and angular momentum of a particular object are not necessarily conserved — that depends on what kind of environment the object is moving through and what kind of forces are present. For example, if you throw a bouncy ball down a long, frictionless hallway then the forward momentum of the ball is conserved but the vertical momentum is not (because of gravity) and neither is the lateral momentum (because the ball can bounce off the walls).
So when you begin to solve a physics problem, one of the first and most important questions to answer is this: When I have an object moving through a given environment, what quantities are conserved?
Noether’s Theorem gives an answer to this question. What’s more, it provides a way to identify other conserved quantities that you might not even have thought to look for. And the theorem is so simple that you can usually figure out the conserved quantities just by drawing a picture.
Noether’s Theorem and what it looks like
Noether’s Theorem can be stated this way:
For every continuous symmetry that an environment has, there is a corresponding conserved quantity.
The theorem then gives a simple recipe for calculating what these conserved quantities are, which I’ll discuss in a bit. But first I should give you a sense of what it means to have a “continuous symmetry”.
Imagine, as an example, that you are trying to describe the behavior of one (or multiple) objects in the vicinity of a very long force-emitting cylinder. Something like this:
I’m not going to tell you what kind of force the cylinder is emitting because it doesn’t matter — it could be gravitational attraction, or electric repulsion, or nuclear radiation, or simply contact force associated with hitting the cylinder’s hard walls.
Without knowing what kind of force is being emitted, we apparently don’t know much about how the objects around the cylinder will behave. Will they get stuck to the cylinder, or will they be flung away, or will they orbit it, or what? What we can say immediately though, just after drawing the picture, is that the environment in which the objects are moving has two important symmetries. The first is symmetry with respect to motion along the $x$ axis. If the cylinder is infinitely long, then no matter where you stand along the $x$ axis the system will look the same. The second symmetry is with respect to rotations along the $x$ axis: you can spin everything around the x axis and nothing will change.
Noether’s Theorem guarantees that for each of these symmetries there is a conserved quantity. In this case, the two conserved quantities are the total momentum in the $x$ direction (which is related to the translational symmetry) and the total angular momentum around the $x$ axis (which comes from the rotational symmetry). As a general rule, translational symmetries always produce conserved linear momentum and rotational symmetries produce conserved angular momentum.
More exactly, Noether’s theorem says that if you can continuously change some coordinate variable $\xi$ without changing the environment, then there is a conserved quantity $P_{cons}$ equal to
$P_{cons} = \partial(K.E.)/\partial \dot{\xi}$,
where $K.E.$ is the kinetic energy and $\dot{\xi}$ is the time rate of change of $\xi$. As an example, the kinetic energy can be written $\frac{1}{2} m ( \dot{x}^2 + \dot{y}^2 + \dot{z}^2 )$, so that if the system is unchanged by translations along the $x$ direction, then the conserved quantity is $P_{cons} = m \dot{x} = m v_x$, which is the momentum in the $x$ direction.
Noether’s Theorem also allows you to identify less obvious conserved quantities. For example, imagine that the force-emitting object is a cylinder with a helical coil wrapped around it, like this:
This environment is no longer unchanged by small translations in the $x, y$ or $z$ directions, nor by small rotations around any of the axes. It is, however, unchanged by a particular combination of translation and rotation. Specifically, if $d$ is the distance between coils of the helix then the environment is unchanged when you simultaneously rotate 360 degrees around the $x$ axis and translate by $d$ in the $x$ direction. Any small translation/rotation done in that same proportion also leaves the environment unchanged. Noether’s Theorem therefore guarantees that a particular combination of linear momentum and angular momentum will be conserved forever. Specifically, you can work out from the equation above that $L_x + \frac{d}{2\pi} P_x$ is conserved, where $L_x$ is the angular momentum around the $x$ axis and $P_x$ is the linear momentum.
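For readers who want to see where that combination comes from, here is a short derivation added to this copy of the post (it fills in a step the author leaves implicit), using the recipe $P_{cons} = \partial(K.E.)/\partial \dot{\xi}$ in cylindrical coordinates $(x, r, \phi)$ about the symmetry axis. The screw symmetry is generated by a single parameter $\xi$ with $\delta\phi = \delta\xi$ and $\delta x = \frac{d}{2\pi}\,\delta\xi$, so that

$$K.E. = \tfrac{1}{2}m\left(\dot{x}^2 + \dot{r}^2 + r^2\dot{\phi}^2\right), \qquad P_{cons} = \frac{\partial (K.E.)}{\partial \dot{\xi}} = m\dot{x}\,\frac{d}{2\pi} + m r^2 \dot{\phi} = \frac{d}{2\pi}\,P_x + L_x,$$

which is the conserved combination quoted above.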
Probably the most profound insight of Noether’s Theorem comes from its view of the principle of energy conservation itself. Energy conservation appears naturally from Noether’s Theorem when you assume that the environment is symmetric with respect to translations in time. That is, saying that energy is conserved is equivalent to saying that the laws of physics are unchanging in time.
I never cease to be amazed how mathematics can guide our way of thinking philosophically about the universe.
In the end, I really haven’t done justice to Noether’s Theorem, which has tremendous consequences in field theory and pure mathematics as well in as the normal mechanics of particles. But this simplified version of the theorem is enough to make me grateful, because it allows me to solve hard problems just by drawing pictures.
Now if only I could get someone to write a decent biography of Emmy Noether.
January 20, 2011 10:59 pm
Noether’s theorem is a great topic, so it’s always fun to see blog posts describing its virtues, just be a little careful with the “symmetries => conserved quantity” statement, as it obviously has caveats. Check out Bad Physics for comments on Noether’s 1st, and why it’s not *quite* that straightforward (but still simple).
Love the blog.
January 20, 2011 11:11 pm
Thanks for the warning! The huge caveat, as you pointed out, is that the forces involved have to be conservative . This means that there can’t be any energy-dissipating forces involved, like friction.
And of course you shouldn’t go crazy with liberal uses of the terms “symmetry” and “conservation”, as in the hilarious “Journal of Public Relations” example on your Bad Physics link.
January 20, 2011 10:59 pm
Very nice piece of intelligible exposition
January 21, 2011 8:05 am
Good news: Well, there actually IS a good biography. Bad news: You’ve gotta learn German first.
January 21, 2011 8:10 pm
http://www.amazon.com/Emmy-Noether-1882-1935-Auguste-Dick/dp/3764330198
seems to be the translation of the German book published in 1970. I saw references to this in:
http://www.weylmann.com/weyl+noether.pdf
February 2, 2011 1:03 am
Wow, that pdf link is awesome. I particularly enjoyed reading Weyl’s eulogy at Emmy Noether’s funeral.
January 29, 2011 10:46 pm
As a current AP Physics student, I found your article very interesting and helpful! I followed your first example but am still working on the second. I, too, am grateful for Noether’s Theorem and look forward to implementing it during some challenging momentum problems. It’s amazing how simply a seemingly complicated problem can be…
January 30, 2011 1:29 pm
I definitely agree with the idea that energy conservation is just like accounting. As an AP Physics student, I have learned that basically every problem can be solved in more than one way (using kinematics or conservation of energy or other options) and that it is important to be able to recognize different approaches to any given problem. I look forward to learning how to apply Noether's Theorem to some of the problems involving energy and momentum that we do.
January 30, 2011 2:13 pm
I always love learning about new ways to do physics problems, and Noether's Theorem has a lot of promise for simplifying otherwise complex ones. I'm curious, though–what if a nonconservative force were present? Would the theorem become completely useless? Would there be some other way to account for the dissipated energy? Or would the presence of a nonconservative force make it necessary to return to traditional kinematics methods?
January 30, 2011 4:31 pm
I totally agree that Noether's theorem simplifies a lot of physics situations by looking at the symmetric aspect. It helps to identify conserved quantities and therefore solve problems faster. I'm currently learning Work and Energy in Physics, and understanding conservation of energy is indeed a significant learning goal. I look forward to putting Noether's theorem into more practice :)
January 30, 2011 5:33 pm
I like the symmetrical aspect of conservation of energy and it definitely helps simplify physics problems (such as in the first example given). I am curious to learn about more of the applications of Noether’s Theorem so that I can solve problems faster and with more than one method. One of the upcoming units in that AP Physics class that I am taking is rotational motion and I am looking forward to learning about possible applications of Noether’s Theorem in that unit.
January 30, 2011 6:37 pm
As an AP Physics student, I have become quite reliant on Newton’s Second Law when attempting to solve physics problems. This reliance, however, often puts me in a difficult situation whenever a problem does not provide sufficient information about the forces present in a situation (as pointed out in the post). Thus, I can greatly appreciate the usefulness of Noether’s Theorem and how simple it can make seemingly complex problems. As I recall problems I grappled with earlier in the year, it is amazing to think how much easier using Noether’s Theorem could have made them and I look forward to applying the theorem as I work on physics problems in the future. I am also interested in the varying applications of Noether’s Theorem in one scenario as you change your definition of the system you’re examining, for while some symmetries may exist for one definition of the system, they may not for other definitions.
January 30, 2011 7:52 pm
As an AP physics student, it is interesting to read about another method of solving physics problems because I have learned that problems are often solvable in many different ways. I definitely agree with the usefulness of thinking of energy conservation as accounting, leading you to figure things out by keeping track of a conserved quantity. As mentioned in the post, I think it is an important question to ask What quantities are conserved. (I’m not quite sure I understand how you would know that.) I also thought it was interesting to think about situations in which quantities are symmetric with respect to time, such as energy conservation. It was also neat to realize that this theorem was developed by an early 20th century woman!
January 30, 2011 8:45 pm
Noether’s Theorem is fascinating. Someone tried to describe it to me early in the week and I didn’t understand, but this article does a good job of explaining what the symmetrical forces are and how to find them. What I do not yet quite understand is how that actually makes problems simpler. I would guess that keeping the reference frame along the symmetrical force–such as along the x-axis or the helix–would make the problem reduce very quickly. With the example of the ball traveling down the hallway, I am not sure how the problem would simplify with Noether’s Theorem. All in all though, Noether’s Theorem is very interesting and an inspiring new way to look at physics
13. January 31, 2011 9:11 am
I like the idea that energy conservation is just like accounting. I found it convenient to use different aspects to think/rethink about physics problems. It is fascinating to find the corresponding conserved quantity; however, I need more practice. It is much simpler and easier. Noether's Theorem is inspiring and I am looking forward to applying it in practice.
February 2, 2011 12:54 am
Wow, which physics teacher referred their entire class to my blog?
1) You are actually very unlikely to encounter a formal teaching of Noether’s Theorem before graduate-level classes in physics. Most people aren’t really taught it until they get to Quantum Field Theory, which is kind of like seventh-year physics. The basic idea behind it is pretty cool, though, so I decided to write about it here.
2) In problems with dissipation, there isn’t much hope to using “accounting” approaches. The problem is that nothing at all is conserved: not energy and not momentum. So if you throw that bouncy ball down the hallway but the entire hallway is submerged in water, you can’t use any of the conservation principles to help you. This is why physicists deal with dissipation problems only when they absolutely have to.
3) The main take away message for someone in first or second year physics is that if you can draw the picture, you can immediately identify which of the various momenta are conserved (linear and angular momentum). This might be able to help you quite a bit. I wouldn’t worry too much about funny examples like the helix one, where only some particular combination is conserved.
4) Well, maybe the real take away message for someone in first or second year physics is that physics is beautiful, and that women are perfectly capable of making revolutionary contributions to it.
15. February 28, 2012 1:43 pm
Very well drafted piece of writing. I appreciate the simplicity and clarity of thought with which it is explained. Thank you.
March 17, 2013 7:10 pm
Thank you for posting this. I am a 7th grader and I am doing a report on Emmy Noether, and for a full 100 (required for me) I'll have to explain the concept with full understanding. This article makes it simpler for me to understand this concept, thank you. 🙂
17. March 31, 2015 4:37 pm
I highly recommend this wonderful book on the topic:
http://www.amazon.com/Noethers-Wonderful-Theorem-Dwight-Neuenschwander/dp/0801896940
18. May 27, 2015 2:56 am
“I never cease to be amazed how mathematics can guide our way of thinking philosophically about the universe.”
Me too. Math is oracular. Fits physics (and so many other things) like a glove.
19. June 23, 2015 2:53 pm
Hmm. The one time I used Noether’s theorem very long ago, it was in a blind, plug-and-play way. I might have to go back and try and understand what the hell I actually did in light of this explanation. If I still have my notes that is 🙂 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 31, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8319758176803589, "perplexity": 391.4781507596141}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171271.48/warc/CC-MAIN-20170219104611-00119-ip-10-171-10-108.ec2.internal.warc.gz"} |
https://www.mis.mpg.de/de/veranstaltungen/vortraege/2021/vortrag-32136.html | # Zusammenfassung für den Vortrag am 02.03.2021 (17:00 Uhr)
Nonlinear Algebra Seminar Online (NASO)
Gleb Pogudin (École polytechnique Paris)
How many experiments?
See also the video of this talk.
An ODE model with parameters is said to be structurally identifiable if the values of parameter can be uniquely determined from continuous noise-free data. This property is a natural prerequisite for practical identifiability. It may happen that, although the model is not identifiable, it becomes identifiable if several independent experiments (with the same parameter values but different initial conditions) are conducted. A natural question is: how many experiments are sufficient to get "the maximal possible identifiability"?
We give an algorithm for computing a bound for the number of experiments "providing the maximal possible identifiability" which is off at most by one. The algorithm is fast in practice (and has polynomial arithmetic complexity). The algorithm is based on our theoretical results about expressing the field of definition of a (differential-algebraic) variety in terms of several its generic points. Interestingly, the process of discovery and establishing of these properties originated from model theory.
This is joint work with Alexey Ovchinnikov, Anand Pillay, and Thomas Scanlon.
05.03.2021, 07:40 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8830744624137878, "perplexity": 1456.5059478061894}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038077336.28/warc/CC-MAIN-20210414064832-20210414094832-00312.warc.gz"} |
https://www.zora.uzh.ch/id/eprint/155326/ | # Self-induced compactness in Banach spaces
Casazza, P G; Jarchow, H (1996). Self-induced compactness in Banach spaces. Proceedings of the Royal Society of Edinburgh Section A: Mathematics, 126(02):355-362.
## Abstract
We consider the question: is every compact set in a Banach space X contained in the closed unit range of a compact (or even approximable) operator on X? We give large classes of spaces where the question has an affirmative answer, but observe that it has a negative answer, in general, for approximable operators. We further construct a Banach space failing the bounded compact approximation property, though all of its duals have the metric compact approximation property.
Detailed statistics | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.980359673500061, "perplexity": 690.1832310304109}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104240553.67/warc/CC-MAIN-20220703104037-20220703134037-00058.warc.gz"} |
https://play.google.com/store/apps/details?id=com.erkutaras.cstable | # Chi-Square Table
"Chi-Square Table" application is developed to provide easy and accurate information to users. You can find all Chi-Square critical values on a single app and easy to use table.
The Chi-Square Table application can be useful for quality work and statistics lessons. It includes basic explanations and the chi-square table in a usable format.
You can easily download and start using the Chi-Square Table application; you no longer have to carry a printed chi-square table or need an internet connection.
In probability theory and statistics, the chi-squared distribution (also chi-square or χ²-distribution) with k degrees of freedom is the distribution of a sum of the squares of k independent standard normal random variables. The chi-squared distribution is used in the common chi-squared tests for goodness of fit of an observed distribution to a theoretical one, the independence of two criteria of classification of qualitative data, and in confidence interval estimation for a population standard deviation of a normal distribution from a sample standard deviation.
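For readers who want to reproduce the kind of values such a table lists, here is a small sketch added to this copy of the listing (it is independent of the app itself and uses SciPy):

```python
from scipy.stats import chi2

# Upper-tail critical values: find x_crit with P(X > x_crit) = alpha
# for X ~ chi-squared with df degrees of freedom.
alphas = [0.10, 0.05, 0.01]
for df in (1, 2, 5, 10):
    row = ", ".join(f"alpha={a}: {chi2.ppf(1 - a, df):7.3f}" for a in alphas)
    print(f"df={df:>2} -> {row}")

# Sanity check: the familiar 3.841 cutoff for a 1-df test at the 5% level.
assert abs(chi2.ppf(0.95, 1) - 3.841) < 1e-3
```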
## What's New
- Optimization update
- Bug fixes
Updated: January 30, 2019
Size: 3.4M
Installs: 5,000+
Current Version: 1.5.1 (6)
Requires Android: 4.0.3 and up
Content Rating: Everyone
Offered By: feo2
Developer
feo2 Danışmanlık ve Yazılım Çözümleri İstanbul / Türkiye | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9401904940605164, "perplexity": 1972.2426838088795}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578530527.11/warc/CC-MAIN-20190421100217-20190421122217-00148.warc.gz"} |
https://learn.careers360.com/ncert/question-show-that-if-the-diagonals-of-a-quadrilateral-are-equal-and-bisect-each-other-at-right-angles-then-it-is-a-square/ | # Q: 5 Show that if the diagonals of a quadrilateral are equal and bisect each other at right angles, then it is a square.
mansi
Given: ABCD is a quadrilateral whose diagonals are equal and bisect each other at right angles, i.e. AC = BD, AO = CO, BO = DO, and $\angle COD = 90^\circ$.

To prove: ABCD is a square.

Proof: Since the diagonals of ABCD bisect each other at right angles, ABCD is a rhombus.

Thus, AB = BC = CD = DA.

In $\triangle$BAD and $\triangle$ABC,

AB = AB (common)

AD = BC (sides of the rhombus)

BD = AC (given)

$\triangle$BAD $\cong$ $\triangle$ABC (by SSS)

$\angle BAD = \angle ABC$ (CPCT)

$\angle$BAD + $\angle$ABC = $180^\circ$ (co-interior angles, since AD $\parallel$ BC)

$2\angle ABC = 180^\circ$, so $\angle ABC = 90^\circ$.

A rhombus with one right angle is a square. Hence, if the diagonals of a quadrilateral are equal and bisect each other at right angles, then it is a square.
Questions | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8468076586723328, "perplexity": 1070.037575289915}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370519111.47/warc/CC-MAIN-20200404011558-20200404041558-00262.warc.gz"} |
https://www.analyzemath.com/calculus/Differential_Equations/introduction.html | # Introduction to Differential Equations
What is a differential equation? An equation that involves one or more derivatives of an unknown function is called a differential equation. The order of the highest derivative included in a differential equation defines the order of this equation.

Examples:

- y' = 3x: the highest derivative is y', so the order of this differential equation is 1.
- y'' + y' + y = 3x: the highest derivative is y'', so the order of this differential equation is 2.
- -2y''' + y'' + y^4 = 3x: the highest derivative is y''', so the order of this differential equation is 3.

y = f(x) is a solution of a differential equation if the equation is satisfied upon substitution of y and its derivatives into the differential equation.

Example: Verify that y = C e^(4x) + e^(3x), where C is a constant, is a solution to the differential equation y' - 4y = -e^(3x).

y' is given by y' = 4C e^(4x) + 3e^(3x). We now substitute y' and y into the left side of the equation and simplify:

y' - 4y = 4C e^(4x) + 3e^(3x) - 4(C e^(4x) + e^(3x)) = 4C e^(4x) + 3e^(3x) - 4C e^(4x) - 4e^(3x) = e^(3x)(3 - 4) = -e^(3x),

which is equal to the right side of the given equation, and therefore y = C e^(4x) + e^(3x) is a solution to the differential equation y' - 4y = -e^(3x).

Most of the work on differential equations consists in solving these equations. For example, to solve the differential equation y' = 2x, let us integrate both sides of the given equation:

∫ y' dx = ∫ 2x dx,

which gives y + C1 = x^2 + C2, where C1 and C2 are constants of integration. The solution y of the above equation is given by y = x^2 + C, where C = C2 - C1. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9823123812675476, "perplexity": 302.46541124412187}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540482038.36/warc/CC-MAIN-20191205190939-20191205214939-00185.warc.gz"}
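A quick way to sanity-check examples like the ones above (a sketch added here, not part of the original page) is to let a computer algebra system do the substitution and the integration; here is a minimal SymPy version:

```python
import sympy as sp

x, C = sp.symbols('x C')
y = sp.Function('y')

# Verify that y = C*exp(4x) + exp(3x) satisfies y' - 4y = -exp(3x).
candidate = C * sp.exp(4 * x) + sp.exp(3 * x)
residual = sp.diff(candidate, x) - 4 * candidate - (-sp.exp(3 * x))
print(sp.simplify(residual))   # 0, so the candidate is a solution

# Solve y' = 2x directly; SymPy returns the general solution with one constant.
solution = sp.dsolve(sp.Eq(y(x).diff(x), 2 * x), y(x))
print(solution)                # Eq(y(x), C1 + x**2)
```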
https://www.hcm.uni-bonn.de/events/eventpages/2014/hausdorff-kolloquium-2014/ | # Hausdorff-Kolloquium 2014
Date: April 23, 2014 - July 9, 2014
Venue: Mathematik-Zentrum, Lipschitz Lecture Hall, Endenicher Allee 60, Bonn
## Wednesday, April 23
15:15 Clément Mouhot (Cambridge): Factorization of semigroups and applications to PDEs

16:45 Aaron Naber (Cambridge): Characterizations of Bounded Ricci Curvature on Smooth and Nonsmooth Spaces
## Wednesday, June 4
15:15 Valentin Blomer (Göttingen): Trace formulae in number theory

16:45 Nicola Gigli (Université de Nice): Spaces with Ricci curvature bounded from below
## Abstracts:
#### Michael Christ: On additive combinatorics and the fine structure of certain classical inequalities
Analysis is replete with inequalities stating that some linear operator is bounded from one Banach space to another, commonly with an unspecified operator norm. However, for a few fundamental inequalities, optimal constants and extremizing functions are known. This arises typically for inequalities with a high degree of symmetry. The character of extremizers reflects and sheds light on underlying algebraic or geometric structure.
This talk is concerned with characterization of functions (or sets) that nearly, but not exactly, extremize certain classical inequalities set in Euclidean space: Young's inequality concerning convolutions of functions, the Brunn-Minkowski inequality concerning sums of sets, the Riesz-Sobolev inequality concerning convolutions of indicator functions of sets, and the Hausdorff-Young inequality concerning the Fourier transform.
Analyses that identify extremizers are often unstable, and may reveal nothing about near extremizers. Recent progress on near extremizers relies on inverse theorems from additive combinatorics. Arithmetic progressions play a central role.
The four inequalities, their extremizers, and two combinatorial inverse theorems will be reviewed. Interconnections will be sketched.
#### Nicola Gigli: Spaces with Ricci curvature bounded from below
We will discuss the definition of metric measure spaces with a lower bound on the Ricci curvature and some of their analytic and geometric properties.
#### Valentin Blomer: Trace formulae in number theory
Starting from the Poisson summation formula, I discuss spectral summation formulae on locally symmetric spaces and present a variety of applications to automorphic forms, analytic number theory, and arithmetic.
#### Aaron Naber: Characterizations of Bounded Ricci Curvature on Smooth and Nonsmooth Spaces
In this talk we discuss several new estimates on manifolds with bounded Ricci curvature, and in particular Einstein manifolds. In fact, the estimates are not only implied by bounded Ricci curvature, but turn out to be equivalent to bounded Ricci curvature. We will see that bounded Ricci curvature controls analysis on the path space P(M) of a manifold in much the same way that lower Ricci curvature controls analysis on M. There are three distinct such characterizations given. The first is a gradient estimate that acts as an infinite dimensional analogue of the Bakry-Emery gradient estimate on path space. The second is a C^{1/2}-Hölder estimate on the time regularity of the martingale decomposition of functions on path space.
For the third we consider the Ornstein-Uhlenbeck operator, a form of infinite dimensional laplace operator, and show that bounded Ricci curvature is equivalent to an appropriate spectral gap. One can use these notions to make sense of bounded Ricci curvature on abstract metric-measure spaces.
#### Clément Mouhot: Factorization of semigroups and applications to PDEs
We will present a general factorization method for estimating growth of semigroups in Banach spaces for a class of operators, and discuss various applications for linear and nonlinear PDEs of Boltzmann and Fokker-Planck types.
#### Zhan Shi: Biased random walks on trees
I am going to make some elementary discussions on the asymptotic properties of recurrent biased random walks on rooted trees. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9465389251708984, "perplexity": 970.2789538838404}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027330800.17/warc/CC-MAIN-20190825194252-20190825220252-00034.warc.gz"} |
http://mathoverflow.net/questions/14514/how-big-can-the-irreps-of-a-finite-group-be-over-an-arbitrary-field?answertab=active | # How big can the irreps of a finite group be (over an arbitrary field)?
Let G be a nontrivial finite group. Does there exist an irreducible representation of G of dimension greater than or equal to the cardinality of G?
[Edited for clarity. -- PLC]
-
I deleted some comments that no longer make any sense after Pete's edit. – Scott Morrison Feb 9 '10 at 7:23
1. The answer is no (as long as we are working over a field - of any characteristic, algebraically closed or not). If $k$ is a field and $G$ is a finite group, then the dimension of any irreducible representation $V$ of $G$ over $k$ is $\leq \left|G\right|$. This is actually obvious: Take any nonzero vector $v\in V$; then, $k\left[G\right]v$ is a nontrivial subrepresentation of $V$ of dimension $\leq\dim\left(k\left[G\right]\right)=\left|G\right|$. Since our representation $V$ was irreducible, this subrepresentation must be $V$, and hence $\dim V\leq\left|G\right|$.
2. Okay, we can do a little bit better: Any irreducible representation $V$ of $G$ has dimension $\leq\left|G\right|-1$, unless $G$ is the trivial group. Same proof applies, with one additional step:
If $\dim V=\left|G\right|$, then the map $k\left[G\right]\to V,\ g\mapsto gv$ must be bijective (in fact, it is surjective, since $k\left[G\right]v=V$, and it therefore must be bijective since $\dim\left(k\left[G\right]\right)=\left|G\right|=\dim V$), so it is an isomorphism of representations (since it is $G$-equivariant), and thus $V\cong k\left[G\right]$. But $k\left[G\right]$ is not an irreducible representation, unless $G$ is the trivial group (in fact, it always contains the $1$-dimensional trivial representation).
3. Note that if the base field $k$ is algebraically closed and of characteristic $0$, then we can do much better: In this case, an irreducible representation of $G$ always has dimension $<\sqrt{\left|G\right|}$ (in fact, in this case, the sum of the squares of the dimensions of all irreducible representations is $\left|G\right|$, and one of these representations is the trivial $1$-dimensional one). However, if the base field is not necessarily algebraically closed and of arbitrary characteristic, then the bound $\dim V\leq \left|G\right|-1$ can be sharp (take cyclic groups).
4. There is a way to improve 2. so that it comes a bit closer to 3.:
Theorem 1. If $V_1$, $V_2$, ..., $V_m$ are $m$ pairwise nonisomorphic irreducible representations of a finite-dimensional algebra $A$ over a field $k$ (not necessarily algebraically closed, not necessarily of characteristic $0$), then $\dim V_1+\dim V_2+...+\dim V_m\leq\dim A$.
(Of course, if $A$ is the group algebra of some finite group $G$, then $\dim A=\left|G\right|$, and we get 2. as a consequence.)
First proof of Theorem 1. At first, for every $i\in\left\lbrace 1,2,...,m\right\rbrace$, the (left) representation $V_i^{\ast}$ of the algebra $A^{\mathrm{op}}$ (this representation is defined by $a\cdot f=\left(v\mapsto f\left(av\right)\right)$ for any $f\in V_i^{\ast}$ and $a\in A$) is irreducible (since $V_i$ is irreducible) and therefore isomorphic to a quotient of the regular (left) representation $A^{\mathrm{op}}$ (since we can choose some nonzero $u\in V_i^{\ast}$, and then the map $A^{\mathrm{op}}\to V_i^{\ast}$ given by $a\mapsto au$ must be surjective, because its image is a nonzero subrepresentation of $V_i^{\ast}$ and therefore equal to $V_i^{\ast}$ due to the irreducibility of $V_i^{\ast}$). Hence, by duality, $V_i$ is isomorphic to a subrepresentation of the (left) representation $A^{\mathrm{op}\ast}=A^{\ast}$ of $A$. Hence, from now on, let's assume that $V_i$ actually is a subrepresentation of $A^{\ast}$ for every $i\in\left\lbrace 1,2,...,m\right\rbrace$.
Now, let us prove that the vector subspaces $V_1$, $V_2$, ..., $V_m$ of $A^{\ast}$ are linearly disjoint, i. e., that the sum $V_1+V_2+...+V_m$ is actually a direct sum. We will prove this by induction over $m$, so let's assume that the sum $V_1+V_2+...+V_{m-1}$ is already a direct sum. It remains to prove that $V_m\cap \left(V_1+V_2+...+V_{m-1}\right)=0$. In fact, assume the contrary. Then, $V_m\cap \left(V_1+V_2+...+V_{m-1}\right)=V_m$ (since $V_m\cap \left(V_1+V_2+...+V_{m-1}\right)$ is a nonzero subrepresentation of $V_m$, and $V_m$ is irreducible). Thus, $V_m\subseteq V_1+V_2+...+V_{m-1}$. Consequently, $V_m$ is isomorphic to a subrepresentation of the direct sum $V_1\oplus V_2\oplus ...\oplus V_{m-1}$ (because the sum $V_1+V_2+...+V_{m-1}$ is a direct sum, according to our induction assumption).
Now, according to Theorem 2.2 and Remark 2.3 of Etingof's "Introduction to representation theory", any subrepresentation of the direct sum $V_1\oplus V_2\oplus ...\oplus V_{m-1}$ must be a direct sum of the form $r_1V_1\oplus r_2V_2\oplus ...\oplus r_{m-1}V_{m-1}$ for some nonnegative integers $r_1$, $r_2$, ..., $r_{m-1}$. Hence, every irreducible subrepresentation of the direct sum $V_1\oplus V_2\oplus ...\oplus V_{m-1}$ must be one of the representations $V_1$, $V_2$, ..., $V_{m-1}$. Since we know that $V_m$ is isomorphic to a subrepresentation of the direct sum $V_1\oplus V_2\oplus ...\oplus V_{m-1}$, we conclude that $V_m$ is isomorphic to one of the representations $V_1$, $V_2$, ..., $V_{m-1}$. This contradicts the non-isomorphy of the representations $V_1$, $V_2$, ..., $V_m$. Thus, we have proven that the sum $V_1+V_2+...+V_m$ is actually a direct sum. Consequently, $\dim V_1+\dim V_2+...+\dim V_m=\dim\left(V_1+V_2+...+V_m\right)\leq \dim A^{\ast}=\dim A$, and Theorem 1 is proven.
Second proof of Theorem 1. I just learnt the following simpler proof of Theorem 1 from §1 Lemma 1 in Crawley-Boevey's "Lectures on representation theory and invariant theory":
Let $0=A_0\subseteq A_1\subseteq A_2\subseteq ...\subseteq A_k=A$ be a composition series of the regular representation $A$ of $A$. Then, by the definition of a composition series, for every $i\in \left\lbrace 1,2,...,k\right\rbrace$, the representation $A_i/A_{i-1}$ of $A$ is irreducible.
Let $T$ be an irreducible representation of $A$. We are going to prove that there exists some $I\in \left\lbrace 1,2,...,k\right\rbrace$ such that $T\cong A_I/A_{I-1}$ (as representations of $A$).
In fact, let $I$ be the smallest element $i\in \left\lbrace 1,2,...,k\right\rbrace$ satisfying $A_iT\neq 0$ (such elements $i$ exist, because $A_kT=AT=T\neq 0$). Then, $A_IT\neq 0$, but $A_{I-1}T=0$. Now, choose some vector $t\in T$ such that $A_It\neq 0$ (such a vector $t$ exists, because $A_IT\neq 0$), and consider the map $f:A_I\to T$ defined by $f\left(a\right)=at$ for every $a\in A_I$. Then, this map $f$ is a homomorphism of representations of $A$. Since it maps the subrepresentation $A_{I-1}$ to $0$ (because $f\left(A_{I-1}\right)=A_{I-1}t\subseteq A_{I-1}T=0$), it gives rise to a map $g:A_I/A_{I-1}\to T$, which, of course, must also be a homomorphism of representations of $A$. Since $A_I/A_{I-1}$ and $T$ are irreducible representations of $A$, it follows from Schur's lemma that any homomorphism of representations from $A_I/A_{I-1}$ to $T$ is either an isomorphism or identically zero. Hence, $g$ is either an isomorphism or identically zero. But $g$ is not identically zero (since $g\left(A_I/A_{I-1}\right)=f\left(A_I\right)=A_It\neq 0$), so that $g$ must be an isomorphism, i. e., we have $T\cong A_I/A_{I-1}$.
So we have just proven that
(1) For every irreducible representation $T$ of $A$, there exists some $I\in \left\lbrace 1,2,...,k\right\rbrace$ such that $T\cong A_I/A_{I-1}$ (as representations of $A$).
Denote this $I$ by $I_T$ in order to make it clear that it depends on $T$. So we have $T\cong A_{I_T}/A_{I_T-1}$ for each irreducible representation $T$ of $A$. Applying this to $T=V_i$ for every $i\in\left\lbrace 1,2,...,m\right\rbrace$, we see that $V_i\cong A_{I_{V_i}}/A_{I_{V_i}-1}$ for every $i\in\left\lbrace 1,2,...,m\right\rbrace$. Hence, the elements $I_{V_1}$, $I_{V_2}$, ..., $I_{V_m}$ of the set $\left\lbrace 1,2,...,k\right\rbrace$ are pairwise distinct (because $I_{V_i}=I_{V_j}$ would yield $V_i\cong A_{I_{V_i}}/A_{I_{V_i}-1}=A_{I_{V_j}}/A_{I_{V_j}-1}\cong V_j$, but the representations $V_1$, $V_2$, ..., $V_m$ are pairwise nonisomorphic), and thus
$\sum\limits_{i=1}^{m}\dim\left(A_{I_{V_i}}/A_{I_{V_i}-1}\right)=\sum\limits_{\substack{j\in\left\lbrace 1,2,...,k\right\rbrace ;\ \\ \text{there exists }\\ i\in\left\lbrace 1,2,...,m\right\rbrace \\ \text{ such that }j=I_{V_i}}}\dim\left(A_j/A_{j-1}\right)$ $\leq \sum\limits_{j\in\left\lbrace 1,2,...,k\right\rbrace}\dim\left(A_j/A_{j-1}\right)$ (since $\dim\left(A_j/A_{j-1}\right)\geq 0$ for every $j$, so that adding more summands cannot decrease the sum) $=\sum\limits_{j=1}^{k}\dim\left(A_j/A_{j-1}\right)=\sum\limits_{j=1}^{k}\left(\dim A_j-\dim A_{j-1}\right)$.
Since $\dim\left(A_{I_{V_i}}/A_{I_{V_i}-1}\right)=\dim V_i$ for each $i$ (due to $A_{I_{V_i}}/A_{I_{V_i}-1}\cong V_i$) and $\sum\limits_{j=1}^{k}\left(\dim A_j-\dim A_{j-1}\right)=\dim A$ (in fact, the sum $\sum\limits_{j=1}^{k}\left(\dim A_j-\dim A_{j-1}\right)$ is a telescopic sum and simplifies to $\dim A_k-\dim A_0=\dim A-\dim 0=\dim A-0=\dim A$), this inequality becomes $\sum\limits_{i=1}^{m}\dim V_i\leq\dim A$. This proves Theorem 1.
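To make the sharpness remark at the end of point 3 concrete, here is a small worked example (added for illustration; it is not part of the original answer): for the cyclic group $C_3$ over $k=\mathbb{Q}$ one has

$$\mathbb{Q}\left[C_3\right] \cong \mathbb{Q}[x]/(x^3-1) \cong \mathbb{Q}[x]/(x-1) \times \mathbb{Q}[x]/(x^2+x+1) \cong \mathbb{Q} \times \mathbb{Q}(\zeta_3),$$

so the irreducible $\mathbb{Q}$-representations of $C_3$ have dimensions $1$ and $2 = \left|C_3\right| - 1$, and the bound in 2. is attained.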
-
@DG: This is the correct answer: no, for any nontrivial finite group and any ground field $k$. May I ask you to edit your answer to make it more concise (now that I have edited the question)? A simple question deserves a correspondingly simple answer, if possible. – Pete L. Clark Feb 7 '10 at 18:37
Done. When I wrote the answer, the question was much more ambiguous, so I wanted to cover all possibilities. – darij grinberg Feb 7 '10 at 18:58
Am I the only one who gets very bad math parsing on this post and cannot understand the math because of this? – Dror Speiser Feb 7 '10 at 19:46
The jsMath has a serious problem with math at linebreaks. I added a line break to fix one of the worst above, but at the moment I'm not sure if there's a long term fix in the works. – Ben Webster Feb 7 '10 at 20:16
Hmm, I have never had troubles with reading my post - actually I see no difference between it now and my last version. I'm using FF 3.5.7 (I know, 3.6 is up already). What browsers are you using? – darij grinberg Feb 7 '10 at 20:44
The $\sqrt{|G|}$ bound is true [for algebraically closed fields -- PLC] in any characteristic. In fact, if $R$ is the Jacobson radical of $k[G]$, then $k[G]/R\cong\prod_iM_{n_i}(k)$ where $i$ runs over the irreducible representations and $n_i$ is the degree of the corresponding representation. This gives $\sum_in_i^2=\dim k[G]/R\leq|G|$. In the modular case we always have $R\neq0$, so that in particular we have strict inequality.

Also, for every irreducible representation in characteristic $p$ there is an irreducible representation in characteristic $0$ whose degree is at least the degree of the characteristic $p$ representation: Choose a number field $K$ which is a splitting field for $G$, let $\mathcal{O}$ be its ring of integers, and let further $P$ be a maximal ideal of $\mathcal{O}$ dividing $p$. We may filter $K[G]$ by a Jordan-Hölder filtration $W'_i$ so that $W'_i/W'_{i-1}$ is irreducible. Put $W_i:=W'_i\cap \mathcal{O}[G]$, so that in particular $W_i/W_{i-1}$ is $\mathcal{O}$-torsion free. Hence, reducing modulo $P$, we get a filtration $\overline{W}_i$ of $k[G]$. This filtration can be refined to a Jordan-Hölder filtration, showing that every irreducible $k[G]$-module which appears in any Jordan-Hölder filtration of $k[G]$ must appear in any Jordan-Hölder filtration of some $\overline{W}_i/\overline{W}_{i-1}$, and thus its degree is $\leq \dim(\overline{W}_i/\overline{W}_{i-1})=\mathrm{rank}(W_i/W_{i-1})=\dim(W'_i/W'_{i-1})$. Hence the degree of any irreducible $k[G]$-representation is $\le$ the degree of some irreducible $K[G]$-representation. It is rare (but does happen) that $\overline{W}_i/\overline{W}_{i-1}$ is irreducible in the modular case.
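As a concrete illustration of the $\sum_i n_i^2 \leq |G|$ bound (an illustrative sketch, not part of the answer above): over $\mathbb{C}$, a splitting field of characteristic $0$, the inequality is an equality, and this can be checked directly against the well-known character degrees of a few small groups. The degree lists below are standard textbook data, hard-coded for the check.

```python
# Check sum(n_i^2) = |G| over C (a splitting field of characteristic 0) for a
# few small groups whose irreducible character degrees are well known.
examples = {
    "S3 (|G| = 6)":  (6,  [1, 1, 2]),
    "Q8 (|G| = 8)":  (8,  [1, 1, 1, 1, 2]),
    "S4 (|G| = 24)": (24, [1, 1, 2, 3, 3]),
    "A5 (|G| = 60)": (60, [1, 3, 3, 4, 5]),
}

for name, (order, degrees) in examples.items():
    assert sum(n * n for n in degrees) == order          # sum of squares of degrees
    assert all(n * n <= order for n in degrees)          # each degree is <= sqrt(|G|)
    print(name, degrees, "max degree:", max(degrees), "sqrt(|G|):", order ** 0.5)
```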
In the modular case (i.e., when the characteristic of the field divides the order of the group), it is more interesting to look for indecomposable representations (because there are very few irreps or, as they are known in this context, simple modules: the 'worst' case is that of a $p$-group, which has one simple module!), and those can be as big as you want, in general, as soon as the group is not of finite representation type. This follows from the first Brauer-Thrall conjecture, proved by Maurice Auslander et al.; see, for example, [Ringel, Claus Michael. On algorithms for solving vector space problems. I. Report on the Brauer-Thrall conjectures: Rojter's theorem and the theorem of Nazarova and Rojter. Representation theory, I (Proc. Workshop, Carleton Univ., Ottawa, Ont., 1979), pp. 104--136, Lecture Notes in Math., 831, Springer, Berlin, 1980. MR0607142] and the references therein.
The smallest non-trivial example is the Klein four group $\mathbb Z_2\oplus\mathbb Z_2$ in characteristic two, which is of infinite tame representation type, so has indecomposable modules of arbitrarily high dimension. They have been known for ages, in various forms; they are described nicely, e.g., in [Benson, D. J. Representations and cohomology. I. Basic representation theory of finite groups and associative algebras. Second edition. Cambridge Studies in Advanced Mathematics, 30. Cambridge University Press, Cambridge, 1998. xii+246 pp. MR1644252].
I just want to add that for the $\sqrt{|G|}$ bound, we don't need the field to be algebraically closed, we only require that it be a splitting field for the group (i.e., all irreducible representations over the algebraic closure are realized over the field). For a finite group of exponent $d$, any field over which $x^d - 1$ splits completely into distinct linear factors is a splitting field for the group. (Such fields are called "sufficiently large fields" in some places.) The converse doesn't hold -- there could be splitting fields that are not sufficiently large.
If the field is not a splitting field for the group, but has a degree $r$ extension that is a splitting field for the group, then the degrees of irreducible representations are bounded by $r\sqrt{|G|}$, because each irreducible representation over the field can split into at most $r$ irreducibles when we pass to the splitting field.
So, over the real numbers, the degree of any irreducible representation is at most $2\sqrt{|G|}$, which is smaller than $|G| - 1$ for groups of size greater than five.
On the other hand, over the rational numbers, the cyclic group of prime order has an irreducible representation of degree equal to the order minus one, so the bound is tight for the rationals.
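The last claim — that the cyclic group of prime order $p$ has a rational irreducible representation of degree $p-1$ — comes from the $p$-th cyclotomic polynomial. A small illustrative check (not part of the answer above) factors $x^p-1$ over $\mathbb{Q}$ with sympy and reads off the degrees:

```python
# Over Q, x^p - 1 factors as (x - 1) times the p-th cyclotomic polynomial,
# which is irreducible of degree p - 1; that factor gives the degree-(p-1)
# irreducible rational representation of Z/pZ.
import sympy as sp

x = sp.symbols('x')
for p in [3, 5, 7, 11]:
    factors = sp.factor_list(x**p - 1)[1]        # list of (factor, multiplicity) pairs
    degrees = sorted(sp.degree(f, x) for f, _ in factors)
    print(p, degrees)                            # [1, p - 1] in each case
```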
https://dept.atmos.ucla.edu/alexhall/publications/current-gcms-unrealistic-negative-feedback-arctic | Current GCMs' unrealistic negative feedback in the Arctic
Citation:
Boé, J., A. Hall, and X. Qu. 2009. “Current GCMs' unrealistic negative feedback in the Arctic.” Journal of Climate 22: 4682–4695.
Abstract:
The large spread of the response to anthropogenic forcing simulated by state-of-the-art climate models in the Arctic is investigated. A feedback analysis framework specific to the Arctic is developed to address this issue. The feedback analysis shows that a large part of the spread of Arctic climate change is explained by the longwave feedback parameter. The large spread of the negative longwave feedback parameter is in turn mainly due to variations in temperature feedback. The vertical temperature structure of the atmosphere in the Arctic, characterized by a surface inversion during wintertime, exerts a strong control on the temperature feedback and consequently on simulated Arctic climate change. Most current climate models likely overestimate the climatological strength of the inversion, leading to excessive negative longwave feedback. The authors conclude that the models’ near-equilibrium response to anthropogenic forcing is generally too small.
https://cs.stackexchange.com/questions/49324/solving-recurrences-by-substitution-method-why-can-i-introduce-new-constants | # Solving recurrences by substitution method: why can I introduce new constants?
I am solving an exercise from the book of Cormen et al. (Introduction To Algorithms). The task is:
Show that the solution of $T(n) = T(\lceil n/2\rceil) + 1$ is $O(\lg n)$.
So, by the big-O definition I need to prove that, for some $c$,
$$T(n) \le c\lg n\,.$$
My take on it was: \begin{align*} T(n) &\leq c\lg(\lceil n/2\rceil) + 1 \\ &< c\lg(n/2 + 1) + 1 \\ &= c\lg(n+2) - c + 1\,. \end{align*} As this doesn't seem satisfactory I looked up a solution and the author after getting to the same stage as me decided to introduce a new arbitrary constant $d$: \begin{align*} T(n) &\le c\lg(\lceil n/2-d\rceil) + 1 \\ &< c\lg(n/2+1-d) + 1 \\ &< c\lg((n-2d+2)/2) + 1 \\ &= c\lg(n-2d+2) - c + 1 \\ &= c\lg(n-d-(d-2)) - c + 1\,. \end{align*}
And now, for $d \ge 2$, $c \ge 1$ and $n > d$,
$$c\lg(n-d-(d-2)) - c + 1 \le c\lg(n-d)\,.$$
What I don't understand is how does it prove that $T(n) \le c\lg n$? Cormen et al. make a big point that you have to prove the exact form of the inductive hypothesis which in this case was $T(n) \le c\lg n$. They then go on to show example similar to one above.
How is that the exact form of the inductive hypothesis? This doesn't seem to fit the big-O definition. When can I omit constants or cheat them away? When is it wrong?
• Please, check if this can help. – kentilla Nov 12 '15 at 20:42
I am guessing that the book wants you to prove the stronger hypothesis $T(n) \leq c\log(n-d)$, which implies that $T(n) \leq c\log(n)$ is also true (because $n \geq n-d$ for $d\geq 0$).
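A quick numeric sanity check (an illustrative sketch, not a proof; the constants $c = 3$ and $d = 2$ are arbitrary choices, not values dictated by the exercise) shows both that the recurrence grows like $\lg n$ and that the strengthened bound $c\lg(n-d)$ sits below the weaker bound $c\lg n$:

```python
# Iterate T(n) = T(ceil(n/2)) + 1 with T(1) = 1 and compare it with the
# strengthened bound c*lg(n - d) and the original bound c*lg(n).
from functools import lru_cache
from math import ceil, log2

@lru_cache(maxsize=None)
def T(n: int) -> int:
    return 1 if n == 1 else T(ceil(n / 2)) + 1

c, d = 3, 2
for n in [4, 5, 8, 13, 100, 1000, 10**6]:
    stronger = c * log2(n - d)         # hypothesis actually proved by induction
    weaker = c * log2(n)               # bound the exercise asks for
    assert T(n) <= stronger <= weaker  # lg(n - d) <= lg(n), so the stronger bound implies the weaker
    print(n, T(n), round(stronger, 2), round(weaker, 2))
```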
http://mathhelpforum.com/number-theory/109760-rational-number-print.html | Rational number
• Oct 22nd 2009, 05:18 PM
spikedpunch
Rational number
I need help proving that between any two real numbers there exists a rational number.
• Oct 22nd 2009, 09:09 PM
tonio
Quote:
Originally Posted by spikedpunch
I need help proving that between any two real numbers there exists a rational number.
$r,s \in \mathbb{Q} \Longrightarrow x:=\frac{r+s}{2} \in \mathbb{Q}\,,\,\,so...$
Tonio
• Oct 23rd 2009, 07:36 AM
HallsofIvy
Tonio's proof is that between any two rational numbers, there exists an irrational number. If x is NOT rational, it's a little more complicated.
Suppose that r and s are the given real numbers, with r< s. Again, look at t= (r+s)/2. That is a real number between r and s but may not be rational. Let $\epsilon= t-r= (r+s)/2- 2r/2= (s- r)/2$. There exists an increasing sequence of rational numbers converging to any real number (the sequence obtained by truncating the decimal expansion of the number at the $n^{th}$ decimal place is such a sequence), so there exists an increasing sequence of rational numbers, $\{a_n\}$, converging to t. Since it converges to t, there exists N such that if n> N then $|a_n- t|< \epsilon$. For n> N, $a_n$ is then larger than t- (s- r)/2= (s+r)/2- (s- r)/2= r, and is less than t< s because the sequence is increasing.
• Oct 23rd 2009, 10:07 AM
tonio
Quote:
Originally Posted by HallsofIvy
Tonio's proof is that between any two rational numbers, there exist an irrational number. If x is NOT rational, It's a little more complicated.
$\color{red}\mbox{No. Tonio's proof is that between the two rational numbers r,s there exists}$ $\color{red}\mbox{ another RATIONAL number which I denoted by x}$
$\color{red}\mbox{This in fact is the easy part. To prove there exists an irrational}$ $\color{red}\mbox{between those two is harder}$
$\color{red}Tonio$
• Oct 24th 2009, 12:12 AM
proscientia
All tonio has shown is that between two rational numbers there exists a rational number; this does not imply that between two real numbers there exists a rational number.
More precisely, we want to show that between two distinct real numbers there exists a rational number. Let $x,y\in\mathbb R$ with $x<y.$ Then $0<y-x$ $\implies$ $\exists\,n\in\mathbb N$ with $\frac1n<y-x$ by the Archimedean principle. $\therefore\ 1<ny-nx.$ Let $m=\lfloor nx\rfloor$ (so $m$ is the largest integer such that $m\leqslant nx).$ Then $nx<m+1$ and also $m+1<ny$ (otherwise $m+1\geqslant ny$ $\implies$ $1\geqslant ny-m\geqslant ny-nx>1,$ a contradiction). Hence
$x\ <\ \frac{m+1}n\ <\ y$
and $\frac{m+1}n\in\mathbb Q.$
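The argument above is constructive and can be mirrored directly in code. The sketch below is purely illustrative (it takes floating-point inputs, so it demonstrates the construction rather than proving anything): it picks $n$ with $\frac1n<y-x$ and returns $\frac{m+1}n$ with $m=\lfloor nx\rfloor.$

```python
# Constructive version of the proof: given x < y, produce a rational (m + 1)/n
# strictly between them, with n chosen so that 1/n < y - x and m = floor(n*x).
from fractions import Fraction
from math import floor

def rational_between(x: float, y: float) -> Fraction:
    assert x < y
    n = floor(1 / (y - x)) + 1      # Archimedean step: 1/n < y - x
    m = floor(n * x)                # largest integer with m <= n*x
    return Fraction(m + 1, n)

q = rational_between(3.141592, 3.141593)
print(q, float(q))                  # a rational strictly between the two inputs
assert 3.141592 < q < 3.141593
```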
https://ftp.aimsciences.org/article/doi/10.3934/dcdsb.2006.6.623
# Higher-order accurate Runge-Kutta discontinuous Galerkin methods for a nonlinear Dirac model
• This paper extends Runge-Kutta discontinuous Galerkin (RKDG) methods to a nonlinear Dirac (NLD) model in relativistic quantum physics, and investigates interaction dynamics of corresponding solitary wave solutions. Weak inelastic interaction in ternary collisions is first observed by using high-order accurate schemes on finer meshes. A long-lived oscillating state is formed with an approximate constant frequency in collisions of two standing waves; another is with an increasing frequency in collisions of two moving solitons. We also prove three continuum conservation laws of the NLD model and an entropy inequality, i.e. the total charge non-increasing, of the semi-discrete RKDG methods, which are demonstrated by various numerical examples.
Mathematics Subject Classification: Primary: 65M60, 81Q05; Secondary: 35L60.
http://www.fret.at/category/fluorescence/multiparameter-detection/ | # Calculating FRET lines
A FRET line is a projection of a parameterized model onto a set of single-molecule FRET observables. Such a FRET experiment typically determines jointly the fluorescence-weighted average lifetime of the donor in the presence of an acceptor, $\langle \tau \rangle_f$, and the FRET efficiency, E, for every molecule. Next, a multidimensional frequency histogram of these parameters integrates the detected particles. Often, such histograms are called MFD histograms. The feature image shown at the top of this post displays a set of FRET lines for normally distributed donor-acceptor distances. The red and the blue lines ranging from FRET efficiency zero to one correspond to static FRET lines. Static FRET lines depend on the average donor-acceptor separation distance for a single state. The red line corresponds to “broad” and the blue line to “narrow” donor-acceptor distance distributions. Below I will outline how to calculate a FRET line in ChiSurf.
http://nam2015.org/index.php/thursday/form/23/374-the-evolving-demographics-of-the-red-sequence-since-z-1 | Abstract
The evolving demographics of the red sequence since z=1
Witnessing Disc Galaxy Evolution Through The Eyes of their Stellar Structures
Thomas Melvin
ICG, University of Portsmouth
Galaxy Zoo team
In the local universe we observe a bimodal galaxy population, with galaxies tending to be either part of the blue cloud or the red sequence. As we observe galaxies towards higher redshifts, this bimodality becomes less distinct. Here, we combine photometric and spectroscopic data from the Cosmic Evolution Survey (COSMOS) with visual morphological classifications from Galaxy Zoo: Hubble to explore how the demographics of the red sequence have evolved over the last eight billion years. Since z=1, the fraction of galaxies in the red sequence has remained steady (~30-40% of all galaxies). However, the red sequence has become almost a magnitude redder over this time period, with the mean colour rising from 4.8 at z=1.0 to 5.5 at z=0.2. The morphologies of the red sequence over this time have also changed, with the red sequence consisting of more disks, and barred disks, as the universe has aged. We propose that this growth in red disk galaxies is due to the disk population becoming more mature, and disk-dominated by secular evolution (i.e. through stellar bars). This work indicates that not all galaxies in the red sequence are ellipticals formed by a major merger process, and thus slower processes, like secular evolution, can move galaxies onto the red sequence in ways which do not drastically alter their morphologies.
Schedule
13:30 - 15:00
14:15
Thursday
https://www.physicsforums.com/threads/a-way-to-see-lethes-notation.3762/ | # A way to see Lethe's notation
#### marcus
I have been unable to read the differential forms thread but wanted to, and now I have discovered how to make it legible on my browser. Lethe has been using Times Roman font and for some reason most of the symbols come thru as boxes for me in that font. So as an experiment I have quoted a Lethe post and removed the Font specification, setting that back to default. Most of the symbols now come thru for me altho a couple I see here (&sdot;, &nabla;) still do not.
I'm wondering if anyone else was discouraged earlier by seeing all those boxes and not being able to tell what symbols they stood for.
Originally posted by lethe in diff forms thread
at this point, i will stop using classical vector notation. i will write the directional derivative as vμ∂μƒ, where ∂μ is shorthand for ∂/∂xμ, and xμ is one of the coordinates, and μ is a number that ranges over the number of dimensions of the manifold, from 0 to n-1 usually. so there will be n different coordinates for an n dimensional manifold. and vμ is going to be associated with the μ-th component of the vector, to be defined. and even though i didn t write it, i meant for that to be a summation: v⋅∇ƒ = ∑ vμ∂μƒ = vμ∂μƒ. i just leave off the ∑ from now on. every time you see an equation with the same letter as a superscript and a subscript, you should sum over that index.
......
we define the vector to be that operator. this is how it operates on a function:
v(ƒ) = vμ∂μƒ (2)
since this is independent of the function that i want to operator on, let me just write the vector operator:
v = vμ∂μ (3)
and this is the point of this post. a tangent vector is defined to be/associated with/thought of as a differential operator. v is the vector, and vμ are the coordinate components of the vector, and ∂μ are the coordinate basis vectors of the tangent space. the vector itself is coordinate independent, but the components are not, ....
OK, it should be easy to show that the set of tangent vectors, thusly defined, satisfy the axioms of the vector space. i will call this vector space TMp. that is, the tangent space to the manifold M at the point p is TMp. for an n dimensional manifold, the tangent space is always an n dimensional vector space.
......
Last edited:
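As an aside not from the original thread: the formula v(ƒ) = vμ∂μƒ quoted above is easy to experiment with symbolically. The sketch below uses sympy, with arbitrary example components vμ and an arbitrary test function ƒ.

```python
# A tangent vector acting as the differential operator v = v^mu * d/dx^mu,
# evaluated on a test function; the implied Einstein summation is written out.
import sympy as sp

x0, x1, x2 = coords = sp.symbols('x0 x1 x2')
v_components = (1, 2, -3)              # arbitrary example components v^mu
f = x0**2 * x1 + sp.sin(x2)            # arbitrary test function

vf = sum(v * sp.diff(f, xm) for v, xm in zip(v_components, coords))
print(vf)                              # 2*x0**2 + 2*x0*x1 - 3*cos(x2) (up to term ordering)
```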
#### lethe
so do you think times new roman is poor choice for font? i chose that because certain symbols, like pi, display poorly in the default font. i also upped the size, which might make a difference.
maybe i should change the font/size?
#### marcus
Originally posted by lethe
so do you think times new roman is poor choice for font? i chose that because certain symbols, like pi, display poorly in the default font. i also upped the size, which might make a difference.
maybe i should change the font/size?
If I dont change my browser and you dont change your font then when I first open one of your posts I will see a lot of boxes.
This is not the end of the world or even a particularly bad thing.
I can adapt to this situation as I just have in this example by
reposting in the default font!
I am assuming that no one else has the problem I do----everyone else who wants to read the post can do so in your chosen Times New Roman, with things like ∇ and ⋅ (which I will see as boxes).
BTW I also do not like certain features of the default font----the pi does not look pi-like and the theta looks like an ugly number 8.
But nothing is perfect!
Why not carry on, Lethe, with your chosen font and size, and let me cope by reposting certain passages in the default font so that I can read them? A little occasional repetition in another font seems harmless enough. Or?
I like your style of writing---it could be a really useful thread as originally planned---I would say go for it.
If anyone tries to distract you with some kind of compensatory know-it-all syndrome just put them on the ignore list. The original idea is a basic introductory treatment, right?
http://madipedia.de/wiki/Computereinsatz_und_Simulation_als_Instrumente_eines_Problemorientierten_Mathematikunterrichts_-_Strategien,_%C3%9Cberlegungen,_Erfahrungen | # Computereinsatz und Simulation als Instrumente eines Problemorientierten Mathematikunterrichts - Strategien, Überlegungen, Erfahrungen
Robert Müller (1984): Computereinsatz und Simulation als Instrumente eines Problemorientierten Mathematikunterrichts - Strategien, Überlegungen, Erfahrungen. Dissertation, Universität Wien.
Refereed by Hans-Christian Reichel and Siegfried Grosser.
## Abstract
The triumphal march of the computer has not stopped at the schools’ gates. This fact necessitates both an investigation of the way in which the computer influences the educational system and a rethinking of the position of the formal sciences. This thesis tries to make a practical contribution to mathematical education. The synopsis of some critiques as well as an analysis of the demands of the formal sciences and mathematical teaching gave rise to the following theory: “The deficiencies in our usual way of teaching chiefly come from an intolerable abridgement of the process of cognition”. Such an attitude seems rather problematic, however, if faced with the opinion which descends from the connection between an ‘evolutionary’ theory of cognition and the methods of the ‘formal’ sciences: “‘mathematics’ has to be the simplification as well as the consequent continuation and generalisation of the fundamental principles of our way of gaining cognition.” Therefore – at the same time at the theoretical and practical levels – the core of this thesis is to prove that the computer and the simulation method are instruments which give us a greater chance to show and pursue mathematics in accordance with its real nature.
https://www.physicsforums.com/threads/circular-motion-ratio-of-velocities.658421/ | # Circular Motion (ratio of velocities)
1. Dec 11, 2012
### Qualenal
1. The problem statement, all variables and given/known data
Two small planets are moving in circular orbits around the same star. If the radius of the orbit of planet A is 4 times the radius of the orbit of planet B, find the ratio of their speeds vA/vB.
2. Relevant equations
Not really sure but
v=omega*r
a(centripetal)=v^2/r
UCM T=2*pi*r/v
3. The attempt at a solution
I honestly have no idea how to do this problem. I know the answer is 0.5; I just want to know how to get there!
2. Dec 11, 2012
### Staff: Mentor
Have a look at Kepler's third law. It may give you a starting point.
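A sketch of the standard route to that answer (an assumed approach, not spelled out in the thread): for a circular orbit, gravity supplies the centripetal force, so GM/r^2 = v^2/r and hence v = sqrt(GM/r); the ratio vA/vB then depends only on the radii.

```python
# For a circular orbit, G*M*m/r**2 = m*v**2/r  =>  v = sqrt(G*M/r),
# so v_A/v_B = sqrt(r_B/r_A), independent of the value of G*M.
from math import sqrt

def orbital_speed(GM: float, r: float) -> float:
    return sqrt(GM / r)

GM = 1.0                 # arbitrary units; it cancels in the ratio
r_B = 1.0
r_A = 4.0 * r_B

ratio = orbital_speed(GM, r_A) / orbital_speed(GM, r_B)
print(ratio)             # 0.5 = sqrt(r_B / r_A)
```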
http://mathoverflow.net/questions/121345/when-are-the-smooth-sections-of-a-bundle-generated-as-a-module-over-smooth-func?sort=oldest | # When are the Smooth Sections of a Bundle Generated as a Module (over Smooth Functions) by the Holomorphic Sections
For a holomorphic vector bundle $E$ over a complex manifold $M$, we denote its space of smooth sections by $\Gamma^{\infty}(E)$, and its space of holomorphic sections by $\Gamma^{hol}(E)$. Now I've been looking at the line bundles $L_k$ over the complex projective spaces ${\bf C} P^N$, and I have managed to show that $\Gamma^{\infty}(L_k)$ is generated as a $C^{\infty}({\bf C} P^N)$-module by $\Gamma^{hol}(L_k)$, which is to say that every element of $\Gamma^{\infty}(L_k)$ is a sum of elements of the form $ef$, where $e \in \Gamma^{hol}(L_k)$, and $f \in C^{\infty}({\bf C} P^N)$.
I am guessing that this result is extremely well known, and an example of a well understood general phenomenon. So I would like to ask if there is a characterization of the manifolds for which this result holds, for both the case of line bundles alone, and holomorphic vector bundles of general dimension?
Swan has proved that taking global sections gives an equivalence between $C^{\infty}$ vector bundles on $M$ and finitely generated projective $C^{\infty}(M)$-modules; this correspondence is functorial in $M$. Hence a set of sections of a $C^{\infty}$ vector bundle $E$ on $M$ generates $\Gamma^{\infty}(E)$ if and only if it generates each fiber of $E$. So if $E$ is holomorphic, the holomorphic sections generate if and only if $E$ is globally generated. In particular, this is always true if $M$ is Stein. In the case of $L_k$ on $\mathbb{CP}^N$, this is true if and only if $k \geq 0$.
Sorry, but I would like to get this totally clear for my mind. What you are saying is that there exists for $v \in E_p$, for any $p \in M$, an element $s \in \Gamma^{\infty}(E)$ of the smooth sections (or global sections as you call them), such that $v = s(p)$. Is this correct? – Jean Delinez Feb 11 '13 at 16:06
.... but I can't see why every vector is not globally generated. Surely, for $v \in E_p$ as above, we have a local section for which $s(p) = v$, why can one not just use a partition of unity argument to extend $s$ to a global section, and hence conclude that your vector bundle is globally generated? I can't see the problem in my logic here. – Jean Delinez Feb 11 '13 at 16:13
A holomorphic vector bundle is globally generated when for every point $p$ of $M$ and every $v \in E_p$ there exists a holomorphic global section $s$ with $s(p) = v$. There are lots of holomorphic vector bundles which have no non-zero sections, for example, the line bundles on $\mathbb{CP}^N$ with negative degree, so they can't be globally generated. – Angelo Feb 11 '13 at 17:08
https://infoscience.epfl.ch/record/204380 | Journal article
# Methyl proton contacts obtained using heteronuclear through-bond transfers in solid-state NMR spectroscopy
A two-dimensional proton-mediated carbon-carbon correlation experiment that relies on through-bond heteronuclear magnetization transfers is demonstrated in the context of solid-state NMR of proteins. This new experiment, dubbed J-CHHC by analogy to the previously developed dipolar CHHC techniques, is shown to provide selective and sensitive correlations in the methyl region of 2D spectra of crystalline organic compounds. The method is then demonstrated on a microcrystalline sample of the dimeric protein Crh (2 x 10.4 kDa). A total of 34 new proton-proton contacts involving side-chain methyl groups were observed in the J-CHHC spectrum, which had not been observed with the conventional experiment. The contacts were then used as additional distance restraints for the 3D structure determination of this microcrystalline protein. Upon addition of these new distance restraints, which are in large part located in the hydrophobic core of the protein, the root-mean-square deviation with respect to the X-ray structure of the backbone atom coordinates of the 10 best conformers of the new ensemble of structures is reduced from 1.8 to 1.1 angstrom.
https://en.wikipedia.org/wiki/Incidence_(geometry) | # Incidence (geometry)
In geometry, an incidence relation is a binary relation between different types of objects that captures the idea being expressed when phrases such as "a point lies on a line" or "a line is contained in a plane" are used. The most basic incidence relation is that between a point, P, and a line, l, sometimes denoted P I l. If P I l the pair (P, l) is called a flag. There are many expressions used in common language to describe incidence (for example, a line passes through a point, a point lies in a plane, etc.) but the term "incidence" is preferred because it does not have the additional connotations that these other terms have, and it can be used in a symmetric manner, reflecting this property of the relation. Statements such as "line l1 intersects line l2" are also statements about incidence relations, but in this case, it is because this is a shorthand way of saying that "there exists a point P that is incident with both line l1 and line l2". When one type of object can be thought of as a set of the other type of object (viz., a plane is a set of points) then an incidence relation may be viewed as containment.
Statements such as "any two lines in a plane meet" are called incidence propositions. This particular statement is true in a projective plane, though not true in the Euclidean plane where lines may be parallel. Historically, projective geometry was developed in order to make the propositions of incidence true without exceptions, such as those caused by the existence of parallels. From the point of view of synthetic geometry, projective geometry should be developed using such propositions as axioms. This is most significant for projective planes due to the universal validity of Desargues' theorem in higher dimensions.
In contrast, the analytic approach is to define projective space based on linear algebra and utilizing homogeneous co-ordinates. The propositions of incidence are derived from the following basic result on vector spaces: given subspaces U and W of a (finite dimensional) vector space V, the dimension of their intersection is dim U + dim W − dim (U + W). Bearing in mind that the geometric dimension of the projective space P(V) associated to V is dim V − 1 and that the geometric dimension of any subspace is positive, the basic proposition of incidence in this setting can take the form: linear subspaces L and M of projective space P meet provided dim L + dim M ≥ dim P.[1]
The following sections are limited to projective planes defined over fields, often denoted by PG(2, F), where F is a field, or P2F. However these computations can be naturally extended to higher dimensional projective spaces and the field may be replaced by a division ring (or skewfield) provided that one pays attention to the fact that multiplication is not commutative in that case.
## PG(2,F)
Let V be the three dimensional vector space defined over the field F. The projective plane P(V) = PG(2, F) consists of the one dimensional vector subspaces of V called points and the two dimensional vector subspaces of V called lines. Incidence of a point and a line is given by containment of the one dimensional subspace in the two dimensional subspace.
Fix a basis for V so that we may describe its vectors as coordinate triples (with respect to that basis). A one dimensional vector subspace consists of a non-zero vector and all of its scalar multiples. The non-zero scalar multiples, written as coordinate triples, are the homogeneous coordinates of the given point, called point coordinates. With respect to this basis, the solution space of a single linear equation {(x, y, z) | ax + by + cz = 0} is a two dimensional subspace of V, and hence a line of P(V). This line may be denoted by line coordinates [a, b, c] which are also homogeneous coordinates since non-zero scalar multiples would give the same line. Other notations are also widely used. Point coordinates may be written as column vectors, (x, y, z)T, with colons, (x : y : z), or with a subscript, (x, y, z)P. Correspondingly, line coordinates may be written as row vectors, (a, b, c), with colons, [a : b : c] or with a subscript, (a, b, c)L. Other variations are also possible.
## Incidence expressed algebraically
Given a point P = (x, y, z) and a line l = [a, b, c], written in terms of point and line coordinates, the point is incident with the line (often written as P I l), if and only if,
ax + by + cz = 0.
This can be expressed in other notations as:
$ax + by + cz = [a,b,c] \cdot (x,y,z) = (a,b,c)_L \cdot (x,y,z)_P = [a:b:c] \cdot (x:y:z) = (a,b,c) \left( \begin{matrix} x \\ y \\ z \end{matrix} \right) = 0.$
No matter what notation is employed, when the homogeneous coordinates of the point and line are just considered as ordered triples, their incidence is expressed as having their dot product equal 0.
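As a small illustration (not part of the article), the incidence test is literally a dot product of the coordinate triples; plain Python over the integers is enough:

```python
# Incidence test in PG(2, F): the point (x : y : z) lies on the line [a : b : c]
# exactly when a*x + b*y + c*z = 0.
def incident(line, point) -> bool:
    a, b, c = line
    x, y, z = point
    return a * x + b * y + c * z == 0

# The line [1, 1, -1] (i.e. x + y - z = 0) and two sample points:
print(incident((1, 1, -1), (1, 1, 2)))   # True:  1 + 1 - 2 = 0
print(incident((1, 1, -1), (1, 0, 0)))   # False: 1 + 0 - 0 = 1
```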
## The line incident with a pair of distinct points
Let P1 and P2 be a pair of distinct points with homogeneous coordinates (x1, y1, z1) and (x2, y2, z2) respectively. These points determine a unique line l with an equation of the form ax + by + cz = 0 and must satisfy the equations:
ax1 + by1 + cz1 = 0 and
ax2 + by2 + cz2 = 0.
In matrix form this system of simultaneous linear equations can be expressed as:
$\left( \begin{matrix} x & y & z \\ x_1 & y_1 & z_1 \\ x_2 & y_2 & z_2 \end{matrix} \right) \left( \begin{matrix} a \\ b \\ c \end{matrix} \right) = \left( \begin{matrix} 0 \\ 0 \\ 0 \end{matrix} \right).$
This system has a nontrivial solution if and only if the determinant,
$\left| \begin{matrix} x & y & z \\ x_1 & y_1 & z_1 \\x_2 & y_2 & z_2 \end{matrix} \right| = 0.$
Expansion of this determinantal equation produces a homogeneous linear equation which must be the equation of line l. Therefore, up to a common non-zero constant factor we have l = [a, b, c] where:
a = y1z2 - y2z1,
b = x2z1 - x1z2, and
c = x1y2 - x2y1.
In terms of the scalar triple product notation for vectors, the equation of this line may be written as:
P ⋅ P1 × P2 = 0,
where P = (x, y, z) is a generic point.
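Again as an illustration (not part of the article), the line joining two points is just the cross product of their coordinate triples, and the collinearity test of the next subsection follows by dotting a third point into that line:

```python
# Join of two points in PG(2, F): the line through P1 and P2 has line
# coordinates P1 x P2; a third point is collinear with them exactly when it is
# incident with that line.
import numpy as np

P1 = np.array([1, 0, 1])
P2 = np.array([0, 1, 1])

line = np.cross(P1, P2)
print(line)                                      # [-1 -1  1], the line -x - y + z = 0

P3 = np.array([1, 1, 2])                         # should be collinear with P1 and P2
print(np.dot(line, P3) == 0)                     # True
print(np.dot(line, np.array([1, 1, 1])) == 0)    # False: (1 : 1 : 1) is not on the line
```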
### Collinearity
Main article: Collinear
Points which are incident with the same line are said to be collinear. The set of all points incident with the same line is called a range.
If P1 = (x1, y1, z1), P2 = (x2, y2, z2), and P3 = (x3, y3, z3), then these points are collinear if and only if
$\left| \begin{matrix} x_1 & y_1 & z_1 \\ x_2 & y_2 & z_2 \\ x_3 & y_3 & z_3 \end{matrix} \right| = 0,$
i.e., if and only if the determinant of the homogeneous coordinates of the points is equal to zero.
## Intersection of a pair of lines
Let l1 = [a1, b1, c1] and l2 = [a2, b2, c2] be a pair of distinct lines. Then the intersection of lines l1 and l2 is the point P = (x0, y0, z0) that is the simultaneous solution (up to a scalar factor) of the system of linear equations:
a1x + b1y + c1z = 0 and
a2x + b2y + c2z = 0.
The solution of this system gives:
x0 = b1c2 - b2c1,
y0 = a2c1 - a1c2, and
z0 = a1b2 - a2b1.
Alternatively, consider another line l = [a, b, c] passing through the point P, that is, the homogeneous coordinates of P satisfy the equation:
ax+ by + cz = 0.
Combining this equation with the two that define P, we can seek a non-trivial solution of the matrix equation:
$\left( \begin{matrix} a & b & c \\ a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \end{matrix} \right) \left( \begin{matrix} x \\ y \\ z \end{matrix} \right) = \left( \begin{matrix} 0 \\ 0 \\ 0 \end{matrix} \right).$
Such a solution exists provided the determinant,
$\left| \begin{matrix} a & b & c \\ a_1 & b_1 & c_1 \\a_2 & b_2 & c_2 \end{matrix} \right| = 0.$
The coefficients of a, b and c in this equation give the homogeneous coordinates of P.
The equation of the generic line passing through the point P in scalar triple product notation is:
l ⋅ l1 × l2 = 0.
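The dual computation can be illustrated the same way (again, not part of the article): the intersection of two lines is the cross product of their line coordinates, and three lines are concurrent exactly when the determinant of their coordinates vanishes, as stated in the next subsection.

```python
# Meet of two lines in PG(2, F): the common point of l1 and l2 is l1 x l2;
# three lines are concurrent exactly when det([l1, l2, l3]) = 0.
import numpy as np

l1 = np.array([1, 0, -1])                        # x - z = 0
l2 = np.array([0, 1, -1])                        # y - z = 0

P = np.cross(l1, l2)
print(P)                                         # [1 1 1]: the lines meet at (1 : 1 : 1)

l3 = np.array([1, 1, -2])                        # x + y - 2z = 0 also passes through (1 : 1 : 1)
M = np.array([l1, l2, l3])
print(round(float(np.linalg.det(M)), 10))        # 0.0, so the three lines are concurrent
```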
### Concurrence
Lines that meet at the same point are said to be concurrent. The set of all lines in a plane incident with the same point is called a pencil of lines centered at that point. The computation of the intersection of two lines shows that the entire pencil of lines centered at a point is determined by any two of the lines that intersect at that point. It immediately follows that the algebraic condition for three lines, [a1, b1, c1], [a2, b2, c2], [a3, b3, c3] to be concurrent is that the determinant,
$\left| \begin{matrix} a_1 & b_1 & c_1 \\a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{matrix} \right| = 0.$
http://mathhelpforum.com/advanced-math-topics/204527-basis-second-degree-polynomials.html | # Math Help - Basis of second degree polynomials
1. ## Basis of second degree polynomials
Find a basis for the set of quadratic polynomials p(x) that satisfy p(1)=0.
2. ## Re: Basis of second degree polynomials
Hey christianwos.
Is this an orthogonal basis in terms of a vector space of three two degree polynomials?
Also, if this is the above type of basis, you need to specify the range of the polynomial if you wish to use an integral as the definition of the inner product.
3. ## Re: Basis of second degree polynomials
Originally Posted by christianwos
Find a basis for the set of quadratic polynomials p(x) that satisfy p(1)=0.
Basis implies it's a vector space (or module). But it's obviously not, since if f and -f are such quadratics, then, if it were a vector space, f+(-f) = 0 would again be such a quadratic - and it clearly is not. I assume what's intended is "Find a basis for the set of polynomials p of degree <= 2 such that p(1) = 0". Also, the base field/ring is unspecified. I'll assume it's just some arbitrary field F.
Have: "Let V = {p in F[x] | deg(p) <=2 and p(1) = 0 }. Then V is a vector space over F. Find a basis for V."
It's easy enough to show that V actually is a vector space over F.
Main observation is: V = {(ax+b)(x-1) in F[x] | a, b in F}, which includes all the degenerate cases (a=0, or a=b=0).
So have p in V iff p(x) = (ax+b)(x-1) = a(x)(x-1) + b(x-1) = a(x^2-x) + b(x-1) for some a, b in F.
Thus p in V iff p(x) = ar(x) + bs(x) for some a, b in F, where r(x) = x^2 - x and s(x) = x-1.
Therefore V = LS{ r, s }.
If a, b in F and ar(x) + bs(x) = 0 in F[x], then a(x^2-x) + b(x-1) = ax^2 + (b-a)x -b is the 0 polynomial in F[x],
which implies that a = b = 0.
Therefore r and s are linearly independent over F.
Therefore { r, s } is a basis for V.
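A quick symbolic check of this answer (illustrative only, over the rationals, using sympy) confirms both linear independence and spanning:

```python
# Check that r(x) = x^2 - x and s(x) = x - 1 are linearly independent and span
# the polynomials of degree <= 2 that vanish at x = 1.
import sympy as sp

x, a, b, p0, p1, p2 = sp.symbols('x a b p0 p1 p2')

r = x**2 - x
s = x - 1

# Linear independence: a*r + b*s = 0 as a polynomial forces a = b = 0.
coeffs = sp.Poly(a*r + b*s, x).all_coeffs()
print(sp.solve(coeffs, [a, b]))                  # {a: 0, b: 0}

# Spanning: any p = p2*x^2 + p1*x + p0 with p(1) = 0 equals p2*r + (p1 + p2)*s.
p = p2*x**2 + p1*x + p0
p_on_V = p.subs(p0, sp.solve(sp.Eq(p.subs(x, 1), 0), p0)[0])   # impose p(1) = 0
print(sp.simplify(p_on_V - (p2*r + (p1 + p2)*s)))              # 0
```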
http://nrich.maths.org/844/clue | ### Roaming Rhombus
We have four rods of equal lengths hinged at their endpoints to form a rhombus ABCD. Keeping AB fixed we allow CD to take all possible positions in the plane. What is the locus (or path) of the point D?
### Quad in Quad
The points P, Q, R and S are the midpoints of the edges of a convex quadrilateral. What do you notice about the quadrilateral PQRS as the convex quadrilateral changes?
### Arrowhead
The points P, Q, R and S are the midpoints of the edges of a non-convex quadrilateral. What do you notice about the quadrilateral PQRS and its area?
# Similarly So
##### Stage: 4 Challenge Level:
Look for similar and isosceles triangles.
http://en.wikipedia.org/wiki/Quantum_reflection | # Quantum reflection
Quantum reflection is a physical phenomenon involving the reflection of a matter wave from an attractive potential. In classical mechanics, such a phenomenon is not possible; for instance when one magnet is pulled toward another, the observer does not expect one of the magnets to suddenly (i.e. before the magnets 'touch') turn around and retreat in the opposite direction.
## Definition
Quantum reflection became an important branch of physics in the 21st century. In a workshop about quantum reflection,[1] the following definition of quantum reflection was suggested:
Quantum reflection is a classically counterintuitive phenomenon whereby the motion of particles is reverted "against the force" acting on them. This effect manifests the wave nature of particles and influences collisions of ultracold atoms and interaction of atoms with solid surfaces.
Observation of quantum reflection has become possible thanks to recent advances in trapping and cooling atoms.
## Reflection of slow atoms
Although the principles of quantum mechanics apply to any particles, usually the term "quantum reflection" means reflection of atoms from a surface of condensed matter (liquid or solid). The full potential experienced by the incident atom does become repulsive at a very small distance from the surface (of order of size of atoms). This is when the atom becomes aware of the discrete character of material. This repulsion is responsible for the classical scattering one would expect for particles incident on a surface. Such scattering is diffuse rather than specular, and so this component of the reflection is easy to distinguish. Indeed to reduce this part of the physical process, a grazing angle of incidence is used; this enhances the quantum reflection. This requirement of small incident velocities for the particles means that the non-relativistic approximation to quantum mechanics is all that is required.
## Single-dimensional approximation
So far, one usually considers the single-dimensional case of this phenomenon, that is when the potential has translational symmetry in two directions (say $y$ and $z$), such that only a single coordinate (say $x$) is important. In this case one can examine the specular reflection of a slow neutral atom from a solid state surface .[2][3] Where one has an atom in a region of free space close to a material capable of being polarized, a combination of the pure van der Waals interaction, and the related Casimir-Polder interaction attracts the atom to the surface of the material. The latter force dominates when the atom is comparatively far from the surface, and the former when the atom comes closer to the surface. The intermediate region is controversial as it is dependent upon the specific nature and quantum state of the incident atom.
The condition for a reflection to occur as the atom experiences the attractive potential can be given by the presence of regions of space where the WKB approximation to the atomic wave-function breaks down. If, in accordance with this approximation we write the wavelength of the gross motion of the atom system toward the surface as a quantity local to every region along the $x$ axis,
$\lambda\left(x\right)=\frac{h}{\sqrt{2m\left(E-V\left(x\right)\right)}}$
where $m$ is the atomic mass, $~E~$ is its energy, and $~V(x)~$ is the potential it experiences, then it is clear that we cannot give meaning to this quantity where,
$\left|\frac{d\lambda\left(x\right)}{dx}\right|\sim 1$
That is, in regions of space where the variation of the atomic wavelength is significant over its own length (i.e. the gradient of $V(x)$ is steep), there is no meaning in the approximation of a local wavelength. This breakdown occurs irrespective of the sign of the potential, $~V(x)~$. In such regions part of the incident atom wave-function may become reflected. Such a reflection may occur for slow atoms experiencing the comparatively rapid variation of the van der Waals potential near the material surface. This is just the same kind of phenomenon as occurs when light passes from a material of one refractive index to another of a significantly different index over a small region of space. Irrespective of the sign of the difference in index, there will be a reflected component of the light from the interface. Indeed, quantum reflection from the surface of solid-state wafer allows one to make the quantum optical analogue of a mirror - the atomic mirror - to a high precision.
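To make the breakdown criterion concrete, here is a small numerical sketch (illustrative only: the van der Waals tail $V(x)=-C_3/x^3$, the value of $C_3$, and the normal velocity are assumed example values, not taken from the cited experiments):

```python
import numpy as np

# Sketch: where does the WKB "local wavelength" picture break down?
# Model potential V(x) = -C3/x^3 with an assumed, typical-order C3.
h = 6.626e-34                      # Planck constant [J s]
m = 20.18 * 1.6605e-27             # mass of a Ne atom [kg]
E = 0.5 * m * 0.03**2              # normal kinetic energy for ~3 cm/s normal velocity [J]
C3 = 1e-49                         # assumed van der Waals coefficient [J m^3]

x = np.linspace(5e-9, 500e-9, 2000)          # distance from the surface [m]
lam = h / np.sqrt(2 * m * (E + C3 / x**3))   # local wavelength for V(x) = -C3/x^3
grad = np.abs(np.gradient(lam, x))           # |d lambda / d x|

# Distance at which |d lambda/dx| is closest to 1, i.e. where WKB fails
print(x[np.argmin(np.abs(grad - 1.0))])
```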
## Experiments with grazing incidence
Fig. A. Observation of quantum reflection at grazing incidence
In practice, many experiments on quantum reflection from Si use a grazing incidence angle (figure A). The set-up is mounted in a vacuum chamber to provide a free path of several meters for the atoms; a good vacuum (at the level of 10−7 Torr or 130 µPa) is required. A magneto-optical trap (MOT) is used to collect cold atoms, usually excited He or Ne, approximating a point-like source of atoms. The excitation of the atoms is not essential for quantum reflection, but it allows efficient trapping and cooling using optical frequencies. In addition, the excitation of atoms allows their registration at the micro-channel plate (MCP) detector (bottom of the figure). Movable edges are used to stop atoms which do not travel toward the sample (for example a Si plate), providing a collimated atomic beam. A He-Ne laser was used to control the orientation of the sample and to measure the grazing angle $~\theta~$. At the MCP, one observes a relatively intense strip of atoms which come directly (without reflection) from the MOT, by-passing the sample, a strong shadow of the sample (the thickness of this shadow could be used for a rough check of the grazing angle), and a relatively weak strip produced by the reflected atoms. The ratio $~r~$ of the density of atoms registered at the center of this strip to the density of atoms in the directly illuminated region was taken as the efficiency of quantum reflection, i.e., the reflectivity. This reflectivity depends strongly on the grazing angle and the speed of the atoms.
In the experiments with Ne, the atoms usually just fall down when the MOT is suddenly switched off. Then, the speed of the atoms is determined as $~v=\sqrt{2gh}~$, where $~g~$ is the acceleration of free fall, and $~h~$ is the distance from the MOT to the sample. In the experiments described, this distance was of the order of 0.5 meters (2 ft), giving a speed of the order of 3 m/s (6.7 mph; 11 km/h). Then, the transverse wavenumber can be calculated as $~k=\sin(\theta)\frac{mv}{\hbar}~$, where $~m~$ is the mass of the atom, and $\hbar$ is the reduced Planck constant.
In the case of He, an additional resonant laser could be used to release the atoms and give them an additional velocity; the delay between the release of the atoms and their registration allowed this additional velocity to be estimated: roughly, $~v=\frac{h}{t}~$, where $~t~$ is the time delay between the release of the atoms and the click at the detector. In practice, $v$ could vary from 20 to 130 m/s (45 to 291 mph; 72 to 468 km/h).[4][5][6]
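As a rough numerical illustration of the formulas above (a sketch only; the grazing angle is an assumed example value, and the atomic mass used is that of Ne):

```python
import math

# Free-fall speed of Ne* atoms dropped from the MOT (h ~ 0.5 m, as above)
g, h = 9.81, 0.5                    # m/s^2, m
v = math.sqrt(2 * g * h)            # ~3 m/s, matching the order quoted above

# Transverse wavenumber k = sin(theta) * m * v / hbar
hbar = 1.0546e-34                   # reduced Planck constant [J s]
m_ne = 20.18 * 1.6605e-27           # mass of a Ne atom [kg]
theta = math.radians(1.0)           # example grazing angle of 1 degree
k = math.sin(theta) * m_ne * v / hbar
print(f"v = {v:.2f} m/s, k = {k:.3e} 1/m")
```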
Although the scheme in the figure looks simple, an extensive facility is necessary to slow the atoms, trap them and cool them to millikelvin temperatures, providing a micrometre-sized source of cold atoms. In practice, mounting and maintaining this facility (not shown in the figure) is the hardest part of experiments on quantum reflection of cold atoms. The possibility of an experiment on quantum reflection with just a pinhole instead of a MOT is discussed in the literature.[6]
## Casimir and van der Waals attraction
Despite this, there is some doubt as to the physical origin of quantum reflection from solid surfaces. As was briefly mentioned above, the potential in the intermediate region between the regions dominated by the Casimir-Polder and van der Waals interactions requires an explicit quantum-electrodynamical calculation for the particular state and type of atom incident on the surface. Such a calculation is very difficult. Indeed, there is no reason to suppose that this potential is solely attractive within the intermediate region. Thus the reflection could simply be explained by a repulsive force, which would make the phenomenon not quite so surprising. Furthermore, a similar dependence of reflectivity on the incident velocity is observed in the case of the adsorption of particles in the vicinity of a surface. In the simplest case, such absorption could be described with a non-Hermitian potential (i.e. one where probability is not conserved). Until 2006, the published papers interpreted the reflection in terms of a Hermitian potential;[7] this assumption allows one to build a quantitative theory.[8]
## Efficient quantum reflection
Fig.1. Approximation $r=\frac{1}{(1+kw)^4}$, compared to experimental data.
A qualitative estimate for the efficiency of quantum reflection can be made using dimensional analysis. Letting $m$ be the mass of the atom and $k=2\pi/\lambda$ the normal component of its wave-vector, the energy of the normal motion of the particle,
$E=\frac{(\hbar k)^2}{2m}$
should be compared to the potential of interaction, $V(x)$. The distance $|x_{t}|$ at which $E=V(x)$ can be considered as the distance at which the atom encounters a troublesome discontinuity in the potential; this is the point at which the WKB approximation breaks down. The condition for efficient quantum reflection can be written as $k|x_{t}|<1$. In other words, the wavelength is small compared to the distance at which the atom may become reflected from the surface. If this condition holds, the aforementioned effect of the discrete character of the surface may be neglected. This argument produces a simple estimate for the reflectivity, $r$,
$r=\frac{1}{(1+k|x_{t}|)^4}$
which shows good agreement with experimental data for excited neon and helium atoms reflected from a flat silicon surface (fig. 1); see [6] and references therein. Such a fit is also in good agreement with a single-dimensional analysis of the scattering of atoms from an attractive potential.[9] This agreement indicates that, at least in the case of noble gases and a Si surface, quantum reflection can be described with a single-dimensional Hermitian potential, as the result of the attraction of atoms to the surface.
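The estimate is easy to evaluate; the sketch below (illustrative only, with an arbitrary set of $k|x_{t}|$ values) shows how quickly the reflectivity falls once $k|x_{t}|$ approaches and exceeds 1:

```python
# Reflectivity estimate r = 1/(1 + k|x_t|)^4 from the dimensional argument above
def reflectivity(kxt):
    return 1.0 / (1.0 + kxt) ** 4

for kxt in (0.1, 0.5, 1.0, 2.0, 5.0):
    print(f"k|x_t| = {kxt:>4}: r = {reflectivity(kxt):.3f}")
```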
## Ridged mirror
Fig.2. The ridges may enhance the quantum reflection
The effect of quantum reflection can be enhanced using ridged mirrors.[10] If one produces a surface consisting of a set of narrow ridges, the resulting non-uniformity of the material allows the reduction of the effective van der Waals constant; this extends the working range of the grazing angle. For this reduction to be valid, the distance $L$ between the ridges must be small. Where $L$ becomes large, the non-uniformity is such that the ridged mirror must be interpreted in terms of multiple Fresnel diffraction[4] or the Zeno effect;[5] these interpretations give similar estimates for the reflectivity.[11] See ridged mirror for details.
Similar enhancement of quantum reflection takes place where one has particles incident on an array of pillars.[12] This was observed with very slow atoms (Bose–Einstein condensate) at almost normal incidence.
## Application of quantum reflection
Quantum reflection makes the idea of solid-state atomic mirrors and atomic-beam imaging systems (atomic nanoscope) possible.[6] The use of quantum reflection in the production of atomic traps has also been suggested.[9] Up to 2007, no commercial application of quantum reflection had been reported.
## References
1. ^ Quantum Reflection, workshop; October 22–24, 2007, Cambridge, Massachusetts, USA; http://cfa-www.harvard.edu/itamp/QuantumReflection.html
2. ^ F.Shimizu (2001). "Specular Reflection of Very Slow Metastable Neon Atoms from a Solid Surface". Physical Review Letters 86 (6): 987–990. Bibcode:2001PhRvL..86..987S. doi:10.1103/PhysRevLett.86.987. PMID 11177991.
3. ^ H.Oberst; Y.Tashiro, K.Shimizu, F.Shimizu (2005). "Quantum reflection of He* on silicon". Physical Review A 71 (5): 052901. Bibcode:2005PhRvA..71e2901O. doi:10.1103/PhysRevA.71.052901.
4. ^ a b H.Oberst; D.Kouznetsov, K.Shimizu, J.Fujita, and F.Shimizu (2005). "Fresnel Diffraction Mirror for an Atomic Wave". Physical Review Letters 94: 013203. Bibcode:2005PhRvL..94a3203O. doi:10.1103/PhysRevLett.94.013203.
5. ^ a b D.Kouznetsov; H.Oberst (2005). "Reflection of Waves from a Ridged Surface and the Zeno Effect". Optical Review 12 (5): 1605–1623. Bibcode:2005OptRv..12..363K. doi:10.1007/s10043-005-0363-9.
6. ^ a b c d D.Kouznetsov; H.Oberst, K.Shimizu, A.Neumann, Y.Kuznetsova, J.-F.Bisson, K.Ueda, S.R.J.Brueck (2006). "Ridged atomic mirrors and atomic nanoscope". Journal of Physics B 39 (7): 1605–1623. Bibcode:2006JPhB...39.1605K. doi:10.1088/0953-4075/39/7/005.
7. ^ H.Friedrich; G.Jacoby, C.G.Meister (2002). "quantum reflection by Casimir–van der Waals potential tails". Physical Review A 65 (3): 032902. Bibcode:2002PhRvA..65c2902F. doi:10.1103/PhysRevA.65.032902.
8. ^ F.Arnecke; H.Friedrich, J.Madroñero (2006). "Effective-range theory for quantum reflection amplitudes". Physical Review A 74 (6): 062702. Bibcode:2006PhRvA..74f2702A. doi:10.1103/PhysRevA.74.062702.
9. ^ a b J.Madroñero; H.Friedrich (2007). "Influence of realistic atom wall potentials in quantum reflection traps". Physical Review A 75 (2): 022902. Bibcode:2007PhRvA..75b2902M. doi:10.1103/PhysRevA.75.022902.
10. ^ F.Shimizu; J. Fujita (2002). "Giant Quantum Reflection of Neon Atoms from a Ridged Silicon Surface". Journal of the Physical Society of Japan 71: 5–8. arXiv:physics/0111115. Bibcode:2002JPSJ...71....5S. doi:10.1143/JPSJ.71.5.
11. ^ D.Kouznetsov; H.Oberst (2005). "Scattering of waves at ridged mirrors" (PDF). Physical Review A 72: 013617. Bibcode:2005PhRvA..72a3617K. doi:10.1103/PhysRevA.72.013617.
12. ^ T.A.Pasquini; M.Saba, G.-B.Jo, Y.Shin, W.Ketterle, D.E.Pritchard, T.A.Savas, N. Mulders. (2006). "Low Velocity Quantum Reflection of Bose-Einstein Condensate". Physical Review Letters 97 (9): 093201. arXiv:cond-mat/0603463. Bibcode:2006PhRvL..97i3201P. doi:10.1103/PhysRevLett.97.093201.
http://mathhelpforum.com/calculus/191995-antiderivative-w-tan.html | # Math Help - antiderivative w/ tan
1. ## antiderivative w/ tan
Having trouble with this
find the antiderivative of
$\frac{1}{81z^2+16}$
2. ## Re: antiderivative w/ tan
We have something of the form $\frac{1}{a^2+x^2}$ which is a standard form that you can directly integrate.
If you don't want to do that, then we're going to have to do an icky substitution.
Let $z=\frac{4}{9}tan(x)$
Then $dz=\frac{4}{9}sec^2{x}~dx$
$I=\int\frac{1}{81z^2+16}dz$
$=\int\frac{\frac{4}{9}\sec^2{x}}{81(\frac{16}{81}\tan^2(x))+16}dx$
I think it should work out. Good luck!
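For reference, carrying the substitution through (a worked completion added here, not part of the original thread): with $z=\frac{4}{9}\tan x$ the denominator becomes $81\cdot\frac{16}{81}\tan^2 x+16=16\sec^2 x$, so

$I=\int\frac{\frac{4}{9}\sec^2 x}{16\sec^2 x}\,dx=\frac{1}{36}x+C=\frac{1}{36}\arctan\left(\frac{9z}{4}\right)+C,$

which can be checked by differentiating.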
http://en.wikipedia.org/wiki/Bispherical_coordinates | # Bispherical coordinates
Illustration of bispherical coordinates, which are obtained by rotating a two-dimensional bipolar coordinate system about the axis joining its two foci. The foci are located at distance 1 from the vertical z-axis. The red self-intersecting torus is the σ=45° isosurface, the blue sphere is the τ=0.5 isosurface, and the yellow half-plane is the φ=60° isosurface. The green half-plane marks the x-z plane, from which φ is measured. The black point is located at the intersection of the red, blue and yellow isosurfaces, at Cartesian coordinates roughly (0.841, -1.456, 1.239).
Bispherical coordinates are a three-dimensional orthogonal coordinate system that results from rotating the two-dimensional bipolar coordinate system about the axis that connects the two foci. Thus, the two foci $F_{1}$ and $F_{2}$ in bipolar coordinates remain points (on the $z$-axis, the axis of rotation) in the bispherical coordinate system.
## Definition
The most common definition of bispherical coordinates $(\sigma, \tau, \phi)$ is
$x = a \ \frac{\sin \sigma}{\cosh \tau - \cos \sigma} \cos \phi$
$y = a \ \frac{\sin \sigma}{\cosh \tau - \cos \sigma} \sin \phi$
$z = a \ \frac{\sinh \tau}{\cosh \tau - \cos \sigma}$
where the $\sigma$ coordinate of a point $P$ equals the angle $F_{1} P F_{2}$ and the $\tau$ coordinate equals the natural logarithm of the ratio of the distances $d_{1}$ and $d_{2}$ to the foci
$\tau = \ln \frac{d_{1}}{d_{2}}$
### Coordinate surfaces
Surfaces of constant $\sigma$ correspond to intersecting tori of different radii
$z^{2} + \left( \sqrt{x^2 + y^2} - a \cot \sigma \right)^2 = \frac{a^2}{\sin^2 \sigma}$
that all pass through the foci but are not concentric. The surfaces of constant $\tau$ are non-intersecting spheres of different radii
$\left( x^2 + y^2 \right) + \left( z - a \coth \tau \right)^2 = \frac{a^2}{\sinh^2 \tau}$
that surround the foci. The centers of the constant-$\tau$ spheres lie along the $z$-axis, whereas the constant-$\sigma$ tori are centered in the $xy$ plane.
### Inverse formulae
The formulae for the inverse transformation are:
$\sigma = \arccos((R^2-a^2)/Q)$
$\tau = \operatorname{arsinh}(2 a z/Q)$
$\phi = \operatorname{atan}(y/x)$
where $R=\sqrt{x^2+y^2+z^2}$ and $Q=\sqrt{(R^2+a^2)^2-(2 a z)^2}.$
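A quick numerical sanity check of these forward and inverse formulae (an illustrative sketch; the focal parameter $a$ and the test point are arbitrary choices, not from any reference):

```python
import numpy as np

a = 1.0  # focal parameter (arbitrary choice for this check)

def to_cartesian(sigma, tau, phi):
    """Forward map (sigma, tau, phi) -> (x, y, z) from the definitions above."""
    d = np.cosh(tau) - np.cos(sigma)
    return (a * np.sin(sigma) * np.cos(phi) / d,
            a * np.sin(sigma) * np.sin(phi) / d,
            a * np.sinh(tau) / d)

def to_bispherical(x, y, z):
    """Inverse map (x, y, z) -> (sigma, tau, phi) using the formulae above."""
    R2 = x*x + y*y + z*z
    Q = np.sqrt((R2 + a*a)**2 - (2*a*z)**2)
    sigma = np.arccos((R2 - a*a) / Q)
    tau = np.arcsinh(2*a*z / Q)
    phi = np.arctan2(y, x)   # arctan2 handles the quadrant; the article writes atan(y/x)
    return sigma, tau, phi

# Round trip: should recover the starting coordinates.
s, t, p = 0.7, 0.4, 1.2
print(to_bispherical(*to_cartesian(s, t, p)))  # approximately (0.7, 0.4, 1.2)
```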
### Scale factors
The scale factors for the bispherical coordinates $\sigma$ and $\tau$ are equal
$h_\sigma = h_\tau = \frac{a}{\cosh \tau - \cos\sigma}$
whereas the azimuthal scale factor equals
$h_\phi = \frac{a \sin \sigma}{\cosh \tau - \cos\sigma}$
Thus, the infinitesimal volume element equals
$dV = \frac{a^3 \sin \sigma}{\left( \cosh \tau - \cos\sigma \right)^3} \, d\sigma \, d\tau \, d\phi$
and the Laplacian is given by
\begin{align} \nabla^2 \Phi = \frac{\left( \cosh \tau - \cos\sigma \right)^3}{a^2 \sin \sigma} & \left[ \frac{\partial}{\partial \sigma} \left( \frac{\sin \sigma}{\cosh \tau - \cos\sigma} \frac{\partial \Phi}{\partial \sigma} \right) \right. \\[8pt] &{} \quad + \left. \sin \sigma \frac{\partial}{\partial \tau} \left( \frac{1}{\cosh \tau - \cos\sigma} \frac{\partial \Phi}{\partial \tau} \right) + \frac{1}{\sin \sigma \left( \cosh \tau - \cos\sigma \right)} \frac{\partial^2 \Phi}{\partial \phi^2} \right] \end{align}
Other differential operators such as $\nabla \cdot \mathbf{F}$ and $\nabla \times \mathbf{F}$ can be expressed in the coordinates $(\sigma, \tau)$ by substituting the scale factors into the general formulae found in orthogonal coordinates.
## Applications
The classic applications of bispherical coordinates are in solving partial differential equations, e.g., Laplace's equation, for which bispherical coordinates allow a separation of variables. However, the Helmholtz equation is not separable in bispherical coordinates. A typical example would be the electric field surrounding two conducting spheres of different radii.
## Bibliography
• Morse PM, Feshbach H (1953). Methods of Theoretical Physics, Part I. New York: McGraw-Hill. pp. 665–666.
• Korn GA, Korn TM (1961). Mathematical Handbook for Scientists and Engineers. New York: McGraw-Hill. p. 182. LCCN 5914456.
• Zwillinger D (1992). Handbook of Integration. Boston, MA: Jones and Bartlett. p. 113. ISBN 0-86720-293-9.
• Moon PH, Spencer DE (1988). "Bispherical Coordinates (η, θ, ψ)". Field Theory Handbook, Including Coordinate Systems, Differential Equations, and Their Solutions (corrected 2nd ed., 3rd print ed.). New York: Springer Verlag. pp. 110–112 (Section IV, E4Rx). ISBN 0-387-02732-7.
https://cs.stackexchange.com/questions/89060/notation-for-field-access-in-algorithm | # Notation for field access in Algorithm
When writing an array index in an algorithm, $a[i]\gets v$ is conventionally used. Is the notation $a_{i}\gets v$ also used?
If the algorithm uses a struct or equivalent, some notation is required for property access. On some occasions $p(s)$ is used, where $p$ is the property. Using $s \rightarrow p$ clashes with the assignment arrow. Using $s_{p}$ looks like $p$ is an index and $p\in \mathbb{Z}^{+}$. Should $s["p"]$ be used? What is the convention for denoting property access in an algorithm?
Is there any reference where I can check the different notations for these two scenarios?
• Pseudocode is not a programming language. The exact notation doesn't matter as long as it's clear and concise. You might want to use dot notation for properties. – Solomonoff's Secret Mar 8 '18 at 16:31
• Yes I am asking about conventions and references. Writing something too unconventional makes things hard to read. – Neel Basu Mar 8 '18 at 16:33
• $a[i]$ and $a_i$ are commonly used. Dot notation is commonly used. C-style $s \rightarrow p$ is not, in my experience. – Solomonoff's Secret Mar 8 '18 at 16:33
• If $a$ is an array, $a[i]$ is typically used, but not $a_i$.
• If $(a_1,\dots,a_n)$ is a sequence, $a_i$ is typically used, but not $a[i]$. Usually sequences are not updateable.
• If $a$ is a string, you might see either $a[i]$ or $a_i$; both are common, in my experience.
• I have most commonly seen $s.p$ used for property access. Sometimes I have seen people use $p(s)$.
https://www.physicsforums.com/threads/equation-in-algebra-help.366609/ | # Equation in algebra help?
1. Jan 1, 2010
### jtesttubes
Help me to solve this equation in algebra
A rectangle has a length 10 mm more than its width. It has a perimeter which is more than 30 mm. We can take w as the width.
The questions are:
1. Write and use expressions to calculate the length in terms of the width.
2. Write expressions for the length and width on the basis of the given information.
3. Solve the inequality, clearly indicating the width of the rectangle.
2. Jan 2, 2010
### tiny-tim
Hi jtesttubes!
(you mean "The length is 10mm more than the width" )
Call the length x and the width w.
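Following that hint (a worked sketch added for completeness, not part of the original thread): with $x = w + 10$, the perimeter condition $2(x + w) > 30$ becomes $4w + 20 > 30$, so $w > 2.5$; the width must exceed 2.5 mm and the length $x = w + 10$ must exceed 12.5 mm.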
https://mpotd.com/395/ | ## Problem of the Day #395: DessertApril 17, 2012
Posted by Saketh in: potd
Alex just had a sumptuous dinner at his favorite restaurant. He now has $n$ different choices for dessert.
With so many good options, he decides to pick randomly using a coin. However, he wants to ensure that each dessert has an equal probability of being picked.
Devise a strategy for choosing dessert that minimizes the expected number of flips needed, and express the expected number in terms of $n$.
Bonus: Now solve the same problem in the case that Alex’s coin is weighted to land heads more often than tails.
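One simple strategy, sketched below, is rejection sampling: flip $\lceil \log_2 n \rceil$ coins to form a binary number and start over if it is $\ge n$. This gives each dessert equal probability, though it is not necessarily the flip-optimal strategy the problem asks for; its expected number of flips is $\lceil \log_2 n \rceil \cdot 2^{\lceil \log_2 n \rceil}/n$.

```python
import math
import random

def pick_dessert(n, flip=lambda: random.randint(0, 1)):
    """Choose uniformly from {0, ..., n-1} with fair coin flips (rejection sampling)."""
    bits = math.ceil(math.log2(n))
    flips = 0
    while True:
        value = 0
        for _ in range(bits):
            value = 2 * value + flip()   # build a bits-long binary number
            flips += 1
        if value < n:                    # accept only values inside the range
            return value, flips
```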
http://mathhelpforum.com/trigonometry/149173-question-cosine.html | # Thread: A question on Cosine.
1. ## A question on Cosine.
Hi all,
I didn't know how to write the formula here. So:
2. Originally Posted by Mathelogician
Hi all,
I didn't know how to write the formula here. So:
Hi,
(Edited)
The full theorem is: $\cos x=\cos a \Rightarrow x=2k\pi\pm a$ for some integer k. In other words, $\cos x=\cos a \Rightarrow \exists k\in\mathbb{Z}:(x=2k\pi+a)\lor (x=2k\pi-a)$. This is much different from $\cos x=\cos a \Rightarrow \forall k \in \mathbb{Z}:x=2k\pi\pm a$. Among other things, this would allow us to easily show that $0=2\pi$.
Regarding writing math formulae, etc., you can see the LaTeX tutorial within the LaTeX Help Subforum.
3. Thanks.
No, my assertion is right! We know that k is an element of the set of integers and that the solution is the general solution of the equation, and it means that it is true for all k!
So if we choose k to be 0, then x=+-x and we choose the case x=-x, which is one of the solutions!
---------
Even if that is wrong, when we get x=2k(pi)+-x then there are 2 cases: 1) 2x=2k*pi => x=k*pi and 2) 2k*pi=0 => k=0, though we may have chosen k to be another number besides 0; for example if k=1 => 2*pi=0
4. Originally Posted by Mathelogician
Thanks.
No, my assertion is right! We know that k is an element of the set of integers and that the solution is the general solution of the equation, and it means that it is true for all k!
So if we choose k to be 0, then x=+-x and we choose the case x=-x, which is one of the solutions!
---------
Even if that is wrong, when we get x=2k(pi)+-x then there are 2 cases: 1) 2x=2k*pi => x=k*pi and 2) 2k*pi=0 => k=0, though we may have chosen k to be another number besides 0; for example if k=1 => 2*pi=0
By your argument, $\cos(0)=\cos(2\pi)\Rightarrow \forall k\in\mathbb{Z}:(0=2k\pi+2\pi)\lor(0=2k\pi-2\pi)$, and letting k=2, we have $(0=4\pi+2\pi=6\pi)\lor(0=4\pi-2\pi=2\pi)$. This is a contradiction.
What exactly is your question?
Edit: Actually, I should have written that the full theorem is:
$\forall x,a\in\mathbb{R}:\cos x=\cos a \Rightarrow x=2k\pi\pm a$ for some integer k.
Furthermore, the converse is true. So, for the most information and least amount of characters:
$\forall x,a\in\mathbb{R}:\cos x=\cos a \Leftrightarrow\exists k\in\mathbb{Z}: x=2k\pi\pm a$.
Edit 2: I suppose it's worth mentioning that for the correct theorem, we can't just choose an arbitrary k, like you did at the end the quoted post.
Maybe it would help you to have an analogy. $\forall a,b\in\mathbb{Z},|a|+|b|\ne0:a|b\Leftrightarrow \exists k\in\mathbb{Z}:ka=b$.
5. Originally Posted by undefined
By your argument, $\cos(0)=\cos(2\pi)\Rightarrow \forall k\in\mathbb{Z}:(0=2k\pi+2\pi)\lor(0=2k\pi-2\pi)$, and letting k=2, we have $(0=4\pi+2\pi=6\pi)\lor(0=4\pi-2\pi=2\pi)$. This is a contradiction.
What exactly is your question?
Edit: Actually, I should have written that the full theorem is:
$\forall x,a\in\mathbb{R}:\cos x=\cos a \Rightarrow x=2k\pi\pm a$ for some integer k.
1) I see this is a contradiction and I want to know why it happens when we use allowable ways!!
2) And the theorem is: $\forall x,a\in\mathbb{R}, k\in\mathbb{Z}:\cos x=\cos a \Leftrightarrow x=2k\pi\pm a$. In fact we have infinitely many solutions, and for k=0 we have 2 solutions, like for other values of k.
6. Originally Posted by Mathelogician
1) I see this is a contradiction and I want to know why it happens when we use allowable ways!!
2) And the theorem is: $\forall x,a\in\mathbb{R}, k\in\mathbb{Z}:\cos x=\cos a \Leftrightarrow x=2k\pi\pm a$. In fact we have infinitely many solutions, and for k=0 we have 2 solutions, like for other values of k.
When you say $cosx = cosa$ has solutions $x = 2 \pi k \pm a$, what do you mean?
You mean that there exists such a $k \in \mathbb{N}$, for which $x = 2 \pi k + a$. It does not mean that $x = 2 \pi k \pm a$ for all $k \in \mathbb{N}$!
Take, for example, a quadratic equation - say $x^2 - x - 2 = 0$. We know, then, its solutions are $x_{1, 2} = \frac{1 \pm \sqrt{1 + 8}}{2} \Rightarrow x_1 = 2, \ x_2 = -1$
What does this mean? It means that if you have a number, say $a$, for which $a^2 - a - 2 = 0$, then either $a = 2$ or $a = -1$. It does not mean that $a = 2 = -1$!
The case for $cosx = cosa$ is exactly the same!
7. Originally Posted by Mathelogician
1) I see this is a contradiction and I want to know why it happens when we use allowable ways!!
2) And the theorem is: $\forall x,a\in\mathbb{R}, k\in\mathbb{Z}:\cos x=\cos a \Leftrightarrow x=2k\pi\pm a$. In fact we have infinitely many solutions, and for k=0 we have 2 solutions, like for other values of k.
1) You've heard of proof by contradiction, right? What I gave was a proof that your claim is false. You have not used "allowable ways."
2) This is false, as was already proven.
In general: You are using "proof by vehement assertion." You are not presenting an actual proof, you are just stating emphatically that your claim is true, over and over.
See Defunkt's post which goes along with everything I've been saying.
8. Indeed, the location of quantifiers in logic ("there exists", or, "for all") is extremely important, as demonstrated above. If a statement looks the same, but has those quantifiers in different locations, then it's not necessarily the same statement.
Ok, why do you make things more complicated than they are?
$x = 2k\pi \pm a$ for cos(a)=cos(x).
cos(-x)=cos(x) because cosine is an even function.
Now this works for every real x.
10. Mathelogician: what are you trying to do? I'm just taking a step back here. What is your goal?
11. Hello and Thanks for responses.
------------------------------
Dear Defunkt and other friends, I mean that it's true for all integers.
For example: cos(x)=cos(2*pi+x)=cos(4*pi+x)=cos(6*pi+x)=cos(8*pi+x)=cos(10*pi+x)=cos(12*pi+x)=cos(14*pi+x)=cos(16*pi+x)=cos(18*pi+x)=cos(20*pi+x)=cos(22*pi+x)=cos(24*pi+x)=cos(26*pi+x)=cos(28*pi+x)=cos(30*pi+x)=... {also for negative numbers}
If you use the unit circle you will understand my assertion. In fact the original period of the cosine function is P=2*pi (like the sine function). If a function f is periodic with period P, then for all x in the domain of f and all integers n, f(x + nP) = f(x).
See: Periodic function - Wikipedia, the free encyclopedia
And the quadratic equation, like any other polynomial, IS NOT periodic.
So I think my assertion is reasonable and yours is not!
12. I would agree with your claim, mathelogician. It is true that the sin and cos functions are $2\pi$-periodic. So, $\cos(x)=\cos(x+2\pi k)\;\forall\,k\in\mathbb{Z}$, and $\forall\,x\in\mathbb{R}$. However, reasoning backwards to any sort of equality of the x's is incorrect. Example:
$\frac{1}{2}=\cos\left(\frac{\pi}{3}\right)=\cos\left(-\frac{\pi}{3}\right)$.
But, obviously, $\frac{\pi}{3}-\left(-\frac{\pi}{3}\right)=\frac{2\pi}{3}\not=2\pi k$ for any integer $k$.
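A quick numeric illustration of this counterexample (a sketch added here, not part of the original thread):

```python
import math

a, x = math.pi / 3, -math.pi / 3
# Equal cosines...
print(math.isclose(math.cos(x), math.cos(a)))                          # True
# ...but x - a is not an integer multiple of 2*pi
print(any(math.isclose(x - a, 2 * k * math.pi) for k in range(-5, 6))) # False
```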
The cosine and the sine functions are not 1-1; hence, reasoning from equality of the functions to equality of the arguments is not permissible. Reasoning from equality of arguments to equality of functions is permissible, since the cosine and sine functions are well-defined.
So, mathelogician, I would say that your original claim in the OP is incorrect. Your original claim was that IF $\cos(x)=\cos(a)$ THEN $x=2\pi k\pm a$. But I've just shown you a counterexample to that claim. The converse of that claim, that IF $x=2\pi k\pm a$ THEN $\cos(x)=\cos(a)$, is true. A statement is not, in general, equal to its converse!
But, all of this could well be irrelevant. I'm still left wondering what it is you're trying to do.
13. Thanks.
I think i got the Mistake!
1) When we speak about an equation, then we must have an unknown number (called x) and we want to find all possible values for x. So my mistake was forgetting this important issue!!
2) Then you should note that my claim, which is the expression and its converse, is ALWAYS true for trig EQUATIONS.
In fact there we have an unknown number x, and the general solution is the set of all possible values for it!
There are different proofs for this claim. For example a geometric proof exists for it (if you need, I will write it here).
And you can find this solving method in almost every trigonometry book (tell me if you need it).
--------
Why do you insist on asking my goal of questioning??!!!
14. Your Original Claim: if $\cos(x)=\cos(a)$, then $x=2\pi k\pm a$ for all k. I would love to see a proof of this false claim. Here is my proof that it is false:
Let $a=\pi/3$. Then $\cos(-\pi/3)=\cos(\pi/3)$, and yet it is not true that $-\pi/3=2\pi k\pm\pi/3$ for any integer $k$. Therefore, the implication in the claim is false.
15. Originally Posted by Mathelogician
1) When we speak about an equation, then we must have an unknown number (called x) an we want to find all Possible values for x.
I don't know where you get your definitions. 5=5 is an equation. And clearly in the equation 5=5 there is no unknown.
Originally Posted by Mathelogician
2) Then you should note that my claim wich is the expression and its converse, is ALWAYS true for Trig EQUATIONS.
Just to be perfectly clear that we are using the same language: for a statement
p -> q
the converse is
q -> p
Ackbeet very clearly explained that using "for all integers k" only one direction is true, while the other is false. Please try to understand this. Using the symbol $\displaystyle \Leftrightarrow$ is to claim that both directions are true.
I will not post on this thread anymore (except to correct any mistake I may have made) because I feel the discussion is becoming unproductive, just saying the same thing over and over. Hope all this makes sense to the OP. Cheers.
https://owenduffy.net/blog/?p=15303 | # Transmission lines: departure from ideal Zo
The article On the concept of that P=Pfwd-Prev discussed the question of the validity of the concept of that P=Pfwd-Prev, exploring an example of a common nominally 50Ω coaxial cable at 100kHz. The relatively low frequency was used to accentuate the departure from ideal.
This article digs a little further with analyses at both 100kHz and 10MHz.
## 100kHz
A plot was given of the components and sum of terms of the expression for power at a point along the line.
Let's look at the power calculated from voltages and currents for the example at 100kHz, where Zo=50.71-j8.35Ω and Zload=5+j50Ω.
Above, the four component terms are plotted along with the sum of the terms.
Term1 is often known as Pfwd and -Term4 is often known as Prev, and when Zo is real, Term2=-Term3 and they cancel, and in that circumstance P=Pfwd-Prev.
These are calculated using the actual value of Zo, Zload and propagation constant.
Above is a plot of impedance along the line.
We can use the impedance along the line to calculate the expected result if measurements were made along the line with an instrument calibrated for Zref=50+j0Ω. We will obtain different values for Γ and ρ, as they relate not to the actual line but to the Zref in use.
Above is a plot of actual ρ on the line, and ρ wrt 50+j0Ω (ρ50). You will note that ρ is a smooth exponential curve as determined by the line attenuation, whereas ρ50 varies cyclically and seems inconsistent with expected behavior of a transmission line.
Because ρ50 varies in this way, so will VSWR50 and ReturnLoss50. All of these metrics are of very limited value because Zref is so different to Zo.
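As a numerical illustration (a sketch, not from the original analysis; it uses the ordinary voltage reflection coefficient Γ=(Zl−Zref)/(Zl+Zref) at the load only), ρ computed against the actual complex Zo differs from ρ50 computed against 50+j0Ω, and with a complex Zo it may even exceed 1 for a passive load:

```python
# Sketch: reflection coefficient of the example load against two references.
# Zo and Zload are the 100 kHz figures quoted above.
def rho(zl, zref):
    """Magnitude of the voltage reflection coefficient (Zl - Zref)/(Zl + Zref)."""
    gamma = (zl - zref) / (zl + zref)
    return abs(gamma)

zo = 50.71 - 8.35j          # actual line Zo at 100 kHz
zload = 5 + 50j             # load impedance
print(rho(zload, zo))       # rho wrt actual Zo (may exceed 1 when Zo is complex)
print(rho(zload, 50 + 0j))  # rho50 wrt the nominal 50 ohm reference
```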
We can calculate the expected reading of ‘Directional’ Power (as would be displayed on a directional wattmeter).
Above, the blue line is the actual power along the line and it varies cyclically because for this line, under standing waves, more power is lost per unit length in regions of high current than in those of high voltage.
An important attribute is that where Zref is real:
• Pfwd and Prev are each meaningful if Zref=Zo; and
• where Zref is not equal to Zo, Pfwd and Prev each are of no stand alone relevance to the actual line, but P does equal Pfwd-Prev.
## 10MHz
Let’s plot the components and sum of terms of the expression for power at a point along the line.
Let's look at the power calculated from voltages and currents for the example at 10MHz, where Zo=50.01-j0.8025Ω and Zload=5+j50Ω.
Above, the four component terms are plotted along with the sum of the terms.
Term1 is often known as Pfwd and -Term4 is often known as Prev, and when Zo is real, Term2=-Term3 and they cancel, and in that circumstance P=Pfwd-Prev.
These are calculated using the actual value of Zo, Zload and propagation constant.
Above is a plot of impedance along the line.
We can use the impedance along the line to calculate the expected result if measurements were made along the line with an instrument calibrated for Zref=50+j0Ω. We will obtain different values for Γ and ρ, as they relate not to the actual line but to the Zref in use.
Above is a plot of actual ρ on the line, and ρ wrt 50+j0Ω (ρ50). You will note that ρ is a smooth exponential curve as determined by the line attenuation, whereas ρ50 varies cyclically and seems inconsistent with expected behavior of a transmission line.
Because ρ50 varies in this way, so will VSWR50 and ReturnLoss50. All of these metrics are of somewhat limited value because Zref is a little different to Zo.
We can calculate the expected reading of ‘Directional’ Power (as would be displayed on a directional wattmeter).
Above, the blue line is the actual power along the line and it varies cyclically because for this line, under standing waves, more power is lost per unit length in regions of high current than in those of high voltage.
An important attribute is that where Zref is real:
• Pfwd and Prev are each meaningful if Zref=Zo; and
• where Zref is not equal to Zo, Pfwd and Prev each are of no stand alone relevance to the actual line, but P does equal Pfwd-Prev.
## Conclusions
Whilst it is convenient to treat Zo of practical transmission lines as a purely real quantity, it isn’t and the error may be significant.
The departure from ideal Zo is typically worst at lower frequencies, and may be very small, perhaps insignificantly so above 100MHz.
https://www.springerprofessional.de/en/angular-segregation-of-fibres-in-pipe-flow-floc-formation-and-ut/18146980?fulltextView=true | main-content
04-07-2020 | Original Research | Issue 13/2020 Open Access
# Angular segregation of fibres in pipe flow: floc formation and utilization for length-based fibre separation
Journal:
Cellulose > Issue 13/2020
Author:
Jakob D. Redlinger-Pohn
Important notes
## Electronic supplementary material
The online version of this article (https://doi.org/10.1007/s10570-020-03290-8) contains supplementary material, which is available to authorized users.
## Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Introduction
Highly elongated particles, for example, cellulose fibres, flocculate and form a porous network from elastic interlocking (Kerekes et al. 1985; Soszynski 1987; Soszynski and Kerekes 1988a, b): cohesion from fibre–fibre friction exceeds dispersion, for example from hydrodynamic stress. Whilst the network of flocculated fibres can withstand dispersion at moderate stress, it can undergo deformation (Meyer 1964). The fibre network in suspension flow through straight pipes was found consolidating towards the centre, which yields a near-wall annulus free of fibres. Steenberg and Wahren ( 1960) found that this annulus contains small particles, i.e. fibre-fines, that are not coherently incorporated in the fibre network. This describes the case of radial fibre segregation by the fibre length. An illustration is given in Fig. 1 first row, with long fibres consolidated to a radius r Fibre smaller than the pipe radius R Pipe. Fibre-fines (i.e. particles smaller than 200 µm (ISO 16,065–2)) are suspended in the pipe cross-section. Redlinger-Pohn et al. ( 2017a, b) successfully utilized the flow situation for fibre-length fractionation. In a process termed hydrodynamic fractionation, they removed the near-wall suspension essentially free of long fibres by side-channels and hence separated fibre-fines exclusively.
Axial segregation, illustrated in Fig. 1 second row, is another example of length-based fibre segregation from fibre crowding and network deformation. In principle, it describes the flow of long and short fibres (or fibre-fines) in consecutive compartments, documented by Johansson and Kubát ( 1956) for fibre suspension slug flow. Within the liquid slug, a secondary motion, that is parallel to the flow direction, arises. The secondary motion is in flow direction at the pipe centre and counter flow direction at the pipe wall. Suspended material is transported to the slug front, i.e. after the bubble, where fibres flocculate and form a plug. Fibre-fines are mixed in the liquid slug and hence tail the long fibres in the transport direction. The principle was utilized commercially for industrial length-based fibre fractionation, for example in the Johnson fractionator (Olgard 1970; Olgard and Axenfalk 1972).
Following the nomenclature of cylindrical coordinates, the question arises on the existence of angular segregation, i.e. segregation and preferential accumulation of fibres in a circular segment of the tube cross-section, i.e. φ Fibre < 2π, illustrated in Fig. 1 third row. In fact, angular segregation of long fibres was hinted at by Redlinger-Pohn et al. ( 2016a; b) as being the leading mechanism in tube flow fractionation. Tube flow fractionation is a commercial preparation technique of cellulose pulp for analytics from Metso (Metso Automation Oy, Finland, (Niinimäki et al. 2007)). The fractionator was introduced by Silvy and Pascal as an auxiliary unit for sample preparation as a "tube of a few hundred meters long which may be rolled up to avoid bulkiness" (Silvy and Pascal 1990). A fibre pulp sample injected into a flow through the coiled tube separated the fibres by their length: long fibres exit the tube first, followed by short fibres and fibre-fines (Laitinen et al. 2006, 2011). That separation result is comparable to axial segregation (compare to Fig. 1 second row, Johansson and Kubát ( 1956)), which led to confusion on the separation mechanism in the literature (Krogerus et al. 2003). The argument based on the outflow results ignores a fundamental difference between slug flow and coiled pipe flow: the secondary motion is parallel to the flow direction in slug flow (Talimi et al. 2012) and normal to the flow direction in coiled-pipe flow (Dean 1927; Vashisth et al. 2008). The cross-sectional mixing flow or secondary motion arises from a pressure imbalance between the outer and inner bend of the coiled pipe resulting from centrifugal forces. In addition, the velocity maximum is deflected from the pipe centre to the coiled-pipe outer bend. The secondary motion is known to (i) re-suspend non-flocculating particles (Koutsky and Adler 1964; Palazoglu and Sandeep 2004), (ii) enhance mixing (Naphon and Wongwises 2006) and (iii) promote colloidal flocculation of polymers (Carissimi et al. 2018). Redlinger-Pohn et al. ( 2016a; b) compared numerical results of individual fibres transported in the coiled pipe flow to experiments with cellulose fibre samples and argued from the differences that long fibres in coiled pipe flow flocculate. The observed length-separation at the outflow was attributed to differences in the networking capabilities of long and short fibres. The separation quality of 0.3 mm and 4 mm fibres was tested for concentrations yielding non-coherent and coherent flocs. The separation was unsatisfactory for a fibre sample yielding coherent flocs. Further, they documented a decreasing residence time of fibre length fractions with increasing fibre concentration C (cohesion) and decreasing Reynolds number Re (dispersion). That agrees with a recommendation in Krogerus et al. ( 2003) to operate the tube flow fractionator with concentrations below the threshold for coherent flocs. An indication of angular segregation was found by Jagiello et al. ( 2016), who split the coiled tube outflow into an inner and an outer bend flow. The outer bend flow mean fibre length l 1 was larger compared to the inner bend flow, with a negative dependence of the length difference on the Reynolds number Re. But doubt remained on the segregation mechanism for two reasons: (i) an opaque tube prevented visual observation, and (ii) the earlier mentioned confusion in the literature.
In view of the increased demand for separated fibre and fibre-fines streams for the production of paper and board with dedicatedly set and tuned properties (Odabas et al. 2016; Mayr et al. 2017; Fischer et al. 2017; Bossu et al. 2019), angular segregation and its potential for length-based fibre separation should be re-addressed. Commonly applied pressure screening relies on the individual treatment of each particle, i.e. fibre or fibre-fines, and invests energy to disperse the networked fibres (an overview of pressure screening is for example given by Jokinen ( 2007)). Processes that can utilize fibre flocculation and fibre networking avoid the need for this extra energy (Olgard and Axenfalk 1972; Redlinger-Pohn et al. 2017a, b; Schmid et al. 2019).
In this paper I will provide visual evidence of angular segregation from coiled suspension flow. This insight and understanding are then transferred into a prototypical separation process. The results are promising and present a simple and straight-forward way of separating fibre pulp by the fibre length.
The paper is organized as follows. In Section 2 I describe the setting for the visual observation and fibre pulp separation. In Section 3 I present the results from both studies. In Section 4 I elucidate the fibre segregation in coiled suspension flow. In Section 5 I conclude the study. The Appendix adds further comments on the observed fibre segregation and contains scaling analysis mentioned in the discussion. The Supplement Material shows the high-speed recordings of flocculating fibres in coiled tube flow.
## Materials and methods
The choice of fibre was made with a view to the optical investigation. Dyed fibres have a larger contrast and are less transparent. For simplicity and from availability, mono-coloured napkin tissue paper (Aro Napkins, Metro wholesaler shop brand) was re-slushed after soaking in water for a minimum of 24 h. Availability (in the institute's social room) led to the use of two types of coloured napkins: blue and red. The properties of the fibre pulp from re-slushed napkins were determined with a L&W Fibre Tester Plus (ABB, Sweden). The mean properties are listed in Table 1. l 1 is the length-weighted and l 3 is the volume-weighted mean fibre-length.
Table 1: Length-weighted (l1) and volume-weighted (l3) mean fibre length and average fibre coarseness cs as measured with the L&W Fibre Tester Plus (ABB, Sweden)

|      | l1 [mm] | l3 [mm] | cs [mg/m] |
|------|---------|---------|-----------|
| Blue | 1.901   | 2.061   | 0.177     |
| Red  | 1.553   | 1.783   | 0.118     |
The fibre interaction is characterized by the crowding number N CW (Kerekes and Schell 1992):
$$N_{CW} = 5\,\,\left[ {\frac{kg}{{m^{3} }}} \right]\frac{{C\,l_{1}^{2} }}{cs}$$
(1)
cs is the average line density of the fibre (coarseness in pulp and paper nomenclature) and C the mass-based fibre concentration in [%]. Kerekes and Schell ( 1992) have shown that coherent networks form for N CW ≥ 60 (for a length-to-diameter aspect ratio AR > 20). Between 16 < N CW < 60, fibres aggregate and form flocs that are more easily dispersible (Kerekes 2006). For N CW < 16, fibres interact without flocculation. The concentration in the experiments corresponded to N CW ranging from 4 to 20. That is within the optimum of N CW < 60 (Krogerus et al. 2003) and non-coherent flocculation can be assumed.
The experimental setup, sketched in Fig. 2, followed work of Redlinger-Pohn et al. ( 2016a) and Jagiello et al. ( 2016) and consisted of a transparent tube, D Tube = 16 mm, coiled at a diameter of D Coil = 400 mm. The curvature, $$\kappa = D_{Tube} / D_{Coil}$$, calculates to 0.04. The coordinate h runs along the coil radius R Coil from the tube inner bend to the tube outer bend.
Greyscale images of the suspension flow were recorded after 1.5 coils from the inlet (0.25 coils from the outlet) at a frequency of 500 Hz. Low transmission and higher grey value correspond to higher fibre concentration. The light absorption is expressed as time-averaged relative light absorption RLA and calculated as the grey image complement with respect to the maximum of 255. The RLA was calculated as mean over 200 recorded images along h. The recorded high-speed videos are presented in the Supplement Material.
The suspension flow was split into an inner and outer bend flow after 1.75 coils from the inlet. The prototypical flow splitter was 3D printed in-house. The design was guided by results from the optical investigation. Details are hence presented in the results section.
A typical metrics to report the condition of a coiled flow is the Dean number Da (see for example (Dean 1927; Itō 1959; Naphon and Wongwises 2006; Vashisth et al. 2008)):
$$Da = {\text{Re}} \sqrt \kappa ,{\text{with}}$$
(2)
$${\text{Re}} = \frac{{U_{Bulk} D_{Tube} }}{\nu }$$
(3)
which characterizes the effect of inertial, viscous, and centrifugal forces on the flow. Re is the Reynolds number, ν the kinematic viscosity, and U Bulk the average stream-wise velocity. Canton et al. ( 2017) however showed in a recent and dedicated work that different flow conditions exist for the same Da but different combinations of Re and κ. The experimental settings in this work are hence reported with their Reynolds number Re and curvature κ.
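The following sketch (illustrative only; the concentration C and bulk velocity U_Bulk are example values, not the exact experimental settings) evaluates Eqs. 1–3 for the geometry and fibre data given above:

```python
import math

# Crowding number, Reynolds number and Dean number for the set-up described above.
# Fibre data: "Blue" sample from Table 1; C and U_bulk are assumed example values.
l1 = 1.901e-3      # length-weighted mean fibre length [m]
cs = 0.177e-6      # coarseness [kg/m]  (0.177 mg/m)
C = 0.2            # mass-based fibre concentration [%] (example value)

N_cw = 5.0 * C * l1**2 / cs          # Eq. (1), prefactor 5 kg/m^3
print(f"N_CW = {N_cw:.1f}")

D_tube, D_coil = 0.016, 0.400        # [m]
kappa = D_tube / D_coil              # curvature, = 0.04
nu = 1e-6                            # kinematic viscosity of water [m^2/s]
U_bulk = 1.2                         # average stream-wise velocity [m/s] (example value)

Re = U_bulk * D_tube / nu            # Eq. (3)
Da = Re * math.sqrt(kappa)           # Eq. (2)
print(f"Re = {Re:.0f}, Da = {Da:.0f}")
```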
## Results
### Observation of fibre crowding in coiled suspension flow
Figure 3 provides a visual of the fibre suspension motion in coiled suspension flow observed from a top view. The case fibre concentration C, corresponding crowding number N CW, and Reynolds number Re (based on the mean flow rate, D Tube, and the viscosity of water as a reference, ν = 10⁻⁶ m²/s) are stated for each case. N CW is calculated according to Eq. 1 with values for Blue (Table 1).
Recordings in Fig. 3 capture fibre segregation to the outer tube bend and the formation of flocs already at a low crowding number of N CW = 8. The fibre crowding benefits from the fibre concentration C and suffers from increased hydrodynamic stress, i.e. higher Reynolds number Re. The time-averaged RLA profile along h (from the inner to outer tube bend) is shown in Fig. 4, at a position, indicated in Fig. 3B. The error bar is 5 times the standard deviation of the RLA in time and hence an indicator of the local fibre crowding. RLA standard deviation is high for large temporal (or in stream-wise direction) differences in the local fibre concentration.
At N CW = 4 (Fig. 4a) RLA was nearly constant with a slim increase towards the outer bend for Re = 9000. The fibres were well distributed in the tube. At N CW = 8 (Fig. 4b) RLA increases between h = 0.2 to h = 0.7 from the inner bend to the outer bend at Re = 9000. Fibres were segregated to the outer bend. The difference between outer and inner bend was significantly lower at Re = 19,000. That documents a dispersion of fibre aggregates at increased hydrodynamic stress in the coiled tube flow. At N CW = 18 (Fig. 4c) RLA was significantly higher at the outer bend for both Re cases. The inner bend area was however over-exposed, and a small concentration of fibres may have been concealed by the bright background. Noteworthy is a subtle difference from increased Re at N CW = 18 (Fig. 4c): RLA at the outer bend was slightly decreased. Also in Fig. 3f bright spots that indicate fibre depletion can be identified what differs from the situation at lower Re (Fig. 3e). This difference will be addressed in the discussion. The RLA standard deviation is low for N CW = 4 (Fig. 4a). The fibres were well-dispersed also in stream-wise direction and no flocculation of significance was observed (Fig. 3a, b). The RLA standard deviation is also low for N CW = 18 (Fig. 4c). Fibres were observed flocculated and segregated to the outer bend. The local fibre concentration with time (or in stream-wise direction) was low. An exception is the outer bend area at Re = 19,000: RLA is not only decreased but also varying in time. The RLA standard deviation is comparatively high for N CW = 8 (Fig. 4b). At Re = 9000, fibres were observed aggregating into distinguishable flocs (Fig. 3c) with an increasing RLA towards the outer bend (Fig. 4b, grey line). The RLA standard deviation is large at the inner bend which suggests dynamic local fibre flocculation and dispersion. Figure 3c documents some of the fibre aggregates segregated to the outer bend appearing stretched towards the inner bend. That will be addressed in the discussion.
Table 2
Separation results from splitting the coiled pipe flow. Δl 1,rel is the difference of the outer bend to inner bend l 1 relative to the feed l 1, Feed
| | Separation 1 | Separation 2 | Separation 3 |
|---|---|---|---|
| Figure 3 | D | F | – |
| Re [−] | 19,436 | 19,206 | 25,356 |
| N CW [−] | 11 | 21 | 20 |
| l 1,Feed [mm] | 1.575 | 1.514 | 1.473 |
| l 1,inner [mm] | 1.518 | 1.247 | 1.343 |
| l 1,outer [mm] | 1.587 | 1.582 | 1.605 |
| Δl 1 [mm] | 0.069 | 0.335 | 0.262 |
| Δl 1,rel [%] | 4 | 22 | 18 |
The observed fibre suspension flow through the coiled tube (videos are included in the Supplementary Material) is described in detail in the Appendix, Table 3.
### Length-based separation of fibre pulp
Observations of the suspension flow showed segregation of flocculated fibres to the outer bend, i.e. h > 0.3. The flow splitter was designed accordingly and separated the inner bend flow from the outer bend flow at h = 1/3, i.e. at a distance of D Tube/3 from the inner bend. The prototype flow splitter, realized by 3D printing, is presented in Fig. 5. Two areas are highlighted: the splitter blade in red and an area of observed flow recirculation in orange.
A splitter blade, i.e. a sharp edge in flow direction, is known to accumulate fibres by stapling. Kerekes et al. (1985), for example, used it as a floc generator, and Eßl (2017) showed that even openings larger than the fibre length can clog from stapling at low flow speed. The cross-sectional area of the inner bend section is smaller than that of the outer bend section, and the velocity is higher at the outer bend (high velocity counters stapling, see for example Eßl (2017)). Speculatively, stapled fibres could alter the flow-split ratio and change the separation quality by being re-entrained into the outer bend or inner bend flow. The impact of the splitter blade on the fibre separation quality was not assessed in this proof-of-concept. For the inner bend outflow, the expansion led to a region of lower pressure and a recirculation which impacted the upstream flow. The test settings and average results for the separation proof-of-concept study are listed in Table 2. The first two settings correspond to case D (Fig. 3d) and case F (Fig. 3f).
The results of the separation study (Table 2) agree well with the visually observed fibre segregation (Fig. 3). Separation 1, N CW = 11 and Re = 19,436, was characterized by a homogeneous fibre distribution: the mean fibre length l 1 of the inner bend and outer bend suspensions was comparable, and the difference Δl 1,rel was only 4% of the feed l 1,Feed. Separation 2, N CW = 21 and Re = 19,206, was characterized by fibre segregation to the outer bend. The difference in mean fibre length between the outer bend and inner bend suspensions was Δl 1 = 0.335 mm, i.e. Δl 1,rel = 22% of the feed fibre length l 1,Feed. RLA (Fig. 4c) was found increased for h > 0.3. The third case, N CW = 20 and Re = 25,356, has no corresponding case in the visual observation study. Higher Re, i.e. increased fluid motion, can deform or disperse a fibre network. Here I found Δl 1 = 0.262 mm, which is Δl 1,rel = 18% of the feed fibre length l 1,Feed, i.e. a reduction in the separation quality Δl 1,rel of about 20% compared to case 2, for an increase in the treated suspension flow rate of 24%.
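The separation measures of Table 2 follow directly from the tabulated mean lengths; a short check using those values:

```python
# Check of the separation measures reported in Table 2 (values copied from the table).
cases = {
    "Separation 1": (1.575, 1.518, 1.587),
    "Separation 2": (1.514, 1.247, 1.582),
    "Separation 3": (1.473, 1.343, 1.605),
}
for name, (l_feed, l_inner, l_outer) in cases.items():
    dl = l_outer - l_inner            # spread between outer and inner bend [mm]
    dl_rel = 100.0 * dl / l_feed      # relative to the feed mean length [%]
    print(f"{name}: dl = {dl:.3f} mm, dl_rel = {dl_rel:.0f} %")
# -> 0.069 mm / 4 %, 0.335 mm / 22 %, 0.262 mm / 18 %
```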
Despite the flow splitter design faults, the measured differences in the inner bend and outer bend fibre-length distributions (Table 2) agree well with the observed fibre aggregation shown in Fig. 3. Further, they prove that the observed segregation preferentially affects longer fibres and hence fulfils the case of angular length-based fibre segregation as described in the introduction (compare Fig. 1, third row).
## Discussion
The argument for length-based fibre segregation stems from the observed fibre aggregation at the outer bend and a measured increase of l 1 in the corresponding case. The fibre suspension motion in the cross-section was not accessible in this study. The nature of the flocculation phenomena will therefore be elucidated through a comparative discussion of the literature. Surprisingly, the cross-sectional flow field in coiled pipe flow (Fig. 6: fluid only, corresponding to cases in Redlinger-Pohn et al. (2016a), Re = 3316, κ = 0.043) and the flow field in a semi-filled horizontal rotating cylinder, i.e. the Jacqueline floc generator (Fig. 7, adapted from Soszynski (1987)), show similarities. Both are characterized by a large mixing vortex which spans the half cross-section with respect to the equatorial/horizontal centre plane.
Soszynski (Soszynski 1987) reported for fibre suspension flow in the Jacqueline floc generator:
(i) temporal aggregation of fibres for concentrations below the rigidity threshold (N CW < 60),
(ii) the formation of coherent flocs at higher concentrations in the deceleration zone (marked by a solid ellipse in Fig. 7 at r/R = 0.6 and angles of 50° to 90°),
(iii) which is characterized by a sharp velocity gradient, i.e. a change from upwards-outwards flow to inwards flow, and
(iv) that the flocs withstood dispersion from hydrodynamic stress when moving through the cylinder cross-section and compacted when re-entering the flocculation zone.
Soszynski (1987) suggested that a cyclic flow through acceleration and deceleration zones is required for flocculation. Both zones are present in the secondary motion of the coiled pipe flow (Fig. 6). Focusing on the secondary motion, fibres move through (i) a deceleration zone at the inner bend centre and (ii) an acceleration zone at the outer bend wall. Both areas are indicated in the upper half of the cross-section in Fig. 6 as a solid and a dashed ellipse, respectively. The position h of changes in RLA (Fig. 4) is drawn on top of the flow field. h = 0.5 corresponds to the location of the secondary velocity maximum and h = 0.3 corresponds to the location of a vortex at the inner bend which directs the suspension towards the tube centre. A clear crowding and floc formation event was not observable from the tube top-view recordings (Fig. 3 and Supplementary Material). But for case C (Fig. 3c), N CW = 8, a deformation of flocs at the outer bend and a thinning of the fibre flocs at h ≈ 0.5 was noted, and a high standard deviation in RLA was recorded (Fig. 4b, grey line). The results suggest that the formed fibre aggregates did not fully withstand dispersion by the secondary motion. Instead, fibres underwent a dynamic dispersion and aggregation process in the cross-section. For a higher fibre concentration, case E (Fig. 3e) with a bulk crowding number N CW of 18, no comparable dispersion event was noticed. Additionally, the deformation of the aggregates at the outer bend observed for case C was not observed for case E.
The mixing flow in a coiled pipe differs from the mixing flow in the Jacqueline floc generator by centrifugal forces acting in the direction of the coil radius R Coil (or h) and by the contribution of an axial (stream-wise) flow. Centrifugal forces may lead to sedimentation of the fibres and/or fibre aggregates towards the outer bend, in addition to their transport by the secondary motion. An estimation of scales is provided in the Appendix. The contribution of centrifugal forces increases with the curvature κ (which was constant in this study) and the Reynolds number Re. For the highest value, i.e. Re = 25,356, the ratio of the sedimentation velocity from centrifugal forces to the secondary motion at the equatorial centre plane is 0.09. Hence, centrifugal forces can be neglected, and the fibre motion and segregation documented in this study can be attributed to the secondary motion of the flow. The mixing flow in the coiled pipe is imposed on a stream-wise flow. The maximum of the stream-wise velocity is shifted to the outer bend, i.e. h = 0.92 for the case of Re = 3316 and κ = 0.043 (Redlinger-Pohn et al. 2016a, and Appendix Fig. 8). Fibre flocs and aggregates moving from the inner bend to the outer bend hence pass through an additional acceleration zone into an area of locally high shear at the outer bend. Fibre aggregates in case C (shown in Fig. 3c) were accordingly deformed in the flow direction. The RLA for case F is lower than for case E at the outer bend (Fig. 4c), which agrees with a local dispersion of fibre aggregates in the high-shear zone at the outer bend.
For straight pipe flow it was shown that, once a fibre network has formed, the fluid moves at the same speed as the network, which results in a plug-flow velocity profile (Jäsberg and Kataja 2009; Nikbakht et al. 2014; MacKenzie et al. 2018). Likewise, Soszynski (1987) documented that the flow profiles for the case of temporal fibre aggregation and the case of coherent floc formation differed slightly. It is conceivable that fibre aggregates in the flocculating coiled fibre suspension flow modified the flow field locally. The state of fibre flocculation in straight pipe flow has been correlated to the pressure loss curve (see for example Duffy et al. 1976; Hemström et al. 1976). Similar measurements can be envisioned to characterize the fibre flocculation in coiled pipe flow and to map the flocculation in the Re-κ space in future work (see for inspiration the work of Itō (1959) on the friction factor in coiled pipe flow and Canton et al. (2017) for a recent discussion).
The angular segregation was studied below the rigidity threshold, N CW < 60, where non-coherent fibre aggregation is expected. In this regime, the angular segregation benefited from a higher fibre concentration C, i.e. increased fibre floc strength, and from a lower Reynolds number Re, i.e. reduced dispersion by hydrodynamic stress. That is unique to flocculation from elastic interlocking and differs from the colloidal flocculation of polymers, which benefits from higher Re (Carissimi et al. 2018). It raises the question of whether dispersion, respectively re-suspension, is needed for angular segregation, in which case it may be limited to the regime of non-coherent flocs, i.e. N CW < 60. That limit was reported for the treatment of fibre samples injected into coiled pipe flow (Krogerus et al. 2003) and should be addressed for continuous fibre suspension flow in future studies. A second limitation not discussed here is the tube size of D Tube < 50 mm (Krogerus et al. 2003). It is conceivable that the velocity gradient seen by the fibre simply becomes too small to promote flocculation in larger tube diameters (for example, for l 1 = 1 mm and D Tube = 50 mm, the ratio D Tube/l 1 = 50). Soszynski noted steeper velocity gradients for flocculation in the Jacqueline floc generator as compared to the non-flocculating case (Soszynski 1987). The flow field for flocculation in coiled pipe flow can be addressed in future studies experimentally, for example by magnetic resonance velocimetry (MacKenzie et al. 2018), or numerically, as the motion of interacting flexible elongated particles (Tozzi et al. 2005; Schmid et al. 2000; Lindström and Uesaka 2007).
Length-based fibre separation by flow splitting was documented in this study for a fibre suspension propagating 1.75 coils through a tube with D Tube = 16 mm at κ = 0.04, which equals a tube length of 2.2 m. That is nearly two orders of magnitude shorter than the tube of, for example, 100 m length used by Silvy and Pascal (1990) and in work thereafter. The difference stems from two different attributes of the coiled flow causing angular segregation and axial separation. A scaling analysis is provided in the Appendix. Angular segregation results from mixing in the cross-section and the consequent segregation of networked fibres towards the outer bend. The minimum tube length for the setup of this study was estimated from the Dean vortex turn-over time (i.e. a particle is mixed in the cross-section once) to be 1 m. Axial separation results from the stream-wise velocity difference and the fact that fibres aggregated at the outer bend are entrained in the faster flow (see Fig. 9 in the Appendix). The aggregate mean velocity at the outer bend was measured from the recorded images (Fig. 10 in the Appendix) to be 1.1 times the average velocity. To achieve a sufficient separation resolution in the axial direction, i.e. a time interval between two consecutive samples of at least 5 s, a tube on the order of 100 m length is needed.
A separation effect in the direction of h was recently also documented for rectangular coiled channels. Wang et al. (2020) separated ca. 1 µm long (l 1) cellulose fibrils from smaller fibrils. The longer, 1 µm fibrils were found to accumulate at the outer bend of a 45 µm high (H Channel) and 300 µm wide channel. For micro-particles, separation from the complex interplay of lift force and drag force from the secondary motion is known (Ookawara et al. 2006; Di Carlo 2009; Redlinger-Pohn and Radl 2017). Redlinger-Pohn and Radl (2017) have shown that the accumulation at the outer bend appears only for confinements H Channel/l 1 < 13. In Wang et al. (2020) the confinement was H Channel/l 1 = 45 and the fibrils were highly elongated. As in previous studies on fibre tube flow fractionation, only the resulting suspension properties are known; the process itself is a black box. Cross-sectional mixing may also promote aggregation of highly elongated particles in the nm and µm size range. Certainly, that should be addressed in future studies.
## Summary and conclusions
In this work I documented fibre flocculation and fibre aggregation in coiled suspension flow at a comparably low crowding number, i.e. N CW = 8 for Re = 9000. The formed aggregates segregated towards the outer bend of the coiled pipe, which constitutes angular segregation in pipe flow, i.e. fibre accumulation in a circular segment of the tube cross-section. The angular segregation was studied for cases below the rigidity threshold, N CW < 60, hence non-coherent fibre flocs, and for a tube diameter of D Tube = 16 mm (D Tube/l 1,Fibre = 8 to 10). The literature notes upper limits for the pipe diameter (D Tube < 50 mm) and the fibre concentration. Those should be addressed in future studies. The quality of the angular segregation was probed by flow splitting after 1.75 tube coils (ca. 2 m tube length). The flow splitter design was based on the observation results. The flow was separated into an inner bend flow and an outer bend flow at 0.3 D Tube in the coil radius R Coil direction. The spread in mean fibre length l 1 between the outer and inner bend suspension flows was 22% at maximum.
For the studied cases, I found the fibre segregation to benefit from an increased fibre concentration and to suffer from increased hydrodynamic stress, i.e. a higher flow velocity expressed by a higher Reynolds number. The segregation of fibre aggregates to the outer bend was documented after 1.5 coils, ca. 2 m, but may already be found after 1 coil, i.e. 1 loop, based on an estimated secondary-motion turn-over time. That raises the interesting question of whether fibre pulp suspensions transported through pipes are mixed or segregated in pipe bends and turns. In this study, I document angular fibre segregation as a promising route for separating pulp fibres by length. This observation may also explain the documented separation of cellulose nano-fibrils in coiled channel flow (Wang et al. 2020). But some key work remains to be done before coiled pipe flow can be utilized for length-based fibre separation. The understanding of the fibre flocculation in the Reynolds number Re and curvature κ space should be refined. A comparison of the pressure drop of coiled fibre suspension flow to the pressure drop of coiled water flow (Itō 1959) may allow a quantification of the state of flocculation. Phase-contrast magnetic resonance velocimetry (MacKenzie et al. 2018) may allow the direct measurement of the cross-sectional flow profile, which would enable a detailed discussion of the fibre aggregation and dispersion process. The design of the axial flow splitter should be rethought, or replaced by other flow-splitting strategies, for example side-channel splitting as used in hydrodynamic fractionation (Redlinger-Pohn et al. 2017b).
## Acknowledgments
Open access funding provided by Royal Institute of Technology. Jakob Redlinger-Pohn thanks the T 3 UG—Teens Treffen Technik internship program at Graz University of Technology for the helping hands of summer intern Veronika Rieger and the financial compensation which sponsored the project. Jakob thanks the Institute of Bioproducts and Paper Technology at Graz University of Technology for the access to their laboratory analytics.
## Appendix
See Figs. 8, 9, 10.
## Notes on the fibre motion
Table 3 summarizes the fibre motion observation from the video recordings (Supplement Material) for the studied combinations of Reynolds number Re and fibre concentration C. The case labeling corresponds to the case labeling in Fig. 3.
Table 3
Description of the suspension motion from flow observation. Case labeling corresponds to the labeling in Fig. 3
Case A: Grainy structures indicating local fibre aggregation. An increase in the number of grains was observed towards the outer bend. RLA (Fig. 4a) was slightly higher at h = 0.8 and smaller for h < 0.5.
Case B: Grainy structures as observed in case A were not observed in case B. RLA (Fig. 4a) was close to a vertical line, which suggests a homogeneous distribution of fibres in the tube.
Case C: Fibre flocs and fibre aggregates were observed at the outer bend. The RLA was higher for h > 0.7 (Fig. 4b). Fibres were observed segregated in the h direction and in the flow direction: aggregates of fibres at the outer bend were followed by fibre-depleted suspension. The standard deviation in RLA is low, despite the identification of crowding fluctuations. A local fibre crowding was reported by Soszynski (1987). Noteworthy is that fibres aggregated already at a concentration C that corresponds to N CW = 8, which is half the gel point, N CW = 16 (Martinez et al. 2001).
Case D: Fibre flocculation was observed in the tube. The aggregates were not segregated to the outer bend but appeared distributed in the tube. RLA (Fig. 4b) shows only little dependence on h. A slight increase in RLA is noted at h = 0.6, but RLA is lower for larger and smaller h.
Case E: The crowding number N CW = 18 is above the gel point and the fibre suspension is within the connectivity regime (Celzard et al. 2009), i.e. the average number of contacts between fibres is larger than 2. RLA (Fig. 4c) was high for h > 0.3. Recordings of the inner bend area were over-exposed, and a low fibre concentration at the inner bend might have been missed. As for case C, segregation in h and in the flow direction was observed. The overall concentration was higher, and from the video observation no significant motion or deformation of the aggregates within the observation window was noted.
Case F: As for case E, fibres are segregated to the outer bend. The RLA is comparable to case E (Fig. 4c). In contrast, the segregation in the flow direction is less pronounced in case F than in case E (Supplementary Material). Increased hydrodynamic stress at higher Re appears to have a homogenizing effect on the formed fibre flocs.
## Fluid motion in coiled pipe flow
The curved flow induces centrifugal forces in the direction of the coil radius and causes a pressure difference between the inner bend I and the outer bend O. The pressure difference gives rise to a secondary motion in a plane normal to the flow direction, i.e. in the pipe cross-section. The velocity maximum is shifted from the tube centre toward the outer bend O, i.e. to h = 0.92. The change of the stream-wise velocity profile along h with the curvature κ is presented in Fig. 8a. Figure 8b shows the profile of a coiled pipe flow: the secondary motion imposed on the stream-wise flow. Simulation results correspond to cases discussed in Redlinger-Pohn et al. (2016a): Re = 3316 and κ = 0.00 (straight pipe), κ = 0.04 (this work), κ = 0.10.
## Centrifugal sedimentation of fibres in pipe flow
The centrifugal force in the R Coil or h direction differs with density and leads to sedimentation of heavier particles, for example fibres with a density ratio to water of 1.3 (Redlinger-Pohn et al. 2016a), towards the outer bend. The significance of sedimentation in the centrifugal field is estimated by comparing the sedimentation velocity in the centrifugal field U sed,a to the secondary motion at the equatorial centre plane U E (from the inner bend to the outer bend) and along the circumference U C (from the outer bend to the inner bend). The quantities are illustrated in Fig. 9. The magnitude of the secondary motion was estimated from Redlinger-Pohn et al. (2016a) to be U C = 0.15 U Bulk and U E = 0.02 U Bulk (see Fig. 6).
The fibre sedimentation velocity can be calculated from the force balance on the fibre,
$$0 = F_{C} - F_{A} - F_{H} = \left( {\rho_{F} - \rho_{W} } \right)V_{F} a_{z} - F_{H}$$
(4)
where F A is the buoyancy force opposite to the centrifugal force F C and F H is the hydrodynamic drag force. ρ F and ρ W are the fibre and water density, respectively. The drag force coefficient c d for a fibre scales with its aspect ratio AR (i.e. the ratio of the fibre length l to the fibre diameter d) and depends on the fibre orientation with respect to the direction of acceleration (see for example Fan and Ahmadi 1995). The drag force coefficients for a fibre orientated parallel (c d,p) and normal (c d,n) to the direction of acceleration are:
$$c_{d,p} = \frac{8\left( AR^{2} - 1 \right)}{\left[ \left( 2AR^{2} - 1 \right)\ln \left( AR + \left( AR^{2} - 1 \right)^{0.5} \right) / \left( AR^{2} - 1 \right)^{0.5} \right] - AR}$$
(5)
$$c_{d,n} = \frac{16\left( AR^{2} - 1 \right)}{\left[ \left( 2AR^{2} - 3 \right)\ln \left( AR + \left( AR^{2} - 1 \right)^{0.5} \right) / \left( AR^{2} - 1 \right)^{0.5} \right] - AR}$$
(6)
The hydrodynamic drag force is formulated as:
$$F_{H} = \eta \pi \frac{l}{2AR}c_{d} U_{sed,a}$$
(7)
η is the fluid dynamic viscosity. The orientation of individual fibres in coiled suspension flow was reported by Redlinger-Pohn et al. (2016a) to be in the stream-wise direction and hence normal to the direction of sedimentation. Combining Eq. 7 with Eq. 4 and Eq. 6 allows solving for U sed,a. The centrifugal acceleration, $$a_{z} = 2U_{Bulk}^{2}/D_{Coil}$$, is 1.57 m/s² and 7.08 m/s² for Re = 9000 and Re = 19,000, respectively. The estimates (Table 4) are obtained for a fibre with length l = 5 mm and an aspect ratio AR = 100 in water with η = 1 mPa·s and ρ W = 1000 kg/m³.
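A small numerical sketch of this estimate (inputs as stated above; the inner denominator of Eq. 6 is taken as (AR² − 1)^0.5, which reproduces the tabulated values):

```python
# Sketch (assumed inputs from the text): force balance of Eqs. 4-7 for a fibre
# normal to the direction of acceleration, reproducing the U_sed,a column of
# Table 4. Fibre: l = 5 mm, AR = 100, density ratio 1.3; water at 1 mPa s.
import math

rho_F, rho_W = 1300.0, 1000.0    # fibre and water density [kg/m^3]
eta = 1e-3                       # dynamic viscosity [Pa s]
l, AR = 5e-3, 100.0              # fibre length [m], aspect ratio [-]
d = l / AR
V_F = math.pi / 4 * d**2 * l     # fibre volume [m^3]

def c_d_normal(AR):
    """Eq. 6: drag coefficient for a fibre normal to the acceleration."""
    s = math.sqrt(AR**2 - 1)
    return 16 * (AR**2 - 1) / ((2 * AR**2 - 3) * math.log(AR + s) / s - AR)

def u_sed(a_z):
    """Eqs. 4 and 7 solved for the sedimentation velocity U_sed,a."""
    return (rho_F - rho_W) * V_F * a_z / (eta * math.pi * l / (2 * AR) * c_d_normal(AR))

for U_bulk in (0.56, 1.19, 1.59):            # Re = 9000, 19,000, 25,400
    a_z = 2 * U_bulk**2 / 0.4                # D_coil = 0.4 m for kappa = 0.04
    print(round(1e3 * u_sed(a_z), 2), "mm/s")
# ~0.35, 1.59 and 2.84 mm/s, matching Table 4 within rounding
```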
The relative magnitude of sedimentation in the centrifugal field increases with the Reynolds number Re and with the curvature κ, as D Coil decreases. The latter was constant in this study. The impact of sedimentation from centrifugal acceleration at the settings of this study is small.
Table 4
Estimation of the fibre sedimentation velocity in the centrifugal field of a coiled suspension flow in comparison to the secondary motion
| Re [−] | U Bulk [m/s] | a z [m/s²] | U C [m/s] | U E [m/s] | U sed,a [mm/s] | U sed,a/U C [%] | U sed,a/U E [%] |
|---|---|---|---|---|---|---|---|
| 9000 | 0.56 | 1.57 | 0.084 | 0.011 | 0.353 | 0.42 | 3.15 |
| 19,000 | 1.19 | 7.08 | 0.179 | 0.024 | 1.592 | 0.89 | 6.69 |
| 25,400 | 1.59 | 12.60 | 0.238 | 0.032 | 2.834 | 1.19 | 8.93 |
Further, fibres at the studied concentrations interact and are not individually suspended. A fibre network or fibre aggregate can be viewed as a large porous particle whose density is close to that of water. Networks withstanding dispersion can deform (for example, consolidate towards the outer bend). The deformation from the centrifugal acceleration again needs to be regarded in relation to the impact of the secondary motion. Fibres locked in the network may have a lower, close to zero, velocity in the h or R Coil direction. In that case, hydrodynamic drag forces from the secondary motion would deform the network. From the estimation in Table 4 it can be argued that network deformation from the secondary-motion drag force is larger than deformation from the centrifugal force. Hence, the fibre motion, aggregation, and dispersion for the cases discussed in this paper can be attributed to the secondary motion.
## Scaling angular segregation and axial separation in coiled tube flow
Angular segregation was observed in this study after 1.5 coils, which equals a tube length L Tube of 1.9 m ($$L_{Tube} = 1.5 D_{Tube} \pi \kappa^{-1}$$, with D Tube = 16 mm). That is much shorter than the 100 m tube used in tube flow fractionation to attain axial separation of a pulp sample (Silvy and Pascal 1990; Laitinen et al. 2011; Jagiello et al. 2016; Redlinger-Pohn et al. 2016a).
The cause of angular segregation is the mixing in the pipe or tube cross-section by the secondary motion. A mixing time t Mix can be estimated for a particle entrained in a Dean vortex spanning half the cross-section (Dean vortex turn-over time). The particle moves in the cross-section from the outer bend to the inner bend along the tube half-circumference at a speed U C, and along the equatorial/horizontal centre plane at a speed of U E (see Fig. 9).
$$t_{Mix} = \frac{{D_{Tube} \pi }}{2}\frac{1}{{U_{C} }} + D_{Tube} \frac{1}{{U_{E} }}$$
(8)
The propagation in stream-wise direction L P can be estimated as the product of the mixing time t Mix and the mean stream-wise velocity U Bulk:
$$L_{P} = t_{Mix} U_{Bulk} = \frac{{D_{Tube} \pi }}{2}\frac{{U_{Bulk} }}{{U_{C} }} + D_{Tube} \frac{{U_{Bulk} }}{{U_{E} }}$$
(9)
The propagation per tube coil L P,Coil for one turn-over is obtained from comparing L P to the coil length, i.e. the circumference of the coil with D Coil = κ⁻¹ D Tube.
$$L_{P,Coil} = \frac{{L_{P} }}{{L_{Coil} }} = \frac{{L_{P} \kappa }}{{D_{Tube} \pi }} = \kappa \left( {\frac{1}{2}\frac{{U_{Bulk} }}{{U_{C} }} + \frac{1}{\pi }\frac{{U_{Bulk} }}{{U_{E} }}} \right)$$
(10)
With U C = 0.15 U Bulk and U E = 0.02 U Bulk, the propagation length calculates to L P = 1 m and L P,Coil to 0.77 (with D Tube = 16 mm and κ = 0.04). Observations of angular segregation and length-based separation in this paper were attained at approximately twice this value, hence after two theoretical turn-overs in the tube cross-section from the secondary mixing motion.
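The same estimate in a few lines of code (U C and U E as quoted above; all other values as in this study):

```python
# Sketch of Eqs. 8-10 with the secondary-motion magnitudes quoted in the text
# (U_C = 0.15 U_Bulk, U_E = 0.02 U_Bulk); D_tube and kappa as in this study.
import math

D_tube, kappa = 0.016, 0.04

def propagation(U_bulk):
    U_C, U_E = 0.15 * U_bulk, 0.02 * U_bulk
    t_mix = D_tube * math.pi / 2 / U_C + D_tube / U_E   # Eq. 8, turn-over time
    L_P = t_mix * U_bulk                                # Eq. 9, ~1 m
    L_P_coil = L_P * kappa / (D_tube * math.pi)         # Eq. 10, ~0.77 coils
    return L_P, L_P_coil

print(propagation(0.56))   # the result is independent of U_bulk: (~0.97 m, ~0.77)
```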
Fibre aggregates segregated to the outer tube bend are entrained in a faster stream-wise flow (see Fig. 8). Hence, the mean stream-wise velocity of longer fibres, U S,L, is higher than the mean stream-wise velocity U S,F of non-aggregated or non-flocculated shorter fibres or fibre-fines that are mixed over the cross-section. The separation time t TF between fractions scales with the difference in mean stream-wise velocity and the tube length L TF:
$$t_{TF} = L_{TF} \left| {\frac{1}{{U_{S,F} }} - \frac{1}{{U_{S,L} }}} \right|$$
(11)
The fibre aggregate velocity at the outer bend was estimated by tracing fibre flocs in the imaged tube segment (an example is given for case C in Fig. 10; the corresponding video is in the Supplementary Material). The tube segment measured 5.73 D Tube in length and the fibre floc was tracked over 76 frames, taken at a frame rate of 500 Hz.
The propagation velocity calculates to 0.603 m/s. The bulk velocity at Re = 9000 is U Bulk = 0.56 m/s (Table 4). The relative fibre velocity hence calculates to U S,L ≈ 1.1 U Bulk. That agrees with the average stream-wise velocity along the equatorial centre line from h = 0.33 to h = 1 of ca. 1.13 U Bulk from fluid simulations (Fig. 8). Fibre-fines and small particles can be assumed to be well mixed in the cross-section, and their mean velocity is U S,F ≈ 1 U Bulk. The tube flow fractionation sampling interval t TF in previous work was adjusted to the flow rate and was a minimum of 5 s (Jagiello et al. 2016; Redlinger-Pohn et al. 2016a). For a case of Re = 9000 and D Tube = 16 mm, the minimum tube length to attain measurable axial separation calculates to 31 m. That is an order of magnitude larger than the propagation length for angular segregation. Further, the axial separation attained by tube flow fractionation is not ideally into long fibres and fibre-fines but into consecutive compartments of decreasing fibre length. Hence, the tube length needs to be appropriately long to achieve an attainable axial separation. That may explain the “tube of a few hundred meters” used by Silvy and Pascal (1990) and in succeeding work (Laitinen et al. 2011; Jagiello et al. 2016; Redlinger-Pohn et al. 2016a). Redlinger-Pohn et al. (2016a) reported the relative residence time per fibre fraction τ L from separation in a 100 m tube with D Tube = 16 mm coiled at κ = 0.043. The Reynolds number was Re = 6926. The residence time is inversely correlated to the fraction-specific mean propagation speed U S,L ($${U}_{Bulk}/{U}_{S,L}={\tau }_{L}$$). Table 5 lists U S,L calculated from the fraction residence times in Redlinger-Pohn et al. (2016a) for sulfite pulp, 0.25 wt%, and t TF as the difference to the longest fibre fraction. t TF for fibre-fines after 100 m of separation was measured by them to be 21.6 s.
Above, I estimated a tube length of 31 m for t TF = 5 s (and U S,L = 1.1 U Bulk). Scaling the separation time t TF to a 100 m tube and accounting for the different flow rates (scaled by the Reynolds number ratio 9000/6926, as the same viscosity of water and tube diameter D Tube are used as reference) results in 21.0 s. The axial separation measured from tube flow fractionation (Redlinger-Pohn et al. 2016a) and estimated from the angular segregation agree well.
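A compact check of these numbers (floc velocity from the frame count as described above; tube length from Eq. 11):

```python
# Sketch of the axial-separation estimate around Eq. 11, using the floc
# velocity measured from the recordings (5.73 D_tube in 76 frames at 500 Hz).
D_tube = 0.016
U_bulk = 0.56                                   # Re = 9000
U_floc = 5.73 * D_tube / (76 / 500.0)           # ~0.60 m/s, i.e. ~1.1 U_bulk
U_S_L, U_S_F = 1.1 * U_bulk, 1.0 * U_bulk       # long fibres vs. fines

L_min = 5.0 / abs(1.0 / U_S_F - 1.0 / U_S_L)    # Eq. 11 for t_TF = 5 s -> ~31 m
t_100m = 5.0 * (100.0 / L_min) * (9000.0 / 6926.0)   # scaled to 100 m and Re = 6926 -> ~21 s
print(round(U_floc, 3), round(L_min, 1), round(t_100m, 1))
```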
Table 5
Fibre length fraction, stream-wise propagation velocity per fraction U S,Fraction and fraction separation time t TF. Values from Redlinger-Pohn et al. ( 2016a)
| Re [−] | l 1,Fraction [mm] | U S,Fraction [m/s] | t TF [s] |
|---|---|---|---|
| 6926 | 3.746 | 0.583 | 0 |
| | 1.745 | 0.568 | 4.6 |
| | 0.745 | 0.550 | 10.2 |
| | 0.238 | 0.518 | 21.6 |
The aggregation of fibres and their angular segregation is quick. The difference between the mean stream-wise velocity of the fibre aggregates at the outer bend and the mean bulk velocity is small, and separation in the axial direction hence requires residence time within the tube. The tube length for tube flow fractionation, i.e. 100 m, is therefore longer than the tube length for flow-splitting after angular segregation, i.e. 1 m. This analysis also suggests that the choice of tube length can tune the separation achieved by tube flow fractionation.
## Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Material
Supplementary file1 (MP4 242131 kb)
Literature
https://mathoverflow.net/questions/335572/fontaine-fargues-curve-and-period-rings-and-untilt | # Fontaine-Fargues curve and period rings and untilt
While reading the paper "THE FARGUES–FONTAINE CURVE AND DIAMONDS" by Matthew Morrow, I came across a question on page 11.
Question: The author said that the de Rham and crystalline period rings implicitly depended on having chosen an untilt of $$F$$, where $$F=C_p^b$$ is the tilt of the p-adic complex field $$C_p$$. I can't understand this. If the author just defines the $$\infty$$ point on the Fontaine-Fargues curve to be the class $$[C_p]$$, why does he mention the period rings at the bottom of page 11 when he defines the $$\infty$$ point?
Definition: The definitions of the de Rham and crystalline period rings I know are $$B_{dR}:=Frac\big(\varprojlim_n W(R)[\tfrac{1}{p}]/(\ker\theta)^n\big)$$, $$B_{cris}:=Frac(A_{cris}[\tfrac{1}{p}])$$, $$A_{cris}:=\varprojlim_n A^0_{cris}/p^nA^0_{cris}$$, and $$A^0_{cris}$$ is just the sub $$W(R)$$-module of $$W(R)[\frac{1}{p}]$$ generated by the $$\frac{\xi^n}{n!}$$ where $$n$$ runs over all positive integers. These definitions come from Fontaine's readable book Theory of p-adic Galois Representations.
Thanks for any answers!
• Doesn't the map $\theta$ (and hence $\xi$) depend on a choice of untilt? Also, I think $A^0_{cris}$ should be a subring of $B_{dR}$, not $W(R)[1/p]$, right? – Daniel Litt Jul 6 '19 at 15:07
• @DanielLitt Thanks for your answer. The map $\theta$ is defined the natural extension of $W(R)\rightarrow O_C:(x_0,x_1,...,x_n,...)\rightarrow \sum p^nx_n^{(n)}$ where $x_n=(x_n^{(m)})$ and $x_n^{(m)}\in O_C$. And $A_{cris}^0\subseteq W(R)[\frac{1}{p}]\subseteq B_{dR}^+\subseteq B_{dR}$. The untilt of a perfectoid field $K$ is a pair $(L,r)$, where $L$ is a perfectoid field and $r:L^b\cong K$ is an isomorphism. But I still can't see why $\theta$ determines such a pair $(L,r)$. – user141691 Jul 6 '19 at 15:54
• $C$ is a choice of untilt! – Daniel Litt Jul 6 '19 at 17:32
## 1 Answer
As Daniel Litt says, the choice of $$C$$ is actually an untilt. The "classical" approach to period rings, which you might have in mind, was to start with a certain complete, algebraically closed field $$C_p$$, then to construct $$R$$ out of the quotient $$O_{C_p}/p$$ and out of its Witt vectors to construct the period rings.
The content of Morrow's Proposition 5.1 is that $$R$$ may arise from many fields $$C$$ other than your initially chosen $$C_p$$, and all of these form, modulo $$\varphi^\mathbb{Z}$$, all the equivalence classes of untilts. But to produce $$\theta$$ you need a target, hence you need to pick one of these choices. By Theorem 2.3 ibid. this choice is a point on The Curve, call it $$\infty$$: the construction of the line bundle depends on this point. Morrow's remark that a choice has been made means the following: suppose $$B_e$$ could be constructed independently of the choice of an untilt. Then the construction in (6) would give the same result irrespective of the chosen point, providing you with a canonical line bundle on it, or equivalently with a preferred point on $$\mathbb{P}^1$$, which is absurd. On the other hand, the two choices match each other: a point on $$\mathbb{P}^1$$ (rather, on $$X^{FF}$$), call it $$\infty$$, and an untilt, for instance $$C_p$$.
• Thanks for your detailed answer. When I study p-adic Hodge, I haven't thought that we can replace $C_p$ with other complete algebraic closed non-archimedean field in the constructions of these period rings. One more question: Does the construction in (6) you mentioned mean the two hypotheses $Pic(X)\cong Z$ and $H^1(X,O_X(k))=0$ for all $k\geq0$ on page 11? – user141691 Jul 7 '19 at 13:45
• Yea, especially in the isomorphism $\mathrm{Pic}(X)\cong\mathbb{Z}$. – Filippo Alberto Edoardo Jul 8 '19 at 16:12
• Thank you very much ! – user141691 Jul 8 '19 at 16:38
https://www.mail-archive.com/[email protected]/msg07559.html | # Re: post-review: HTTP 500 error
Just a follow up. It does seem that what I described below was the
problem.
The "solution" was to move the repository to our Office B, where
development is done. Now that the Perforce server is on the same LAN,
with the same DNS resolution as the reviewboard server, I was able to
install without issues. I must say I love the product. Thanks for all
Zamir
On Aug 29, 2:03 pm, Zamir Khan <[email protected]> wrote:
> So I did some more digging and think I may have discovered a cause of
> the problem. I'm not an expert when it comes to networking, so please
> bear with me.
>
> My company has two locations, each with their own network. The
> perforce server is currently located at Office A on Network A. I am
> working at Office B, Network B, with the server that is hosting
> reviewboard. I believe that there is a bridge from network A to B.
> That is to say, I can ping the IP address 192.168.x.y of the perforce
> server successfully. However, the DNS for this address does not carry
> over. I cannot ping the hostname of the perforce server that is
> returned from p4 info successfully. I believe that reviewboard uses that
> hostname, and that is the problem, since I cannot ping it, only the IP.
> I've tried setting the P4PORT to the IP, but p4 info gets its info
> from the server, which returns its hostname.
>
> Is there any way around this? Or do I need the perforce server to be
> on the same network?
>
> Zamir
>
> On Aug 24, 1:50 pm, Christian Hammond <[email protected]> wrote:
>
>
>
>
>
>
>
> > Unfortunately then, you may have to do a bit of debugging on your end to figure out what is going wrong.
>
> > Sometimes it's due to incompatibilities with where you got your Python from
> > (different ones are compiled different ways). Sometimes it's due to
> > permissions or file path issues.
>
> > Christian
>
> > --
> > Christian Hammond - [email protected]
> > Review Board -http://www.reviewboard.org
> > VMware, Inc. -http://www.vmware.com
>
> > On Wed, Aug 24, 2011 at 1:48 PM,ZamirKhan <[email protected]> wrote:
> > > I will ask, but as far as I know, we are a windows shop :(
>
> > >Zamir
>
> > > On Aug 24, 1:21 pm, Christian Hammond <[email protected]> wrote:
> > > > Yeah, it appears to be some installation issue with p4python.
>
> > > > I should point out that Windows is the most difficult server platform to
> > > > install Review Board on, as errors like this for many of our third party
> > > > modules crop up from time to time. There appears to be many causes. Can
> > > you
> > > > install in a Linux machine instead? Perhaps in a VM?
>
> > > > Christian
>
> > > > --
> > > > Christian Hammond - [email protected]
> > > > Review Board -http://www.reviewboard.org
> > > > VMware, Inc. -http://www.vmware.com
>
> > > > On Wed, Aug 24, 2011 at 1:17 PM,ZamirKhan <[email protected]>
> > > wrote:
> > > > > Here it is. Does this have to do with p4python (it's installed), the
> > > > > p4.exe, or either of those being on the path? I'll double-check those
> > > > > issues.
>
> > > > > ERROR:django.request:Internal Server Error: /reviewboard/api/review-
> > > > > requests/
> > > > > Traceback (most recent call last):
> > > > > File "c:\python25\lib\site-packages\django-1.3-py2.5.egg\django\core
> > > > > \handlers\base.py", line 111, in get_response
> > > > > response = callback(request, *callback_args, **callback_kwargs)
> > > > > File "c:\python25\lib\site-packages\django-1.3-py2.5.egg\django\views
> > > > > \decorators\cache.py", line 79, in _wrapped_view_func
> > > > > response = view_func(request, *args, **kwargs)
> > > > > File "c:\python25\lib\site-packages\django-1.3-py2.5.egg\django\views
> > > > > \decorators\vary.py", line 22, in inner_func
> > > > > response = func(*args, **kwargs)
> > > > > File "c:\python25\lib\site-packages\Djblets-0.6.9-py2.5.egg\djblets
> > > > > \webapi\resources.py", line 338, in __call__
> > > > > result = view(request, api_format=api_format, *args, **kwargs)
> > > > > File "c:\python25\lib\site-packages\Djblets-0.6.9-py2.5.egg\djblets
> > > > > \webapi\resources.py", line 464, in post
> > > > > return self.create(*args, **kwargs)
> > > > > File "c:\python25\lib\site-packages\Djblets-0.6.9-py2.5.egg\djblets
> > > > > \webapi\decorators.py", line 88, in _checklogin
> > > > > return view_func(*args, **kwargs)
> > > > > File "c:\python25\lib\site-packages\Djblets-0.6.9-py2.5.egg\djblets
> > > > > \webapi\decorators.py", line 62, in _call
> > > > > return view_func(*args, **kwargs)
> > > > > File "c:\python25\lib\site-packages\Djblets-0.6.9-py2.5.egg\djblets
> > > > > \webapi\decorators.py", line 224, in _validate
> > > > > return view_func(*args, **new_kwargs)
> > > > > File "c:\python25\lib\site-packages\ReviewBoard-1.5.5-py2.5.egg
> > > > > \reviewboard\webapi\resources.py", line 4315, in create
> > > > > changenum)
> > > > > File "c:\python25\lib\site-packages\ReviewBoard-1.5.5-py2.5.egg
> > > > > \reviewboard\reviews\managers.py", line 90, in create
> > > > > review_request.update_from_changenum(changenum)
> > > > > File "c:\python25\lib\site-packages\ReviewBoard-1.5.5-py2.5.egg
> > > > > \reviewboard\reviews\models.py", line 358, in update_from_changenum
> > > > > update_obj_with_changenum(self, self.repository, changenum)
> > > > > File "c:\python25\lib\site-packages\ReviewBoard-1.5.5-py2.5.egg
> > > > > \reviewboard\reviews\models.py", line 40, in update_obj_with_changenum
> > > > > changeset = repository.get_scmtool().get_changeset(changenum)
> > > > > File "c:\python25\lib\site-packages\ReviewBoard-1.5.5-py2.5.egg
> > > > > \reviewboard\scmtools\models.py", line 56, in get_scmtool
> > > > > return cls(self)
> > > > > File "c:\python25\lib\site-packages\ReviewBoard-1.5.5-py2.5.egg
> > > > > \reviewboard\scmtools\perforce.py", line 26, in __init__
> > > > > import P4
> > > > > File "C:\Python25\lib\site-packages\P4.py", line 19, in <module>
> > > > > import P4Client
> > > > > ImportError: DLL load failed: The specified module could not be found.
>
> > > > > On Aug 24, 12:50 pm, Christian Hammond <[email protected]> wrote:
> > > > > > Well, it certainly isn't supposed to do that :) Can you check the
> > > server
> > > > > > logs and find out what the exception traceback says?
>
> > > > > > Christian
>
> > > > > > --
> > > > > > Christian Hammond - [email protected]
> > > > > > Review Board -http://www.reviewboard.org
> > > > > > VMware, Inc. -http://www.vmware.com
>
> > > > > > On Wed, Aug 24, 2011 at 12:05 PM,ZamirKhan <[email protected]>
> > > > > wrote:
> > > > > > > I suppose it would help if I put what I'm running in this thread
> > > > > > > as
> > > > > > > well:
>
> > > > > > > I am running reviewboard 1.5.5 on a Windows VM with MySQL, Apache
> > > > > > > 2.2,
> > > > > > > Python 2.5.4, etc. (let me know what else you need to know) and a
> > > > > > > Perforce repository.
>
> > > > > > >Zamir
>
> > > > > > > On Aug 23, 4:16 pm,ZamirKhan <[email protected]> wrote:
> > > > > > > > So I've gotten to another point, with a new error, so I figured
> > > > > > > > I
> > > > > > > > should start a new thread. Here is where I am getting to trying
> > > to
> > > > > run
> > > > > > > > post-review from the client:
>
> > > > > > > > C:\>post-review 2480 -d --p4-passwd=<mypassword> --server
> > > > > > > > http://<reviewboardserver>/reviewboard/
>
> > > > > > > > >>> RBTools 0.3.3
> > > > > > > > >>> Home = <home path>
> > > > > > > > >>> p4 info
> > > > > > > > >>> repository info: Path: <perforce IP and port>, Base path:
> > > None,
> > > > > > > Suppor
> > > > > > > > s changesets: True
> > > > > > > > >>> HTTP GETting api/
> > > > > > > > >>> HTTP GETting <reviewboardserver>/reviewboard/api/info/
> > > > > > > > >>> Using the new web API
> > > > > > > > >>> Generating diff for changenum 2480
> > > > > > > > >>> p4 -P <p4 password> describe -s 2480
>
> > > > > > > > snipped out all the diff-creation
>
> > > > > > > > >>> Attempting to create review request on <perforce IP and
> > > > > > > > >>> port>
> > > for
> > > > > > > 2480
> > > > > > > > >>> HTTP POSTing to
> > > <reviewboardserver>/reviewboard/api/review-req
>
> > > > > > > > ests/: {'changenum': '2480', 'repository': <perforce IP and
> > > port>'}
> > > > > > > > ==> HTTP Authentication Required
> > > > > > > > Enter authorization information for "Web API" at
> > > <reviewboardserver>
> > > > > > > > Username: zkhan
> > > > > > > > Password:>>> Got HTTP error: 500: <!DOCTYPE html PUBLIC
> > > "-//W3C//DTD
> > > > > > > XHTML 1.0 Transitio
>
> > > > > > > > al//EN"
> > > > > > > > "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd";>
>
> > > > > > > > <html xmlns="http://www.w3.org/1999/xhtml"; xml:lang="en"
> > > lang="en">
> > > > > > > > <head>
> > > > > > > > <title>500 - Internal Server Error | Review Board</title>
> > > > > > > > </head>
> > > > > > > > <body>
> > > > > > > > <h1>Something broke! (Error 500)</h1>
> > > > > > > > <p>
> > > > > > > > It appears something broke when you tried to go to here. This
> > > is
> > > > > > > > either
> > > > > > > > a bug in Review Board or a server configuration error. Please
> > > > > > > > report
> > > > > > > > this to your administrator.
> > > > > > > > </p>
> > > > > > > > </body>
> > > > > > > > </title>
>
> > > > > > > --
> > > > > > > Want to help the Review Board project? Donate today at
> > > > > > >http://www.reviewboard.org/donate/
> > > > > > > Happy user? Let us know athttp://www.reviewboard.org/users/
> > > > > > > -~----------~----~----~----~------~----~------~--~---
> > > > > > > To unsubscribe from this group, send email to
> > > > > > > [email protected]
> > > > > > > For more options, visit this group at
> > > > > > >http://groups.google.com/group/reviewboard?hl=en
>
> > > > > --
> > > > > Want to help the Review Board project? Donate today at
> > > > >http://www.reviewboard.org/donate/
> > > > > Happy user? Let us know athttp://www.reviewboard.org/users/
> > > > > -~----------~----~----~----~------~----~------~--~---
> > > > > To unsubscribe from this group, send email to
> > > > > [email protected]
> > > > > For more options, visit this group at
>
> > > --
> > > Want to help the Review Board project? Donate today at
> > >http://www.reviewboard.org/donate/
> > > Happy user? Let us know
>
> ...
>
https://www.physicsforums.com/threads/relativity-and-lorentz-transformations.665886/ | # Homework Help: Relativity and Lorentz Transformations
1. Jan 21, 2013
### mhz
1. The problem statement, all variables and given/known data
Two particles in a high-energy accelerator experiment are approaching each other head-on, each with a speed of 0.9500c as measured in the laboratory.
What is the magnitude of the velocity of one particle relative to the other?
2. Relevant equations
$v_x' = \frac{v_x-u}{1-\frac{uv_x}{c^2}}$
3. The attempt at a solution
I've considered the laboratory to be moving, in addition to one particle having twice the given speed and the other zero, but I don't know what I'm actually doing. I would really like a nice, concise explanation of what is going on. Thank you.
Additionally, there is a similar problem that I am completely lost with:
Two protons are moving away from each other. In the frame of each proton, the other proton has a speed of 0.615c.
In the rest frame of the earth the protons are moving in the opposite directions with equal values of speed. What does an observer in the rest frame of the earth measure for the speed of each proton?
If you are interested in previous responses to a similar question, see here: https://www.physicsforums.com/showthread.php?t=481467
2. Jan 22, 2013
### ehild
A particle travels with velocity vx along the x axis in the laboratory frame of reference. In another frame of reference that travels with velocity u with respect to the laboratory, the velocity of the particle is vx'. If you are an observer sitting in that new frame of reference, you would see the particle travelling with vx'.
$$v_x'=\frac{v_x-u}{1-\frac{u v_x}{c^2}}$$.
Substitute the velocity of one particle for u and the velocity of the other one for v.
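Carrying that substitution through for the first problem, with $v_x = 0.9500c$ for one particle and $u = -0.9500c$ for the frame of the other:

$$v_x'=\frac{0.9500c-(-0.9500c)}{1-\frac{(-0.9500c)(0.9500c)}{c^2}}=\frac{1.9000c}{1.9025}\approx 0.9987c$$

For the second problem the same relation is used in reverse: if each proton moves with speed $v$ in the Earth frame, then $$0.615c=\frac{2v}{1+v^2/c^2}$$, a quadratic in $v/c$ whose physical root is $v \approx 0.344c$.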
ehild
https://jewellerybook.com/drysdale/inverse-laplace-transform-tutorial.php | # Drysdale Inverse Laplace Transform Tutorial
## Lecture 3 The Laplace transform Stanford University
### Solving PDEs using Laplace Transforms Chapter 15
The Inverse Laplace Transform Department of Mathematics. Numerical Inversion of the Laplace Transform In this section we present another method for the calculation of the inverse Laplace transform., Laplace transform L Inverse Laplace transform L-1 Algebraic equation Algebraic techniques Response Not Strictly Proper Laplace Transforms Find the inverse LT of.
### Inverse Laplace Transform using Partial Fractions Step by
Differential Equations Laplace Transforms. This tutorial was made solely for the purpose of education and it was designed for students taking Applied Math 0330. It is primarily for students who have very, MA 1506 Mathematics II Tutorial 7 The Laplace transformation Groups: B03 & B08 March 14, 2012 Ngo Quoc Anh Question 2: Finding the inverse Laplace transform of.
The Laplace transform and it's use within control engineering. A step by step guide to applying the Laplace transform to time domain equations Numerical Inversion of the Laplace Transform In this section we present another method for the calculation of the inverse Laplace transform.
Tutorial sheet 1 - Laplace transforms. Tutorial sheet 2 - inverse Laplace. Tutorial sheet 3 - final value theorem and dead-time. Tutorial sheet 4 Tutorial Problem Sheet 10, Laplace Transform: Find the inverse Laplace transforms of the following by the theory of residues: (i) 1
The basic idea now known as the Z-transform was known to Laplace, and it was re-introduced in 1947 by W. Hurewicz Inverse Z-transform Free Inverse Laplace Transform calculator - Find the inverse Laplace transforms of functions step-by-step
Laplace Transforms (LT) - Learn Signals and Systems in simple and easy steps starting from Overview, Signal Analysis, Fourier Series, Fourier Transforms, Convolution Laplace transform L Inverse Laplace transform L-1 Algebraic equation Algebraic techniques Response Not Strictly Proper Laplace Transforms Find the inverse LT of
This script demonstrates using the included Talbot and Euler algorithms for numerical approximations of the inverse Laplace transform. Tutorials; Examples; Videos This MATLAB function returns the Laplace Transform of f. Toggle Main Navigation. To compute the inverse Laplace transform, Tutorials; Examples; Videos and
Find inverse Laplace transform of the following : Documents Similar To Tutorial II. Diff Equation 12 2012 FourierSeries. Uploaded by. Camelia Lupu. MATH263_Mid_2009F. The Laplace transformation is a powerful tool to solve a vast class of ordinary differential equations. good tutorial about the topic, it and inverse Laplace
InverseLaplaceTransform[expr, s, t] gives the inverse Laplace transform of expr. InverseLaplaceTransform[expr, Integral Transforms; Tutorials. Laplace transform numerical inversion algorithm can be noticed. The inverse Laplace transform can easily be expressed by referring to the Fourier transform as
To obtain inverse Laplace transform. 18. To solve constant coefficient linear ordinary differential equations using Laplace transform. 19. This MATLAB function returns the Laplace Transform of f. Toggle Main Navigation. To compute the inverse Laplace transform, Tutorials; Examples; Videos and
The Laplace Transform method is a technique for solving linear differential equations with initial conditions. It is commonly used to solve electrical circuit The Laplace Transform is used in The main idea behind the Laplace Transformation is that we Inverse of the Laplace Transform. 8. Using Inverse Laplace to
Tutorial Problem Sheet 10, Laplace Transform: Find the inverse Laplace transforms of the following by the theory of residues: (i) 1 Tutorials; Books; Free Code Free Books Introduction to Digital Filters . Introduction to Laplace Transform Analysis the inverse Laplace transform of is ,
4. Laplace Transform 41 Example 4.10. Find the inverse Laplace transform of F(s) = 2s+1 s2 +4s+5, Solution. Just a matter of making it look like ones in the table. Our very productive member Nasir decided to write another series of tutorials. This time, he wants to focus on Laplace Transform, which is all about mathematics. Are
4. Laplace Transform 41 Example 4.10. Find the inverse Laplace transform of F(s) = 2s+1 s2 +4s+5, Solution. Just a matter of making it look like ones in the table. Laplace Transform Explained - Mass Spring Damper. Laplace transforms were utilised in previous sections to so that the inverse Laplace Transform tables can
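One way to check an exercise like the excerpted Example 4.10 above is symbolic computation; a small Python/SymPy sketch (only the function F(s) is taken from the excerpt, everything else is illustrative):

```python
# Illustrative check (assumes SymPy is available) of the excerpted example
# F(s) = (2s + 1)/(s^2 + 4s + 5) by symbolic inverse Laplace transformation.
import sympy as sp

t, s = sp.symbols("t s", positive=True)
F = (2 * s + 1) / (s**2 + 4 * s + 5)

f = sp.inverse_laplace_transform(F, s, t)
print(sp.simplify(f))
# expected: exp(-2*t)*(2*cos(t) - 3*sin(t)), possibly times Heaviside(t)
```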
The basic idea now known as the Z-transform was known to Laplace, and it was re-introduced in 1947 by W. Hurewicz Inverse Z-transform Free Inverse Laplace Transform calculator - Find the inverse Laplace transforms of functions step-by-step
Free Inverse Laplace Transform calculator - Find the inverse Laplace transforms of functions step-by-step Help with Inverse Laplace Transform Function . Learn more about ilaplace
This MATLAB function returns the Inverse Laplace Transform of F. De ne the generalized inverse of F, F 1: [0;1] ! IR, via the inverse transform method to generate the iid exponential interraival times X i, we can represent X
Inverse Laplace Transform - Inverse Laplace Transform - Signals and Systems - Signals and Systems Video tutorials GATE, IES and other PSUs exams preparation and to Laplace Transform Explained - Mass Spring Damper. Laplace transforms were utilised in previous sections to so that the inverse Laplace Transform tables can
Tutorial Problem Sheet 10, Laplace Transform: Find the inverse Laplace transforms of the following by the theory of residues: (i) 1 Our very productive member Nasir decided to write another series of tutorials. This time, he wants to focus on Laplace Transform, which is all about mathematics. Are
One-dimensional Laplace transforms. The Laplace transform of a function f(t) is given by \[Integral]_0^\[Infinity]f(t)e^-st\[DifferentialD]t. The inverse Laplace Topics covered include the properties of Laplace transforms and inverse Laplace transforms together with applications to ordinary and partial differential equations,
Next tutorial. Properties of the I'll show you in a few videos, there are whole tables of Laplace Transforms, Introduction to the Laplace Transform. Section 4-3 : Inverse Laplace Transforms. Finding the Laplace transform of a function is not terribly difficult if we’ve got a table of transforms in front of us to
The Laplace Transform is one of the most powerful mathematical tools that can be used to solve a wide variety of problems in Math, Science, and Engineering. We need to know how to find the inverse of the Laplace Transform, when solving problems.
MA 1506 Mathematics II Tutorial 7 The Laplace transformation Groups: B03 & B08 March 14, 2012 Ngo Quoc Anh Question 2: Finding the inverse Laplace transform of De ne the generalized inverse of F, F 1: [0;1] ! IR, via the inverse transform method to generate the iid exponential interraival times X i, we can represent X
### Laplace transform intro Differential equations (video
MATHS TUTORIAL – LAPLACE and FOURIER TRANSFORMS. % Finding the inverse Laplace transform using partial fraction decomposition. % For polynomials with distinct and non-repeating roots. % A degree of denominator, 43 The Laplace Transform: Basic De nitions and Results Laplace transform is yet another operational tool for solving constant coe -cients linear di erential equations..
### Differential Equations Inverse Laplace Transforms
INVERSE LAPLACE University of Sheffield. The calculator will find the Inverse Laplace Transform of the given function. Recall, that \mathcal{L}^{-1}\left(F(s)\right) is such a function f(t) t This section provides materials for a session on how to compute the inverse Laplace transform. Materials include course notes, a lecture video clip, practice problems.
• Laplace transform MATLAB laplace - MathWorks
• MATLAB TUTORIAL for the First Course. Part 6 Inverse
• Inverse Laplace Transform - Inverse Laplace Transform - Signals and Systems - Signals and Systems Video tutorials GATE, IES and other PSUs exams preparation and to Inverse Laplace Transform - Inverse Laplace Transform - Signals and Systems - Signals and Systems Video tutorials GATE, IES and other PSUs exams preparation and to
In MuPAD Notebook only, ilaplace(F, s, t) computes the inverse Laplace transform of the expression F = F(s) with respect to the variable s at the point t. Solution via Laplace transform and matrix exponential so etA is nonsingular, with inverse etA −1 = e−tA Solution via Laplace transform and matrix exponential
4. Laplace Transform 41 Example 4.10. Find the inverse Laplace transform of F(s) = 2s+1 s2 +4s+5, Solution. Just a matter of making it look like ones in the table. The Laplace Transform is one of the most powerful mathematical tools that can be used to solve a wide variety of problems in Math, Science, and Engineering.
we use tables of the Laplace transforms and obtain sY(s) y(0) = 3 1 s 2 1 s2 and invert it using the inverse Laplace transform and the same tables again and Laplace Transforms (LT) - Learn Signals and Systems in simple and easy steps starting from Overview, Signal Analysis, Fourier Series, Fourier Transforms, Convolution
Inverse Laplace Transform, The Inverse Laplace Transform by Partial Fraction Expansion. Inverse Laplace Transform by Partial Fraction Expansion. Lecture Notes on Laplace and z-transforms Such uniqueness theorems allow us to find inverse Laplace transform by looking at Laplace transform tables.
This MATLAB function returns the Inverse Laplace Transform of F. This script demonstrates using the included Talbot and Euler algorithms for numerical approximations of the inverse Laplace transform. Tutorials; Examples; Videos
8. Find the inverse Laplace transforms of the following by the theory of residues: (i) 1 (s+1)(s 2)2, (ii) s+2 (s+1)(s2 +4). 9. Solve the following Initial Value One-dimensional Laplace transforms. The Laplace transform of a function f(t) is given by \[Integral]_0^\[Infinity]f(t)e^-st\[DifferentialD]t. The inverse Laplace
This script demonstrates using the included Talbot and Euler algorithms for numerical approximations of the inverse Laplace transform. Tutorials; Examples; Videos Inverse Laplace Transform using Partial Fractions Step by Step – Differential Equations Made Easy. If you are asked to find the Inverse Laplace that involves
SCHOOL OF ENGINEERING & BUILT ENVIRONMENT . Mathematics . Laplace Transforms . 1. Inverse Laplace Transforms Answers to Tutorial Exercises The Laplace Transform is used in The main idea behind the Laplace Transformation is that we Inverse of the Laplace Transform. 8. Using Inverse Laplace to
We need to know how to find the inverse of the Laplace Transform, when solving problems. Inverse Laplace transform inprinciplewecanrecoverffromF via f(t) = 1 2…j Z¾+j1 ¾¡j1 F(s)estds where¾islargeenoughthatF(s) isdeflnedfor
PYKC – 20 Jan 08 2 5.** Find the inverse Laplace transform of the function: . 6.*** The Laplace transform of a causal periodic signal can be found from the In MuPAD Notebook only, ilaplace(F, s, t) computes the inverse Laplace transform of the expression F = F(s) with respect to the variable s at the point t.
## Inverse Laplace Transform Calculator eMathHelp
Z-transform Wikipedia. The calculator will find the Inverse Laplace Transform of the given function. Recall, that \mathcal{L}^{-1}\left(F(s)\right) is such a function f(t) t, The Laplace Transform is one of the most powerful mathematical tools that can be used to solve a wide variety of problems in Math, Science, and Engineering..
### Basics of Laplace Transform Electrical engineering Community
MATHS TUTORIAL – LAPLACE and FOURIER TRANSFORMS. Calculator Tutorials; Geometry + Trig; Solving ODEs with Laplace Transforms, Part 1 We also discuss the Inverse Laplace Transform and derive several inverses., SCHOOL OF ENGINEERING & BUILT ENVIRONMENT . Mathematics . Laplace Transforms . 1. Inverse Laplace Transforms Answers to Tutorial Exercises.
Laplace Transforms & Transfer Functions Laplace Transforms: method for solving differential equations, converts differential • Inverse transform, s Find inverse Laplace transform of the following : Documents Similar To Tutorial II. Diff Equation 12 2012 FourierSeries. Uploaded by. Camelia Lupu. MATH263_Mid_2009F.
Laplace transform L Inverse Laplace transform L-1 Algebraic equation Algebraic techniques Response Not Strictly Proper Laplace Transforms Find the inverse LT of The Laplace transform and it's use within control engineering. A step by step guide to applying the Laplace transform to time domain equations
This MATLAB function returns the Inverse Laplace Transform of F. Laplace transform numerical inversion algorithm can be noticed. The inverse Laplace transform can easily be expressed by referring to the Fourier transform as
We need to know how to find the inverse of the Laplace Transform, when solving problems. We need to know how to find the inverse of the Laplace Transform, when solving problems.
Chapter 4 (Laplace transforms): Solutions (The table of Laplace transforms is used throughout.) To find the inverse Laplace transform of Laplace transform L Inverse Laplace transform L-1 Algebraic equation Algebraic techniques Response Not Strictly Proper Laplace Transforms Find the inverse LT of
The basic idea now known as the Z-transform was known to Laplace, and it was re-introduced in 1947 by W. Hurewicz Inverse Z-transform Inverse Laplace Transform - Inverse Laplace Transform - Signals and Systems - Signals and Systems Video tutorials GATE, IES and other PSUs exams preparation and to
Laplace transform numerical inversion algorithm can be noticed. The inverse Laplace transform can easily be expressed by referring to the Fourier transform as Calculator Tutorials; Geometry + Trig; Solving ODEs with Laplace Transforms, Part 1 We also discuss the Inverse Laplace Transform and derive several inverses.
Laplace Transform Explained - Mass Spring Damper. Laplace transforms were utilised in previous sections to so that the inverse Laplace Transform tables can Topics covered include the properties of Laplace transforms and inverse Laplace transforms together with applications to ordinary and partial differential equations,
Introduction These slides cover the application of Laplace Transforms to Heaviside functions. See the Laplace Transforms workshop if you need to revise this topic rst. Find inverse Laplace transform of the following : Documents Similar To Tutorial II. Diff Equation 12 2012 FourierSeries. Uploaded by. Camelia Lupu. MATH263_Mid_2009F.
Introduction These slides cover the application of Laplace Transforms to Heaviside functions. See the Laplace Transforms workshop if you need to revise this topic rst. Laplace Transform Explained - Mass Spring Damper. Laplace transforms were utilised in previous sections to so that the inverse Laplace Transform tables can
The basic idea now known as the Z-transform was known to Laplace, and it was re-introduced in 1947 by W. Hurewicz Inverse Z-transform The calculator will find the Inverse Laplace Transform of the given function. Recall, that \mathcal{L}^{-1}\left(F(s)\right) is such a function f(t) t
Using partial fraction expansion and applying the inverse Laplace transform to the Laplace transforms. Laplace transforms in this table are valid for: f(t) Chapter 7 Laplace Transform The Laplace transform can be used to solve di erential equations. Be-sides being a di erent and e cient alternative to variation of parame-
Calculator Tutorials; Geometry + Trig; Solving ODEs with Laplace Transforms, Part 1 We also discuss the Inverse Laplace Transform and derive several inverses. This section provides materials for a session on how to compute the inverse Laplace transform. Materials include course notes, a lecture video clip, practice problems
Find inverse Laplace transform of the following : Documents Similar To Tutorial II. Diff Equation 12 2012 FourierSeries. Uploaded by. Camelia Lupu. MATH263_Mid_2009F. An important concept in engineering concerns the Laplace Transform. Applying this important concept in electrical engineering, the Laplace Transform takes a function
This MATLAB function returns the Laplace Transform of f. Toggle Main Navigation. To compute the inverse Laplace transform, Tutorials; Examples; Videos and Laplace transform L Inverse Laplace transform L-1 Algebraic equation Algebraic techniques Response Not Strictly Proper Laplace Transforms Find the inverse LT of
PYKC – 20 Jan 08 2 5.** Find the inverse Laplace transform of the function: . 6.*** The Laplace transform of a causal periodic signal can be found from the This MATLAB function returns the Laplace Transform of f. Toggle Main Navigation. To compute the inverse Laplace transform, Tutorials; Examples; Videos and
Inverse Laplace Transform, The Inverse Laplace Transform by Partial Fraction Expansion. Inverse Laplace Transform by Partial Fraction Expansion. Help with Inverse Laplace Transform Function . Learn more about ilaplace
Our very productive member Nasir decided to write another series of tutorials. This time, he wants to focus on Laplace Transform, which is all about mathematics. Are Inverse Laplace Transform - Inverse Laplace Transform - Signals and Systems - Signals and Systems Video tutorials GATE, IES and other PSUs exams preparation and to
The Laplace transform and it's use within control engineering. A step by step guide to applying the Laplace transform to time domain equations An important concept in engineering concerns the Laplace Transform. Applying this important concept in electrical engineering, the Laplace Transform takes a function
To obtain inverse Laplace transform. 18. To solve constant coefficient linear ordinary differential equations using Laplace transform. 19. Inverse Laplace Transform using Partial Fractions Step by Step – Differential Equations Made Easy. If you are asked to find the Inverse Laplace that involves
Laplace Transform explained Mass Spring Damper. Laplace transforms and their inverse are a mathematical technique which allows us to solve differential equations, by primarily using algebraic methods. This, Solution via Laplace transform and matrix exponential so etA is nonsingular, with inverse etA −1 = e−tA Solution via Laplace transform and matrix exponential.
### INVERSE LAPLACE University of Sheffield
MATLAB TUTORIAL for the First Course. Part 6 Inverse. Calculator Tutorials; Geometry + Trig; Solving ODEs with Laplace Transforms, Part 1 We also discuss the Inverse Laplace Transform and derive several inverses., This script demonstrates using the included Talbot and Euler algorithms for numerical approximations of the inverse Laplace transform. Tutorials; Examples; Videos.
Maxima Tutorial. Inverse Laplace Transform - Inverse Laplace Transform - Signals and Systems - Signals and Systems Video tutorials GATE, IES and other PSUs exams preparation and to, The Laplace Transform is used in The main idea behind the Laplace Transformation is that we Inverse of the Laplace Transform. 8. Using Inverse Laplace to.
### The Laplace Transform Tutorial tut4dl.com
Laplace transform MATLAB laplace - MathWorks. This MATLAB function returns the Inverse Laplace Transform of F. An important concept in engineering concerns the Laplace Transform. Applying this important concept in electrical engineering, the Laplace Transform takes a function.
Laplace Transform Explained - Mass Spring Damper. Laplace transforms were utilised in previous sections to so that the inverse Laplace Transform tables can Tutorials; Books; Free Code Free Books Introduction to Digital Filters . Introduction to Laplace Transform Analysis the inverse Laplace transform of is ,
An important concept in engineering concerns the Laplace Transform. Applying this important concept in electrical engineering, the Laplace Transform takes a function Laplace Transform Explained - Mass Spring Damper. Laplace transforms were utilised in previous sections to so that the inverse Laplace Transform tables can
One-dimensional Laplace transforms. The Laplace transform of a function f(t) is given by \[Integral]_0^\[Infinity]f(t)e^-st\[DifferentialD]t. The inverse Laplace This tutorial was made solely for the purpose of education and it was designed for students taking Applied Math 0330. It is primarily for students who have very
De ne the generalized inverse of F, F 1: [0;1] ! IR, via the inverse transform method to generate the iid exponential interraival times X i, we can represent X This script demonstrates using the included Talbot and Euler algorithms for numerical approximations of the inverse Laplace transform. Tutorials; Examples; Videos
Chapter 4 (Laplace transforms): Solutions (The table of Laplace transforms is used throughout.) To find the inverse Laplace transform of To obtain inverse Laplace transform. 18. To solve constant coefficient linear ordinary differential equations using Laplace transform. 19.
The basic idea now known as the Z-transform was known to Laplace, and it was re-introduced in 1947 by W. Hurewicz Inverse Z-transform InverseLaplaceTransform[expr, s, t] gives the inverse Laplace transform of expr. InverseLaplaceTransform[expr, Integral Transforms; Tutorials.
One-dimensional Laplace transforms. The Laplace transform of a function f(t) is given by \[Integral]_0^\[Infinity]f(t)e^-st\[DifferentialD]t. The inverse Laplace Numerical Inversion of the Laplace Transform In this section we present another method for the calculation of the inverse Laplace transform.
This MATLAB function returns the Inverse Laplace Transform of F. Laplace transform L Inverse Laplace transform L-1 Algebraic equation Algebraic techniques Response Not Strictly Proper Laplace Transforms Find the inverse LT of
Inverse Laplace transform inprinciplewecanrecoverffromF via f(t) = 1 2…j Z¾+j1 ¾¡j1 F(s)estds where¾islargeenoughthatF(s) isdeflnedfor
Tutorial sheet 1 - Laplace transforms. Tutorial sheet 2 - inverse Laplace. Tutorial sheet 3 - final value theorem and dead-time. Tutorial sheet 4 The Laplace Transform method is a technique for solving linear differential equations with initial conditions. It is commonly used to solve electrical circuit
Using partial fraction expansion and applying the inverse Laplace transform to the Laplace transforms. Laplace transforms in this table are valid for: f(t) InverseLaplaceTransform[expr, s, t] gives the inverse Laplace transform of expr. InverseLaplaceTransform[expr, Integral Transforms; Tutorials.
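As a worked sketch of the table-lookup step in Example 4.10 (completing the square, using the standard table of transforms):

$$F(s) = \frac{2s+1}{s^2+4s+5} = \frac{2(s+2)-3}{(s+2)^2+1},$$

and since $\mathcal{L}\{e^{-2t}\cos t\} = \frac{s+2}{(s+2)^2+1}$ and $\mathcal{L}\{e^{-2t}\sin t\} = \frac{1}{(s+2)^2+1}$, the inverse transform is $f(t) = 2e^{-2t}\cos t - 3e^{-2t}\sin t$.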
View all posts in Drysdale category | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9571666717529297, "perplexity": 838.1110181445808}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305242.48/warc/CC-MAIN-20220127072916-20220127102916-00706.warc.gz"} |
https://www.physicsforums.com/threads/general-physics-question-rod-moving-about-a-pivot.160435/ | # General Physics Question - Rod moving about a pivot
1. Mar 12, 2007
### integra2k20
1. The problem statement, all variables and given/known data
This problem involves a uniform rod of length L with a pivot at point L/4 (so that 1/4 of the rod is behind the pivot and 3/4 is in front of it). The rod is released from a horizontal position and drops down. You need to use conservation of energy to solve for the velocity when it is at the vertical position (when the energy is all kinetic). Then, using the idea of a physical pendulum, you have to calculate the period of oscillation if it is displaced slightly.
2. Relevant equations
mgy = 1/2mv^2
3. The attempt at a solution
My idea for the solution is just to ignore the portion of the rod that is to the left of the pivot point (which is equal in length to L/4) and focus only on the remaining 3L/4 portion of the rod. I don't know, however, if this is the right way to approach the problem, or if there is something else i need to take into account.
2. Mar 12, 2007
### Staff: Mentor
That won't work--every piece of the rod counts. Hint: What's the rotational inertia of the rod about its pivot?
3. Mar 12, 2007
### integra2k20
The rotational inertia would be the sum of MR^2, so you would treat the portion to the left of the pivot as one mass and the portion to the right as one mass, taking the length of each R (radius) from the center of mass. I got 9ML^2/108.
Can this be solved by summing torques and setting equal to (I)(alpha), solving for alpha (rotational acceleration) and using a rotational kinematic?
4. Mar 13, 2007
### Staff: Mentor
You'll need to redo this calculation. You can certainly treat the stick as composed of two smaller sticks joined together. But you cannot treat a stick as if all its mass is located at its center of mass. Instead, add up the rotational inertia of each smaller stick to find the rotational inertia of the complete stick. (Or make use of the parallel axis theorem.)
Use conservation of energy, as suggested in the problem statement.
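For reference, a sketch of that calculation (an illustration of the hint, not a quote from the thread): treating the stick as two rods of mass M/4 and 3M/4 rotating about their common end at the pivot,

$I = \frac{1}{3}\left(\frac{M}{4}\right)\left(\frac{L}{4}\right)^2 + \frac{1}{3}\left(\frac{3M}{4}\right)\left(\frac{3L}{4}\right)^2 = \frac{ML^2}{192} + \frac{27ML^2}{192} = \frac{7ML^2}{48}$,

which agrees with the parallel axis theorem: $I = \frac{1}{12}ML^2 + M\left(\frac{L}{4}\right)^2 = \frac{7}{48}ML^2$.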
5. Mar 14, 2007
### integra2k20
I found the moment of inertia, but I'm not sure exactly how to set up the conservation of energy. Usually it's mgy = 1/2Iw^2 (where w is angular velocity at the bottom), but in this case I'm not really sure what "y" would be.
6. Mar 14, 2007
### Staff: Mentor
Consider the change in height of the center of mass.
7. Mar 15, 2007
### integra2k20
the center of mass or the pivot point?
8. Mar 16, 2007
### Staff: Mentor
The pivot point is fixed.
9. Mar 21, 2007
### integra2k20
Thanks for all the help thus far, Doc Al.
The last part of the question is: The rod is brought to rest hanging in the vertical position, then displaced slightly. Calculate the period of oscillation as it swings.
Now, since this is a physical pendulum, I know that the period is 2(pi)(I/mgd)^(1/2), where I is the moment of inertia of the rod about the pivot. I would assume that d, the distance of the pendulum, is ONLY the portion of the rod below the pivot point, am I correct?
10. Mar 21, 2007
### Staff: Mentor
Good.
In that equation, d is the distance between the pivot point and the center of mass of the pendulum.
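For reference, a sketch assuming the moment of inertia $\frac{7}{48}ML^2$ from the two-rod calculation above: with $d = L/4$,

$T = 2\pi\sqrt{\frac{I}{Mgd}} = 2\pi\sqrt{\frac{7ML^2/48}{Mg(L/4)}} = 2\pi\sqrt{\frac{7L}{12g}}$.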
11. Mar 22, 2007
### integra2k20
By the way I didn't really understand what you said about the Inertia, i thought I had it but i reread your post above and it seems that you said 9ML^2/108 was wrong. My method for doing this was treating the rod as if it were composed of two smaller rods, one to the left of the pivot, and one to the right, then summing the mass*radius^2 for each of the two sticks to get the total moment of inertia. For each mass, i used a fraction of "M" (the total mass of the rod) and for the radius i used the distance between the pivot and the center of mass of each of the two smaller rods. Thanks.
12. Mar 22, 2007
### Staff: Mentor
You seem to think that the moment of inertia of an object is $MR^2$, where R is the distance from the pivot to the center of mass. NO! That's true for a point mass, but not for an extended object like a rod. To derive the moment of inertia for a rod you must integrate $dMR^2$ for each element of mass (dM) within the object--since each element has a different R. Of course, you can just look up the moment of inertia for common shapes--like rods, cylinders, spheres, etc.
Example: The moment of inertia of a thin rod about one end is $1/3 M L^2$--but using your (incorrect) method, you'd get $M(L/2)^2 = 1/4ML^2$.
Read this: Moment of Inertia Examples
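Spelling out that example: for a uniform rod of mass M and length L pivoted at one end, each mass element is $dM = \frac{M}{L}dx$, so

$I = \int_0^L \frac{M}{L}x^2\,dx = \frac{1}{3}ML^2$,

whereas lumping all the mass at the center of mass would give only $M\left(\frac{L}{2}\right)^2 = \frac{1}{4}ML^2$.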
13. Mar 24, 2007
### integra2k20
thanks, you just saved me a lot of trouble. I found the correct answer to this problem, but was having trouble figuring out just how to get there. It makes sense now, a rod is not composed of a bunch of point masses. thanks
14. Mar 24, 2007
### integra2k20
just one more question - over what interval would i integrate the function? L/3 of the rod's length is to the LEFT of the point, and 2L/3 is to the RIGHT, so i would assume i integrate over -L/3 to 2L/3
15. Mar 24, 2007
### Staff: Mentor
Actually, a rod is composed of a bunch of point masses--all at different positions, which is why you must integrate. But a rod cannot be modeled as a single point mass located at its center of mass.
If you wanted to do the integration yourself to find the moment of inertia of the rod about its pivot, you'd integrate from -L/4 to 3L/4--since the pivot is at L/4. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9214829802513123, "perplexity": 439.6329461905592}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698543170.25/warc/CC-MAIN-20161202170903-00319-ip-10-31-129-80.ec2.internal.warc.gz"} |
https://astronomy.stackexchange.com/questions/25982/does-the-hubble-constant-depend-on-redshift?noredirect=1 | # Does the Hubble constant depend on redshift?
I know there are a lot of questions on the Hubble constant already, but I am curious to know if it changes with redshift? If at current redshift, $$z=0$$, we know its value to be 0.7, will it be different at higher redshift ($$z=0.1$$)? If so, is there any relationship with redshift?
Yes, definitely.
The Hubble constant describes the expansion rate of the Universe, and the expansion may, in turn, may be decelerated by "regular" matter/energy, and accelerated by dark energy.
It's more or less the norm to use the term Hubble constant $H_0$ for the value today, and Hubble parameter $H(t)$ or $H(a)$ for the value at a time $t$ or, equivalently, a scale factor $a = 1/(1+z)$, where $z$ is the redshift.
The value is given by the Friedmann equation: $$\frac{H^2(a)}{H_0^2} = \frac{\Omega_\mathrm{r}}{a^4} + \frac{\Omega_\mathrm{M}}{a^3} + \frac{\Omega_k}{a^2} + \Omega_\Lambda,$$ where $\{ \Omega_\mathrm{r}, \Omega_\mathrm{M}, \Omega_k, \Omega_\Lambda \} \simeq \{ 10^{-3},0.3,0,0.7 \}$ are the fractional energy densities in radiation, matter, curvature, and dark energy, respectively.
For instance, you can solve the above equation at $z=0.1$ and find that the expansion rate was 5% higher than today.
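A quick numerical check of that statement (a minimal sketch in Python, using the density parameters quoted above; the function name is just for illustration):

```python
import numpy as np

# Density parameters quoted above: radiation, matter, curvature, dark energy
omega_r, omega_m, omega_k, omega_l = 1e-3, 0.3, 0.0, 0.7

def hubble_ratio(z):
    """H(z)/H0 from the Friedmann equation, with scale factor a = 1/(1+z)."""
    a = 1.0 / (1.0 + z)
    return np.sqrt(omega_r / a**4 + omega_m / a**3 + omega_k / a**2 + omega_l)

print(hubble_ratio(0.1))   # ~1.05 -> about 5% higher than today
print(np.sqrt(omega_l))    # ~0.84 -> the factor H/H0 approaches far in the future
```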
Since everything but dark energy dilutes with increasing $a$, $H(a)$ will asymptotically converge to a value $H_0\sqrt{\Omega_\Lambda} \simeq 56\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}$.
The figure below shows the evolution of the Hubble parameter with time:
As noted by KenG, the fact that $H$ decreases with time may seem at odds with the accelerated expansion of the Universe. But $H$ describes how fast a point in space at a given distance recedes. Later, that point will be farther away, and so will recede faster. From the definition of the Hubble parameter, $H\equiv\dot{a}/a$, multiplying by the scale factor gives the expansion speed $da/dt = Ha$, and it is the increase of this quantity with time that constitutes the acceleration:
• And just to stave off any possible confusion surrounding that wonderful answer, when people talk about the expansion "accelerating," they are talking about what is happening to the expansion speed H times a, not the expansion rate H itself. So your result shows that while H is dropping with or without dark energy, dark energy makes H times a rise with a, whereas matter alone makes H times a drop with a. – Ken G Apr 19 '18 at 10:42
• @KenG Yes, that's an important point. – pela Apr 19 '18 at 11:44
• If an animal grew by 1% per year, a gedanken microbe on its skin might claim that the expansion was accelerating! – John Duffield Apr 19 '18 at 14:20
• @JohnDuffield Gedanken microbe :D – pela Apr 19 '18 at 15:38
• Good to see this one is finally settled. – Wayfaring Stranger Apr 14 '19 at 14:31
What the Hubble constant really depends on is how old the universe was at the time, but if you have a dynamical model of the universe, you can map that into z and come up with a function H(z). So in that sense, the answer is "yes," but be careful-- we also think of z as a measure of how far away the objects are, and H does not depend on location, it depends on age. What's more, the z we get from a given measurement reflects all the expansion, so all the H's, since that light was emitted, not just the value of that H at that z. It would be a bit like if you were using your height to talk about your age, and you look at a picture of yourself at 4 feet tall, and say you were growing two inches a year when you were that tall. That would be like the H that applies in that picture, but the ratio of the height you are now to the height in that picture depends on more than just the rate you were growing in that picture.
(Also-- what do you mean the value of the Hubble constant is 0.7 now? That sounds like the fraction of total energy that is "dark energy," so is more about the rate of acceleration of the expansion rather than the rate of expansion itself. If you are asking about that, then this number has been rising with age of the universe, so can be mapped into a function of z if we are using z to talk about the age of the universe at the time.)
• Great analogy! +1 – pela Apr 19 '18 at 11:45
• I guess by "0.7" he means the reduced Hubble constant, i.e. $H_0$/100 km/s/Mpc. – pela Apr 19 '18 at 11:45
• Ah yes, you're probably right. – Ken G Apr 19 '18 at 15:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8599364161491394, "perplexity": 393.819103382434}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703521987.71/warc/CC-MAIN-20210120182259-20210120212259-00292.warc.gz"} |
https://wiki.math.uwaterloo.ca/statwiki/index.php?title=stat946w18/IMPROVING_GANS_USING_OPTIMAL_TRANSPORT&diff=prev&oldid=33677 | # Difference between revisions of "stat946w18/IMPROVING GANS USING OPTIMAL TRANSPORT"
## Introduction
Generative Adversarial Networks (GANs) are powerful generative models. A GAN model consists of a generator and a discriminator or critic. The generator is a neural network which is trained to generate data having a distribution matched with the distribution of the real data. The critic is also a neural network, which is trained to separate the generated data from the real data. A loss function that measures the distribution distance between the generated data and the real one is important to train the generator.
Optimal transport theory evaluates the distribution distance based on a metric, which provides another method for generator training. The main advantage of optimal transport theory over the distance measurement used in GANs is its closed-form solution, which allows a tractable training process. However, if the distance is estimated from mini-batches, the resulting gradients are biased, which can make the statistical estimation inconsistent.
This paper presents a GAN variant named OT-GAN, which incorporates a discriminative metric called 'Mini-batch Energy Distance' into its critic in order to overcome the issue of biased gradients.
## GANs and Optimal Transport
The Wasserstein distance (earth mover's distance) between the data distribution $p$ and the generator distribution $g$ is defined as $D_W(p,g)=\inf_{\gamma\in\prod(p,g)} \mathbb{E}_{(x,y)\sim\gamma}\left[\|x-y\|\right]$, where $\prod (p,g)$ is the set of all joint distributions $\gamma (x,y)$ with marginals $p(x)$, $g(y)$. Considering that solving this optimal transport problem exactly is usually not possible, the proposed Wasserstein GAN (W-GAN) provides an estimated solution by switching the problem into its dual formulation over a set of Lipschitz functions. A neural network can then be used to obtain an estimation. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.87989342212677, "perplexity": 570.2879854889378}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487648373.45/warc/CC-MAIN-20210619142022-20210619172022-00143.warc.gz"}
http://psychology.wikia.com/wiki/Evidence | # Evidence
Evidence in its broadest sense, refers to anything that is used to determine or demonstrate the truth of an assertion. Philosophically, evidence can include propositions which are presumed to be true used in support of other propositions that are presumed to be falsifiable. The term has specialized meanings when used with respect to specific fields, such as policy, scientific research, criminal investigations, and legal discourse.
The most immediate form of evidence available to an individual is the observations of that person's own senses. For example an observer wishing for evidence that the sky is blue need only look at the sky. However this same example illustrates some of the difficulties of evidence as well: someone who was blue-yellow color blind, but did not know it, would have a very different perception of what color the sky was than someone who was not. Even simple sensory perceptions (qualia) ultimately are subjective; guaranteeing that the same information can be considered somehow true in an objective sense is the main challenge of establishing standards of evidence.
## Evidence in science
Main article: Scientific evidence
In scientific research evidence is accumulated through observations of phenomena that occur in the natural world, or which are created as experiments in a laboratory. Scientific evidence usually goes towards supporting or rejecting a hypothesis. When evidence is contradictory to theoretical expectations, the evidence and the ways of making it are often closely scrutinized (see experimenter's regress). The rules for evidence used by science are collected systematically in an attempt to avoid the bias inherent in anecdotal evidence.
## Evidence in criminal investigation
In criminal investigation, rather than attempting to prove an abstract or hypothetical point, the evidence gatherers are attempting to determine who is responsible for a criminal act. The focus of criminal evidence is to connect physical evidence and reports of witnesses to a specific person.
## Evidence in law
Main article: Evidence (law)
Legal evidence differs from the above in the tight rules governing the presentation of facts that tend to prove or disprove the point at issue. In law, certain policies require that evidence that tends to prove or disprove an assertion or fact must nevertheless be excluded from consideration based either on indicia relating to reliability, or on broader social concerns. Testimony (which tells) and exhibits (which show) are the two main categories of evidence presented at a trial or hearing.
https://en.academic.ru/dic.nsf/enwiki/9289 | # Integral domain
In abstract algebra, an integral domain is a commutative ring that has no zero divisors,[1] and which is not the trivial ring {0}. It is usually assumed that commutative rings and integral domains have a multiplicative identity even though this is not always included in the definition of a ring. Integral domains are generalizations of the integers and provide a natural setting for studying divisibility. An integral domain is a commutative domain with identity.[2]
The above is how "integral domain" is almost universally defined, but there is some variation. In particular, noncommutative integral domains are sometimes admitted.[3] However, we follow the much more usual convention of reserving the term integral domain for the commutative case and use domain for the noncommutative case; this implies that, curiously, the adjective "integral" means "commutative" in this context. Some sources, notably Lang, use the term entire ring for integral domain.[4]
Some specific kinds of integral domains are given with the following chain of class inclusions:
Commutative rings ⊃ integral domains ⊃ integrally closed domains ⊃ unique factorization domains ⊃ principal ideal domains ⊃ Euclidean domains ⊃ fields
The absence of zero divisors means that in an integral domain the cancellation property holds for multiplication by any nonzero element a: an equality ab = ac implies b = c (indeed, ab = ac gives a(b − c) = 0, which forces b − c = 0 when a ≠ 0).
## Definitions
There are a number of equivalent definitions of integral domain:
• An integral domain is a commutative ring with identity in which for any two elements a and b, the equality ab = 0 implies either a = 0 or b = 0.
• An integral domain is a commutative ring with identity in which the zero ideal {0} is a prime ideal.
• An integral domain is a commutative ring with identity that is a subring of a field.
• An integral domain is a commutative ring with identity in which for every non-zero element r, the function that maps every element x of the ring to the product xr is injective. Elements that have this property are called regular, so it is equivalent to require that every non-zero element of the ring be regular.
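As a small illustration of the zero-divisor condition in the definitions above, one can test the finite rings Z/nZ directly (a throwaway sketch, not part of the article):

```python
def is_domain_mod(n):
    """Return True when Z/nZ has no zero divisors, i.e. is an integral domain (n > 1)."""
    return all((a * b) % n != 0 for a in range(1, n) for b in range(1, n))

# Z/nZ is an integral domain exactly when n is prime.
print([n for n in range(2, 20) if is_domain_mod(n)])   # [2, 3, 5, 7, 11, 13, 17, 19]
```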
## Examples
• The prototypical example is the ring Z of all integers.
• Every field is an integral domain. Conversely, every Artinian integral domain is a field. In particular, all finite integral domains are finite fields (more generally, by Wedderburn's little theorem, finite domains are finite fields). The ring of integers Z provides an example of a non-Artinian infinite integral domain that is not a field, possessing infinite descending sequences of ideals such as:
$\mathbf{Z}\;\supset\;2\mathbf{Z}\;\supset\;\cdots\;\supset\;2^n\mathbf{Z}\;\supset\;2^{n+1}\mathbf{Z}\;\supset\;\cdots$
• Rings of polynomials are integral domains if the coefficients come from an integral domain. For instance, the ring Z[X] of all polynomials in one variable with integer coefficients is an integral domain; so is the ring R[X,Y] of all polynomials in two variables with real coefficients.
• For each integer n > 1, the set of all real numbers of the form $a + b\sqrt{n}$ with a and b integers is a subring of R and hence an integral domain.
• For each integer n > 0 the set of all complex numbers of the form $a + bi\sqrt{n}$ with a and b integers is a subring of C and hence an integral domain. In the case n = 1 this integral domain is called the Gaussian integers.
• If U is a connected open subset of the complex number plane C, then the ring H(U) consisting of all holomorphic functions f : U → C is an integral domain. The same is true for rings of analytic functions on connected open subsets of analytic manifolds.
• If R is a commutative ring and P is an ideal in R, then the factor ring R/P is an integral domain if and only if P is a prime ideal. Also, R is an integral domain if and only if the ideal (0) is a prime ideal.
• A regular local ring is an integral domain. In fact, a regular local ring is a UFD.[5][6]
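As a small illustration of the quotient-ring criterion above (standard examples rather than ones taken from this article's original list): in $\mathbf{Z}[X]/(X^2-1)$ the classes of $X-1$ and $X+1$ are nonzero but multiply to zero, so $(X^2-1)$ is not a prime ideal and the quotient is not an integral domain; by contrast, $\mathbf{Z}[X]/(X) \cong \mathbf{Z}$ is an integral domain, reflecting the fact that $(X)$ is a prime ideal.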
The following rings are not integral domains.

• The quotient ring Z/mZ when m is a composite number: for instance, in Z/6Z one has 2 · 3 = 0, so 2 and 3 are zero divisors.
• A product of two nonzero commutative rings, such as Z × Z, in which (1, 0) · (0, 1) = (0, 0).
• Any commutative ring containing a nonzero nilpotent element, such as $\mathbf{Z}[X]/(X^2)$, where the class of X is nonzero but its square is zero.
## Divisibility, prime and irreducible elements
If a and b are elements of the integral domain R, we say that a divides b or a is a divisor of b or b is a multiple of a if and only if there exists an element x in R such that ax = b.
The elements which divide 1 are called the units of R; these are precisely the invertible elements in R. Units divide all other elements.
If a divides b and b divides a, then we say a and b are associated elements or associates.
If q is a non-unit, we say that q is an irreducible element if q cannot be written as a product of two non-units.
If p is a non-zero non-unit, we say that p is a prime element if, whenever p divides a product ab, then p divides a or p divides b. Equivalently, an element is prime if and only if the ideal generated by it is a nonzero prime ideal. Every prime element is irreducible. Conversely, in a GCD domain (e.g., a unique factorization domain), an irreducible element is a prime element.
The notion of prime element generalizes the ordinary definition of prime number in the ring Z, except that it allows for negative prime elements. While every prime is irreducible, the converse is not in general true. For example, in the quadratic integer ring $\mathbb{Z}\left[\sqrt{-5}\right]$ the number 3 is irreducible, but it is not a prime, because 9, the norm of 3, can be factored in two ways in the ring, namely $\left(2 + \sqrt{-5}\right)\left(2 - \sqrt{-5}\right)$ and $3\times3$. Thus $3\mid\left(2 + \sqrt{-5}\right)\left(2 - \sqrt{-5}\right)$, but 3 divides neither $\left(2 + \sqrt{-5}\right)$ nor $\left(2 - \sqrt{-5}\right)$. The numbers 3 and $2 \pm \sqrt{-5}$ are irreducible, since any proper factor would be an element $\pi = a + b\sqrt{-5}$ of norm 3, and $a^2 + 5b^2 = 3$ has no integer solution.
While unique factorization of elements does not hold in the above example, unique factorization of ideals does. The principal ideals $\left(\left(2 + \sqrt{-5}\right)\right)$ and $\left(\left(2 - \sqrt{-5}\right)\right)$ are generated by irreducible elements that are not prime, and the ideal (3) is the product of the two prime ideals $pp^\prime = \left(3, 1 + 2\sqrt{-5}\right)\left(3, 1 - 2\sqrt{-5}\right)$, each of which has a norm of 3.
## Properties
• Let R be an integral domain. Then there is an integral domain S such that R ⊆ S and S has an element which is transcendental over R.
• The cancellation property holds in integral domains. That is, let a, b, and c belong to an integral domain. If a ≠ 0 and ab = ac then b = c. Another way to state this is that the function x ↦ ax is injective for any non-zero a in the domain.
• An integral domain is equal to the intersection of its localizations at maximal ideals.
## Field of fractions
If R is a given integral domain, the smallest field containing R as a subring is uniquely determined up to isomorphism and is called the field of fractions or quotient field of R. It can be thought of as consisting of all fractions a/b with a and b in R and b ≠ 0, modulo an appropriate equivalence relation. The field of fractions of the integers is the field of rational numbers. The field of fractions of a field is isomorphic to the field itself.
## Algebraic geometry
In algebraic geometry, integral domains correspond to irreducible varieties. They have a unique generic point, given by the zero ideal. Integral domains are also characterized by the condition that they are reduced and irreducible. The former condition ensures that the nilradical of the ring is zero, so that the intersection of all the ring's minimal primes is zero. The latter condition is that the ring have only one minimal prime. It follows that the unique minimal prime ideal of a reduced and irreducible ring is the zero ideal, hence such rings are integral domains. The converse is clear: No integral domain can have nilpotent elements, and the zero ideal is the unique minimal prime ideal.
## Characteristic and homomorphisms
The characteristic of every integral domain is either zero or a prime number.
If R is an integral domain with prime characteristic p, then $f(x) = x^p$ defines an injective ring homomorphism f : R → R, the Frobenius endomorphism.
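The homomorphism property rests on the "freshman's dream" identity in characteristic p: since p divides the binomial coefficients $\binom{p}{k}$ for $0 < k < p$,

$f(x+y) = (x+y)^p = \sum_{k=0}^{p} \binom{p}{k} x^k y^{p-k} = x^p + y^p = f(x) + f(y),$

while $f(xy) = (xy)^p = x^p y^p = f(x)f(y)$; injectivity follows because $x^p = 0$ forces $x = 0$ in a domain.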
## Notes
1. ^ Dummit and Foote, p. 229
2. ^ Rowen (1994), p. 99 at Google Books.
3. ^ J.C. McConnel and J.C. Robson "Noncommutative Noetherian Rings" (Graduate studies in Mathematics Vol. 30, AMS)
4. ^ Pages 91–92 of Lang, Serge (1993), Algebra (Third ed.), Reading, Mass.: Addison-Wesley Pub. Co., ISBN 978-0-201-55540-0
5. ^ Maurice Auslander; D.A. Buchsbaum (1959). "Unique factorization in regular local rings". Proc. Natl. Acad. Sci. USA 45 (5): 733–734. doi:10.1073/pnas.45.5.733. PMC 222624. PMID 16590434.
6. ^ Masayoshi Nagata (1958). "A general theory of algebraic geometry over Dedekind domains. II". Amer. J. Math. (The Johns Hopkins University Press) 80 (2): 382–420. doi:10.2307/2372791. JSTOR 2372791.
## References
• Iain T. Adamson (1972). Elementary rings and modules. University Mathematical Texts. Oliver and Boyd. ISBN 0-05-002192-3.
• Bourbaki, Nicolas (1988). Algebra. Berlin, New York: Springer-Verlag. ISBN 978-3-540-19373-9.
• Mac Lane, Saunders; Birkhoff, Garrett (1967). Algebra. New York: The Macmillan Co.. ISBN 1568810687. MR0214415.
• Dummit, David S.; Foote, Richard M. (1999). Abstract algebra (2nd ed.). New York: John Wiley & Sons. ISBN 978-0-471-36857-1.
• Hungerford, Thomas W. (1974). Algebra. New York: Holt, Rinehart and Winston, Inc.. ISBN 0030305586.
• Lang, Serge (2002). Algebra. Graduate Texts in Mathematics. 211. Berlin, New York: Springer-Verlag. ISBN 978-0-387-95385-4. MR1878556.
• David Sharpe (1987). Rings and factorization. Cambridge University Press. ISBN 0-521-33718-6.
• Louis Halle Rowen (1994). Algebra: groups, rings, and fields. A K Peters. ISBN 1568810288.
• Charles Lanski (2005). Concepts in abstract algebra. AMS Bookstore. ISBN 053442323X.
• César Polcino Milies; Sudarshan K. Sehgal (2002). An introduction to group rings. Springer. ISBN 1402002386.
Wikimedia Foundation. 2010.
• Integral transform — In mathematics, an integral transform is any transform T of the following form:: (Tf)(u) = int {t 1}^{t 2} K(t, u), f(t), dt.The input of this transform is a function f , and the output is another function Tf . An integral transform is a… … Wikipedia | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 13, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9789302349090576, "perplexity": 449.8389255832639}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986697760.44/warc/CC-MAIN-20191019191828-20191019215328-00536.warc.gz"} |
https://papers.nips.cc/paper/2012/hash/a9be4c2a4041cadbf9d61ae16dd1389e-Abstract.html | #### Authors
Pedro Ortega, Jordi Grau-moya, Tim Genewein, David Balduzzi, Daniel Braun
#### Abstract
We propose a novel Bayesian approach to solve stochastic optimization problems that involve finding extrema of noisy, nonlinear functions. Previous work has focused on representing possible functions explicitly, which leads to a two-step procedure of first, doing inference over the function space and second, finding the extrema of these functions. Here we skip the representation step and directly model the distribution over extrema. To this end, we devise a non-parametric conjugate prior where the natural parameter corresponds to a given kernel function and the sufficient statistic is composed of the observed function values. The resulting posterior distribution directly captures the uncertainty over the maximum of the unknown function. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8631795644760132, "perplexity": 1343.5841377683764}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943704.21/warc/CC-MAIN-20230321162614-20230321192614-00335.warc.gz"} |
http://mathhelpforum.com/calculus/166586-tricky-integral.html | 1. Tricky Integral
So I have $\displaystyle \int{\frac{\sqrt{x}}{x+1}\,dx}$
My initial attempts at u substitution failed because I couldn't see any clear derivatives of terms in the numerator or denominator.
I looked at Wolfram Alpha and it says to use $u = \sqrt{x}$ and $du=\frac{1}{2\sqrt{x}}$
But then, I'm not really sure where to go from there. It says to go right to
$\displaystyle 2\int{\frac{u^2}{u^2+1}\,du}$
but I can't figure out how they got that from the u and du.
2. Originally Posted by Chaobunny
So I have $\displaystyle \int{\frac{\sqrt{x}}{x+1}\,dx}$
My initial attempts at u substitution failed because I couldn't see any clear derivatives of terms in the numerator or denominator.
I looked at Wolfram Alpha and it says to use $u = \sqrt{x}$ and $du=\frac{1}{2\sqrt{x}}$
But then, I'm not really sure where to go from there. It says to go right to
$\displaystyle 2\int{\frac{u^2}{u^2+1}\,du}$
but I can't figure out how they got that from the u and du.
one way (perhaps the easiest way):
$\displaystyle u = \sqrt x$
$\displaystyle \Rightarrow u^2 = x$
$\displaystyle \Rightarrow 2u ~du = dx$
$\displaystyle \int \frac {u}{u^2 + 1} \cdot 2 u~du$
$\displaystyle = 2 \int \frac {u^2}{u^2 + 1}~du$
otherwise, you'd do some voodoo like multiplying the original integral by $\frac {2 \sqrt x}{2 \sqrt x}$, and then make the switch from x to u
I assume you have no trouble taking it from there.
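And to take it the rest of the way (one possible finish):

$\displaystyle 2\int \frac{u^2}{u^2+1}\,du = 2\int \left(1 - \frac{1}{u^2+1}\right) du = 2u - 2\arctan u + C = 2\sqrt{x} - 2\arctan\sqrt{x} + C$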
3. Ah, thank you very much! I never could quite get those u substitutions with the algebraic manipulations.
4. Originally Posted by Chaobunny
So I have $\displaystyle \int{\frac{\sqrt{x}}{x+1}\,dx}$
My initial attempts at u substitution failed because I couldn't see any clear derivatives of terms in the numerator or denominator.
I looked at Wolfram Alpha and it says to use $u = \sqrt{x}$ and $du=\frac{1}{2\sqrt{x}}$
It ought to have given
$\displaystyle\ u=\sqrt{x}\Rightarrow\frac{du}{dx}=\frac{1}{2\sqrt {x}}\Rightarrow\ dx=2\sqrt{x}du=2udu$
Then
$\displaystyle\int{\frac{\sqrt{x}}{x+1}}dx=\int{\frac{u}{u^2+1}2u}du=2\int{\frac{u^2}{u^2+1}}du$
But then, I'm not really sure where to go from there. It says to go right to
$\displaystyle 2\int{\frac{u^2}{u^2+1}\,du}$
but I can't figure out how they got that from the u and du.
5. Oh, I forgot to add the 2 in the original post. But my main problem was with the manipulation of the du and dx, which was answered. So thank you both very much. =) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 20, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.956549346446991, "perplexity": 235.99139545375638}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660966.51/warc/CC-MAIN-20160924173740-00297-ip-10-143-35-109.ec2.internal.warc.gz"} |
https://www.mathimatikoi.org/forum/viewtopic.php?f=15&t=282&p=533 | It is currently Thu Jul 18, 2019 1:27 am
All times are UTC [ DST ]
Page 1 of 1 [ 3 posts ]
Print view Previous topic | Next topic
Author Message
Post subject: Function and partition — Posted: Fri Jan 01, 2016 3:28 pm
Let $\displaystyle{E}$ be a non-empty set and $\displaystyle{A\,,B\in\mathbb{P}(E)-\left\{\varnothing\right\}}$ .
We define $\displaystyle{f:\mathbb{P}(E)\longrightarrow \mathbb{P}(A)\times \mathbb{P}(B)}$ by
$\displaystyle{X\mapsto f(X)=\left(X\cap A,X\cap B\right)}$ .
Prove that $\displaystyle{\left\{A,B\right\}}$ is a partition of the set $\displaystyle{E}$ if, and only if, the function $\displaystyle{f}$
is one to one and onto $\displaystyle{\mathbb{P}(A)\times \mathbb{P}(B)}$ .
Post subject: Re: Function and partition — Posted: Fri Jan 01, 2016 3:30 pm
Good evening y'all.
$\left( \Rightarrow \right)$ We have that $B=A^c$. Suppose $f(X) = f(Y)$, then:
$$X\cap A= Y\cap A \;\;\; {\rm and} \;\;\; X\cap A^c = Y \cap A^c$$
Thus :
$$X=(X \cap A) \cup (X \cap A^c)=(Y \cap A) \cup (Y \cap A^c)=Y$$
so $f$ is $1-1$.
Also
$$X \subset A, Y \subset B \Rightarrow X=A \cap (X \cup Y), Y=B \cap (X \cup Y) \Rightarrow (X,Y)=f(X \cup Y)$$
meaning that is onto also proving the first part.
$\left ( \Leftarrow \right )$ We have that $f(A \cup B) =f(E)$ and since $f$ is $1-1$ we also have $A \cup B=E$. The function $f$ is also onto meaning that there exists $Z \subset E$ such that
$$f(Z)=(A, \varnothing) \Rightarrow A \cap Z=A \wedge B \cap Z=\varnothing \\ \Rightarrow A \subset Z \wedge Z \subset B^c \Rightarrow A \subset B^c \Rightarrow A \cap B=\varnothing$$
Post subject: Re: Function and partition. Posted: Fri Jan 01, 2016 3:34 pm
Here is another proof that $\displaystyle{f}$ is one to one, using the hypothesis
that $\displaystyle{\left\{A,B\right\}}$ is a partition of $\displaystyle{E}$, that is
$\displaystyle{E=A\cup B\,\,,A\cap B=\varnothing}$ . Let $\displaystyle{X\,,Y\in\mathbb{P}(E)}$ such
that $\displaystyle{f(X)=f(Y)}$. Then, $\displaystyle{X\cap A=Y\cap A\,\,\,,X\cap B=Y\cap B}$.
We'll prove that $\displaystyle{X=Y}$ . For this purpose, let $\displaystyle{x\in X}$. Then, $\displaystyle{x\in E}$
and $\displaystyle{x\in A}$ or $\displaystyle{x\in B}$ . If $\displaystyle{x\in A}$, then :
$\displaystyle{x\in X\cap A\implies x\in Y\cap A\implies x\in Y\,\land x\in A}$, so : $\displaystyle{x\in Y}$ . If $\displaystyle{x\in B}$
then $\displaystyle{x\in X\cap B\implies x\in Y\cap B\implies x\in Y}$. In any case, $\displaystyle{X\subseteq Y}$.
Similarly, $\displaystyle{Y\subseteq X}$ and finally, $\displaystyle{X=Y}$ .
Here is another proof that $\displaystyle{A\cup B=E}$, using the hypothesis that $\displaystyle{f}$ is
one to one and onto $\displaystyle{\mathbb{P}(A)\times \mathbb{P}(B)}$ .
Suppose that $\displaystyle{A\cup B\neq E}$. Then, there exists $\displaystyle{x\in E}$ such that
$\displaystyle{x\notin A\,\,,x\notin B}$. Therefore,
$\displaystyle{f(\left\{x\right\})=\left(\left\{x\right\}\cap A,\left\{x\right\}\cap B\right)=\left(\varnothing,\varnothing\right)=f\,(\varnothing)}$
and since the function $\displaystyle{f}$ is one to one, we get $\displaystyle{\left\{x\right\}=\varnothing}$, a contradiction,
so $\displaystyle{A\cup B=E}$ .
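As a concrete sanity check of the statement, here is a small Python sketch (the set $E$ and the candidate pairs $\left\{A,B\right\}$ below are chosen just for illustration) that enumerates $f(X)=(X\cap A, X\cap B)$ over $\mathbb{P}(E)$ and tests whether it is a bijection onto $\mathbb{P}(A)\times\mathbb{P}(B)$:
```python
from itertools import combinations

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def f_is_bijection(E, A, B):
    # f(X) = (X & A, X & B), viewed as a map P(E) -> P(A) x P(B)
    images = [(X & A, X & B) for X in powerset(E)]
    codomain = {(a, b) for a in powerset(A) for b in powerset(B)}
    injective = len(set(images)) == len(images)
    onto = set(images) == codomain
    return injective and onto

E = frozenset({1, 2, 3, 4})
print(f_is_bijection(E, frozenset({1, 2}), frozenset({3, 4})))     # partition of E -> True
print(f_is_bijection(E, frozenset({1, 2}), frozenset({2, 3, 4})))  # A and B overlap -> False
print(f_is_bijection(E, frozenset({1, 2}), frozenset({3})))        # A union B is not E -> False
```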
Jump to: Select a forum ------------------ Algebra Linear Algebra Algebraic Structures Homological Algebra Analysis Real Analysis Complex Analysis Calculus Multivariate Calculus Functional Analysis Measure and Integration Theory Geometry Euclidean Geometry Analytic Geometry Projective Geometry, Solid Geometry Differential Geometry Topology General Topology Algebraic Topology Category theory Algebraic Geometry Number theory Differential Equations ODE PDE Probability & Statistics Combinatorics General Mathematics Foundation Competitions Archives LaTeX LaTeX & Mathjax LaTeX code testings Meta | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8285321593284607, "perplexity": 4890.51975642022}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525483.62/warc/CC-MAIN-20190718001934-20190718023934-00106.warc.gz"} |
https://www.physicsforums.com/threads/why-is-anti-neutrino-called-so.423844/ | # Why is anti-neutrino called so?
1. Aug 23, 2010
### DrDu
From what I know of the history of the neutrino, it was first postulated by Pauli to explain energy and momentum conservation in beta decay. However, nowadays we call the particle emitted in that process an anti-neutrino and not a neutrino. What is the reasoning behind this change of naming?
2. Aug 23, 2010
### humanino
The reasoning is the conservation of leptonic charge.
Start with a free neutron: there is no lepton. It decays into a proton, plus an electron (which conserves electric charge; the electron is a lepton), plus an anti-neutrino (which carries negative leptonic number).
In the Feynman diagram
you can also see the lepton number "carried in" along the arrow by the anti-neutrino, and "carried away" by the electron. By the same token, the hadronic number is conserved along the d->u line. I hope the diagram is not confusing. The anti-neutrino is really outgoing with positive energy, it is represented as a neutrino in-going (backwards in time) with negative energy.
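Counting the lepton number L explicitly for the decay: n -> p + e- + anti-nu_e, so L: 0 = 0 + 1 + (-1), and the total lepton number is conserved.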
3. Aug 24, 2010
### DrDu
Thank you humanino!
I understand more or less the reasoning, although I cannot see (but imagine) the diagramm.
How is leptonic charge being defined? Is it a conserved charge of the weak interaction?
4. Aug 24, 2010
### humanino
The lepton number is an additive quantum number which is +1 for leptons (electron, muon, tau and their neutrinos) and -1 for antileptons. You may just count them in the initial and final state, and expect this number to be conserved, including in weak interactions.
First note however that this number is already known not to be conserved per family or flavor, as massive neutrinos can oscillate from one flavor to another.
Even worse, although the total number of leptons however has (AFAIK) never been observed to be violated, in principle it could be by very tiny effects called anomalies (the breaking of classical symmetries by quantum effects or loops), even within the standard model. Since it has never been observed, keep in mind that taking lepton number as conserved is an excellent working hypothesis. Effects in which it could be violated also occur beyond the standard model. For instance, there are searches for proton decay into neutral pion plus positron, which violates both baryon and lepton number. Note however that this reaction which has never been observed despite intense searches respects the difference B-L, which in fact is protected against anomalies within the standard model and also respected in many models beyond the standard one.
http://en.wikipedia.org/wiki/Lepton_number
5. Aug 25, 2010
### DrDu
Is there a (possibly only approximate) symmetry behind the conservation of leptonic and baryonic charge in the standard model?
6. Aug 25, 2010
### humanino
Yes, this is a global symmetry consisting in multiplying all leptons (or hadrons) by a pure phase, so those are two U(1). This is described for instance in
http://arxiv.org/abs/hep-ph/0410370v2 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9161655902862549, "perplexity": 1374.8758690043705}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267867644.88/warc/CC-MAIN-20180625092128-20180625112128-00136.warc.gz"} |
http://mathhelpforum.com/calculus/161223-rolle-s-theorem.html | 1. ## Rolle's Theorem
Could someone please walk me through how I use Rolle's Theorem to find c for:
f(x) = x^3 - x^2 - 2x + 7
[ 0 , 2 ]
Sorry about the formatting, I don't know how to write latex code.
2. Notice $f(0)=f(2)$ so Rolle's theorem gives a $c\in (0,2)$ with $f'(c)=0$. If you want the value of such a c, notice $f'(x)=3x^2-2x-2$ and solve this quadratic polynomial.
3. So basically f'(c) = 0, so I set f'(x) = 0 and solve, and the answer is c? So the answer is 1.2153 because -0.5486 is not in the interval [0,2]. Right?
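For reference, solving the quadratic explicitly with the quadratic formula: $3c^2-2c-2=0 \Rightarrow c=\frac{2\pm\sqrt{4+24}}{6}=\frac{1\pm\sqrt{7}}{3}$, so $c=\frac{1+\sqrt{7}}{3}\approx 1.2153$ is the root inside the interval, while $\frac{1-\sqrt{7}}{3}\approx -0.5486$ lies outside.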
4. If they are the solutions then the logic is fine. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9492379426956177, "perplexity": 348.4083059073352}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541995.74/warc/CC-MAIN-20161202170901-00082-ip-10-31-129-80.ec2.internal.warc.gz"} |
https://chemistry.stackexchange.com/questions/1273/calculating-entropy-why-consider-a-reversible-path/1275 | # Calculating entropy: why consider a reversible path?
I am reading up on entropy in a textbook and I got confused by this:
It says that to calculate the entropy change for an irreversible process using heat flow, one must imagine the reversible process in which the initial and final states are the same as for the irreversible process. Why is this so, and why do we need to think of it in the reversible way?
• Short answer: you don't. Just calculate the entropy of the initial and final states and take the difference. If there's a heat bath you have to include the change in entropy of the heat bath as well, but that's fine. I've never understood why some textbooks teach this weird and complicated reversible vs. irreversible path stuff, it's just unnecessary. – Nathaniel Oct 1 '12 at 21:03
A state function is one that is independent of path. That is, if one goes from state $A$ to state $B$, the change in internal energy is independent of how the change takes place. The same is true for the entropy.
So the entropy change in going from state $A$ to state $B$ is independent of how the change takes place. But we can calculate the change ONLY for reversible changes.
So we mentally find a reversible path from state $A$ to state $B$ and calculate the entropy change. Since entropy is a state function, that must also be the change for a non-reversible path.
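A standard worked example is the free (irreversible) expansion of an ideal gas from volume $V_1$ to $V_2$ at temperature $T$ (ideal-gas behaviour assumed): the actual process has $q=0$, but the entropy change is computed along a reversible isothermal path between the same end states,
$$\Delta S=\int\frac{dq_{\text{rev}}}{T}=\int_{V_1}^{V_2}\frac{nR}{V}\,dV=nR\ln\frac{V_2}{V_1}>0.$$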
For a closed system, if you calculate the integral of $dq/T_I$ over all possible process paths between two thermodynamic equilibrium states of a system (where $T_I$ is the temperature at the portion of the system interface (boundary) with its surroundings at which the heat transfer takes place), the integral will differ from path to path. The maximum value of the integral over all possible process paths will be found to occur when the path is reversible. Over any reversible path, the system temperature T will be essentially uniform throughout, and the temperature at the interface $T_I$ will be essentially equal to the system temperature T. All reversible paths will give the same maximum value for the integral. This maximum value is the change in entropy of the system between the two thermodynamic equilibrium states. It is a function of state because the maximum value of the integral is determined only by the initial and final equilibrium states. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9579452276229858, "perplexity": 117.98505574812333}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541301598.62/warc/CC-MAIN-20191215042926-20191215070926-00529.warc.gz"} |
http://www.physicspages.com/2011/07/23/the-energy-time-uncertainty-relation/ | # The energy-time uncertainty relation
Required math: calculus, complex numbers
Required physics: basics of quantum mechanics
Reference: Griffiths, David J. (2005), Introduction to Quantum Mechanics, 2nd Edition; Pearson Education – Sec 3.5.3.
The uncertainty principle gives a lower bound on the accuracy with which two observables can be measured. If the operators for two observables commute, then both quantities can be exactly determined. If, however, they don’t commute, the lower bound is expressed as an inequality involving the two standard deviations. For two observables ${\hat{A}}$ and ${\hat{B}}$, we get
$\displaystyle \sigma_{A}\sigma_{B}\ge\left|\frac{1}{2i}\left\langle [\hat{A},\hat{B}]\right\rangle \right| \ \ \ \ \ (1)$
where ${\left\langle [\hat{A},\hat{B}]\right\rangle }$ is the expectation value of the commutator of the two observables.
This means that if we do a large number of experiments, all starting in the same quantum state ${\Psi}$, then the product of the standard deviations of these two observables measured over all these experiments is given by this formula. It doesn’t say that if we try to measure both ${A}$ and ${B}$ at the same time we won’t get exact values for both. It does say that over a large number of experiments the statistics must satisfy the uncertainty principle.
So for example if we have an experiment in which we measure the position of a particle accurately, and also try to measure its momentum, then we will get precise values for both quantities in each run of the experiment, but if we repeat the experiment the measurements of momentum will vary widely if we constrain the position of the particle.
One uncertainty relation that is often quoted is the energy-time relation, which is often stated as
$\displaystyle \Delta E\Delta t\ge\frac{\hbar}{2} \ \ \ \ \ (2)$
This relation doesn’t follow from the general uncertainty relation since the time is not an operator in quantum mechanics; rather it is an independent variable on which everything else depends. We can measure the position, energy, momentum, angular momentum and so on, but it doesn’t make sense to measure the ‘time’ of a particle. Time (at least in non-relativistic theory) is a parameter that is independent of everything else.
In fact, this relation can be derived in a way that gives it a different meaning than the other uncertainty relations. Suppose we have an observable ${Q}$ that depends explicitly on ${x}$, ${p}$ and possibly ${t}$.
First we should note the difference between an observable operator depending explicitly on time and the expectation value of the operator depending on time. An operator with no explicit time dependence (such as the Hamiltonian, which is the sum of a kinetic and potential energy, where the potential ${V}$ has no explicit time dependence) can have a mean value that still depends on time. This is because when the Schrödinger equation is solved for a given Hamiltonian, the wave function ${\Psi(x,t)}$ is in general a function of time, even if the Hamiltonian is not. As we saw when we solved the equation for a time independent potential, we can use the separation of variables technique to peel off the time dependence, which turns up in the general solution as a complex exponential of the time.
To return to our observable ${Q}$, let’s find the total time derivative (rate of change) of the expectation value of this observable.
$\displaystyle \frac{d}{dt}\left\langle Q\right\rangle$ $\displaystyle =$ $\displaystyle \frac{d}{dt}\left\langle \Psi|Q\Psi\right\rangle \ \ \ \ \ (3)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \left\langle \frac{\partial\Psi}{\partial t}|Q\Psi\right\rangle +\left\langle \Psi|\frac{\partial Q}{\partial t}\Psi\right\rangle +\left\langle \Psi|Q\frac{\partial\Psi}{\partial t}\right\rangle \ \ \ \ \ (4)$
We can now use the Schrödinger equation to replace the time derivatives of the wave function. The Schrödinger equation states that
$\displaystyle \frac{\partial\Psi}{\partial t}=\frac{1}{i\hbar}H\Psi \ \ \ \ \ (5)$
Using this we get
$\displaystyle \frac{d}{dt}\left\langle Q\right\rangle =-\frac{1}{i\hbar}\left\langle H\Psi|Q\Psi\right\rangle +\left\langle \Psi|\frac{\partial Q}{\partial t}\Psi\right\rangle +\frac{1}{i\hbar}\left\langle \Psi|QH\Psi\right\rangle \ \ \ \ \ (6)$
The middle term is the expectation value of ${\frac{\partial Q}{\partial t}}$ so overall we get
$\displaystyle \frac{d}{dt}\left\langle Q\right\rangle$ $\displaystyle =$ $\displaystyle -\frac{1}{i\hbar}\left\langle H\Psi|Q\Psi\right\rangle +\left\langle \frac{\partial Q}{\partial t}\right\rangle +\frac{1}{i\hbar}\left\langle \Psi|QH\Psi\right\rangle \ \ \ \ \ (7)$
Finally, since the Hamiltonian ${H}$ is hermitian, we can rewrite the first term as ${\left\langle H\Psi|Q\Psi\right\rangle =\left\langle \Psi|HQ\Psi\right\rangle }$ so we get
$\displaystyle \frac{d}{dt}\left\langle Q\right\rangle$ $\displaystyle =$ $\displaystyle -\frac{1}{i\hbar}\left\langle \Psi|(HQ-QH)\Psi\right\rangle +\left\langle \frac{\partial Q}{\partial t}\right\rangle \ \ \ \ \ (8)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \frac{i}{\hbar}\left\langle \Psi|[H,Q]\Psi\right\rangle +\left\langle \frac{\partial Q}{\partial t}\right\rangle \ \ \ \ \ (9)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \frac{i}{\hbar}\left\langle \left[H,Q\right]\right\rangle +\left\langle \frac{\partial Q}{\partial t}\right\rangle \ \ \ \ \ (10)$
Although we haven’t arrived at the energy-time relation yet, this result has a fundamental significance as it stands. If ${Q}$ doesn’t depend explicitly on time, the second term on the right is zero, and we get
$\displaystyle \frac{d}{dt}\left\langle Q\right\rangle =\frac{i}{\hbar}\left\langle [H,Q]\right\rangle \ \ \ \ \ (11)$
What does this tell us? First, since both ${H}$ and ${Q}$, being observables, are hermitian, their commutator is purely imaginary so the term on the right is always real, which is a relief since the rate of change of an observable could hardly be complex.
Second, and more fundamental, is that any observable that doesn’t depend explicitly on time, and that commutes with the Hamiltonian has an expectation value that doesn’t change; that is, it is a conserved quantity. Since ${H}$ obviously commutes with itself, its rate of change is zero, so energy is conserved. We’ll explore a couple of other examples in another post.
For now, though, we need to return to the energy-time relation. Using 1 with ${A=H}$ and ${B=Q}$, we get
$\displaystyle \sigma_{H}\sigma_{Q}$ $\displaystyle \ge$ $\displaystyle \left|\frac{1}{2i}\left\langle [H,Q]\right\rangle \right|\ \ \ \ \ (12)$ $\displaystyle$ $\displaystyle \ge$ $\displaystyle \frac{\hbar}{2}\left|\frac{d}{dt}\left\langle Q\right\rangle \right| \ \ \ \ \ (13)$
Since ${\sigma_{H}}$ is the standard deviation of the hamiltonian, it is reasonable to interpret it as the uncertainty in the energy ${E}$. If we consider the quantity
$\displaystyle \Delta t\equiv\frac{\sigma_{Q}}{\left|d\left\langle Q\right\rangle /dt\right|} \ \ \ \ \ (14)$
we see that it has the units of time (since ${\sigma_{Q}}$ has the same units as ${Q}$). In this case we get
$\displaystyle \Delta E\Delta t\ge\frac{\hbar}{2} \ \ \ \ \ (15)$
which is the energy-time uncertainty relation.
So what exactly does ${\Delta t}$ mean in this context? From its definition, we have
$\displaystyle \sigma_{Q}=\left|\frac{d}{dt}\left\langle Q\right\rangle \right|\Delta t \ \ \ \ \ (16)$
Since ${\sigma_{Q}}$ is the standard deviation of the observable ${Q}$, this expression gives an approximate (in the Taylor series sense) value for the length of time (${\Delta t}$) taken for the observable to change by one standard deviation. This would be exact if ${Q}$‘s rate of change were constant.
We’ve derived this relation by considering some arbitrary observable ${Q}$, so that the time interval ${\Delta t}$ depends on the particular observable ${Q}$ we’re considering. However, the uncertainty relation involves this time interval and the uncertainty in the energy ${\Delta E}$. This seems a bit odd, since it seems like we can get a more accurate measurement of the energy just by choosing another observable which changes very slowly (thus making ${\Delta t}$ very large). However, that’s not quite what the relationship is saying. Rather, in order to get an accurate energy measurement, all other observables have to be changing slowly. Equation 14 puts an upper limit on ${\Delta t}$ for observable ${Q}$; if we look at all observables (all operators ${Q}$) then absolute upper limit for ${\Delta t}$ is the largest value of ${\sigma_{Q}/\left|d\left\langle Q\right\rangle /dt\right|}$, that is, for the smallest rate of change ${\left|d\left\langle Q\right\rangle /dt\right|}$ of any observable.
It might look as though there is something wrong here. After all, if we are in a state where the energy is exact, then all other observables would have to be exact as well. How can this be when there are observables like position and momentum that don’t commute, and thus cannot be determined precisely at the same time?
The key to resolving this apparent paradox is to note that it’s not the precise values of each observable at a particular point in time that we are concerned with. The expression for ${\Delta t}$ involves the rate of change of an expectation value, not a precise measurement. It is certainly possible for the expectation values of position and momentum to have precise, constant-in-time values without violating the uncertainty principle, and that is what is implied here.
In fact, any system in a stationary state where the energy is precisely known does satisfy the condition ${d\left\langle Q\right\rangle /dt=0}$ for all observables. In order to get a case where the energy is uncertain, we need a linear combination of two or more stationary states, with each state corresponding to a different energy. Then a measurement on the system will give one of the energies in the mix, and we can’t say a priori which energy will result. The expectation values of observables will also be time-dependent in general in such a case.
Another way of looking at it is this. For the time-independent Schrödinger equation, the general solution is
$\displaystyle \Psi\left(x,t\right)=\sum_{k}c_{k}\psi_{k}e^{-iE_{k}t/\hbar} \ \ \ \ \ (17)$
where ${\psi_{k}}$ is an eigenstate of the hamiltonian with eigenvalue (energy) ${E_{k}}$. The probability of finding the system in state ${\psi_{k}}$ (with energy ${E_{k}}$) is ${\left|c_{k}\right|^{2}}$. Note that this does not depend on time, so ${\Delta E\equiv\sigma_{H}}$ is actually independent of time. What the energy-time uncertainty relation tells us, then, is that ${\Delta E}$ puts a constraint on the time scales over which other observables ${Q}$ can vary. A system composed of many different eigenstates has a larger ${\Delta E}$ and allows changes on a shorter time scale than a system with only a few eigenstates. In the extreme case of a system consisting of a single eigenstate, ${\Delta E=0}$ and other observables never change.
It’s also worth pointing out that the energy-time relation does not allow violation of conservation of energy. Statements such as “you can violate conservation of energy provided you do so in a short enough time so that 15 is not violated” are just plain false.
## 16 thoughts on “The energy-time uncertainty relation”
1. Pingback: Virial theorem « Physics tutorials
2. alex
You state that (14) puts an upper limit on the time, and then you say the upper limit of time is when the right side of the same equation (14) is minimum. Typo?
1. gwrowe Post author
I think what I meant to say is the upper limit corresponds to the smallest rate of change of an observable. Fixed now, anyway. Thanks. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 87, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9841530323028564, "perplexity": 127.46994497559402}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608936.94/warc/CC-MAIN-20170527094439-20170527114439-00317.warc.gz"} |
https://simple.wikipedia.org/wiki/Faraday%27s_laws_of_electrolysis | Faraday's laws of electrolysis are a set of scientific laws used in chemistry. They are used to express magnitudes of electrolytic effects. They were first described by Michael Faraday in 1834.
The first law states that the mass of a substance produced by electrolysis is directly proportional to the quantity of the electricity that passes through the cell.
The second law states that, for a given quantity of electricity, the mass liberated is proportional to the equivalent weight M/z: since Q and F are fixed, the larger the value of M/z, the larger m will be.
## Mathematical form
Faraday's laws can be summarized by
${\displaystyle m\ =\ \left({Q \over F}\right)\left({M \over z}\right)}$
where:
• m is the mass of the substance liberated at an electrode in grams
• Q is the total electric charge passed through the substance in coulombs
• F = 96485.33289(59) C mol−1 is the Faraday constant
• M is the molar mass of the substance in grams per mol
• z is the valency number of ions of the substance (electrons transferred per ion).
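As a quick numerical illustration of the formula (the copper-electrolysis values below are chosen just for the example, assuming the reduction Cu2+ + 2e- -> Cu so that z = 2):
```python
# Mass of copper deposited by a 1 A current flowing for 1 hour
F = 96485.33289        # Faraday constant, C/mol
Q = 1.0 * 3600         # total charge in coulombs (current in amperes times time in seconds)
M = 63.546             # molar mass of copper, g/mol
z = 2                  # electrons transferred per Cu2+ ion
m = (Q / F) * (M / z)  # Faraday's law
print(round(m, 3), "g")  # about 1.185 g
```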
M/z is the same as the equivalent weight of the substance altered. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9727327227592468, "perplexity": 883.5283604556406}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305423.58/warc/CC-MAIN-20220128074016-20220128104016-00579.warc.gz"} |
http://openstudy.com/updates/4f1fb9ace4b076dbc3486a7e | • anonymous
can someone help me with this. Let f(x) = x^2 - 8x + 5. Find f(-1)?
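For reference, substituting x = -1 directly gives f(-1) = (-1)^2 - 8(-1) + 5 = 1 + 8 + 5 = 14.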
Not the answer you are looking for? Search for more explanations. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8321874141693115, "perplexity": 1258.8760442358714}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189771.94/warc/CC-MAIN-20170322212949-00576-ip-10-233-31-227.ec2.internal.warc.gz"} |
http://math.stackexchange.com/questions/168324/solving-for-unknown-inside-square-root | # Solving for unknown inside square root
Sorry if this is a very primitive question, but I'm really not sure if I am right about this kind of situation. Imagine the following equation, where $a$, $b$ and $c$ are known numbers and $x$ is the unknown variable:
$$a\sqrt{bx}=c$$
Is it ok in this case to do it like $$a^2bx=c^2$$
If not, how to solve such equation?
Yes, this is fine, provided that $a$ and $c$ have the same algebraic sign. When you solve the second equation, you get $$x=\frac{c^2}{a^2b}\;.$$ Now try substituting that into the original equation:
$$a\sqrt{\frac{bc^2}{a^2b}}=a\sqrt{\frac{c^2}{a^2}}=a\left|\frac{c}a\right|\;.\tag{1}$$
If $a$ and $c$ have the same algebraic sign, $\left|\dfrac{c}a\right|=\dfrac{c}a$, and $(1)$ can be simplified to $a\left(\dfrac{c}a\right)=c$, as desired.
If one of $a$ and $c$ is positive and the other negative, the original equation has no solution, since by convention $\sqrt{bx}$ denotes the non-negative square root of $bx$.
$a\sqrt{bx} = c$
$\sqrt{bx} = \frac{c}{a}$
$bx = \frac{c^2}{a^2}$
$x = \frac{c^2}{ba^2}$
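A quick numerical check with illustrative values $a=2$, $b=3$, $c=6$: then $x=\frac{c^2}{a^2b}=\frac{36}{12}=3$, and indeed $a\sqrt{bx}=2\sqrt{9}=6=c$.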
In general, you have to be careful to check each "solution" by plugging it in to the original equation: this sort of argument often introduces extraneous roots, because squaring is not a one-to-one function. For example, try $$\sqrt{x} - 1/\sqrt{x} = 2/\sqrt{3}$$ Squaring both sides and expanding gives you $$x - 2 + 1/x = 4/3$$ which has solutions $x=3$ and $x=1/3$. But only $x=3$ is a solution of the original equation: $x=1/3$ is instead a solution of $\sqrt{x} - 1/\sqrt{x} = -2/\sqrt{3}$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9540769457817078, "perplexity": 61.206128714939226}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860123845.65/warc/CC-MAIN-20160428161523-00028-ip-10-239-7-51.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/quick-question.105223/#post-869902 | # Quick question
• #1
I got this question from a friend and it's bugging me because I cannot understand it. I just cannot understand what it means... here it is, word for word, what I have on the assigned paper:
"if f(x) = x^n , "n" is a positive interger, the first derivative of f(x) which is identically zero is "
A) the nth
B) the (n-1)st
C) the (n+2)nd
D) the first
E) the (n+1)st
those are the options... am I to assume that it's so easy that it's B? I am hesitant to pick B though, because this teacher is known for his tricks and it seemed a little too easy... a little help would be great for my friend and myself
• #2
Tide
I suggest trying a few simple examples like $x^2$ and so forth - you should see a pattern emerge! :)
• #3
Basically, it takes n iterations to get to a constant, and then one more.
• #4
Not sure if you got this already, but the question is asking "How many times do you need to differentiate this thing to get 0?"
It's worded in a tricky manner, but essentially, what do you know about differentiating a constant? How many times will you need to differentiate to get a constant? Then, how many times will you have to differentiate that to get 0?
PS - please use more descriptive thread titles. I've noticed a few threads by you with no indication as to what lies within. It makes it very difficult to get help when you need it if people skip over it!
• #5
The hint lies in the fact that
the nth derivative of f(x) = x^n gives a constant,
and the (n+1)th derivative, i.e. that of a constant, gives us zero.
• #6
Remember that each time you take the derivative, the exponent reduces by one.
And the derivative of a constant is zero.
Ciao
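A quick sanity check with SymPy (assuming it is available), using n = 3 as an example:
```python
import sympy as sp

x = sp.symbols('x')
n = 3
f = x**n
for k in range(1, n + 3):
    print(k, sp.diff(f, x, k))
# output: 3*x**2, 6*x, 6, 0, 0 -> the (n+1)st derivative is the first one that is identically zero
```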
1K | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8113408088684082, "perplexity": 1265.0620170017082}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323586043.75/warc/CC-MAIN-20211024142824-20211024172824-00580.warc.gz"} |
http://math.stackexchange.com/questions/218384/can-someone-show-me-how-to-solve-functions-that-include-a-set-of-ordered-pairs?answertab=active | # Can someone show me how to solve functions that include a set of ordered pairs? Im really stuck on this.
I'm unsure of how to go about solving these when given a set of ordered pairs.
Given
$f= \{(-2,4),(0,-2),(1,2),(2,3)\}$ and $g(x)= 3x-5$
Find
A) $f(-2)$
B) $(f+g)(2)$
C) $f(g(x))$
D) $f(1)-g(1-k)$
In each case I’m not sure how to calculate the $f$ values.
What have you tried? What is your definition of a function? – wj32 Oct 22 '12 at 0:24
First, verify $f$ is a function. Once you have done that you will probably have recalled the way in which one looks up values for a function defined in such a manner, which is all that has to be done. Finally, part C does not make sense to me; perhaps you mean g(f(x)). – peoplepower Oct 22 '12 at 0:25
Hint: $f(2)=3$. – MJD Oct 22 '12 at 0:58
Part C has its domain restricted to the x values of the pairs given for $f$. – Ben Oct 22 '12 at 1:11
The part I think you are missing is that $f$ is a function which is defined only at the values -2, 0, 1, and 2, and nowhere else. $f(-2) = 4$, which is the answer to part (A); $f(2) = 3$, from which you can easily solve part (B); $f(1) = 2$, from which you can solve part (D).
This leaves only part (C), to determine $f(g(x))$. $f$ is defined only at -2, 0, 1, and 2, so the composite function $f(g(x))$ is defined only at values of $x$ where $g(x)$ is one of -2, 0, 1, or 2. If $x$ is such that $g(x)$ comes out to something else, say 37, then that is an invalid argument for $f$, and the entire expression $f(g(x))$ is undefined. So what you need to do is make a list of the few values of $x$ that make $g$ yield a good argument for $f$, and then you can describe the behavior of the resulting $f(g(x))$ function in a way similar to the way that $f$ itself was described.
In principle, all functions can be described as a list (possibly a very large infinite list) of ordered pairs in this way. In some contexts in mathematics we take an ordinary function like $g(x) = 3x -5$ and define it as the infinite set $$\{\ldots, (-2, -11), (-1, -8), (0, -5), (1, -2), \left(1\frac13, -1\right), \\ (2, 1), (\pi, 3\pi-5), \ldots, (57.89, 168.67), \ldots \}.$$ The function $f$ here is no different in principle; it just has a much smaller domain.
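As an illustrative sketch in Python (the dictionary just encodes the given ordered pairs), one can list exactly the $x$ values for which $f(g(x))$ is defined, namely those with $g(x)$ in the domain of $f$:
```python
from fractions import Fraction

f = {-2: 4, 0: -2, 1: 2, 2: 3}   # f given by its ordered pairs
g = lambda x: 3 * x - 5

# f(g(x)) is defined only where g(x) = d for some d in the domain of f,
# i.e. where x = (d + 5)/3
for d in f:
    x = Fraction(d + 5, 3)
    print(x, f[g(x)])
# x = 1, 5/3, 2, 7/3 give f(g(x)) = 4, -2, 2, 3 respectively
```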
A) $f(-2) = 4$
B)$(f+g) (2) = 4$
D) $f(1)-g(1-k) = 2 - (3(1-k) - 5)$
Can you explain part C? – wj32 Oct 22 '12 at 0:29
Part C) f(g(x)) this is exactly how it is listed on my assignment. This is what confused me the most. :/ – John E. Oct 22 '12 at 0:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.932046115398407, "perplexity": 190.37224787158434}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663060.18/warc/CC-MAIN-20140930004103-00477-ip-10-234-18-248.ec2.internal.warc.gz"} |
http://perimeterinstitute.ca/videos/effect-initial-correlations-evolution-quantum-states | The effect of initial correlations on the evolution of quantum states
Recording Details
Speaker(s):
Scientific Areas:
PIRSA Number:
12100045
Abstract
Until fairly recently, it was generally assumed that the initial state of a quantum system prepared for information processing was in a product state with its environment. If this is the case, the evolution is described by a completely positive map. However, if the system and environment are initially correlated, or entangled, such that the so-called quantum discord is non-zero, then the
evolution is described by a map which is not completely positive. Maps that are not completely positive are not as well understood and the implications of having such a map are not completely known. I will discuss a few examples and a theorem (or two) which may help us understand the implications of having maps which are not completely positive. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8608637452125549, "perplexity": 431.2563871703231}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698543170.25/warc/CC-MAIN-20161202170903-00264-ip-10-31-129-80.ec2.internal.warc.gz"} |
http://mathoverflow.net/questions/75389/group-cohomology-of-compact-lie-group-with-integer-coeffient | # Group cohomology of compact Lie group with integer coeffient
It is known that group cohomology class $H^d[U(1),Z]$ is Z for even d and 0 for odd d. Do we know $H^d[G,Z]$ for $G=SO(3)$, $SU(2)$ and other compact Lie group?
Also, is the Borel-group-cohomology class $H^d[G,R]$ always trivial for compact Lie group $G$?
Your first example suggests you are talking about the cohomology of the classifying space $BG$. If so, why not ask this? Also, your first example gives a counter-example for your second question (unless I'm misinterpreting what the "Borel-group-cohomology class" is). Please clarify. – Mark Grant Sep 14 '11 at 13:55
Dear Mark, Thanks for the question. I just know the algebraic definition of group cohomology and I am not familiar with classifying space (I am a physicist). By Borel-group-cohomology of $H^d[G,R]$ I mean that we take cochains as measurable function over $G$. The issue of "continuity" comes up since both the group $G$ and the module $R$ are continuous. If $H^d[G,R]$ is trivial, I am hoping to get $H^{d+1}[G,Z] = H^d[G,U(1)]$, again $H^d[G,U(1)]$ is the Borel-group-cohomology described above. I learned those from some math papers that I only half understand. I hope they are right. – Xiao-Gang Wen Sep 14 '11 at 15:07
For the group $SU(2)=S^3$ we just have $H^*(BSU(2);\mathbb{Z})=\mathbb{Z}[c_2]$ (where $c_2\in H^4$). More generally, for all $n$ we have \begin{align*} H^*(BU(n);\mathbb{Z}) &= \mathbb{Z}[c_1,\dotsc,c_n] \\\\ H^*(BSU(n);\mathbb{Z}) &= \mathbb{Z}[c_2,\dotsc,c_n] \\\\ H^*(BSp(n);\mathbb{Z}) &= \mathbb{Z}[p_1,\dotsc,p_n] \end{align*} with $c_i\in H^{2i}$ and $p_i\in H^{4i}$.
Now let $V$ be the tautological $3$-plane bundle over the space $X=BSO(3)$. This has Stiefel-Whitney classes $w_2\in H^2(X;\mathbb{Z}/2)$ and $w_3\in H^3(X;\mathbb{Z}/2)$. There is also a Bockstein element $v=\beta(w_2)\in H^3(X;\mathbb{Z})$ (which satisfies $2v=0$) and a Chern class $c=c_2(\mathbb{C}\otimes V)\in H^4(X;\mathbb{Z})$. The mod two reduction map $\rho$ satisfies $\rho(v)=Sq^1(w_2)=w_3$ and $\rho(c)=w_2^2$. If I've got everything straight, one can check using the Bockstein spectral sequence that $$H^*(BSO(3);\mathbb{Z}) = \mathbb{Z}[v,c]/(2v).$$
It is not possible to be similarly explicit about $H^*(BSO(n);\mathbb{Z})$ for general $n$ (although $H^*(BSO(n);\mathbb{Z}/2)$ and $H^*(BSO(n);\mathbb{Q})$ are fairly straightforward).
A collection of relevant references (general, standard as well as specific ones) is here:
http://ncatlab.org/nlab/show/group+cohomology#OnTopologicalGroups
Thank you very much, Igor. But I am a physicist. May I ask how to obtain $H^d[G,Z]$ from its torsion free part and $H^d[G,Z_2]$. In particular, what are $H^4[SO(3),Z]$ and $H^5[SO(3),Z]$? I guess $H^3[SO(3),Z]=Z_2$ corresponding to the projective representations of $SO(3)$ and $H^2[SO(3),Z]=Z_1$ corresponding the absence of non-trivial 1D representation of $SO(3)$. ( I am using $H^{d+1}[SO(3),Z] = H^d[SO(3),U(1)]$ here, if I use $H^d[SO(3),U(1)]$ to mean Borel group cohomology where the conchains are measurable functions over $SO(3)$.) – Xiao-Gang Wen Sep 14 '11 at 14:54
In planetmath.org/encyclopedia/GroupCohomology3.html , it is stated that $H^d[SU(2),Z]=Z$ if $d=0$ mod 4 and 0 otherwise. This seems to contradict the result in your note: the torsion free part of $H^d[SO(3),Z]$ is Z for $d=0$ mod 3 and is zero otherwise. (I am guessing that the torsion free part of $H^d[SU(2),Z]$ and $H^d[SO(3),Z]$ are the same, and I thought $H^4[G,Z]$ contains a $Z$ for any Lie group.) Please correct me if I misunderstand your note – Xiao-Gang Wen Sep 14 '11 at 23:48
After thinking more, now I feel that math.cornell.edu/~hatcher/SO/comments.pdf may not be the answer. It calculates cohomology of the topological space of $SO(n)$. What I am seeking is the group cohomology of $SO(3)$ which is the cohomology of the topological space $BSO(3)$ -- the classifying space of $SO(3)$. – Xiao-Gang Wen Sep 15 '11 at 0:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9708155393600464, "perplexity": 212.69993800357406}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860117783.16/warc/CC-MAIN-20160428161517-00143-ip-10-239-7-51.ec2.internal.warc.gz"} |
https://en.m.wikiversity.org/wiki/PlanetPhysics/Isomorphism | # PlanetPhysics/Isomorphism
Definition 0.1 \bigbreak A morphism ${\displaystyle f:A\to B}$ in a category ${\displaystyle C}$ is an isomorphism when there exists an inverse morphism of ${\displaystyle f}$ in ${\displaystyle C}$ , denoted by $\displaystyle \inv f: B \to A$ , such that $\displaystyle f \circ \inv f =id_A = 1_A: A \to A$ .
One also writes: ${\displaystyle A\cong B}$, expressing the fact that the object A is isomorphic with object B under the isomorphism ${\displaystyle f}$.
Note also that an isomorphism is both a monomorphism and an epimorphism; moreover, an isomorphism is both a section and a retraction. However, an isomorphism is not the same as an equivalence relation. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 6, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 2, "math_score": 0.9986402988433838, "perplexity": 163.26297972003542}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141204453.65/warc/CC-MAIN-20201130004748-20201130034748-00386.warc.gz"} |
http://mathoverflow.net/questions/34023/spectral-sequence-for-reduced-homology | # Spectral sequence for reduced homology
In the Serre spectral sequence, is it true that we can replace homology by reduced homology? That is: if $f:X\rightarrow B$ is a Serre fibration, with $F$ the fiber, do we have $\tilde E^2_{pq}=\tilde H_p(B,\tilde H_q F) \longrightarrow \tilde H_{p+q}(X)$?
I think it is fine: we use complexes involving "augmentation", then filtration, do the usual things in the spectral sequence, and finally we get a sequence that converges to the homology of the original complex. Only now it becomes reduced homology. But I am not precisely sure.
I ask this question because I want to know the answer of another problem (asked by me) "homology dimension of mapping class group of surface with boundary". (I am sorry I don't know how to insert a link). I need some help for that problem. Thanks!
Hint for the Serre spectral sequence: is the Kuenneth formula valid in reduced homology? (Work over a field and count dimensions.) – Tim Perutz Jul 31 '10 at 14:18
Hi, Tim Perutz, it is not valid. – HYYY Jul 31 '10 at 14:43
As Tim Perutz indicates, this version is not valid. However, consider: for spaces with a chosen basepoint, reduced homology is isomorphic to homology relative to the basepoint. For p: E -> B with subfibration D -> E, together with a subspace A of B, there is a Serre spectral sequence involving relative homology of the base and fiber, computing the homology of E relative to $D \cup p^{-1}A$. – Tyler Lawson Jul 31 '10 at 15:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9795873761177063, "perplexity": 467.5396498193857}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049276415.60/warc/CC-MAIN-20160524002116-00050-ip-10-185-217-139.ec2.internal.warc.gz"} |
http://mathoverflow.net/questions/127155/open-weak-lefschetz-with-coefficients | # (Open) weak Lefschetz with coefficients
Let $X$ be a complex smooth projective variety and let $\iota: Y \hookrightarrow X$ be a smooth hyperplane section. Put $\dim(X)=n+1$. Then weak Lefschetz says that $$\iota^\ast: H^k(X^{an}, \mathbb{Q}) \to H^k(Y^{an}, \mathbb{Q})$$ is an isomorphism for $k \leq n-2$.
I would be interesting in the following variant:
• First, instead of considering the whole $X$, I want to look at $U=X-D$, where $D$ is a simple normal crossings divisor.
• Secondly, instead of taking $\mathbb{Q}$ as coefficients, I would like to look at a rank one local system $V$ of $\mathbb{C}$-vector spaces on $U^{an}$.
Assume $Y$ is a smooth section of $X$ (intersecting properly all the intersections of the irreducible components of $D$). Put $W=Y-D \cap Y$. Is it true that
$H^k(U^{an}, V) \to H^k(W^{an}, V_{|W^{an}})$
is an isomorphism for $k \leq n-2$? Same question for cohomology with compact support.
Lefschetz theorem also says that the inclusion map $i: Y\to X$ is $n$-connected (if $X$ is $n+1$-dimensional), see e.g. Milnor's book on Morse Theory. Thus, $i$ induces an isomorphism of cohomology with with coefficients in arbitrary flat bundle/locally constant sheaf (up to degree $n-1$), since the latter can be computed by looking at the skeleta of dimension $\le n$ of cell complexes for $X$ and $Y$. – Misha Apr 10 '13 at 23:35
Thanks for your answer Misha! Do you have a reference where your statement about cohomology is proved? Remark that my connection has logarithmic singularities on $D$. Is everything still working? – lefsloc Apr 11 '13 at 8:37
If you want log singularities, you could perhaps proceed like this: first look up the results of Esnault-Viehweg on generalizations with log. sing. of the Kodaira vanishing theorem and of the degeneration of the Hodge-de-Rham spectral sequence (maybe in their book ?) and then replicate the proof of the weak Lefschetz theorem given in the book of Griffiths and Harris (p. 156, chap. I, §2). – Damian Rössler Apr 11 '13 at 11:09
Thanks Damian. That sounds great – lefsloc Apr 11 '13 at 13:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9599357843399048, "perplexity": 317.1407942914415}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422121983086.76/warc/CC-MAIN-20150124175303-00094-ip-10-180-212-252.ec2.internal.warc.gz"} |