http://www.ck12.org/algebra/Exponential-Growth/lesson/Exponential-Growth-BSC-ALG/
# Exponential Growth
## Functions with x as an exponent
Suppose that 1000 people visited an online auction website during its first month in existence and that the total number of visitors to the auction site is tripling every month. Could you write a function to represent this situation? How many total visitors will the auction site have had after 9 months?
### Exponential Growth
Previously, we have seen the variable used only as the base of a power. In exponential functions, the exponent is the variable and the base is a constant.
The General Form of an Exponential Function is \begin{align*}y=a (b)^x\end{align*}, where \begin{align*}a=\end{align*} initial value and \begin{align*}b=\end{align*} growth factor.
In exponential growth situations, the growth factor must be greater than one.
\begin{align*}b>1\end{align*}
#### Let's use an exponential function to solve the following problem:
A colony of bacteria has a population of 3,000 at noon on Sunday. During the next week, the colony’s population doubles every day. What is the population of the bacteria colony at noon on Saturday?
Make a table of values and calculate the population each day.
| Day | 0 (Sun) | 1 (Mon) | 2 (Tues) | 3 (Wed) | 4 (Thurs) | 5 (Fri) | 6 (Sat) |
|---|---|---|---|---|---|---|---|
| Population (thousands) | 3 | 6 | 12 | 24 | 48 | 96 | 192 |
To get the population of bacteria for the next day we multiply the current day’s population by 2 because it doubles every day. If we define \begin{align*}x\end{align*} as the number of days since Sunday at noon, then we can write the following: \begin{align*}P= 3 \cdot 2^x\end{align*}. This is a formula that we can use to calculate the population on any day. For instance, the population on Saturday at noon will be \begin{align*}P = 3 \cdot 2^6=3 \cdot 64 = 192\end{align*} thousand bacteria. We use \begin{align*}x=6\end{align*}, since Saturday at noon is six days after Sunday at noon.
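The formula above is easy to check numerically. Here is a short sketch (the function name is ours, not from the lesson) that rebuilds the table of daily populations and the Saturday value:

```python
# Sketch: evaluate P = 3 * 2**x, the population in thousands,
# where x is the number of days since Sunday at noon.
def population_thousands(x):
    return 3 * 2 ** x

# Population each day from Sunday (x = 0) to Saturday (x = 6):
table = [population_thousands(x) for x in range(7)]
print(table)            # [3, 6, 12, 24, 48, 96, 192]
print(population_thousands(6))  # 192 thousand bacteria on Saturday at noon
```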
#### Graphing Exponential Functions
Graphs of exponential growth functions show you how quickly the values of the functions get very large.
#### Let's use tables of values to complete the following problems:
1. Graph \begin{align*}y=2^x\end{align*}.
Make a table of values that includes both negative and positive values of \begin{align*}x\end{align*}. Substitute these values for \begin{align*}x\end{align*} to get the value for the \begin{align*}y\end{align*} variable.
| \begin{align*}x\end{align*} | \begin{align*}y\end{align*} |
|---|---|
| –3 | \begin{align*}\frac{1}{8}\end{align*} |
| –2 | \begin{align*}\frac{1}{4}\end{align*} |
| –1 | \begin{align*}\frac{1}{2}\end{align*} |
| 0 | 1 |
| 1 | 2 |
| 2 | 4 |
| 3 | 8 |
Plot the points on the coordinate axes to get the graph below. Exponential growth functions always have this basic shape: the values start very small, then grow faster and faster as \begin{align*}x\end{align*} increases.
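The table of values can also be generated programmatically. A minimal sketch (variable names are ours) covering the same range of \begin{align*}x\end{align*} values:

```python
# Sketch: build the table of values for y = 2**x, x from -3 to 3.
# Negative exponents give the fractions 1/8, 1/4 and 1/2 from the table
# (printed as the decimals 0.125, 0.25 and 0.5).
points = [(x, 2 ** x) for x in range(-3, 4)]
for x, y in points:
    print(x, y)
```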
2. In the last problem, we produced a graph for \begin{align*}y=2^x\end{align*}. Compare that graph with the graph of \begin{align*}y = 3 \cdot 2^x\end{align*}.
| \begin{align*}x\end{align*} | \begin{align*}y\end{align*} |
|---|---|
| –2 | \begin{align*}3 \cdot 2^{-2} = 3 \cdot \frac{1}{2^2} =\frac{3}{4}\end{align*} |
| –1 | \begin{align*}3 \cdot 2^{-1} = 3 \cdot \frac{1}{2^1} = \frac{3}{2}\end{align*} |
| 0 | \begin{align*}3 \cdot 2^0 = 3\end{align*} |
| 1 | \begin{align*}3 \cdot 2^1 = 6\end{align*} |
| 2 | \begin{align*}3 \cdot 2^2 = 3 \cdot 4 = 12\end{align*} |
| 3 | \begin{align*}3 \cdot 2^3 = 3 \cdot 8 = 24\end{align*} |
We can see that the function \begin{align*}y=3 \cdot 2^x\end{align*} takes larger values than \begin{align*}y=2^x\end{align*} for every \begin{align*}x\end{align*}. In both functions, the value of \begin{align*}y\end{align*} doubles every time \begin{align*}x\end{align*} increases by one. However, \begin{align*}y=3 \cdot 2^x\end{align*} starts with a value of 3, while \begin{align*}y=2^x\end{align*} starts with a value of 1, so it makes sense that \begin{align*}y=3 \cdot 2^x\end{align*} is larger.
The shape of the exponential graph changes if the constants change. The curve can become steeper or shallower.
### Examples
#### Example 1
Earlier, you were told that 1000 people visited an online auction website during its first month in existence and that the total number of visitors to the auction site is tripling every month. What function represents this situation? How many total visitors will the auction site have had after 9 months?
To write the function, it is helpful to write out a table of values, where \begin{align*}x\end{align*} is the month since the website opened and \begin{align*}y\end{align*} is the number of visitors:
| \begin{align*}x\end{align*} | \begin{align*}y\end{align*} |
|---|---|
| 1 | 1000 |
| 2 | 3000 |
| 3 | 9000 |
| 4 | 27000 |
| 5 | 81000 |
| 6 | 243000 |
| 7 | 729000 |
| 8 | 2187000 |
| 9 | 6561000 |
Notice that the initial value is 1000 and the growth factor is 3. Therefore, you would expect the exponential function that represents this situation to be:
\begin{align*}y = 1000 \cdot 3^x\end{align*}
However, if you plug in 1 to this equation, you will get 3000 instead of 1000. The exponent value is 1 above what it should be. Therefore, you need to decrease the exponent by 1 and the exponential function that represents this situation is:
\begin{align*}y = 1000 \cdot 3^{x-1}\end{align*}
If you plug in 1 to this equation, you will get 1000 as necessary. You should test the other numbers from the table in this equation to verify that the function is correct.
To find how many total visitors that the website would have had after 9 months, add all the values of \begin{align*}y\end{align*} in the table from above:
\begin{align*}1000 + 3000 + 9000 + 27000 + 81000 + 243000 + 729000 + 2187000 + 6561000 = 9841000\end{align*}

In the first 9 months, the website had 9,841,000 total visitors.
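Because the monthly counts form a geometric sequence with first term 1000 and ratio 3, the total can also be computed with the geometric-series formula \begin{align*}a \cdot \frac{r^n - 1}{r - 1}\end{align*}. A quick sketch (variable names are ours) checking both ways:

```python
# Sketch: total visitors after 9 months, by direct summation of
# y = 1000 * 3**(x - 1) and by the geometric-series closed form.
a, r, n = 1000, 3, 9
monthly = [a * r ** (x - 1) for x in range(1, n + 1)]
total_by_sum = sum(monthly)
total_by_formula = a * (r ** n - 1) // (r - 1)
print(total_by_sum, total_by_formula)  # 9841000 9841000
```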
#### Example 2
A population of 500 E. coli organisms doubles every fifteen minutes. Write a function expressing the population size as a function of hours.

Since there are four 15-minute periods in an hour, the population doubles 4 times in an hour. Doubling twice is the same thing as quadrupling, since:

\begin{align*}1\cdot 2 \cdot 2=1\cdot 2^2=1 \cdot 4=4.\end{align*}

This means that doubling 4 times can be calculated as \begin{align*}2^4=16\end{align*}. So the population is 16 times as big every hour. With an initial population size of 500, the function is:

\begin{align*}f(x)=500\cdot 16^x\end{align*}
where \begin{align*}x\end{align*} is in hours and \begin{align*}f(x)\end{align*} is the number of organisms after \begin{align*}x\end{align*} hours.
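Since the text states the population multiplies by \begin{align*}2^4=16\end{align*} each hour, the hourly function can be cross-checked against direct doubling every quarter hour. A sketch (function names are ours):

```python
# Sketch: check that 500 * 16**x (x in hours) agrees with
# doubling the population every 15-minute period.
def by_hour(x):
    return 500 * 16 ** x

def by_quarter_hour(q):          # q = number of 15-minute periods elapsed
    return 500 * 2 ** q

print(by_hour(1))                # 8000 organisms after one hour
print(by_hour(2) == by_quarter_hour(8))  # True
```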
### Review
1. What is the general form for an exponential equation? What do the variables represent?
2. How is an exponential growth equation different from a linear equation?
3. What is true about the growth factor of an exponential equation?
4. True or false? An exponential growth function has the following form: \begin{align*}f(x)=a(b)^x\end{align*}, where \begin{align*}a>1\end{align*} and \begin{align*}b<1\end{align*}.
5. What is the \begin{align*}y-\end{align*}intercept of all exponential growth functions?
Graph the following exponential functions by making a table of values.
1. \begin{align*}y=3^x\end{align*}
2. \begin{align*}y=2^x\end{align*}
3. \begin{align*}y=5 \cdot 3^x\end{align*}
4. \begin{align*}y=\frac{1}{2} \cdot 4^x\end{align*}
5. \begin{align*}f(x)=\frac{1}{3} \cdot 7^x\end{align*}
6. \begin{align*}f(x)=2 \cdot 3^x\end{align*}
7. \begin{align*}y=40 \cdot 4^x\end{align*}
8. \begin{align*}y=3 \cdot 10^x\end{align*}
Solve the following problems involving exponential growth.
1. A chain letter is sent out to 10 people telling everyone to make 10 copies of the letter and send each one to a new person. Assume that everyone who receives the letter sends it to 10 new people and that it takes a week for each cycle. How many people receive the letter in the sixth week?
2. Nadia received \$200 for her \begin{align*}10^{th}\end{align*} birthday. If she saves it in a bank with a 7.5% interest rate compounded yearly, how much money will she have in the bank by her \begin{align*}21^{st}\end{align*} birthday?
### Mixed Review
1. Suppose a letter is randomly chosen from the alphabet. What is the probability the letter chosen is \begin{align*}M, K\end{align*}, or \begin{align*}L\end{align*}?
2. Evaluate \begin{align*}t^4 \cdot t^\frac{1}{2}\end{align*} when \begin{align*}t=9\end{align*}.
3. Simplify \begin{align*}28-(x-16)\end{align*}.
4. Graph \begin{align*}y-1=\frac{1}{3} (x+6)\end{align*}.
### Vocabulary

| Term | Definition |
|---|---|
| Exponential growth | Exponential growth occurs when a quantity increases by the same proportion in each given time period. |
| General form of an exponential function | $y=a (b)^x$, where $a=$ initial value and $b=$ growth factor |
| Asymptote | An asymptote is a line on the graph of a function representing a value toward which the function may approach, but does not reach (with certain exceptions). |
| Model | A model is a mathematical expression or function used to describe a physical item or situation. |
http://tex.stackexchange.com/questions/16443/guitar-tablatures-typesetting/111752

# Guitar tablatures typesetting?
So I kind of wanted to leave old and ugly ASCII-art tabs behind and produce something nice, but found there's probably no method to typeset actual guitar tabs in TeX.
All I found was:
• MusiXTeX for classical music notation
• songbook for lyrics+chords above
• guitar.sty for something similar.
Is there something that does tabs and I missed it?
Edit: I need TeX text around it; it's for a (kind of) guitar textbook.
Do you mean something like this? openguitar.com/files/juba-short.pdf – Harold Cavendish Apr 23 '11 at 9:23
Yeah, that would be nice. With LaTeX text around, of course :] – Mirek Kratochvil Apr 23 '11 at 12:13
Take a look at the hyperlink posted by Jefromi in the comment to my answer below. I was not aware of the possibility to integrate LilyPond with LaTeX. I think that this solution is what you are looking for. – Harold Cavendish Apr 23 '11 at 15:03
I think MusiXTeX has extensions for guitar tablature and guitar chord diagrams. – gniourf_gniourf Apr 23 '11 at 22:18
Are you searching for something like this? http://www.texample.net/tikz/examples/guitar-chords/
Not exactly what I wanted, but seems that a little latex macro work can easily tune it to the state I need. Thanks very much! – Mirek Kratochvil Apr 25 '11 at 10:00
My recommendation is to use LilyPond, which I believe was formerly based on TeX. It is possibly the best solution you can get for free. The file in my comment to your question is said to be typeset in it. Here is another possible output with displayed chords.
LilyPond seems like an excellent choice to me as well. There are also some very nice editors for it. – ipavlic Apr 23 '11 at 11:18
I believe LilyPond can be used along with LaTeX: lilypond.org/doc/v2.12/Documentation/user/lilypond-program/… – Jefromi Apr 23 '11 at 14:47
@Jefromi I have some experience with LilyPond, but I did not know this. Very useful for myself as well, thank you! – Harold Cavendish Apr 23 '11 at 15:04
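A minimal sketch of the LilyPond/LaTeX integration mentioned in the comments above, assuming the document is processed with `lilypond-book` (shipped with LilyPond) before running LaTeX; the file name and the notes in the tab are ours, purely for illustration:

```latex
% tabdemo.lytex -- process with:
%   lilypond-book --pdf tabdemo.lytex && pdflatex tabdemo.tex
\documentclass{article}
\begin{document}
Some surrounding \LaTeX{} text, as required for a guitar textbook.

\begin{lilypond}
\new TabStaff {
  % an arbitrary illustrative phrase in standard guitar tuning
  e4 d8 c b a g f e1
}
\end{lilypond}

More text after the tablature.
\end{document}
```

`lilypond-book` extracts each `lilypond` environment, renders it, and replaces it with an `\includegraphics` call in the generated `.tex` file, so the tab sits inline with ordinary LaTeX text.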
In case anybody stumbles onto this question (like I just did):

I remembered having tried something like a tablature with MusiXTeX a little while ago. It is only a start and far from perfect, but it shows that tablatures can be done with a little effort.
\documentclass{article}
\usepackage{musixtex,graphicx}
% custom clef
\newcommand\TAB[1]{%
\setclefsymbol{#1}{\,\rotatebox{90}{TAB}}%
\setclef{#1}9}
% internal string choosing command
% #1: string (a number from 1--6)
% #2: finger
\makeatletter
\newcommand\@str[2]{%
\ifcase#1\relax\@strerror
\or\def\@strnr{-1}%
\or\def\@strnr{1}%
\or\def\@strnr{3}%
\or\def\@strnr{5}%
\or\def\@strnr{7}%
\or\def\@strnr{9}%
\else\@strerror
\fi
\zchar\@strnr{\footnotesize#2}}
% \@strerror could be defined to issue some warning/error
% User level commands
\newcommand\STr[2]{\@str{#1}{#2}\sk} % with a full note skip
\newcommand\Str[2]{\@str{#1}{#2}\hsk} % with a half note skip
\newcommand\str[2]{\@str{#1}{#2}} % with no skip
\makeatother
\begin{document}
\setlength\parindent{0pt}
\begin{music}
\instrumentnumber{1}
\nobarnumbers
\TAB1
\setlines1{6}
\startpiece
\Notes\hsk\STr37\en
\Notes\Str45\en
\Notes\Str55\en
\Notes\Str65\en
\bar
\Notes\str67\Str36\en
\Notes\Str45\en
\Notes\Str55\en
\Notes\Str67\en
\bar
\Notes\str68\Str35\en
\Notes\Str45\en
\Notes\Str55\en
\Notes\Str68\en
\bar
\Notes\Str34\en
\Notes\Str42\en
\Notes\Str53\en
\Notes\Str62\en
\bar
\Notes\Str33\en
\Notes\Str42\en
\Notes\Str51\en
\Notes\itieu0r\Str60\en
\bar
\Notes\ttie0\Str60\en
\Notes\Str51\en
\Notes\Str42\en
\Notes\Str33\en
\bar
\Notes\Str13\en
\Notes\Str20\en
\Notes\STr20\en
\bar
\Notes\STr20\en
\Notes\Str28\en
\Notes\STr27\en
\endpiece
\end{music}
\end{document}
Any one able to spot the song? ;)
I recently wanted to recreate chord tablature sheets that my guitar teacher used to use in his lessons. They were basically a grid of small tables with 5 times 4 cells. These tables were then filled by hand with the chords I was supposed to remember. Creating these tables is a piece of cake but I wanted the possibility to add the chord schemes with LaTeX, adding position, fingers, barrés, specify the root etc. with an easy syntax. I also wanted a similarly easy syntax for creating tablatures of scales.
I did what I always do in these cases: I wrote a little package, guitarchordschemes (which I will upload to CTAN if you deem it useful), that allows one to do just that. Below are a few examples:
\documentclass{article}
\usepackage[T1]{fontenc}
\usepackage{guitarchordschemes}
\begin{document}
\chordscheme[
name = Gmi\textsuperscript{7($\flat$5)} ,
position = IX ,
finger = {3/4, 2/3, 3/2} ,
root = {2/5} ,
mute = {1,6}
]
\end{document}
\documentclass{article}
\usepackage[T1]{fontenc}
\usepackage{guitarchordschemes}
\begin{document}
\chordscheme[
name = Gmi\textsuperscript{7($\flat$5)} ,
position = IX ,
finger = {3/4:3, 2/3:2, 3/2:4} ,
root = {2/5:1} ,
show-root = {4/3} ,
mute = {1,6}
]
\end{document}
\documentclass{article}
\usepackage[T1]{fontenc}
\usepackage{guitarchordschemes}
\begin{document}
\scales[
name = D major/position II ,
position = I ,
fingering = type 3
]
\end{document}
Have you tried programs like Guitar Pro 5? I'm sure you can export the tabs to PDF there...
Yeah, the problem is that it's not really good for actual typesetting, especially if you're writing a textbook. – Mirek Kratochvil Apr 23 '11 at 12:12
Open source Tuxguitar which is similar to GuitarPro can export your work to lilypond format. – ipavlic Apr 23 '11 at 15:59
Tuxguitar export to lilypond seems pretty good, gonna consider it. Thanks. – Mirek Kratochvil Apr 25 '11 at 9:58
https://link.springer.com/article/10.1140%2Fepjc%2Fs10052-017-5184-z

The European Physical Journal C, 77:626
# Measurement of meson resonance production in $$\pi ^-+$$ C interactions at SPS energies
The NA61/SHINE Collaboration
• Y. Ali
• E. V. Andronov
• T. Antićić
• B. Baatar
• M. Baszczyk
• S. Bhosale
• A. Blondel
• M. Bogomilov
• A. Brandin
• A. Bravar
• J. Brzychczyk
• S. A. Bunyatov
• O. Busygina
• H. Cherif
• M. Ćirković
• T. Czopowicz
• A. Damyanova
• N. Davis
• H. Dembinski
• M. Deveaux
• W. Dominik
• P. Dorosz
• J. Dumarchez
• R. Engel
• A. Ereditato
• S. Faas
• G. A. Feofilov
• Z. Fodor
• C. Francois
• X. Garrido
• A. Garibov
• M. Gaździcki
• M. Golubeva
• K. Grebieszkow
• F. Guber
• A. Haesler
• A. E. Hervé
• J. Hylen
• S. N. Igolkin
• A. Ivashkin
• S. R. Johnson
• E. Kaptur
• M. Kiełbowicz
• V. A. Kireyeu
• V. Klochkov
• V. I. Kolesnikov
• D. Kolev
• A. Korzenev
• V. N. Kovalenko
• K. Kowalik
• S. Kowalski
• M. Koziel
• A. Krasnoperov
• W. Kucewicz
• M. Kuich
• A. Kurepin
• D. Larsen
• A. László
• T. V. Lazareva
• M. Lewicki
• B. Lundberg
• B. Łysakowski
• V. V. Lyubushkin
• I. C. Mariş
• M. Maćkowiak-Pawłowska
• B. Maksiak
• A. I. Malakhov
• D. Manić
• A. Marchionni
• A. Marcinek
• A. D. Marino
• K. Marton
• H. -J. Mathes
• T. Matulewicz
• V. Matveev
• G. L. Melkumov
• A. O. Merzlaya
• B. Messerly
• Ł. Mik
• G. B. Mills
• S. Morozov
• S. Mrówczyński
• Y. Nagai
• V. Ozvenchuk
• V. Paolone
• M. Pavin
• O. Petukhov
• C. Pistillo
• R. Płaneta
• B. A. Popov
• S. Puławski
• J. Puzović
• R. Rameika
• W. Rauch
• M. Ravonel
• R. Renfordt
• E. Richter-Wąs
• D. Röhrich
• E. Rondio
• M. Roth
• B. T. Rumberger
• M. Ruprecht
• A. Rustamov
• M. Rybczynski
• A. Rybicki
• K. Schmidt
• I. Selyuzhenkov
• A. Yu. Seryakov
• P. Seyboth
• M. Słodkowski
• A. Snoch
• P. Staszel
• G. Stefanek
• J. Stepaniak
• M. Strikhanov
• H. Ströbele
• T. Šuša
• M. Szuba
• A. Taranenko
• A. Tefelska
• D. Tefelski
• V. Tereshchenko
• A. Toia
• R. Tsenov
• L. Turko
• R. Ulrich
• M. Unger
• F. F. Valiev
• D. Veberič
• V. V. Vechernin
• M. Walewski
• A. Wickremasinghe
• C. Wilkinson
• Z. Włodarczyk
• A. Wojtaszek-Szwarc
• O. Wyszyński
• L. Zambelli
• E. D. Zimmerman
Open Access
Regular Article - Experimental Physics
## Abstract
We present measurements of $$\rho ^0$$, $$\omega$$ and K$$^{*0}$$ spectra in $$\pi ^{-} +$$ C production interactions at 158 $$\text{ GeV }{/}\text{ c }$$ and $$\rho ^0$$ spectra at 350 $$\text{ GeV }{/}\text{ c }$$ using the NA61/SHINE spectrometer at the CERN SPS. Spectra are presented as a function of the Feynman variable $$x_\text {F}$$ in the range $$0< x_\text {F} < 1$$ and $$0< x_\text {F} < 0.5$$ at 158 and 350 $$\text{ GeV }{/}\text{ c }$$, respectively. Furthermore, we show comparisons with previous measurements and predictions of several hadronic interaction models. These measurements are essential for a better understanding of hadronic shower development and for improving the modeling of cosmic ray air showers.
## 1 Introduction
When cosmic rays of high energy collide with the nuclei of the atmosphere, they initiate extensive air showers (EAS). Earth’s atmosphere then acts as a medium in which the particle shower evolves. It proceeds mainly through the production and interaction of secondary pions and kaons. Depending on the particle energy and density of the medium in which the shower evolves, secondary particles either decay or re-interact, producing further secondaries. Neutral pions have a special role. Instead of interacting hadronically, they immediately decay ($$c\tau = 25$$ nm) into two photons with a branching ratio of 99.9%, giving rise to an electromagnetic shower component. When only the primary particle energy is of interest, and all shower components are sampled, a detailed understanding of the energy transfer from the hadronic particles to the electromagnetic shower component is not needed. However, for other measurements of air shower properties this understanding is of central importance.
A complete measurement of an air shower is not possible and particles are typically sampled only in select positions at the ground level or the ionization energy deposited in the atmosphere is measured. Therefore, the interpretation of EAS data, and in particular the determination of the composition of cosmic rays, relies to a large extent on a correct modelling of hadron-air interactions that occur during the shower development (see e.g. [1]). Experiments such as the Pierre Auger Observatory [2], IceTop [3], KASCADE-Grande [4] or the Telescope Array [5] use models for the interpretation of measurements. However, there is mounting evidence that current hadronic interaction models do not provide a satisfactory description of the muon production in air showers and that there is a deficit in the number of muons predicted at the ground level by the models when compared to the air shower measurements (see Refs. [6, 7, 8, 9, 10]).
To understand the possible cause of this deficit it is instructive to study the air shower development in a very simplified model [11] in which mesons are produced in subsequent interactions of the air cascade until the average meson energy is low enough such that its decay length is smaller than its interaction length. In each interaction a fraction $$f_\mathrm {em}$$ of the shower energy is transferred to the electromagnetic shower component via the production and decay of neutral mesons. After n interactions the energy available in the hadronic part of the shower to produce muons is therefore $$E_\mathrm {had} = E_0 \, (1-f_\mathrm {em})^n$$, where $$E_0$$ denotes the primary energy of the cosmic ray initiating the air shower. In the standard simplified picture, one third of the interaction products of charged pions with air are neutral mesons. Assuming a typical value of $$n=7$$ for the number of interactions needed to reach particle energies low enough that the charged mesons decay to muons rather than interact again, the simplistic model gives $$E_\mathrm {had} / E_0 \simeq 6\%$$. One way to increase this number is to account for the production of baryons and antibaryons to decrease $$f_\mathrm {em}$$ [12]. Another possibility has been identified recently [13, 14] by noting that accelerator data on $$\pi ^+ + \text {p}$$ interactions [15, 16, 17] indicate that most of the neutral mesons produced in the forward direction are not $$\pi ^0$$s but $$\rho ^0$$ mesons. Since the $$\rho ^0$$ decays into $$\pi ^+\,\pi ^-$$, the energy of the leading particle is not transferred to the electromagnetic shower component as it would be in the case of a neutral pion; correspondingly $$f_\mathrm {em}$$ is decreased, leading to more muons at ground level.
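The numerical estimate in the simplified model above follows directly from the quoted values $$f_\mathrm{em} = 1/3$$ and $$n = 7$$; a one-line check (variable names are ours):

```python
# Sketch of the simplified shower model quoted in the text:
# after n interactions the hadronic energy fraction is (1 - f_em)**n,
# with f_em = 1/3 (neutral-meson fraction) and n = 7 interactions.
f_em = 1.0 / 3.0
n = 7
ratio = (1.0 - f_em) ** n
print(f"E_had/E_0 = {ratio:.3f}")  # about 0.059, i.e. roughly 6%
```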
Given these considerations it is evident that the modeling of air showers depends crucially on our knowledge of pion interactions with air. It can be shown (see e.g. [18, 19]) that the relevant energies for the interactions in the last stage of the air shower development are in the range from 10 to $$10^3$$ $$\text{ GeV }$$. This range is accessible to fixed-target experiments with charged pion beams.
A large body of data is available at these energies for proton-nucleus interactions (e.g. [20, 21, 22, 23, 24]), but only a very limited amount of data exists for pion or kaon beams. A number of dedicated measurements for air-shower simulations have been performed by studying particle production on light nuclei at beam momenta up to 12 $$\text{ GeV }{/}\text{ c }$$ (see, e.g. Refs. [25, 26]). Unfortunately, at higher energies, there are no comprehensive and precise particle production measurements of $$\pi$$ interactions with light nuclei of masses similar to air. Earlier measurements were either limited to a small acceptance in momentum space (e.g. Ref. [27]) or protons as target [15, 16, 17, 28], or did not discriminate between the different secondaries [29].
To address the lack of suitable data for the tuning of hadronic interaction models used in air shower simulations, NA61/SHINE [30] collected new data with negatively charged pion beams at 158 and 350 $$\text{ GeV }{/}\text{ c }$$ on a thin carbon target. Preliminary spectra of unidentified hadrons and identified pions were previously derived from this data set [31, 32, 33] and in this paper, we present the results of the measurement of $$\rho ^0$$, $$\omega$$ and K$$^{*0}$$ spectra in $$\pi ^{-}$$ + C interactions at 158 and 350 $$\text{ GeV }{/}\text{ c }$$.
It is worthwhile noting that the measurements presented in this paper will not only be useful for interpretation of cosmic-ray calorimetry in air, but can also be beneficial for the understanding of hadronic calorimeters used in high-energy laboratory experiments. Hadronic interaction models used for calorimeter simulations are mostly tuned to and validated with the overall calorimeter response from test-beam data (see e.g. [34, 35, 36]). A tuning of these models to the data presented here will improve the description of the energy transfer from the hadronic to the electromagnetic shower component for individual interactions inside the calorimeter and thus increase the predictive power of the calorimeter simulation.
The paper is organized as follows: A brief description of the experimental setup, the collected data, data reconstruction and simulation is presented in Sect. 2. The analysis technique used to measure meson resonance production in $$\pi$$ + C interactions is described in Sect. 3. The final results, with comparison to model predictions, and other experimental data are presented in Sect. 4. A summary in Sect. 5 closes the paper.
## 2 Experimental setup, data processing and simulation
The NA61/SHINE apparatus is a wide-acceptance hadron spectrometer at the CERN SPS on the H2 beam line of the CERN North Area. A detailed description of the experiment is presented in Ref. [30]. Only features relevant for the $$\pi ^-$$ + C data are briefly mentioned here. Numerous components of the NA61/SHINE setup were inherited from its predecessor, the NA49 experiment [37]. An overview of the setup used for data taking on $$\pi ^-$$ + C interactions in 2009 is shown in Fig. 1.
The detector is built around five time projection chambers (TPCs), as shown in Fig. 1b. Two Vertex TPCs (VTPC-1 and VTPC-2) are placed in the magnetic field produced by two superconducting dipole magnets and two Main-TPCs (MTPC-L and MTPC-R) are located downstream symmetrically with respect to the beamline. An additional small TPC is placed between VTPC-1 and VTPC-2, covering the very-forward region, and is referred to as the GAP TPC (GTPC).
The magnet current setting for data taking at 158 and 350 $$\text{ GeV }{/}\text{ c }$$ corresponds to 1.5 T in the first and 1.1 T, in the second magnet. It results in a precise measurement of the particle momenta p with a resolution of $$\sigma (p)/p^2\approx (0.3{-}7)\times 10^{-4}\,\mathrm {(GeV/c)}^{-1}$$.
Two scintillation counters, S1 and S2, together with the three veto counters V0, V1 and V1$$^\text {p}$$, define the beam upstream of the target. The setup of these counters can be seen in Fig. 1a for the 158 $$\text{ GeV }{/}\text{ c }$$ run. The S1 counter also provides the start time for all timing measurements.
The 158 and 350 $$\text{ GeV }{/}\text{ c }$$ secondary hadron beam was produced by 400 $$\text{ GeV }{/}\text{ c }$$ primary protons impinging on a 10 cm long beryllium target. Negatively charged hadrons ($$\text {h}^-$$) produced at the target are transported downstream to the NA61/SHINE experiment by the H2 beamline, in which collimation and momentum selection occur. The beam particles, mostly $$\pi ^-$$ mesons, are identified by a differential ring-imaging Cherenkov detector CEDAR [38]. The fraction of pions is $${\approx }95\%$$ at 158 $$\text{ GeV }{/}\text{ c }$$ and $${\approx }100\%$$ at 350 $$\text{ GeV }{/}\text{ c }$$ (see Fig. 2). The CEDAR signal is recorded during data taking and then used as an offline selection cut (see Sect. 3.1). The beam particles are selected by the beam trigger, $$\text {T}_\text {beam}$$, defined by the coincidence $$\text {S1}\wedge \text {S2}\wedge \overline{\text {V0}} \wedge \overline{\text {V1}}\wedge \overline{\text {V1}^\text {p}}$$. The interaction trigger ($$\text {T}_\text {int} = \text {T}_\text {beam} \wedge \overline{\text {S4}}$$) is given by the anti-coincidence of the incoming beam particle and S4, a scintillation counter with a diameter of 2 cm placed between the VTPC-1 and VTPC-2 detectors along the beam trajectory, about 3.7 m from the target; see Fig. 1a, b. Almost all beam particles that interact inelastically in the target do not reach S4. The interaction and beam triggers were recorded in parallel. Beam trigger events were recorded at a rate about a factor of 10 lower than that of interaction trigger events.
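The trigger coincidences described above are simple boolean combinations of counter signals; as a hypothetical illustration (the function names are ours, and real trigger electronics of course operate in hardware, not software):

```python
# Sketch of the trigger logic described in the text:
#   T_beam = S1 AND S2 AND NOT V0 AND NOT V1 AND NOT V1p
#   T_int  = T_beam AND NOT S4
def t_beam(s1, s2, v0, v1, v1p):
    return s1 and s2 and not (v0 or v1 or v1p)

def t_int(s1, s2, v0, v1, v1p, s4):
    return t_beam(s1, s2, v0, v1, v1p) and not s4

# A beam particle that interacts inelastically in the target misses S4,
# so the interaction trigger fires:
print(t_int(True, True, False, False, False, False))  # True
# A non-interacting beam particle hits S4 and is vetoed:
print(t_int(True, True, False, False, False, True))   # False
```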
The incoming beam trajectory is measured by a set of three beam position detectors (BPDs), placed along the beamline upstream of the target, as shown in Fig. 1a. These detectors are $$4.8 \times 4.8$$ cm$$^2$$ proportional chambers. Each BPD measures the position of the beam particle on the transverse plane with respect to the beam direction with a resolution of $${\sim }100\,\upmu$$m (see Ref. [30] for more details).
For data taking on $$\pi ^-$$ + C interactions, the target was an isotropic graphite plate with a thickness along the beam axis of 2 cm and a density of $$\rho =1.84$$ g/cm$$^3$$, equivalent to about 4% of a nuclear interaction length. During the data taking the target was placed 80 cm upstream of VTPC-1. 90% of the data was recorded with the target inserted and 10% with the target removed. The latter set was used to estimate the bias due to interactions with the material upstream and downstream of the target.
Detector parameters were optimised using a data-based calibration procedure which also took into account their time dependences. Minor adjustments were determined in consecutive steps for:
1. (i)
detector geometry and TPC drift velocities and
2. (ii)
magnetic field map.
Each step involved reconstruction of the data required to optimise a given set of calibration constants and time dependent corrections followed by verification procedures. Details of the procedure and quality assessment are presented in Ref. [39].
The main steps of the data reconstruction procedure are:
(i) finding of clusters in the TPC raw data, calculation of the cluster centre-of-gravity and total charge,
(ii) reconstruction of local track segments in each TPC separately,
(iii) matching of track segments into global tracks,
(iv) fitting of the track through the magnetic field and determination of track parameters at the first measured TPC cluster,
(v) determination of the interaction vertex using the beam trajectory fitted in the BPDs and the trajectories of tracks reconstructed in the TPCs (the final data analysis uses the middle of the target as the z-position, $$z=-580\,$$cm) and
(vi) refitting of the particle trajectory using the interaction vertex as an additional point and determining the particle momentum at the interaction vertex.
An example of a reconstructed $$\pi ^-$$ + C interaction at 158 $$\text{ GeV }{/}\text{ c }$$ is shown in Fig. 3. Among the many tracks visible are five long tracks of three negatively charged and two positively charged particles, with momenta ranging from 5 to 50 $$\text{ GeV }{/}\text{ c }$$.
A simulation of the NA61/SHINE detector response is used to correct the measured raw yields of resonances. For this analysis, the Epos 1.99 model was used for the simulation and the calculation of correction factors, with DPMJet 3.06 [40] used as a comparison for the estimation of systematic uncertainties. Epos was chosen both for the number of resonances included in the model and for its ability to include the intrinsic widths of these resonances in the simulation. Epos 1.99 rather than Epos LHC was used as it is better tuned to the measurements at SPS energies [41].
The simulation consists of the following steps:
(i) generation of inelastic $$\pi ^-$$ + C interactions using the Epos 1.99 model,
(ii) propagation of outgoing particles through the detector material using the Geant 3.21 package [42] which takes into account the magnetic field as well as relevant physics processes, such as particle interactions and decays,
(iii) simulation of the detector response using dedicated NA61/SHINE packages which also introduce distortions corresponding to all corrections applied to the real data,
(iv) simulation of the interaction trigger selection by checking whether a charged particle hits the S4 counter,
(v) storage of the simulated events in a file which has the same format as the raw data,
(vi) reconstruction of the simulated events with the same reconstruction chain as used for the real data and
(vii) matching of the reconstructed to the simulated tracks based on the cluster positions.
For more details on the reconstruction and calibration algorithms applied to the raw data, as well as the simulation of the NA61/SHINE detector response, used to correct the raw data, see Ref. [43].
## 3 Analysis
In this section we present the analysis technique developed for the measurement of the $$\rho ^0$$, $$\omega$$ and K$$^{*0}$$ spectra in $$\pi ^-$$ + C production interactions. Production interactions are interactions with at least one new particle produced, i.e. interactions where only elastic or quasi-elastic scattering occurred are excluded. The procedure used for the data analysis consists of the following steps:
(i) application of event and track selection criteria,
(ii) combination of oppositely charged tracks,
(iii) accumulation of the combinations in bins of Feynman-x, $$x_\text {F}$$, calculated using the mass of the $$\rho ^0$$ meson for the boost between the lab and centre of mass frames,
(iv) calculation of the invariant mass of each combination, assuming pion masses for the particles,
(v) fitting of the invariant mass distributions with templates of resonance decays to obtain raw yields and
(vi) application of corrections to the raw yields calculated from simulations.
These steps are described in the following subsections.
### 3.1 Event and track selection
A total of $$5.49\times 10^6$$ events were recorded at 158 $$\text{ GeV }{/}\text{ c }$$ and $$4.48\times 10^6$$ events at 350 $$\text{ GeV }{/}\text{ c }$$. All events used in the analysis are required to pass cuts that ensure both an interaction event and good event quality. These cuts are:
(i) Well-contained measurements of the beam with the BPDs and a successful reconstruction of the beam direction.
(ii) Pion identification with the CEDAR (only for 158 $$\text{ GeV }{/}\text{ c }$$, as the impurity of the 350 $$\text{ GeV }{/}\text{ c }$$ beam is below 0.1%).
(iii) No extra (off-time) beam particles detected within $$\pm 2\,\upmu$$s of the triggered beam particle.
(iv) All events must have an interaction trigger as defined in Sect. 2.
(v) The main vertex point is properly reconstructed.
(vi) The z-position of the interaction vertex must be between $$-597$$ and $$-563$$ cm.
The cut (vi) is illustrated in Fig. 4; its purpose is to remove the majority of interactions that do not occur in the target. This cut increases the Monte Carlo correction because some in-target events are removed due to the vertex-z resolution. The vertex-z resolution depends on the multiplicity of an event and is about 4.5 cm for low multiplicities and better than 0.5 cm for high multiplicities. The cut is chosen loose enough ($$\pm 17$$ cm around the target centre) to ensure both a high efficiency for all multiplicities and a purity of in-target events of better than 99%.
An alternative method to correct for out-of-target interactions would be to measure the resonance yields in the target-removed data, but the template-fitting method used in this paper cannot be applied to data sets with small statistics such as the target-removed data.
The range of this cut, ($$-597,-563$$) cm, was selected to maximise the event number, while minimising the contamination due to off-target events. The residual contribution of non-target interactions after applying this cut is 0.8%.
Table 1
Number of events after each event selection cut and selection efficiency with respect to the previous cut for the target inserted data set for 158 and 350 $$\text{ GeV }{/}\text{ c }$$ beam momentum

| Cut | $$N_\text {events}$$ (158 $$\text{ GeV }{/}\text{ c }$$) | Efficiency (%) | $$N_\text {events}$$ (350 $$\text{ GeV }{/}\text{ c }$$) | Efficiency (%) |
|---|---|---|---|---|
| Total | $$5.49\times 10^6$$ | 100 | $$4.48\times 10^6$$ | 100 |
| (i) BPD | $$4.96\times 10^6$$ | 90.3 | $$4.08\times 10^6$$ | 91.1 |
| (ii) CEDAR | $$4.26\times 10^6$$ | 85.9 | $$4.08\times 10^6$$ | 100 |
| (iii) Off-time | $$4.03\times 10^6$$ | 94.5 | $$3.94\times 10^6$$ | 96.5 |
| (iv) Trigger | $$3.34\times 10^6$$ | 83.0 | $$2.97\times 10^6$$ | 75.3 |
| (v) Vertex fit | $$3.29\times 10^6$$ | 98.5 | $$2.95\times 10^6$$ | 99.5 |
| (vi) z-position | $$2.78\times 10^6$$ | 84.6 | $$2.59\times 10^6$$ | 87.9 |
Table 2
Number of tracks after each track selection cut and selection efficiency with respect to the previous cut for the target inserted data set for 158 and 350 $$\text{ GeV }{/}\text{ c }$$ beam momentum

| Cut | $$N_\text {tracks}$$ (158 $$\text{ GeV }{/}\text{ c }$$) | Efficiency (%) | $$N_\text {tracks}$$ (350 $$\text{ GeV }{/}\text{ c }$$) | Efficiency (%) |
|---|---|---|---|---|
| Total | $$3.85\times 10^7$$ | 100 | $$4.41\times 10^7$$ | 100 |
| (i) Track quality | $$2.27\times 10^7$$ | 59.0 | $$2.77\times 10^7$$ | 62.8 |
| (ii) Acceptance | $$1.57\times 10^7$$ | 69.0 | $$1.99\times 10^7$$ | 72.0 |
| (iii) Total clusters | $$1.54 \times 10^7$$ | 98.1 | $$1.95 \times 10^7$$ | 98.2 |
| (iv) TPC clusters | $$1.51 \times 10^7$$ | 98.0 | $$1.91 \times 10^7$$ | 97.8 |
| (v) Impact parameters | $$1.42 \times 10^7$$ | 94.4 | $$1.80 \times 10^7$$ | 94.1 |
The number of events after these cuts is $$2.78 \times 10^6$$ for 158 $$\text{ GeV }{/}\text{ c }$$ and $$2.59 \times 10^6$$ for 350 $$\text{ GeV }{/}\text{ c }$$. The efficiency of these cuts is shown in Table 1 for 158 and 350 $$\text{ GeV }{/}\text{ c }$$ beam momentum.
After the event cuts were applied, a further set of quality cuts were applied to the individual tracks. These were used to ensure a high reconstruction efficiency as well as reducing contamination by tracks from secondary interactions. These cuts are:
(i) The track is well reconstructed at the interaction vertex.
(ii) The fitted track is inside the geometrical acceptance of the detector.
(iii) The total number of clusters on the track should be greater than or equal to 30.
(iv) The sum of clusters on the track in VTPC-1 and VTPC-2 should be greater than or equal to 15, or the total number of clusters on the track in the GTPC should be greater than or equal to 6.
(v) The distance of closest approach of the fitted track to the interaction point (impact parameter) is required to be less than 2 cm in the x-plane and 0.4 cm in the y-plane.
For the acceptance cut, (ii), we studied the selection efficiency with simulations as a function of azimuthal angle $$\phi$$ for bins in total momentum p and transverse momentum $$p_\text {T}$$. This leads to a three-dimensional lookup table that defines the regions in $$(\phi , p, p_\text {T})$$ for which the selection efficiency is larger than 90%. Within this region, the detector is close to fully efficient and the corresponding correction factor is purely geometric, since the production of resonances is uniform in $$\phi$$ for an unpolarised beam and target.
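The $$(\phi , p, p_\text {T})$$ acceptance map described above can be sketched as follows. The binning, array names and helper functions are illustrative, not the experiment's actual code; only the logic (3D efficiency histogram, keep bins above 90%) follows the text:

```python
import numpy as np

# Illustrative sketch of the 3D acceptance lookup table: efficiency is
# histogrammed in (phi, p, pT) from simulation, and a track is accepted
# only if its bin has a selection efficiency above the threshold.
PHI_BINS = np.linspace(-np.pi, np.pi, 37)
P_BINS = np.linspace(0.0, 180.0, 46)   # GeV/c, assumed binning
PT_BINS = np.linspace(0.0, 1.5, 16)    # GeV/c, assumed binning

def build_acceptance_map(sim_generated, sim_accepted, threshold=0.9):
    """sim_generated / sim_accepted: (N, 3) arrays of (phi, p, pT) for all
    simulated tracks and for those passing reconstruction + cuts."""
    bins = (PHI_BINS, P_BINS, PT_BINS)
    gen, _ = np.histogramdd(sim_generated, bins=bins)
    acc, _ = np.histogramdd(sim_accepted, bins=bins)
    with np.errstate(invalid="ignore", divide="ignore"):
        eff = np.where(gen > 0, acc / gen, 0.0)
    return eff > threshold  # boolean lookup table

def in_acceptance(amap, phi, p, pt):
    # side="right" matches the half-open [low, high) histogram convention
    i = np.searchsorted(PHI_BINS, phi, side="right") - 1
    j = np.searchsorted(P_BINS, p, side="right") - 1
    k = np.searchsorted(PT_BINS, pt, side="right") - 1
    return bool(amap[i, j, k])
```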
The efficiency of each track-selection cut is shown in Table 2 for the data collected at 158 and 350 $$\text{ GeV }{/}\text{ c }$$.
No particle identification was used in this analysis. This increases the background but simplifies the analysis and increases the longitudinal momentum range of the results. The longitudinal momentum fraction, $$x'_\text {F}$$, was calculated as
\begin{aligned} x'_\text {F} = \frac{2p_\text {L}}{\sqrt{s}} \quad \left( \approx \frac{p_\text {L}}{p_\text {L}(\mathrm {max})}\right) , \end{aligned}
(1)
where $$p_\text {L}$$ is the longitudinal momentum of the $$\rho ^0$$-candidate in the centre of mass frame in the pion-nucleon interaction and $$\sqrt{s}$$ is the centre of mass energy of the interaction. $$p_\text {L}$$ is calculated using the mean mass of the $$\rho ^0$$ meson ($$m_{\rho ^0} = 0.775\,\text{ GeV }{/}\text{ c }^2$$) when boosting between the lab frame and the centre of mass frame. The mass of the nucleon used in the calculations is taken to be the average of the proton and neutron masses. There is no difference between $$x'_\text {F}$$ and the Feynman-x, $$x_\text {F} =p_\text {L}/p_\text {L} (\text {max})$$, for a particle pair originating from a $$\rho ^0$$ meson decay. For $$\omega$$ or K$$^{*0}$$ decays the difference is less than 0.01 in the $$x'_\text {F}$$ range covered by the results presented here. This difference approaches zero with increasing $$x'_\text {F}$$. For simplicity, in the following, $$x'_\text {F}$$ is denoted as $$x_\text {F}$$.
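A minimal sketch of the $$x'_\text {F}$$ calculation of Eq. (1), assuming a fixed-target $$\pi ^-$$ + N collision and assigning the $$\rho ^0$$ mass to the candidate for the boost, as described above (function and variable names are illustrative):

```python
import math

M_PI = 0.13957   # GeV/c^2, charged-pion mass
M_N = 0.93892    # GeV/c^2, average of proton and neutron masses
M_RHO = 0.775    # GeV/c^2, rho0 mass used for the boost

def x_feynman(p_beam, p_lab, pT):
    """x'_F = 2 p_L* / sqrt(s) for a candidate with lab longitudinal
    momentum p_lab and transverse momentum pT (all in GeV/c)."""
    e_beam = math.hypot(p_beam, M_PI)
    s = M_PI**2 + M_N**2 + 2.0 * M_N * e_beam
    sqrt_s = math.sqrt(s)
    # velocity and Lorentz factor of the centre-of-mass frame in the lab
    beta = p_beam / (e_beam + M_N)
    gamma = (e_beam + M_N) / sqrt_s
    # candidate energy computed with the rho0 mass, as in the text
    e_lab = math.sqrt(p_lab**2 + pT**2 + M_RHO**2)
    p_cm = gamma * (p_lab - beta * e_lab)  # longitudinal boost to CM
    return 2.0 * p_cm / sqrt_s
```

A candidate carrying the full 158 $$\text{ GeV }{/}\text{ c }$$ beam momentum gives $$x'_\text {F}$$ just below 1, while a candidate at rest in the lab is backward ($$x'_\text {F} < 0$$) in the centre of mass frame.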
### 3.2 Signal extraction
The raw yields of $$\rho ^0$$, $$\omega$$ and K$$^{*0}$$ mesons were obtained by performing a fit of inclusive invariant mass spectra. These were calculated by assuming every track that passes the cuts is a charged $$\pi$$. Then, for all pairs of positively and negatively charged particles, the invariant mass was calculated assuming pion masses for both particles. Examples of invariant mass spectra at 158 and 350 $$\text{ GeV }{/}\text{ c }$$ are shown in Fig. 5.
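The pairing step can be sketched as follows: every selected track is assigned the pion mass, and all opposite-charge combinations in an event enter the invariant mass spectrum (names are illustrative; tracks are plain `(px, py, pz)` tuples):

```python
import itertools
import math

M_PI = 0.13957  # GeV/c^2, pion mass assigned to every track

def inv_mass(p1, p2):
    """Invariant mass of two tracks (px, py, pz), both assumed pions."""
    e1 = math.sqrt(sum(c * c for c in p1) + M_PI**2)
    e2 = math.sqrt(sum(c * c for c in p2) + M_PI**2)
    e = e1 + e2
    p = [a + b for a, b in zip(p1, p2)]
    return math.sqrt(e * e - sum(c * c for c in p))

def pair_masses(pos_tracks, neg_tracks):
    """All opposite-charge combinations in one event; the charge-mixing
    background uses the same function on same-charge pairs instead."""
    return [inv_mass(a, b)
            for a, b in itertools.product(pos_tracks, neg_tracks)]
```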
In the inclusive invariant mass spectra, there is a large combinatorial background, especially at low $$x_\text {F}$$. The background is estimated with the so-called charge mixing method: the invariant mass spectra are calculated exactly as explained above, but using same-charge instead of opposite-charge tracks. The resulting charge mixing background spectra are shown in Fig. 5. As the normalisation of these spectra differs from the true background, the normalisation of the charge-mixed spectra is included as a parameter in the fit to the data. The uncertainty introduced by this choice of background estimate is assessed by comparing it with a background obtained from simulations. This Monte Carlo background is defined as the sum of:
• combinations of tracks that come from decays of different resonances, i.e. one track from a $$\rho ^0$$ and one from an $$\omega$$ (this can be done as the parent particles of tracks are known in the simulation),
• combinations of tracks coming directly from the interaction vertex and
• combinations of tracks coming from resonances (both meson and baryon) that are not included in the individual fitting-templates listed below.
As can be seen in Fig. 5, there is a good overall agreement between the two background estimation methods and the residual differences are used to estimate the systematics due to background subtraction. The boundaries of the default fit range are chosen to include all resonances of interest and to select the invariant mass region for which there is good agreement between the two background estimates, and hence the results have small systematic biases. This leads to the fit range in $$m_\text {inv} (\pi ^+\pi ^-)$$ of $$0.475{-}1.35\,\text{ GeV }{/}\text{ c }^2$$.
Event mixing was also investigated as an alternative way to estimate the background, taking particles from different events to form invariant mass spectra of $$\pi ^+\pi ^-$$ candidates. However, this method was found not to describe the shape of the background in simulations over the mass range of the $$\rho ^0$$, $$\omega$$ and K$$^{*0}$$ distributions needed to obtain reliable fit results. Refining the event mixing method by splitting the data into multiplicity classes did not improve its quality.
As there is a large number of resonances in the $$m_\text {inv} (\pi ^+\pi ^-)$$ region around the mass of the $$\rho ^0$$, such as the $$\omega$$ and K$$^{*0}$$ mesons, they all have to be taken into account. This has previously been shown in Ref. [44], where only taking into account $$\rho ^0$$ and $$\omega$$ mesons resulted in an inadequate fit, with a spurious peak at 0.6 $$\text{ GeV }{/}\text{ c }^2$$ in the $$\pi ^+\pi ^-$$ invariant mass spectra, due to decays of K$$^{*0}$$ mesons, where the kaon is assigned the mass of a pion. As there is no particle identification used in this analysis, the effect due to K$$^{*0}$$ meson production is expected to be strong and it must be included in the fitting procedure. Other contributions that are not represented by an individual template, such as $$\Lambda$$ decay products, are included in the Monte Carlo background.
The fitting procedure uses templates of the invariant mass distribution for each resonance of importance. This method of template fitting is similar to ideas used by many other experiments such as ALICE [45], ATLAS [46], CDF [47] and CMS [48], where it is also known as the cocktail fit method. The use of independent templates without interference terms is a good approximation, because the mass differences between resonances decaying to $$\pi ^+ + \pi ^-$$ are either large as compared to their width or they decay to $$\pi ^+ + \pi ^-$$ with small branching ratio only (e.g. about 1.5% for $$\omega$$).
The templates are constructed by passing simulated $$\pi ^-$$ + C production interactions, generated with the Epos 1.99 [12] hadronic interaction model using Crmc 1.5.3 [49], through the full NA61/SHINE detector Monte Carlo chain and then through the same reconstruction routines as the data. Crmc is an event generator package with access to a variety of different event generators, such as DPMJet 3.06 [40] and Epos LHC [50].
The template method also allows for the fitting of resonances with dominant three body decays, such as $$\omega$$, as well as resonances with two-body non-$$\pi ^+\pi ^-$$ decays, such as K$$^{*0}$$. A list of all decays with a branching ratio of over $$1\%$$ that are used in the templates is shown in Table 3. The templates and the data are split into bins of $$x_\text {F}$$, calculated as in Eq. 1.
Table 3
Decays of resonances for which $$m_\text {inv} (\pi ^+\pi ^-)$$ templates were calculated and fitted. Only decays with a branching ratio greater than 1% into at least one positively and one negatively charged particle are considered. Branching ratios were taken from [51]

| Resonance | Decay | Branching ratio (%) |
|---|---|---|
| $$\rho ^0$$ | $$\pi ^+\pi ^-$$ | 100.0 |
| $$\omega$$ | $$\pi ^+\pi ^-\pi ^0$$ | 89.1 |
| | $$\pi ^+\pi ^-$$ | 1.53 |
| K$$^{*0}$$ | K $$\pi$$ | 100.0 |
| f$$_2$$ | $$\pi ^+\pi ^-$$ | 57.0 |
| | $$\pi ^+\pi ^-\,2\pi ^0$$ | 7.7 |
| | K$$^+$$K$$^-$$ | 4.6 |
| | $$2\pi ^+\,2\pi ^-$$ | 2.8 |
| $$\eta$$ | $$\pi ^+\pi ^-\pi ^0$$ | 22.7 |
| | $$\pi ^+\pi ^-\gamma$$ | 4.6 |
| f$$_0$$ (980) | $$\pi ^+\pi ^-$$ | 50.0 |
| | K$$^+$$K$$^-$$ | 12.5 |
| a$$_2$$ | $$3\pi$$ | 70.1 |
| | $$\eta \,\pi$$ | 14.5 |
| | $$\omega \,\pi \,\pi$$ | 10.6 |
| | K $$\bar{\text {K}}$$ | 4.9 |
| $$\rho _3$$ | $$4\pi$$ | 71.1 |
| | $$\pi \,\pi$$ | 23.6 |
| | K K $$\pi$$ | 3.8 |
| | K $$\bar{\text {K}}$$ | 1.58 |
| K$$^0_\text {S}$$ | $$\pi ^+\pi ^-$$ | 69.20 |
The templates in the fit are the charge mixing background and the following resonances: $$\rho ^0$$, K$$^{*0}$$, $$\omega$$, f$$_2$$, f$$_0$$ (980), a$$_2$$, $$\rho _3$$, $$\eta$$ and K$$^0_\text {S}$$. The templates were generated from reconstructed simulations that have all the standard reconstruction cuts applied; they include effects due to the resolution of the detector and the fiducial acceptance. The templates used in the fits are presented in Fig. 15 in Appendix B. As can be seen, the a$$_2$$ and $$\rho _3$$ templates are broad and featureless similar to the background template. For this reason, these resonances cannot be fitted reliably and will be subtracted together with the background from figures displaying the result of the template fitting in the following.
The fit to the $$\pi ^+\pi ^-$$ mass spectrum is performed between masses of 0.475 and 1.35 $$\text{ GeV }{/}\text{ c }^2$$ using the expression
\begin{aligned} \mu (m_\text {inv}) = \sum _i f_i \, T_i(m_\text {inv}), \end{aligned}
(2)
where $$f_i$$ is the contribution for particle i, $$T_i$$ is the associated invariant mass template and $$m_\text {inv}$$ is the invariant mass. $$f_i$$ is constrained to be between 0 and 1. The templates are normalised to the same number of combinations as the data over the range of the fit. The fit uses a standard Poissonian likelihood function
\begin{aligned} \mathcal {L} = \prod _j \frac{\mu _j^{k_j} e^{-\mu _j}}{k_j!}, \end{aligned}
(3)
where $$k_j$$ is the actual number of combinations in the invariant mass bin j and $$\mu _j$$ is the expected number of combinations, taken from Eq. (2).
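A minimal sketch of the template fit of Eqs. (2)–(3). Here the bounded Poisson maximum-likelihood fit is approximated by a simple multiplicative (EM-style) update rather than the experiment's actual fitter; names and binning are illustrative:

```python
import numpy as np

def fit_templates(data, templates, n_iter=1000):
    """data: observed counts per m_inv bin; templates: array of shape
    (n_templates, n_bins). Returns the fitted template fractions f_i."""
    data = np.asarray(data, dtype=float)
    T = np.asarray(templates, dtype=float)
    f = np.full(len(T), 1.0 / len(T))      # start from equal fractions
    norm = T.sum(axis=1)
    for _ in range(n_iter):
        mu = np.clip(f @ T, 1e-12, None)   # expected counts, Eq. (2)
        # multiplicative update that increases the Poisson likelihood
        # of Eq. (3); at the fixed point the ML conditions hold
        f = f * (T @ (data / mu)) / norm
        f = np.clip(f, 0.0, 1.0)           # keep fractions in [0, 1]
    return f
```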
Two examples of the template-fitting are shown in Fig. 6 for 158 and 350 $$\text{ GeV }{/}\text{ c }$$. The fitted charge-mixing background as well as the contribution of the featureless a$$_2$$ and $$\rho _3$$ resonances are subtracted to highlight the different resonances. The full set of template fits are displayed in Appendix C for all $$x_\text {F}$$-bins and the two beam energies.
After the fraction of each template has been determined in the fit, the raw mean multiplicity $$n_i$$ of meson i per event in a given $$x_\text {F}$$ bin is determined from
\begin{aligned} n_i(x_\text {F}) = \frac{1}{N_\text {acc}} \sum _j f_i \, T_i(j), \end{aligned}
(4)
where $$N_\text {acc}$$ is the number of events after selection cuts, $$f_i$$ is the result of the fit and $$T_i$$ is the template of the meson of interest i, e.g. $$\rho ^0$$.
### 3.3 Correction factors
In order to obtain the true number of $$\rho ^0$$, $$\omega$$ and K$$^{*0}$$ mesons produced in $$\pi ^-$$ + C production interactions, three different corrections were applied to the raw yields. These corrections were calculated using 20 million events generated by the Epos 1.99 model using the Crmc package.
(i) The Monte Carlo simulations that were used to obtain the templates for the fitting procedure were used to calculate corrections due to geometrical acceptance, reconstruction efficiency, losses due to trigger bias, quality cuts and bin migration effects. For each $$x_\text {F}$$ bin, the correction factor $$C(x_\text {F})$$ is given by
\begin{aligned} C(x_\text {F}) = \frac{n_\text {MC}^\text {gen}(x_\text {F})}{n_\text {MC}^\text {acc}(x_\text {F})}, \end{aligned}
(5)
where
(a) $$n_\text {MC}^\text {gen}(x_\text {F})$$ is the mean multiplicity per event of $$\rho ^0$$ ($$\omega$$, K$$^{*0}$$) mesons produced in a given $$x_\text {F}$$ bin in $$\pi ^-$$ + C production interactions at a given beam momentum, including $$\rho ^0$$ ($$\omega$$, K$$^{*0}$$) mesons from higher mass resonance decays and
(b) $$n_\text {MC}^\text {acc}(x_\text {F})$$ is the mean multiplicity per event of reconstructed $$\rho ^0$$ ($$\omega$$, K$$^{*0}$$) mesons that are accepted after applying all event and track cuts.
The statistical uncertainties of the correction factors were calculated assuming binomial distributions for the number of events and resonances.
(ii) The contribution from $$\rho ^0$$ mesons produced by re-interactions in the target, estimated from the simulations. This contribution is less than 1% for all bins apart from $$x_\text {F} <0.15$$, where it is 1.7%.
(iii) The fitting method was validated by applying the same procedure to the simulated data set, using the background estimated from either the charge mixing method or the true background obtained from the simulation. The difference is applied as a multiplicative correction to the raw yield, $$f_i^\text {true} / f_i^\text {fit}$$, where $$f_i^\text {true}$$ is the true yield of resonance i and $$f_i^\text {fit}$$ is the yield obtained from the fit to the simulations. This correction is calculated separately for both background estimations and applied to the fits to the data that used the same estimation.
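Step (i) can be illustrated with a short sketch of Eq. (5) together with the binomial uncertainty mentioned above; the counts in the usage example are illustrative, not simulation output:

```python
import math

# Sketch of Eq. (5): the correction factor is the ratio of generated to
# accepted resonance counts in a simulated x_F bin, with a binomial
# uncertainty on the acceptance propagated to C = 1/eps.
def correction_factor(n_gen, n_acc):
    """n_gen, n_acc: generated and accepted resonance counts in the bin."""
    c = n_gen / n_acc
    eps = n_acc / n_gen                             # acceptance
    d_eps = math.sqrt(eps * (1.0 - eps) / n_gen)    # binomial error
    d_c = d_eps / eps**2                            # error on C = 1/eps
    return c, d_c
```

For example, 1000 generated and 250 accepted resonances give $$C = 4.0$$ with an uncertainty of about 0.22.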
The breakdown of these correction factors can be seen, for the $$\rho ^0$$ spectra at $$p_\text {beam}=158$$ and $$350\,\text{ GeV }{/}\text{ c }$$, in Fig. 7. The correction factor $$C(x_\text {F})$$ is broken down into three contributions: bias from the interaction trigger (T2), geometrical acceptance, and selection efficiency. The geometrical acceptance dominates for large $$x_\text {F}$$ values.
The correction derived from Monte Carlo simulations could introduce a bias in the result if the $$p_\text {T}$$ spectrum of the model differed from the true shape. This is because the extrapolation to full $$p_\text {T}$$ phase space is based on the model spectrum. To investigate this effect another hadronic interaction model was used, DPMJet 3.06 [40]. This model also provides $$p_\text {T}$$ spectra for each resonance measured in this analysis, and the difference between the correction factors found for DPMJet 3.06 and Epos 1.99 is less than 4%. This suggests that any bias introduced by the extrapolation to full $$p_\text {T}$$ phase space is small. The difference between the correction factors is used in the estimate of the systematic uncertainties.
The final measurement is calculated by taking the average of the result using the two different background description methods, charge mixing and Monte Carlo background, with all the correction factors that change calculated separately for the two methods. The difference between these two methods is taken to be a systematic uncertainty.
### 3.4 Uncertainties and cross checks
The statistical uncertainties in the ith $$x_\text {F}$$-bin are given by
\begin{aligned} \sigma _i^2 = \left( \Delta C_i \, n_i\right) ^2 + \left( C_i \, \sigma (n_i)\right) ^2, \end{aligned}
(6)
where $$n_i$$ and $$\sigma (n_i)$$ are the raw meson mean multiplicity per event and its uncertainty from the template fit. The contribution due to the uncertainty of the meson multiplicity dominates, as the uncertainty $$\Delta C_i$$ of the correction factors comes only from the statistics of the simulation (20 million events), which is much larger than that of the data.
The main contributors to the systematic uncertainties are
(i) The fitting method used for estimating the background shape and the fit procedure. The systematic uncertainty is taken to be half the difference between the two methods, using either charge mixing or Monte Carlo background, after the respective validation corrections have been applied. This estimate therefore combines the systematic uncertainty due to both the fitting method validation correction and the background estimation used; it is the dominant systematic uncertainty.
(ii) Correction factors. The correction factors calculated above were compared with factors found using a different hadronic interaction model, DPMJet 3.06.
(iii) Track cuts. The effect of the event and track selection cuts was checked by performing the analysis with the following cuts changed, compared to the values given in Sect. 3.1:
(a) The cut on the z-position of the interaction vertex was changed to be between $$-590$$ and $$-570$$ cm.
(b) The window in which off-time beam particles were not allowed was decreased from 2 to 1.5 $$\upmu$$s.
(c) The minimum number of clusters on the track was decreased to 25.
(d) The required sum of clusters on the track in VTPC-1 and VTPC-2 was decreased to 12 or increased to 18.
(e) The impact parameter cuts were loosened to less than 4 cm in the x-plane and 2 cm in the y-plane.
The systematic uncertainties were estimated from the differences between the results obtained using the standard analysis and ones obtained when adjusting the method as listed above. The individual systematic uncertainties were added in quadrature to obtain the total systematic uncertainties. They are dominated by the correction factor contribution, up to 15%, whereas the other contributions are less than 4%. Other sources of uncertainty, such as using templates from a different model, are found to be much smaller.
The fraction of target removed tracks is less than $$0.15\%$$ in all $$x_\text {F}$$ bins. The shape of the target removed distributions, after applying all the track and event cuts, is consistent with the background description so there is no additional correction or systematic uncertainty considered.
Several cross checks were performed to validate the results and check their stability. These include extending the range of the $$m_\text {inv} (\pi ^+\pi ^-)$$ fit, using a Breit–Wigner function to describe the $$\rho ^0$$ instead of a template, as well as a few other simpler checks.
#### 3.4.1 Fit range
The default fit range used in this analysis was restricted to the mass ranges of the resonances of interest. We tested an extended fit range by including all data down to the kinematic threshold of $$m_\text {inv} (\pi ^+\pi ^-) = 2m_\pi$$. For this purpose additional templates needed to be taken into account, including electrons and positrons pair-produced in the target by photons from $$\pi ^0$$ decays. The sum of all resonances produced by the Epos 1.99 model cannot, however, describe the low $$m_\text {inv} (\pi ^+\pi ^-)$$ region satisfactorily. In particular, a significant bump at a mass of $${\approx }0.4\,\text{ GeV }{/}\text{ c }^2$$ appears in the data that has no counterpart in the templates. No resonance, meson or baryon, could be found in Epos 1.99 that could describe this bump. To avoid any bias, the region $$0.35\,\text{ GeV }{/}\text{ c }^2< m_\text {inv} (\pi ^+\pi ^-) < 0.4\,\text{ GeV }{/}\text{ c }^2$$ was excluded from the fit. Further discussion of this bump is given in Appendix D.
Once this region is excluded from the fit a reasonable description of the $$m_\text {inv}$$ distribution down to the kinematic limit can be achieved, as shown in Fig. 8. However, the fit quality is worse and the agreement between the two background estimates is weaker. The poorer fit quality is most likely a combination of poorer performance of the estimate of the combinatorial background close to the kinematic threshold and the missing template to describe the bump at $${\approx }0.375\,\text{ GeV }{/}\text{ c }^2$$.
The yields obtained with the extended range differ by less than the systematic uncertainties from the yields with the original range, with the exception of one bin, and, to be conservative, the corresponding differences, which are of the order of 10%, are included in the systematic uncertainty.
#### 3.4.2 $$\rho ^0$$ mass
We checked for possible nuclear effects on the $$\rho ^0$$ mass [52, 53] by removing the $$\rho ^0$$ template from the fit and replacing it with a Breit–Wigner function. The function used is the one used in Ref. [54] with a modification to the decay width following Refs. [55] and [56], where the decay width is a function of mass $$m_\text {inv}$$,
\begin{aligned} {\text {BW}}(m_\text {inv}) = \frac{m_\text {inv} \, m_\text {R} \, \Gamma }{(m_\text {R}^2 - m_\text {inv} ^2)^2 + m_\text {R}^2 \, \Gamma ^2}, \end{aligned}
(7)
where $$m_\text {R}$$ is the mean mass of the fitted resonance and $$\Gamma$$ is given by
\begin{aligned} \Gamma (m_\text {inv}) = \Gamma _0 \left( \frac{m_\text {R}}{m_\text {inv}} \right) \left( \frac{q}{q_\text {R}} \right) ^{3/2} \left( \frac{q_\text {R}^2 + \delta ^2}{q^2 + \delta ^2} \right) , \end{aligned}
(8)
where q and $$q_\text {R}$$ are the pion three-momenta in the rest frame of the resonance, calculated with mass $$m_\text {inv}$$ and $$m_\text {R}$$, respectively. The parameter $$\delta$$ in the cutoff function has a value $$\delta = 0.3\,\text{ GeV }{/}\text{ c }$$.
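Eqs. (7)–(8) translate directly into code. The default width below is a typical PDG-like value for the $$\rho ^0$$ ($${\approx }0.149\,\text{ GeV }{/}\text{ c }^2$$) and is an assumption used only for illustration; the normalisation of the function is arbitrary:

```python
import math

M_PI = 0.13957  # GeV/c^2, charged-pion mass
DELTA = 0.3     # GeV/c, cutoff parameter of Eq. (8)

def q_pion(m):
    """Pion three-momentum in the rest frame of a resonance of mass m
    decaying to pi+ pi-."""
    return math.sqrt(max(m * m / 4.0 - M_PI**2, 0.0))

def bw(m_inv, m_r=0.775, gamma0=0.149):
    """Breit-Wigner of Eq. (7) with the mass-dependent width of Eq. (8)."""
    q, q_r = q_pion(m_inv), q_pion(m_r)
    gamma = (gamma0 * (m_r / m_inv) * (q / q_r) ** 1.5
             * (q_r**2 + DELTA**2) / (q**2 + DELTA**2))
    return (m_inv * m_r * gamma) / ((m_r**2 - m_inv**2) ** 2
                                    + m_r**2 * gamma**2)
```

At $$m_\text {inv} = m_\text {R}$$ the width reduces to $$\Gamma _0$$ and the function peaks, falling off on both sides of the resonance mass.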
We considered the mass as a free parameter and fixed the width value to the one provided by the particle data group [51]. The obtained mass values are consistent with the values quoted by the particle data group as shown in Fig. 9. The weighted average of the fitted masses is $$0.772\pm 0.001\,\text{ GeV }{/}\text{ c }^2$$, with no significant difference between the 158 and 350 $$\text{ GeV }{/}\text{ c }$$ data.
A simpler Breit–Wigner function was also tested,
\begin{aligned} {\text {BW}}(M) = \frac{\Gamma ^2}{(M - m_\text {R})^2 + \Gamma ^2} \end{aligned}
(9)
It is the function used to both sample resonances and generate their widths in Epos 1.99. Even though this function does not directly take into account effects which are considered in the event generator, such as decay products approaching the lower kinematic limit, or energy conservation for decay products at higher mass, the resulting fitted masses are compatible with the results from the more complicated Breit–Wigner function, Eq. (7).
The yields of the $$\rho ^0$$ when fitting with this Breit–Wigner function differ slightly from the yields calculated using the standard analysis method. These small differences of the order of 3% are included in the systematic uncertainties.
A comparison of the yields from the standard template analysis method, the extended fit range and when fitting a Breit–Wigner function (Eq. (7)) is shown in Fig. 10. As can be seen the differences are within the systematic uncertainties of the standard analysis. These small differences, of the order of 3% for the fits with a Breit–Wigner function and 10% for the extended fit range, are added in quadrature to the systematic uncertainties.
#### 3.4.3 Further checks
Further cross checks were performed to probe the stability of the fit and yield result. These include
(i) The data, along with the templates, were split into two equally sized regions of polar angle. Any polar-angle dependence of the result, introduced by insufficient modelling of different parts of the detector, would appear as a difference between the spectra from these independent data sets. The resulting multiplicity spectra were consistent within statistical uncertainties.
(ii) The data set was split according to different time ranges, both a night/day split and a first-half/second-half split of the run period. Any time-dependent systematic differences in the detector would result in discrepancies between the spectra from the different time ranges. Both resulting $$x_\text {F}$$ spectra were again consistent within statistical uncertainties.
(iii) Instead of assuming the pion mass for both tracks, one track was assigned the kaon mass. The number of combinations then doubles, as both mass assignments have to be considered for any given pair of tracks to allow the kaon to have either charge. This further increases the background, and because of the different shape of the background under the $$\pi$$ K invariant mass distribution, the systematic uncertainty for this method is larger than for the $$\pi \,\pi$$ method. The multiplicity spectra from this method were consistent within the statistical and systematic uncertainties of the standard analysis method.
All of these cross checks gave results consistent within the total uncertainties of the standard analysis.
## 4 Results
The yields of $$\rho ^0$$, $$\omega$$, and K$$^{*0}$$ mesons in $$\pi ^{-}$$ + C production interactions at 158 and 350 $$\text{ GeV }{/}\text{ c }$$ were calculated in bins of $$x_\text {F}$$ as follows
\begin{aligned} \frac{\mathrm {d}n}{\mathrm {d}x_\text {F}} = \frac{1}{N_\text {prod}} \frac{\mathrm {d}N_\text {part}}{\mathrm {d}x_\text {F}} = \frac{C(x_\text {F}) \, n(x_\text {F})}{\Delta x_\text {F}}, \end{aligned}
(10)
where $$N_\text {prod}$$ is the number of interaction events minus the events with elastic and quasi-elastic scattering (which are not included), $$N_\text {part}$$ is the true number of produced resonances, $$n(x_\text {F})$$ is the raw mean multiplicity per event of the meson from Eq. (4), $$\Delta x_\text {F}$$ is the width of the $$x_\text {F}$$ bin and $$C(x_\text {F})$$ is the total correction factor for event and multiplicity losses, as detailed above. Measured points with large statistical or systematic uncertainties (greater than 50%) are not shown. This cut removes three data points at large $$x_\text {F}$$ for the $$\omega$$ spectrum and one data point at large $$x_\text {F}$$ for the K$$^{*0}$$ spectrum at 158 $$\text{ GeV }{/}\text{ c }$$. In the case of the data taken at $$350\,\text{ GeV }{/}\text{ c }$$, only a limited $$x_\text {F}$$ range between 0 and 0.5 is accessible within the acceptance of NA61/SHINE. Only one data point of the $$\omega$$ spectrum survived the cut on the maximum uncertainty, and none for the K$$^{*0}$$ spectrum. Therefore we present only $$\rho ^0$$ spectra for the $$350\,\text{ GeV }{/}\text{ c }$$ data.
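Eq. (10) amounts to a per-bin rescaling of the raw multiplicity; a minimal sketch (the raw multiplicity, correction factor and bin width below are invented numbers for illustration):

```python
def dn_dxf(n_raw, correction, bin_width):
    """Corrected multiplicity per event and per x_F bin, as in Eq. (10):
    dn/dx_F = C(x_F) * n(x_F) / Delta x_F."""
    return correction * n_raw / bin_width

# Hypothetical bin: raw mean multiplicity 0.012 per event,
# total correction factor C = 1.8, bin width Delta x_F = 0.1:
value = dn_dxf(n_raw=0.012, correction=1.8, bin_width=0.1)  # 0.216
```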
The spectra of $$\rho ^0$$, $$\omega$$, and K$$^{*0}$$ mesons produced in production $$\pi ^-$$ + C interactions are shown in Fig. 11. The average $$x_\text {F}$$ in each bin is used to display the data points in this and in the following figures. It is worthwhile noting that this average is not corrected for the detector acceptance within the bin and is calculated from all oppositely charged combinations including combinatorial background, i.e. for each $$x_\text {F}$$ bin i the average is given by the arithmetic mean $$\langle x_\text {F} \rangle _{i} = \frac{1}{N_i} \sum _{j=1}^{N_i} (x_\text {F})_j$$, where the sum runs over all $$N_i$$ track combinations in the bin. For a detailed comparison of these data with model predictions it is therefore recommended to compare to model predictions binned in the same way as the data rather than comparing them at the average $$x_\text {F}$$.
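The per-bin average described here is a plain arithmetic mean over all track combinations falling into each bin; a sketch of that binning (the sample values and bin edges are invented):

```python
from collections import defaultdict

def binned_mean_xf(xf_values, bin_edges):
    """Arithmetic mean of x_F in each bin, computed over all track
    combinations that fall into the bin (no acceptance correction)."""
    sums, counts = defaultdict(float), defaultdict(int)
    for x in xf_values:
        for i in range(len(bin_edges) - 1):
            if bin_edges[i] <= x < bin_edges[i + 1]:
                sums[i] += x
                counts[i] += 1
                break
    return {i: sums[i] / counts[i] for i in counts}

means = binned_mean_xf([0.05, 0.12, 0.18, 0.25], [0.0, 0.1, 0.2, 0.3])
# bin 0 -> 0.05, bin 1 -> 0.15, bin 2 -> 0.25
```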
As can be seen in Fig. 11, no dependence of the $$\rho ^0$$ multiplicities on beam energy was found within the uncertainties of the measurement. Out of the three resonances studied here, the multiplicity of $$\rho ^0$$ mesons is the largest at large $$x_\text {F}$$, i.e. the region most relevant for the development of cosmic-ray air showers. Numerical results, including statistical and systematic uncertainties, are given in Tables 4, 5, and 6. It is worthwhile noting that due to improvements in the analysis procedure the final $$\rho ^0$$ multiplicities at 158 $$\text{ GeV }{/}\text{ c }$$ listed in Table 4 are about 25% smaller than the preliminary results presented in [33].
The measured spectra are compared to model predictions by QGSJet II-04 [58], Epos 1.99 [12], DPMJet 3.06 [40], Sibyll 2.1 [59], Sibyll 2.3 [60] and Epos LHC [50] in Figs. 12 and 13. For the purpose of display, the multiplicities were scaled by $$x_\text {F}$$.
It can be seen that in the low $$x_\text {F}$$ region ($$<0.3$$) all hadronic interaction models overestimate the $$\rho ^0$$ yield, with discrepancies of up to +80%. At intermediate $$x_\text {F}$$ ($$0.4< x_\text {F} < 0.7$$) the $$\rho ^0$$ production is underestimated by up to $$-60$$%. It is interesting to note that even though QGSJet II-04, Sibyll 2.3 and Epos LHC were tuned to $$\pi ^+$$+p data from NA22 [17], these models cannot reproduce the measurement presented here. The large underestimation in QGSJet II-04 affects mainly non-forward $$\rho ^0$$ production, which is not treated explicitly in the model. This explains the large difference in spectral shape compared to the other hadronic models and the large deviations between the model and the measurement. The best description of our data in the forward range ($$x_\text {F} >0.4$$) is given by Sibyll 2.3, which describes the data within 10%.
The shape of the measured $$\omega$$ spectrum is in approximate agreement with all of the models shown (QGSJet II-04 does not include $$\omega$$ mesons in the model). Also the measured normalisation is approximately reproduced by all models but Epos 1.99, which produces too many $$\omega$$ mesons above $$x_\text {F} >0.1$$.
The measured multiplicity of K$$^{*0}$$ mesons is not reproduced by any of the models over the full $$x_\text {F}$$ range. DPMJet 3.06 gives a correct description of the yields only at low $$x_\text {F}$$ but underpredicts the multiplicity at large $$x_\text {F}$$ and the opposite is true for Epos LHC and Epos 1.99 which are in agreement with the measurement only at $$x_\text {F} \gtrsim 0.6$$. Sibyll 2.3 and Sibyll 2.1 predict a too low number of K$$^{*0}$$ mesons at all $$x_\text {F}$$ values.
The ratios between combinations of the three meson measurements are shown in Fig. 21 in Appendix E, where it can be seen that no model can consistently describe the results.
Comparisons between the results of this analysis and measurements of other experiments are presented in Fig. 14 for $$\rho ^0$$ and $$\omega$$ mesons. The two other experiments shown are NA22 [17] and LEBC-EHS (NA27) [57], both of which used a hydrogen target. NA22 had a $$\pi ^+$$ beam at 250 $$\text{ GeV }{/}\text{ c }$$ while LEBC-EHS had a $$\pi ^-$$ beam at 360 $$\text{ GeV }{/}\text{ c }$$. The results from NA22 and LEBC-EHS are scaled by their measured inelastic cross sections: $$20.94\pm 0.12\,\text{ mb }$$ for NA22 [61] and $$21.6\,\text{ mb }$$ for LEBC-EHS [57]. There is good agreement between the previous measurements with proton targets and the results from this analysis for $$x_\text {F} <0.6$$. At larger $$x_\text {F}$$ the $$\rho ^0$$ yields measured in this analysis show a decrease that is not present in the $$\pi$$+p data and could thus be an effect of the nuclear target used for the measurement presented here. The comparison of the measurements of the $$\omega$$ multiplicities shows no significant differences between the other experiments and results from this analysis.
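The scaling applied to the NA22 and LEBC-EHS points is simply a division of the differential cross section by the inelastic cross section, dn/dx_F = (1/σ_inel) dσ/dx_F; for illustration (the differential cross section value below is invented):

```python
def cross_section_to_multiplicity(dsigma_dxf_mb, sigma_inel_mb):
    """Convert a differential production cross section (in mb) into a
    per-event multiplicity by dividing by the inelastic cross section."""
    return dsigma_dxf_mb / sigma_inel_mb

# NA22 inelastic cross section: 20.94 mb; hypothetical dsigma/dx_F = 4.2 mb
dn = cross_section_to_multiplicity(4.2, 20.94)  # ~0.2 per event
```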
## 5 Summary
This article presents experimental results on $$\rho ^0$$, $$\omega$$ and K$$^{*0}$$ $$x_\text {F}$$-spectra in $$\pi ^-$$ + C production interactions at $$158\,\text{ GeV }{/}\text{ c }$$ and the $$\rho ^0$$ spectra at 350 $$\text{ GeV }{/}\text{ c }$$ from the NA61/SHINE spectrometer at the CERN SPS. These results are the first $$\pi ^-$$ + C measurements taken in this energy range and are important to tune hadronic interaction models used to understand the measurements of cosmic-ray air showers.
The comparisons of the measured spectra to predictions of hadronic interaction models suggest that further tuning is required for all models to reproduce the measured spectra of $$\rho ^0$$, $$\omega$$ and K$$^{*0}$$ mesons over the full range of $$x_\text {F}$$. Recent re-tunes of these models to resonance data in $$\pi +p$$ interactions resulted in changes of the muon number at ground level of up to 25% [14, 60]. The new data provided here for $$\pi$$ + C interactions give a more adequate reference for the pion–air interactions relevant for air showers and will help to establish the effect of forward resonance production on muons in air showers with the precision needed for using the muon number to estimate the particle type of primary cosmic rays, as e.g. planned within the upgrade of the Pierre Auger Observatory [62].
### Acknowledgements
We would like to thank the CERN EP, BE and EN Departments for the strong support of NA61/SHINE. This work was supported by the Hungarian Scientific Research Fund (Grants OTKA 68506 and 71989), the János Bolyai Research Scholarship of the Hungarian Academy of Sciences, the Polish Ministry of Science and Higher Education (Grants 667/N-CERN/2010/0, NN 202 48 4339 and NN 202 23 1837), the Polish National Center for Science (Grants 2011/03/N/ST2/03691, 2013/11/N/ST2/03879, 2014/13/N/ST2/02565, 2014/14/E/ST2/00018 and 2015/18/M/ST2/00125, 2015/19/N/ST2 /01689), the Foundation for Polish Science — MPD program, co-financed by the European Union within the European Regional Development Fund, the Federal Agency of Education of the Ministry of Education and Science of the Russian Federation (SPbSU research Grant 11.38.242.2015), the Russian Academy of Science and the Russian Foundation for Basic Research (Grants 08-02-00018, 09-02-00664 and 12-02-91503-CERN), the National Research Nuclear University MEPhI in the framework of the Russian Academic Excellence Project (contract No. 02.a03.21.0005, 27.08.2013), the Ministry of Education, Culture, Sports, Science and Technology, Japan, Grant-in-Aid for Scientific Research (Grants 18071005, 19034011, 19740162, 20740160 and 20039012), the German Research Foundation (Grant GA 1480/2-2), the EU-funded Marie Curie Outgoing Fellowship, Grant PIOF-GA-2013-624803, the Bulgarian Nuclear Regulatory Agency and the Joint Institute for Nuclear Research, Dubna (bilateral contract No. 4418-1-15/17), Bulgarian National Science Fund (Grant DN08/11), Ministry of Education and Science of the Republic of Serbia (Grant OI171002), Swiss Nationalfonds Foundation (Grant 200020117913/1), ETH Research Grant TH-01 07-3 and the US Department of Energy.
## References
1. R. Engel, D. Heck, T. Pierog, Extensive air showers and hadronic interactions at high energy. Annu. Rev. Nucl. Part. Sci. 61, 467–489 (2011)
2. J. Abraham et al., Properties and performance of the prototype instrument for the Pierre Auger Observatory. Nucl. Instrum. Methods A 523, 50–95 (2004)
3. R. Abbasi et al., IceTop: the surface component of IceCube. Nucl. Instrum. Methods A 700, 188–220 (2013)
4. G. Navarra et al., KASCADE-Grande: a large acceptance, high-resolution cosmic-ray detector up to $$10^{18}$$ eV. Nucl. Instrum. Methods A 518, 207–209 (2004)
5. T. Abu-Zayyad et al., The surface detector array of the Telescope Array experiment. Nucl. Instrum. Methods A 689, 87–97 (2012)
6. T. Abu-Zayyad et al., Evidence for changing of cosmic ray composition between $${10}^{17}$$ and $${10}^{18}$$ eV from multicomponent measurements. Phys. Rev. Lett. 84, 4276–4279 (2000)
7. J.C. Arteaga-Velazquez et al., Test of hadronic interaction models with the KASCADE-Grande muon data. EPJ Web Conf. 52, 07002 (2013)
8. A. Aab et al., Muons in air showers at the Pierre Auger Observatory: mean number in highly inclined events. Phys. Rev. D 91, 032003 (2015)
9. A. Aab et al., Muons in air showers at the Pierre Auger Observatory: measurement of atmospheric production depth. Phys. Rev. D 90, 012012 (2014)
10. A. Aab et al., Testing hadronic interactions at ultrahigh energies with air showers measured by the Pierre Auger Observatory. Phys. Rev. Lett. 117(19), 192001 (2016)
11. J. Matthews, A Heitler model of extensive air showers. Astropart. Phys. 22, 387–397 (2005)
12. T. Pierog, K. Werner, Muon production in extended air shower simulations. Phys. Rev. Lett. 101, 171101 (2008)
13. H.-J. Drescher, Remnant break-up and muon production in cosmic ray air showers. Phys. Rev. D 77, 056003 (2008)
14. S. Ostapchenko, QGSJET-II: physics, recent improvements, and results for air showers. EPJ Web Conf. 52, 02001 (2013)
15. M. Adamus et al., Inclusive $$\pi ^0$$ production in $$\pi ^+ p$$, $$K^+ p$$ and $$p p$$ interactions at 250 GeV/c. Z. Phys. C 35, 7 (1987) [Sov. J. Nucl. Phys. 47, 271 (1988)]
16. I.V. Azhinenko et al., Neutral kaon production in K$$^+$$p and $$\pi$$$$^+$$p interactions at 250 GeV/c. Z. Phys. C 46, 525–536 (1990)
17. N.M. Agababyan et al., Inclusive production of vector mesons in $$\pi$$$$^+$$p interactions at 250 GeV/c. Z. Phys. C 46, 387–395 (1990)
18. H.-J. Drescher, G.R. Farrar, Dominant contributions to lateral distribution functions in ultra-high energy cosmic ray air showers. Astropart. Phys. 19, 235–244 (2003)
19. I.C. Mariş, Hadron production measurements with the NA61/SHINE experiment and their relevance for air shower simulations. Proc. 31st ICRC, p. 1059 (2009)
20. T. Eichten et al., Particle production in proton interactions in nuclei at 24 GeV/c. Nucl. Phys. B 44, 333–343 (1972)
21. T. Abbott et al., Measurement of particle production in proton induced reactions at 14.6 GeV/c. Phys. Rev. D 45, 3906–3920 (1992)
22. G. Ambrosini et al., Pion yield from 450 GeV/c protons on beryllium. Phys. Lett. B 425, 208–214 (1998)
23. C. Alt et al., Inclusive production of charged pions in p + C collisions at 158 GeV/c beam momentum. Eur. Phys. J. C 49, 897–917 (2007)
24. M. Apollonio et al., Forward production of charged pions with incident protons on nuclear targets at the CERN PS. Phys. Rev. C 80, 035208 (2009)
25. M.G. Catanesi et al., Measurement of the production cross-sections of $$\pi^\pm$$ in p–C and $$\pi^\pm$$–C interactions at 12 GeV/c. Astropart. Phys. 29, 257–281 (2008)
26. M.G. Catanesi et al., Forward $$\pi^\pm$$ production in p–O$$_2$$ and p–N$$_2$$ interactions at 12 GeV/c. Astropart. Phys. 30, 124–132 (2008)
27. D.S. Barton et al., Experimental study of the $$A$$-dependence of inclusive hadron fragmentation. Phys. Rev. D 27, 2580 (1983)
28. M. Aguilar-Benitez et al., Vector meson production in $$\pi$$–p interactions at 360 GeV/c. Z. Phys. C 44, 531 (1989)
29. J.E. Elias et al., Experimental study of multiparticle production in hadron–nucleus interactions at high energy. Phys. Rev. D 22, 13–35 (1980)
30. N. Abgrall et al., NA61/SHINE facility at the CERN SPS: beams and detector system. JINST 9, P06005 (2014)
31. M. Unger, Results from NA61/SHINE. EPJ Web Conf. 52, 01009 (2013)
32. H. Dembinski, Measurement of hadron–carbon interactions for better understanding of air showers with NA61/SHINE. Proc. 33rd ICRC, p. 0688 (2013)
33. A. Herve, Results from pion–carbon interactions measured by NA61/SHINE for better understanding of extensive air showers. PoS ICRC2015, p. 330 (2015)
34. A.E. Kiryunin et al., GEANT4 physics evaluation with testbeam data of the ATLAS hadronic end-cap calorimeter. Nucl. Instrum. Methods A 560, 278–290 (2006)
35. J.V. Damgov, CMS HCAL testbeam results and comparison with GEANT4 simulation. AIP Conf. Proc. 867, 471–478 (2006)
36. C. Adloff et al., Validation of GEANT4 Monte Carlo models with a highly granular scintillator-steel hadron calorimeter. JINST 8, P07005 (2013)
37. S. Afanasev et al., The NA49 large acceptance hadron detector. Nucl. Instrum. Methods A 430, 210–244 (1999)
38. C. Bovet, S. Milner, A. Placci, The CEDAR project. Cherenkov differential counters with achromatic ring focus. IEEE Trans. Nucl. Sci. 25, 572–576 (1978)
39. N. Abgrall et al., Calibration and analysis of the 2007 data. CERN-SPSC-2008-018, SPSC-SR-033 (2008)
40. S. Roesler, R. Engel, J. Ranft, The Monte Carlo event generator DPMJET-III (Springer, Berlin, 2001), pp. 1033–1038
41. T. Pierog, private communication (2013)
42. R. Brun et al., GEANT: detector description and simulation tool. CERN Long Writeup W5013 (1993)
43. N. Abgrall et al., Measurements of cross sections and charged pion spectra in proton–carbon interactions at 31 GeV/c. Phys. Rev. C 84, 034604 (2011)
44. G. Jancso et al., Evidence for dominant vector-meson production in inelastic proton–proton collisions at 53 GeV c.m. energy. Nucl. Phys. B 124, 1–11 (1977)
45. M.K. Köhler et al., Dielectron measurements in pp, p–Pb and Pb–Pb collisions with ALICE at the LHC. Nucl. Phys. A 931, 665–669 (2014)
46. ATLAS Collaboration, Determination of the top quark mass with a template method in the all hadronic decay channel using 2.04/fb of ATLAS data. ATLAS-CONF-2012-030 (2012)
47. A. Abulencia et al., Top quark mass measurement using the template method in the lepton + jets channel at CDF II. Phys. Rev. D 73, 032003 (2006)
48. S. Chatrchyan et al., Measurement of the top-quark mass in $$t\bar{t}$$ events with dilepton final states in pp collisions at $$\sqrt{s}$$ = 7 TeV. Eur. Phys. J. C 72, 2202 (2012)
49. T. Pierog, C. Baus, R. Ulrich. https://web.ikp.kit.edu/rulrich/crmc.html
50. T. Pierog et al., EPOS LHC: test of collective hadronization with LHC data. Phys. Rev. C 92, 034906 (2015)
51. C. Patrignani et al., Review of particle physics. Chin. Phys. C 40(10), 100001 (2016)
52. R.S. Hayano, T. Hatsuda, Hadron properties in the nuclear medium. Rev. Mod. Phys. 82, 2949 (2010)
53. X.-M. Jin, D.B. Leinweber, Valid QCD sum rules for vector mesons in nuclear matter. Phys. Rev. C 52, 3344–3352 (1995)
54. C. Adler et al., Coherent $$\rho^0$$ production in ultraperipheral heavy ion collisions. Phys. Rev. Lett. 89, 272302 (2002)
55. S. Teis et al., Pion production in heavy ion collisions at SIS energies. Z. Phys. A 356, 421–435 (1997)
56. J.H. Koch, N. Ohtsuka, E.J. Moniz, Nuclear photoabsorption and Compton scattering at intermediate energy. Ann. Phys. 154, 99–160 (1984)
57. M. Aguilar-Benitez et al., Vector meson production in $$\pi$$–p interactions at 360 GeV/c. Z. Phys. C 44, 531–539 (1989)
58. S. Ostapchenko, Monte Carlo treatment of hadronic interactions in enhanced Pomeron scheme: I. QGSJET-II model. Phys. Rev. D 83, 014018 (2011)
59. E.-J. Ahn et al., Cosmic ray interaction event generator SIBYLL 2.1. Phys. Rev. D 80, 094003 (2009)
60. F. Riehn et al., A new version of the event generator Sibyll. PoS ICRC2015, p. 558 (2015)
61. M. Adamus et al., Cross-sections and charged multiplicity distributions for $$\pi$$$$^+$$p, K$$^+$$p and pp interactions at 250 GeV/c. Z. Phys. C 32, 475 (1986)
62. A. Aab et al., The Pierre Auger Observatory upgrade—preliminary design report (2016)
Funded by SCOAP3
## Authors and Affiliations
Y. Ali (13), E. V. Andronov (22), T. Antićić (3), B. Baatar (20), M. Baszczyk (14), S. Bhosale (11), A. Blondel (25), M. Bogomilov (2), A. Brandin (21), A. Bravar (25), J. Brzychczyk (13), S. A. Bunyatov (20), O. Busygina (19), H. Cherif (7), M. Ćirković (23), T. Czopowicz (18), A. Damyanova (25), N. Davis (11), H. Dembinski (5), M. Deveaux (7), W. Dominik (16), P. Dorosz (14), J. Dumarchez (4), R. Engel (5), A. Ereditato (24), S. Faas (5), G. A. Feofilov (22), Z. Fodor (8, 17), C. Francois (24), X. Garrido (5), A. Garibov (1), M. Gaździcki (7, 10), M. Golubeva (19), K. Grebieszkow (18), F. Guber (19), A. Haesler (25), A. E. Hervé (5), J. Hylen (26), S. N. Igolkin (22), A. Ivashkin (19), S. R. Johnson (28), E. Kaptur (15), M. Kiełbowicz (11), V. A. Kireyeu (20), V. Klochkov (7), V. I. Kolesnikov (20), D. Kolev (2), A. Korzenev (25), V. N. Kovalenko (22), K. Kowalik (12), S. Kowalski (15), M. Koziel (7), A. Krasnoperov (20), W. Kucewicz (14), M. Kuich (16), A. Kurepin (19), D. Larsen (13), A. László (8), T. V. Lazareva (22), M. Lewicki (17), B. Lundberg (26), B. Łysakowski (15), V. V. Lyubushkin (20), I. C. Mariş (5), M. Maćkowiak-Pawłowska (18), B. Maksiak (18), A. I. Malakhov (20), D. Manić (23), A. Marchionni (26), A. Marcinek (11), A. D. Marino (28), K. Marton (8), H.-J. Mathes (5), T. Matulewicz (16), V. Matveev (20), G. L. Melkumov (20), A. O. Merzlaya (22), B. Messerly (29), Ł. Mik (14), G. B. Mills (27), S. Morozov (19, 21), S. Mrówczyński (10), Y. Nagai (28), V. Ozvenchuk (11), V. Paolone (29), M. Pavin (3, 4), O. Petukhov (19, 21), C. Pistillo (24), R. Płaneta (13), B. A. Popov (4, 20), S. Puławski (15), J. Puzović (23), R. Rameika (26), W. Rauch (6), M. Ravonel (25), R. Renfordt (7), E. Richter-Wąs (13), D. Röhrich (9), E. Rondio (12), M. Roth (5), B. T. Rumberger (28), M. Ruprecht (5), A. Rustamov (1, 7), M. Rybczynski (10), A. Rybicki (11), K. Schmidt (15), I. Selyuzhenkov (21), A. Yu. Seryakov (22), P. Seyboth (10), M. Słodkowski (18), A. Snoch (7), P. Staszel (13), G. Stefanek (10), J. Stepaniak (12), M. Strikhanov (21), H. Ströbele (7), T. Šuša (3), M. Szuba (5), A. Taranenko (21), A. Tefelska (18), D. Tefelski (18), V. Tereshchenko (20), A. Toia (7), R. Tsenov (2), L. Turko (17), R. Ulrich (5), M. Unger (5; corresponding author), F. F. Valiev (22), D. Veberič (5), V. V. Vechernin (22), M. Walewski (16), A. Wickremasinghe (29), C. Wilkinson (24), Z. Włodarczyk (10), A. Wojtaszek-Szwarc (10), O. Wyszyński (13), L. Zambelli (4), E. D. Zimmerman (28)
1. National Nuclear Research Center, Baku, Azerbaijan
2. Faculty of Physics, University of Sofia, Sofia, Bulgaria
3. Ruđer Bošković Institute, Zagreb, Croatia
4. LPNHE, University of Paris VI and VII, Paris, France
5. Karlsruhe Institute of Technology, Karlsruhe, Germany
6. Fachhochschule Frankfurt, Frankfurt, Germany
7. University of Frankfurt, Frankfurt, Germany
8. Wigner Research Centre for Physics of the Hungarian Academy of Sciences, Budapest, Hungary
9. University of Bergen, Bergen, Norway
10. Jan Kochanowski University in Kielce, Kielce, Poland
11. H. Niewodniczański Institute of Nuclear Physics of the Polish Academy of Sciences, Kraków, Poland
12. National Centre for Nuclear Research, Warsaw, Poland
13. Jagiellonian University, Kraków, Poland
14. AGH University of Science and Technology, Krakow, Poland
15. University of Silesia, Katowice, Poland
16. University of Warsaw, Warsaw, Poland
17. University of Wrocław, Wrocław, Poland
18. Warsaw University of Technology, Warsaw, Poland
19. Institute for Nuclear Research, Moscow, Russia
20. Joint Institute for Nuclear Research, Dubna, Russia
21. National Research Nuclear University (Moscow Engineering Physics Institute), Moscow, Russia
22. St. Petersburg State University, St. Petersburg, Russia
http://www.math.uni-bonn.de/ag/ana/WiSe1415/S5B1_WS_14_15.html?language=en | Instructors
Prof. Dr. Herbert Koch, Prof. Dr. Christoph Thiele, Dr. Roland Donninger
Friday, 14 (c.t.) - 16, Seminar room 1.008
Talks
• October 17, 2014: Polona Durcik (University of Bonn)
Title: A proof of the A_2 theorem.
Abstract: We present an alternative proof of the A_2 theorem given by A. Lerner and F. Nazarov. Their approach is based on pointwise estimates of Calderon-Zygmund operators by dyadic sparse operators.
• October 24, 2014: Gennady Uraltsev (University of Bonn)
Title: Quadratic Carleson in the Walsh case.
Abstract: We will present V. Lie's proof of the weak L^2 boundedness of the quadratic Carleson operator revisited in the Walsh case. We hope that this approach clarifies the main ideas of the proof while simplifying some technical estimates. Finally, we will highlight the additional machinery needed to prove strong L^2 bounds directly.
• October 31, 2014: Shaoming Guo (University of Bonn)
Title: Geometric proof of Bourgain's L^2 bounds of the maximal operator along analytic vector fields.
Abstract: We will apply the time-frequency decomposition initiated by Lacey and Li to provide a geometric proof of Bourgain's L^2 bounds of the maximal operator along analytic vector fields.
• November 7, 2014: Michal Warchalski (University of Bonn)
Title: Wittwer's inequality via outer measure spaces.
Abstract: We will present a proof of a generalization of Wittwer's inequality with an arbitrary reference measure given by Christoph Thiele, Sergei Treil and Alexander Volberg. For this we will use embeddings into outer measure spaces and concavity arguments.
• November 14, 2014: Christian Zillinger (University of Bonn)
Title: Linear inviscid damping for monotone shear flows.
Abstract: We will present a proof of linear stability, scattering and damping for monotone shear flow solutions to the 2D Euler equations, in both an infinite and a finite periodic channel. A particular focus will be on the additional boundary effects arising in the latter setting.
• November 21, 2014:
The first speaker [2:15 pm]: Damiano Foschi (Universita di Ferrara)
Title: Local wellposedness of semilinear Schrodinger equations under minimal smoothness assumptions for the nonlinearity
Abstract:
The problem of local well-posedness for semilinear Schrodinger equations $i u_t + \Delta u = f(u)$ is well understood for smooth nonlinearities. When we consider power-like nonlinear terms of the form $f(u) = |u|^{p-1} u$ the degree of the power is also a measure of the smoothness (near zero) of the nonlinearity. A simple scaling argument can show that local wellposedness for the initial value problem with data in the Sobolev space $H^s$ requires that $p \leq 1 + 4/(n-2s)_+$. This scaling condition alone usually is not sufficient. Known results require also some lower bound for $p$: Cazenave and Weissler (1990) proved LWP with $p > \floor{s} + 1$; arguments of Ginibre, Ozawa and Velo (1994) allowed to relax the condition to $p > s$; Pecher (1997) improved to $p > s-1$ when $2 < s < 4$, and $p > s-2$ when $s \geq 4$; recently Uchizono and Wada (2012) obtained LWP with $p < s/2$ when $2 < s < 4$. We will show that these lower bounds for $p$ are not yet optimal. For example when $s=4$ we will show how to obtain LWP for $p > 3/2$.
The second speaker [3:15 pm]: Bartosz Trojan (University of Wroclaw)
Title: Bourgain's logarithmic lemma: 2-parameter case.
Abstract: We discuss 2-parameter generalization of Bourgain's logarithmic lemma arising in the context of pointwise ergodic theory.
• November 28, 2014: Mariana Smit Vega Garcia
Title: New developments in the lower dimensional obstacle problem
Abstract: We will describe the Signorini, or lower-dimensional obstacle problem, for a uniformly elliptic, divergence form operator $L =$ div$(A(x)\nabla)$ with Lipschitz continuous coefficients. We will give an overview of this problem and discuss some recent developments, including the optimal regularity of the solution and the regularity of the free boundary. This is joint work with Nicola Garofalo and Arshak Petrosyan.
• December 5, 2014: Pavel Zorin-Kranich (University of Bonn)
Title: Variational Walsh Carleson
Abstract: I will motivate and present a version of Bourgain's multi-frequency lemma with two bounded r-variation hypotheses due to Oberlin.
• December 12, 2014: Wenhui Shi (University of Bonn)
Title: A higher order boundary Harnack inequality.
Abstract: We will present a higher order boundary Harnack inequality for harmonic functions and show its application to the free boundary problems. This is a method due to De Silva and Savin.
• January 16, 2015: Pawel Biernat (University of Bonn)
Title: Formal construction of singular solutions to harmonic map heat flow.
Abstract: Heat flow for harmonic maps is known to produce finite-time singularities from smooth initial data. These singular solutions arise for a large class of initial data and present a major obstacle in solving the heat flow equation for arbitrarily large times. I will show how to (formally) construct such singular solutions using matched asymptotics and how to determine their blow-up rate (the speed with which the singularity forms).
• January 23, 2015: Emil Wiedemann (University of Bonn)
Title: Weak Solutions for the 2D Stationary Euler Equations.
Abstract: We present recent work by A. Choffrut and L. Szekelyhidi on the stationary Euler equations. It is proved that in any L^2-neighbouhood of a smooth solution, there exist infinitely many weak solutions. Surprisingly, this is true even in two dimensions.
• January 30, 2015: Mariusz Mirek (University of Bonn)
Title: Recent developments in discrete harmonic analysis.
Abstract: In recent times - particularly the last two decades - discrete analogues in harmonic analysis have gone through a period of considerable changes and developments. This is due in part to Bourgain's pointwise ergodic theorem for the squares on L^p, (p>1). The main aim of this talk is to discuss recent developments in discrete harmonic analysis. We will be mainly concerned with the discrete maximal functions and singular integral operators along polynomial mappings. We will also discuss two-parameter discrete analogues. All the results are subjects of the ongoing projects with Elias M. Stein, Bartosz Trojan and Jim Wright. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8785429000854492, "perplexity": 1208.8819624040398}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676593378.85/warc/CC-MAIN-20180722155052-20180722175052-00485.warc.gz"} |
http://mathhelpforum.com/algebra/13079-divide-frac-whole-numb.html | # Math Help - Divide Frac by whole numb
1. ## Divide Frac by whole numb
4 3/5 divided by 5 please solve and explain how it was done.
4 3/5 divided by 5 please solve and explain how it was done.
ok, when we are dividing fractions, what we do is that we take the inverse of the fraction on the bottom (meaning i just flip it upside down) and multiply it by the top, and that's our solution
e.g. (2/3) / (4/5) = (2/3) * (5/4) = 5/6
when dividing by whole numbers, it's the same principle. think of any whole number as itself over 1. so think of 5 as 5/1, or 9 as 9/1, so
e.g. (2/3) / 9 = (2/3) / (9/1) = (2/3) * (1/9) = 2/27
now to your problem. the first thing we have to do is change 4 3/5 to an improper fraction (i hope you know how to do that), the result is 23/5
so now, (23/5) / 5 = (23/5) / (5/1) = (23/5)*(1/5) = 23/25
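If you want to double-check this kind of arithmetic, Python's `fractions` module (shown here just as one way to verify, not part of the original answer) does exact fraction math:

```python
from fractions import Fraction

# 4 3/5 as an improper fraction: 4 + 3/5 = 23/5
mixed = Fraction(4) + Fraction(3, 5)

# dividing by a whole number = multiplying by its reciprocal
result = mixed / 5                      # (23/5) * (1/5)
print(result)                           # 23/25

# the earlier example: (2/3) / (4/5) = (2/3) * (5/4) = 5/6
print(Fraction(2, 3) / Fraction(4, 5))  # 5/6
```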
3. Yes, I just forgot.
Thanks. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.937136173248291, "perplexity": 654.8704094443992}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246641266.56/warc/CC-MAIN-20150417045721-00291-ip-10-235-10-82.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/finding-limits-of-line-integral.315743/ | # Finding limits of line integral
1. May 22, 2009
### boneill3
1. The problem statement, all variables and given/known data
Integrate along the line segment from $(0,1)$ to $(\pi,-1)$
The integral
$\int_{(0,1)}^{(\pi,-1)} [y\sin(x)\, dx - \cos(x)\, dy]$
2. Relevant equations
3. The attempt at a solution
I have used the parameterization of $x=\pi t$ and $y= 1-2t$
To get the integral:
$\int_{(0,1)}^{(\pi,-1)} [(1-2t)\sin(\pi t) - \cos(\pi t)]\, dt$
But now because it is an integral of variable t I need to change the limits .
I'm not sure if I just have to put the limits of t just from 0 to $\pi$
I suppose I'm having trouble with getting from the limit of 2 variables (x,y) to a limit of one variable t
Thanks
2. May 22, 2009
### Dick
If x=pi*t and y=1-2t, then if you put t=0 then x=0 and y=1, right? If you put t=1 then x=pi and y=(-1), also right? As you came up with that fine parametrization what's the problem with finding limits for t?
3. May 23, 2009
### boneill3
I will need to go back and study more about parametrization.
4. May 23, 2009
### boneill3
When calculating this line integral
$\int_{(0,1)}^{(\pi,-1)} [y\sin(x)\, dx - \cos(x)\, dy]$
I'm using the formula
$\int_{a}^{b}[f(x(t),y(t))x'(t) + g(x(t),y(t))y'(t)]dt$
with parameterization
I have $x = \pi t$
$y = 1-2t$
so
$x' = \pi$
and
$y' = -2$
plugging into the integral I get
$\int_{0}^{1} [(1-2t)\sin(\pi t) \cdot \pi - \cos(\pi t) \cdot (-2)]\, dt$
$= -1$
The question states that the integral is independent of path.
So if I integrate along the line segment from $(0,1)$ to $(\pi,-1)$
I should be able to plug in the values f($(\pi,-1)$) - f($(0,1)$)
And it should equal my original integral vaue of -1.
However I get 0
Could someone please check what I've done and show me where I am going wrong?
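As an added sanity check (not part of the original thread), the parameterized integral can be evaluated symbolically; with the derivative factors $x'=\pi$ and $y'=-2$ included, it comes out to 0, agreeing with the path-independence calculation:

```python
import sympy as sp

t = sp.symbols('t')
x = sp.pi * t        # parameterization from the thread
y = 1 - 2 * t

f = y * sp.sin(x)    # coefficient of dx
g = -sp.cos(x)       # coefficient of dy

# line integral = integral of [f(x(t), y(t)) x'(t) + g(x(t), y(t)) y'(t)] dt
integrand = f * sp.diff(x, t) + g * sp.diff(y, t)
value = sp.integrate(integrand, (t, 0, 1))
print(value)         # → 0
```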
https://www.physicsforums.com/threads/critical-radius-of-insulation.408544/ | 1. ### mjki9ec3
In class, we were taught that the critical radius of insulation (found by differentiating thermal resistance WRT outer radius and setting to zero) gives a minimum resistance and thus a maximum heat loss - i.e. it's not always productive to insulate your pipe.
but could it not also give a maximum resistance and therefore a minimum heat loss?
differentiating again surely would tell you when it would be minimum and maximum?
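A symbolic check of that suggestion (my own sketch, assuming a cylindrical pipe with inner radius $r_i$, insulation conductivity $k$, and film coefficient $h$): the per-unit-length resistance is $R(r) = \ln(r/r_i)/(2\pi k) + 1/(2\pi r h)$, its stationary point is $r = k/h$, and the second derivative there is positive, so it is indeed a minimum of resistance (maximum heat loss):

```python
import sympy as sp

r, ri, k, h = sp.symbols('r r_i k h', positive=True)

# thermal resistance per unit length of an insulated pipe:
# conduction through the insulation + convection from its outer surface
R = sp.log(r / ri) / (2 * sp.pi * k) + 1 / (2 * sp.pi * r * h)

crit = sp.solve(sp.diff(R, r), r)[0]
print(crit)  # → k/h, the critical radius

# second derivative is positive there, so R is minimized (heat loss maximized)
assert sp.simplify(sp.diff(R, r, 2).subs(r, crit)).is_positive
```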
http://snailstales.blogspot.com/2011/05/protoconchs-of-assiminea.html | ## 30 May 2011
### Protoconchs of Assiminea
There is a generalization that among closely related, especially congeneric, marine snails, the species with smaller protoconchs have planktonic larvae that go thru a free-swimming stage, while those with larger protoconchs have direct developing larvae that hatch out of their eggs as tiny crawling snails. The idea seems to go back at least to Verduin (1977) with possibly even earlier versions.
To compare the sizes of the protoconchs of related species, Verduin (1977) measured the following 2 dimensions of a protoconch, where Dn is the diameter of the nucleus of the protoconch and D1/2 is the diameter of the 1st half whorl.
Since Dn is within D1/2, the 2 measurements are tightly, in fact, linearly, correlated. Nevertheless, a plot of Dn versus D1/2 is a useful way to separate groups of supposedly planktonic versus supposedly direct-developing species as Verduin (1977) showed to be the case with the species in the genus Alvania.
Recently, Aartsen (2008) noted that the application of Verduin's method to the Atlantic and Mediterranean species of Assiminea revealed the existence of 2 groups. However, he did not present a plot. So I added my own measurements of Assiminea succinea to Aartsen's measurements and did a Verduin plot.
To illustrate the intrinsic variability in the dimensions of such traits, I show here the measurements of 4 specimens rather than the mean value.
As far as I know, among these species, life history information is available only for A. grayana, which has planktonic larvae, and for A. succinea, which has direct developing larvae. In the plot, the protoconchs of the former are smaller than those of the latter. So at least with those 2 species, we have agreement with the generalization that direct developing larvae are larger than planktonic larvae.
Aartsen. 2008. Basteria 72:165.
Verduin. 1977. Basteria 41:91.
https://icsecbsemath.com/2017/08/07/icse-board-problems-solved-class-10-ratio-and-proportion/ | $\displaystyle \text{Question 1: If } x, y \text{ and } z \text{ are in continued proportion, prove that: } \\ \\ \frac{(x+y)^2}{(y+z)^2} = \frac{x}{y} \text{. [2010]}$
If $\displaystyle x, y \text{ and } z$ are in continued proportion, then
$\displaystyle \frac{x}{y} = \frac{y}{z} \Rightarrow x = \frac{y^2}{z}$
Applying componendo and dividendo
$\displaystyle \frac{x+y}{x-y} = \frac{y+z}{y-z}$
$\displaystyle \Rightarrow \frac{x+y}{y+z} = \frac{x-y}{y-z}$
Squaring both sides
$\displaystyle \Rightarrow \frac{(x+y)^2}{(y+z)^2} = (\frac{x-y}{y-z})^2$
Substituting
$\displaystyle \Rightarrow \frac{(x+y)^2}{(y+z)^2} = (\frac{\frac{y^2}{z}-y}{y-z})^2$
$\displaystyle \Rightarrow \frac{(x+y)^2}{(y+z)^2} = (\frac{y^2-yz}{z(y-z)})^2 = \frac{y^2}{z^2} = \frac{zx}{z^2} = \frac{x}{z}$
$\displaystyle \\$
$\displaystyle \text{Question 2: Given } x= \frac{\sqrt{a^2+b^2}+\sqrt{a^2-b^2}}{\sqrt{a^2+b^2 }-\sqrt{a^2-b^2 }} \\ \\ \text{ Use componendo and dividendo to prove that: } b^2= \frac{2a^2 x}{x^2+1} \text{ [2010 ] }$
$\displaystyle \text{Given } x= \frac{\sqrt{a^2+b^2}+\sqrt{a^2-b^2}}{\sqrt{a^2+b^2 }-\sqrt{a^2-b^2 }}$
Applying componendo and dividendo
$\displaystyle \frac{x+1}{x-1} = \frac{(\sqrt{a^2+b^2}+\sqrt{a^2-b^2})+(\sqrt{a^2+b^2 }-\sqrt{a^2-b^2 })}{(\sqrt{a^2+b^2}+\sqrt{a^2-b^2})-(\sqrt{a^2+b^2 }-\sqrt{a^2-b^2 })}$
Simplifying
$\displaystyle \frac{x+1}{x-1} = \frac{\sqrt{a^2+b^2}}{\sqrt{a^2-b^2 }}$
Square both sides
$\displaystyle \frac{x^2+1+2x}{x^2-2x+1} = \frac{a^2+b^2}{a^2-b^2}$
Applying componendo and dividendo
$\displaystyle \frac{x^2+1+2x+x^2-2x+1}{x^2+1+2x-x^2+2x-1} = \frac{a^2+b^2+a^2-b^2}{a^2+b^2-a^2+b^2}$
$\displaystyle \frac{2(x^2+1)}{4x} = \frac{2a^2}{2b^2}$
$\displaystyle \frac{x^2+1}{2x} = \frac{a^2}{b^2}$
Simplifying
$\displaystyle b^2 = \frac{2a^2x}{x^2+1}$
$\displaystyle \\$
$\displaystyle \text{Question 3: If } \frac{x^2+y^2}{x^2-y^2} =2 \frac{1}{8} \text{ , find: }$
$\displaystyle \text{i) } \frac{x}{y}$ $\displaystyle \text{ii) } \frac{x^3+y^3}{x^3-y^3 } \text{ [2010] }$
$\displaystyle \text{i) } \text{Given } \frac{x^2+y^2}{x^2-y^2} =2 \frac{1}{8} = \frac{17}{8}$
Applying componendo and dividendo
$\displaystyle \frac{x^2+y^2+x^2-y^2}{x^2+y^2-x^2+y^2} = \frac{17+8}{17-8}$
$\displaystyle \frac{2x^2}{2y^2} = \frac{25}{9}$
Simplifying, we get
$\displaystyle \frac{x}{y} = \frac{5}{3}$
$\displaystyle \text{ii) Let } R = \frac{x^3+y^3}{x^3-y^3 }$

Applying componendo and dividendo

$\displaystyle \frac{R+1}{R-1} = \frac{(x^3+y^3)+(x^3-y^3)}{(x^3+y^3)-(x^3-y^3)} = \frac{2x^3}{2y^3} = \Big( \frac{x}{y} \Big)^3 = \Big( \frac{5}{3} \Big)^3 = \frac{125}{27}$

$\displaystyle \text{Therefore } 27(R+1) = 125(R-1) \Rightarrow 98R = 152 \Rightarrow R = \frac{76}{49}$
$\displaystyle \\$
Question 4: Using componendo and dividendo, find the value of $\displaystyle x$ : $\displaystyle \frac{\sqrt{3x+4}+\sqrt{3x-5}}{\sqrt{3x+4}-\sqrt{3x-5}} =9 \text{ [2010] }$
$\displaystyle \text{Given } \frac{\sqrt{3x+4}+\sqrt{3x-5}}{\sqrt{3x+4}-\sqrt{3x-5}} =9$
Applying componendo and dividendo
$\displaystyle \frac{(\sqrt{3x+4}+\sqrt{3x-5})+(\sqrt{3x+4}-\sqrt{3x-5})}{(\sqrt{3x+4}+\sqrt{3x-5})-(\sqrt{3x+4}-\sqrt{3x-5})} = \frac{9+1}{9-1}$
$\displaystyle \frac{2\sqrt{3x+4}}{2\sqrt{3x-5}} = \frac{10}{8}$
Simplifying
$\displaystyle \frac{\sqrt{3x+4}}{\sqrt{3x-5}} = \frac{5}{4}$
Square both sides
$\displaystyle \frac{3x+4}{3x-5} = \frac{25}{16}$

$\displaystyle 48x+64 = 75x-125$
Simplifying we get $\displaystyle x = 7$
$\displaystyle \\$
$\displaystyle \text{Question 5: If } x= \frac{\sqrt{a+1}+\sqrt{a-1}}{\sqrt{a+1}-\sqrt{a-1}} \\ \\ \text{using properties of proportion show that: } x^2-2ax+1=0 \text{ [2010] }$
$\displaystyle \text{Given } x= \frac{\sqrt{a+1}+\sqrt{a-1}}{\sqrt{a+1}-\sqrt{a-1}}$
Applying componendo and dividendo
$\displaystyle \frac{x+1}{x-1} = \frac{(\sqrt{a+1}+\sqrt{a-1})+(\sqrt{a+1}-\sqrt{a-1})}{(\sqrt{a+1}+\sqrt{a-1})-(\sqrt{a+1}-\sqrt{a-1})}$
Simplify
$\displaystyle \frac{x+1}{x-1} = \frac{\sqrt{a+1}}{\sqrt{a-1}}$
Now square both sides
$\displaystyle \frac{x^2+1+2x}{x^2-2x+1} = \frac{a+1}{a-1}$
Applying componendo and dividendo once more and simplifying
$\displaystyle x^2+1 = 2ax$
$\displaystyle \text{or } x^2-2ax+1 = 0$
$\displaystyle \\$
$\displaystyle \text{Question 6: Given, } \frac{a}{b} = \frac{c}{d} \text{, prove that: } \frac{3a-5b}{3a+5b} = \frac{3c-5d}{3c+5d} \text{ [2000] }$
$\displaystyle \text{Given } \frac{a}{b} = \frac{c}{d}$
$\displaystyle \Rightarrow \frac{3a}{5b} = \frac{3c}{5d}$
By componendo and dividendo
$\displaystyle \frac{3a+5b}{3a-5b} = \frac{3c+5d}{3c-5d}$
Taking reciprocals of both sides (invertendo)
$\displaystyle \frac{3a-5b}{3a+5b} = \frac{3c-5d}{3c+5d}$
$\displaystyle \\$
$\displaystyle \text{Question 7: If } x=\frac{\sqrt{a+3b}+\sqrt{a-3b}}{\sqrt{a+3b}-\sqrt{a-3b}} \text{, prove that: } 3bx^2-2ax+3b=0 \text{ [2007] }$
$\displaystyle \text{Given } x= \frac{\sqrt{a+3b}+\sqrt{a-3b}}{\sqrt{a+3b}-\sqrt{a-3b}}$
Applying componendo and dividendo
$\displaystyle \frac{x+1}{x-1} = \frac{(\sqrt{a+3b}+\sqrt{a-3b})+(\sqrt{a+3b}-\sqrt{a-3b})}{(\sqrt{a+3b}+\sqrt{a-3b})-(\sqrt{a+3b}-\sqrt{a-3b})}$

$\displaystyle \frac{x+1}{x-1} = \frac{2\sqrt{a+3b}}{2\sqrt{a-3b}}$
Squaring both sides
$\displaystyle \frac{x^2+2x+1}{x^2-2x+1} = \frac{a+3b}{a-3b}$
Applying componendo and dividendo once again
$\displaystyle \frac{(x^2+2x+1)+(x^2-2x+1)}{(x^2+2x+1)-(x^2-2x+1)} = \frac{(a+3b)+(a-3b)}{(a+3b)-(a-3b)}$
simplifying
$\displaystyle \frac{x^2+1}{2x} = \frac{a}{3b}$
$\displaystyle 3b(x^2+1) = 2ax$
$\displaystyle 3bx^2-2ax+3b=0$ Hence proved.
$\displaystyle \\$
$\displaystyle \text{Question 8: Using the properties of proportion, solve for x Given: } \\ \\ \frac{(x^4+1)}{2x^2} = \frac{17}{8} \text{ [2013] }$
$\displaystyle \text{Given } \frac{(x^4+1)}{2x^2} = \frac{17}{8}$
Applying componendo and dividendo
$\displaystyle \frac{(x^4+1)+2x^2}{(x^4+1)-2x^2} = \frac{17+8}{17-8}$
$\displaystyle \frac{(x^2+1)^2}{(x^2-1)^2} = \frac{25}{9}$
Taking the square root of both sides
$\displaystyle \frac{x^2+1}{x^2-1} = \frac{5}{3}$
$\displaystyle 3x^2+3=5x^2-5$
$\displaystyle x^2 = 4 \Rightarrow x = \pm 2$
$\displaystyle \\$
Question 9: What least number must be added to each of the numbers $\displaystyle 6, 15, 20, \text{ and } 43$ to make them proportional. [2005, 2013]
Let the number added be $\displaystyle x$
$\displaystyle \text{Therefore } (6+x): (15+x) = (20+x): (43+x)$
$\displaystyle \Rightarrow (6+x) \times (43+x) = (20+x) \times (15+x)$
$\displaystyle \Rightarrow x^2+49x+258 = x^2+ 35x +300$
$\displaystyle \Rightarrow x = 3$
$\displaystyle \\$
Question 10: The monthly pocket money of Ravi and Sanjeev are in the ratio of 5:7 Their expenditures are in the ratio of 3:5. If each saves Rs. 80 per month, find their monthly pocket money. [2012]
Let monthly pocket of Rave and Sanjeev by $\displaystyle x \text{ and } y$ respectively.
$\displaystyle \frac{x}{y} = \frac{5}{7} \Rightarrow x = \frac{5}{7} y$
$\displaystyle \frac{x-80}{y-80} = \frac{3}{5}$
Substituting
$\displaystyle \frac{ \frac{5}{7} y-80}{y-80} = \frac{3}{5}$
$\displaystyle \frac{25}{7} y-400=3y-240 \Rightarrow \frac{4}{7} y = 160 \Rightarrow y=280$

Substituting

$\displaystyle x = \frac{5}{7} \times 280 = 200$

Hence Ravi's monthly pocket money is Rs. 200 and Sanjeev's is Rs. 280.
$\displaystyle \\$
Question 11: If $\displaystyle (x-9):(3x+6)$ is the duplicate ratio of $\displaystyle 4:9$ , find $\displaystyle x$ . [2014]
$\displaystyle \frac{x-9}{3x+6} = \frac{4^2}{9^2} = \frac{16}{81}$
$\displaystyle 81x-729=48x+96$
$\displaystyle x=25$
$\displaystyle \\$
Question 12: If $\displaystyle a:b=5:3$ , find $\displaystyle (5a+8b):(6a-7b)$ . [2002]
$\displaystyle \text{Given } a:b=5:3$
$\displaystyle \text{or } \frac{a}{b} = \frac{5}{3} \Rightarrow a = b \frac{5}{3}$
Now substituting
$\displaystyle \frac{5a+8b}{6a-7b} = \frac{5 \times b\frac{5}{3}+8b}{6 \times b\frac{5}{3}-7b} = \frac{25+24}{30-21} = \frac{49}{9}$
Hence $\displaystyle (5a+8b):(6a-7b) = \frac{49}{9}$
$\displaystyle \\$
Question 13: The work done by $\displaystyle (x-3)$ men in $\displaystyle (2x+1)$ days and the work done by $\displaystyle (2x+1)$ men in $\displaystyle (x+4)$ days are in the ratio $\displaystyle 3:10$ . Find the value of $\displaystyle x$ . [2003]
Amount of work done by $\displaystyle (x-3)$ men in $\displaystyle (2x+1)$ days $\displaystyle = (x-3)(2x+1)$
Similarly, amount of work done by $\displaystyle (2x+1)$ men in $\displaystyle (x+4)$ days $\displaystyle = (2x+1)(x+4)$
$\displaystyle \text{Given } \frac{(x-3)(2x+1)}{(2x+1)(x+4)} = \frac{3}{10}$
$\displaystyle 10(2x^2+x-6x-3)=3(2x^2+8x+x+4)$
Simplifying
$\displaystyle 2x^2-11x-6=0$
$\displaystyle (x-6)(2x+1) = 0 \Rightarrow x = 6 \text{ or } x=- \frac{1}{2} \text{ (not possible)}$
$\displaystyle \text{Therefore } x = 6$
$\displaystyle \\$
Question 14: What number should be subtracted from each of the numbers $\displaystyle 23, 30, 57 \text{ and } 78$ ; so that the ratios are in proportion. [2004]
Let the number subtracted $\displaystyle = x$
$\displaystyle \text{Therefore } (23-x):(30-x)=(57-x):(78-x)$
$\displaystyle \frac{23-x}{30-x} = \frac{57-x}{78-x}$
Simplifying
$\displaystyle x^2-101x+1794 = x^2-87x+1710 \Rightarrow x =6$
$\displaystyle \\$
Question 15: $\displaystyle 6$ is the mean proportion between two numbers $\displaystyle x \text{ and } y \text{ and } 48$ is the third proportion to $\displaystyle x \text{ and } y$ . Find the numbers. [2011]
$\displaystyle \text{Given } 6$ is the mean proportion between two numbers $\displaystyle x \text{ and } y$
$\displaystyle \text{Therefore } \frac{x}{6} = \frac{6}{y} \Rightarrow xy=36 \Rightarrow x = \frac{36}{y}$ … … … … … … i)
Also $\displaystyle \text{Given } 48$ is the third proportion to $\displaystyle x \text{ and } y$
$\displaystyle \text{Therefore } \frac{x}{y} = \frac{y}{48} \Rightarrow y^2=48x$ … … … … … … ii)
Solving i) and ii)
$\displaystyle y^2 = 48 \frac{36}{y}$
$\displaystyle y^3 = 2^3 \times 6^3 \Rightarrow y = 12$
$\displaystyle \text{Hence } x = \frac{36}{12} = 3$
Hence the numbers are $\displaystyle 3 \text{ and } 12$ .
$\displaystyle \\$
$\displaystyle \text{Question 16: If } \frac{8a-5b}{8c-5d} = \frac{8a+5b}{8c+5d} \text{, prove that } \frac{a}{b} = \frac{c}{d} \text{ [2008] }$
$\displaystyle \text{Given } \frac{8a-5b}{8c-5d} = \frac{8a+5b}{8c+5d}$
$\displaystyle \text{or } \frac{8c+5d}{8c-5d} = \frac{8a+5b}{8a-5b}$
Applying Componendo and Dividendo
$\displaystyle \frac{8c+5d+8c-5d}{8c+5d-8c+5d} = \frac{8a+5b+8a-5b}{8a+5b-8a+5b}$
$\displaystyle \frac{16c}{10d} = \frac{16a}{10b}$
$\displaystyle \frac{c}{d} = \frac{a}{b}$
$\displaystyle \text{or } \frac{a}{b} = \frac{c}{d} \text{ Hence proved.}$
http://pillars.che.pitt.edu/student/slide.cgi?course_id=10&slide_id=29.0 | # LTR: Convective Mass Transfer
Convective mass transfer refers to the transport of mass due to a moving fluid. Like heat convection, this typically refers to transport across phases, however, here solid-fluid transport is on equal footing with liquid-gas transport (rather than being the dominant example of convection). As with heat transport, it is clear that the rate of mass transfer will depend on the character of the fluid flow.
In analogy with Newton's "Law" of Cooling, we can write an expression for the molar flux due to convection as:
##### EQUATION:
$\displaystyle{N_a = \frac{M_a}{A} = k_c\Delta C_a}$
where $k_c$ is the convective mass transfer coefficient, $N_a$ is the molar flux of species $a$, $M_a$ is the molar flow of $a$, and $A$ is the interphase area of contact.
##### NOTE:
We could write essentially the same expression based on mass concentrations, but will try to denote mass fluxes/flows with lower case letters. Also, for transport in the gas phase, we will often use partial pressures instead of molar concentrations.
As with heat transfer $k_c$ may also sometimes be referred to as a "film coefficient".
$k_c$ will depend on:
• the geometry of the phase boundaries (unlike heat transport, if we have gas-liquid transport this is a very difficult thing to calculate/measure!)
• the nature of the fluid (here the diffusivity)
• the nature of the flow (fluid mechanics!)
##### NOTE:
Again, determining the parameter, $k_c$, will often be the bulk of the work (or at least the only hard part) in a given convection problem.
##### OUTCOME:
Perform convective mass transfer calculations
##### EXAMPLE:
An aspirin sitting in your stomach has a solubility of 0.15 mol/L (so this is the concentration at the solid-liquid surface). Assuming that the concentration in the bulk of the stomach is zero and that the pill does not shrink, but stays a sphere with a 0.5cm diameter, calculate the molar flow into the stomach when the mass transfer coefficient is 0.1 m/s.
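A quick evaluation of that example (my numbers and unit conversions, not from the notes: radius 0.25 cm from the 0.5 cm diameter, and 0.15 mol/L converted to 150 mol/m³):

```python
import math

k_c = 0.1          # convective mass transfer coefficient, m/s
dC = 0.15 * 1000   # surface minus bulk concentration, mol/L -> mol/m^3
r = 0.25e-2        # sphere radius in m (0.5 cm diameter)

A = 4 * math.pi * r**2      # surface area of the sphere, m^2
M_a = k_c * dC * A          # molar flow, mol/s  (since N_a = M_a/A = k_c*dC)
print(f"{M_a:.2e} mol/s")   # → 1.18e-03 mol/s
```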
http://tex.stackexchange.com/questions/26843/cite-bibliography-in-beamer | # Cite bibliography in beamer
I am preparing a beamer presentation. In some slides (4 of 30) I want to cite one or two bibliographic references and put them at the bottom of each slide. They lie there just as a bulleted list, not being cited in the text of the slide.
I wonder what is the best approach to do so.
PS: I forgot to mention that bibliographic entries are stored in a .bib file
-
From your previous question, you know how to create references as "bulleted lists" using biblatex. Have you tried the beamer/biblatex combo? – lockstep Aug 29 '11 at 13:47
yes, I have tried this, sorry. The problem now is that I do not get the same format for bibliography in both latex document and beamer presentation; in beamer it gets a bit weird; I do not like ; authors, ... IN Journal ... – flow Aug 30 '11 at 11:04
Have a look at tex.stackexchange.com/q/10682/510. – lockstep Aug 30 '11 at 11:06
ok, but now the problem continues; I get "pp" for the page numbers, like if I was to cite a proceeding instead of a journal, and the volume number appers weird like 13.4 instead of 13(4) ... can not tell beamer to put them in some concrete style? – flow Aug 30 '11 at 11:38
beamer has nothing to do with bibliography styles. For customizing biblatex styles, have a look at this question (e.g. it tells you how to remove "pp."). Anyhow, you should ask a new question if you need additional advice. – lockstep Aug 30 '11 at 11:43
## 1 Answer
EDIT
Here is a minimal example, which you should have provided:
\RequirePackage{filecontents}
\begin{filecontents*}{\jobname.bib}
@book{test,
author={John Smith},
title={A book},
publisher={Publisher},
year={1742},
}
\end{filecontents*}
\documentclass{beamer}
\begin{document}
\begin{frame}
asd\cite{test}
\end{frame}
\begin{frame}
\bibliographystyle{alpha}
\bibliography{\jobname}
\end{frame}
\end{document}
ALSO NO PROBLEM -- but again, you have to provide a minimal example.
-
thanks a lot, I forgot to mention that bibliographic entries are stored in a .bib file – flow Aug 29 '11 at 13:37
@flow: Same as usual. – Marco Daniel Aug 29 '11 at 13:42
thanks, now it works. The problem now is that I do not get the same format for bibliography in both latex document and beamer presentation; in beamer it gets a bit weird; I do not like ; authors, ... IN Journal ... – – flow Aug 30 '11 at 11:04
so I wonder how can I give it the exact format for the biliography, I just would like; authors, title, journal, etc (similar so American Medical Association style) – flow Aug 30 '11 at 11:05
Ok, I reformulated my question and put an example here: tex.stackexchange.com/questions/26959/… – flow Aug 30 '11 at 12:42
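For reference (an addition, not part of the original thread): the biblatex route mentioned in the comments looks roughly like this. The style name is an illustrative choice, the `.bib` file is assumed to be called `references.bib`, and `test` is the entry key from the answer's example:

```latex
\documentclass{beamer}
\usepackage[style=authoryear]{biblatex} % pick whichever biblatex style you prefer
\addbibresource{references.bib}         % your .bib file

\begin{document}

\begin{frame}
  A claim worth citing \cite{test}.
\end{frame}

\begin{frame}[allowframebreaks]{References}
  \printbibliography
\end{frame}

\end{document}
```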
https://brilliant.org/problems/an-easy-way-and-a-hard-way/ | # An Easy Way And A Hard Way I
Algebra Level 4
$\Large \left | \sum_{j=0}^{100} x^{2^j} + \dfrac1{x^{2^j}}\right |$
Given that $$x$$ is a complex number satisfying the constraint $$x + \dfrac1x = 1$$, find the value of the expression above.
http://mathhelpforum.com/trigonometry/52818-vector-problem.html | # Math Help - Vector Problem
1. ## Vector Problem
So I have this small problem. Two vectors A and B added together give the vector S. Show that S (which I'm assuming is the magnitude of S) is equal to (A^2 + B^2 + 2AB cos(theta))^(1/2), remembering that S . S = S^2 and S = A + B. I'm really not sure how to make a start on it. I'm thinking that it has something to do with the rule a.b = [a][b] cos(theta), but I can't seem to make the first leap. Any suggestions?
2. Hi,
I'm not sure the question quite makes sense - vectors can't usually be "squared" - you can multiply them with the dot or cross product, but the notation in your question suggests otherwise.
Does the answer refer to the magnitude of the two vectors?
3. I think you're right about the squaring of the vector not being possible. However I don't think the squared parts are referring to the vectors. They weren't in bold, which I believe is the usual convention for vectors. So I'm assuming that they are the values or the scalar, i.e. Ai + Aj or possibly the scalar product. Sorry, I'm really at a loss about this stuff.
4. It follows at once from this.
$S = A + B \Rightarrow \left( {S \cdot S} \right) = \left( {A + B} \right) \cdot \left( {A + B} \right) = A \cdot A + 2A \cdot B + B \cdot B$
5. So the complete derivation would be something like this:
S.S= (A+B).(A+B)
=> A.A + B.B + 2(A.B)
=> A.A = [A][A] cos theta = [A]^2 because cos 0 = 1
=> B.B = [B][B] cos theta = [B]^2 as above
=> 2(A.B) = 2[A][B] cos theta
putting all the parts together
A^2 + B^2 + 2AB cos theta
then taking the square root because S.S = S^2
giving S = (A^2 + B^2 + 2AB cos theta)^(1/2)
Is that how it goes? Thanks for the help
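A quick numerical check of the final identity (an added sketch using NumPy, with arbitrary random vectors):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=3)
B = rng.normal(size=3)
S = A + B

a, b = np.linalg.norm(A), np.linalg.norm(B)
cos_theta = A @ B / (a * b)   # from A.B = |A||B| cos(theta)

lhs = np.linalg.norm(S)
rhs = np.sqrt(a**2 + b**2 + 2 * a * b * cos_theta)
assert np.isclose(lhs, rhs)   # |A+B| = (A^2 + B^2 + 2AB cos theta)^(1/2)
```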
https://911weknow.com/12-islanders-puzzle-from-brooklyn-99 | # 12 Islanders Puzzle from Brooklyn 99
A puzzle is presented where there are twelve identical-looking islanders and a seesaw. One of the islanders weighs slightly more or less than the other 11, and you must discover which, by placing islanders in groups on the seesaw. However only three measurements are allowed.
In the show, no solution to the puzzle is presented, and at first glance I thought this problem was not solvable, since there are only three measurements with two outcomes each (seesaw balanced or unbalanced), meaning we can only discern between 2^3 = 8 outcomes rather than the 12 we need. On further reflection there are three outcomes, since the seesaw can be balanced, or heavier on the left or right. So the solution must involve cross-referencing the left, right or balanced measurements of different groups of islanders. Several other solutions available on the net involve a lot of if-this-then-that type logic. Presented below is a simpler solution.
Each islander is given a position to sit on the seesaw for each round. L: sit on the left, R: sit on the right, -: don?t sit on the seesaw.
The pattern for all islanders is below:
person:  A B C D E F G H I J K L
round 1: L L L L R R R R - - - -
round 2: L L R R R - - - L R L -
round 3: L R R - - L R - L L - R
For example person F will sit on the right, then sit out, then sit on the left.
After each round, we can see if the seesaw tilted down on the left (L), right (R) or was balanced (-)
The pattern of the seesaw readings will match the pattern of one person (or be exactly reversed). That person is the heavier (or lighter) one.
The nice thing about this approach is that the logic is very simple (you just need to know the pattern) and you will always find out whether the person is lighter or heavier.
The tricky bit for me was figuring out that the left/right/balance results needed to be cross-referenced, then discovering a pattern for each islander where:
• the same number of islanders are always on each side of the seesaw
• no pattern is repeated
• the opposite of each used pattern is not used
Note: I've since found out this is a variation on a puzzle involving 12 balls and a set of scales, but the same solution works here too.
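A minimal Python sketch of the scheme described above. The seating patterns are transcribed from the table; the weights and the exhaustive check are illustrative additions.

```python
# Simulate the seating-pattern scheme: each islander's L/R/- assignment per
# round, the seesaw outcome per round, and the final pattern-matching step.

PATTERNS = {
    "A": "LLL", "B": "LLR", "C": "LRR", "D": "LR-",
    "E": "RR-", "F": "R-L", "G": "R-R", "H": "R--",
    "I": "-LL", "J": "-RL", "K": "-L-", "L": "--R",
}

def weigh(weights, rnd):
    """Outcome of one round: 'L'/'R' for the heavier side, '-' if balanced."""
    left = sum(w for p, w in weights.items() if PATTERNS[p][rnd] == "L")
    right = sum(w for p, w in weights.items() if PATTERNS[p][rnd] == "R")
    return "L" if left > right else "R" if right > left else "-"

def find_odd_islander(weights):
    """Match the three tilt results against each islander's pattern."""
    tilt = "".join(weigh(weights, r) for r in range(3))
    flipped = tilt.translate(str.maketrans("LR", "RL"))
    for person, pattern in PATTERNS.items():
        if pattern == tilt:
            return person, "heavier"
        if pattern == flipped:
            return person, "lighter"

# Exhaustive check: every islander, heavier or lighter, is identified.
for culprit in PATTERNS:
    for delta, label in ((1, "heavier"), (-1, "lighter")):
        weights = {p: 10 for p in PATTERNS}
        weights[culprit] += delta
        assert find_odd_islander(weights) == (culprit, label)
```

The exhaustive loop works precisely because of the three bullet-point conditions: equal seats per side keep a balanced seesaw balanced, and the no-repeat/no-opposite conditions make the match unique.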
http://physics.stackexchange.com/questions/36027/maxwell-equations-invariant-under-lorentz-transformation-but-not-galilean-transf/36040 | # Maxwell equations invariant under Lorentz transformation but not Galilean transformations
Why are Maxwell's equations not invariant under Galilean transformations, but invariant under Lorentz transformations? What is the deep physical meaning behind this?
Are you looking for an explicit demonstration of these properties, or what? I mean, that set of equations simply has those mathematical properties. It's sort of like asking why a square has ninety-degree angles and not sixty-degree ones. The deep physical meaning is that physics is Einsteinian and not Galilean. – dmckee Sep 11 '12 at 0:52
You can make a quasistatic approximation when the timescale of the sources $T$ and the size of the system $L$ are such that $L/T \ll c$, or $L/c \ll T$. This means that the field propagates through the entire system much faster than the sources vary. This is a non-relativistic approximation since we get action at a distance. Of course, we can't capture all EM phenomena in this approximation. In particular, radiation is usually studied in the opposite regime $r/c \gg T$ where $r$ is the distance to the source. – Robin Ekman Jul 6 '14 at 18:05
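One standard way to see the asymmetry concretely (an illustrative derivation, not part of the thread) is to apply both boosts to the 1-D wave equation that follows from Maxwell's equations in vacuum:

```latex
\[ \partial_x^2\phi \;-\; \frac{1}{c^2}\,\partial_t^2\phi \;=\; 0 \]
% Galilean boost: x' = x - vt, t' = t, so by the chain rule
%   \partial_x = \partial_{x'}, \quad \partial_t = \partial_{t'} - v\,\partial_{x'},
% and the equation changes form:
\[ \Bigl(1-\tfrac{v^2}{c^2}\Bigr)\partial_{x'}^2\phi
   \;+\; \tfrac{2v}{c^2}\,\partial_{x'}\partial_{t'}\phi
   \;-\; \tfrac{1}{c^2}\,\partial_{t'}^2\phi \;=\; 0 \]
% Lorentz boost: x' = \gamma(x - vt), t' = \gamma(t - vx/c^2); the cross
% terms cancel and the wave operator returns in exactly the same form:
\[ \partial_{x'}^2\phi \;-\; \frac{1}{c^2}\,\partial_{t'}^2\phi \;=\; 0 \]
```

So the Galilean-boosted equation singles out the original frame (the extra terms vanish only for v = 0), while the Lorentz-boosted one does not.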
https://www.physicsforums.com/threads/proof-of-limit-by-definition.267096/ | # Proof of limit by definition
1. Oct 26, 2008
1. The problem statement, all variables and given/known data
find the limit as x -> 0 of (sin^2(x))/(x^2)
2. Relevant equations
limit as x -> xo of f(x) = L iff for every epsilon (>0) there exists a delta (>0) st if
| x - xo | < delta then |f(x) - L | < epsilon
3. The attempt at a solution
Let epsilon be positive. I believe the limit equals one, so I will proceed from there.
then
| (sin^2(x))/x^2 - 1 | < epsilon if | x | < delta
But | f(x) - 1| <= |1/x^2 - 1| . And this is where I get stuck. If I pick delta to be small, and |x| < delta,
| (f(x)) - 1 | becomes very large, and thus is not being bound by any epsilon.
2. Oct 26, 2008
### Dick
|f(x)-1| does NOT become very large as x->0. Did you experiment with some numbers, like x=0.01,x=0.0001, etc? Do you know anything about the limit of sin(x)/x as x->0?
3. Oct 26, 2008
You are right, I meant to say that
|1/x^2 - 1| becomes large as x -> 0, which confused me, because
| f(x) - 1 | <= | 1/x^2 - 1 |, which I get because | sin^2(x) | <= 1, so then | sin^2(x)/x^2 - 1 | <= | 1/x^2 - 1 |
The limit of sin(x)/x equals 1 when x approaches 0. Are you suggesting that I use the fact that
sin(x)/x has limit 1 at x = 0 and the theorem that if lim f exists and equals L1 at some point xo and lim g exists at the same xo and equals L2, then lim f*g = L1*L2, which then I could use to lim sin^2(x)/x^2 =
lim (sin(x)/x)*(sin(x)/x) = 1 * 1 = 1. Yes, but then I need to prove that lim sin(x)/x equals 1. How would I do this by epsilon-delta definition of limits ( I know how to with squeeze theorem)? I would basically be stuck in the same mess I have above.
4. Oct 26, 2008
### Dick
|sin(x)^2/x^2-1|<=|1/x^2-1| is just not a very good estimate of |f(x)-1|. How do you prove sin(x)/x=1 using the squeeze theorem? You should be able to express that logic in epsilon-delta form. You might also notice |f(x)-1|=|sin(x)/x-1|*|sin(x)/x+1|.
5. Oct 26, 2008
| cos(x) | <= | (sin x)/x | <= | 1 |
Is how I would use the squeeze theorem to solve this one. But then I have no idea how to get from there to a delta, or reduction of | (sin x)/x | to | x | from which I can pull a delta. I have been searching the web for a few hours and have found 0 proofs that (sin x)/x has limit equal to 1. There was a place that had it listed as an exercise of finding deltas, but gave no explanation of how to find one. So I remain stuck.
One other idea I had is
| sin(x)/x | <= | sin(x) | <= | x |
Then you get stuck by the triangle inequality trying to bring back (sin x)/x - 1, the closest I got is:
| sin(x)/x - 1 + 1 | <= | x - 1 + 1 |
6. Oct 26, 2008
### Dick
I think I'm better at searching the web than you are. I found this:
Multiply 1-cos(x) by (1+cos(x))/(1+cos(x)) and get (1-cos(x)^2)/(1+cos(x))=sin(x)^2/(1+cos(x))<=sin(x)^2 (for small x). You also have sin(x)<=x. So put it all together and get 1-cos(x)<=x^2. So:
1-x^2<=sin(x)/x<=1. Does that look like a form you can use?
7. Oct 26, 2008
Oh, I never said I was good at searching the web, only that I had done it for a while...
Anyways, I like where you went with this. Basically:
|sin x| < |x| . This implies that |(sin x)/x| < |x/x| = 1.
On the other hand, if x is in the open interval (0,pi/2) then | x | < | tan x |, so we get that
| x/sin(x) | < | (tan x) / (sin x) | = | 1/(cos x) |. Then you take reciprocals across the inequality to get that:
| cos x | < | (sin x) / x|. Then you put the two of these together to get:
| cos x | < | (sin x) / x | < 1. (1)
Now for the next bit, we will need again that (sin anynumber) < anynumber, and the famous identity sin^2(x) + cos^2(x) = 1, and then we can use your trick:
(1- cos x) * (1 + cos x)/(1 + cos x) = (1 - (cos x)^2)/(1+ cos x) = ((sin x)^2)/(1+ cos x) <= (sin x)^2
But because (sin x) < x, we get that (1 - cos x) < (x)^2. Then:
- cos x < x^2 - 1 (2), and further
cos x > 1 - x^2. Now because we have limited x to the open interval (0,pi/2), and from the inequalities (1) and (2)
1 - x^2 < (sin x)/x < 1 , and now the big one
- x^2 < (sin x)/x - 1 < 0 (3), and zero is obviously less than epsilon (which is chosen positive). However, this is not complete: the above will not hold on all intervals; in particular, it holds when x is in (0, pi/2), in other words | x | < pi/2. Notice that,
| (sin x)/x - 1 | < x^2
And so if delta equals sqrt(epsilon) we get that if | x | < delta
then | (sin x)/x - 1 | < x^2 < epsilon
We need to pick delta to be the min of pi/2 and sqrt(epsilon).
8. Oct 26, 2008
### Dick
Yes, I think that does it. You shouldn't have any troubles extending that to get the delta for sin(x)^2/x^2.
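As an illustrative aside (not part of the original thread), the squeeze bound and the choice of delta from post 7 can be sanity-checked numerically:

```python
import math

# Numerical check of 1 - x^2 <= sin(x)/x <= 1 on (0, pi/2) and of the
# choice delta = min(pi/2, sqrt(epsilon)) proposed in the thread.

def f(x):
    return math.sin(x) / x

eps = 1e-4
delta = min(math.pi / 2, math.sqrt(eps))

for k in range(1, 1000):
    x = delta * k / 1000               # sample 0 < x < delta
    assert 1 - x * x <= f(x) <= 1      # the squeeze from post 6/7
    assert abs(f(x) - 1) < eps         # |sin(x)/x - 1| < epsilon
    assert abs(f(x) ** 2 - 1) < 2 * eps  # extends to sin^2(x)/x^2
```

The last assertion uses |f(x)^2 - 1| = |f(x) - 1| * |f(x) + 1| <= 2 |f(x) - 1|, which is the factorization Dick pointed out in post 4.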
http://math.stackexchange.com/questions/45642/operators-from-c-0 | # Operators from $c_{0}$
My question seems to be easy but I cannot spot the answer. I am interested in ranges of operators defined on $c_0$. The celebrated "operator version" of Sobczyk's theorem says that if we are given a separable Banach space $X$ and its subspace $Y$, then every bounded operator $T\colon Y\to c_0$ can be extended to a bounded operator $\overline{T}$ with $\|\overline{T}\|\leq 2\|T\|$ (categorically speaking, $c_0$ is "separably injective"). I am wondering if I could use this theorem (or anything else) to (dis)prove the following conjecture:
If $X$ is a $c_0$-saturated separable Banach space, then the range of every operator $T\colon c_0\to X$ embeds into $c_0$. We know (consult Lindenstrauss and Tzafriri's book) that every quotient of $c_0$ embeds into $c_0$, but what about the ranges of such operators?
There is a quite recent survey article on Sobczyk's theorem by Cabello Sánchez, Castillo and Yost. While I haven't read it in detail, it might contain some pointers to the literature. – t.b. Jun 16 '11 at 0:21
I'm really not sure if I understand your question correctly, but as it is stated I don't really see how the information that $X$ contains a complemented copy of $c_0$ should help. If $T: c_0 \to Y$ is any operator to a separable Banach space, simply consider $X = Y \oplus c_0$ and compose $T$ with the inclusion $Y \to X$. The range of $T$ will remain the same and certainly $X$ is separable. Also, it would be nice if you could make your question a bit more precise (e.g. what exactly does it mean for the range of $T$ to embed into $c_0$)? – t.b. Jun 16 '11 at 0:40
$T(X)$ isomorphic to a subspace of $c_0$. Right, I was thinking about $c_0$-saturation, since if $X$ contains a copy $c_0$ then in the separable setting it is automatically complemented. Btw, I know this paper quite well. – dziobak Jun 16 '11 at 6:45
Like Theo, I don't really understand what exactly it is that is being asked. I am quite sure that the OP knows that regardless of whether or not $X$ is $c_0$-saturated, an operator $T:c_0 \longrightarrow X$ with closed range has the property that $T(c_0)$ embeds into $c_0$ by the Johnson-Zippin result cited above in the comments to the question.
I am somewhat guessing that the question is: for a $c_0$-saturated $X$ and arbitrary $T:c_0 \longrightarrow X$, is the Banach space $\overline{T(c_0)}$ isomorphic to a subspace of $c_0$?
The answer to this question is no. For a counterexample, take $X$ to be $C(\omega^\omega)$ (or replace $\omega^\omega$ by any larger countable ordinal) and let $(\alpha_n)_{n=1}^\infty$ be an enumeration of $\omega^\omega+1$. For each $n\in \mathbb{N}$ let $\alpha_n^- = \min \{\beta \mid \exists \nu \mbox{ such that } \beta + \omega^\nu = \alpha_n\}$, so that the (continuous) characteristic functions $\chi_{(\alpha_n^-, \alpha_n]}$, $n\in\mathbb{N}$, span a dense linear subspace of $C(\omega^\omega)$. The map $e_n \mapsto 2^{-n}\chi_{(\alpha_n^-, \alpha_n]}$ extends to a (compact) continuous linear operator $T: c_0 \longrightarrow C(\omega^\omega)$ - with dense range. Since $C(\omega^\omega)$ does not embed in $c_0$, the claimed counterexample is achieved.
P.S. If you write $\alpha_n$ as a sum of powers of $\omega$ - i.e., in Cantor normal form - then $\alpha_n^-$ is just the ordinal attained by leaving off the last summand.
https://www.physicsforums.com/threads/schwarzschild-metric.794669/ | # Schwarzschild metric
1. Jan 28, 2015
### TimeRip496
How do you obtain the equation M = Gm/c^2? What does M stand for? Is it Newton's law at infinity? Again, what is this Newton's law at infinity?
2. Jan 28, 2015
### Staff: Mentor
M is just the mass of the black hole using units in which G and c are both equal to one.
We don't have to make this substitution but if we don't we'll be schlepping factors of G and c around everywhere in our equations, and they're complicated enough already.
3. Jan 28, 2015
### TimeRip496
Do you mind telling me a source for such a derivation? Because all the Internet gives is just the derivation of the Schwarzschild radius.
4. Jan 28, 2015
### Staff: Mentor
It's just a conversion factor from mass units to length units; $Gm / c^2$ converts the mass $m$ to an equivalent length. The Schwarzschild radius corresponding to $m$ is just twice that equivalent length.
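The conversion can be made concrete in a few lines. The constants and the solar-mass example below are my own illustrative additions, not from the thread:

```python
# Geometric-units conversion discussed above: M = G m / c^2 turns a mass
# into an equivalent length, and the Schwarzschild radius is twice that.

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def mass_to_length(m_kg):
    """Equivalent length (metres) of a mass: M = G m / c^2."""
    return G * m_kg / c**2

def schwarzschild_radius(m_kg):
    """r_s = 2 G m / c^2, twice the equivalent length."""
    return 2 * mass_to_length(m_kg)

m_sun = 1.989e30                      # kg
r_s = schwarzschild_radius(m_sun)     # roughly 2.95 km for the Sun
```

In units with G = c = 1, both functions reduce to the identity (up to the factor of 2), which is exactly why the substitution removes the clutter from the equations.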
https://brilliant.org/discussions/thread/fibonacci-help-me-2/
# Fibonacci ??? Help me
Has anyone noticed that when we take 4 consecutive numbers in the Fibonacci sequence, f(x), f(x+1), f(x+2), f(x+3), we have:
f(x) * f(x+3) - f(x+1) * f(x+2) =1 or -1
For example: 2 * 8 - 3 * 5 =1
P.S.: It is just my observation; I am not sure if it is always true.
Note by Khoi Nguyen Ho
1 year, 12 months ago
Sort by:
That is a great observation.
Let me suggest a way of continuing:
Can you list out the values of $$x$$ where the expression is 1, and the values of $$x$$ where the expression is -1?
Do you know Binet's formula which gives you the value of $$f(x)$$? Staff · 1 year, 11 months ago
oh yeah, just plugging x, x+1, x+2, x+3 into Binet's formula and subtracting the two products, I found that f(x) * f(x+3) - f(x+1) * f(x+2) = -1 * (-1)^x. Great. Thank you. · 1 year, 11 months ago
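The identity is easy to check numerically. A small sketch (with the usual convention f(1) = f(2) = 1, matching the example 2 * 8 - 3 * 5 = 1 at x = 3):

```python
# Check f(x)*f(x+3) - f(x+1)*f(x+2) == -(-1)**x for the Fibonacci numbers.

def fib(n):
    """n-th Fibonacci number with fib(1) = fib(2) = 1."""
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

for x in range(1, 30):
    lhs = fib(x) * fib(x + 3) - fib(x + 1) * fib(x + 2)
    assert lhs == -(-1) ** x
```

So the expression alternates between +1 (odd x) and -1 (even x), which is exactly the pattern Calvin's hint about listing the values of x was pointing at.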
https://singingdrjosh.com/anomalisa-stream-wvzg/bellman-equation-paper-1431a9
# bellman equation paper
In the design of control systems for industrial applications, it is important to achieve a certain level of fault tolerance. The paper presents a fault-tolerant scheme for a stochastic dynamic system consisting of a number of internal components, where each component can fail at any time with a prescribed probability; the failures are modeled by independent Bernoulli random variables with a given success probability, and initially the system is assumed to have no faulty components. Three courses of action are defined to troubleshoot the faulty system: (i) let the system operate with faulty components, at no implementation cost; (ii) inspect the system to detect the number of faulty components, at the cost of inspection; and (iii) repair the system, at the cost of repairing the faulty components.
The Hamilton–Jacobi–Bellman (HJB) equation is a partial differential equation which is central to optimal control theory. In the Bellman equation, the value function Φ(t) depends on the value function Φ(t+1). The main difference between optimal control of linear systems and nonlinear systems lies in that the latter often requires solving the nonlinear HJB equation instead of the Riccati equation (Abu-Khalaf and Lewis, 2005). The optimal solution of the approximate model is obtained from the Bellman equation, iteratively over a finite number of steps; the computational complexity remains unchanged at each iteration and does not increase with time. Two numerical examples are presented to demonstrate the results in the cases of fixed and variable rates of inspection and repair; with the variable rate, the near-optimal action changes sequentially with time.
For controlled backward stochastic partial differential equations and Infinite time horizons on title above here... Control and viscosity solution of a degenerate backward spde the standard Q-function used in Learning... A fault-tolerant control include power systems and aircraft flight control systems for industrial Applications, 435-446: classical viscosity! And refer, respectively, to improve the search results or fix bugs with a Large number of processors. Number of points in the operating mode or faulty s }: s ∈:. 37 Reinforcement Learning course at the School of AI solution to POMDP variance of importance sampling,., \Psi ) ( x, t ) depends on the Existence of optimal Feedback stochastic! First option is to repair the faulty processors at time that are.! Hamilton–Jacobi–Bellman equation to prove it easily variable, and,, are independent Bernoulli random are. For Super-Parabolic backward stochastic differential equations in ℝ Super-Parabolic backward stochastic integral partial differential equations with random and! Find the shortest path in a graph of 51 discrete values to parameterize value... Methods developed in [ ■ ] ) is obtained iteratively over a finite number of papers books! Random Martingale Measure with random jumps and books which Bellman wrote is quite.... One-Dimensional BSDEs with Semi-linear growth and general growth generators thus, the value function is obtained over. 2/H ∞ control with partial information Constrained stochastic LQ optimal control with random coefficients to access collection... 
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9125754237174988, "perplexity": 1356.2743597663343}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038464065.57/warc/CC-MAIN-20210417222733-20210418012733-00210.warc.gz"} |
https://www.physicsforums.com/threads/emf-induced-in-rotating-rod-inside-uniform-magnetic-field.964615/ | # EMF induced in rotating rod inside uniform magnetic field
#### songoku
1. Homework Statement
A 40 cm rod is rotated about its centre inside a region of uniform magnetic field of 6.4 T. Given that the speed of rotation is 15 rad/s, find potential difference between the centre and either end of the rod
2. Homework Equations
emf = - ΔΦ / Δt
ω = 2π / T
3. The Attempt at a Solution
emf = - B cos θ · ΔA / Δt = - B · πr² / T
I just need to plug in the numbers with r = 20 cm (since that is the distance from the centre to either end of the rod)?
Thanks
#### Delta2
In one full period T, the radius of the rod, which is $r=20cm$ (since $d=40cm$ is the diameter), sweeps out the area of a full circle, which is $\pi r^2$. You can use the diameter, but then you'll have to take the formula $\pi\frac{d^2}{4}$ for the area of the circle.
#### songoku
In one full period T, the radius of the rod, which is $r=20cm$ (since $d=40cm$ is the diameter), sweeps out the area of a full circle, which is $\pi r^2$. You can use the diameter, but then you'll have to take the formula $\pi\frac{d^2}{4}$ for the area of the circle.
For the time, do I use the full period because half of the rod travels a full circle in one full period, or do I use half of the period because the whole rod covers one full circle in half a period?
Thanks
#### Delta2
For the time, do I use the full period because half of the rod travels a full circle in one full period, or do I use half of the period because the whole rod covers one full circle in half a period?
Thanks
You use the full period for half rod, otherwise if you follow the 2nd approach you find the EMF between the two ends of the rod. But the problem asks for the EMF between one end and the center, that's why we have to take the area that the half rod covers in one full period.
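Putting numbers to this approach (a quick sketch, using the values from the problem statement and $T = 2\pi/\omega$):

```python
import math

B = 6.4        # T, uniform magnetic field
omega = 15.0   # rad/s, angular speed
R = 0.20       # m, centre to either end (half the 40 cm rod)

T = 2 * math.pi / omega           # one full period
emf = B * math.pi * R**2 / T      # flux swept by the half rod over one period
# Equivalent closed form: emf = (1/2) * B * omega * R^2
assert math.isclose(emf, 0.5 * B * omega * R**2)
print(round(emf, 2))  # 1.92 (volts, centre to either end)
```

So the potential difference between the centre and either end comes out to about 1.92 V.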
#### songoku
You use the full period for half rod, otherwise if you follow the 2nd approach you find the EMF between the two ends of the rod. But the problem asks for the EMF between one end and the center, that's why we have to take the area that the half rod covers in one full period.
If the question asks for the emf between the two ends of the rod, will the answer be zero because both ends are at the same potential and the difference = 0?
Thanks
#### Delta2
If the question asks for the emf between the two ends of the rod, will the answer be zero because both ends are at the same potential and the difference = 0?
Thanks
Yes, the emf between the two ends is zero.
#### songoku
Thank you very much
"EMF induced in rotating rod inside uniform magnetic field"
• Posted
Replies
33
Views
9K
• Posted
Replies
6
Views
271
• Posted
Replies
13
Views
4K
• Posted
Replies
9
Views
1K
• Posted
Replies
1
Views
2K
• Posted
Replies
1
Views
12K
• Posted
Replies
13
Views
8K
• Posted
Replies
1
Views
5K
http://math.stackexchange.com/questions/213316/shock-waves-characteristics | shock waves characteristics
I'm trying to solve $u_t + u^2u_x = 0$ with $u(x, 0) = 2 + x$.
I'm thinking of proceeding by characteristics, where from the above we have $\frac{dx}{dt} = 1$ and $\frac{dy}{dt} = u^2$, but I'm not sure if this will help. This comes from the shock waves idea.
Here's what I have
$u_t + u^2 u_x = 0$
$q + u^2 p = 0$
$u^2 p + q = 0$
$\frac{dx}{u^2} = \frac{dy}{1}$
and $\frac{u^{-1}}{-1} + C = y$
then $u(x,t)^{-1} + C = y$
is this correct?
Please learn to $\LaTeX$ format your questions. There are tutorials on the web and you can see this. Presumably ut is $u_t=\frac {\partial u}{\partial t}$ but I am not sure how to parse u2ux. Is it $\frac {\partial^2 u}{\partial x^2}?$ – Ross Millikan Oct 14 '12 at 0:19
Given that the method of characteristics gives $\frac{dy}{dt} = u^2$, I think she means $u^2u_x$. – Michael Albanese Oct 14 '12 at 0:24
@Michael, that is correct – mary Oct 14 '12 at 1:02
As timur comments, the characteristic equations are wrong. They should be stated against a parameter, not the involved variables. You are mistaking $t$ for $y$. The construction of the characteristics is based on the supposition that if $x = x(\eta)$ and $t = t(\eta)$, then $$\frac{d}{d\eta}u\big(x(\eta),t(\eta)\big) = u_x x'(\eta) + u_t t'(\eta) = u^2 u_x + u_t = 0$$ and then one says $x'(\eta) = u^2$, $t'(\eta) = 1$, $u'(\eta) = 0$. See my answer for a full analysis. – Pragabhava Oct 16 '12 at 8:31
Also, if I understand your work, the method you are using is for solving fully nonlinear first order PDEs, and you are using it wrong. Your problem is quasilinear, and there is no need to introduce $p$ and $q$. These are only introduced when the derivatives of $u$ are involved nonlinearly in the equation. I strongly suggest you study the first chapter of John's Partial Differential Equations, as I believe you are very confused. Any doubts, we can try to help. – Pragabhava Oct 16 '12 at 8:40
The quasilinear first order PDE $$a\big(x,y,u(x,y)\big) u_x(x,y) + b\big(x,y,u(x,y)\big)u_y(x,y) = c\big(x,y,u(x,y)\big)$$ where $a,\,b,\,c \in C^1$ with data $\mathcal{C}(\xi) = \big(x(\xi), y(\xi), u(\xi)\big) \in C^1$ and with $$\begin{vmatrix} \frac{dx}{d\xi} & a \\ \frac{dy}{d\xi} & b\end{vmatrix} \neq 0$$ has a unique solution near $\mathcal{C}$ given by \begin{align} \frac{d x}{d \eta} &= a & x\big|_{\eta = 0}&= x(\xi)\\ \frac{d y}{d \eta} &= b & y\big|_{\eta = 0}&= y(\xi)\\ \frac{d u}{d \eta} &= c & u\big|_{\eta = 0}&= u(\xi)\\ \end{align}
For proof and geometrical interpretation, see F. John's Partial Differential Equations (§1.4)
In your case, $\mathcal{C}(\xi) = \big(\xi,0,\xi+2\big)$. Near $\eta \sim 0$ $$\begin{vmatrix} \frac{dx}{d\xi} & a \\ \frac{dy}{d\xi} & b\end{vmatrix} = \begin{vmatrix} 1 & u^2 \\ 0 & 1\end{vmatrix} = 1$$ and the solution is unique.
The system of ODE's is \begin{align} \frac{d x}{d \eta} &= u^2 & x\big|_{\eta = 0}&= \xi\\ \frac{d t}{d \eta} &= 1 & t\big|_{\eta = 0}&= 0\\ \frac{d u}{d \eta} &= 0 & u\big|_{\eta = 0}&= \xi + 2\\ \end{align} with solution $$t = \eta, \quad u = \xi + 2, \quad x = (\xi + 2)^2 \eta + \xi.$$
The characteristics are $t = \frac{x - \xi}{(\xi + 2)^2}$ hence $\xi = -2$ is a special point. As $\xi \rightarrow \infty$, $t \rightarrow 0$. As $\xi \rightarrow -\infty$, $t \rightarrow 0$. As $\xi \rightarrow -2$, $t \rightarrow \infty$.
This, of course, means that there is no solution when the characteristics meet.
A simple explanation for this is that the transformation $$(x,t) \rightarrow (\xi,\eta)$$ is invertible iff $$\begin{vmatrix} \partial_\xi x & \partial_\eta x \\ \partial_\xi t & \partial_\eta t \end{vmatrix} = 1 + 4\eta + 2\xi \eta \neq 0$$ meaning there is no solution when $\xi = -\frac{1 + 4 \eta}{2\eta}$ or, inverting the transformation, when $$t = - \frac{1}{4(x+2)}$$
(figure: the characteristic lines in the $(x,t)$-plane)
Lastly, inverting for $\xi$
$$\xi = \frac{-(1 + 4t) \pm \sqrt{1 + 4t(2 + x)}}{2 t}$$
and
$$u(x,t) = \frac{-1 \pm \sqrt{1 + 4t(2 + x)}}{2 t}.$$
In order to determine the correct sign, we must look at the initial condition. For the minus sign $\lim_{t \rightarrow 0} u(x,t) = -\infty$, while the plus sign gives the correct answer. Hence
$$u(x,t) = \frac{-1 + \sqrt{1 + 4t(2 + x)}}{2 t}.$$
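A quick numerical sanity check of this closed form (a sketch, not part of the original derivation): central differences confirm $u_t + u^2 u_x = 0$ away from the caustic, and $u(x,t) \to 2 + x$ as $t \to 0^+$.

```python
import math

def u(x, t):
    return (-1.0 + math.sqrt(1.0 + 4.0 * t * (2.0 + x))) / (2.0 * t)

x0, t0, h = 1.0, 0.1, 1e-5
u_t = (u(x0, t0 + h) - u(x0, t0 - h)) / (2 * h)  # central difference in t
u_x = (u(x0 + h, t0) - u(x0 - h, t0)) / (2 * h)  # central difference in x
residual = u_t + u(x0, t0) ** 2 * u_x            # should vanish for a solution
assert abs(residual) < 1e-6

# Initial condition u(x, 0) = 2 + x, recovered in the limit t -> 0+
assert abs(u(1.0, 1e-8) - 3.0) < 1e-5
```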
Thank you so much for your explanation. – mary Oct 16 '12 at 9:55
https://mathoverflow.net/questions/352752/quotient-banach-space-whose-dual-map-sends-the-ball-onto-a-given-convex-subset | # Quotient Banach space whose dual map sends the ball onto a given convex subset
Let $X$ be a Banach space and let $A$ be a closed, convex and balanced subset of $B_{X^{*}}$ (where $B_{X^{*}}$ denotes the closed unit ball of the dual $X^{*}$). Is there a closed subspace $M$ of $X$ such that $Q^{*}_{M}$ maps $B_{(X/M)^{*}}$ onto $A$, where $Q_{M}:X\rightarrow X/M$ is the quotient map?
• What happens when $X=\mathbb{R}^n$? Suppose $A$ is any closed convex balanced set with nonempty interior, other than the ball itself. If $M \ne 0$ then $Q_M^*$ has rank less than $n$ and so its image cannot cover $A$, and if $M=0$ then $Q_M^*$ is the identity map and it maps the ball to itself. – Nate Eldredge Feb 15 at 13:33
• Thanks, Nate. What happens if $X$ is infinite-dimensional? – Dongyang Chen Feb 15 at 14:36
• I was just thinking about that. More generally, the image of $Q_M^*$ will always equal the annihilator of $M$, right? If $M \ne 0$ this is a proper closed subspace. So take any $A$ which is not contained in a proper closed subspace (e.g. any $A$ with nonempty interior) and is not the ball, and I think that is a counterexample. – Nate Eldredge Feb 15 at 14:39
• Indeed, $Q^{*}_{M}B_{(X/M)^{*}}$ is equal to the closed unit ball of the annihilator of $M$. If we take $A$ with nonempty interior, then $M=0$. – Dongyang Chen Feb 15 at 14:57
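A finite-dimensional sketch of the points raised in the comments (my own illustration, identifying duals with $\mathbb{R}^n$ via the standard inner product; here $X = \mathbb{R}^3$ and $M = \operatorname{span}(e_1)$):

```python
import numpy as np

# Q : R^3 -> R^3/M, identified with R^2 by dropping the first coordinate
Q = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
Qstar = Q.T  # the dual (adjoint) map R^2 -> R^3

# The image of Q* is the annihilator of M: every Q*y vanishes on e_1
e1 = np.array([1.0, 0.0, 0.0])
for y in (np.array([1.0, 0.0]), np.array([0.3, -2.0])):
    assert np.dot(Qstar @ y, e1) == 0.0

# rank(Q*) = 2 < 3, so Q*(ball) lies in a proper subspace and cannot
# cover any A with nonempty interior in R^3
assert np.linalg.matrix_rank(Qstar) == 2
```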
http://export.arxiv.org/abs/2102.11785 | nucl-th
# Title: Bjorken flow attractors with transverse dynamics
Abstract: In the context of the longitudinally boost-invariant Bjorken flow with transverse expansion, we use three different numerical methods to analyze the emergence of the attractor solution in an ideal gas of massless particles exhibiting a constant shear viscosity to entropy density ($\eta/s$) ratio. The fluid energy density is initialized using a Gaussian profile in the transverse plane, while the ratio $\chi = P_L / P_T$ between the longitudinal and transverse pressures is set at initial time to a constant value $\chi_0$ throughout the system using the Romatschke-Strickland distribution. We highlight the transition between the regimes where the longitudinal and transverse expansions dominate. We find that the hydrodynamization time required for the attractor solution to be reached increases with the distance from the origin, as expected based on the properties of the 0+1D system defined by the local initial conditions. We argue that hydrodynamization is predominantly the effect of the longitudinal expansion, being significantly influenced by the transverse dynamics only for small systems or for large values of $\eta/s$.
Comments: 6 pages, 3 figures
Subjects: Nuclear Theory (nucl-th); High Energy Physics - Phenomenology (hep-ph); Fluid Dynamics (physics.flu-dyn)
Cite as: arXiv:2102.11785 [nucl-th] (or arXiv:2102.11785v1 [nucl-th] for this version)
## Submission history
From: Victor Eugen Ambruş [view email]
[v1] Tue, 23 Feb 2021 16:42:11 GMT (729kb,D)
https://math.stackexchange.com/questions/1125213/definable-real-numbers | # Definable real numbers
A real number $a$ is first-order definable in the language of set theory, without parameters, if there is a formula $\phi$ in the language of set theory, with one free variable, such that $a$ is the unique real number such that $\phi(a)$ holds in the standard model of set theory.
A few lines later we find the statement:
Assuming they form a set, the definable numbers form a field....
But, since they are a subset of the set of real numbers, why shouldn't they be a set?
Coming back from this question to the definition, I have another doubt: if ZFC is consistent, doesn't this mean that every set-theoretic object (and so any real number) is definable in some model?
Reading the whole article does not lessen my confusion, and the "talk" page is too difficult for me, so it does not help.
More generally, this Wikipedia article is "disputed" and has many "!", so I doubt that it is reliable.
A brief surf on the web gave me many pages on this subject, but I've found nothing that I can understand and that answers the question: can we give a well-defined notion of a definable real number?
• See also the answer to this post at MathOverflow. – Mauro ALLEGRANZA Jan 29 '15 at 17:08
There are several problems here:
1. There is not "the standard model of set theory". There are notions of "standard models" (note the plural), but there is no "the standard model". With respect to the real numbers there are several possible scenarios:
• It might be the case that there is a standard model containing all the reals. This model, if so, has to be uncountable.
• It might be that every real is a real number of some standard model, but there is no standard model containing all the reals.
• It might be that there are real numbers which cannot be members of any standard model, and some that can be.
• It might be that there are no standard models at all.
So this is really a delicate issue here. But in any case, one shouldn't qualify "standard model of set theory" with "the". At all.
2. The notion of "definable real number" often means definable over $\Bbb R$ as a real number in a language augmented by all sorts of things we are used to having in mathematics: integrals, sines and cosines, etc. In that case, there are generally only countably many definable reals, since there are only countably many formulas to define reals with.
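To make the counting step concrete (an illustration only — the alphabet below is a toy stand-in for the symbols of set theory): formulas are finite strings over a countable alphabet, and such strings can be enumerated, so only countably many reals can be singled out this way.

```python
from itertools import count, islice, product

ALPHABET = sorted("()∈=¬∧∨→∃∀xv0123456789")  # a toy finite symbol set

def all_strings():
    """Enumerate every finite string: all of length 1, then length 2, ..."""
    for n in count(1):
        for letters in product(ALPHABET, repeat=n):
            yield "".join(letters)

# Any fixed string shows up at some finite index in the enumeration,
# so the collection of candidate "defining formulas" is countable.
assert "∃x" in islice(all_strings(), 600)
```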
Once you add the rest of the set theoretic universe into play, you can have that every real number is definable. This is a delicate issue, and known to be consistent, see Joel Hamkins, David Linetsky and Jonas Reitz's paper "Pointwise Definable Models of Set Theory" (and Joel Hamkins' blog post on the paper which has a nice discussion on the topic).
3. And this brings us to the problem at hand. It might be the case that the collection of all definable reals is not itself definable internally. Namely, we can recognize whether a real number is definable or not; but there is no formula whose content is "$x$ is a definable real number". This can be the case because we cannot match a real number to its definition, and we cannot really quantify over formulas to say "There exists a definition".
But sometimes we are in a case where we can in fact identify the definable real numbers, either we know that they form a set (which was defined using some other formula) or that we managed to circumvent the inability to match a real to its definition by adding further assumptions that make things like that possible. And in those cases the set of definable reals, the Wikipedia article states, is a subfield of $\Bbb R$ of that model of set theory.
• When the Wikipedia article says "the standard model" I don't think the set theorist's notion of "standard model" is even close to what it intends. I think it's trying to refer to an "intended interpretation" of the language of set theory, that is, an assumed Platonically existing universe of actual sets. There are numerous problems with that concept, but I don't think it illuminates those problems to point to the (more or less unrelated) technical use of the words "standard model". – Henning Makholm Jan 29 '15 at 17:24
• Henning, that might be; doesn't mean that one should say things like "the standard model of set theory", since unlike the natural numbers or the real numbers, it's nearly impossible for set theorists to decide what is "the intended interpretation" (and most mathematicians simply don't care). – Asaf Karagila Jan 29 '15 at 17:27
• Sure sure, hence the "numerous problems" I alluded to. – Henning Makholm Jan 29 '15 at 17:30
• Asaf, do you know an example of a model of ZFC where the collection of definable reals in the model does not belong to the model? – user203787 Jan 29 '15 at 17:46
• Never mind, the pointwise definable reals in any model always form a definable set in that model. – user203787 Jan 29 '15 at 18:17
https://www.physicsforums.com/threads/gm-m-mass-of-earth-g-gravitational-constant.125978/ | GM - M mass of earth , G gravitational constant
1. Jul 13, 2006
dopey9
I'm trying to find the maximum distance of a spacecraft from the Earth, where GM is used (M is the mass of the Earth and G is the gravitational constant)....
I was just wondering if there is a general formula for this?
2. Jul 13, 2006
Staff: Mentor
Not clear what it is you are looking for. Please state the exact problem you are trying to solve.
3. Jul 13, 2006
HallsofIvy
Staff Emeritus
Unless there is some additional information, there is NO "maximum distance of a spacecraft from earth". Are you thinking of the case where the spacecraft is in orbit around the earth and you are given the position and speed of the spacecraft? In that case, you could solve for the apogee (the point in the orbit farthest from earth).
4. Jul 13, 2006
dopey9
Basically I'm meant to get to this formula, which they have given:
$$\frac{1}{R_a} = \frac{8GM}{R^2 (V_a + V)^2} - \frac{1}{R}$$
where $V_a$ and $V$ are the speeds of the two spacecraft
G is gravitational constant
M is mass of earth
R is the distance from the centre of the earth
$R_a$ is the max distance of the wreckage from the earth... because the two spacecraft collided and were stuck together as one lump of wreckage. This part is continued from another question I posted earlier on spacecraft, which I have solved... but on this one I've come close to getting the answer, and I don't know how they got the 8. Also, I got a hint that a mass $M_2$ orbits around a fixed mass $M_1$ according to the formula $1/r = A\cos\theta + G M_1 M_2^2 / L^2$
So basically I've been given the formula to derive, but I've tried and can't get to it.
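For what it's worth, here is one way the 8 can appear (my own guess at the intended setup, not stated in the thread): if the two craft have equal masses and collide while moving the same way, a perfectly inelastic collision gives the wreckage speed $(V_a + V)/2$; conservation of energy and angular momentum for a tangential velocity then gives $1/R_a = 2GM/(R^2 v_w^2) - 1/R$, and substituting $v_w = (V_a+V)/2$ turns the 2 into an 8. A numerical sketch with made-up values:

```python
import math

GM = 3.986e14           # m^3/s^2, Earth's GM
R = 7.0e6               # m, distance from Earth's centre at the collision
Va, V = 8500.0, 7500.0  # m/s, tangential speeds of the two craft (made up)

v_w = (Va + V) / 2.0    # wreckage speed, equal-mass perfectly inelastic collision

# Apogee from conservation of energy and angular momentum:
inv_Ra = 2.0 * GM / (R**2 * v_w**2) - 1.0 / R
# ... identical to the quoted formula, since (Va + V)^2 = 4 v_w^2:
assert math.isclose(inv_Ra, 8.0 * GM / (R**2 * (Va + V)**2) - 1.0 / R)

# Sanity check: specific orbital energy agrees at R and at the apogee R_a
Ra = 1.0 / inv_Ra
assert Ra > R                     # v_w exceeds circular speed here
v_a = R * v_w / Ra                # angular momentum: R v_w = Ra v_a
assert math.isclose(v_w**2 / 2 - GM / R, v_a**2 / 2 - GM / Ra, rel_tol=1e-9)
```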
https://www.scienceforums.net/profile/140766-ragordon2010/ | # RAGORDON2010
1. ## An Invitation to Visit My Blogs
The Forum members are invited to visit two blogs I have created that expand on my earlier postings under the Special Relativity and Quantum Mechanics categories: “Special Relativity from the Inside Out” LINK DELETED “Introduction to Schrodinger Ensemble Theory” LINK DELETED
2. ## Schrodinger Ensemble Theory
Introduction to Schrodinger Ensemble Theory

Some years ago, I chose to pursue a different approach to the study of the time-independent Schrodinger equation, particularly as it is commonly applied to the following situations: a particle in an infinite potential well, a particle in a finite potential well, the harmonic oscillator, and the hydrogen atom. The first group of examples I will discuss are all one-dimensional. The work will generalize when I deal with the hydrogen atom.

My concept is simple. For a given potential $V(x)$, suppose $\psi(x)$ is the solution to the Schrodinger equation in the form $E\psi = -\frac{(h/2\pi)^2}{2m}\,\frac{d^2\psi}{dx^2} + V\psi$. Suppose further that an ensemble of identical, non-interacting particles is distributed in real space at time $t=0$ such that the fraction of particles in the region $(x, x + dx)$ is given by $\psi\psi^*\,dx$. Suppose, in addition, that these particles exhibit an initial momentum distribution such that the fraction of particles with momentum in the range $(p, p + dp)$ is given by $\phi\phi^*\,dp$, where $\phi(p)$ and $\psi(x)$ are Fourier transforms of each other according to the usual rules.

I then require that the fractional density functions be consistent across the two spaces - real space and momentum space. That is, I insist that the fraction of ensemble particles initially positioned in the region $(x, x + dx)$ equals the fraction of ensemble particles with initial values of momentum in the region $(p, p + dp)$. That is, I require that my consistency relationship $\psi(x)\psi^*(x)\,dx = \phi(p)\phi^*(p)\,dp$ is satisfied.

Finally, I use this consistency relationship to seek a momentum function $p(x)$. On the one hand, it may be possible to find $p(x)$ by inspection or via trial and error. Otherwise, it might be possible to integrate each side of the relationship separately and isolate $p(x)$ from the result. Even then, there will still be some freedom left to decide on the direction of the momentum vectors.
Please note that for these Schrodinger ensembles, total particle energy is not a "sharp" variable. The expectation energy averaged across the entire ensemble remains the eigenvalue E, but the energy of any individual particle is always computed from $$p(x)^2/2m + V(x)$$ in the usual manner. Also note that since $$\psi(x)$$ and $$\phi(p)$$ only represent initial conditions placed on the ensemble, the subsequent development of the ensemble over time is determined by applying Liouville's theorem to the ensemble. I have not found a way to develop Schrodinger Ensemble Theory for the time-dependent Schrodinger equation.
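As a worked illustration of the consistency relationship (my own example, not part of the original post), consider the harmonic oscillator ground state, writing $$\hbar = h/2\pi$$. The position and momentum wave functions are

$$\psi(x) = \left(\frac{m\omega}{\pi\hbar}\right)^{1/4} e^{-m\omega x^2/2\hbar}, \qquad \phi(p) = \left(\frac{1}{\pi m\omega\hbar}\right)^{1/4} e^{-p^2/2m\omega\hbar}.$$

Trying $$p(x) = m\omega x$$ by inspection gives $$dp = m\omega\,dx$$, and

$$\phi\phi^*\,dp = \left(\frac{1}{\pi m\omega\hbar}\right)^{1/2} e^{-m\omega x^2/\hbar}\,m\omega\,dx = \left(\frac{m\omega}{\pi\hbar}\right)^{1/2} e^{-m\omega x^2/\hbar}\,dx = \psi\psi^*\,dx,$$

so the consistency relationship is satisfied. Each particle then carries energy $$p(x)^2/2m + V(x) = m\omega^2 x^2$$, which varies across the ensemble, while the ensemble average $$m\omega^2\langle x^2\rangle = \hbar\omega/2$$ recovers the eigenvalue $$E_0$$, in line with the remark above that individual energies are not sharp but the expectation remains $$E$$.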
3. ## Another way of looking at Special Relativity
Must be my browser. Regarding the comments, I ask the Forum to be patient. I think most concerns being raised will self-resolve eventually. A little more background on Related Experiments, and then I will focus on the question of invariance. (Most of us are home-bound anyway because of this damn virus, so it's probably healthy to have some anonymous person on the outside to argue with.)

Note to all who view this thread - The count of views to this thread surpasses 60,000. I take this view count very seriously. It is my intention that every one of my posts be an accurate and clear reflection of my thinking. To this end, if I draft a post on, say, Monday, the draft is read/edited and read/edited until, say, Thursday or Friday, when I finally submit it to the Forum.

Unfortunately, this discipline does not hold for my responses to individual comments from Forum members. Those responses tend to be “off the cuff” and ill thought out. In particular, my disrespectful comment on a frame-driven physics in the context of the expression: $$(dS/2)^2 + (vdt/2)^2 = (cdt/2)^2$$. I interpret this expression as pointing to the formation of “Minkowski ellipsoids” that mark off the progress of a particle as it moves along its path of motion under the influence of applied fields. Instead of commenting the way I did, what I should have said, upon reflection, is that neither these ellipsoids nor their defining expressions are intended to be viewed as transformation invariants across a pair of related experiments, or in the conventional sense, across the associated “rest” and “moving” frames of reference. I hope all of this will become clearer to the Forum in my future posts.

Special Relativity - A Fresh Look, Part 5

This post begins with the Related Experiments treatment of the “In-Line” Relativistic Doppler Effect and follows with the Related Experiments treatment of the “Transverse” Relativistic Doppler Effect.
In his 1905 paper, Einstein* begins his analysis by imagining a monochromatic source placed at rest at a point some distance from the origin of his “rest” frame, Frame K. If we only wish to focus on the in-line Doppler effect, we may limit the positioning of the source to somewhere along the Frame K negative x-axis. *(ref. “Einstein’s Miraculous Year - Five Papers That Changed the Face of Physics”, edited by John Stachel and published by Princeton University Press, 1998, pgs. 146-149.)

Following Einstein’s approach, we write the wave function argument for a light wave emanating from the source and traversing in the positive x direction with frequency f, period T = 1/f, and wavelength w = c/f = cT, as it would be recorded by a stationary detector positioned at the origin: $$2(\pi)(f)(t - x/c)$$.

For our Related Experiments analysis, we assign the above set-up to our image experiment. We place a monochromatic source with frequency f’, period T’ = 1/f’, and wavelength w’ = c/f’ = cT’ at rest at a distant point somewhere along the negative x’-axis, and we place a stationary detector at the origin. We expect that the detector will record a wave function argument equal to $$2(\pi)(f’)(t’ - x’/c)$$.

Moving over to our object experiment, we use a similar set-up, but here we place the source in motion with velocity v in the direction of the stationary detector. We now determine what the detector would record in the object experiment as follows: We substitute for t’ and x’ in the argument $$2(\pi)(f’)(t’ - x’/c)$$ using the Lorentz transformations in the form: $$t’ = (\gamma)(t - vx/c^2)$$ and $$x’ = (\gamma)(x - vt)$$, with $$\gamma$$ defined in the usual way. After some simplification, we will find that the stationary detector records a wave with argument $$2(\pi)(\gamma)(f’)(1 + v/c)(t - x/c)$$, giving a frequency of $$(\gamma)(f’)(1 + v/c)$$.
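Spelling out the simplification step (my arithmetic, using the transformations quoted above):

$$t’ - \frac{x’}{c} = \gamma\left(t - \frac{vx}{c^2}\right) - \frac{\gamma(x - vt)}{c} = \gamma\left[\left(1 + \frac{v}{c}\right)t - \frac{1}{c}\left(1 + \frac{v}{c}\right)x\right] = \gamma\left(1 + \frac{v}{c}\right)\left(t - \frac{x}{c}\right),$$

so the recorded argument is $$2\pi\gamma f’(1 + v/c)(t - x/c)$$, a wave of frequency $$f = \gamma f’(1 + v/c)$$.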
This represents the relativistic Doppler shift for a source moving toward a fixed observer (or equivalently, for an observer moving toward a fixed source). To determine the frequency transformation for the case where the source moves away from a fixed observer (or equivalently, where the observer moves away from the fixed source), we need only replace v in the above with -v.

For the case of the Transverse Relativistic Doppler Effect (TDE), we follow Einstein’s general analysis, but, for Frame K, we position the source at the origin and place the detector at rest at an arbitrary point, point P, on the positive z-axis some distance from the origin. For source frequency f, period T = 1/f, and wavelength w = c/f = cT, we would expect that this detector will record a plane wave emanating from the source with argument $$2(\pi)(f)(t - z/c)$$.

For our Related Experiments analysis, we assign Einstein’s Frame K set-up to our image experiment. We place a monochromatic source with frequency f’, period T’ = 1/f’, and wavelength w’ = c/f’ = cT’ at rest at the origin, and we place the detector at rest on the positive z’-axis at a point P some distance from the origin. As in the Einstein model, we expect that this detector will record a plane wave emanating from the source with argument $$2(\pi)(f’)(t’ - z’/c)$$.

Moving over to our object experiment, we use a similar set-up, but here we locate the source somewhere along the negative x-axis and set it in motion with velocity v in the positive x direction. We now ask how the wave emitted by the moving source as it passes the origin would appear to the detector at point P. We substitute for t’ and z’ in the argument $$2(\pi)(f’)(t’ - z’/c)$$ using the Lorentz transformations in the form: $$t’ = (\gamma)(t - vx/c^2)$$ and z’ = z, with $$\gamma$$ defined in the usual way. After some simplification, we find that the detector records a plane wave with argument $$2(\pi)(f’)(\gamma)(t - (vx/c + z/(\gamma))/c)$$.
This represents a plane wave with frequency $$f = f’(\gamma)$$ and direction cosines (l, m, n), with l = v/c, m = 0, and n = $$1/(\gamma)$$. We see that the light detected by the receiver is blue-shifted by a factor of gamma. Also, we see that the light beam will appear to be emanating from a displaced source, an example of “aberration”. Let $$\theta$$ = angle between the light beam and the x-axis. Let $$\phi$$ = angle between the light beam and the z-axis. Then $$\cos(\theta) = l = v/c$$, and $$\cos(\phi) = n = 1/(\gamma)$$. Since $$\theta$$ and $$\phi$$ are complementary angles, $$\cos(\phi) = \sin(\theta)$$, and we would expect $$(l)^2 + (n)^2 = 1$$, which is true here.
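Spelling out that final check (my arithmetic, using $$1/\gamma^2 = 1 - v^2/c^2$$):

$$l^2 + n^2 = \frac{v^2}{c^2} + \frac{1}{\gamma^2} = \frac{v^2}{c^2} + 1 - \frac{v^2}{c^2} = 1.$$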
10. ## Another way of looking at Special Relativity
I wish to continue presenting my insights into a different view of Special Relativity. I took a cue from an invitation from Swansont to open a new thread in Speculations, where I posted the following. I found today that Strange has stopped that thread and it seems that I am being directed back to this one. So, for the sake of consistency, I am repeating this post here and will follow up shortly with another one.

Special Relativity - a Fresh Look: Overview

A fresh look at the underpinnings of Special Relativity is merited for the following reasons -

1. In earlier posts, I’ve shown how to view SR applications as Related Experiments - a pair of matched experiments in which charged particles are subjected to external electromagnetic fields. In the object experiment, the particle is given an initial velocity v and subjected to fields E and H. In the image experiment, fields E’ and H’ are applied to the particle at rest, where E’ and H’ are the transformed images of E and H under the SR field transformations. The 4-space motion of the particle in the image experiment (t’,x’,y’,z’) will then match up with the transformed 4-space motion (t, x, y, z) of the particle in the object experiment under a Lorentz time and space transformation with parameter v. In approaching SR this way, we avoid any discussions or dependencies on clocks that run slow or fast, and meter sticks that shrink or grow, as we move from one experiment to the other.

2. The Relativistic form of Newton’s Second Law of Motion is a Classical Physics formulation. We are given a set of initial conditions, a set of prescribed forces and a differential equation from which we can compute the position, velocity and energy of the particle for any time in the future to any degree of accuracy, and, if we insert negative values of time, we can compute the position, velocity and energy of the particle for any time in the past to any degree of accuracy.
This is classical Classical Physics - given knowledge of the initial conditions and applied forces, the entire past and future of the particle is completely determinable. Contrast this with the stochastic behavior of Modern Physics, where SR plays a major role in nuclear physics, the physics of high energy particle collisions, and quantum field theory (QFT).

3. The mention of QFT brings me to my final point - QFT speaks of relationships between particles and fields characterized by a series of minute, discrete interactions in which the particles are accelerated slightly or decelerated slightly and/or deflected slightly and/or rotated, twisted or spun slightly. In contrast, conventional SR theory is marked by functions that are everywhere smooth and continuous.

I intend to develop a model of SR which addresses all of the above, stays well within conventional bounds of discussion on the subject, and, here and there, introduces key, defensible ideas. Finally, I ask that the Forum members allow me to retain control over my terminology. For example, I shall refer to Minkowski’s S function as a “Minkowski interval”, and I shall refer to his dS function as a “Minkowski differential interval”.
12. ## Influence of the Universe on Physical Laws
The article that has had the greatest effect on my thinking about physics over the years is “Extended Mach Principle” by Professor Joe Rosen, then at Tel-Aviv University, Israel (AJP, Volume 49, March 1981, pp. 258-264). Of all the fundamental principles Professor Rosen addresses, these three stand out for me -

1. The origin of all laws of physics lies with the universe as a whole.
2. Every single physical property and behavior aspect of isolated systems is determined by the whole universe.
3. If the rest of the universe is taken away leaving only an isolated system, all laws of physics will cease to hold for it, and even space and time will lose their meaning for it.

What I would like to do here is pursue these principles with regard to the following two questions:

1. What role does the surrounding universe play in the decay of a single unstable particle at rest in an inertial frame of reference?
2. What role does the surrounding universe play in the retarded rate of decay of these unstable particles as they move rapidly within this inertial frame of reference?

I’ve always thought that the decay of an unstable particle is the strongest illustration of Eddington’s “Arrow of Time” - There is a BEFORE, there is an instant of NOW, and there is an AFTER. With respect to my Question 1, it is not hard to point to numerous examples of interactions where particle decay is in some way connected to the surrounding environment - an atomic pile comes to mind, so do particles struck by random photons, neutrinos, or miscellaneous other particles, real or virtual, that “exist” in the wilds of the universe. I intend instead to focus on Question 2.

Retarded rate of decay as a function of pure motion is defined by Einstein’s time dilation formula appearing in his Theory of Special Relativity and also by an identical formula appearing in his Theory of General Relativity.
Interestingly enough, the time dilation formula applies to any type of unstable particle regardless of mass, charge, spin or any of the other parameters generally applied to unstable particles by particle physicists, and depends only on a relative velocity v and light speed c. If we exclude the class of retarded decay rates associated with General Relativity on the basis that the Universe is interacting with these unstable particles through gravity, we are left with the class of retarded decays associated with Special Relativity.

I’ve made the point in earlier posts that I believe that Special Relativity Theory belongs firmly in the house of Electromagnetic Theory, including phenomena related to light such as the constancy of light speed in any inertial frame and the Relativistic Doppler Effect. Given this, I would think that retarded decay of speeding unstable particles, a hallmark of SR time dilation, would be in some way connected to the charge and/or magnetic moment, i.e., spin, of the unstable particle.

The theoretical physics community has a large storehouse of weaponry with which to attack this phenomenon - QED, QFT, the Standard Model with its quark/gluon interactions, interactions with the universal background radiation, interactions with fields of passing neutrinos, interactions with the Higgs Boson, the influence of Dark Matter and/or Dark Energy. Just for starters, there are the unstable particles detected down here on the Earth’s surface that are created in collisions between atoms in the upper atmosphere and high-energy particles and gamma rays coming in from outer space. They should decay long before reaching the Earth’s surface. If we approach this problem from the point of view of interactions with external electromagnetic fields, then we might look at interactions with the Earth's magnetic field as well as with miscellaneous electric and magnetic fields in the upper atmosphere.
Another aspect of the problem is that particle decay is a stochastic process. Any single unstable particle can exist over a range of time intervals - all that can be determined in the laboratory is the mean time to decay from observations of many instances. Accordingly, the SR time dilation factor has to be applied to the mean time observation, which becomes even more tenuous when we account for the fact that there will be some statistical distribution in the velocities of the observed particles relative to the laboratory frame. Be that as it may, as I’ve pointed out in earlier posts, I am still hoping for an explanation for retarded particle decay times that goes beyond simply stating that Special Relativity requires it.
13. ## Another way of looking at Special Relativity
Studiot, thank you for bringing the Wangsness material to my attention. Oddly, I think his spherical light shell approach to deriving the Lorentz transformations was the vehicle I was first introduced to as a freshman undergrad. I never was happy with it, and I think this dissatisfaction was a prime motivator for me to seek out Einstein’s original 1905 paper to read what the master actually wrote. Currently, I am working on a post targeted for the General Philosophy category. Perhaps we’ll meet up again over there.
https://xinitrc.de/2014/02/23/Of-rabbits-and-hats.html | Of rabbits and hats
In the last blog post I showed you what you can do with just implementing the interface for a $*$-semiring and then using the matrix over that semiring. Up until now, we can compute the transitive connection relation, the length of the shortest path, the maximum throughput between two nodes, and the reliability of the most reliable path. But what we can’t do right now is produce the path(s) that instantiate these properties. So this is exactly what I will show you in this post.
Regular Expressions
As you might have expected, especially since I wrote it in the last blog post, we will use our new best friend, the $*$-semiring, to do so. We actually come to the premier example of a semiring, the regular expression. You might know regular expressions from other programming languages, which is as helpful as it is unfortunate. Why is that, you might ask, and the answer is simple: There is a significant difference between regular expressions used in theoretical computer science and regular expressions as implemented in programming languages. The latter are strictly more powerful. The nice part is they are also a syntactical superset of the former, and by simply dropping this extra syntax we get exactly the former. So for you, this might result in some unlearning, I can’t help that, sorry.
So for everybody who doesn’t know regular expressions from a computer science curriculum, let’s give a brief explanation.
First we need a finite set of “letters” called an alphabet; usually, for illustrative purposes, one uses the ordinary letters of the usual alphabet, like $a, b, c, d, \ldots$. But any alphabet will do; we could, for example, use the natural numbers up to some number, the edges of a finite graph, or all subsets of $\{1,2,3,4\}$ as letters, it doesn’t matter.
Let $l$ be any letter of the alphabet $A$. Then we can give the following recursive definition of the regular expressions $re$:
$re, re_1, re_2 ::= \emptyset \mid \epsilon \mid l \mid (re_1 + re_2) \mid (re_1 \cdot re_2) \mid (re^{*})$
Let’s go through this and decipher what it means.
1. $\emptyset$ this is simple. If we write nothing, it is a valid regular expression. And since we can’t leave blank space in a definition and think that everybody picks up that this is a significant syntactical element we write this as $\emptyset$.
2. $\epsilon$ we need to construct a word that has no letters; to do that we use the symbol $\epsilon$. You might ask what the difference is between nothing and a word of no letters. It is the same difference as between $\{0\}$ and $\emptyset$: one is a set containing something of no value, the other is simply empty; same goes here.
3. If we have a letter $l$, writing just that letter is a valid regular expression.
4. $re_1 + re_2$ If we have two regular expressions already, we can either use the left or the right; appropriately, this is usually called alternative or choice. The plus-sign is somewhat of a convention for regular expressions, but you might have already guessed what will make its way into our semiring in the end.
5. $re_1 \cdot re_2$ this is called concatenation or sequential composition. It simply states that we can look for something that matches the first regular expression, followed by something that matches the second regular expression. If there is no danger of misreading it, we usually omit the $\cdot$, just like with multiplication.
6. $re^*$ this finally is arbitrary iteration; if we have a regular expression, we can match it an arbitrary number of times.
Usually you define some form of rules that say we can omit some of the parens, by saying $*$ binds stronger than $\cdot$, which in turn binds stronger than $+$. Since this just looks like basic arithmetic, I trust you can follow.
Example
Let’s give an example, suppose we have this regular expression:
$re_{example}=a (bd + ce)$
Then let’s first check that it is a valid regular expression for the alphabet $\{a,b,c,d,e\}$. After stating this, we know that the individual letters $a,b,c,d,e$ are valid regular expressions. Next we combine $b$ and $d$ with $\cdot$ and get $(b\cdot d)$, same for $c$ and $e$. Now we can apply $+$ and get $(b\cdot d+c\cdot e)$. As a last step we combine $a$ and $(b\cdot d+c\cdot e)$ and get $a\cdot (b\cdot d+c\cdot e)$. Omitting the $\cdot$’s, we get exactly the regular expression above, so it is valid.
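To make the example concrete, here is a small sketch (my own code, not the post’s; the star case is left out so every language stays finite) that builds the expression above and lists the words it matches:

```haskell
-- A tiny AST for star-free regular expressions and the finite
-- language each expression denotes.
data Re = Sym Char      -- a single letter
        | Alt Re Re     -- choice, written + in the post
        | Seq Re Re     -- concatenation, written . in the post
  deriving Show

-- All words matched by an expression.
lang :: Re -> [String]
lang (Sym c)   = [[c]]
lang (Alt a b) = lang a ++ lang b
lang (Seq a b) = [x ++ y | x <- lang a, y <- lang b]

-- a(bd + ce)
example :: Re
example = Seq (Sym 'a') (Alt (Seq (Sym 'b') (Sym 'd'))
                             (Seq (Sym 'c') (Sym 'e')))
-- lang example == ["abd","ace"]
```

Evaluating lang example yields exactly the two words abd and ace, matching the two ways of reading the expression.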
Now what does it “mean”
Let’s assume we have a graph and the edges are labeled with letters, so there is an edge between two nodes labeled with $a$, one labeled $b$, and so on, like in this graph.
Then the regular expression above, $a(bd+ce)$, describes all paths from $N1$ to $N5$. We first have to take the edge from $N1$ to $N2$, which is labeled $a$, then either the edge labeled $b$ followed by the edge labeled $d$, or the edges labeled $c$ and $e$.
I think you see how this is useful for our case. So let’s go on and define a semiring.
The Regular Expression $*$-Semiring
Actually, regular expressions form a family of $*$-semirings, one for each underlying alphabet. So let’s call our alphabet $\Sigma$, which is the usual name for the alphabet in computer science. Then
$(\Sigma, +, \cdot, \emptyset, \epsilon)$ is a $*$-semiring with the rules above and two corner cases which give rise to the following two (standard) definitions:
1. $\emptyset + x = x + \emptyset = x$
2. $\emptyset \cdot x = x \cdot \emptyset = \emptyset$
Let’s do a quick check if our properties hold.
1. $a\oplus b = b\oplus a$ since $+$ is the same as or this holds.
2. $(a\oplus b)\oplus c = a\oplus (b\oplus c)$, by almost the same argument, if we first decide between $a$ or $b$ and then between the result of that decission or $c$ it’s the same as deciding the other way around.
3. $a \oplus \mathbf{0}=\mathbf{0}\oplus a=a$, this is by the definition given above.
4. $(a \otimes b)\otimes c = a \otimes (b \otimes c)$ the concatenation of $a$ and $b$ yields $ab$ that concatenated with $c$ yields $abc$ which is the same as first concatenating $b$ and $c$ to $bc$ and then prepending $a$.
5. $a \otimes \mathbf{1} = \mathbf{1} \otimes a = a$ concatenating the empty word to anything will not change anything, so this is ok too.
6. $a \otimes \mathbf{0} = \mathbf{0} \otimes a = \mathbf{0}$ that’s by the definition above.
7. $a \otimes (b \oplus c) = (a\otimes b) \oplus (a\otimes c)$, this is simply either first doing $a$ and then deciding between $b$ or $c$, or first deciding to go $ab$ or $ac$, so this holds true too.
8. $(a \oplus b) \otimes c = (a\otimes c) \oplus (b\otimes c)$, almost the same as the one before: either decide between $a$ and $b$ and then do $c$, or decide to do $ac$ or $bc$; it should be the same.
Ok, we are satisfied, this is a semiring. Now for the $*$ part, which this time is a little more interesting than before.
If we want to describe all ways from one node to another, there might be loops on the way. For example let’s modify the graph from above slightly that it looks like this.
Now on a way from $N1$ to $N5$ we would be allowed to take the loop labeled $f$ arbitrarily often, or, writing it in the syntax of regular expressions, $a(bd+cf^*e)$. We have to take this into account for our $*$-semiring definition of regular expressions. So let’s do it.
Defining $*$s
I have a small problem since now we have two different $*$ operations, one from the semiring and one from the regular expressions. To keep those apart I will use $*_{sr}$ for the semiring and $*_{re}$ for the regular expression star. I hope this isn’t too confusing for you.
Let us define a $*$.
$x^{*_{sr}}=\left\{\begin{array}{ll}\epsilon & \text{if } x = \emptyset\\\epsilon & \text{if } x = \epsilon\\y^{*_{sr}} & \text{if } x = y^{*_{re}}\\x^{*_{re}} & \text{otherwise}\end{array}\right.$
Even though this is a bit tricky I think you are by now fully capable of checking that this will actually give us a valid $*$-semiring.
Now for a Haskell implementation. Since I don’t want to conflict with other operators I use Or, Concat and Star instead of $+, \cdot$ and $*$.
data StarSemiringExpression a =
    Var a
  | Or (StarSemiringExpression a) (StarSemiringExpression a)
  | Concat (StarSemiringExpression a) (StarSemiringExpression a)
  | Star (StarSemiringExpression a)
  | None
  | Empty
newtype RE a = RE (StarSemiringExpression a)
re :: a -> RE a
re = RE . Var
instance Semiring (RE a) where
  zero = RE None
  one  = RE Empty

  RE None     <+> x           = x
  x           <+> RE None     = x
  RE Empty    <+> RE Empty    = RE Empty
  RE Empty    <+> RE (Star a) = RE (Star a)
  RE (Star a) <+> RE Empty    = RE (Star a)
  RE x        <+> RE y        = RE (Or x y)

  RE Empty    <.> x           = x
  x           <.> RE Empty    = x
  RE None     <.> _           = RE None
  _           <.> RE None     = RE None
  RE x        <.> RE y        = RE (Concat x y)
instance StarSemiring (RE a) where
  star (RE None)     = RE Empty
  star (RE Empty)    = RE Empty
  star (RE (Star x)) = star (RE x)
  star (RE x)        = RE (Star x)
The helper function re in combination with leaving out the actual alphabet for our regular expression allows us to use any type, as I’ve shown above, implicitly creating the alphabet from the “letters” that are used in the regular expression. (For mathematically inclined readers: Yes, this is only ok as long as we only use finite regular expressions, but I would doubt that you can write an infinite one ;-) )
There is one thing left to do to put this to use, we need another small helper function:
reGraph :: (Ix i) => Matrix i (Maybe a) -> Matrix i (RE a)
reGraph = fmap (maybe zero re)
This simply takes a matrix where there might be an entry at any index and transforms each entry into a regular expression of one letter, taking the entry itself as the letter; if there is no entry present at that point, it just uses $\emptyset$ (the zero of the semiring).
What remains to clear up is what our $*$ operation on matrices does now. What we saw in the post from last week is that the algorithm minimized or maximized some property we calculated. This time it’s a little different. Operations like $max$, $min$ and $||$ “discard” one of their operands, but the $+$ operation from regular expressions doesn’t: it creates an alternative. So for any node we can use as an intermediate, what we get is “Either use the path we already know OR the path we can construct by using the intermediate node we are currently testing.” Now there might not be a path leading from our start to our target node using the specific intermediate node; then we get back a $\emptyset$ as an alternative path, which is the only value that is actually discarded by $+$. So what we get in the end is a regular expression specifying all paths we can take to get from a start to a target node.
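The 7-line Matrix implementation from the earlier posts isn’t reproduced in this excerpt, so here is a minimal reconstruction of the idea (my own sketch, using plain lists of lists instead of the blog’s Matrix type): for every intermediate node $k$, the entry $a_{ij}$ is replaced by $a_{ij} + a_{ik}\cdot (a_{kk})^*\cdot a_{kj}$. Instantiated at the Bool semiring, the loop computes reachability; plugging in the RE instance above instead yields the all-paths regular expressions.

```haskell
infixl 6 <+>
infixl 7 <.>

class Semiring a where
  zero, one    :: a
  (<+>), (<.>) :: a -> a -> a

class Semiring a => StarSemiring a where
  star :: a -> a

-- The simplest instance: Bool, where <+> is "or" and <.> is "and".
instance Semiring Bool where
  zero = False
  one  = True
  (<+>) = (||)
  (<.>) = (&&)

instance StarSemiring Bool where
  star _ = True   -- 1 + x . x* is always true for Bool

-- For every intermediate node k: a_ij := a_ij + a_ik . (a_kk)* . a_kj
closure :: StarSemiring a => [[a]] -> [[a]]
closure m0 = foldl step m0 [0 .. n - 1]
  where
    n = length m0
    step m k =
      [ [ (m !! i !! j) <+> ((m !! i !! k) <.> star (m !! k !! k) <.> (m !! k !! j))
        | j <- [0 .. n - 1] ]
      | i <- [0 .. n - 1] ]
```

On the edge matrix of the chain $N1 \rightarrow N2 \rightarrow N3$, closure [[False,True,False],[False,False,True],[False,False,False]] evaluates to [[False,True,True],[False,False,True],[False,False,False]]: node 1 now also reaches node 3.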
Oh, by the way, with this we implemented another well-known algorithm with the same 7 lines of code I presented in the last blog post. It’s called the McNaughton-Yamada algorithm, or the Kleene construction, and is taught to probably every computer science major on the planet. (And forgotten about 5 minutes later ;-) )
Wrapping up
In the last blog post we started to put our $*$-semiring knowledge to use on graphs and found that the same 7 lines of code got us various properties, depending on which semiring we plugged in. This blog post showed how we can describe paths between nodes and how we can get all paths between two given nodes. Again with the very same 7 lines of code. What remains is to put both of those together to get all paths exhibiting a given property, and that is what we’ll do in the next installment.
https://crypto.stackexchange.com/questions/20974/proving-the-existence-of-a-pseudorandom-function | # Proving the existence of a pseudorandom function
I've been reading the Introduction to Modern Cryptography book by Katz and Lindell as part of my own learning and have come across this exercise which I am not sure how to approach. The exercise is: (exercise 3.8)
Prove unconditionally the existence of an efficient pseudorandom function $F:\{0,1\}^* \times \{0,1\}^* \mapsto \{0,1\}^*$ where the input-length is logarithmic in the key-length (i.e., $F(k,x)$ is defined only when $|x| = log |k|$, in which case $|F(k,x)| = |k|$).
There is also a hint which states that you should use the fact that any random function is also pseudorandom.
This is my initial train of thought:
We require the pseudorandom function to be indistinguishable from a function chosen uniformly at random from the set of functions that map $log|k|$ bit strings to $|k|$ bit strings (let's say this set is called $Func_{log\,|k|\, \mapsto |k|}$). I'm guessing that we need to work out how many functions are in this set in order to work out the probability of picking a random function, $f$, from it.
I know that the set of functions $Func_{n \mapsto n}$ mapping $n$ bit strings to $n$ bit strings contains $2^{n*2^n}$ functions. However my first obstacle is calculating how many functions are in $Func_{log\,|k|\, \mapsto |k|}$ since the functions in this set are not bijective as they are in $Func_{n \mapsto n}$.
If I could calculate this value then I would approach the rest of the problem by calculating the amount of possible pseudorandom functions (clearly given by $|k|$ since $k$ is chosen uniformly at random). I was then hoping, if there was a similar number of functions in $Func_{log\,|k|\, \mapsto |k|}$ (although I speculate there is way more than $|k|$ functions in this set), then eventually try to show that it would be hard for any ppt distinguisher to tell between the pseudorandom function and the randomly chosen one.
I have no idea if this is along the right line and I also don't really know how to bring the hint in to play. All I can think is that it may turn out to be easier to prove that $F$ is indistinguishable from another pseudorandom function which also happens to have been chosen at random.
If anyone could provide a hint as to how to calculate the amount of functions in $Func_{log\,|k|\, \mapsto |k|}$ or pointers for how to approach this then that would be great. As I said, I am doing the exercises for my own good so I'm not massively keen on being given a full solution straight away.
• Is this the exact wording of the exercise? – Guut Boy Dec 24 '14 at 0:49
• Btw. let $n = |k|$ then there are $2^{n^2}$ functions from $log(n)$ to $n$ bits (where log is taken to be base 2). To see this note that all elements in $\{0,1\}^{log(n)}$ can be mapped to $2^n$ different values (all the strings in $\{0,1\}^n$). There are $n$ distinct elements in $\{0,1\}^{log(n)}$ so you have $\Pi^{n}_{i = 1}2^n = (2^n)^n = 2^{n^2}$ possible functions. – Guut Boy Dec 24 '14 at 1:09
• @Guut Boy - Thanks that makes sense! Yes it is the exact wording. – Alex Dec 24 '14 at 9:47
Though this is a 4-year-old topic, it seems the following should work:
We can construct a function $F_k(x)$ with output length $l_{out}(n)=l_{key}(n)/2^{l_{in}(n)}=n/2^{O(\log n)}$.
Divide the key $k$ into $2^{O(\log n)}$ blocks of equal length, denoted by $k_i$ with $i=1,2,\dots, 2^{O(\log n)}$. Because $k$ is uniformly distributed in $\{0,1\}^n$, so is each $k_i$.
Then $F_k(x) = k_x$ (interpreting $x$ as a block index) is the pseudorandom function.
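A minimal sketch of this block-indexing construction in Python (bit strings as Python `str`; the function and variable names are mine, and note the block length $n/2^{|x|}$ follows this answer's relaxed output length, not the exercise's $|F(k,x)| = |k|$):

```python
import math

def prf(k: str, x: str) -> str:
    """F_k(x) = k_x: the block of the key k selected by the index x."""
    n = len(k)
    assert len(x) == int(math.log2(n)), "input length must be log2 of key length"
    num_blocks = 2 ** len(x)          # one block per possible input
    block_len = n // num_blocks       # the answer's l_out(n) = n / 2^{|x|}
    i = int(x, 2)                     # interpret x as a block index
    return k[i * block_len:(i + 1) * block_len]
```

Because each key bit lands in exactly one block, a uniformly random key makes the answers $k_x$ uniform and independent across all inputs, so on its (tiny) domain this $F$ is not merely pseudorandom but a truly random function — which is exactly the hint's point.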
Preface: of course the following tests alone are not unconditional tests. See below for clarification.
Interesting post Alex. Dan Boneh of Stanford, in the Coursera classes Cryptography I and II, discusses statistical tests of PRGs (which are related to PRFs) in terms of an algorithm of the following form:
A statistical test is an algorithm $A$ that takes a bit string $x \in \{0,1\}^n$ and outputs $0$ or $1$, where $0$ = not random and $1$ = random. As Katz also discusses, PRFs are indistinguishable from a truly random function.
Examples include $A(x) = 1$ iff (if and only if) the number of $0$'s in the given string $x$ and the number of $1$'s in $x$ are not very different: $|\#0(x) - \#1(x)| \le 10\sqrt{n}$.
Here is a second example from Boneh: $A(x) = 1$ iff the count of the two-bit pattern $00$ in $x$ is close to its expected value: $|\#00(x) - n/4| \le 10\sqrt{n}$. Here $n/4$ is just the expected count under the uniform distribution, since each two-bit pattern occurs with probability 25%.
In Boneh's third example we now see logs:
$A(x) = 1$ iff $\text{max-run-of-0}(x) \le 10 \log_2(n)$
In order to begin to actually test for a secure PRF/PRG we must look at the concept of advantage:
Where $G: K \rightarrow \{0,1\}^n$ is a PRG and $A$ a statistical test on $\{0,1\}^n$, we can define the following:
$\mathrm{Adv}_{\mathrm{PRG}}[A,G] = \left|\Pr_{k \leftarrow K}[A(G(k)) = 1] - \Pr_{r \leftarrow \{0,1\}^n}[A(r) = 1]\right|$, i.e. the gap between how often the test says "random" on an output of the generator (over a uniformly random seed from the seed space) and how often it says "random" on a truly random string.
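To make the definition concrete, here is a sketch (my own, not from Boneh's course; all names are made up) that empirically estimates the advantage of the bit-balance test against a transparently bad "generator":

```python
import math
import random

def balance_test(x: str) -> int:
    """A(x) = 1 iff |#0(x) - #1(x)| <= 10*sqrt(n)."""
    n = len(x)
    zeros = x.count("0")
    return 1 if abs(zeros - (n - zeros)) <= 10 * math.sqrt(n) else 0

def estimate_advantage(gen, test, n, trials=1000, seed=0):
    """Empirical |Pr[test(gen)=1] - Pr[test(random)=1]| over many trials."""
    rng = random.Random(seed)
    pr_gen = sum(test(gen(rng, n)) for _ in range(trials)) / trials
    pr_rand = sum(test("".join(rng.choice("01") for _ in range(n)))
                  for _ in range(trials)) / trials
    return abs(pr_gen - pr_rand)

# An all-zeros "generator" is caught with advantage close to 1.
adv = estimate_advantage(lambda rng, n: "0" * n, balance_test, n=256)
```

A secure PRG would instead give a negligible estimated advantage for every efficient test, which of course no finite experiment can prove — this only illustrates the quantity being bounded.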
https://www.sierrachart.com/index.php?page=doc/StudiesReference.php&ID=475&Name=Moving_Average_-_Arnaud_Legoux | # Technical Studies Reference
### Moving Average - Arnaud Legoux
This study calculates and displays an Arnaud Legoux Moving Average (ALMA) of the data specified by the Input Data Input.
Let $$X$$ be a random variable denoting the Input Data, and let $$X_j$$ be the value of the Input Data at Index $$j$$. Let the Inputs Length, Sigma, and Offset be denoted as $$n$$, $$\sigma$$, and $$k$$, respectively. Then we denote the Moving Average - Arnaud Legoux at Index $$t$$ for the given Inputs as $$ALMA_t(X, n,\sigma, k)$$, and we compute it for $$t \geq n - 1$$ as follows.
$$\displaystyle{ALMA_t(X, n, \sigma, k) = \frac{\sum_{j = 0}^{n - 1}\exp\left(-\frac{(j - \lfloor k(n - 1) \rfloor)^2}{2n^2/\sigma^2}\right) \cdot X_{t - n + 1 + j}}{\sum_{j = 0}^{n - 1}\exp\left(-\frac{(j - \lfloor k(n - 1) \rfloor)^2}{2n^2/\sigma^2}\right)}}$$
For an explanation of the Sigma ($$\Sigma$$) notation for summation, refer to our description here.
For an explanation of the Floor Function ($$\lfloor \cdot \rfloor$$), refer to our description here.
ALMA is a weighted moving average with Gaussian weights. It is advertised as a Gaussian filter; however, caution should be exercised in this interpretation. The Input $$\sigma$$ does not play the role of the standard deviation of the Gaussians. Rather, the standard deviation is determined by $$\frac{n}{\sigma}$$. The mean, or center, of each Gaussian is determined by $$\lfloor k(n - 1) \rfloor$$.
#### Inputs
• Input Data
• Length
• Sigma: This Input controls the width of the Gaussian distribution of the weights.
• Offset: This Input controls the center of the Gaussian distribution of the weights. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9443122744560242, "perplexity": 680.8219403732461}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662558030.43/warc/CC-MAIN-20220523132100-20220523162100-00084.warc.gz"} |
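As a concrete illustration of the formula above (this is not Sierra Chart's code; the function and parameter names, and the commonly quoted defaults of 6 and 0.85, are my own assumptions):

```python
import math

def alma(values, n, sigma=6.0, offset=0.85):
    """Arnaud Legoux Moving Average of the last n values.

    sigma controls the width of the Gaussian weights (std dev = n/sigma);
    offset in [0, 1] shifts their center toward the most recent values.
    """
    if len(values) < n:
        raise ValueError("need at least n values")
    m = math.floor(offset * (n - 1))       # center of the Gaussian
    s = n / sigma                          # its standard deviation
    weights = [math.exp(-((j - m) ** 2) / (2 * s * s)) for j in range(n)]
    window = values[-n:]
    return sum(w * v for w, v in zip(weights, window)) / sum(weights)
```

Because the weights are normalized by their own sum, a constant input series is reproduced exactly — a quick sanity check on any implementation.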
https://socratic.org/questions/if-x-3y-0-and-5x-y-14-then-what-is-6x-4y | Algebra
# If x - 3y = 0 and 5x - y = 14, then what is 6x - 4y?
Notice that adding the two given equations produces exactly the target expression: $6 x - 4 y = \left(x - 3 y\right) + \left(5 x - y\right) = 0 + 14 = 14$.
http://math.stackexchange.com/questions/11384/what-is-the-order-of-the-set-of-distinct-up-to-similarity-nxn-matrices-over-r | # What is the order of the set of distinct (up to similarity) nxn matrices over R?
What is the order of the set of distinct (up to similarity) $n \times n$ matrices over $\mathbb{R}$ with determinant equal to some non-zero scalar... say 6? (e.g. countable, uncountable, etc.)
The set of matrices over $\mathbb{R}$ is uncountable. So is the order of the set consisting of classes of matrices with the same determinant. In each of those classes we have further subsets, the equivalence classes formed by grouping similar matrices. Each equivalence class of similar matrices represents one linear transformation expressed in terms of all possible bases of $\mathbb{R}^n$, so the size of each equivalence class is also uncountable.
What I'm not certian about is the size of the set of "different" transformations that have the same determinant. Is it uncountable too?
What can I put it in a correspondence with to show this?
Apologies if this is poorly worded-- let me know if there is a better way to ask this question.
No apologies necessary. I think the question is very clearly worded. Minor nitpick: the similarity class of a scalar multiple of the identity matrix is not uncountable. – Jonas Meyer Nov 22 '10 at 21:01
It has the same cardinality as $\mathbb{R}$ if $n\gt 1$. Consider diagonal matrices with entries $(x,6/x,1,1,\ldots,1)$, $x\neq0$. Two such are similar only in a situation where $\{x,6/x\}=\{y,6/y\}$, hence the conclusion on cardinality. (Of course $6$ is not special.)
(The only additional detail worth mentioning is that ${}|{\mathbb R}|$ is an upper bound, and so this argument gives equality. Asaf's answer addresses this detail.) – Andres Caicedo Nov 23 '10 at 2:34
Asaf's answer was deleted. Anyway, just in case: The set of $n\times n$ matrices with real entries is obviously in bijection with ${\mathbb R}^{n^2}$, which has the same size as ${\mathbb R}$. This gives the upper bound. – Andres Caicedo Nov 23 '10 at 6:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9693995118141174, "perplexity": 130.04569496294235}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645339704.86/warc/CC-MAIN-20150827031539-00345-ip-10-171-96-226.ec2.internal.warc.gz"} |
https://blog.pollithy.com/general/LOTUS | # LOTUS
by Daniel Pollithy
If you want to calculate the expected value of a continuous random variable which was transformed by a monotonic function, then the law of the unconscious statistician provides a convenient shortcut.
## Transforming a random variable
We have a continuous random variable $X$ with a probability density function $f_{X}(x)$. This could for example be our last knowledge about the position of a robot.
Now, we apply a continuous, monotonic function $g(\cdot)$ to the random variable. This could be a simple motion model for the robot.
If we want to calculate $E[g(X)]$, LOTUS tells us that we don't have to derive the density of $Y = g(X)$ first; instead we can write:
$E[g(X)] = \int_{-\infty}^{+\infty}{g(x) \cdot f_{X}(x) dx}$
## Proof
For the proof, let's assume $g$ is strictly increasing (decreasing would also be possible) and differentiable, with a positive derivative for every $x$. This makes $g(\cdot)$ bijective, and therefore $g$ is invertible. We call its inverse $g^{-1}$, with $g^{-1}: Y \rightarrow X$.
### Prepare change of variable
The derivative of a function and its inverse are related. They are reciprocal:
$\frac{dx}{dy} \cdot \frac{dy}{dx} = 1$ $\frac{dx}{dy} = \frac{1}{\frac{dy}{dx}}$
First we replace $y$ with $g(x)$ on the right side:
$\frac{dx}{dy}= \frac{1}{\frac{d g(x)}{dx}}$
Second we replace $x$ with $g^{-1}(y)$ on the right side:
$\frac{dx}{dy}= \frac{1}{\frac{d g(g^{-1}(y))}{dx}}$
Multiplying with dy on both sides:
$dx = \frac{1}{\frac{d g(g^{-1}(y))}{dx}} dy$
### Expected value with exchanged variable
$\int_{-\infty}^{+\infty}{g(x) \cdot f_{X}(x) dx}$
And now we can switch from x to y. First, replace g(x) with y. Second, replace $f_{X}(x)$ with $f_{X}(g^{-1}(y))$. And third, replace dx with the right hand side from “dx = …” above:
$\int_{-\infty}^{+\infty}{g(x) \cdot f_{X}(x) dx} = \int_{-\infty}^{+\infty}{y \cdot f_{X}(g^{-1}(y)) \frac{1}{\frac{d g(g^{-1}(y))}{dx}} dy}$
We have now switched to integrating over y.
### Cumulative density function
$F_{Y}(y) = Pr(Y \le y)$
Apply g:
$F_{Y}(y) = Pr(g(X) \le y)$
Apply $g^{-1}$ on both sides
$F_{Y}(y) = Pr(X \le g^{-1}(y))$ $F_{Y}(y) = F_{X}(g^{-1}(y))$
### Derivative of CDF
Now we can take the derivative of $F_{Y}(y)$ with respect to $y$ in order to get $f_{Y}(y)$. The chain rule is used. Note that this is the place where we need the derivative of $g^{-1}(y)$, which is $\frac{1}{\frac{d g(g^{-1}(y))}{dx}}$!
Apply the CDF solution from above:
$f_{Y}(y) = \frac{d}{dy} F_{Y}(y) = \frac{d}{dy} F_{X}(g^{-1}(y))$
Apply the chain rule of derivation:
$= f_{X}(g^{-1}(y)) \cdot \frac{d}{dy} g^{-1}(y) = f_{X}(g^{-1}(y)) \cdot \frac{1}{\frac{d g(g^{-1}(y))}{dx}}$
### Plug-in
Two sections before, we got to this point:
$E[g(X)] = \int_{-\infty}^{+\infty}{y \cdot f_{Y}(y) dy} = \int_{-\infty}^{+\infty}{y \cdot f_{X}(g^{-1}(y)) \frac{1}{\frac{d g(g^{-1}(y))}{dx}} dy}$
Looking at the last formula from the section “Prepare change of variable”, we find that $f_{X}(g^{-1}(y)) \cdot \frac{1}{\frac{d g(g^{-1}(y))}{dx}} dy$ is the same as $f_{X}(g^{-1}(y)) dx$.
$E[g(X)] = \int_{-\infty}^{+\infty}{y \cdot f_{X}(g^{-1}(y)) dx}$
Per definition $g^{-1}(y)$ can be replaced by x.
$E[g(X)] = \int_{-\infty}^{+\infty}{y \cdot f_{X}(x) dx}$
And also $y$ can be replaced by g(x).
$E[g(X)] = \int_{-\infty}^{+\infty}{g(x) \cdot f_{X}(x) dx}$
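As a quick numerical sanity check of the result (my own example, not part of the proof): take $X \sim \mathrm{Uniform}(0,1)$ and $g(x) = x^2$. LOTUS gives $E[g(X)] = \int_0^1 x^2 \cdot 1 \, dx = 1/3$, and a Monte Carlo estimate of $E[g(X)]$, which never derives the density of $Y = g(X)$, should agree:

```python
import random

def lotus_mc(g, sample_x, trials=100_000, seed=0):
    """Monte Carlo estimate of E[g(X)]: sample X directly and average g(X),
    without ever working out the density of Y = g(X)."""
    rng = random.Random(seed)
    return sum(g(sample_x(rng)) for _ in range(trials)) / trials

estimate = lotus_mc(lambda x: x * x, lambda rng: rng.random())
# should land close to the LOTUS integral, 1/3
```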
https://brilliant.org/problems/another-controversial-question-2/ | # Three Equations But Two Unknowns
Algebra Level 4
$\begin{cases} x^4- y^4 = 24 \\ x^2 + y^2 = 6 \\ x + y = 4 \end{cases}$
If $x$ and $y$ satisfy the above system of equations, find $x-y$.
http://mathhelpforum.com/calculus/60434-please-check-my-critical-points-print.html | # Please check my critical points
• November 19th 2008, 02:52 AM
koalamath
Find the critical points of the equation $xy-7x^2y-9xy^2$
The critical points I found in increasing lexicographic order are
(0,0), (0,1/9), (1/21,1/27) and (1/7,0)
at (0,0) it is undefined
at (0,1/9) and (1/7,0) you have saddle points
at (1/21,1/27) you have a local max.
Is this correct?
Thank you
• November 19th 2008, 03:00 AM
mr fantastic
Quote:
Originally Posted by koalamath
Find the critical points of the equation $xy-7x^2y-9xy^2$
The critical points I found in increasing lexicographic order are
(0,0), (0,1/9), (1/21,1/27) and (1/7,0)
at (0,0) it is undefined Mr F says: ?? Do you mean the test fails? ${\color{red}xy-7x^2y-9xy^2}$ is certainly not undefined at (0, 0) ....
at (0,1/9) and (1/7,0) you have saddle points
at (1/21,1/27) you have a local max.
Is this correct?
Thank you
..
• November 19th 2008, 03:20 AM
koalamath
fxx=-14y
fxx(0,0)=0
doesn't that make it undefined?
• November 19th 2008, 03:38 AM
mr fantastic
Quote:
Originally Posted by koalamath
fxx=-14y
fxx(0,0)=0
doesn't that make it undefined?
Why? What part of the second partial derivative test says this? How can $z = xy-7x^2y-9xy^2$ be undefined when x = 0 and y = 0? z = 0, a perfectly well defined value.
In fact, there's a saddle point at (0, 0, 0). | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9139805436134338, "perplexity": 2727.5208885469583}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393518.22/warc/CC-MAIN-20160624154953-00035-ip-10-164-35-72.ec2.internal.warc.gz"} |
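To settle the thread numerically, here is a sketch in exact arithmetic (my own check; the partial derivatives are computed by hand from $f = xy-7x^2y-9xy^2$, and points are classified with the discriminant $D = f_{xx}f_{yy} - f_{xy}^2$):

```python
from fractions import Fraction as F

def grad(x, y):
    # f = xy - 7x^2 y - 9x y^2
    return (y - 14*x*y - 9*y**2,     # f_x
            x - 7*x**2 - 18*x*y)     # f_y

def discriminant(x, y):
    fxx, fyy, fxy = -14*y, -18*x, 1 - 14*x - 18*y
    return fxx*fyy - fxy**2

points = [(F(0), F(0)), (F(0), F(1, 9)), (F(1, 7), F(0)), (F(1, 21), F(1, 27))]
assert all(grad(x, y) == (0, 0) for x, y in points)  # all four are critical

labels = {}
for x, y in points:
    d = discriminant(x, y)
    if d < 0:
        labels[(x, y)] = "saddle"
    elif d > 0:
        labels[(x, y)] = "local max" if -14*y < 0 else "local min"
```

This gives $D = -1$ at (0,0), (0,1/9) and (1/7,0) — all saddles, so the second derivative test is not inconclusive at the origin — and $D = 1/3 > 0$ with $f_{xx} < 0$ at (1/21, 1/27), a local maximum, agreeing with Mr Fantastic's last remark.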
https://www.physicsforums.com/threads/work-done-by-an-expanding-gas.735198/ | # Work done by an expanding gas
1. Jan 27, 2014
### moogull
Today in an engineering thermodynamics lecture, the professor gave an example of a gas doing work. We had a cylinder full of helium at a pressure of something like 200kPa absolute and the valve was opened so that the gas would flow out against the atmospheric pressure until the pressures were equal. Also the cylinder was assumed to be in thermal equilibrium with its surroundings so the temperature of the gas was equal to the temperature of the ambient air. However, the way he calculated the work perturbed me. He said that this was an isobaric process because the gas was expanding against a constant atmospheric pressure. I was under the assumption that an isobaric process means that the working fluid stays at constant pressure throughout the process which is not the case in this expansion. And in this case, the gas pressure is dropping as it leaves the cylinder.
The professor then proceeded to calculate the work as W = Patm*ΔV. But I don't think that is right and that simple.
Am I correct, or is the professor? Can someone please return me to sanity?
2. Jan 27, 2014
### Jano L.
You are right, the helium expands and its pressure decreases from 200kPa to atmospheric 100 kPa. Helium gas does not undergo isobaric process in the common sense of the word (atmosphere does).
Your professor is right this time - the work helium does on the atmosphere is indeed (approximately) W = Patm*ΔV, where ΔV is the volume of the helium gas outside the cylinder just after it escapes from it. After a while, the helium is heated by the atmosphere and expands even more and does further work, but this work is neglected in the above.
3. Jan 27, 2014
### moogull
Thanks for the response Jano,
If the process is not isobaric, then why is the work not calculated using an integral and instead W = Patm*ΔV. I'm fairly certain he took the system as a control mass/closed system.
4. Jan 27, 2014
### Jano L.
You can write the work as integral, but because the pressure of the atmosphere can be assumed constant during the process, the result is just $P_{atm} \Delta V$.
5. Jan 27, 2014
### moogull
Okay, so in this case, why is the pressure of the atmosphere the pressure used to calculate work and not the pressure of the working fluid?
6. Jan 27, 2014
### moogull
What I mean to say is: since this is not an isobaric process, why is the work calculated using a pressure that is assumed not to change?
edit: Looking at the atmosphere as the working fluid, I agree that the work is given by P_atm*ΔV.
Last edited: Jan 27, 2014
7. Jan 27, 2014
### Staff: Mentor
It depends on what you define as your system. If you define your system as just the gas that remains in the cylinder after equilibration, then that gas has done work on expelling the gas from the cylinder, and the pressure at the interface with the gas that it expelled was not at constant pressure.
If you define your system by surrounding all the helium that was originally inside the cylinder with an imaginary moving boundary, then, throughout this process, different parts of the helium were at different pressures. However, at the imaginary boundary with the surrounding atmospheric air, the pressure was constant (atmospheric). In the first law, you calculate the work done on the surroundings by calculating the integral of the pressure at the interface with the surroundings integrated over the volume change. (See my Blog on my PF home page.) So, in the case of this system, your professor was correct.
Chet
8. Oct 22, 2014
### Odd
Hello
Is it possible then to calculate the same work looking only at the work of the expanding gas inside the cylinder ?
I assume one would have to use d(PV) and then a equation of state for the process.
Odd
9. Oct 22, 2014
### Staff: Mentor
You would just solve it as an isothermal reversible expansion. The real irreversibility occurs within the valve, where the pressure drops from that inside the cylinder to 1 atm.
Chet | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9528428912162781, "perplexity": 509.04612608872486}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864364.38/warc/CC-MAIN-20180622065204-20180622085204-00213.warc.gz"} |
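To put numbers on the two viewpoints discussed above (an illustrative sketch only: the 200 kPa matches the lecture, while the volume and temperature are made-up example values). For an isothermal ideal gas, the helium's total final volume is V2 = P1·V1/Patm, the work done pushing back the atmosphere is Patm·ΔV, and the fully reversible isothermal work is nRT·ln(P1/Patm); the gap between the two is what gets dissipated across the valve.

```python
import math

R = 8.314  # J/(mol K)

def expansion_work(P1, Patm, V1, T):
    """Compare W = Patm*dV with the isothermal reversible work (ideal gas)."""
    V2 = P1 * V1 / Patm              # total helium volume once pressures equalize
    W_atm = Patm * (V2 - V1)         # work on the constant-pressure atmosphere
    n = P1 * V1 / (R * T)            # moles of helium
    W_rev = n * R * T * math.log(P1 / Patm)  # reversible isothermal work
    return W_atm, W_rev

W_atm, W_rev = expansion_work(P1=200e3, Patm=100e3, V1=0.05, T=293.0)
# W_atm = 5.0 kJ, W_rev ≈ 6.93 kJ; the difference is lost in the valve
```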
https://infoscience.epfl.ch/record/202277 | Infoscience
Journal article
# Modelling Small-Scale Drifting Snow with a Lagrangian Stochastic Model Based on Large-Eddy Simulations
Observations of drifting snow on small scales have shown that, in spite of nearly steady winds, the snow mass flux can strongly fluctuate in time and space. Most drifting snow models, however, are not able to describe drifting snow accurately over short time periods or on small spatial scales as they rely on mean flow fields and assume equilibrium saltation. In an attempt to gain understanding of the temporal and spatial variability of drifting snow on small scales, we propose to use a model combination of flow fields from large-eddy simulations (LES) and a Lagrangian stochastic model to calculate snow particle trajectories and so infer snow mass fluxes. Model results show that, if particle aerodynamic entrainment is driven by the shear stress retrieved from the LES, we can obtain a snow mass flux varying in space and time. The obtained fluctuating snow mass flux is qualitatively compared to field and wind-tunnel measurements. The comparison shows that the model results capture the intermittent behaviour of observed drifting snow mass flux yet differences between modelled turbulent structures and those likely to be found in the field complicate quantitative comparisons. Results of a model experiment show that the surface shear-stress distribution and its influence on aerodynamic entrainment appear to be key factors in explaining the intermittency of drifting snow. 
| {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9720584750175476, "perplexity": 1662.7757693393685}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541396.20/warc/CC-MAIN-20161202170901-00391-ip-10-31-129-80.ec2.internal.warc.gz"} |
https://math.libretexts.org/TextMaps/Applied_Mathematics/Book%3A_Introduction_to_the_Modeling_and_Analysis_of_Complex_Systems_(Sayama)/3%3A_Basics_of_Dynamical_Systems | $$\newcommand{\id}{\mathrm{id}}$$ $$\newcommand{\Span}{\mathrm{span}}$$ $$\newcommand{\kernel}{\mathrm{null}\,}$$ $$\newcommand{\range}{\mathrm{range}\,}$$ $$\newcommand{\RealPart}{\mathrm{Re}}$$ $$\newcommand{\ImaginaryPart}{\mathrm{Im}}$$ $$\newcommand{\Argument}{\mathrm{Arg}}$$ $$\newcommand{\norm}[1]{\| #1 \|}$$ $$\newcommand{\inner}[2]{\langle #1, #2 \rangle}$$ $$\newcommand{\Span}{\mathrm{span}}$$
3: Basics of Dynamical Systems
• 3.1: What Are Dynamical Systems?
Dynamical systems theory is the very foundation of almost any kind of rule-based model of complex systems. It considers how systems change over time, not just static properties of observations.
• 3.2: Phase Space
A phase space of a dynamical system is a theoretical space where every state of the system is mapped to a unique spatial location. The number of state variables needed to uniquely specify the system’s state is called the degrees of freedom in the system. You can build a phase space of a system by having an axis for each degree of freedom, i.e., by taking each state variable as one of the orthogonal axes.
• 3.3: What Can We Learn?
You can tell from the phase space what will eventually happen to a system’s state in the long run. For a deterministic dynamical system, its future state is uniquely determined by its current state (hence, the name “deterministic”). Trajectories of a deterministic dynamical system will never branch off in its phase space (though they could merge), because if they did, that would mean that multiple future states were possible, which would violate the deterministic nature of the system. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8272549510002136, "perplexity": 260.60703754895144}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267867644.88/warc/CC-MAIN-20180625092128-20180625112128-00048.warc.gz"} |
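To make the determinism point concrete (a minimal sketch; the logistic map is my example, not from the text): a deterministic system's trajectory is a function of its current state alone, so two runs started from the same state coincide exactly and can never branch.

```python
def step(state):
    # a simple deterministic dynamical system: the logistic map, r = 3.7
    x = state
    return 3.7 * x * (1.0 - x)

def trajectory(x0, n):
    """Iterate the map n times from initial state x0."""
    xs = [x0]
    for _ in range(n):
        xs.append(step(xs[-1]))
    return xs

# determinism: identical initial states yield identical trajectories
a = trajectory(0.25, 50)
b = trajectory(0.25, 50)
```

Here the phase space is one-dimensional (a single state variable, the interval [0, 1]), so "one axis per degree of freedom" gives just a line.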
https://www.physicsforums.com/threads/general-conditions-for-stokes-theorem.716788/ | # General Conditions for Stokes' Theorem
1. Oct 15, 2013
### Mandelbroth
What is the least restrictive set of conditions needed to utilize the formula $\int\limits_{\Omega}\mathrm{d}\alpha=\int\limits_{\partial\Omega} \alpha$?
2. Oct 16, 2013
### Ben Niehoff
I think the only conditions are those needed to define the integrals (i.e. the same kinds of conditions used to define 1-dimensional integrals). I don't think there are any extra geometrical conditions, provided you're on a differentiable manifold.
You can even relax smoothness of the manifold (and the forms!) if you are careful about using Dirac delta functions. Generally, since forms are things you integrate, they should be thought of as distributions.
The boundary operator can be used in a distributional sense as well. For example, the boundary of a sphere is zero. But it might be helpful to think of a sphere with a point removed, whose boundary is therefore a point; then you can use Stokes' theorem to integrate forms over the sphere.
3. Oct 16, 2013
### Mandelbroth
Could you please expand on this? Thinking of forms as distributions feels foreign, and I don't see where that line of thought would go.
4. Oct 16, 2013
### Ben Niehoff
Expand on it how? Surely you can figure out how to integrate something like
$$\delta(x,y,z) \, dx \wedge dy \wedge dz$$
Do you have a specific question?
5. Oct 16, 2013
### Ben Niehoff
This might be a better example of what I'm talking about. Say we want to find the area of a sphere. The form we want to integrate is
$$\omega = \sin \theta \, d \theta \wedge d \phi$$
Now, the sphere $\Omega$ is a closed surface, so $\partial \Omega = 0$. However, the coordinate patch $\tilde \Omega$ covered by the coordinates $\phi \in (0, 2\pi), \; \theta \in (0, \pi)$ is not a closed surface, and is in fact contractible. We have that $\partial \tilde \Omega$ is the union of the north and south poles of the sphere, and a segment of a great circle that runs between them.
Now, it so happens that
$$\omega = d \big( - \cos \theta \, d \phi \big) = d \alpha$$
so we can use Stokes' theorem. So
$$\int_{\tilde \Omega} \omega = \int_{\partial \tilde \Omega} \alpha = - \int_{\partial \tilde \Omega} \cos \theta \, d \phi$$
To integrate around the "cut" between the north and south poles, we draw a loop around it. On either side there is a vertical part where $d \phi = 0$, and so these parts do not contribute. Then around the north and south poles, there are tiny circles, at which $\cos \theta = \pm 1$ and $\phi$ runs from 0 to $2 \pi$. The tiny circles go opposite directions, so each part contributes positively:
$$- \int_{\partial \tilde \Omega} \cos \theta \, d \phi = 2 \pi + 2 \pi = 4 \pi$$
So you see, if you are careful about how you cut up a manifold, you can apply Stokes' theorem in all sorts of situations.
In this case, we took a closed surface and removed a set of measure zero to turn it into a surface with boundary. The reason this worked is because the form $\omega$ is smooth on the set of measure zero that we removed. If that were not the case (say $\omega$ had a delta-function-like contribution on the "cut"), then you would have to include an extra piece to account for that.
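As a numerical sanity check on this worked example (my addition, not part of the thread), one can approximate the surface integral with a midpoint Riemann sum and compare it with the boundary contribution of 2π + 2π:

```python
import math

# Left side: integral of omega = sin(theta) dtheta ^ dphi over the
# coordinate patch 0 < theta < pi, 0 < phi < 2*pi (midpoint rule).
n = 2000
dtheta = math.pi / n
lhs = sum(math.sin((i + 0.5) * dtheta) for i in range(n)) * dtheta * (2 * math.pi)

# Right side: the two tiny circles around the poles, 2*pi each,
# as computed in the post above.
rhs = 2 * math.pi + 2 * math.pi

assert abs(lhs - rhs) < 1e-3  # both sides give the sphere's area, 4*pi
```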
https://www.physicsforums.com/threads/what-rate-does-the-lift-accelerates-in-the-first-5-sec.846382/ | # What rate does the lift accelerates in the first 5 sec?
1. Dec 3, 2015
### henrco
Hi,
I'm trying to solve this problem and it's driving me a little crazy. Any help greatly appreciated.
Q) A lift travels to the top of a tower through a vertical displacement of 48 m. The total journey takes 17 s. The lift accelerates from rest at a constant rate for the first 5 seconds. Then it moves at constant speed and then decelerates to rest at a constant rate for the last 5 seconds.
What rate does the lift accelerate during the first 5 seconds?
My Attempt:
I have the initial and final velocity, which are both zero (the elevator starts and stops at rest), displacement (48 m) and time (overall time 17 sec and three time intervals of acceleration, constant velocity and deceleration). Clearly the acceleration and deceleration will be the same rate, as they occur over equal time intervals. However, with the information I have I feel I'm unable to use the usual constant acceleration equations.
The average velocity is 48/17 = 2.8 m/s. However I'm not sure how that helps.
To find the acceleration during the first 5 seconds, I have the time and initial velocity but I need the velocity
it reaches at 5 seconds to obtain the acceleration and I just can't work out how to get it.
2. Dec 3, 2015
### PeroK
There are several approaches. Since you've been thinking about average velocity, you might like to try to solve the problem using the average velocity for each stage of the motion. Then you don't need any of the "usual" equations.
3. Dec 3, 2015
### Staff: Mentor
Hi Conal Henry, Welcome to Physics Forums.
Please retain the formatting headers (provided in the edit window) when you post a problem here.
Start with a graphic approach to gain insight. Have you tried making a sketch of velocity versus time? What does the area under a v vs t graph give you?
4. Dec 4, 2015
### henrco
Hi Perok,
1) For stage 1, the first 5 seconds, the acceleration is constant, so I've taken the angle the acceleration makes to be 45 degrees. (This is intuitive and if it's correct, would you mind explaining why?)
The velocity from this is 5 × sin(45°) = 3.5 m/s.
With initial velocity 0, the acceleration a = (3.5 − 0)/5 = 0.7 m/s²
2) Find the average velocity for each stage. Starting with Stage 1, the first 5 seconds. I obviously have the time but not the displacement.
Avg Vel = Displacement / 5.
Work out area of each stage. Stage 1 = ( 5 x v/2 ) Stage 2 = ( 7 x v) Stage 3 = ( 5 x v/2). This all comes to 12 v
We know Area = displacement, therefore: 12v = 48. So v = 4.
Acceleration = (4 − 0)/5 = 0.8 m/s²
My second attempt seems to be correct? As I then worked out the displacement for each stage to be (S1 = 10, S2 = 28 and S3 = 10).
Could you please let know if this is correct?
Also you mentioned that there are several approaches, if you have time would you mind briefly outlining another approach.
I'd like to try to understand this problem from different approaches.
5. Dec 4, 2015
### henrco
Hi gneill,
Point noted, will post future problems using the template. Thank you for your advice regarding the problem.
I replied to another helpful suggestion above and I think the answer below was the direction you were sending me in?
Find the average velocity for each stage. Starting with Stage 1, the first 5 seconds. I obviously have the time but not the displacement.
Avg Vel = Displacement / 5.
Work out area of each stage. Stage 1 = ( 5 x v/2 ) Stage 2 = ( 7 x v) Stage 3 = ( 5 x v/2). This all comes to 12 v
We know Area = displacement, therefore: 12v = 48. So v = 4.
Acceleration = (4 − 0)/5 = 0.8 m/s²
Conal
6. Dec 4, 2015
### Staff: Mentor
Yes, you get to the same result. A graphical depiction to begin can sometimes help. For example:
You know the area must be your displacement, you have the time periods, the only thing you don't have is the maximum velocity Vm. But Vm is easily found given the other information. Acceleration is just the slope of a line in the figure.
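The trapezoid bookkeeping above can be sketched in a few lines of Python (added here for illustration; all numbers come from the problem statement):

```python
# Velocity-time profile: 5 s ramp up, 7 s cruise, 5 s ramp down,
# covering 48 m in total.
t_acc, t_cruise, t_dec = 5.0, 7.0, 5.0
displacement = 48.0

# Area under the v-t graph: (t_acc/2 + t_cruise + t_dec/2) * v_max = displacement
v_max = displacement / (t_acc / 2 + t_cruise + t_dec / 2)  # 4.0 m/s
accel = v_max / t_acc                                      # 0.8 m/s^2

# Stage displacements, as found in the thread: 10 m + 28 m + 10 m
s1 = 0.5 * v_max * t_acc
s2 = v_max * t_cruise
s3 = 0.5 * v_max * t_dec

assert v_max == 4.0 and accel == 0.8
assert (s1, s2, s3) == (10.0, 28.0, 10.0)
```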
7. Dec 4, 2015
### PeroK
You seem to have understood the graphical approach. I'd say that's one thing never to forget: displacement = area under the velocity vs time graph.
Regarding average velocity. For constant acceleration from or to rest, the average velocity is half the final or initial velocity respectively. More generally, if you accelerate from $u$ to $v$ at a constant acceleration, then the average velocity is $(u+v)/2$. You can check that graphically.
Another approach was to note the symmetry of the motion. Although, in this case, that just makes things a little easier. Always look out for symmetry in a problem. As problems get harder, using the symmetry of a problem can make things a lot easier. And I mean a lot!
8. Dec 4, 2015
### henrco
Thank you very much, very helpful.
9. Dec 4, 2015
### henrco
Thank you very much, this was very helpful.
http://math.stackexchange.com/questions/44294/can-one-show-that-sum-n-1n-frac1n-log-n-gamma-leqslant-frac12/44323 | # Can one show that $\sum_{n=1}^N\frac{1}{n} -\log N - \gamma \leqslant \frac{1}{2N}$ without using the Euler-Maclaurin formula?
I would like to prove that $$\sum_{n=1}^N\frac{1}{n} -\log N - \gamma \leqslant \frac{1}{2N}$$ without using the Euler-Maclaurin summation formula. The motivation for this is that I have come very close to doing so (see the answer provided below) but annoyingly have not actually proved the above.
Some may ask why I don't just use the formula. I'm writing a set of analytic number theory notes for my own use and it seems an unwieldy result to introduce and prove, given that the above inequality is all I need, and given that I have gotten so close without using Euler-Maclaurin!
Let $$\gamma_n = \sum_{k=1}^n \frac{1}{k} - \log n.$$ Our goal is to show that $$\gamma_n - \lim_{m \to \infty} \gamma_m \leq \frac{1}{2n}.$$ It is enough to show that, for $n<m$, we have $$\gamma_n - \gamma_m \leq \frac{1}{2n}.$$ This has the advantage of dealing solely with finite quantities.
Now, $$\gamma_n - \gamma_m = \int_{n}^m \frac{dt}{t} - \sum_{k=n+1}^m \frac{1}{k} =\sum_{j=n}^{m-1} \int_{j}^{j+1} \left( \frac{1}{t} - \frac{1}{j+1} \right) \cdot dt .$$
At this point, if I were at a chalkboard rather than a keyboard, I would draw a picture. Draw the hyperbola $y=1/x$ and mark off the interval between $x=n$ and $x=m$. Divide this into $m-n$ vertical bars of width $1$. Each bar stretches up to touch the hyperbola at its right corner. There is a little wedge, bounded by $x=j$, $y=1/(j+1)$ and $y=1/x$. We are adding up the area of each of these wedges.1
Because $y=1/x$ is convex, the area of this wedge is less than that of the right triangle with vertices at $(j,1/(j+1))$, $(j+1, 1/(j+1))$ and $(j,1/j)$. This triangle has base $1$ and height $1/j - 1/(j+1)$, so its area is $(1/2) (1/j - 1/(j+1))$. So the quantity of interest is $$\leq \sum_{j=n}^{m-1} \frac{1}{2} \left( \frac{1}{j} - \frac{1}{j+1} \right) = \frac{1}{2} \left( \frac{1}{n} - \frac{1}{m} \right) \leq \frac{1}{2n}.$$
Of course, this is just a standard proof of Euler-Maclaurin summation, but it is a lot more geometric and easy to follow in this special case.
1 By the way, since this area is positive, we also get the corollary that $\gamma_n - \gamma_m > 0$, so $\gamma_n - \gamma >0$, another useful bound.
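(A quick numerical check of the resulting bound — added here, not part of the original answer; γ is taken at its accepted numerical value.)

```python
import math

EULER_GAMMA = 0.5772156649015329  # accepted numerical value of gamma

def gamma_n(n):
    # The partial quantity gamma_n = H_n - log(n) defined above.
    return sum(1.0 / k for k in range(1, n + 1)) - math.log(n)

for N in (1, 10, 100, 1000):
    diff = gamma_n(N) - EULER_GAMMA
    # Both corollaries: gamma_N - gamma > 0 and gamma_N - gamma <= 1/(2N).
    assert 0.0 < diff <= 1.0 / (2 * N)
```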
(+1) Just to point out a typo: In "Draw the hyperbola y=1/x and mark off the interval between x=n and x=n.", the second n should be an m. – John Bentin Jun 9 '11 at 13:52
It is really annoying. This is exactly the kind of geometric proof I went for, using areas, and I always failed! Thanks for showing how it's done +1. – Sputnik Jun 18 '11 at 10:20
What follows is a variant of the method suggested by Fahad Sperinck, which almost gave the desired bound. Although we obtain a pretty short proof of the inequality, I think that the "right" proof is the one in the post by David Speyer. (A proof based on geometry is "right," as is a combinatorial proof.)
Let us start as Fahad Sperinck did, from $$\int_n^{n+1} \frac{x-[x]}{x^2}\: dx = \log\Big(\frac{n+1}{n}\Big) - \frac{1}{n+1} < \frac{1}{n} - \frac{1}{n+1} -\frac{1}{2n^2} +\frac{1}{3n^3}.$$
Ultimately, we will be summing from $N$ to infinity. If we keep this fact in mind, the chunk $$\frac{1}{n}-\frac{1}{n+1}$$ sums beautifully to $1/N$, and should be left as is. If we could show that the part that is taken away, namely $$\frac{1}{2n^2}-\frac{1}{3n^3}$$ is bigger than $$\frac{1}{2}\left(\frac{1}{n}-\frac{1}{n+1}\right),$$ we would be finished.
Now I will do some unofficial scribbling, don't look. I want to show that $1/2n^2-1/3n^3 \ge 1/2(n)(n+1)$, so I want to show that $(3n-2)/6n^3\ge 1/2n(n+1)$, so I want to show that $(3n-2)/3n^2 \ge 1/(n+1)$, so I want to show that $(3n-2)(n+1) \ge 3n^2$, and this is clearly true if $n \ge 2$, just multiply out the stuff on the left.
Now if I had the energy I would hide my tracks, and have the desired inequality drop out as if by magic.
Comment: Somehow, one acquires the habit of thinking of $n^2$ and $1/n^2$ as "nice" and of $n(n+1)$ and $1/n(n+1)$ as not so nice. In many ways, the opposite is true. Certainly that is the case from the combinatorial point of view.
The calculations in the post were fine, the problem was that of giving away a tiny bit too much. That was, maybe, because the strategy was directed at getting to something that looks like $1/n^2$, which was viewed as tractable and desirable. But $1/n(n+1)$, aka $1/n-1/(n+1)$, arises naturally in the problem, and is much more tractable.
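(Added check: the inequality from the scratch work above, $\frac{1}{2n^2}-\frac{1}{3n^3} \geq \frac{1}{2n(n+1)}$ for $n \geq 2$, can be verified with exact rational arithmetic.)

```python
from fractions import Fraction

def holds(n):
    # 1/(2n^2) - 1/(3n^3) >= 1/(2n(n+1)), checked exactly
    return (Fraction(1, 2 * n * n) - Fraction(1, 3 * n ** 3)
            >= Fraction(1, 2 * n * (n + 1)))

assert not holds(1)                           # fails at n = 1
assert all(holds(n) for n in range(2, 5000))  # holds for n >= 2 (equality at n = 2)
```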
Nice exposition! I am a big fan of Penn and Teller's videos where they do magic tricks while showing you how they are done, and this has the same feel. – David Speyer Jun 9 '11 at 14:41
The beauty, clarity and simplicity of your posts are such an enrichment of this site. Many thanks for all the time and effort you invest in your contributions. – t.b. Aug 10 '11 at 14:05
One can check that $S(N):=\sum_{n=1}^N\frac{1}{n} -\log N - \gamma = \int_N^\infty \frac{x-[x]}{x^2} \: dx$, where $[x]$ is the integer part of $x$. Moreover $$\int_n^{n+1} \frac{x-[x]}{x^2}\: dx = \log\Big(\frac{n+1}{n}\Big) - \frac{1}{n+1} < \frac{1}{n} - \frac{1}{n+1} -\frac{1}{2n^2} +\frac{1}{3n^3}, \qquad (1)$$ by the Taylor series for $\log(1+x)$. But we have that $$n(n+1)(3n-1) = 3n^3 + 2n^2 -n > 3n^3$$ so $$\frac{1}{n(n+1)} < \frac{3n-1}{3n^3} = \frac{1}{n^2} - \frac{1}{3n^3}.$$ Therefore, from equation $(1)$ we find $$S(N) < \sum_{n=N}^\infty \frac{1}{n(n+1)} -\frac{1}{2n^2} +\frac{1}{3n^3} < \sum_{n=N}^\infty \frac{1}{n^2} - \frac{1}{3n^3} -\frac{1}{2n^2} +\frac{1}{3n^3},$$ and so finally, $$S(N) < \frac{1}{2}\sum_{n=N}^\infty \frac{1}{n^2} < \frac{1}{2(N-1)},$$ for all $N \in \mathbb{N}$, by a standard approximation for $\sum \frac{1}{n^2}$.
By the way, does the difference between $1/2(N-1)$ and $1/2N$ really matter? Unless you are heading for hard bounds in the end, I would guess that $1/2N + O(1/N^2)$ is good enough for whatever you want, and you already have that. – David Speyer Jun 9 '11 at 12:11
@David: I'll admit that it doesn't really matter, but it is just nicer to be able to write that something is $O(\frac{1}{N})$ with a simple implied constant like $\frac{1}{2}$, which is actually the best possible constant as well. I guess it was more a matter of elegance for me! – Sputnik Jun 18 '11 at 10:18
https://engineeringlibrary.org/calculators | ## Structural Mechanics
The beam calculator allows for the analysis of stresses and deflections in straight beams.
The 2D Finite Element Analysis (FEA) calculator can be used to analyze any structure that can be modeled with 2D beams.
The bolted joint calculator allows for stress analysis of a bolted joint, accounting for preload, applied axial load, and applied shear load.
The bolt torque calculator can be used to calculate the torque required to achieve the desired preload on a bolted joint.
The bolt pattern calculator allows for applied forces to be distributed over bolts in a pattern.
The lug calculator allows for analysis of lifting lugs under axial, transverse, or oblique loading.
The column buckling calculator allows for buckling analysis of long and intermediate-length columns loaded in compression.
The Mohr's circle calculator provides an intuitive way of visualizing the state of stress at a point in a loaded material.
The stress-strain curve calculator allows for the calculation of the engineering stress-strain curve of a material.
The cross section builder allows for the calculation of properties for a custom cross section.
The stress concentration calculator provides a set of interactive plots for common stress concentration factors.
## Failure Mechanisms
The fracture mechanics calculator allows for fracture analysis of a cracked part.
The fatigue crack growth calculator allows for fatigue crack growth analysis of a cracked part.
## Math
The unit conversion calculator allows for conversion between various units, with a focus on engineering units.
## Systems Engineering
The trade study calculator provides a systematic method for making a decision among competing alternatives.
https://www.hepdata.net/search/?q=title%3A%22photon+collisions%22&page=1&phrases=Single+Differential+Cross+Section | Showing 25 of 36 results
#### Observation of Charmed Mesons in Photon-photon Collisions
The collaboration Bartel, W. ; Becker, L. ; Felst, R. ; et al.
Phys.Lett.B 184 (1987) 288-292, 1987.
Inspire Record 235696
The inclusive production of D*± mesons in single-tagged photon-photon collisions is investigated using the JADE detector at PETRA. D*± mesons are reconstructed through their decay into D0π± where the D0 decays via D0 → Kππ0. The event rate and topology are compared to the expectations of c-quark production in the quark-parton model: γγ → cc̄.
0 data tables match query
#### High $p_T$ Hadron Production in Photon - Photon Collisions
The collaboration Brandelik, R. ; Braunschweig, W. ; Gather, K. ; et al.
Phys.Lett.B 107 (1981) 290-296, 1981.
Inspire Record 167417
We have studied the properties of hadron production in photon-photon scattering with tagged photons at the e+e− storage ring PETRA. A tail in the p_T distribution of particles consistent with p_T^−4 has been observed. We show that this tail cannot be due to the hadronic part of the photon. Selected events with high-p_T particles are found to be consistent with a two-jet structure as expected from a point-like coupling of the photons to quarks. The lowest-order cross section predicted for γγ → qq̄, σ = 3 Σ_q e_q^4 · σ(γγ → μμ), is approached from above by the data at large transverse momenta.
#### Inclusive jet production in two-photon collisions at LEP
The collaboration Achard, P. ; Adriani, O. ; Aguilar-Benitez, M. ; et al.
Phys.Lett.B 602 (2004) 157-166, 2004.
Inspire Record 661114
Inclusive jet production, e+e- -> e+e- jet X, is studied using 560/pb of data collected at LEP with the L3 detector at centre-of-mass energies between 189 and 209 GeV. The inclusive differential cross section is measured using a k_t jet algorithm as a function of the jet transverse momentum, pt, in the range 3<pt<50 GeV for a pseudorapidity, eta, in the range -1<eta<1. This cross section is well represented by a power law. For high pt, the measured cross section is significantly higher than the NLO QCD predictions, as already observed for inclusive charged and neutral pion production.

#### Inclusive lambda production in two photon collisions at LEP

The collaboration Achard, P. ; Adriani, O. ; Aguilar-Benitez, M. ; et al.

Phys.Lett.B 586 (2004) 140-150, 2004.

Inspire Record 637287

The reactions e^+e^- -> e^+e^- Lambda X and e^+e^- -> e^+e^- anti-Lambda X are studied using data collected at LEP with the L3 detector at centre-of-mass energies between 189 and 209 GeV. Inclusive differential cross sections are measured as a function of the Lambda transverse momentum, p_t, and pseudo-rapidity, eta, in the ranges 0.4 GeV < p_t < 2.5 GeV and |eta| < 1.2. The data are compared to Monte Carlo predictions. The differential cross section as a function of p_t is well described by an exponential of the form A exp(-p_t / <p_t>).
#### Pion and Kaon Pair Production in Photon-Photon Collisions
The collaboration Aihara, H. ; Alston-Garnjost, M. ; Avery, R.E. ; et al.
Phys.Rev.Lett. 57 (1986) 404, 1986.
Inspire Record 228072
We report measurements of the two-photon processes e+e−→e+e−π+π− and e+e−→e+e−K+K−, at an e+e− center-of-mass energy of 29 GeV. In the π+π− data a high-statistics analysis of the f(1270) results in a γγ width Γ(γγ→f)=3.2±0.4 keV. The π+π− continuum below the f mass is well described by a QED Born approximation, whereas above the f mass it is consistent with a QCD-model calculation if a large contribution from the f is assumed. For the K+K− data we find agreement of the high-mass continuum with the QCD prediction; limits on f′(1520) and θ(1720) formation are presented.
#### Inclusive production of charged hadrons in photon-photon collisions
The collaboration Abbiendi, G. ; Ainsley, C. ; Akesson, P.F. ; et al.
Phys.Lett.B 651 (2007) 92-101, 2007.
Inspire Record 734955
The inclusive production of charged hadrons in the collisions of quasi-real photons e+e- -> e+e- +X has been measured using the OPAL detector at LEP. The data were taken at e+e- centre-of-mass energies from 183 to 209 GeV. The differential cross-sections as a function of the transverse momentum and the pseudorapidity of the hadrons are compared to theoretical calculations of up to next-to-leading order (NLO) in the strong coupling constant alpha_s. The data are also compared to a measurement by the L3 Collaboration, in which a large deviation from the NLO predictions is observed.
#### Measurement of eta eta production in two-photon collisions
The collaboration Uehara, S. ; Watanabe, Y. ; Nakazawa, H. ; et al.
Phys.Rev.D 82 (2010) 114031, 2010.
Inspire Record 862260
We report the first measurement of the differential cross section for the process gamma gamma --> eta eta in the kinematic range above the eta eta threshold, 1.096 GeV < W < 3.8 GeV over nearly the entire solid angle range, |cos theta*| <= 0.9 or <= 1.0 depending on W, where W and theta* are the energy and eta scattering angle, respectively, in the gamma gamma center-of-mass system. The results are based on a 393 fb^{-1} data sample collected with the Belle detector at the KEKB e^+ e^- collider. In the W range 1.1-2.0 GeV/c^2 we perform an analysis of resonance amplitudes for various partial waves, and at higher energy we compare the energy and the angular dependences of the cross section with predictions of theoretical models and extract contributions of the chi_{cJ} charmonia.
#### Double tag events in two photon collisions at LEP
The collaboration Achard, P. ; Adriani, O. ; Aguilar-Benitez, M. ; et al.
Phys.Lett.B 531 (2002) 39-51, 2002.
Inspire Record 565440
Double-tag events in two-photon collisions are studied using the L3 detector at LEP centre-of-mass energies from √s = 189 GeV to 209 GeV. The cross sections of the e+e- -> e+e- hadrons and gamma*gamma* -> hadrons processes are measured as a function of the photon virtualities, Q1^2 and Q2^2, of the two-photon mass, W_gammagamma, and of the variable Y = ln(W_gammagamma^2/(Q1 Q2)), for an average photon virtuality <Q^2> = 16 GeV^2. The results are in agreement with next-to-leading order calculations for the process gamma*gamma* -> q qbar in the interval 2 <= Y <= 5. An excess is observed in the interval 5 < Y <= 7, corresponding to W_gammagamma greater than 40 GeV. This may be interpreted as a sign of resolved photon QCD processes or the onset of BFKL phenomena.
#### Inclusive $D^{*+-}$ production in two photon collisions at LEP
The collaboration Achard, P. ; Adriani, O. ; Aguilar-Benitez, M. ; et al.
Phys.Lett.B 535 (2002) 59-69, 2002.
Inspire Record 585623
Inclusive D^{*+-} production in two-photon collisions is studied with the L3 detector at LEP, using 683 pb^{-1} of data collected at centre-of-mass energies from 183 to 208 GeV. Differential cross sections are determined as functions of the transverse momentum and pseudorapidity of the D^{*+-} mesons in the kinematic region 1 GeV < P_T < 12 GeV and |eta| < 1.4. The cross section sigma(e^+e^- -> e^+e^- D^{*+-} X) in this kinematic region is measured and the sigma(e^+e^- -> e^+e^- c cbar X) cross section is derived. The measurements are compared with next-to-leading order perturbative QCD calculations.
#### Inclusive charged hadron production in two photon collisions at LEP
The collaboration Achard, P. ; Adriani, O. ; Aguilar-Benitez, M. ; et al.
Phys.Lett.B 554 (2003) 105-114, 2003.
Inspire Record 605973
Inclusive charged hadron production, e+e- -> e+e- h+- X, is studied using 414 pb-1 of data collected at LEP with the L3 detector at centre-of-mass energies between 189 and 202 GeV. Single particle inclusive differential cross sections are measured as a function of the particle transverse momentum, pt, and pseudo-rapidity, eta. For p_t < 1.5 GeV, the data are well described by an exponential, typical of soft hadronic processes. For higher pt, the onset of perturbative QCD processes is observed. The pi+- production cross section for pt > 5 GeV is much higher than the NLO QCD predictions.
#### Production of the F(0) Meson in Photon-photon Collisions
The collaboration Behrend, H.J. ; Fenner, H. ; Schachter, M.J. ; et al.
Z.Phys.C 23 (1984) 223, 1984.
Inspire Record 199731
The production of the f0 in two-photon collisions, with the subsequent decay f0 → π+π−, has been observed in the CELLO detector at PETRA. The f0 peak was found to lie on a dipion continuum and to be shifted downwards in mass by ≃50 MeV/c². The ππ mass spectrum from 0.8 to 1.5 GeV/c² was well fitted by the model of Mennessier using only a unitarised Born amplitude and a helicity-2 f0 amplitude. The previously observed mass shift and distortion of the f0 peak are explained by strong interference between the Born and f0 amplitudes. The only free parameter in the fit of the data to the model is the radiative width Γγγ(f0). It was found that: Γγγ(f0) = 2.5 ± 0.1 ± 0.5 keV, where the first (second) quoted errors are statistical (systematic).
#### Exclusive Production of Proton Anti-proton Pairs in Photon-photon Collisions
The collaboration Bartel, W. ; Becker, L. ; Cords, D. ; et al.
Phys.Lett.B 174 (1986) 350-356, 1986.
Inspire Record 231554
Total and differential cross sections for exclusive production of proton-antiproton pairs in photon-photon collisions have been measured using the JADE detector at PETRA. The total cross section in the CM angular range |cos θ*| < 0.6 reaches a maximum value of 3.8 nb for a γγ invariant mass of W_γγ = 2.25 GeV, and decreases rapidly for higher values of W_γγ. In the range 2.0 GeV < W_γγ < 2.6 GeV the angular distribution is not isotropic. The nucleons are preferentially emitted at large angles to the collision axis.
#### $p\bar{p}$ pair production in two photon collisions at LEP
The collaboration Achard, P. ; Adriani, O. ; Aguilar-Benitez, M. ; et al.
Phys.Lett.B 571 (2003) 11-20, 2003.
Inspire Record 620433
The reaction e^+e^- -> e^+e^- proton antiproton is studied with the L3 detector at LEP. The analysis is based on data collected at e^+e^- center-of-mass energies from 183 GeV to 209 GeV, corresponding to an integrated luminosity of 667 pb^-1. The gamma gamma -> proton antiproton differential cross section is measured in the range of the two-photon center-of-mass energy from 2.1 GeV to 4.5 GeV. The results are compared to the predictions of the three-quark and quark-diquark models.
#### Formation of Delta (980) and A2 (1320) in Photon-photon Collisions
The collaboration Antreasyan, D. ; Aschman, D. ; Besset, D. ; et al.
Phys.Rev.D 33 (1986) 1847, 1986.
Inspire Record 217547
The reaction γγ → π0η has been investigated with the Crystal Ball detector at the DESY storage ring DORIS II. Formation of δ(980) and A2(1320) has been observed with γγ partial widths Γγγ(A2) = 1.14 ± 0.20 ± 0.26 keV and Γγγ(δ)·B(δ→πη) = 0.19 ± 0.07 +0.10/−0.07 keV.
0 data tables match query
#### Measurement of inclusive $D^{*+-}$ production in two photon collisions at LEP
The collaboration Acciarri, M. ; Achard, P. ; Adriani, O. ; et al.
Phys.Lett.B 467 (1999) 137-146, 1999.
Inspire Record 505281
Inclusive production of $\mathrm{D^{*\pm}}$ mesons in two-photon collisions was measured by the L3 experiment at LEP. The data were collected at a centre-of-mass energy $\sqrt{s} = 189$ GeV with an integrated luminosity of $176.4 \mathrm{pb^{-1}}$. Differential cross sections of the process $\mathrm{e^+e^- \to D^{*\pm} X}$ are determined as functions of the transverse momentum and pseudorapidity of the $\mathrm{D^{*\pm}}$ mesons in the kinematic region 1 GeV $< p_{T}^{\mathrm{D^*}} < 5$ GeV and $\mathrm{|\eta^{D^*}|} < 1.4$. The cross section integrated over this phase space domain is measured to be $132 \pm 22(stat.) \pm 26(syst.)$ pb. The differential cross sections are compared with next-to-leading order perturbative QCD calculations.
0 data tables match query
#### A Measurement of $\pi^0 \pi^0$ Production in Two Photon Collisions
The collaboration Marsiske, H. ; Antreasyan, D. ; Bartels, H.W. ; et al.
Phys.Rev.D 41 (1990) 3324, 1990.
Inspire Record 294492
The reaction e+e−→e+e−π0π0 has been analyzed using 97 pb−1 of data taken with the Crystal Ball detector at the DESY e+e− storage ring DORIS II at beam energies around 5.3 GeV. For the first time we have measured the cross section for γγ→π0π0 for π0π0 invariant masses ranging from threshold to about 2 GeV. We measure an approximately flat cross section of about 10 nb for W = mπ0π0 < 0.8 GeV, which is, below 0.6 GeV, in good agreement with a theoretical prediction based on an unitarized Born-term model. At higher invariant masses we observe formation of the f2(1270) resonance and a hint of the f0(975). We deduce the following two-photon widths: Γγγ(f2(1270)) = 3.19±0.16 +0.29/−0.28 keV and Γγγ(f0(975)) < 0.53 keV at 90% C.L. The decay-angular distributions show the π0π0 system to be dominantly spin 0 for W < 0.7 GeV and spin 2, helicity 2 in the f2(1270) region, with helicity 0 contributing at most 22% (90% C.L.).
0 data tables match query
#### High-statistics measurement of neutral pion-pair production in two-photon collisions
The collaboration Uehara, S. ; Watanabe, Y. ; Adachi, I. ; et al.
Phys.Rev.D 78 (2008) 052004, 2008.
Inspire Record 786406
We report a high-statistics measurement of differential cross sections for the process gamma gamma -> pi^0 pi^0 in the kinematic range 0.6 GeV <= W <= 4.0 GeV and |cos theta*| <= 0.8, where W and theta* are the energy and pion scattering angle, respectively, in the gamma gamma center-of-mass system. Differential cross sections are fitted to obtain information on S, D_0, D_2, G_0 and G_2 waves. The G waves are important above W ~= 1.6 GeV. For W <= 1.6 GeV the D_2 wave is dominated by the f_2(1270) resonance while the S wave requires at least one additional resonance besides the f_0(980), which may be the f_0(1370) or f_0(1500). The differential cross sections are fitted with a simple parameterization to determine the parameters (the mass, total width and Gamma_{gamma gamma}B(f_0 -> pi^0 pi^0)) of this scalar meson as well as the f_0(980). The helicity 0 fraction of the f_2(1270) meson, taking into account interference for the first time, is also obtained.
0 data tables match query
#### High-statistics study of neutral-pion pair production in two-photon collisions
The collaboration Uehara, S. ; Watanabe, Y. ; Nakazawa, H. ; et al.
Phys.Rev.D 79 (2009) 052009, 2009.
Inspire Record 815978
The differential cross sections for the process $\gamma \gamma \to \pi^0 \pi^0$ have been measured in the kinematic range 0.6 GeV $< W < 4.1$ GeV, $|\cos \theta^*|<0.8$ in energy and pion scattering angle, respectively, in the $\gamma\gamma$ center-of-mass system. The results are based on a 223 fb$^{-1}$ data sample collected with the Belle detector at the KEKB $e^+ e^-$ collider. The differential cross sections are fitted in the energy region 1.7 GeV $< W <$ 2.5 GeV to confirm the two-photon production of two pions in the G wave. In the higher energy region, we observe production of the $\chi_{c0}$ charmonium state and obtain the product of its two-photon decay width and branching fraction to $\pi^0\pi^0$. We also compare the observed angular dependence and ratios of cross sections for neutral-pion and charged-pion pair production to QCD models. The energy and angular dependence above 3.1 GeV are compatible with those measured in the $\pi^+\pi^-$ channel, and in addition we find that the cross section ratio, $\sigma(\pi^0\pi^0)/\sigma(\pi^+\pi^-)$, is $0.32 \pm 0.03 \pm 0.05$ on average in the 3.1-4.1 GeV region.
0 data tables match query
#### High-statistics study of $K^0_S$ pair production in two-photon collisions
The collaboration Uehara, S. ; Watanabe, Y. ; Nakazawa, H. ; et al.
PTEP 2013 (2013) 123C01, 2013.
Inspire Record 1245023
We report a high-statistics measurement of the differential cross section of the process gamma gamma --> K^0_S K^0_S in the range 1.05 GeV <= W <= 4.00 GeV, where W is the center-of-mass energy of the colliding photons, using 972 fb^{-1} of data collected with the Belle detector at the KEKB asymmetric-energy e^+ e^- collider operated at and near the Upsilon-resonance region. The differential cross section is fitted by parameterized S-, D_0-, D_2-, G_0- and G_2-wave amplitudes. In the D_2 wave, the f_2(1270), a_2(1320) and f_2'(1525) are dominant and a resonance, the f_2(2200), is also present. The f_0(1710) and possibly the f_0(2500) are seen in the S wave. The mass, total width and product of the two-photon partial decay width and decay branching fraction to the K bar{K} state Gamma_{gamma gamma}B(K bar{K}) are extracted for the f_2'(1525), f_0(1710), f_2(2200) and f_0(2500). The destructive interference between the f_2(1270) and a_2(1320) is confirmed by measuring their relative phase. The parameters of the charmonium states chi_{c0} and chi_{c2} are updated. Possible contributions from the chi_{c0}(2P) and chi_{c2}(2P) states are discussed. A new upper limit for the branching fraction of the P- and CP-violating decay channel eta_c --> K^0_S K^0_S is reported. The detailed behavior of the cross section is updated and compared with QCD-based calculations.
0 data tables match query
#### Study of $\pi^0$ pair production in single-tag two-photon collisions
The collaboration Masuda, M. ; Uehara, S. ; Watanabe, Y. ; et al.
Phys.Rev.D 93 (2016) 032003, 2016.
Inspire Record 1390112
We report a measurement of the differential cross section of $\pi^0$ pair production in single-tag two-photon collisions, $\gamma^* \gamma \to \pi^0 \pi^0$, in $e^+ e^-$ scattering. The cross section is measured for $Q^2$ up to 30 GeV$^2$, where $Q^2$ is the negative of the invariant mass squared of the tagged photon, in the kinematic range 0.5 GeV < W < 2.1 GeV and $|\cos \theta^*|$ < 1.0 for the total energy and pion scattering angle, respectively, in the $\gamma^* \gamma$ center-of-mass system. The results are based on a data sample of 759 fb$^{-1}$ collected with the Belle detector at the KEKB asymmetric-energy $e^+ e^-$ collider. The transition form factor of the $f_0(980)$ and that of the $f_2(1270)$ with the helicity-0, -1, and -2 components separately are measured for the first time and are compared with theoretical calculations.
0 data tables match query
#### Exclusive production of $p\bar{p}$ pairs in two photon collisions at PEP
The collaboration Aihara, H. ; Alston-Garnjost, M. ; Avery, R.E. ; et al.
Phys.Rev.D 36 (1987) 3506, 1987.
Inspire Record 246557
We report cross sections for the process γγ→pp̄ at center-of-mass energies W from 2.0 to 2.8 GeV. These results have been extracted from measurements of e+e−→e+e−pp̄ at an overall center-of-mass energy of 29 GeV, using the TPC/Two-Gamma facility at the SLAC storage ring PEP. Cross sections for the untagged mode [both photons nearly real] are shown to lie well above QCD predictions. Results are also presented for the single-tagged mode [one photon in the range 0.16<Q2<1.6 (GeV/c)2].
0 data tables match query
#### Inclusive production of charged hadrons and K0(S) mesons in photon-photon collisions
The collaboration Ackerstaff, K. ; Alexander, G. ; Allison, John ; et al.
Eur.Phys.J.C 6 (1999) 253-264, 1999.
Inspire Record 472639
The production of charged hadrons and K_s mesons in the collisions of quasi-real photons has been measured using the OPAL detector at LEP. The data were taken at e+e- centre-of-mass energies of 161 and 172 GeV. The differential cross-sections as a function of the transverse momentum and the pseudorapidity of the charged hadrons and K_s mesons have been compared to the leading order Monte Carlo simulations of PHOJET and PYTHIA and to perturbative next-to-leading order (NLO) QCD calculations. The distributions have been measured in the range 10-125 GeV of the hadronic invariant mass W. By comparing the transverse momentum distribution of charged hadrons measured in gamma-gamma interactions with gamma-proton and meson-proton data we find evidence for hard photon interactions in addition to the purely hadronic photon interactions.
0 data tables match query
#### Measurement of the proton - anti-proton pair production from two photon collisions at TRISTAN
The collaboration Hamasaki, H. ; Abe, K. ; Amako, K. ; et al.
Phys.Lett.B 407 (1997) 185-192, 1997.
Inspire Record 443677
The cross section of the γγ → pp̄ reaction was measured at two-photon center-of-mass energies (Wγγ) between 2.2 and 3.3 GeV, using the two-photon process at an e+e− collider, TRISTAN. The Wγγ dependence of the cross section integrated over a c.m. angular region of |cos θ*| < 0.6 is in good agreement with the previous measurements and the theoretical prediction based on the diquark model in the high Wγγ region.
0 data tables match query
#### Exclusive production of pion and kaon meson pairs in two photon collisions at LEP
The collaboration Heister, A. ; Schael, S. ; Barate, R. ; et al.
Phys.Lett.B 569 (2003) 140-150, 2003.
Inspire Record 626022
Exclusive production of π and K meson pairs in two photon collisions is measured with ALEPH data collected between 1992 and 2000. Cross-sections are presented as a function of cos θ* and invariant mass, for |cos θ*| < 0.6 and invariant masses between 2.0 and 6.0 GeV/c2 (2.25 and 4.0 GeV/c2) for pions (kaons). The shapes of the distributions are found to be well described by QCD predictions, but the data have a significantly higher normalization.
0 data tables match query
#### Inclusive $\pi^0$ and $K^0_{S}$ production in two photon collisions at LEP
The collaboration Achard, P. ; Adriani, O. ; Aguilar-Benitez, M. ; et al.
Phys.Lett.B 524 (2002) 44-54, 2002.
Inspire Record 563335
The reactions ee->ee+pi0+X and ee->ee+K0s+X are studied using data collected at LEP with the L3 detector at centre-of-mass energies between 189 and 202 GeV. Inclusive differential cross sections are measured as a function of the particle transverse momentum pt and the pseudo-rapidity. For pt < 1.5 GeV, the pi0 and K0s differential cross sections are described by an exponential, typical of soft hadronic processes. For pt > 1.5 GeV, the cross sections show the presence of perturbative QCD processes, described by a power-law. The data are compared to Monte Carlo predictions and to NLO QCD calculations.
0 data tables match query | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9862931370735168, "perplexity": 4719.10317916341}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400192783.34/warc/CC-MAIN-20200919173334-20200919203334-00284.warc.gz"} |
https://proofwiki.org/wiki/Definition:Rank_(Set_Theory) | Definition:Rank (Set Theory)
Definition
Let $A$ be a set.
Let $V$ denote the von Neumann hierarchy.
Then the rank of $A$ is the smallest ordinal $x$ such that $A \in V \left({x+1}\right)$, provided such an $x$ exists.
Notation
The rank of the class $A$ is sometimes denoted as $\operatorname{rank} \left({A}\right)$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9927335381507874, "perplexity": 146.12499075555544}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986675409.61/warc/CC-MAIN-20191017145741-20191017173241-00517.warc.gz"} |
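As a small illustration of the definition (not part of the ProofWiki entry): for hereditarily finite sets the supremum implicit in the definition reduces to a maximum, so $\operatorname{rank} \left({A}\right) = \max_{x \in A} \left({\operatorname{rank} \left({x}\right) + 1}\right)$ with $\operatorname{rank} \left({\varnothing}\right) = 0$. A minimal Python sketch, encoding sets as nested frozensets (the encoding is a choice made here, not prescribed by the source):

```python
def rank(a):
    """Rank of a hereditarily finite set given as nested frozensets.

    rank(A) = max over x in A of (rank(x) + 1), with rank of the
    empty set equal to 0 (the finite case of the sup in the definition).
    """
    return max((rank(x) + 1 for x in a), default=0)

empty = frozenset()
one = frozenset({empty})          # von Neumann ordinal 1
two = frozenset({empty, one})     # von Neumann ordinal 2
pair = frozenset({one})           # {{0}} also has rank 2

print(rank(empty), rank(one), rank(two), rank(pair))  # 0 1 2 2
```

Consistently with the definition, the von Neumann ordinal $n$ has rank $n$: it first appears at stage $V \left({n+1}\right)$.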
http://www.mathworks.com/help/control/ref/balreal.html?nocookie=true | Accelerating the pace of engineering and science
# balreal
Gramian-based input/output balancing of state-space realizations
## Syntax
[sysb, g] = balreal(sys)
[sysb, g] = balreal(sys,'AbsTol',ATOL,'RelTol',RTOL,'Offset',ALPHA)
[sysb, g] = balreal(sys, condmax)
[sysb, g, T, Ti] = balreal(sys)
[sysb, g] = balreal(sys, opts)
## Description
[sysb, g] = balreal(sys) computes a balanced realization sysb for the stable portion of the LTI model sys. balreal handles both continuous and discrete systems. If sys is not a state-space model, it is automatically converted to state space using ss.
For stable systems, sysb is an equivalent realization for which the controllability and observability Gramians are equal and diagonal, their diagonal entries forming the vector g of Hankel singular values. Small entries in g indicate states that can be removed to simplify the model (use modred to reduce the model order).
If sys has unstable poles, its stable part is isolated, balanced, and added back to its unstable part to form sysb. The entries of g corresponding to unstable modes are set to Inf.
[sysb, g] = balreal(sys,'AbsTol',ATOL,'RelTol',RTOL,'Offset',ALPHA) specifies additional options for the stable/unstable decomposition. See the stabsep reference page for more information about these options. The default values are ATOL = 0, RTOL = 1e-8, and ALPHA = 1e-8.
[sysb, g] = balreal(sys, condmax) controls the condition number of the stable/unstable decomposition. Increasing condmax helps separate closely spaced stable and unstable modes at the expense of accuracy. By default, condmax = 1e8.
[sysb, g, T, Ti] = balreal(sys) also returns the vector g containing the diagonal of the balanced gramian, the state similarity transformation xb = Tx used to convert sys to sysb, and the inverse transformation Ti = T⁻¹.
If the system is normalized properly, the diagonal g of the joint gramian can be used to reduce the model order. Because g reflects the combined controllability and observability of individual states of the balanced model, you can delete those states with a small g(i) while retaining the most important input-output characteristics of the original system. Use modred to perform the state elimination.
[sysb, g] = balreal(sys, opts) computes the balanced realization using the options specified in the hsvdOptions object opts.
## Examples
### Balanced Realization of Stable System
Consider the following zero-pole-gain model, with near-canceling pole-zero pairs:
```sys = zpk([-10 -20.01],[-5 -9.9 -20.1],1)
```
```sys =
(s+10) (s+20.01)
----------------------
(s+5) (s+9.9) (s+20.1)
Continuous-time zero/pole/gain model.
```
A state-space realization with balanced gramians is obtained by
```[sysb,g] = balreal(sys);
```
The diagonal entries of the joint gramian are
```g'
```
```ans =
0.1006 0.0001 0.0000
```
This indicates that the last two states of sysb are weakly coupled to the input and output. You can then delete these states by
```sysr = modred(sysb,[2 3],'del');
```
This yields the following first-order approximation of the original system.
```zpk(sysr)
```
```ans =
1.0001
--------
(s+4.97)
Continuous-time zero/pole/gain model.
```
Compare the Bode responses of the original and reduced-order models.
```bodeplot(sys,sysr,'r--')
```
The plot shows that removing the second and third states does not have much effect on system dynamics.
### Balanced Realization of Unstable System
Create this unstable system:
```sys1=tf(1,[1 0 -1])
Transfer function:
1
-------
s^2 - 1
```
Apply balreal to create a balanced gramian realization.
```[sysb,g]=balreal(sys1)
a =
x1 x2
x1 1 0
x2 0 -1
b =
u1
x1 0.7071
x2 0.7071
c =
x1 x2
y1 0.7071 -0.7071
d =
u1
y1 0
Continuous-time model.
g =
Inf
0.2500
```
The unstable pole shows up as Inf in vector g.
### Algorithms
Consider the model
$\dot{x} = Ax + Bu, \qquad y = Cx + Du$
with controllability and observability gramians Wc and Wo. The state coordinate transformation $\overline{x}=Tx$ produces the equivalent model
$\dot{\overline{x}} = TAT^{-1}\overline{x} + TBu, \qquad y = CT^{-1}\overline{x} + Du$
and transforms the gramians to
$\overline{W}_c = TW_cT^T, \qquad \overline{W}_o = T^{-T}W_oT^{-1}$
The function balreal computes a particular similarity transformation T such that
$\overline{W}_c = \overline{W}_o = \operatorname{diag}(g)$
See [1], [2] for details on the algorithm.
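The balancing step described above can be sketched with the square-root method of [3]. The following is a hedged Python/NumPy illustration rather than MathWorks code, and the 2-state stable system (A, B, C) is invented for the example: the gramians come from the two Lyapunov equations, Cholesky factors Wc = Lc·Lcᵀ and Wo = Lo·Loᵀ are formed, and the SVD of LoᵀLc yields the transformation T, its inverse Ti, and the Hankel singular values g.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

# Arbitrary stable, controllable, observable 2-state system for illustration.
A = np.array([[-1.0, 0.5],
              [ 0.0, -2.0]])
B = np.array([[1.0],
              [0.5]])
C = np.array([[1.0, 1.0]])

# Gramians: A Wc + Wc A' + B B' = 0  and  A' Wo + Wo A + C' C = 0.
Wc = solve_continuous_lyapunov(A, -B @ B.T)
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)

# Square-root method: Wc = Lc Lc', Wo = Lo Lo', then SVD of Lo' Lc.
Lc = cholesky(Wc, lower=True)
Lo = cholesky(Wo, lower=True)
U, s, Vt = svd(Lo.T @ Lc)

S_inv_sqrt = np.diag(1.0 / np.sqrt(s))
T = S_inv_sqrt @ U.T @ Lo.T    # balancing transformation, xb = T x
Ti = Lc @ Vt.T @ S_inv_sqrt    # its inverse

g = s                          # Hankel singular values = diagonal gramian
Wc_bal = T @ Wc @ T.T
Wo_bal = Ti.T @ Wo @ Ti

print("balanced gramian diagonal:", np.round(g, 4))
assert np.allclose(Wc_bal, np.diag(g), atol=1e-10)
assert np.allclose(Wo_bal, np.diag(g), atol=1e-10)
assert np.allclose(T @ Ti, np.eye(2), atol=1e-10)
```

With T and Ti in hand, deleting the states associated with small entries of g is the truncation step that modred performs on the MATLAB side.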
## References
[1] Laub, A.J., M.T. Heath, C.C. Paige, and R.C. Ward, "Computation of System Balancing Transformations and Other Applications of Simultaneous Diagonalization Algorithms," IEEE® Trans. Automatic Control, AC-32 (1987), pp. 115-122.
[2] Moore, B., "Principal Component Analysis in Linear Systems: Controllability, Observability, and Model Reduction," IEEE Transactions on Automatic Control, AC-26 (1981), pp. 17-31.
[3] Laub, A.J., "Computation of Balancing Transformations," Proc. ACC, San Francisco, Vol.1, paper FA8-E, 1980. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 5, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8636480569839478, "perplexity": 3713.910647745506}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931009825.77/warc/CC-MAIN-20141125155649-00243-ip-10-235-23-156.ec2.internal.warc.gz"} |
https://www.math.chapman.edu/~jipsen/structures/doku.php?id=commutative_bck-algebras | Commutative BCK-algebras
Abbreviation: ComBCK
Definition
A \emph{commutative BCK-algebra} is a structure $\mathbf{A}=\langle A,\cdot ,0\rangle$ of type $\langle 2,0\rangle$ such that
(1): $((x\cdot y)\cdot (x\cdot z))\cdot (z\cdot y) = 0$
(2): $x\cdot 0 = x$
(3): $0\cdot x = 0$
(4): $x\cdot y=y\cdot x= 0 \Longrightarrow x=y$
(5): $x\cdot (x\cdot y) = y\cdot (y\cdot x)$
Remark: Note that the commutativity does not refer to the operation $\cdot$, but rather to the term operation $x\wedge y=x\cdot (x\cdot y)$, which turns out to be a meet with respect to the following partial order:
$x\le y \iff x\cdot y=0$, with $0$ as least element.
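The prototypical instance of this order is truncated subtraction on the natural numbers, $x\cdot y=\max(x-y,0)$, which forms a commutative BCK-algebra. As an illustration added here (a sketch, not taken from the page), the five axioms, and the fact that the derived meet $x\wedge y=x\cdot (x\cdot y)$ coincides with $\min(x,y)$, can be checked by brute force on an initial segment of the naturals:

```python
from itertools import product

N = 5
elems = range(N)

def dot(x, y):
    """Truncated subtraction: the BCK operation x . y on the naturals."""
    return max(x - y, 0)

for x, y, z in product(elems, repeat=3):
    assert dot(dot(dot(x, y), dot(x, z)), dot(z, y)) == 0   # axiom (1)
    assert dot(x, 0) == x                                   # axiom (2)
    assert dot(0, x) == 0                                   # axiom (3)
    if dot(x, y) == 0 and dot(y, x) == 0:                   # axiom (4)
        assert x == y
    assert dot(x, dot(x, y)) == dot(y, dot(y, x))           # axiom (5)
    # derived meet x ^ y = x.(x.y) agrees with min under x <= y iff x.y = 0
    assert dot(x, dot(x, y)) == min(x, y)

print("all commutative BCK axioms hold on {0..%d} with truncated subtraction" % (N - 1))
```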
Definition
A \emph{commutative BCK-algebra} is a BCK-algebra $\mathbf{A}=\langle A,\cdot ,0\rangle$ such that
$x\cdot (x\cdot y) = y\cdot (y\cdot x)$
Morphisms
Let $\mathbf{A}$ and $\mathbf{B}$ be commutative BCK-algebras. A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is a homomorphism:
$h(x\cdot y)=h(x)\cdot h(y) \mbox{ and } h(0)=0$
Example 1:
Properties
Classtype variety no unbounded yes yes yes, $n=3$ no no
Finite members
$f(1)=1,\quad f(2)=1,\quad f(3)=2,\quad f(4)=5,\quad f(5)=11,\quad f(6)=28,\quad f(7)=72,\quad f(8)=192$
http://pldml.icm.edu.pl/pldml/element/bwmeta1.element.bwnjournal-article-doi-10_4064-fm227-2-1 | PL EN
Preferencje
Język
Widoczny [Schowaj] Abstrakt
Liczba wyników
Journal
Fundamenta Mathematicae
2014 | 227 | 2 | 97-128
Article title
Discrete homotopy theory and critical values of metric spaces
Authors
Content
Title variants
Publication languages
EN
Abstracts
EN
Utilizing the discrete homotopy methods developed for uniform spaces by Berestovskii-Plaut, we define the critical spectrum Cr(X) of a metric space, generalizing to the non-geodesic case the covering spectrum defined by Sormani-Wei and the homotopy critical spectrum defined by Plaut-Wilkins. If X is geodesic, Cr(X) is the same as the homotopy critical spectrum, which differs from the covering spectrum by a factor of 3/2. The latter two spectra are known to be discrete for compact geodesic spaces, and correspond to the values at which certain special covering maps, called δ-covers (Sormani-Wei) or ε-covers (Plaut-Wilkins), change equivalence type. In this paper we initiate the study of these ideas for non-geodesic spaces, motivated by the need to understand the extent to which the accompanying covering maps are topological invariants. We show that discreteness of the critical spectrum for general metric spaces can fail in several ways, which we classify. The, newcomer" critical values for compact non-geodesic spaces are completely determined by the homotopy critical values and the refinement critical values, the latter of which can, in many cases, be removed by changing the metric in a bi-Lipschitz way.
Keywords
Subject categories
Journal
Year
Volume
Issue
Pages
97-128
Physical description
Dates
published
2014
Contributors
author
• Department of Mathematics, University of Tennessee, Knoxville, TN 37996, U.S.A.
author
• 7078 W. Rainbow Rd., Sedalia, CO 80135, U.S.A.
author
• Department of Mathematics, Vanderbilt University, Nashville, TN 37240, U.S.A.
author
• Department of Mathematics, University of Tennessee, Knoxville, TN 37996, U.S.A.
author
• Department of Mathematics, Cornell University, Ithaca, NY 14853-4201, U.S.A.
author
• 17 Sutherland Rd., Hicksville, NY 11801, U.S.A.
author
• Department of Mathematics and Computer Science, University of North Carolina at Pembroke, Pembroke, NC 28372, U.S.A.
Bibliography
Document type
Bibliography
Identifiers
http://www.chemicalforums.com/index.php?topic=89531.0 | # Chemical Forums
### Author Topic: Unknown isopropanol contaminant (Read 1775 times)
#### BROe
• Regular Member
• Mole Snacks: +0/-0
• Offline
• Posts: 17
##### Unknown isopropanol contaminant
« on: January 03, 2017, 01:15:44 PM »
I recently bought some Isopropanol in the form of Iso-Heet to be used as a cleaning and reaction solvent. According to the MSDS sheet put out by the manufacturer, the product contains 99% IPA and 1% "proprietary additive". In order to separate the alcohol from the additive, a simple distillation has always been sufficient as there remains in the boiling flask a high-boiling syrupy amber liquid.
However, when I was cleaning the recently-bought Iso-Heet, it was distilling over at 45C rather than at its usual boiling point of 83C. Checking the distillation periodically I did not notice any of the "lines" normally observed when two liquids of different densities mix (I have heard them called Schlieren but I have no idea if this is an accurate term). Despite this one sticking point the liquid I distilled had a density within 0.01 of pure IPA, it was oxidized by acidic permanganate, it has the characteristic odor of IPA, and I observed no change in the solution density when I mixed a small amount of this liquid with some of my old stock that I am certain is (or rather was) pure Isopropanol.
I suspect that this impurity must have a density very similar to that of pure Isopropanol such that when the two are mixed there is little to no observable change, and that I have an azeotropic mixture of the two as when doing a second distillation I only collected one fraction at 43C. The other option is that the liquid in question simply isn't Isopropanol. Does anybody know of any further qualitative tests or separation methods I could try, maybe there is something I'm missing? I am tempted to see if I can't have a sample sent to a lab to have an IR and NMR done but that would likely be rather costly, so I view it as a last resort.
MSDS:
http://www.servicechamp.com/images/28202msds.pdf
Logged
#### Borek
• Mr. pH
• Deity Member
• Mole Snacks: +1597/-393
• Online
• Gender:
• Posts: 24337
• I am known to be occasionally wrong.
##### Re: Unknown isopropanol contaminant
« Reply #1 on: January 03, 2017, 09:01:49 PM »
the product contains 99% IPA and 1% "proprietary additive"
Quote
it was distilling over at 45C rather than at its usual boiling point of 83C
Strange. Intuition tells me if the additive has a low boiling point and is present in very small quantities it should be lost very fast. Also, if there is an azeotropic mixture and it contains below 1% of the other component (as would be in your case) I would expect its boiling point to be close to the BP of IPA. If the whole sample boils at a much lower temperature for a long period of time something is IMHO seriously off.
Logged
ChemBuddy chemical calculators - stoichiometry, pH, concentration, buffer preparation, titrations.info, pH-meter.info, PZWT_s1
#### Babcock_Hall
• Chemist
• Sr. Member
• Mole Snacks: +221/-16
• Offline
• Posts: 3349
##### Re: Unknown isopropanol contaminant
« Reply #2 on: January 04, 2017, 04:59:43 AM »
Are you confident that the thermometer was low enough in the distillation apparatus? If it sits too high, it gives a reading that is too low.
Logged
#### BROe
• Regular Member
• Mole Snacks: +0/-0
• Offline
• Posts: 17
##### Re: Unknown isopropanol contaminant
« Reply #3 on: January 04, 2017, 06:31:55 AM »
The thermometer I use is jointed so it always sits in the same place in the stillhead, the last distillation I ran using it was of azeotropic nitric acid and I was getting appropriate readings there. A faulty thermometer was one of the first things I tested for after the first distillation, I checked it using boiling water as a standard and it did fine. Also part way through the distillation I swapped my analog thermometer for the digital thermocouple and was getting the same weird temperature readings.
*The picture is from the second distillation I'm currently running
*EDIT: In case the picture isn't showing up (as I can't see the image), the thermometer descends about two inches past the joint into the stillhead, the bottom being even with the midpoint of the arm that connects to the condenser
« Last Edit: January 04, 2017, 07:43:47 AM by Borek »
Logged
#### P
• Full Member
• Mole Snacks: +53/-15
• Offline
• Gender:
• Posts: 560
• I am what I am
##### Re: Unknown isopropanol contaminant
« Reply #4 on: January 05, 2017, 03:58:58 AM »
Getting the obvious out of the way.... the pressure is the same yea? Reduced pressure is pretty good for distillation, maybe it has vac'd down too low? Probably not, as you would know this, but just checking. I used to reduce pressure regularly when trying to purify something by distillation.
Logged
Tonight I’m going to party like it’s on sale for $19.99!
- Apu Nahasapeemapetilon
#### Babcock_Hall
• Chemist
• Sr. Member
• Mole Snacks: +221/-16
• Offline
• Posts: 3349
##### Re: Unknown isopropanol contaminant
« Reply #5 on: January 05, 2017, 09:44:55 AM »
I am by far not the best organic chemist here, but I would have put the thermometer lower, if that is possible with your apparatus. I like to have the top of the bulb of liquid about even with the bottom of the elbow of glass. On the other hand, I don't have an explanation for why some liquids would boil at the correct temperature but not others.
Logged
#### BROe
• Regular Member
• Mole Snacks: +0/-0
• Offline
• Posts: 17
##### Re: Unknown isopropanol contaminant
« Reply #6 on: January 06, 2017, 04:57:37 AM »
Somewhat of a minor breakthrough: I was using that same isopropanol to clean some of my glassware, and after a few hours of letting it air dry I came back to see a sort of oily film coating all the flasks I had rinsed with the IPA. The film had a very heavy, sweet, crude-oil type of smell and on rinsing with tap water formed a milky emulsion that was quite difficult to remove with water. I ultimately ended up washing this out with a few small rinses of methanol.
On the topic of thermometers, I just ran a distillation of methanol (also Heet brand, my backup solvent I suppose it could be called) and I was getting temperature readings that were pretty spot on; my digital thermocouple probe, which sits about half an inch higher in the still head than my analog thermometer, was getting a reading of 65.5, within acceptable levels of uncertainty for the thermocouple. Could it be that the greater volatility of methanol over IPA is negating the effects of a thermometer placed higher in the still head? Perhaps running tests with liquids of a low vapor pressure and higher boiling point would be more definitive.
Quote
Getting the obvious out of the way.... the pressure is the same yea? Reduced pressure is pretty good for distillation, maybe it has vac'd down too low? Probably not, as you would know this, but just checking. I used to reduce pressure regularly when trying to purify something by distillation.
No I wasn't pulling a vacuum, though that is something I would like to try in the future.
Logged
#### P
• Full Member
• Mole Snacks: +53/-15
• Offline
• Gender:
• Posts: 560
• I am what I am
##### Re: Unknown isopropanol contaminant
« Reply #7 on: January 11, 2017, 12:00:33 AM »
Quote
No I wasn't pulling a vacuum, though that is something I would like to try in the future.
Makes distillation much easier.... and you can distil heat-sensitive chemicals due to the greatly reduced temperatures you work at. You can get pressure/temperature curves which are easy to read off with a ruler that will direct you as to your target temps and pressures for your system.
Babcock_Hall might have a point with that thermometer placement - I didn't see it. Worth a try if possible to set it lower?
Logged
Tonight I’m going to party like it’s on sale for $19.99!
- Apu Nahasapeemapetilon
Mitch Andre Garcia's Chemical Forums 2003-Present.
http://dspace.library.daffodilvarsity.edu.bd:8080/browse?type=subject&value=Electric+Charge+Distribution | Now showing items 1-1 of 1
#### Anisotropic charged stellar models in Generalized Tolman IV spacetime
(Springer, 2015-01-12)
With the presence of electric charge and pressure anisotropy some anisotropic stellar models have been developed. An algorithm recently presented by Herrera et al. (Phys. Rev. D 77, 027502 (2008)) to generate static ...
http://mathhelpforum.com/algebra/222034-simple-question.html | # Math Help - Simple question
1. ## Simple question
I know this is kind of dumb but cos(-x) = - cos (x) ? Right?
2. ## Re: Simple question
Originally Posted by sakonpure6
I know this is kind of dumb but cos(-x) = - cos (x) ? Right?
NO! It is not correct. $\cos(x)$ is an even function, therefore $\cos(x)=\cos(-x)~.$
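To see this numerically, here is a quick check (Python, purely illustrative and not part of the original thread) of the parity of both functions:

```python
import math

# cos is even: cos(-x) == cos(x); sin is odd: sin(-x) == -sin(x)
for x in [0.3, 1.0, math.pi / 3, 2.5]:
    assert math.isclose(math.cos(-x), math.cos(x))
    assert math.isclose(math.sin(-x), -math.sin(x))

# cos(-60 degrees) is +0.5, the same as cos(60 degrees)
print(math.cos(math.radians(-60)))
```

So getting a positive number for cos(-60) is exactly what the identity predicts.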
3. ## Re: Simple question
Omg thank you!!! I knew that sin(-x) = - sin x but when I put in a negative value for cos say, cos (-60) i got a positive number. Thanks!!!!
http://mathhelpforum.com/number-theory/24853-gcd-proofs.html | 1. ## gcd proofs:
I'm kinda just starting out in this sort of stuff, and i'm rather confused. The professor assigned us to prove these. Not really sure how to start it to be honest. If I could see how these things get solved I think i'll understand it much better.
1. If a and b are 2 positive integers show that gcd(a,b) = gcd(a - b, b)
2. if a and b are 2 positive integers show that gcd(a,b) = gcd(a, a + b)
3. show that gcd(f_n, f_{n+1}) = 1 for all natural numbers n, where f_n denotes the nth Fibonacci number.
2. Originally Posted by Mr.Obson
1. If a and b are 2 positive integers show that gcd(a,b) = gcd(a - b, b)
Say a > b > 0. Let gcd(a,b) = d. Now we compute gcd(a - b, b): we claim d is a common divisor. Indeed, d|(a-b) because d|a and d|b. Conversely, if d' is any common divisor of a - b and b, then d'|(a-b) and d'|b imply d'|((a-b) + b), thus d'|a; so d' is a common divisor of a and b, and hence d' <= d.
2. if a and b are 2 positive integers show taht gcd(a,b) = gcd(a, a +b)
Same idea.
3. show that gcd(fn,fn+1) = 1 for all natural numbers n.
Use induction.
The inductive step uses f_{n+1} = f_n + f_{n-1}.
But gcd(f_n, f_{n-1}) = 1 by the inductive hypothesis, so gcd(f_{n+1}, f_n) = 1 by the argument of part 1 (Euclid's algorithm).
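All three claims are easy to sanity-check numerically; here is a small Python script (illustrative, not part of the original thread) that exercises each of them:

```python
from math import gcd

# Property 1: gcd(a, b) == gcd(a - b, b) for a > b > 0
# Property 2: gcd(a, b) == gcd(a, a + b)
for a, b in [(48, 18), (100, 35), (17, 5)]:
    assert gcd(a, b) == gcd(a - b, b)
    assert gcd(a, b) == gcd(a, a + b)

# Property 3: consecutive Fibonacci numbers are coprime
f_prev, f_curr = 1, 1
for _ in range(20):
    assert gcd(f_prev, f_curr) == 1
    f_prev, f_curr = f_curr, f_prev + f_curr

print("all checks passed")
```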
3. wow, thx it all makes a lot more sense now
https://www.maplesoft.com/support/help/Maple/view.aspx?path=NumberTheory/SumOfDivisors | NumberTheory - Maple Programming Help
NumberTheory
SumOfDivisors
sum of powers of the divisors
Calling Sequence

SumOfDivisors(n, k)
sigma[k](n)
tau(n)
Parameters
n - non-zero integer
k - (optional) non-negative integer; defaults to $1$
Description
• The SumOfDivisors command computes the sum of powers of the positive divisors of n.
• If n has divisors ${d}_{i}$ for $i$ from $1$ to $r$, then SumOfDivisors(n, k) is equal to $\sum _{i=1}^{r}\phantom{\rule[-0.0ex]{5.0px}{0.0ex}}{d}_{i}^{k}$.
• sigma ($\mathrm{\sigma }$) is an alternate calling sequence for SumOfDivisors, where sigma[k](n) is equal to SumOfDivisors(n, k) and k defaults to $1$ if the index is omitted.
• tau ($\mathrm{\tau }$) counts the number of divisors of n, i.e. tau(n) is equal to SumOfDivisors(n, 0).
• If $\prod _{i=1}^{m}{p}_{i}^{{a}_{i}}$ is the prime factorization of n, then SumOfDivisors is given by the formula $\prod _{i=1}^{m}\frac{{p}_{i}^{\left({a}_{i}+1\right)k}-1}{{p}_{i}^{k}-1}$ if k is non-zero and by the formula $\prod _{i=1}^{m}\left({a}_{i}+1\right)$ if k is zero.
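The two product formulas translate directly into code. The sketch below (Python rather than Maple, with factorization by simple trial division) reproduces the behavior of SumOfDivisors and tau for positive integers:

```python
def sum_of_divisors(n, k=1):
    """Sum of k-th powers of the positive divisors of n (n > 0),
    computed from the prime factorization found by trial division."""
    result = 1
    p = 2
    while p * p <= n:
        if n % p == 0:
            a = 0
            while n % p == 0:   # divide out the full power p^a
                n //= p
                a += 1
            if k == 0:
                result *= a + 1                              # tau contribution
            else:
                result *= (p ** ((a + 1) * k) - 1) // (p ** k - 1)
        p += 1
    if n > 1:  # leftover prime factor with exponent 1
        result *= 2 if k == 0 else (n ** (2 * k) - 1) // (n ** k - 1)
    return result

print(sum_of_divisors(12))      # 28
print(sum_of_divisors(12, 0))   # 6  (tau)
print(sum_of_divisors(52, 2))   # 3570
```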
Examples
> with(NumberTheory):
> Divisors(12)
{1, 2, 3, 4, 6, 12}   (1)
> SumOfDivisors(12)
28   (2)
> tau(12)
6   (3)
> Divisors(52)
{1, 2, 4, 13, 26, 52}   (4)
> sigma[2](52)
3570   (5)
> SumOfDivisors(52, 2)
3570   (6)
Compatibility
• The NumberTheory[SumOfDivisors] command was introduced in Maple 2016.
https://global-sci.org/intro/article_detail/cicp/12481.html | Volume 24, Issue 5
A Posteriori Error Estimates of Discontinuous Streamline Diffusion Methods for Transport Equations
Commun. Comput. Phys., 24 (2018), pp. 1355-1374.
Published online: 2018-06
• Abstract
Residual-based a posteriori error estimates for discontinuous streamline diffusion methods for transport equations are studied in this paper. Computable upper bounds of the errors are measured in a mesh-dependent energy norm and a negative norm. The estimates obtained are locally efficient, and thus suitable for adaptive mesh refinement applications. Numerical experiments are provided to illustrate underlying features of the estimators.
• Keywords
A posteriori error estimates, discontinuous streamline diffusion methods, transport equations.
MSC: 65N30
Juan Sun, Zhaojie Zhou & Huipo Liu. (2020). A Posteriori Error Estimates of Discontinuous Streamline Diffusion Methods for Transport Equations. Communications in Computational Physics. 24 (5). 1355-1374. doi:10.4208/cicp.OA-2017-0120
http://slideplayer.com/slide/1664551/ | # The Calibrated Bayes Approach to Sample Survey Inference
## Presentation on theme: "The Calibrated Bayes Approach to Sample Survey Inference"— Presentation transcript:
The Calibrated Bayes Approach to Sample Survey Inference
Roderick Little
Department of Biostatistics, University of Michigan
Associate Director for Research & Methodology, Bureau of Census
Models for complex surveys 1: introduction
Learning Objectives
- Understand basic features of alternative modes of inference for sample survey data.
- Understand the mechanics of Bayesian inference for finite population quantities under simple random sampling.
- Understand the role of the sampling mechanism in sample surveys and how it is incorporated in a Calibrated Bayesian analysis. More specifically, understand how survey design features, such as weighting, stratification, post-stratification and clustering, enter into a Bayesian analysis of sample survey data.
- Introduction to Bayesian tools for computing posterior distributions of finite population quantities.
Acknowledgement and Disclaimer
These slides are based in part on a short course on Bayesian methods in surveys presented by Dr. Trivellore Raghunathan and me at the 2010 Joint Statistical Meetings. While taking responsibility for errors, I'd like to acknowledge Dr. Raghunathan's major contributions to this material. Opinions are my own and not the official position of the U.S. Census Bureau.

Module 1: Introduction
- Distinguishing features of survey sample inference
- Alternative modes of survey inference: design-based, superpopulation models, Bayes
- Calibrated Bayes
Distinctive features of survey inference
1. Primary focus on descriptive finite population quantities, like overall or subgroup means or totals. Bayes, which naturally concerns predictive distributions, is particularly suited to inference about such quantities, since they require predicting the values of variables for non-sampled items. This finite population perspective is useful even for analytic model parameters.
Distinctive features of survey inference
2. Analysis needs to account for "complex" sampling design features such as stratification, differential probabilities of selection, and multistage sampling. Samplers reject theoretical arguments suggesting such design features can be ignored if the model is correctly specified. Models are always misspecified, and model answers are suspect even when model misspecification is not easily detected by model checks (Kish & Frankel 1974, Holt, Smith & Winter 1980, Hansen, Madow & Tepping 1983, Pfeffermann & Holmes 1985). Design features like clustering and stratification can and should be explicitly incorporated in the model to avoid sensitivity of inference to model misspecification.
Distinctive features of survey inference
3. A production environment that precludes detailed modeling. Careful modeling is often perceived as "too much work" in a production environment (e.g. Efron 1986), but some attention to model fit is needed to do any good statistics. "Off-the-shelf" Bayesian models can be developed that incorporate survey sample design features, and for a given problem the computation of the posterior distribution is prescriptive, via Bayes' theorem. This aspect would be aided by a Bayesian software package focused on survey applications.
Distinctive features of survey inference
4. Antipathy towards methods/models that involve strong subjective elements or assumptions. Government agencies need to be viewed as objective and shielded from policy biases. This is addressed by using models that make relatively weak assumptions, and noninformative priors that are dominated by the likelihood. The latter yields Bayesian inferences that are often similar to superpopulation modeling, with the usual differences of interpretation of probability statements. Bayes provides superior inference in small samples (e.g. small area estimation).
Distinctive features of survey inference
5. Concern about repeated sampling (frequentist) properties of the inference. Calibrated Bayes: models should be chosen to have good frequentist properties. This requires incorporating design features in the model (Little 2004, 2006).
Approaches to Survey Inference
- Design-based (randomization) inference
- Superpopulation modeling: specifies a model conditional on fixed parameters; frequentist inference based on repeated samples from the superpopulation and finite population (hybrid approach)
- Bayesian modeling: specifies a full probability model (prior distributions on fixed parameters); Bayesian inference based on the posterior distribution of finite population quantities. We argue that this is the most satisfying approach.
Design-Based Survey Inference
Random Sampling
Random (probability) sampling is characterized by:
- Every possible sample has a known chance of being selected.
- Every unit in the sample has a non-zero chance of being selected.
In particular, for simple random sampling with replacement: "all possible samples of size n have the same chance of being selected."
Example 1: Mean for Simple Random Sample
The sample mean ȳ = (1/n) Σ_{i in s} y_i is a random variable over repeated samples; the population mean Ȳ is a fixed quantity, not modeled.
Example 2: Horvitz-Thompson estimator
The Horvitz-Thompson estimator of the population total is Ŷ_HT = Σ_{i in s} y_i / π_i, where π_i is the inclusion probability of unit i.
Pro: unbiased under minimal assumptions.
Cons: the variance estimator is problematic for some designs (e.g. systematic sampling); it can have poor confidence coverage and be inefficient.
Role of Models in Classical Approach
Inference is not based on a model, but models are often used to motivate the choice of estimator. E.g.:
- Regression model: regression estimator
- Ratio model: ratio estimator
- Generalized regression estimation: model estimates adjusted to protect against misspecification, e.g. HT estimation applied to the residuals from the regression estimator (Cassel, Sarndal and Wretman book).
Estimates of standard error are then based on the randomization distribution. This approach is design-based, model-assisted.
Model-Based Approaches
- In our approach, models are used as the basis for the entire inference: estimator, standard error, interval estimation.
- This approach is more unified, but models need to be carefully tailored to features of the sample design such as stratification and clustering. One might call this model-based, design-assisted.
- Two variants: superpopulation modeling and Bayesian (full probability) modeling.
- The common theme is to "infer" or "predict" the non-sampled portion of the population conditional on the sample and model.
Superpopulation Modeling
Model distribution M: predict the non-sampled values.
In the modeling approach, prediction of nonsampled values is central. In the design-based approach, weighting is central: "the sample represents … units in the population".

Bayesian Modeling
The Bayesian model adds a prior distribution for the parameters. In the superpopulation modeling approach, parameters are considered fixed and estimated; in the Bayesian approach, parameters are random and integrated out of the posterior distribution, which leads to better small-sample inference.
Bayesian Point Estimates
A point estimate is often used as a single summary "best" value for the unknown Q. Some choices are the mean, mode or median of the posterior distribution of Q. For symmetrical distributions an intuitive choice is the center of symmetry; for asymmetrical distributions the choice is not clear, and depends upon the "loss" function.
Models for complex surveys: simple random sampling
Bayesian Interval Estimation
The Bayesian analog of a confidence interval is the posterior probability or credibility interval:
- Large sample: posterior mean +/- z * posterior se.
- Interval based on the lower and upper percentiles of the posterior distribution: 2.5% to 97.5% for a 95% interval.
- Optimal: fix the coverage rate 1-a in advance and determine the highest posterior density region C to include the most likely values of Q totaling 1-a posterior probability.
Bayes for population quantities Q
- Inferences about Q are conveniently obtained by first conditioning on the parameter θ and then averaging over the posterior of θ. In particular, the posterior mean is E(Q | y) = E[E(Q | θ, y) | y], and the posterior variance is Var(Q | y) = E[Var(Q | θ, y) | y] + Var[E(Q | θ, y) | y].
- The value of this technique will become clear in applications.
- Finite population corrections are automatically obtained as differences in the posterior variances of Q and of the model mean μ.
- Inferences based on the full posterior distribution are useful in small samples (e.g. they provide "t corrections").
Simulating Draws from Posterior Distribution
- For many problems, particularly with high-dimensional θ, it is often easier to draw values from the posterior distribution and base inferences on these draws.
- For example, if θ^(1), …, θ^(D) is a set of draws from the posterior distribution for a scalar parameter θ, then the average of the draws approximates the posterior mean, and sample percentiles of the draws approximate the corresponding posterior percentiles.
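As a concrete sketch (with a made-up t-shaped posterior standing in for a real one), the posterior mean, standard deviation and a 95% credibility interval are all just summaries of the draws:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for D draws from the posterior of a scalar parameter theta:
# here a t distribution centred at 5 (purely illustrative).
draws = 5 + 0.4 * rng.standard_t(df=9, size=10_000)

post_mean = draws.mean()                       # posterior mean estimate
post_sd = draws.std(ddof=1)                    # posterior sd estimate
lo, hi = np.percentile(draws, [2.5, 97.5])     # 95% credibility interval

print(f"mean={post_mean:.3f}, sd={post_sd:.3f}, 95% CI=({lo:.3f}, {hi:.3f})")
```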
Calibrated Bayes
- Any approach (including Bayes) has properties in repeated sampling.
- We can study the properties of Bayes credibility intervals in repeated sampling: do 95% credibility intervals have 95% coverage?
- A Calibrated Bayes approach yields credibility intervals with close to nominal coverage.
- Frequentist methods are useful for forming and assessing the model, but the inference remains Bayesian.
- See Little (2004) for more discussion.
Summary of approaches
Design-based:
- Avoids the need for models for survey outcomes.
- Robust approach for large probability samples.
- Less suited to small samples: inference basically assumes large samples.
- Models are needed for nonresponse, response errors, and small areas; this leads to "inferential schizophrenia".
Summary of approaches
Superpopulation/Bayes models:
- Familiar: similar to modeling approaches to statistics in general.
- Models need to reflect the survey design.
- Unified approach for large and small samples, nonresponse and response errors.
- Frequentist superpopulation modeling has the limitation that uncertainty in predicting parameters is not reflected in prediction inferences; Bayes propagates uncertainty about parameters, making it preferable for small samples, but needs specification of a prior distribution.
Module 2: Bayesian models for simple random samples
2.1 Continuous outcome: normal model
2.2 Difference of two means
2.3 Regression models
2.4 Binary outcome: beta-binomial model
2.5 Nonparametric Bayes
Models for simple random samples
- Consider Bayesian predictive inference for population quantities.
- Focus here on the population mean, but posterior distributions of more complex finite population quantities Q can be derived.
- Key is to compute the posterior distribution of Q conditional on the data and model.
- Summarize the posterior distribution using the posterior mean, variance, HPD interval, etc.
- Modern Bayesian analysis uses simulation techniques to study the posterior distribution.
- Here we consider simple random sampling; Module 3 considers complex design features.

Diffuse priors
- In much practical analysis the prior information is diffuse, and the likelihood dominates the prior information.
- Jeffreys (1961) developed "noninformative priors" based on the notion of very little prior information relative to the information provided by the data.
- Jeffreys derived the noninformative prior by requiring invariance under parameter transformation. In general, the Jeffreys prior is p(θ) ∝ |I(θ)|^{1/2}, the square root of the determinant of the Fisher information.
Examples of noninformative priors
In simple cases these noninformative priors result in numerically the same answers as standard frequentist procedures.
2.1 Normal simple random sample
Model: y_1, …, y_n | μ, σ² iid N(μ, σ²), with the noninformative prior p(μ, σ²) ∝ 1/σ².

2.1 Normal Example
Posterior distribution of (μ, σ²): σ² | y ~ (n−1)s²/χ²_{n−1} and μ | σ², y ~ N(ȳ, σ²/n), where ȳ and s² are the sample mean and variance. The above expressions imply that μ | y ~ ȳ + t_{n−1} · s/√n.
2.1 Posterior Distribution of Q
Write Q = f ȳ + (1 − f) Ȳ_exc, where f = n/N and Ȳ_exc is the mean of the N − n non-sampled values. Under the normal model, Q | y ~ ȳ + t_{n−1} · s · √((1 − f)/n), so the finite population correction (1 − f) appears automatically.

2.1 HPD Interval for Q
Note the posterior t distribution of Q is symmetric and unimodal: values in the center of the distribution are more likely than those in the tails. Thus a (1-a)100% HPD interval is ȳ ± t_{n−1, 1−a/2} · s · √((1 − f)/n). This is like the frequentist confidence interval, but it recovers the t correction.
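The composition "draw σ², then μ given σ², then the non-sampled mean" is easy to code. An illustrative Python sketch for the normal model with Jeffreys prior (the data below are simulated, and the function name is mine, not from the slides):

```python
import numpy as np

def posterior_draws_pop_mean(y, N, ndraw=10_000, seed=0):
    """Posterior draws of the finite population mean Q under the normal
    model y_i ~ N(mu, sigma^2) with prior p(mu, sigma^2) proportional to
    1/sigma^2:
      sigma^2 | y               ~ (n-1) s^2 / chisq_{n-1}
      mu | sigma^2, y           ~ N(ybar, sigma^2 / n)
      nonsampled mean | mu, s2  ~ N(mu, sigma^2 / (N - n))
    and Q = f * ybar + (1 - f) * (nonsampled mean), with f = n/N."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, float)
    n, f = len(y), len(y) / N
    ybar, s2 = y.mean(), y.var(ddof=1)
    sigma2 = (n - 1) * s2 / rng.chisquare(n - 1, ndraw)
    mu = rng.normal(ybar, np.sqrt(sigma2 / n))
    ybar_exc = rng.normal(mu, np.sqrt(sigma2 / (N - n)))  # nonsampled mean
    return f * ybar + (1 - f) * ybar_exc

rng = np.random.default_rng(1)
sample = rng.normal(50, 10, 30)
draws = posterior_draws_pop_mean(sample, N=1000)
print(np.percentile(draws, [2.5, 97.5]))   # 95% credibility interval for Q
```

The finite population correction enters through the f and (1 − f) weights; no asymptotic argument is used.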
2.1 Some other Estimands
- Suppose Q = median or some other percentile. One is better off inferring about all non-sampled values.
- As we will see later, simulating values of the non-sampled units adds enormous flexibility for drawing inferences about any finite population quantity.
- Modern Bayesian methods rely heavily on simulating values from the posterior distribution of the model parameters and the posterior-predictive distribution of the nonsampled values.
- Computationally, if the population size N is too large, then choose an arbitrary value K large relative to n, the sample size: for a national sample of size 2000 and a US population size of 306 million, we can choose K = 2000/f, for some small f = 0.01 or
2.1 Comments
- Even in this simple normal problem, Bayes is useful: t-inference is recovered for small samples by putting a prior distribution on the unknown variance.
- Inference for other quantities, like Q = median or some other percentile, is achieved very easily by simulating the nonsampled values (more on this below).
- Bayes is even more attractive for more complex problems, as discussed later.
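For a quantity like the median, the "simulate the nonsampled values" recipe is equally direct. A hypothetical sketch under the same normal model (simulated data; the function name is mine):

```python
import numpy as np

def posterior_median_draws(y, N, ndraw=2000, seed=0):
    """Posterior draws of the finite population MEDIAN: simulate all
    N - n nonsampled values under the normal model, pool them with the
    sample, and take the median of each completed population."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, float)
    n = len(y)
    ybar, s2 = y.mean(), y.var(ddof=1)
    out = np.empty(ndraw)
    for d in range(ndraw):
        sigma2 = (n - 1) * s2 / rng.chisquare(n - 1)      # draw sigma^2
        mu = rng.normal(ybar, np.sqrt(sigma2 / n))        # draw mu | sigma^2
        y_miss = rng.normal(mu, np.sqrt(sigma2), N - n)   # draw nonsampled values
        out[d] = np.median(np.concatenate([y, y_miss]))   # completed-population median
    return out

sample = np.random.default_rng(1).normal(50, 10, 30)
draws = posterior_median_draws(sample, N=500)
print(np.percentile(draws, [2.5, 50, 97.5]))
```

Replacing `np.median` with any other function of the completed population gives the posterior of that estimand, which is the flexibility the slide refers to.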
2.2 Comparison of Two Means
Separate models are specified for Population 1 and for Population 2.

2.2 Estimands
Examples (the finite-sample version of the Behrens-Fisher problem):
- Difference of the means
- Difference in the population medians
- Ratio of the means or medians
- Ratio of variances
It is possible to analytically compute the posterior distribution of some of these quantities. It is a whole lot easier to simulate values of the non-sampled units in Population 1 and in Population 2.
2.3 Ratio and Regression Estimates
- Population: (y_i, x_i; i = 1, 2, …, N)
- Sample: y_i observed for the included units i ∈ inc; x_i known for i = 1, 2, …, N. For now assume SRS.
- Objective: infer about the population mean.
- The excluded Y's are missing values.

2.3 Model Specification
A model of the form y_i | x_i, β, σ² ~ N(β x_i, σ² x_i^{2g}) gives:
- g = 1/2: classical ratio estimator. Posterior variance equals the randomization variance for large samples.
- g = 0: regression through the origin. The posterior variance is nearly the same as the randomization variance.
- g = 1: HT model. Posterior variance equals the randomization variance for large samples.
Note that no asymptotic arguments have been used in deriving the Bayesian inferences, which make small-sample corrections and use t-distributions.
2.3 Posterior Draws for Normal Linear Regression g = 0
Easily extends to weighted regression.
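A minimal sketch of such posterior draws for the g = 0 case (normal linear regression with the standard noninformative prior; the data below are simulated for illustration, and the function name is mine):

```python
import numpy as np

def regression_posterior_draws(y, X, ndraw=5000, seed=0):
    """Posterior draws of (beta, sigma^2) for y ~ N(X beta, sigma^2 I)
    under the prior p(beta, sigma^2) proportional to 1/sigma^2:
      sigma^2 | y        ~ (n - p) s^2 / chisq_{n-p}
      beta | sigma^2, y  ~ N(beta_hat, sigma^2 (X'X)^{-1})
    Weighted regression is the analogous calculation with weighted sums."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ X.T @ y
    resid = y - X @ beta_hat
    s2 = resid @ resid / (n - p)
    sigma2 = (n - p) * s2 / rng.chisquare(n - p, ndraw)
    chol = np.linalg.cholesky(XtX_inv)
    z = rng.standard_normal((ndraw, p))
    beta = beta_hat + np.sqrt(sigma2)[:, None] * (z @ chol.T)
    return beta, sigma2

rng = np.random.default_rng(2)
x = rng.uniform(1, 10, 40)
y = 3.0 * x + rng.normal(0, 1, 40)
beta, sigma2 = regression_posterior_draws(y, x[:, None])  # regression through the origin
print(np.percentile(beta[:, 0], [2.5, 97.5]))
```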
2.4 Binary outcome: consulting example
In India, any person possessing a radio, transistor or television has to pay a license fee. In a densely populated area with mostly makeshift houses practically no one was paying these fees. Target enforcement in areas where the proportion of households possessing one or more of these devices exceeds 0.3, with high probability.
2.4 Consulting example (continued)
Conduct a small-scale survey to answer the question of interest. Note that the question only makes sense under the Bayes paradigm.

2.4 Consulting example
- Model for the observable: y | n, θ ~ Binomial(n, θ), where y is the number of sampled households possessing a device.
- Prior distribution: θ ~ Beta(a, b).
- Estimand: θ, the proportion of households with one or more devices.

2.4 Beta Binomial model
The posterior distribution is θ | y ~ Beta(a + y, b + n − y).

2.4 Infinite Population
What is the maximum proportion of households in the population with devices that can be said with great certainty?
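Under a uniform Beta(1, 1) prior the posterior is Beta(1 + y, 1 + n − y), and the enforcement question reduces to a single posterior probability. The survey numbers below are hypothetical, used only to illustrate the computation by simulation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, y = 50, 22                                # hypothetical survey: 22 of 50 households
draws = rng.beta(1 + y, 1 + n - y, 100_000)  # draws from the Beta(23, 29) posterior
p_exceeds = (draws > 0.3).mean()             # posterior Pr(theta > 0.3)
print(p_exceeds)
```

If `p_exceeds` is close to 1, the area qualifies for targeted enforcement under the stated criterion.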
2.5 Bayesian Nonparametric Inference
- Population: all possible distinct values
- Model
- Prior
- Mean and variance
2.5 Bayesian Nonparametric Inference
- SRS of size n, with n_k equal to the number of d_k in the sample.
- Objective: draw inference about the population mean.
- As before, we need the posterior distribution of m and s^2.
2.5 Nonparametric Inference
- The posterior distribution of q is Dirichlet.
- Posterior mean, variance, and covariance of q.
2.5 Inference for Q
- Hence the posterior mean and variance of Q follow from these moments.
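The Dirichlet posterior for the category probabilities leads directly to simulation-based inference for the population mean. The sketch below assumes the limiting flat case of the Dirichlet prior (the Bayesian bootstrap) and an illustrative made-up sample; Dirichlet weight vectors are built from standard-library gamma draws.

```python
import random

def bayesian_bootstrap_means(sample, n_draws=2000, seed=1):
    """Posterior draws of the population mean under the nonparametric model
    with the limiting flat Dirichlet prior (the Bayesian bootstrap): each
    draw re-weights the observed values by a Dirichlet(1, ..., 1) vector."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n_draws):
        gammas = [rng.gammavariate(1.0, 1.0) for _ in sample]  # Gamma -> Dirichlet
        total = sum(gammas)
        draws.append(sum(g / total * y for g, y in zip(gammas, sample)))
    return draws

sample = [2.1, 3.4, 2.8, 4.0, 3.1, 2.5]         # made-up SRS values
post = bayesian_bootstrap_means(sample)
post_mean = sum(post) / len(post)                # close to the sample mean
```

The spread of `post` then gives a posterior interval for the population mean without any parametric model for the outcome values.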
Module 3: complex sample designs
- Considered Bayesian predictive inference for population quantities.
- Focused here on the population mean, but posterior distributions of more complex finite population quantities Q can be derived similarly.
- Key is to compute the posterior distribution of Q conditional on the data and model.
- Summarize the posterior distribution using the posterior mean, variance, HPD interval, etc.
- Modern Bayesian analysis uses simulation techniques to study the posterior distribution.
- Models need to incorporate complex design features like unequal selection, stratification, and clustering.
Modeling sample selection
- Role of sample design in model-based (Bayesian) inference.
- Key to understanding the role is to include the sample selection process as part of the model.
- Modeling the sample selection process: simple and stratified random sampling; cluster sampling and other mechanisms.
- See Chapter 7 of Bayesian Data Analysis (Gelman, Carlin, Stern and Rubin 1995).
Models for complex sample designs
Full model for Y and I
- Observed data
- Observed-data likelihood
- Posterior distribution of parameters
- Components: model for the population; model for inclusion
Ignoring the data collection process
- The likelihood ignoring the data-collection process is based on the model for Y alone.
- When the full posterior reduces to this simpler posterior, the data collection mechanism is called ignorable for Bayesian inference.
- Posterior predictive distribution of the non-sampled values.
Bayes inference for probability samples
- A sufficient condition for ignoring the selection mechanism is that selection does not depend on the values of Y.
- This holds for probability sampling with design variables Z.
- But the model needs to account appropriately for the relationship of the survey outcomes Y with the design variables Z.
- Consider how to do this for (a) unequal probability samples, and (b) clustered (multistage) samples.
Ex 1: stratified random sampling
- Population divided into J strata; Z is the set of stratum indicators.
- Stratified random sampling: a simple random sample of units is selected from the population of units in stratum j.
- This design is ignorable provided the model for outcomes conditions on the stratum variables Z.
- The same approach (conditioning on Z) works for post-stratification, with extensions to more than one margin.
Inference for a mean from a stratified sample
- Consider a model that includes stratum effects.
- For simplicity, assume the variance is known and take a flat prior.
- Standard Bayesian calculations then lead to the posterior distribution of the population mean.
Bayes for the stratified normal model
- Bayes inference for this model is equivalent to standard classical inference for the population mean from a stratified random sample.
- The posterior mean weights each case by the inverse of its inclusion probability.
- With unknown variances, Bayes for this model with a flat prior on the log variances yields useful t-like corrections for small samples.
Suppose we ignore stratum effects?
- Suppose we assume instead the previous model with no stratum effects.
- With a flat prior on the mean, the posterior mean is then the unweighted mean.
- This is a potentially very biased estimator if the selection rates vary across the strata.
- The problem is that results from this model are highly sensitive to violations of the assumption of no stratum effects, and stratum effects are likely in most realistic settings.
- Hence prudence dictates a model that allows for stratum effects, such as the model on the previous slide.
Design consistency
- Loosely speaking, an estimator is design-consistent if (irrespective of the truth of the model) it converges to the true population quantity as the sample size increases, holding design features constant.
- For stratified sampling, the posterior mean based on the stratified normal model converges to the stratified mean, and hence is design-consistent.
- For the normal model that ignores stratum effects, the posterior mean converges to the unweighted mean, and hence is not design-consistent unless the selection rates are equal across strata.
- We generally advocate Bayesian models that yield design-consistent estimates, to limit the effects of model misspecification.
Ex 2. A continuous (post)stratifier Z
- Consider PPS sampling, with Z = measure of size.
- The standard design-based estimator is the weighted Horvitz-Thompson estimate.
- When the relationship between Y and Z deviates a lot from the HT model, the HT estimate is inefficient and confidence intervals can have poor coverage.
Ex 4. One continuous (post)stratifier Z
Ex 3. Two-stage sampling
- Most practical sample designs involve selecting a cluster of units and measuring a subset of units within each selected cluster.
- A two-stage sample is very efficient and cost-effective.
- But outcomes for subjects within a cluster may be correlated (typically, positively).
- Models can easily incorporate the correlation among observations.
Two-stage samples
- Sample design. Stage 1: sample c clusters from C clusters. Stage 2: sample units from selected cluster i = 1, 2, …, c.
- Estimand of interest: the population mean Q.
- Infer about the excluded clusters and the excluded units within the selected clusters.
Models for two-stage samples
- Model for observables
- Prior distribution
Estimand of interest and inference strategy
- The population mean can be decomposed over the sampled and non-sampled clusters and units.
- Posterior mean given Y_inc.
Posterior Variance
- The posterior variance can be easily computed.
Inference with unknown s and t
- Option 1: plug in maximum likelihood estimates. These can be obtained using PROC MIXED in SAS; PROC MIXED actually gives estimates of q, s, t and E(m_i | Y_inc) (empirical Bayes).
- Option 2: fully Bayes, with an additional prior in which b and v are small positive numbers.
Extensions and Applications
- Relaxing the equal-variance assumption.
- Incorporating covariates (a generalization of ratio and regression estimates).
- Small area estimation, an application of the hierarchical model; here the quantity of interest is the mean of an individual small area.
Extensions
- Relaxing the normal assumptions.
- Incorporating design features such as stratification and weighting by modeling the sampling mechanism explicitly.
Summary
- Bayes inference for surveys must incorporate design features appropriately.
- Stratification and clustering can be incorporated in Bayes inference through design variables.
- Unlike design-based inference, Bayes inference is not asymptotic, and delivers good frequentist properties in small samples.
Module 4: Short introduction to Bayesian computation
- A Bayesian analysis uses the entire posterior distribution of the parameter of interest. Summaries of the posterior distribution are used for statistical inferences: means, medians, modes, or other measures of central tendency; standard deviation, mean absolute deviation, or other measures of spread; percentiles or intervals.
- Conceptually, all these quantities can be expressed analytically as integrals of functions of the parameter with respect to its posterior distribution.
- Computations: numerical integration routines, or simulation techniques (outlined here).
Models for Complex Surveys: Bayesian Computation
Types of Simulation
- Direct simulation (as for a normal sample, regression).
- Approximate direct simulation: discrete approximation of the posterior density; rejection sampling; sampling importance resampling.
- Iterative simulation techniques: Metropolis algorithm; Gibbs sampler.
- Software: WinBUGS.
Approximate Direct Simulation
- Approximate the posterior distribution by a normal distribution that matches the posterior mean and variance; the posterior mean and variance are computed using numerical integration techniques.
- An alternative is to use the mode and a measure of curvature at the mode; the mode and the curvature can be computed using many different methods.
- Or approximate the posterior distribution using a grid of values of the parameter: compute the posterior density at each grid point, then draw values from the grid with probability proportional to the posterior density.
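The grid variant of approximate direct simulation takes only a few lines: evaluate the log posterior on a grid, normalize, and sample grid points with probability proportional to the density. The Beta(10, 12) kernel used as the target here is an illustrative stand-in for a posterior.

```python
import math
import random

def grid_posterior_draws(log_post, grid, n_draws, seed=2):
    """Discrete grid approximation: evaluate the (unnormalized) log posterior
    on a grid, exponentiate and normalize, then sample grid points with
    probability proportional to the posterior density."""
    lps = [log_post(t) for t in grid]
    top = max(lps)
    weights = [math.exp(v - top) for v in lps]   # subtract max for stability
    rng = random.Random(seed)
    return rng.choices(grid, weights=weights, k=n_draws)

# Illustrative target: an unnormalized Beta(10, 12) log-kernel.
log_kernel = lambda t: 9.0 * math.log(t) + 11.0 * math.log(1.0 - t)
grid = [(i + 0.5) / 400 for i in range(400)]
draws = grid_posterior_draws(log_kernel, grid, 5000)
post_mean = sum(draws) / len(draws)              # close to 10/22
```

The same function works for any one-dimensional posterior whose log density can be evaluated pointwise.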
Normal Approximation
Rejection Sampling
- f: the actual density from which to draw; g: a candidate density from which it is easy to draw.
- The importance ratio f/g is bounded by a constant M.
- Sample q from g and accept q with probability p = f(q)/(M g(q)); otherwise redraw from g.
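A sketch of the rejection step, using an illustrative Beta(2, 2) target whose density 6x(1 − x) is bounded by M = 1.5 times the Uniform(0, 1) candidate density:

```python
import random

def rejection_sample(target_pdf, draw_candidate, candidate_pdf, M, n, seed=3):
    """Rejection sampling: propose from the candidate g and accept with
    probability f(x) / (M * g(x)), which requires f <= M * g everywhere."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        x = draw_candidate(rng)
        if rng.random() < target_pdf(x) / (M * candidate_pdf(x)):
            out.append(x)
    return out

beta22 = lambda x: 6.0 * x * (1.0 - x)   # Beta(2, 2) density, maximum 1.5
draws = rejection_sample(beta22, lambda r: r.random(), lambda x: 1.0,
                         M=1.5, n=4000)
sample_mean = sum(draws) / len(draws)    # close to 0.5
```

The tighter M is (i.e., the closer the candidate is to the target), the higher the acceptance rate.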
Sampling Importance Resampling
- Target density from which to draw; candidate density from which it is easy to draw; the importance ratio is target over candidate.
- Sample M values of q from g.
- Compute the M importance ratios and resample with probability proportional to the importance ratios.
Markov Chain Simulation
- In real problems it may be hard to apply direct or approximate direct simulation techniques.
- Markov chain methods perform a random walk in the parameter space that converges to a stationary distribution, which is the target posterior distribution.
- Examples: Metropolis-Hastings algorithms; Gibbs sampling.
Metropolis-Hastings algorithm
- Try to find a Markov chain whose stationary distribution is the desired posterior distribution.
- Metropolis et al. (1953) showed how, and the procedure was later generalized by Hastings (1970); this is called the Metropolis-Hastings algorithm.
- Algorithm, step 1: at iteration t, draw a candidate value from the proposal distribution.
- Step 2: compute the acceptance ratio.
- Step 3: generate a uniform random number u; accept the candidate if u is less than the ratio, otherwise keep the current value.
- This Markov chain has stationary distribution f(x).
- Any proposal p(y|x) that has the same support as f(x) will work.
- If p(y|x) = f(x), then we have independent samples.
- The closer the proposal density p(y|x) is to the actual density f(x), the faster the convergence.
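The steps above can be sketched as a random-walk Metropolis sampler (a symmetric proposal, so the Hastings correction drops out), targeting a standard normal as an illustrative stand-in for a posterior:

```python
import math
import random

def metropolis(log_target, x0, step, n_iter, seed=5):
    """Random-walk Metropolis: propose x' = x + Normal(0, step^2) and accept
    with probability min(1, f(x') / f(x)); with a symmetric proposal the
    Hastings correction equals one."""
    rng = random.Random(seed)
    x, chain = x0, []
    for _ in range(n_iter):
        proposal = x + rng.gauss(0.0, step)
        log_ratio = log_target(proposal) - log_target(x)
        if log_ratio >= 0.0 or rng.random() < math.exp(log_ratio):
            x = proposal
        chain.append(x)
    return chain

# Illustrative target: standard normal log-density (up to a constant).
chain = metropolis(lambda t: -0.5 * t * t, x0=0.0, step=1.0, n_iter=20000)
kept = chain[2000:]                       # discard burn-in
mean = sum(kept) / len(kept)
var = sum((x - mean) ** 2 for x in kept) / len(kept)
```

With a well-scaled step size, the post-burn-in mean and variance recover the target's 0 and 1, up to Monte Carlo error.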
Gibbs sampling
- Gibbs sampling is a particular case of Markov chain Monte Carlo suitable for multivariate problems.
- This is also a Markov chain whose stationary distribution is f(x).
- This is an easier algorithm if the conditional densities are easy to work with.
- If the conditionals are harder to sample from, then use Metropolis-Hastings or rejection techniques within the Gibbs sequence.
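A sketch of a Gibbs sampler for the simplest multivariate case, a standard bivariate normal with correlation rho, where both full conditionals are univariate normals (the value rho = 0.6 is an illustrative choice):

```python
import math
import random

def gibbs_bivariate_normal(rho, n_iter, seed=6):
    """Gibbs sampler for a standard bivariate normal with correlation rho:
    alternate the full conditionals x | y ~ N(rho*y, 1 - rho^2) and
    y | x ~ N(rho*x, 1 - rho^2)."""
    rng = random.Random(seed)
    cond_sd = math.sqrt(1.0 - rho * rho)
    x = y = 0.0
    xs, ys = [], []
    for _ in range(n_iter):
        x = rng.gauss(rho * y, cond_sd)
        y = rng.gauss(rho * x, cond_sd)
        xs.append(x)
        ys.append(y)
    return xs, ys

xs, ys = gibbs_bivariate_normal(rho=0.6, n_iter=20000)
mean_x = sum(xs) / len(xs)
cross = sum(a * b for a, b in zip(xs, ys)) / len(xs)   # estimates rho
```

The higher |rho| is, the more slowly the alternating conditionals explore the joint distribution, which is the usual caution about Gibbs sampling under strong posterior correlation.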
Conclusion
- Design-based inference: limited, asymptotic.
- Bayesian inference for surveys: flexible, unified, and now feasible using modern computational methods.
- Calibrated Bayes: build models that yield inferences with good frequentist properties (diffuse priors, strata and post-strata as covariates, clustering with mixed-effects models).
- Software: WinBUGS, but software targeted to surveys would help.
- The future may be Calibrated Bayes!
Models for complex surveys 1: introduction
https://opentext.uleth.ca/apex-standard/sec_interp_deriv.html | ## Section2.2Interpretations of the Derivative
Section 2.1 defined the derivative of a function and gave examples of how to compute it using its definition (i.e., using limits). The section also started with a brief motivation for this definition, that is, finding the instantaneous velocity of a falling object given its position function. Section 2.3 will give us more accessible tools for computing the derivative; tools that are easier to use than repeated use of limits.
This section falls in between the “What is the definition of the derivative?” and “How do I compute the derivative?” sections. Here we are concerned with “What does the derivative mean?”, or perhaps, when read with the right emphasis, “What is the derivative?” We offer two interconnected interpretations of the derivative, hopefully explaining why we care about it and why it is worthy of study.
### Subsection2.2.1Interpretation of the Derivative as Instantaneous Rate of Change
Section 2.1 started with an example of using the position of an object (in this case, a falling amusement park rider) to find the object's velocity. This type of example is often used when introducing the derivative because we tend to readily recognize that velocity is the instantaneous rate of change in position. In general, if $$f$$ is a function of $$x\text{,}$$ then $$\fp(x)$$ measures the instantaneous rate of change of $$f$$ with respect to $$x\text{.}$$ Put another way, the derivative answers “When $$x$$ changes, at what rate does $$f$$ change?” Thinking back to the amusement park ride, we asked “When time changed, at what rate did the height change?” and found the answer to be “By $$-64$$ feet per second.”
Now imagine driving a car and looking at the speedometer, which reads “60 mph.” Five minutes later, you wonder how far you have traveled. Certainly, lots of things could have happened in those $$5$$ minutes; you could have intentionally sped up significantly, you might have come to a complete stop, you might have slowed to 20 mph as you passed through construction. But suppose that you know, as the driver, none of these things happened. You know you maintained a fairly consistent speed over those $$5$$ minutes. What is a good approximation of the distance traveled?
One could argue the only good approximation, given the information provided, would be based on “$$\text{distance} = \text{rate}\times\text{time.}$$” In this case, we assume a constant rate of 60 mph with a time of $$5$$ minutes or $$5/60$$ of an hour. Hence we would approximate the distance traveled as $$5$$ miles.
Referring back to the falling amusement park ride, knowing that at $$t=2$$ the velocity was $$-64$$ ft/s, we could reasonably approximate that $$1$$ second later the riders' height would have dropped by about $$64$$ feet. Knowing that the riders were accelerating as they fell would inform us that this is an under-approximation. If all we knew was that $$f(2) = 86$$ and $$\fp(2) = -64\text{,}$$ we'd know that we'd have to stop the riders quickly otherwise they would hit the ground.
In both of these cases, we are using the instantaneous rate of change to predict future values of the output.
### Subsection2.2.2Units of the Derivative
It is useful to recognize the units of the derivative function. If $$y$$ is a function of $$x\text{,}$$ i.e., $$y=f(x)$$ for some function $$f\text{,}$$ and $$y$$ is measured in feet and $$x$$ in seconds, then the units of $$y' = \fp$$ are “feet per second,” commonly written as “ft/s.” In general, if $$y$$ is measured in units $$P$$ and $$x$$ is measured in units $$Q\text{,}$$ then $$y'$$ will be measured in units “$$P$$ per $$Q$$”, or “$$P/Q\text{.}$$” Here we see the fraction-like behavior of the derivative in the notation: the units of $$\frac{dy}{dx}$$ are $$\frac{\text{units of }y}{\text{units of }x}\text{.}$$
###### Example2.2.1.The meaning of the derivative: World Population.
Let $$P(t)$$ represent the world population $$t$$ minutes after 12:00 a.m., January 1, 2012. It is fairly accurate to say that $$P(0) = 7{,}028{,}734{,}178$$ (www.prb.org). It is also fairly accurate to state that $$P'(0) = 156\text{;}$$ that is, at midnight on January 1, 2012, the population of the world was growing by about 156 people per minute (note the units). Twenty days later (or $$28{,}800$$ minutes later) we could reasonably assume the population grew by about $$28{,}800\cdot156 = 4{,}492{,}800$$ people.
###### Example2.2.2.The meaning of the derivative: Manufacturing.
The term widget is an economic term for a generic unit of manufacturing output. Suppose a company produces widgets and knows that the market supports a price of $$\$10$$ per widget. Let $$P(n)$$ give the profit, in dollars, earned by manufacturing and selling $$n$$ widgets. The company likely cannot make a (positive) profit making just one widget; the start-up costs will likely exceed $$\$10\text{.}$$ Mathematically, we would write this as $$P(1) \lt 0\text{.}$$
What do $$P(1000) = 500$$ and $$P'(1000)=0.25$$ mean? Approximate $$P(1100)\text{.}$$
Solution
The equation $$P(1000)=500$$ means that selling $$1000$$ widgets returns a profit of $$\$500\text{.}$$ We interpret $$P'(1000) = 0.25$$ as meaning that when we are selling $$1000$$ widgets, the profit is increasing at a rate of $$\$0.25$$ per widget (the units are “dollars per widget.”) Since we have no other information to use, our best approximation for $$P(1100)$$ is:
\begin{align*} P(1100) \amp \approx P(1000) + P'(1000)\times100\\ \amp= \$500 + (100\text{ widgets})\cdot \$0.25/\text{widget}\\ \amp= \$525\text{.} \end{align*}
We approximate that selling $$1100$$ widgets returns a profit of $$\$525\text{.}$$
The previous examples made use of an important approximation tool that we first used in our previous “driving a car at 60 mph” example at the beginning of this section. Five minutes after looking at the speedometer, our best approximation for distance traveled assumed the rate of change was constant. In Example 2.2.1 and Example 2.2.2 we made similar approximations. We were given rate of change information which we used to approximate total change. Notationally, we would say that
\begin{equation*} f(c+h) \approx f(c) + \fp(c)\cdot h\text{.} \end{equation*}
This approximation is best when $$h$$ is “small.” “Small” is a relative term; when dealing with the world population, $$h=20\text{ days} = 28{,}800\text{ minutes}$$ is small in comparison to years. When manufacturing widgets, $$100$$ widgets is small when one plans to manufacture thousands.
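The rule $$f(c+h) \approx f(c) + \fp(c)\cdot h$$ is simple enough to check numerically using the numbers from the two examples above (the function name here is our own, for illustration):

```python
def linear_approx(fc, dfc, h):
    """First-order approximation: f(c + h) is approximately f(c) + f'(c)*h."""
    return fc + dfc * h

# World population: P(0) = 7,028,734,178 people, P'(0) = 156 people per
# minute, projected twenty days (28,800 minutes) ahead.
pop = linear_approx(7_028_734_178, 156, 28_800)   # 7,033,226,978 people

# Widgets: P(1000) = $500, P'(1000) = $0.25 per widget, 100 more widgets.
profit = linear_approx(500.0, 0.25, 100)          # $525.00
```

Both results match the hand calculations in the text: the population grows by $$28{,}800\cdot 156 = 4{,}492{,}800$$ people, and the profit grows by $$\$25$$ to $$\$525\text{.}$$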
### Subsection2.2.3The Derivative and Motion
One of the most fundamental applications of the derivative is the study of motion. Let $$s(t)$$ be a position function, where $$t$$ is time and $$s(t)$$ is distance. For instance, $$s$$ could measure the height of a projectile or the distance an object has traveled.
Let's let $$s(t)$$ measure the distance traveled, in feet, of an object after $$t$$ seconds of travel. Then $$s'(t)$$ has units “feet per second,” and $$s'(t)$$ measures the instantaneous rate of change of distance with respect to time; it measures velocity.
Now consider $$v(t)\text{,}$$ a velocity function. That is, at time $$t\text{,}$$ $$v(t)$$ gives the velocity of an object. The derivative of $$v\text{,}$$ $$v'(t)\text{,}$$ gives the instantaneous rate of change of velocity with respect to time: acceleration. (We often think of acceleration in terms of cars: a car may “go from $$0$$ to $$60$$ in $$4.8$$ seconds.” This is an average acceleration, a measurement of how quickly the velocity changed.) If velocity is measured in feet per second, and time is measured in seconds, then the units of acceleration (i.e., the units of $$v'(t)$$) are “feet per second per second,” or $$($$ft/s$$)$$/s. We often shorten this to “feet per second squared,” or $$\text{ft}/\text{s}^2\text{,}$$ but this tends to obscure the meaning of the units.
Perhaps the most well known acceleration is that of gravity. In this text, we use $$g=32\,\text{ft}/\text{s}^2$$ or $$g=9.8\,\text{m}/\text{s}^2\text{.}$$ What do these numbers mean?
A constant acceleration of $$32\,\frac{\text{ft}/\text{s}}{\text{s}}$$ means that the velocity changes by $$32\,\text{ft}/\text{s}$$ each second. For instance, let $$v(t)$$ measure the velocity of a ball thrown straight up into the air, where $$v$$ has units ft/s and $$t$$ is measured in seconds. The ball will have a positive velocity while traveling upwards and a negative velocity while falling down. The acceleration is thus $$-32\,\text{ft}/\text{s}^2\text{.}$$ If $$v(1) = 20\,\text{ft}/\text{s}\text{,}$$ then $$1$$ second later, the velocity will have decreased by $$32\,\text{ft}/\text{s}\text{;}$$ that is, $$v(2) = -12\,\text{ft/s}\text{.}$$ We can continue: $$v(3) = -44\,\text{ft/s}\text{.}$$ Working backward, we can also figure that $$v(0) = 52\,\text{ft}/\text{s}\text{.}$$
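The velocity values in the previous paragraph follow from stepping the constant acceleration backward and forward from $$v(1)=20$$ ft/s; a quick check (the function name is ours):

```python
def velocity(t, v_at_1=20.0, g=32.0):
    """Velocity (ft/s) of the thrown ball, t seconds after release, under a
    constant acceleration of -g ft/s^2, anchored at v(1) = 20 ft/s."""
    return v_at_1 - g * (t - 1)

values = [velocity(t) for t in (0, 1, 2, 3)]   # [52.0, 20.0, -12.0, -44.0]
```

These match the values worked out in the text: each second, the velocity drops by another $$32$$ ft/s.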
These ideas are so important we write them out as a Key Idea.
###### Key Idea2.2.3.The Derivative and Motion.
1. Let $$s(t)$$ be the position function of an object. Then $$s'(t)=v(t)$$ is the velocity function of the object.
2. Let $$v(t)$$ be the velocity function of an object. Then $$v'(t)=a(t)$$ is the acceleration function of the object.
### Subsection2.2.4Interpretation of the Derivative as the Slope of the Tangent Line
We now consider the second interpretation of the derivative given in this section. This interpretation is not independent from the first by any means; many of the same concepts will be stressed, just from a slightly different perspective.
Given a function $$y=f(x)\text{,}$$ the difference quotient $$\frac{f(c+h)-f(c)}{h}$$ gives a change in $$y$$ values divided by a change in $$x$$ values; i.e., it is a measure of the “rise over run,” or “slope,” of the secant line that goes through two points on the graph of $$f\text{:}$$ $$(c, f(c))$$ and $$(c+h,f(c+h))\text{.}$$ As $$h$$ shrinks to $$0\text{,}$$ these two points come close together; in the limit we find $$\fp(c)\text{,}$$ the slope of a special line called the tangent line that intersects $$f$$ only once near $$x=c\text{.}$$
Lines have a constant rate of change, their slope. Nonlinear functions do not have a constant rate of change, but we can measure their instantaneous rate of change at a given $$x$$ value $$c$$ by computing $$\fp(c)\text{.}$$ We can get an idea of how $$f$$ is behaving by looking at the slopes of its tangent lines. We explore this idea in the following example.
###### Example2.2.4.Understanding the derivative: the rate of change.
Consider $$f(x) = x^2$$ as shown in Figure 2.2.5. It is clear that at $$x=3$$ the function is growing faster than at $$x=1\text{,}$$ as it is steeper at $$x=3\text{.}$$ How much faster is it growing at $$3$$ compared to $$1\text{?}$$
Solution
We can answer this exactly (and quickly) after Section 2.3, where we learn to quickly compute derivatives. For now, we will answer graphically, by considering the slopes of the respective tangent lines.
With practice, one can fairly effectively sketch tangent lines to a curve at a particular point. In Figure 2.2.6, we have sketched the tangent lines to $$f$$ at $$x=1$$ and $$x=3\text{,}$$ along with a grid to help us measure the slopes of these lines. At $$x=1\text{,}$$ the slope is $$2\text{;}$$ at $$x=3\text{,}$$ the slope is $$6\text{.}$$ Thus we can say not only is $$f$$ growing faster at $$x=3$$ than at $$x=1\text{,}$$ it is growing three times as fast.
###### Example2.2.7.Understanding the graph of the derivative.
Consider the graph of $$f(x)$$ and its derivative, $$\fp(x)\text{,}$$ in Figure 2.2.8. Use these graphs to find the slopes of the tangent lines to the graph of $$f$$ at $$x=1\text{,}$$ $$x=2\text{,}$$ and $$x=3\text{.}$$
Solution
To find the appropriate slopes of tangent lines to the graph of $$f\text{,}$$ we need to look at the corresponding values of $$\fp\text{.}$$
• The slope of the tangent line to $$f$$ at $$x=1$$ is $$\fp(1)\text{;}$$ this looks to be about $$-1\text{.}$$
• The slope of the tangent line to $$f$$ at $$x=2$$ is $$\fp(2)\text{;}$$ this looks to be about $$4\text{.}$$
• The slope of the tangent line to $$f$$ at $$x=3$$ is $$\fp(3)\text{;}$$ this looks to be about $$3\text{.}$$
Using these slopes, tangent line segments to $$f$$ are sketched in Figure 2.2.9. Included on the graph of $$\fp$$ in this figure are points where $$x=1\text{,}$$ $$x=2$$ and $$x=3$$ to help better visualize the $$y$$ value of $$\fp$$ at those points.
###### Example2.2.10.Approximation with the derivative.
Consider again the graph of $$f(x)$$ and its derivative $$\fp(x)$$ in Example 2.2.7. Use the tangent line to $$f$$ at $$x=3$$ to approximate the value of $$f(3.1)\text{.}$$
Solution
Figure 2.2.11 shows the graph of $$f$$ along with its tangent line, zoomed in at $$x=3\text{.}$$ Notice that near $$x=3\text{,}$$ the tangent line makes an excellent approximation of $$f\text{.}$$ Since lines are easy to deal with, often it works well to approximate a function with its tangent line. (This is especially true when you don't actually know much about the function at hand, as we don't in this example.)
While the tangent line to $$f$$ was drawn in Example 2.2.7, it was not explicitly computed. Recall that the tangent line to $$f$$ at $$x=c$$ is $$y = \fp(c)(x-c)+f(c)\text{.}$$ While $$f$$ is not explicitly given, by the graph it looks like $$f(3) = 4\text{.}$$ Recalling that $$\fp(3) = 3\text{,}$$ we can compute the tangent line to be approximately $$y = 3(x-3)+4\text{.}$$ It is often useful to leave the tangent line in point-slope form.
To use the tangent line to approximate $$f(3.1)\text{,}$$ we simply evaluate $$y$$ at $$3.1$$ instead of $$f\text{.}$$
\begin{align*} f(3.1) \amp \approx y(3.1)\\ \amp= 3(3.1-3)+4\\ \amp= 0.1\cdot3+4\\ \amp = 4.3\text{.} \end{align*}
We approximate $$f(3.1) \approx 4.3\text{.}$$
To demonstrate the accuracy of the tangent line approximation, we now state that in Example 2.2.10, $$f(x) = -x^3+7x^2-12x+4\text{.}$$ We can evaluate $$f(3.1) = 4.279\text{.}$$ Had we known $$f$$ all along, certainly we could have just made this computation. In reality, we often only know two things:
1. what $$f(c)$$ is, for some value of $$c\text{,}$$ and
2. what $$\fp(c)$$ is.
For instance, we can easily observe the location of an object and its instantaneous velocity at a particular point in time. We do not have a “function $$f$$” for the location, just an observation. This is enough to create an approximating function for $$f\text{.}$$
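With the cubic now revealed, the tangent-line approximation from Example 2.2.10 can be checked directly:

```python
def f(x):
    # The cubic revealed in the text
    return -x**3 + 7 * x**2 - 12 * x + 4

def tangent_at_3(x):
    # Tangent line y = f'(3)(x - 3) + f(3), with f(3) = 4 and f'(3) = 3
    return 3 * (x - 3) + 4

exact = f(3.1)               # 4.279, up to floating-point rounding
approx = tangent_at_3(3.1)   # 4.3, up to floating-point rounding
error = abs(exact - approx)  # about 0.021
```

The tangent-line value differs from the true value by only about $$0.021\text{,}$$ illustrating how good the approximation is for small $$h\text{.}$$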
This last example has a direct connection to our approximation method explained above after Example 2.2.2. We stated there that
\begin{equation*} f(c+h) \approx f(c)+\fp(c)\cdot h\text{.} \end{equation*}
If we know $$f(c)$$ and $$\fp(c)$$ for some value $$x=c\text{,}$$ then computing the tangent line at $$(c,f(c))$$ is easy: $$y(x) = \fp(c)(x-c)+f(c)\text{.}$$ In Example 2.2.10, we used the tangent line to approximate a value of $$f\text{.}$$ Let's use the tangent line at $$x=c$$ to approximate a value of $$f$$ near $$x=c\text{;}$$ i.e., compute $$y(c+h)$$ to approximate $$f(c+h)\text{,}$$ assuming again that $$h$$ is “small.” Note:
\begin{align*} y(c+h) \amp = \fp(c)\left((c+h)-c\right)+f(c)\\ \amp = \fp(c)\cdot h + f(c)\text{.} \end{align*}
This is the exact same approximation method used above! Not only does it make intuitive sense, as explained above, it makes analytical sense, as this approximation method is simply using a tangent line to approximate a function's value.
The importance of understanding the derivative cannot be understated. When $$f$$ is a function of $$x\text{,}$$ $$\fp(x)$$ measures the instantaneous rate of change of $$f$$ with respect to $$x$$ and gives the slope of the tangent line to $$f$$ at $$x\text{.}$$
### Exercises2.2.5Exercises
###### 1.
What is the instantaneous rate of change of position called?
###### 2.
Given a function $$y=f(x)\text{,}$$ in your own words describe how to find the units of $$\fp(x)\text{.}$$
###### 3.
What functions have a constant rate of change?
###### 4.
Given $$f(2)=12$$ and $$\fp(2) = -1\text{,}$$ approximate $$f(3)\text{.}$$
###### 5.
Given $$P(70)=68$$ and $$P'(70) = 6\text{,}$$ approximate $$P(75)\text{.}$$
###### 6.
Given $$z(40)=150$$ and $$z'(40) = -11\text{,}$$ approximate $$z(25)\text{.}$$
###### 7.
Knowing $$f(10)=25$$ and $$\fp(10) = 5$$ and the methods described in this section, which approximation is likely to be most accurate?
• f(10.1)
• f(11)
• f(20)
###### 8.
Given $$f(6)=82$$ and $$f(7) = 73\text{,}$$ approximate $$\fp(6)\text{.}$$
###### 9.
Given $$H(2)=51$$ and $$H(8) = 99\text{,}$$ approximate $$H'(2)\text{.}$$
###### 10.
Let $$V(x)$$ measure the volume, in decibels, measured inside a restaurant with $$x$$ customers. What are the units of $$V'(x)\text{?}$$
###### 11.
Let $$v(t)$$ measure the velocity, in ft/s, of a car moving in a straight line $$t$$ seconds after starting. What are the units of $$v'(t)\text{?}$$
###### 12.
The height $$H\text{,}$$ in feet, of a river is recorded $$t$$ hours after midnight, April 1. What are the units of $$H'(t)\text{?}$$
###### 13.
$$P$$ is the profit, in thousands of dollars, of producing and selling $$c$$ cars.
1. What are the units of $$P'(c)\text{?}$$
2. What is likely true of $$P(0)\text{?}$$
###### 14.
$$T$$ is the temperature in degrees Fahrenheit, $$h$$ hours after midnight on July 4 in Sidney, NE.
1. What are the units of $$T'(h)\text{?}$$
2. Is $$T'(8)$$ likely greater than or less than 0? Why?
3. Is $$T(8)$$ likely greater than or less than 0? Why?
Graphs of functions $$f$$ and $$g$$ are given. Identify which function is the derivative of the other.
###### 15.
• $$f$$ is the derivative of $$g\text{.}$$
• $$g$$ is the derivative of $$f\text{.}$$
###### 16.
• $$f$$ is the derivative of $$g\text{.}$$
• $$g$$ is the derivative of $$f\text{.}$$
###### 17.
• $$f$$ is the derivative of $$g\text{.}$$
• $$g$$ is the derivative of $$f\text{.}$$
###### 18.
• $$f$$ is the derivative of $$g\text{.}$$
• $$g$$ is the derivative of $$f\text{.}$$
###### Review
Use the definition of the derivative to compute the derivative of $$f\text{.}$$
###### 19.
$$f(x)=5x^2$$
###### 20.
$$f(x)=(x-2)^3$$
Numerically approximate the derivative.
###### 21.
$$f'(\pi)$$ where $$f(x) = \cos(x)$$
###### 22.
$$f'(9)$$ where $$f(x) = \sqrt{x}$$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8992558717727661, "perplexity": 360.45979314033167}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152000.25/warc/CC-MAIN-20210726031942-20210726061942-00307.warc.gz"} |
http://blog.geomblog.org/2004/09/voting-and-geometry.html

## Wednesday, September 08, 2004
### Voting and Geometry
It would be remiss of me (this is the Geomblog, after all) not to remark on the connection between voting and geometry pointed out by a commenter on my earlier post.
Donald Saari has developed an interesting theory of rankings based on mapping rankings to points in simplices. An immediate application of this formulation is the observation that the Kemeny rule determines a ranking that minimizes the l1 distance to the original ranking schemes.
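As a toy illustration of that observation (my own sketch, not from the post): in one common formulation, the Kemeny ranking is the permutation minimizing the total number of pairwise disagreements (Kendall tau distance) with the submitted rankings, which can be brute-forced for small candidate sets:

```python
from itertools import permutations

def kendall_tau(r1, r2):
    """Count pairwise-order disagreements between two rankings (best first)."""
    pos1 = {c: i for i, c in enumerate(r1)}
    pos2 = {c: i for i, c in enumerate(r2)}
    items = list(pos1)
    return sum(
        1
        for i in range(len(items))
        for j in range(i + 1, len(items))
        if (pos1[items[i]] - pos1[items[j]]) * (pos2[items[i]] - pos2[items[j]]) < 0
    )

def kemeny(votes):
    """Brute-force Kemeny ranking: minimize total distance to all votes."""
    return min(permutations(votes[0]),
               key=lambda r: sum(kendall_tau(list(r), v) for v in votes))

votes = [["a", "b", "c"], ["a", "b", "c"], ["b", "c", "a"]]
print(kemeny(votes))  # ('a', 'b', 'c')
```

The factorial search over permutations is only viable for a handful of candidates; computing a Kemeny ranking in general is NP-hard.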
http://adam.chlipala.net/cpdt/repo/rev/cadeb49dc1ef

Fix typo
author: Adam Chlipala
date: Mon, 01 Dec 2008 08:32:20 -0500
changesets: a35eaec19781, df289eb8ef76
files: src/Equality.v (1 files changed, 1 insertions(+), 1 deletions(-) [+])
--- a/src/Equality.v Fri Nov 28 14:21:38 2008 -0500
+++ b/src/Equality.v Mon Dec 01 08:32:20 2008 -0500
@@ -25,7 +25,7 @@
(** We have seen many examples so far where proof goals follow "by computation." That is, we apply computational reduction rules to reduce the goal to a normal form, at which point it follows trivially. Exactly when this works and when it does not depends on the details of Coq's %\textit{%#<i>#definitional equality#</i>#%}%. This is an untyped binary relation appearing in the formal metatheory of CIC. CIC contains a typing rule allowing the conclusion $E : T$ from the premise $E : T'$ and a proof that $T$ and $T'$ are definitionally equal.
- The [cbv] tactic will help us illustrate the rules of Coq's definitional equality. We redefine the natural number predecessor function in a somewhat convoluted way and construct a manual proof that it returns [1] when applied to [0]. *)
+ The [cbv] tactic will help us illustrate the rules of Coq's definitional equality. We redefine the natural number predecessor function in a somewhat convoluted way and construct a manual proof that it returns [0] when applied to [1]. *)
Definition pred' (x : nat) :=
match x with
http://mathhelpforum.com/trigonometry/218199-rhind-papyrus-problem-41-pi.html

# Math Help - Rhind papyrus problem 41 (pi)
1. ## Rhind papyrus problem 41 (pi)
Okay, I've got a little problem. I have to do research for school about the history of pi. Not that hard, you'd think... But then I found that the Egyptians found 3.1605 for pi in 1650 BC. So now I have to find out how he found it.
He used the formula: Volume of a cylinder = ((1-1/9)d)²h. If you calculate it you get (256/81)r²h, so he found 256/81 for pi. But I don't get the start: where did that ((1-1/9)d)²h come from? I know it replaces pi r² in our modern formula, but that's about it...
I thought it came from the relation between a square and a circle, but after looking into that for the last 2 days, I still haven't figured it out.
If someone could help me with this I'd be very grateful.
with regards, danielrowling.
P.S. I hope I posted it in the correct section.
2. ## Re: Rhind papyrus problem 41 (pi)
I am not a specialist in the history of mathematics, but according to the Wikipedia article about Rhind papyrus (sections Volumes and Areas), the idea is as follows.
We take a circle (blue) and its circumscribing square, divide each side of the square in three equal parts and remove the corners. The resulting octagon (red) approximates the circle. If the side of the square is 1, then the area of the octagon is $1 - 4\left(\frac{1}{2}\cdot\frac13\cdot\frac13\right) = 1-\frac29$. So, for a circle of some diameter d, the corresponding octagon's area is $\left(1-\frac29\right)d^2$. However, for some reason the Egyptian mathematicians wanted to express this as $(a d)^2$ for some number $a$. I am guessing that since $\sqrt{1-\frac29}$ is irrational, they decided to approximate it as $1-\frac19$. Indeed, $\left(1-\frac19\right)^2=\left(\frac89\right)^2 =\frac{64}{81}\approx\frac{63}{81} =\frac79=1-\frac29$. That's how they got the formula $((1-1/9)d)^2$ for the area of the circle.
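The arithmetic above is easy to verify exactly (a quick sketch of my own, using exact fractions):

```python
from fractions import Fraction

# Octagon area for a unit square: 1 - 4 * (1/2 * 1/3 * 1/3) = 7/9
octagon = Fraction(1) - 4 * Fraction(1, 2) * Fraction(1, 3) * Fraction(1, 3)

# Egyptian step: replace sqrt(7/9) by 1 - 1/9 = 8/9, so circle area ~ ((8/9) d)^2
approx = Fraction(8, 9) ** 2         # 64/81, close to the exact octagon ratio 63/81

# Since a circle's area is (pi/4) d^2, the implied value of pi is 4 * (8/9)^2
pi_implied = 4 * approx
print(pi_implied, float(pi_implied))  # 256/81 ~ 3.1605
```

This reproduces both numbers in the thread: the octagon ratio 7/9 = 63/81 and the implied value 256/81 ≈ 3.1605.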
Note that the octagon's approximation of the circle is not too bad because it both adds to and subtracts some areas from the circle. However, according to Wikipedia, "That this octagonal figure, whose area is easily calculated, so accurately approximates the area of the circle is just plain good luck".
If you need a more reliable source than Wikipedia, you should probably look for a book about the history of $\pi$ or ancient Egyptian mathematics.
http://tex.stackexchange.com/questions/124334/replace-lstlisting-with-verbatim-depending-on-a-boolean

# Replace lstlisting with verbatim depending on a boolean
I use latex2rtf to convert latex documents to .rtf and thence to MS-word (see this thread). To do this, I set up a custom class and limit which packages are used as latex2rtf only accepts a limited subset of commands (see manual pages). latex2rtf provides a booelan, \iflatextortf, that is set to true if you use latex2rtf as your compiler.
I now need to display code listings and maintain the latex2rtf functionality. That means, I want to define something in the preamble to replace calls to \lstlisting{} with something like \begin{verbatim} ... \end{verbatim} when \iflatextortf is true.
Question: Is it possible to switch between the lstlisting and verbatim environment depending on the value of \iflatextortf, keeping the content of the environment?
The solution should not require any packages to be installed and should work with the limited subset of commands in latex2rtf (see section 8.6.1). This should let me compile using latex2rtf. Solutions using the verbatim package won't work, unfortunately.
My ideal solution looks like this:
\newif\iflatextortf
\iflatextortf
\documentclass[12pt,letterpaper]{report}
% whatever my replacement for lstlistings is %
\else
\documentclass[10pt,letterpaper]{report}
\usepackage{listings}
% something fancy with listings
\lstnewenvironment{codeenv}[1][]{\lstset{basicstyle=\small\ttfamily}#1}{}
\fi
And I would like something a little more elegant than using find & replace on the raw .tex!
-
Best I can tell this is impossible without the verbatim or fancyvrb package, as my ideal solution requires me to use verbatim within another environment. – Andy Clifton Jul 18 '13 at 1:54
My suggestion is to allow loading both listings and verbatim and use a common (new) environment to hold all your code. Under listings, you can define this environment using
\lstnewenvironment{codeenv}[1][]{}{}%
Using verbatim you can define this environment using
\newenvironment{codeenv}[1][]{\verbatim}{\endverbatim}%
Now you're able to condition on whether listings should be loaded using a traditional \@ifundefined{lstlisting}{<undefined>}{<defined>}. Here's a use case:
\documentclass{article}
\usepackage{listings}% http://ctan.org/pkg/listings
\usepackage{verbatim}% http://ctan.org/pkg/verbatim
\makeatletter
\@ifundefined{lstlisting}{% listings is not loaded: fall back to plain verbatim
\newenvironment{codeenv}[1][]{\verbatim}{\endverbatim}%
}{% listings is loaded
\lstnewenvironment{codeenv}[1][]{\lstset{basicstyle=\ttfamily,#1}}{}%
}
\makeatother
\begin{document}
\begin{codeenv}[basicstyle=\ttfamily]
This is some verbatim text.
\end{codeenv}
\end{document}
Now all you have to do is wrap the
\usepackage{listings}% http://ctan.org/pkg/listings
in your "boolean switch".
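Concretely, that wrap might look like this (a sketch built on the asker's own \iflatextortf flag; I have not run it through latex2rtf itself):

```latex
\newif\iflatextortf % latex2rtf defines this conditional and sets it to true
\iflatextortf
  % latex2rtf pass: listings stays unloaded, so codeenv falls back to verbatim
\else
  \usepackage{listings}% http://ctan.org/pkg/listings
\fi
```

With this in the preamble, the \@ifundefined test above picks the right codeenv definition automatically on both compilation paths.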
If the verbatim package is not allowed, then some code can be taken from the LaTeX kernel the revolves around the definition of the verbatim environment:
\documentclass{article}
\usepackage{listings}% http://ctan.org/pkg/listings
\makeatletter
\begingroup \catcode `|=0 \catcode `[= 1
\catcode`]=2 \catcode `\{=12 \catcode `\}=12
\catcode`\\=12 |gdef|@xverbatim#1\end{codeenv}[#1|end[codeenv]]
|endgroup
\@ifundefined{lstlisting}{
\newcommand{\codeenv}[1][]{\@verbatim \frenchspacing\@vobeyspaces \@xverbatim}
\def\endcodeenv{\if@newlist \leavevmode\fi\endtrivlist}
}{%
\lstnewenvironment{codeenv}[1][]{\lstset{basicstyle=\ttfamily,#1}}{}%
}
\makeatother
\begin{document}
\begin{codeenv}[basicstyle=\ttfamily]
This is some verbatim text.
\end{codeenv}
\end{document}
A final, fairly elementary approach might be to use the verbatim environment throughout your document for code examples, and redefine this environment to work as a regular listing if listings is loaded:
\documentclass{article}
\usepackage{listings}% http://ctan.org/pkg/listings
\makeatletter
\@ifundefined{lstlisting}{}{%
\let\verbatim\relax%
\lstnewenvironment{verbatim}{\lstset{basicstyle=\ttfamily}}{}%
}
\makeatother
\begin{document}
\begin{verbatim}
This is some verbatim text.
\end{verbatim}
\end{document}
Since listings does not provide \lstrenewenvironment, \letting \verbatim to \relax frees up the verbatim environment for redefinition. For ease of use, it's best to avoid making verbatim accept optional arguments.
-
This would be great except I made a mistake in my post and should have emphasized I meant the verbatim environment, not the package. The version of the listing that I use with latex2rtf should be generated using as few packages as possible (preferably none) to make it compatible with latex2rtf. Unfortunately verbatim is not compatible. – Andy Clifton Jul 17 '13 at 4:29
@LostBrit: I've updated the code to remove the verbatim package requirement. – Werner Jul 17 '13 at 5:07
Apparently \catcode doesn't work with Latex2rtf either (see documentation). Latex2rtf includes the boolean, \iflatextortf. Ideally, I would just define codeenv as you suggest, and replace lstlisting with verbatim if \iflatextortf were true , but AFAIK, you can't use verbatim as the input to an environment. – Andy Clifton Jul 17 '13 at 21:01
@LostBrit: I've dumbed it down even further. See if the last addition works for you. – Werner Jul 18 '13 at 5:24
This works really well. Accepted, thank you! – Andy Clifton Jul 19 '13 at 1:26
https://cs.stackexchange.com/tags/correctness-proof/hot

# Tag Info
242
Let me offer one reason and one misconception as an answer to your question. The main reason that it is easier to write (seemingly) correct mathematical proofs is that they are written at a very high level. Suppose that you could write a program like this: function MaximumWindow(A, n, w): using a sliding window, calculate (in O(n)) the sums of all ...
87
A common error I think is to use greedy algorithms, which is not always the correct approach, but might work in most test cases. Example: Coin denominations, $d_1,\dots,d_k$ and a number $n$, express $n$ as a sum of $d_i$:s with as few coins as possible. A naive approach is to use the largest possible coin first, and greedily produce such a sum. For ...
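A concrete failure case for largest-coin-first greedy (the denominations {1, 3, 4} and target 6 are my example, not the answer's):

```python
def greedy_coins(n, denoms):
    """Largest-coin-first strategy from the answer; not always optimal."""
    coins = []
    for d in sorted(denoms, reverse=True):
        while n >= d:
            n -= d
            coins.append(d)
    return coins

def min_coins(n, denoms):
    """Dynamic programming: the true minimum number of coins for n."""
    best = [0] + [None] * n
    for i in range(1, n + 1):
        options = [best[i - d] for d in denoms if d <= i and best[i - d] is not None]
        best[i] = 1 + min(options) if options else None
    return best[n]

print(greedy_coins(6, [1, 3, 4]))  # [4, 1, 1] -> 3 coins
print(min_coins(6, [1, 3, 4]))     # 2 (using 3 + 3)
```

The greedy answer looks plausible and passes many ad-hoc test cases (e.g. standard coin systems), which is exactly why this error survives casual testing.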
83
(I am probably risking a few downvotes here, as I have no time/interest to make this a proper answer, but I find the text quoted (and the rest of the article cited) below to be quite insightful, also considering they are written by a well-known mathematician. Perhaps I can improve the answer later.) The idea, which I suppose isn't particularly distinct from ...
68
I immediately recalled an example from R. Backhouse (this might have been in one of his books). Apparently, he had assigned a programming assignment where the students had to write a Pascal program to test equality of two strings. One of the programs turned in by a student was the following: issame := (string1.length = string2.length); if issame then for ...
60
Allow me to start by quoting E. W. Dijkstra: "Programming is one of the most difficult branches of applied mathematics; the poorer mathematicians had better remain pure mathematicians." (from EWD498) Although what Dijkstra meant with `programming' differs quite a bit from the current usage, there is still some merit in this quote. The other ...
52
Lamport provides some ground for disagreement on prevalence of errors in proofs in How to write a proof (pages 8-9): Some twenty years ago, I decided to write a proof of the Schroeder-Bernstein theorem for an introductory mathematics class. The simplest proof I could find was in Kelley’s classic general topology text. Since Kelley was writing for a ...
50
Here is an algorithm for the identity function: Input: $n$ Check if the $n$th binary string encodes a proof of $0 > 1$ in ZFC, and if so, output $n+1$ Otherwise, output $n$ Most people suspect this algorithm computes the identity function, but we don't know, and we can't prove it in the commonly accepted framework for mathematics, ZFC.
41
One big difference is that programs typically are written to operate on inputs, whereas mathematical proofs generally start from a set of axioms and prior-known theorems. Sometimes you have to cover multiple corner cases to get a sufficiently general proof, but the cases and their resolution is explicitly enumerated and the scope of the result is implicitly ...
33
The best example I ever came across is primality testing: input: natural number p, p != 2 output: is p a prime or not? algorithm: compute 2**(p-1) mod p. If result = 1 then p is prime else p is not. This works for (almost) every number, except for a very few counter examples, and one actually needs a machine to find a counterexample in a realistic period ...
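A sketch of that counterexample hunt: 341 = 11 · 31 is the smallest composite number that fools the base-2 version of this test (a "Fermat pseudoprime"):

```python
def fermat_base2_test(p):
    """The flawed test from the answer: claims p is prime iff 2^(p-1) = 1 (mod p)."""
    return pow(2, p - 1, p) == 1

# Behaves correctly on many inputs...
print(fermat_base2_test(13), fermat_base2_test(15))  # True False
# ...but 341 = 11 * 31 is composite, yet passes:
print(fermat_base2_test(341), 341 == 11 * 31)        # True True
```

This is why Fermat-style tests are used only as fast filters, backed by stronger tests (e.g. Miller–Rabin) in practice.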
32
Ultimately, you'll need a mathematical proof of correctness. I'll get to some proof techniques for that below, but first, before diving into that, let me save you some time: before you look for a proof, try random testing. Random testing As a first step, I recommend you use random testing to test your algorithm. It's amazing how effective this is: in my ...
28
They say the problem with computers is that they do exactly what you tell them. I think this might be one of the many reasons. Notice that, with a computer program, the writer (you) is smart but the reader (CPU) is dumb. But with a mathematical proof, the writer (you) is smart and the reader (reviewer) is also smart. This means you can never afford to get ...
26
Here's one that was thrown at me by google reps at a convention I went to. It was coded in C, but it works in other languages that use references. Sorry for having to code on [cs.se], but it's the only to illustrate it. swap(int& X, int& Y){ X := X ^ Y Y := X ^ Y X := X ^ Y } This algorithm will work for any values given to x and y, ...
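The failure being fished for here is aliasing: when X and Y refer to the same memory location, the XOR trick zeroes the value instead of swapping. A Python rendition of the same pitfall (list cells standing in for C++ references; my sketch, not the original C code):

```python
def xor_swap(a, i, j):
    """The interview 'trick': swap a[i] and a[j] without a temporary."""
    a[i] = a[i] ^ a[j]
    a[j] = a[i] ^ a[j]
    a[i] = a[i] ^ a[j]

v = [3, 7]
xor_swap(v, 0, 1)
print(v)           # [7, 3]: works when the two cells are distinct

w = [5]
xor_swap(w, 0, 0)  # aliasing: both indices name the same cell
print(w)           # [0]: the stored value is destroyed (5 ^ 5 == 0)
```

The first assignment computes `x ^ x == 0` when the operands alias, and every later step just re-XORs zero with zero.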
24
One issue that I think was not addressed in Yuval's answer, is that it seems you are comparing different animals. Saying "the code is correct" is a semantic statement, you mean to say that the object described by your code satisfies certain properties, e.g. for every input $n$ it computes $n!$. This is indeed a hard task, and to answer it, one has to look ...
23
There are indeed programs like this. To prove this, let's suppose to the contrary that for every machine that doesn't halt, there is a proof it doesn't halt. These proofs are strings of finite length, so we can enumerate all proofs of length less than $s$ for some integer $s$. We can then use this to solve the halting problem as follows: Given a Turing ...
19
There is a whole class of algorithms that is inherently hard to test: pseudo-random number generators. You can not test a single output but have to investigate (many) series of outputs with means of statistics. Depending on what and how you test you may well miss non-random characteristics. One famous case where things went horribly wrong is RANDU. It ...
19
What is so different about writing faultless mathematical proofs and writing faultless computer code that makes it so that the former is so much more tractable than the latter? I believe that the primary reasons are idempotency (gives the same results for the same inputs) and immutability (doesn't change). What if a mathematical proof could give different ...
16
I am rather surprised that you raised this question since the meticulous and enlightening answers you have written to some math questions demonstrate sufficiently that you are capable of rigorous logical deduction. It seems that you became somewhat uncomfortable when you stumbled upon a new and unorthodox way to prove an algorithm is correct. Believe in ...
15
This is not a secure encryption scheme. It is similar to a Hill cipher, and vulnerable to similar attacks. For instance, it is vulnerable to known-plaintext attacks: an attacker who observes a ciphertext E and knows the corresponding message M can recover the secret key and thus decrypt all other messages that were encrypted with the same key. The ...
14
I will use the following simple sorting algorithm as an example: repeat: if there are adjacent items in the wrong order: pick one such pair and swap else break To prove the correctness I use two steps. First I show that the algorithm always terminates. Then I show that the solution where it terminates is the one I want. For the first point,...
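That termination argument can be made mechanical: the number of inversions (out-of-order pairs) is a measure that strictly decreases with every adjacent swap, so it must hit zero. A runnable sketch (picking the leftmost out-of-order pair is my arbitrary choice; the answer allows any):

```python
def inversions(a):
    """Number of out-of-order pairs: the 'measure' that must reach 0."""
    return sum(a[i] > a[j] for i in range(len(a)) for j in range(i + 1, len(a)))

def swap_sort(a):
    """The answer's algorithm: while some adjacent pair is out of order, swap one."""
    steps = 0
    while True:
        for i in range(len(a) - 1):
            if a[i] > a[i + 1]:
                before = inversions(a)
                a[i], a[i + 1] = a[i + 1], a[i]
                assert inversions(a) == before - 1  # each swap lowers the measure by 1
                steps += 1
                break
        else:  # no adjacent pair out of order: a is sorted
            return steps

data = [3, 1, 2]
print(swap_sort(data), data)  # 2 [1, 2, 3]
```

The assertion checks the key lemma: swapping one adjacent out-of-order pair removes exactly that inversion and creates no new one, which bounds the run at `inversions(a)` swaps.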
13
We are indeed assuming $P(k)$ holds for all $k < n$. This is a generalization of the "From $P(n-1)$, we prove $P(n)$" style of proof you're familiar with. The proof you describe is known as the principle of strong mathematical induction and has the form: Suppose that $P(n)$ is a predicate defined on $n\in \{1, 2, \dotsc\}$. If we can show that $...
12
I agree with what Yuval has written, but I also have a much simpler answer: in practice software engineers typically don't even try to check the correctness of their programs, they simply don't; they typically don't even write down the conditions that define when the program is correct. There are various reasons for it. One is that most software engineers ...
12
There are a lot of good answers already but there are still more reasons math and programming aren't the same. 1 Mathematical proofs tend to be much simpler than computer programs. Consider the first steps of a hypothetical proof: Let a be an integer Let b be an integer Let c = a+b So far the proof is fine. Let's turn that into the first steps of a similar ...
11
2D local maximum. input: 2-dimensional $n \times n$ array $A$; output: a local maximum -- a pair $(i,j)$ such that $A[i,j]$ has no neighboring cell in the array that contains a strictly larger value. (The neighboring cells are those among $A[i, j+1], A[i, j-1], A[i-1, j], A[i+1, j]$ that are present in the array.) So, for example, if $A$ is ...
11
No, your algorithm doesn't work. Consider if the array A is A = [1 1 1 1 1 2 2 3 3 3 3 3 3]. Then the array B will be B = [5 5 5 5 5 2 2 6 6 6 6 6 6]. The sum of B will be 65, and the length of B will be 13, so after division, we'll get the number 5. This is equal to the first element of B, so your algorithm will output "Yes". Nonetheless, not all ...
11
I like Yuval's answer, but I wanted to riff off of it for a bit. One reason you might find it easier to write Math proofs might boil down to how platonic Math ontology is. To see what I mean, consider the following: Functions in Math are pure (the entire result of calling a function is completely encapsulated in the return value, which is deterministic and ...
10
Proving that a program is "thread safe" is hard. It is possible, however, to concretely and formally define the term "data race." And it is possible to determine whether an execution trace of a specific run of a program does or does not have a data race in time proportional to the size of the trace. This type of analysis goes back at least to 1988: ...
10
There is no (one) formal definition of "optimal substructure" (or the Bellman optimality criterion) so you can not possibly hope to (formally) prove you have it. You should do the following: Set up your (candidate) dynamic programming recurrence. Prove it correct by induction. Formulate the (iterative, memoizing) algorithm following the recurrence.
10
Cryptosystems which are algebraic in nature are amenable to algebraic cryptanalysis. If you are trying to design a secure cryptosystem for actual use, there is one important maxim that you should keep in mind: Don't design your own cryptosystem! It is easy to design weak cryptosystems. Off-the-shelf cryptosystems have withstood breaking attempts by the ...
10
Most algorithms have not been proven correct in Hoare logic. The main reason is that such correctness proofs are extremely expensive as of Jan 2017, probably by several orders of magnitude in comparison with 'mere' programming. There is a lot of ongoing work to reduce this cost by automation, but it's an uphill struggle. Another reason why an algorithm ...
9
The Fisher-Yates-Knuth shuffling algorithm is a (practical) example, and one that one of the authors of this site has commented on. The algorithm generates a random permutation of a given array as: // To shuffle an array a of n elements (indices 0..n-1): for i from n − 1 downto 1 do j ← random integer with 0 ≤ j ≤ i exchange a[j] ...
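A runnable version of that pseudocode (my sketch; the `rng` parameter is added only for reproducibility):

```python
import random

def fisher_yates(a, rng=None):
    """In-place uniform shuffle: at step i, j is drawn from 0..i inclusive."""
    rng = rng or random.Random()
    for i in range(len(a) - 1, 0, -1):
        j = rng.randrange(i + 1)   # 0 <= j <= i; using j < i here is a classic bug
        a[j], a[i] = a[i], a[j]
    return a

print(sorted(fisher_yates(list(range(5)))))  # [0, 1, 2, 3, 4] -- still a permutation
```

The subtle bugs here are exactly the kind that resist eyeballing: drawing `j` from the whole range every step, or excluding `j == i`, both yield non-uniform shuffles that only statistical testing over many runs will expose.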
Only top voted, non community-wiki answers of a minimum length are eligible
http://oscarfalk.com/dc95nd/8e3611-application-of-integration-exercises

(c) the y-axis. If necessary, break the region into sub-regions to determine its entire area. Use both the shell method and the washer method.
Answer 7E.
A conical tank is 5 m deep with a top radius of 3 m. (This is similar to Example 224.)
Applications of integration. 4B-6 If the hypotenuse of an isosceles right triangle has length $$h$$, then its area is $$h^2/4$$.
35) $$x=y^2$$ and $$y=x$$ rotated around the line $$y=2$$.
45) [T] Find the surface area of the shape created when rotating the curve in the previous exercise from $$\displaystyle x=1$$ to $$\displaystyle x=2$$ around the x-axis.
How much work is performed in stretching the spring?
Rotate the line $$y=\left(\frac{1}{m}\right)x$$ around the $$y$$-axis to find the volume between $$y=a$$ and $$y=b$$.
12. Starting from $$\displaystyle \$1=¥250$$, when will $$\displaystyle \$1=¥1$$?
For exercise 48, find the exact arc length for the following problems over the given interval.
28) [T] The force of gravity on a mass $$m$$ is $$F=−(GMm)/x^2$$ newtons.
For exercises 5-6, determine the area of the region between the two curves by integrating over the $$y$$-axis. 27.
100-level Mathematics Revision Exercises: Integration Methods.
For exercises 20 - 21, find the surface area and volume when the given curves are revolved around the specified axis.
If each of the workers, on average, lifted ten 100-lb rocks $$2$$ ft/hr, how long did it take to build the pyramid? 5.
For a rocket of mass $$m=1000$$ kg, compute the work to lift the rocket from $$x=6400$$ to $$x=6500$$ km.
What is the total work done in lifting the box and sand? 13.
http://docs.itascacg.com/3dec700/3dec/docproject/source/modeling/problemsolving/rigidvsdeformable.html | # Rigid vs. Deformable
An important consideration when doing a discontinuum analysis is whether to use rigid or deformable blocks to represent the behavior of intact material. The considerations for rigid versus deformable blocks are discussed in this section. If a deformable block analysis is required, there are several different models available to simulate block deformability; these are discussed in the section Choice of Constitutive Model.
As mentioned in Theory and Background, early distinct element codes assumed that blocks were rigid. However, the importance of including block deformability has become recognized, particularly for stability analyses of underground openings and studies of seismic response of buried structures. One of the most obvious reasons to include block deformability in a distinct element analysis is the requirement to represent the “Poisson’s ratio effect” of a confined rock mass.
Poisson’s Effect
Rock mechanics problems are usually very sensitive to the Poisson’s ratio chosen for a rock mass. This is because joints and intact rock are pressure-sensitive; their failure criteria are functions of the confining stress (e.g., the Mohr-Coulomb criterion). Capturing the true Poisson behavior of a jointed rock mass is critical for meaningful numerical modeling.
The effective Poisson’s ratio of a rock mass comprises two parts: a component due to the jointing; and a component due to the elastic properties of the intact rock. Except at shallow depths or low confining stress levels, the compressibility of the intact rock makes a large contribution to the compressibility of a rock mass as a whole. Thus, the Poisson’s ratio of the intact rock has a significant effect on the Poisson’s ratio of a jointed rock mass.
Strictly speaking, a single Poisson’s ratio, $$\nu$$, is defined only for isotropic elastic materials. However, there are only a few jointing patterns that lead to isotropic elastic properties for a rock mass. Therefore, it is convenient to define a “Poisson effect” that can be used for discussion of anisotropic materials.
Note
The following discussion assumes 2D plain strain conditions with $$y$$ as the vertical direction.
The Poisson effect will be defined as the ratio of horizontal-to-vertical stress when a load is applied in the vertical direction and no strain is allowed in the horizontal direction; plane-strain conditions are assumed. As an example, the Poisson effect for an isotropic elastic material is
(1)$\frac{\sigma_{xx}}{\sigma_{yy}} = \frac{\nu}{1-\nu}$
Consider the Poisson effect produced by the vertical jointing pattern shown in the figure below. If this jointing were modeled with rigid blocks, applying a vertical stress would produce no horizontal stress at all. This is clearly unrealistic, because the horizontal stress produced by the Poisson’s ratio of the intact rock is ignored.
The joints and intact rock act in series. In other words, the stresses acting on the joints and on the rock are identical. The total strain of the jointed rock mass is the sum of the strain due to the jointing and the strain due to the compressibility of the rock. The elastic properties of the rock mass as a whole can be derived by adding the compliances of the jointing and the intact rock:
(2)$\begin{split}\begin{bmatrix}\epsilon_{xx}\\\epsilon_{yy}\end{bmatrix} = \biggl(C^{\mathrm{rock}}+ C^{\mathrm{jointing}}\biggr)\begin{bmatrix}\sigma_{xx}\\\sigma_{yy}\end{bmatrix}\end{split}$
If the intact rock were modeled as an isotropic elastic material, its compliance matrix would be
(3)$\begin{split}C^{\mathrm{rock}} = \frac{1+\nu}{E}\begin{bmatrix}1-\nu & -\nu\\-\nu & 1-\nu\end{bmatrix}\end{split}$
The compliance matrix due to the jointing is
(4)$\begin{split}C^{\mathrm{jointing}} = \begin{bmatrix}\frac{1}{Sk_n}&0\\0&\frac{1}{Sk_n}\end{bmatrix}\end{split}$
where $$S$$ is the joint spacing, and $$k_n$$ is the normal stiffness of the joints.
If $$\epsilon_{xx}$$ = 0 in Equation (2) then
(5)$\frac{\sigma_{xx}}{\sigma_{yy}}=-\frac{C^{(total)}_{12}}{C^{(total)}_{11}}$
where $$C^{(total)} = C^{(rock)} + C^{(jointing)}$$.
Thus, the Poisson effect for the rock mass as a whole is
(6)$\frac{\sigma_{xx}}{\sigma_{yy}}=\frac{\nu(1+\nu)}{E/(Sk_n)+(1+\nu)(1-\nu)}$
Equation (6) is graphed as a function of the ratio $$E/(Sk_n)$$ in the next figure. Also graphed are the results of several two-dimensional UDEC simulations run to verify the formula. The ratio $$E/(Sk_n)$$ is a measure of the stiffness of the intact rock in relation to the stiffness of the joints. For low values of $$E/(Sk_n)$$, the Poisson effect for the rock mass is dominated by the elastic properties of the intact rock. For high values of $$E/(Sk_n)$$, the Poisson effect is dominated by the jointing.
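Equation (6) is also easy to probe numerically. The sketch below (plain Python, not part of the 3DEC documentation) evaluates it at the two limits discussed above: for $$\nu = 0.2$$ it recovers the intact-rock value $$\nu/(1-\nu) = 0.25$$ as $$E/(Sk_n) \to 0$$, and tends to zero — the rigid-block result for vertical joints — when $$E/(Sk_n)$$ is large.

```python
def poisson_effect_vertical(E_over_Skn, nu=0.2):
    """Eq. (6): sigma_xx / sigma_yy for the vertically jointed rock mass."""
    return nu * (1 + nu) / (E_over_Skn + (1 + nu) * (1 - nu))

# Joints stiff relative to the rock: the intact-rock Poisson effect dominates.
print(poisson_effect_vertical(0.0))    # nu/(1-nu) = 0.25 exactly for nu = 0.2

# Compliant joints: the jointing dominates and the effect vanishes,
# just as with rigid blocks separated by vertical joints.
print(poisson_effect_vertical(100.0))  # ~0.0024
```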
Now consider the Poisson effect produced by joints dipping at various angles. The Poisson effect is a function of the orientation and elastic properties of the joints. Consider the special case shown in Figure 3. A rock mass contains two sets of equally spaced joints dipping at an angle, $$\theta$$, from the horizontal. The elastic properties of the joints consist of a normal stiffness, $$k_n$$, and a shear stiffness, $$k_s$$. The blocks of intact rock are assumed to be completely rigid.
The Poisson effect for this jointing pattern is
(7)$\frac{\sigma_{xx}}{\sigma_{yy}}=\frac{\cos^2\theta[(k_n\, / \,k_s)-1]}{\sin^2\theta+\cos^2\theta(k_n\, / \,k_s)}$
This formula is illustrated graphically for several values of $$\theta$$ in the next figure. Also shown are the results of numerical simulations using UDEC. The UDEC simulations agree closely with Equation (7).
Equation (7) demonstrates the importance of using realistic values for joint shear stiffness in numerical models. The ratio of shear stiffness to normal stiffness dramatically affects the Poisson response of a rock mass. If shear stiffness is equal to normal stiffness, the Poisson effect is zero. For more reasonable values of $$k_n/k_s$$ , from 2.0 to 10.0, the Poisson effect is quite high, up to 0.9.
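As a quick check of Equation (7) (a standalone Python sketch, not from the documentation): at $$\theta = 45^\circ$$ the expression reduces to $$(k_n/k_s - 1)/(k_n/k_s + 1)$$, which is zero when the stiffnesses are equal and about 0.82 for $$k_n/k_s = 10$$.

```python
import math

def poisson_effect_rigid(theta_deg, kn_over_ks):
    """Eq. (7): rigid blocks, two joint sets dipping at +/- theta."""
    c2 = math.cos(math.radians(theta_deg)) ** 2
    s2 = math.sin(math.radians(theta_deg)) ** 2
    return c2 * (kn_over_ks - 1) / (s2 + c2 * kn_over_ks)

print(poisson_effect_rigid(45, 1.0))    # equal stiffnesses -> 0
print(poisson_effect_rigid(45, 10.0))   # (10-1)/(10+1) ~ 0.818
```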
Next, the contribution of the elastic properties of the intact rock will be examined for the case of $$\theta = 45^\circ$$. Following the analysis for the vertical jointing case, the intact rock will be treated as an isotropic elastic material. The elastic properties of the rock mass as a whole will be derived by adding the compliances of the jointing and the intact rock.
The compliance matrix due to the two equally spaced sets of joints dipping at $$45^\circ$$ is
(8)$\begin{split}C^{(jointing)}=\frac{1}{2S\:k_nk_s}\begin{bmatrix}k_s+k_n&k_s-k_n\\k_s-k_n&k_s+k_n\end{bmatrix}\end{split}$
Thus, the Poisson effect for the rock mass as a whole is
(9)$\frac{\sigma_{xx}}{\sigma_{yy}}=\frac{\nu(1+\nu)\,/E+(k_n-k_s)\,/\,(2S\,k_nk_s)}{[(1+\nu)(1-\nu)]\,/E+(k_n+k_s)\,/\,(2S\,k_nk_s)}$
Equation (9) is graphed for several values of the ratio $$E/(Sk_n)$$ for the case of $$\nu = 0.2$$ (see the next figure). Also plotted are the results of UDEC simulations. For low values of $$E/(Sk_n)$$, the Poisson effect of a rock mass is dominated by the elastic properties of the intact rock. For high values of $$E/(Sk_n)$$, the Poisson effect is dominated by the jointing.
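The algebra behind Equation (9) can be verified numerically by summing the compliances of Equations (3) and (8) and applying Equation (5). The property values below are arbitrary illustrative numbers, not taken from the documentation:

```python
E, nu = 10e9, 0.2            # intact rock: Young's modulus (Pa), Poisson's ratio
S, kn, ks = 1.0, 5e9, 1e9    # joint spacing (m), normal and shear stiffness (Pa/m)

# Eq. (3): plane-strain compliance of the intact rock
C11_rock = (1 + nu) * (1 - nu) / E
C12_rock = -(1 + nu) * nu / E

# Eq. (8): compliance of the two 45-degree joint sets
C11_joint = (ks + kn) / (2 * S * kn * ks)
C12_joint = (ks - kn) / (2 * S * kn * ks)

C11 = C11_rock + C11_joint
C12 = C12_rock + C12_joint

ratio_eq5 = -C12 / C11                        # Eq. (5), from the summed compliances
ratio_eq9 = ((nu * (1 + nu) / E + (kn - ks) / (2 * S * kn * ks)) /
             ((1 + nu) * (1 - nu) / E + (kn + ks) / (2 * S * kn * ks)))

print(abs(ratio_eq5 - ratio_eq9) < 1e-12)     # True: the two routes agree
```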
http://math.stackexchange.com/questions/204892/how-to-solve-this-summation-polynomial | # How to solve this summation/polynomial?
Is there a simple/fast way to find $y$ for the equation:
$$120,000=8000\sum_{t=1}^{4}\frac{1}{(1+y)^t}+\frac{100,000}{(1+y)^4}$$
?
I am trying to calculate the yield to maturity of a bond, and the answer is 2.66% or 2.67% (depending on your rounding off). I know some other method (some sort of trail and run method), but its rather long in my opinion.
The question was:
A bond has an annual coupon (interest) rate of 8% and a nominal value of 100,000, with maturity in 4 years' time. If the bond sells at 120,000, what is the yield to maturity?
-
Multiply through by $(1+y)^4$. You get a quartic in $1+y$ (even in $y$ if you masochistically expand). There is a formula for the roots of a quartic, initially due to Cardano and Ferrari, with variants by a bunch of people. None of these is useful for your purposes. Use a numerical method, like Newton-Raphson. A couple of iterations are enough. – André Nicolas Sep 30 '12 at 15:24
Hint: use $1+a+a^2+\cdots+a^k = \displaystyle\frac{a^{k+1}-1}{a-1}$ with $a:=\displaystyle\frac1{1+y}$
-
Remember that $$\sum_{k=0}^{n}x^k=\frac{1-x^{n+1}}{1-x}$$ So: \begin{align*}120 =& 8\sum_{t=1}^{4}\frac{1}{(1+y)^t}+\frac{100}{(1+y)^4}=8\left(\frac{1-\frac{1}{(1+y)^5}}{1-\frac{1}{1+y}}-1\right)+\frac{100}{(1+y)^4}\\ =&8\frac{(1+y)^5-1-(1+y)^5+(1+y)^4}{(1+y)^5-(1+y)^4}+\frac{100}{(1+y)^4}\\ =&\frac{8(1+y)^4-8+100(1+y-1)}{(1+y)^4(1+y-1)}=\frac{8(1+y)^4-8+100y}{(1+y)^4y}\end{align*} Multiplying both sides by $y(1+y)^4$, dividing by $4$ and rearranging, we get: $$(30y-2)(1+y)^4-25y+2=0$$
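A quick numerical sanity check of this quartic (plain Python, not part of the original answer) recovers the quoted yield. Note that $y=0$ also satisfies the rearranged equation — a spurious root introduced when the denominator $y(1+y)^4$ was cleared — so the search interval must avoid it:

```python
def g(y):
    # (30y - 2)(1 + y)^4 - 25y + 2, the rearranged quartic above
    return (30*y - 2) * (1 + y)**4 - 25*y + 2

lo, hi = 0.02, 0.04          # brackets the economically meaningful root
for _ in range(60):          # plain bisection
    mid = (lo + hi) / 2
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid

print(f"{100 * lo:.2f}%")    # -> 2.66%
```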
-
Either let $z=1+y$ and multiply through by $z^4$, or let $z=\frac{1}{1+y}$ and do nothing. Your equation is a quartic equation in $z$.
There is a closed form formula for the roots of the quartic, obtained initially by Cardano and Ferrari in the sixteenth century. Variants were found by several mathematicians, including Descartes, Newton, and Lagrange. All the formulas are very complicated, and not suitable for your purposes. The link above is meant only to show you how complicated the Cardano-Ferrari formula is.
It is best to use a numerical method, such as the Newton-Raphson method. Special algorithms have also been developed for equations that arise from interest rate calculations. In your case, you will be able to supply a good initial estimate $z_0$ of $z$, so Newton-Raphson will converge very rapidly. A couple of iterations will suffice for an answer that is accurate enough for all practical purposes.
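As a concrete illustration (not from the original answer; plain Python, no special libraries), here is Newton–Raphson applied directly to the bond equation, starting from the 8% coupon rate as the initial estimate:

```python
def f(y):
    # price at yield y, minus the 120,000 market price
    return (8_000 * sum((1 + y)**-t for t in range(1, 5))
            + 100_000 * (1 + y)**-4 - 120_000)

def fprime(y):
    # analytic derivative of f with respect to y
    return (-8_000 * sum(t * (1 + y)**-(t + 1) for t in range(1, 5))
            - 400_000 * (1 + y)**-5)

y = 0.08                     # initial estimate: the coupon rate
for _ in range(6):           # a few iterations suffice
    y -= f(y) / fprime(y)

print(f"yield to maturity: {100 * y:.2f}%")   # -> 2.66%
```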
http://physics.stackexchange.com/questions/41124/bouncing-ball-pattern | # Bouncing Ball Pattern
If a ball is simply dropped, each time a ball bounces, it's height decreases in what appears to be an exponential rate.
Let's suppose that the ball is thrown horizontally instead of being simply dropped. How does the horizontal distance travelled change after each bounce?
Context behind question: I read a question involving a ball that travels horizontally 1m during the first bounce, 0.5m during the second bounce, 0.25 during the third bounce etc. I was wondering if this model is physically valid?
It is very similar to my question. Look at that physics.stackexchange.com/questions/28863/… – Mathlover Oct 19 '12 at 12:34
If a bounce has a maximum height $h$ then the time taken for the ball to leave the ground, reach $h$ and fall back is simply $t = 2\sqrt{2h/g}$ so if the horizontal speed is constant at $v$ the horizontal distance travelled in the bounce is $s = 2v\sqrt{2h/g}$.
In your example the distance travelled appears to halve with each bounce i.e. $s_{n+1}/s_n = 1/2$. Since $s \propto \sqrt{h}$ we get:
$$\frac{s_{n+1}}{s_n} = \sqrt{\frac{h_{n+1}}{h_n}} = \frac{1}{2}$$
so $h_{n+1}/h_n = 1/4$. This seems physically reasonable and it is an exponential decay of bounce height.
NB this all assumes the horizontal velocity doesn't change at the bounce.
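A quick numerical sketch of this model (not from the thread; the choice of a 1 m first bounce is illustrative). Heights decay by a factor of 1/4, so per-bounce distances $s = 2v\sqrt{2h/g}$ halve, and the total distance converges geometrically:

```python
import math

g, v = 9.81, 1.0
h = g / 8            # chosen so the first bounce covers exactly 1 m
distances = []
for n in range(40):
    s = 2 * v * math.sqrt(2 * h / g)   # horizontal distance of this bounce
    distances.append(s)
    h /= 4                             # h_{n+1} = h_n / 4  =>  s halves

total = sum(distances)                 # geometric series, tends to 2 m
```

With these numbers the distances are 1, 0.5, 0.25, … and the total approaches 2 m, matching the comments below.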
The geometric sum of horizontal distance after infinite bounces would be finite (e.g if each bounce is halved, the sum is 2m in total after infinite bounces). But you assume horizontal velocity doesn't change. Will the ball travel forever or just 2m? – Mew Oct 18 '12 at 14:48
Good question: that's Zeno's paradox, isn't it? Assuming the exponential relation holds exactly, the ball will start bouncing infinitely quickly as the horizontal distance approaches whatever $1 + 1/2 + 1/4 + \cdots$ is. It will therefore lose energy infinitely quickly and stop bouncing. Beyond this point the ball will roll without bouncing. – John Rennie Oct 18 '12 at 15:01
Thanks, nice explanation. – Mew Oct 18 '12 at 15:03 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9508550763130188, "perplexity": 762.3358320710768}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997884573.18/warc/CC-MAIN-20140722025804-00030-ip-10-33-131-23.ec2.internal.warc.gz"} |
https://arxiv.org/abs/1502.06720 | gr-qc
(what is this?)
# Title: Charged isotropic non-Abelian dyonic black branes
Abstract: We construct black holes with a Ricci flat horizon in Einstein--Yang-Mills theory with a negative cosmological constant, which approach asymptotically an AdS$_d$ spacetime background (with $d\geq 4$). These solutions are isotropic, $i.e.$ all space directions in a hypersurface of constant radial and time coordinates are equivalent, and possess both electric and magnetic fields. We find that the basic properties of the non-Abelian solutions are similar to those of the dyonic isotropic branes in Einstein-Maxwell theory (which, however, exist in even spacetime dimensions only). These black branes possess a nonzero magnetic field strength on the flat boundary metric, which leads to a divergent mass of these solutions, as defined in the usual way. However, a different picture is found for odd spacetime dimensions, where a non-Abelian Chern-Simons term can be incorporated in the action. This allows for black brane solutions with a magnetic field which vanishes asymptotically.
Comments: 14 pages, 4 figures
Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th)
DOI: 10.1016/j.physletb.2015.04.029
Cite as: arXiv:1502.06720 [gr-qc] (or arXiv:1502.06720v1 [gr-qc] for this version)
## Submission history
From: Brihaye Yves [view email]
[v1] Tue, 24 Feb 2015 09:15:07 GMT (33kb) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8101381659507751, "perplexity": 1337.1287639678594}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948617816.91/warc/CC-MAIN-20171218141805-20171218163805-00793.warc.gz"} |
https://farside.ph.utexas.edu/teaching/336L/Fluidhtml/node14.html | Next: Equations of Incompressible Fluid Up: Mathematical Models of Fluid Previous: Navier-Stokes Equation
# Energy Conservation
Consider a fixed volume $V$ surrounded by a surface $S$. The total energy content of the fluid contained within $V$ is

$$E = \int_V \rho\left(\mathcal{E} + \frac{1}{2}\,v^2\right)dV,\tag{1.59}$$

where the first and second terms on the right-hand side are the net internal and kinetic energies, respectively. Here, $\mathcal{E}$ is the internal (i.e., thermal) energy per unit mass of the fluid. The energy flux across $S$, and out of $V$, is [cf., Equation (1.29)]

$$\oint_S \rho\left(\mathcal{E}+\frac{1}{2}\,v^2\right)v_i\,dS_i = \int_V \frac{\partial}{\partial x_i}\!\left[\rho\left(\mathcal{E}+\frac{1}{2}\,v^2\right)v_i\right]dV,\tag{1.60}$$
where use has been made of the tensor divergence theorem. According to the first law of thermodynamics, the rate of increase of the energy contained within $V$, plus the net energy flux out of $V$, is equal to the net rate of work done on the fluid within $V$, minus the net heat flux out of $V$: that is,

$$\frac{dE}{dt} + \oint_S \rho\left(\mathcal{E}+\frac{1}{2}\,v^2\right)v_i\,dS_i = \dot{W} - \dot{Q},\tag{1.61}$$

where $\dot{W}$ is the net rate of work, and $\dot{Q}$ the net heat flux. It can be seen that $\dot{W}-\dot{Q}$ is the effective energy generation rate within $V$ [cf., Equation (1.31)].
The net rate at which volume and surface forces do work on the fluid within $V$ is

$$\dot{W} = \int_V F_i\,v_i\,dV + \oint_S \sigma_{ij}\,v_i\,dS_j = \int_V \left[F_i\,v_i + \frac{\partial(\sigma_{ij}\,v_i)}{\partial x_j}\right]dV,\tag{1.62}$$
where use has been made of the tensor divergence theorem.
Generally speaking, heat flow in fluids is driven by temperature gradients. Let the $q_i(\mathbf{r},t)$ be the Cartesian components of the heat flux density at position $\mathbf{r}$ and time $t$. It follows that the heat flux across a surface element $d\mathbf{S}$, located at point $\mathbf{r}$, is $q_i\,dS_i$. Let $T(\mathbf{r},t)$ be the temperature of the fluid at position $\mathbf{r}$ and time $t$. Thus, a general temperature gradient takes the form $\partial T/\partial x_i$. Let us assume that there is a linear relationship between the components of the local heat flux density and the local temperature gradient: that is,

$$q_i = -K_{ij}\,\frac{\partial T}{\partial x_j},\tag{1.63}$$

where the $K_{ij}$ are the components of a second-rank tensor (which can be functions of position and time). In an isotropic fluid we would expect $K_{ij}$ to be an isotropic tensor. (See Section B.5.) However, the most general second-order isotropic tensor is simply a multiple of $\delta_{ij}$. Hence, we can write

$$K_{ij} = \kappa\,\delta_{ij},\tag{1.64}$$
where $\kappa$ is termed the thermal conductivity of the fluid. It follows that the most general expression for the heat flux density in an isotropic fluid is

$$q_i = -\kappa\,\frac{\partial T}{\partial x_i},\tag{1.65}$$

or, equivalently,

$$\mathbf{q} = -\kappa\,\nabla T.\tag{1.66}$$

Moreover, it is a matter of experience that heat flows down temperature gradients: that is, $\kappa > 0$. We conclude that the net heat flux out of volume $V$ is

$$\dot{Q} = -\oint_S \kappa\,\frac{\partial T}{\partial x_i}\,dS_i = -\int_V \frac{\partial}{\partial x_i}\!\left(\kappa\,\frac{\partial T}{\partial x_i}\right)dV,\tag{1.67}$$
where use has been made of the tensor divergence theorem.
Equations (1.59)-(1.62) and (1.67) can be combined to give the following energy conservation equation:
$$\int_V\left\{\frac{\partial}{\partial t}\!\left[\rho\left(\mathcal{E}+\frac{1}{2}\,v^2\right)\right]+\frac{\partial}{\partial x_i}\!\left[\rho\left(\mathcal{E}+\frac{1}{2}\,v^2\right)v_i\right]-F_i\,v_i-\frac{\partial(\sigma_{ij}\,v_i)}{\partial x_j}-\frac{\partial}{\partial x_i}\!\left(\kappa\,\frac{\partial T}{\partial x_i}\right)\right\}dV=0.\tag{1.68}$$

However, this result is valid irrespective of the size, shape, or location of volume $V$, which is only possible if

$$\frac{\partial}{\partial t}\!\left[\rho\left(\mathcal{E}+\frac{1}{2}\,v^2\right)\right]+\frac{\partial}{\partial x_i}\!\left[\rho\left(\mathcal{E}+\frac{1}{2}\,v^2\right)v_i\right]=F_i\,v_i+\frac{\partial(\sigma_{ij}\,v_i)}{\partial x_j}+\frac{\partial}{\partial x_i}\!\left(\kappa\,\frac{\partial T}{\partial x_i}\right)\tag{1.69}$$
everywhere inside the fluid. Expanding some of the derivatives, and rearranging, we obtain
$$\rho\,\frac{D}{Dt}\!\left(\mathcal{E}+\frac{1}{2}\,v^2\right)=F_i\,v_i+\frac{\partial(\sigma_{ij}\,v_i)}{\partial x_j}+\frac{\partial}{\partial x_i}\!\left(\kappa\,\frac{\partial T}{\partial x_i}\right),\tag{1.70}$$
where use has been made of the continuity equation, (1.40). The scalar product of $\mathbf{v}$ with the fluid equation of motion, (1.53), yields

$$\rho\,\frac{D}{Dt}\!\left(\frac{1}{2}\,v^2\right)=F_i\,v_i+v_i\,\frac{\partial \sigma_{ij}}{\partial x_j}.\tag{1.71}$$
Combining the previous two equations, we get
$$\rho\,\frac{D\mathcal{E}}{Dt}=\sigma_{ij}\,\frac{\partial v_i}{\partial x_j}+\frac{\partial}{\partial x_i}\!\left(\kappa\,\frac{\partial T}{\partial x_i}\right).\tag{1.72}$$
Finally, making use of Equation (1.26), we deduce that the energy conservation equation for an isotropic Newtonian fluid takes the general form
$$\rho\,\frac{D\mathcal{E}}{Dt}=-p\,\frac{\partial v_i}{\partial x_i}+\chi+\frac{\partial}{\partial x_i}\!\left(\kappa\,\frac{\partial T}{\partial x_i}\right).\tag{1.73}$$
Here,
$$\chi=\mu\left(\frac{\partial v_i}{\partial x_j}+\frac{\partial v_j}{\partial x_i}-\frac{2}{3}\,\frac{\partial v_k}{\partial x_k}\,\delta_{ij}\right)\frac{\partial v_i}{\partial x_j}\tag{1.74}$$
is the rate of heat generation per unit volume due to viscosity. When written in vector form, Equation (1.73) becomes
$$\rho\,\frac{D\mathcal{E}}{Dt}=-p\,\nabla\cdot\mathbf{v}+\chi+\nabla\cdot(\kappa\,\nabla T).\tag{1.75}$$
According to the previous equation, the internal energy per unit mass of a co-moving fluid element evolves in time as a consequence of work done on the element by pressure as its volume changes, viscous heat generation due to flow shear, and heat conduction.
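As a quick sanity check (an editorial illustration, not part of the original text), the viscous dissipation rate per unit volume, call it $\chi$, can be evaluated symbolically for a simple shear flow $\mathbf{v} = (u(y), 0, 0)$, where it reduces to $\mu\,(du/dy)^2$:

```python
import sympy as sp

y = sp.symbols('y')
mu = sp.symbols('mu', positive=True)
u = sp.Function('u')(y)

# Simple (unidirectional) shear flow: v = (u(y), 0, 0).
v = [u, sp.Integer(0), sp.Integer(0)]
x_vars = [sp.symbols('x'), y, sp.symbols('z')]

grad = [[sp.diff(v[i], x_vars[j]) for j in range(3)] for i in range(3)]
div_v = sum(grad[i][i] for i in range(3))   # vanishes for this flow

# chi = mu*(dv_i/dx_j + dv_j/dx_i - (2/3) div(v) delta_ij) * dv_i/dx_j
chi = sum(
    mu * (grad[i][j] + grad[j][i]
          - sp.Rational(2, 3) * div_v * sp.KroneckerDelta(i, j)) * grad[i][j]
    for i in range(3) for j in range(3)
)
chi = sp.simplify(chi)   # reduces to mu * u'(y)**2
```

Since $\mu > 0$, this confirms that shear always generates heat rather than absorbing it.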
Richard Fitzpatrick 2016-03-31 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9611372351646423, "perplexity": 316.0119774619405}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362230.18/warc/CC-MAIN-20211202145130-20211202175130-00102.warc.gz"} |
http://www.emis.de/classics/Erdos/cit/79605050.htm | ## Zentralblatt MATH
Publications of (and about) Paul Erdös
Zbl.No: 796.05050
Autor: Erdös, Paul; Faudree, Ralph J.; Rousseau, C.C.; Schelp, R.H.
Title: A local density condition for triangles. (In English)
Source: Discrete Math. 127, No.1-3, 153-161 (1994).
Review: Authors' abstract: Let $G$ be a graph on $n$ vertices and let $\alpha$ and $\beta$ be real numbers, $0 < \alpha, \beta < 1$. Further, let $G$ satisfy the condition that each $\lfloor \alpha n \rfloor$-subset of its vertex set spans at least $\beta n^2$ edges. The following question is considered: for a fixed $\alpha$, what is the smallest value of $\beta$ such that $G$ contains a triangle?
Reviewer: S.Stahl (Lawrence)
Classif.: * 05C35 Extremal problems (graph theory)
Keywords: local density condition; triangle
© European Mathematical Society & FIZ Karlsruhe & Springer-Verlag | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9968833327293396, "perplexity": 4373.5143142389825}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549425339.22/warc/CC-MAIN-20170725162520-20170725182520-00542.warc.gz"} |
https://plainmath.net/16117/find-vectors-given-point-equal-less-then-frac-more-than-point-less-then-frac | # Find the vectors T, N, and B at the given point. r(t) =<t^2, \frac{2}{3}t^3 , t> and point <4,-\frac{16}{3},-2>
Find the vectors T, N, and B at the given point.
$r\left(t\right)=<{t}^{2},\frac{2}{3}{t}^{3},t>$ and point $<4,-\frac{16}{3},-2>$
Jeffrey Jordon
Given $R\left(t\right)=<{t}^{2},\frac{2}{3}{t}^{3},t>$ and point $<4,-\frac{16}{3},-2>$
The point $<4,-\frac{16}{3},-2>$ occurs at $t=-2$.
Find the derivative of the vector,
${R}^{\prime }\left(t\right)=<2t,2{t}^{2},1>$
$|{R}^{\prime }\left(t\right)|=\sqrt{\left(2t{\right)}^{2}+\left(2{t}^{2}{\right)}^{2}+{1}^{2}}$
$=\sqrt{4{t}^{2}+4{t}^{4}+1}$
$=\sqrt{\left(2{t}^{2}+1{\right)}^{2}}$
$=2{t}^{2}+1$
Tangent vectors:
$T\left(t\right)=\frac{{R}^{\prime }\left(t\right)}{|{R}^{\prime }\left(t\right)|}$
$=\frac{1}{2{t}^{2}+1}<2t,2{t}^{2},1>$
$T\left(-2\right)=\frac{1}{2\left(-2{\right)}^{2}+1}<2\left(-2\right),2\left(-2{\right)}^{2},1>$
$=<-\frac{4}{9},\frac{8}{9},\frac{1}{9}>$
${T}^{\prime }\left(t\right)=<\frac{\left(2{t}^{2}+1\right)2-2t\left(4t\right)}{\left(2{t}^{2}+1{\right)}^{2}},\frac{\left(2{t}^{2}+1\right)4t-\left(2{t}^{2}\right)\left(4t\right)}{\left(2{t}^{2}+1{\right)}^{2}},-\frac{4t}{\left(2{t}^{2}+1{\right)}^{2}}>$
$=<\frac{4{t}^{2}+2-8{t}^{2}}{\left(2{t}^{2}+1{\right)}^{2}},\frac{8{t}^{3}+4t-8{t}^{3}}{\left(2{t}^{2}+1{\right)}^{2}},-\frac{4t}{\left(2{t}^{2}+1{\right)}^{2}\right)}>$
$=<\frac{2-4{t}^{2}}{\left(2{t}^{2}+1{\right)}^{2}},\frac{4t}{\left(2{t}^{2}+1{\right)}^{2}},-\frac{4t}{\left(2{t}^{2}+1{\right)}^{2}}>$
$|{T}^{\prime }\left(t\right)|=\sqrt{\frac{\left(2-4{t}^{2}{\right)}^{2}+\left(4t{\right)}^{2}+\left(-4t{\right)}^{2}}{\left(2{t}^{2}+1{\right)}^{4}}}$
$=\frac{1}{\left(2{t}^{2}+1{\right)}^{2}}\sqrt{4-16{t}^{2}+16{t}^{4}+16{t}^{2}+16{t}^{2}}$
$=\frac{1}{\left(2{t}^{2}+1{\right)}^{2}}\sqrt{16{t}^{4}+16{t}^{2}+4}$
$=\frac{2\left(2{t}^{2}+1\right)}{\left(2{t}^{2}+1{\right)}^{2}}$
$=\frac{2}{2{t}^{2}+1}$
The normal vectors. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 68, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9384922981262207, "perplexity": 2632.240815767254}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570879.1/warc/CC-MAIN-20220808213349-20220809003349-00482.warc.gz"} |
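The posted solution breaks off here. For completeness, the remaining steps (an editorial completion, obtained by the same method as above) would be:

```latex
% Unit normal: N(t) = T'(t)/|T'(t)|
N(t) = \frac{T'(t)}{|T'(t)|}
     = \frac{2t^{2}+1}{2}
       \left\langle \frac{2-4t^{2}}{(2t^{2}+1)^{2}},\;
                    \frac{4t}{(2t^{2}+1)^{2}},\;
                    \frac{-4t}{(2t^{2}+1)^{2}} \right\rangle
     = \frac{1}{2t^{2}+1}\left\langle 1-2t^{2},\; 2t,\; -2t \right\rangle

% At t = -2:
N(-2) = \tfrac{1}{9}\left\langle -7,\,-4,\,4 \right\rangle
      = \left\langle -\tfrac{7}{9},\,-\tfrac{4}{9},\,\tfrac{4}{9} \right\rangle

% Binormal: B = T x N
B(-2) = T(-2)\times N(-2)
      = \left\langle \tfrac{4}{9},\,\tfrac{1}{9},\,\tfrac{8}{9} \right\rangle
```

As a check, both $N(-2)$ and $B(-2)$ are unit vectors, and $T(-2)\cdot N(-2)=0$.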
https://math.stackexchange.com/questions/2108844/how-to-prove-that-these-two-quotients-are-isomorphic-as-rings-circle-and-hyperb?noredirect=1 | # How to prove that these two quotients are isomorphic as rings (circle and hyperbola)?
Let $k$ be an algebraically closed field. Consider the polynomial ring $A=k[x,y]$. Consider the ideal $I=\langle x^2+y^2-1\rangle$ (which is just the vanishing set of the circle) and $J=\langle y^2-x^2-1\rangle$ (the vanishing set of the hyperbola). How to prove that $$A/I\cong A/J.$$
Consider $K=\langle y-x^2\rangle$. How to prove that $$A/K\ncong A/I.$$
• If $k=\mathbb C$, then you can do a linear change of coordinates sending $x^2+y^2$ to $y^2-x^2$. I think maybe $u=x+iy$ and $v=x-iy$ works. – hwong557 Jan 22 '17 at 15:52
• I asked math.stackexchange.com/questions/1984063/… some time ago :-) – Alphonse Jan 22 '17 at 15:54
• I have another idea (but its validity depends on proving that $I=\langle x^2+y^2-1\rangle$ is prime in $\mathbb{C}[x,y]$). Correct me if I am wrong. If $I$ is prime, then $R/I$ is integral domain. Then the ideal $L=\langle I+x, I+y\rangle$ is not principal in $R/I$. – random_guy Jan 22 '17 at 20:52
• In $A/I$, $\langle x,y\rangle = \langle 1\rangle$. – user14972 Jan 22 '17 at 21:48
• The first comment solves the first question (which is pretty simple), and the second question is settled here: math.stackexchange.com/questions/1463572/… – user26857 Jan 22 '17 at 22:15
Geometrically, over an algebraically closed field, the line, circle, hyperbola, and parabola are all isomorphic projective varieties.
Up to isomorphism, the only difference between these rings, then, is which points at infinity they are missing. Your circle and hyperbola are each missing two points ($(1 : \pm i : 0)$ and $(1 : \pm 1 : 0)$ respectively), and the parabola is missing one point: $(0:1:0)$.
With one point removed, the coordinate ring of each of these curves is isomorphic to $k[t]$. Removing a second point corresponds to inverting the appropriate linear function. By a suitable change of variable, we can insist that the result is isomorphic to $k[t, t^{-1}]$.
For the parabola, the correspondence is $(x,y) = (t, t^2)$. For the hyperbola, one such correspondence comes from $t = x+y$ (and $t^{-1} = y-x$). For the circle, $t = x+iy$ (and $t^{-1} = x-iy$) works.
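These substitutions are easy to verify symbolically (an added check, not part of the original answer):

```python
import sympy as sp

t = sp.symbols('t', nonzero=True)
I = sp.I

# Hyperbola y^2 - x^2 = 1 via t = x + y, 1/t = y - x:
x_h = (t - 1/t) / 2
y_h = (t + 1/t) / 2
assert sp.simplify(y_h**2 - x_h**2 - 1) == 0

# Circle x^2 + y^2 = 1 via t = x + i*y, 1/t = x - i*y:
x_c = (t + 1/t) / 2
y_c = (t - 1/t) / (2 * I)
assert sp.simplify(x_c**2 + y_c**2 - 1) == 0
```

Both parametrizations are defined exactly for $t \ne 0$, which is why each curve's coordinate ring becomes $k[t, t^{-1}]$.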
So, the remainder of the problem is to show that $k[t]$ and $k[t, t^{-1}]$ are not isomorphic as rings. There are probably lots of ways to do this (including some geometric argument using the count of missing points I described above); however, a simple method is to compute the unit group.
• In the similar posts I've seen so far it's asked to prove that $k[t,t^{-1}]\not\simeq k[z]$ (I wouldn't write $k[t]$) as rings, not as $k$-algebras. Moreover, all the proofs assume (by contradiction) the isomorphism is of unitary rings. It would be interesting to find out a proof without this assumption (which means to find other properties of the two rings not involving units). – user26857 Jan 22 '17 at 23:27
• To prove my point see here: math.stackexchange.com/questions/1050291/… – user26857 Jan 22 '17 at 23:30
• @user26857: A ring that is also a $k$-algebra (i.e. a ring with a specified homomorphism to it from $k$, or a ring extension of $k$ or other equivalent formulation) is certainly what's intended by the question, but I did intend to edit my first draft to switch over to just plain rings, and apparently missed an occurrence; I've corrected. – user14972 Jan 22 '17 at 23:33
• I can understand this, but the question has been posted for few times and all the OPs have looked for a non-isomorphism as rings. Most likely the question is an exercise in a textbook (I think could be Vakil notes), and I suppose this was the requirement there. – user26857 Jan 22 '17 at 23:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8909364938735962, "perplexity": 216.85780325078701}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540484815.34/warc/CC-MAIN-20191206050236-20191206074236-00252.warc.gz"} |
http://mathhelpforum.com/pre-calculus/106498-integrating-get-velocity-notation.html | # Math Help - Integrating to get velocity. (notation)
1. ## Integrating to get velocity. (notation)
I'm stuck on how to get the proper notation for this problem.
In physics, we know that Force= Mass x acceleration, or F=ma
In this case, the force is equal to $\frac{A}{c}\cos\theta$
So I'm looking at $\frac{A\cos\theta}{cm} = a$
BUT, I need to velocity as a function of time, v(t).
Basically, what would a formula for $v(t)=$ look like?
2. Originally Posted by elninio
I'm stuck on how to get the proper notation for this problem.
In physics, we know that Force= Mass x acceleration, or F=ma
In this case, the force is equal to $\frac{A}{c}\cos\theta$
So I'm looking at $\frac{A\cos\theta}{cm} = a$
BUT, I need to velocity as a function of time, v(t).
Basically, what would a formula for $v(t)=$ look like?
You could start by saying $a = \frac {dv}{dt}$ (rate of change of velocity with time) and so:
$\frac{A}{c}\cos\theta = \frac {dv}{dt}$
If the expression on the left has no time dependency, i.e. is constant w.r.t. time, you can just write:
$t\,\frac{A}{c}\cos\theta = v + v_0$
where $v_0$ is some arbitrary constant velocity that will be determined by applying some boundary condition or initial value or whatever.
3. So, If I wanted to find the Kinetic Energy, which equals 1/2mv^2, would that give me:
$KE=\frac{1}{2}m\left(\frac{At}{cm}\right)^2$?
I hope you can follow my notation. (Also, let's ignore $\cos\theta$.)
4. Oh okay then ...
For a start I HATE that $.5 m$ - I much prefer $m/2$ at this stage, because after you've integrated with respect to $t$ your complicated expression with all that $ A$ in it (haven't a clue what it means BTW) you may find the 2 cancels out with something else.
Okay, so yes you get your velocity by integrating your LHS, then square it, and times it by $m/2$.
There's obviously something in the problem you're doing that you're not telling us ... we may be able to enlighten you better if you post the whole thing you're trying to solve.
5. Ok, here's the entire question. Its a high level astrophysics problem but I just needed a hand with the calculus part:
Consider a spacecraft of mass m whose engine is a perfectly absorbing laser sail that is initially at rest in space. No gravity is being exerted on it. We aim a laser at the sail and cause it to accelerate into deep space.
Derive a formula for its kinetic energy, as a function of its mass and the total amount of energy Ei that it absorbs from the laser beam during some time interval t. Assume that the spacecrafts velocity remains NON-relativistic.
I.E. You're firing radiation and using the pressure of light to accelerate it.
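For reference, here is a sketch of where the calculus leads (an editorial addition, not one of the thread's replies; it treats the absorbed beam energy $E_i$ as delivering momentum $E_i/c$, the standard result for absorbed radiation):

```latex
% Absorbed radiation of energy E_i carries momentum p = E_i / c, so
p = \frac{E_i}{c}, \qquad
v = \frac{p}{m} = \frac{E_i}{m c}, \qquad
KE = \frac{1}{2}\,m v^{2} = \frac{E_i^{\,2}}{2\,m c^{2}} .
```

This is consistent with integrating a constant force $F = A/c$ over the interval $t$, provided $E_i = A t$; identifying $A$ with the absorbed beam power is an assumption about the poster's notation.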
6. You'd need to explain what S, A, c and theta are (although I guess c is the velocity of light). And what's the $$ notation? I'm a bit of a pure mathematician, physics notation I've never got on with, it tends to confuse me. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 17, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9101234078407288, "perplexity": 481.0083630922672}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430459005968.79/warc/CC-MAIN-20150501054325-00096-ip-10-235-10-82.ec2.internal.warc.gz"} |
https://brilliant.org/problems/if-only-you-could-flip-it/ | If only you could flip it...
Calculus Level 2
$$\int_0^3\dfrac{2x^3}{x^2+9}\,dx$$

This integral is equal to $A-B\ln C$, where $A$, $B$, and $C$ are positive integers and $C$ is prime. What is $A+B+C$?
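A quick symbolic check (an added note; it spoils the answer, so look away if you want to try it first):

```python
import sympy as sp

x = sp.symbols('x')

# Antiderivative of 2x^3/(x^2+9) is x^2 - 9*log(x^2+9); evaluating 0..3
# gives 9 - 9*ln(18) + 9*ln(9) = 9 - 9*ln(2), so A=9, B=9, C=2 and A+B+C=20.
val = sp.simplify(sp.integrate(2 * x**3 / (x**2 + 9), (x, 0, 3)))
```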
https://socratic.org/questions/how-do-you-solve-10x-2-27x-18 | Algebra
Topics
# How do you solve 10x^2 - 27x + 18?
Jun 21, 2015
Use the quadratic formula to find zeros $x = \frac{3}{2}$ or $x = \frac{6}{5}$
$10 {x}^{2} - 27 x + 18 = \left(2 x - 3\right) \left(5 x - 6\right)$
#### Explanation:
$f \left(x\right) = 10 {x}^{2} - 27 x + 18$ is of the form $a {x}^{2} + b x + c$, with $a = 10$, $b = - 27$ and $c = 18$.
The discriminant $\Delta$ is given by the formula:
$\Delta = {b}^{2} - 4 a c = {27}^{2} - \left(4 \times 10 \times 18\right) = 729 - 720$
$= 9 = {3}^{2}$
Being a positive perfect square, $f \left(x\right) = 0$ has two distinct rational roots, given by the quadratic formula:
$x = \frac{- b \pm \sqrt{\Delta}}{2 a} = \frac{27 \pm 3}{20}$
That is:
$x = \frac{30}{20} = \frac{3}{2}$ and $x = \frac{24}{20} = \frac{6}{5}$
Hence $f \left(x\right) = \left(2 x - 3\right) \left(5 x - 6\right)$
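Both the roots and the factorization are easy to confirm with SymPy (an added check):

```python
import sympy as sp

x = sp.symbols('x')
expr = 10 * x**2 - 27 * x + 18

roots = sp.solve(expr, x)        # -> [6/5, 3/2]
factored = sp.factor(expr)       # -> (2*x - 3)*(5*x - 6)
```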
Jun 21, 2015
$x = \frac{6}{5}$, $x = \frac{3}{2}$
#### Explanation:
$10 {x}^{2} - 27 x + 18 = 0$
We can first factorise the above expression and thereby find the solution.
Factorising by splitting the middle term
$10 {x}^{2} - 15 x - 12 x + 18 = 0$
$5 x \left(2 x - 3\right) - 6 \left(2 x - 3\right) = 0$
$\textcolor{red}{\left(5 x - 6\right) \left(2 x - 3\right)} = 0$
Equating each of the two terms with zero we obtain solutions as follows:
$x = \frac{6}{5}$, $x = \frac{3}{2}$
https://www.physicsforums.com/threads/just-a-simple-question-on-dot-products.180465/ | # Just a simple question on dot products
1. Aug 14, 2007
### mohdhm
1. The problem statement, all variables and given/known data
Ok so i ran into trouble in the momentum section because i do not understand dot products as well as i thought. I tried going back and revising my notes but nothing new comes to mind. Your help is highly appreciated.
ok so let me just state that m1=m2
the problem consists of 2 billiard balls, one is at rest and the other strikes it and sends it towards the corner pocket, they both share the same mass. the purpose is to find theta, but that is not what i'm trying to find out here.
We write the kinetic formula which gets reduced to v1i^2 = v1f^2 + v2f^2
then the momentum formula also gets reduced, this time it gets reduced to : v1i = v1f + v2f.
what i can't figure out, is that the example tells me to to square both sides (of the previous formula) and find the dot product.
then i get v1i^2 = (v1f + v2f)(v1f+v2f)... which gets expanded.. and so on
[the formula makes sense from a logical point of view]
The point is, how is the equation above, the DOT PRODUCT. I don't get that. i thought the dot product formula is AB = ABCOS(THETA)
any explanations?
2. Aug 14, 2007
### Mindscrape
Dot product? I have absolutely no idea. The equation is simply a product. You can't just take dot products for no reason, like your book seems to have done to suddenly get an angle. Are you sure your book didn't make momentum vectors?
My advice is to do what makes sense to you. If your book uses some clever way to find the angle that the balls go off at, but you have a way that simply does it by looking at conservation of momentum in the x and y directions then you should do it your way.
Could you write out exactly what your book has done, or is this it? Dot products, in case you are confused, are merely a way to multiply two vectors. You can either multiply the like components and add them up (i.e. $(A_1\hat{x} + A_2\hat{y} + A_3\hat{z}) \cdot (B_1\hat{x} + B_2\hat{y} + B_3\hat{z}) = A_1 B_1 + A_2 B_2 + A_3 B_3$), or you can use the formula you listed, which is $\mathbf{A} \cdot \mathbf{B} = AB\cos\theta$.
3. Aug 14, 2007
### mohdhm
i guess your right, this example in the book isnt even useful anyway, it is only used to determine the angle, and we can do that by this formula Phi + theta = 90 degrees. (only when the collision [k is conserved] is elastic and we have m1=m2)
4. Aug 14, 2007
### Hurkyl
Staff Emeritus
The dot product satisfies some properties. For example, it is distributive (just like ordinary multiplication)...
5. Aug 14, 2007
### Dick
I think the point is that the formula "Phi + theta=90 degrees" can be derived by taking the dot product of the vector equation v1i=v1f+v2f with itself and applying energy conservation, the distributive law of which Hurkyl spoke and your A.B=|A||B|cos(phi).
6. Aug 15, 2007
### Mindscrape
Yes, I imagine it was really doing something similar to using vectors and the dot product for proving law of cosines. Still, with what the poster wrote it isn't exactly a dot product. Given the velocity vectors, which I actually made a mistake earlier on thinking he was writing out the components in the i(hat) direction (I was tired), it should go more like:
$$\mathbf{v}_{1i} = \mathbf{v}_{1f} + \mathbf{v}_{2f}$$

then square both sides of the formula

$$v_{1i}^{\,2} = (\mathbf{v}_{1f} + \mathbf{v}_{2f})^2$$

which would be the vectors dotted with themselves

$$v_{1i}^{\,2} = (\mathbf{v}_{1f} + \mathbf{v}_{2f}) \cdot (\mathbf{v}_{1f} + \mathbf{v}_{2f})$$

then expand the dot product distributively and use $\mathbf{A}\cdot\mathbf{B} = AB\cos\theta$

$$v_{1i}^{\,2} = |\mathbf{v}_{1f}|^2 + |\mathbf{v}_{2f}|^2 + 2\,|\mathbf{v}_{1f}|\,|\mathbf{v}_{2f}|\cos\theta$$
Still, it's obviously not something the book explained well, nor something I would expect an introductory physics course to go over and expect the students to use.
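A small numerical illustration of the resulting fact (not from the thread): for an elastic collision of equal masses with one ball initially at rest, the two outgoing velocity vectors are perpendicular, i.e. $\theta + \phi = 90^\circ$. The 25° deflection angle below is arbitrary:

```python
import math

v1i = (2.0, 0.0)                       # incoming ball moves along x
speed1i = math.hypot(*v1i)

theta = math.radians(25.0)             # deflection angle of ball 1 (chosen freely)
phi = math.pi / 2 - theta              # ricochet angle of ball 2, below the axis

# Momentum + energy conservation for equal masses give
# |v1f| = |v1i| cos(theta) and |v2f| = |v1i| sin(theta).
speed1f = speed1i * math.cos(theta)
speed2f = speed1i * math.sin(theta)
v1f = (speed1f * math.cos(theta),  speed1f * math.sin(theta))
v2f = (speed2f * math.cos(phi),   -speed2f * math.sin(phi))

dot = v1f[0] * v2f[0] + v1f[1] * v2f[1]   # vanishes: the paths are at 90 degrees
```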
7. Aug 15, 2007
### mohdhm
8. Aug 15, 2007
### Dick
Maybe not explained well, but it works. Either the incoming ball stops dead or the ricochet angle is 90 degrees. It's an interesting use of the dot product.
http://math.stackexchange.com/questions/60014/langs-proof-of-the-fact-that-a-finitely-generated-p-module-is-the-direct-sum | # Lang's proof of the fact that a finitely generated $p$-module is the direct sum of cyclic $p$-modules
The question refers to a proof of the theorem that a finitely generated $p$-module is the direct sum of cyclic $p$-modules. In particular, refer to Lang's "Algebra" p. 151, right above Theorem 7.7. I cannot understand what he is inducting on, even though I see that $\dim\bar{E}_p < \dim E_p$. I can't see how we obtain that $E$ is generated by an independent set.
Alternatively, could we not construct a proof by induction on the cardinality of a minimal generating set of $E$, instead of resorting to the vector space $E_p$ induced by $E$?
Thank you :-)
Dear Manos: What is a $p$-module? – Pierre-Yves Gaillard Aug 27 '11 at 4:30
Hi Pierre: a $p$-module is a torsion module for which each element has period a power of $p$. ($p$ is a prime element of the underlying principal ideal domain). – Manos Aug 27 '11 at 14:00
Thanks! I think Lang is almost the only one to use this terminology, but never mind. - Is it the existence or the uniqueness part of the proof that you don’t understand? - I looked at various proofs of this theorem. I reproduced the one I find the simplest here. (See Theorem 3.) - I got the notification, but it’s safer I think if you use an @Pierre. – Pierre-Yves Gaillard Aug 27 '11 at 14:46
The excerpt in question can be viewed here.
George Bergman writes here:
P. 151, statement of Theorem III.7.7: Note where Lang above the display refers to the $R/(q_i)$ as nonzero, this is equivalent to saying that the $q_i$ are nonunits.
P. 151, next-to-last line of text: After "i = 1, ... , l" add, ", with some of these rows 'padded' with zeros on the left, if necessary, so that they all have the same length r as the longest row".
Here is the link to George Bergman's A Companion to Lang's Algebra.
And here is a statement of the main results.
Let $A$ be a principal ideal domain and $T$ a finitely generated torsion module. Then there is a unique sequence of nonzero ideals $I_1\subset I_2\subset\cdots$ such that $T\simeq A/I_1\oplus A/I_2\oplus\cdots$ (Of course we have $I_j=A$ for $j$ large enough.)
The proper ideals appearing in this sequence are called the invariant factors of $T$.
Let $P_1,\dots,P_n$ be the distinct prime ideals of $A$ which contain $I_1$, and for $1\le i\le n$ let $T_i$ be the submodule of $T$ formed by the elements annihilated by a high enough power of $P_i$. Then $T=T_1\oplus\cdots\oplus T_n$, and the sequence of invariant factors of $T_i$ has the form $$P_i^{r(i,1)}\subset P_i^{r(i,2)}\subset\cdots$$ with $r(i,1)\ge r(i,2)\ge\cdots\ge0$. (Of course we have $r(i,j)=0$ for $j$ large enough.)
The $P_i^{r(i,j)}$ are called the elementary divisors of $T$.
We clearly have $I_j=P_1^{r(1,j)}\cdots P_n^{r(n,j)}$.
Let $M$ be a finitely generated $A$-module and $T$ its torsion submodule. Then there is a unique nonnegative integer $r$ satisfying $M\simeq T\oplus A^r$.
The simplest proof of these statements I know is in this answer (which I wrote without any claim of originality).
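For $A=\mathbb{Z}$ this is concrete enough to compute: the invariant factors of $\mathbb{Z}^n/M\mathbb{Z}^n$ are quotients of successive determinantal divisors $g_k$ (the gcd of all $k\times k$ minors of the relation matrix $M$), i.e. $d_k = g_k/g_{k-1}$. A minimal sketch, with an illustrative relation matrix (the example is mine, not from the answer above):

```python
from math import gcd
from itertools import combinations

def det(A):
    """Determinant by Laplace expansion (fine for small integer matrices)."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def minors_gcd(M, k):
    """gcd of all k x k minors of M (the k-th determinantal divisor)."""
    g = 0
    for rows in combinations(range(len(M)), k):
        for cols in combinations(range(len(M[0])), k):
            g = gcd(g, det([[M[i][j] for j in cols] for i in rows]))
    return g

def invariant_factors(M):
    """Invariant factors d_1 | d_2 | ... of Z^n / M Z^n via d_k = g_k / g_{k-1}."""
    r = min(len(M), len(M[0]))
    g = [1] + [minors_gcd(M, k) for k in range(1, r + 1)]
    out = []
    for k in range(1, r + 1):
        if g[k] == 0:       # rank deficit: remaining factors are free ranks
            break
        out.append(g[k] // g[k - 1])
    return out

M = [[2, 4, 4], [-6, 6, 12], [10, -4, -16]]
print(invariant_factors(M))  # [2, 6, 12]
```

Here the Smith normal form is $\mathrm{diag}(2,6,12)$, so $\mathbb{Z}^3/M\mathbb{Z}^3 \cong \mathbb{Z}/2 \oplus \mathbb{Z}/6 \oplus \mathbb{Z}/12$.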
Thank you very much. Regarding Bergman's companion to Lang's algebra, how do I open the files? Can they be found in pdf form? Regarding the terminology, Steven Roman in "Advanced Linear Algebra" refers to these modules as $p$-primary modules as well. Lang skips the "primary". I find it convenient because it refers to the same concept as the terminology "$p$-group". Regarding the theorem, my question actually refers to theorem 7.5 and part of its proof on the upper half of page 151. More precisely, I cannot see what he is inducting upon... – Manos Aug 30 '11 at 13:29
...In particular, it is claimed that if $E \neq 0$, then there exist elements $\bar{x}_2, \cdots, \bar{x}_s$ that are independent, and then he invokes Lemma 7.6. I can see that by using the lemma, $x_1, x_2, \cdots, x_s$ are independent. What I can't see is why independent elements $\bar{x}_2, \cdots, \bar{x}_s$ exist and also what we are inducting on to prove the decomposition. – Manos Aug 30 '11 at 13:34
@Manos: You're welcome. Here is what I think: He is inducting on $\dim E_p$. Since $\dim\overline E_p < \dim E_p$, the elements $\overline{x}_2, \cdots, \overline{x}_s$ exist by induction assumption. – Pierre-Yves Gaillard Aug 30 '11 at 14:30
@Manos: Sorry I forgot about Bergman's files. I only found them in ps format. My browser (Safari) can read this format. You may try to write to Bergman. I don't know what kind of computer you're using, but I think you should be able to find free software capable of reading ps. (If you study Lang's book, Bergman's Companion can be very helpful.) – Pierre-Yves Gaillard Aug 30 '11 at 15:32
On unix/linux, the "ps2pdf" utility will convert Postscript to PDF. The downloadable freeware "TeXShop" for Mac OS X includes ps2pdf, also. – paul garrett Nov 22 '11 at 17:55
http://cool.conservation-us.org/byauth/lynn/glossary/term1-5.html
1.5. Document Condition
Condition refers to the physical state of the document compared with its state when originally published. The following presents only those characteristics of the physical state of a document that are pertinent to the main thrust of this Glossary, that is, to the paper milieu.
1.5.1. Archival
A document that can be expected to be kept permanently as closely as possible to its original form. An archival document medium is one that can be "expected" to retain permanently its original characteristics (such expectations may or may not prove to be realized in actual practice). A document published in such a medium is of archival quality and can be expected to resist deterioration.
Permanent paper is manufactured to resist chemical action so as to retard the effects of aging as determined by precise technical specifications. Durability refers to certain lasting qualities with respect to folding and tear resistance.
1.5.2. Non-Archival
A document that is not intended or cannot be expected to be kept permanently, and that may therefore be created or published on a medium (1.1) that cannot be expected to retain its original characteristics and resist deterioration.
1.5.3. Acidic
A condition in which the concentration of hydrogen ions in an aqueous solution exceeds that of the hydroxyl ions. In paper, the strength of the acid denotes the state of deterioration that, if not chemically reversed (3.1.2), will result in embrittlement (1.5.4). Discoloration of the paper (for example, yellowing) may be an early sign of deterioration in paper.
1.5.4. Brittle
That property of a material that causes it to break or crack when depressed by bending. In paper, evidence of deterioration usually is exhibited by the paper's inability to withstand one or two (different standards are used) double corner folds. A corner fold is characterized by bending the corner of a page completely over on itself, and a double corner fold consists of repeating the action twice.
1.5.5. Other
There are many other conditions that characterize the condition of a document. Bindings of books, for example, may have deteriorated for a variety of conditions. Non-paper documents may exhibit a variety of conditions (see, for example, 3.3.5 for a discussion of the concept of "Useful Life"). However, with the focus on paper original documents and on media conversion technologies for preservation, a full analysis of document condition would be beyond the scope of this Glossary.
https://www.physicsforums.com/threads/analytic-function-definiton.523890/ | # Analytic function definiton
1. Aug 24, 2011
### JamesGoh
From my lecture notes I was given, the definiton of an analytic function was as follows:
A function $f$ is analytic at $x_0$ if there exists a radius of convergence $R > 0$ such that $f$ has a power series representation in $x - x_0$ which converges absolutely for $|x - x_0| < R$.
What I understand is that for all $x$ values, $|x - x_0|$ must be less than $R$ (radius of convergence) in order for $f$ to be analytic at $x_0$.
Convergence in a general sense is when the sequence of partial sums in a series approaches a limit
Is my understanding of convergence and analytic functions correct ?
2. Aug 24, 2011
### Fredrik
Staff Emeritus
What you're saying here would imply that the truth value ("true" or "false") of the statement "f is analytic at x0" depends on the value of some variable x. It certainly doesn't. It depends only on f and x0. (What you said is actually that if |x-x0|≥R, then f is not analytic at x0).
I'm a bit surprised that your definition says "converges absolutely". I don't think the word "absolutely" is supposed to be there. But then, in $\mathbb C$, a series is convergent if and only if it's absolutely convergent. So if you're talking about functions from $\mathbb C$ into $\mathbb C$, then it makes no difference if the word "absolutely" is included or not.
What the definition is saying is that there needs to exist a real number R>0 such that for all x with |x-x0|<R, there's a series $$\sum_{n=0}^\infty a_n \left( x-x_0 \right)^n$$ that's convergent and =f(x).
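To see the definition in action, here is a small illustration (my example, not from the thread) with the geometric series $\sum x^n$ for $f(x)=1/(1-x)$, whose radius of convergence about $x_0=0$ is $R=1$:

```python
# Partial sums of the power series for f(x) = 1/(1 - x) about x0 = 0,
# i.e. the sum of x^n over n, whose radius of convergence is R = 1.
def partial_sum(x, terms):
    return sum(x ** n for n in range(terms))

inside = 0.5     # |x - x0| < R: the series converges to f(0.5) = 2
outside = 1.5    # |x - x0| > R: partial sums blow up

print(partial_sum(inside, 100))   # ~2.0
print(partial_sum(outside, 100))  # huge (~1e18), and growing with more terms
```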
I like Wikipedia's definitions by the way. Link.
https://www.physicsforums.com/threads/zwiebach-page-210.193837/ | # Zwiebach page 210
1. Oct 25, 2007
### ehrenfest
1. The problem statement, all variables and given/known data
equation 12.28 is only true if $$\dot{X'^j}$$ is equal to 0, right? Why is that true?
2. Relevant equations
3. The attempt at a solution
2. Oct 25, 2007
### nrqed
No. It follows from 12.27 which does not require anything like that
3. Oct 25, 2007
### ehrenfest
but isn't the only way that
$$\frac{d}{d\sigma} (X^I \dot{X^J}-\dot{X}^J X^I) = X^I' \dot{X^J}-\dot{X}^J X^I'$$
if $\dot{X^J}'$ equals 0?
4. Oct 25, 2007
### nrqed
No. I am not sure why you conclude this. Notice that for I not equal to J, the expression in parenthesis is zero even before taking a derivative.
5. Oct 25, 2007
### ehrenfest
I think I see now. It should really be
$$\frac{d}{d\sigma} (X^I (\tau, \sigma) \dot{X^J}(\tau, \sigma')-\dot{X}^J(\tau, \sigma') X^I(\tau, \sigma))$$
and the sigma derivative of a function of sigma prime is 0.
6. Oct 25, 2007
### nrqed
I am sorry, maybe I am too slow tonight (after 8 hours of marking) but I don't quite see what you mean. The expression in parenthesis is zero whenever I is not equal to J. Those X's are operators. And they commute when I is not equal to J. when I = J, they don't commute but give a delta function of sigma - sigma'.
7. Oct 25, 2007
### Jimmy Snyder
Note that $\sigma$ and $\sigma'$ are independent variables in this equation.
8. Oct 25, 2007
### ehrenfest
I think maybe you are missing the dot on top of the x^J. That represents a tau derivative.
9. Oct 25, 2007
### nrqed
You are right, I forgot to say that it's the commutator of X with dot X. But my point is the same: X^I and (dot X)^J commute whenever I is not equal to J. So the derivative is zero trivially because the expression in parentheses is zero before even taking the derivative and not because there is no sigma dependence (well, there is sigma dependence trivially because it's zero!)
10. Oct 25, 2007
### ehrenfest
$$X^I (\tau, \sigma) \dot{X^J}(\tau, \sigma')-\dot{X}^J(\tau, \sigma') X^I(\tau, \sigma) = 2 \pi \alpha' \eta^{IJ} \delta(\sigma - \sigma ')$$
is equation 12.27 expanded
$$\frac{d}{d\sigma} (X'^I (\tau, \sigma) \dot{X^J}(\tau, \sigma')-\dot{X}^J(\tau, \sigma') X'^I(\tau, \sigma)) = 2 \pi \alpha' \eta^{IJ} \frac{d}{d \sigma}\delta(\sigma - \sigma ')$$
is equation 12.28 expanded
12.28 follows from 12.27 only if $$\frac{d}{d\sigma}(\dot{X}^J(\tau, \sigma')) = 0$$ which is true only because the sigma argument has a prime but the sigma in the derivative operator has no prime.
Last edited: Oct 25, 2007
11. Oct 25, 2007
### nrqed
There is no $d/d\sigma$ on the left side
Ok, I see what your question was. yes, of course, $\frac{d}{d \sigma} f(\sigma') = 0$. Sorry, I did not understand your question because this was implicit for me.
https://rank1neet.com/unit-5-states-of-matter-summary/ | # Unit 5 – States Of Matter – Summary
Intermolecular forces operate between the particles of matter. These forces differ from pure electrostatic forces that exist between two oppositely charged ions.
Also, these do not include forces that hold atoms of a covalent molecule together through covalent bond.
Competition between thermal energy and intermolecular interactions determines the state of matter.
“Bulk” properties of matter such as behaviour of gases, characteristics of solids and liquids and change of state depend upon energy of constituent particles and the type of interaction between them.
Chemical properties of a substance do not change with change of state, but the reactivity depends upon the physical state.
Avogadro law states that equal volumes of all gases under same conditions of temperature and pressure contain equal number of molecules.
Dalton’s law of partial pressure states that total pressure exerted by a mixture of non-reacting gases is equal to the sum of partial pressures exerted by them. Thus p = p₁ + p₂ + p₃ + … .
Relationship between pressure, volume, temperature and number of moles of a gas describes its state and is called equation of state of the gas.
Equation of state for ideal gas is pV=nRT, where R is a gas constant and its value depends upon units chosen for pressure, volume and temperature.
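A small worked example of the equation of state (the numbers below are illustrative, not from the summary):

```python
# Worked example of pV = nRT: solve for the amount of gas n.
R = 0.082057      # gas constant in L atm K^-1 mol^-1

p = 1.0           # pressure, atm
V = 22.4          # volume, L
T = 273.15        # temperature, K

n = p * V / (R * T)
print(n)          # ~0.999 mol: one mole of ideal gas occupies about 22.4 L at STP
```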
At high pressure and low temperature intermolecular forces start operating strongly between the molecules of gases because they come close to each other.
Under suitable temperature and pressure conditions gases can be liquefied.
Liquids may be considered as continuation of gas phase into a region of small volume and very strong molecular attractions.
Some properties of liquids, e.g., surface tension and viscosity, are due to strong intermolecular attractive forces.
http://math.stackexchange.com/questions/24660/condition-for-existence-of-lagrange-multiplier | # Condition for existence of Lagrange-multiplier
Using the implicit function theorem one can prove the following:
Let $X,Y$ be Banach-spaces, $U\subset X$ open, $f\colon U\to \mathbf{R}$, $g\colon U\to Y$ continuously differentiable function. If $f|_{g^{-1}(0)}$ has a local extremum at $x$, and $g'(x)\in L(X;Y)$ has a right inverse in $L(Y;X)$, then there is a unique $\lambda\in Y^*$, such that $f'(x)=\lambda\circ g'(x)$.
One can also give some second order necessary and sufficient conditions. This generalizes the case where $Y$ is finite dimensional, and $g'(x)$ is surjective. However I have seen some texts that claimed that only surjectivity is sufficient even in the infinite dimensional case.
My question is does that hold true? If it does what is the method of the proof, because I cannot figure how the implicit function theorem could be used. If the proof is complicated I would appreciate even just some good references.
Yes, surjectivity of dg(x) is sufficient.
The proof is not very complicated, but takes some work. The reference cited by Planetmath is: Eberhard Zeidler. Applied functional analysis: main principles and their applications. Springer-Verlag, 1995. Take a look at page 268 and following.
The key ingredient is a weaker formulation of the implicit function theorem: Suppose $F: X \times Y \to Z$ is $C^1$ in a neighbourhood of $0$, with $F(0,0) = 0$ and $D_Y F(0,0) : Y \to Z$ is surjective. Then,
(1) For each $r > 0$, there exists $\rho > 0$ so for every $|| u || < \rho$, there exists $v = v(u)$ with $F(u, v(u) ) = 0$ and $|| v || < r$.
(2) There exists $d > 0$ so that $|| v(u) || \le d || D_Y F(0,0) v(u) ||$.
The proof of this is a bit more delicate than the standard implicit function theorem, but you use the closed range theorem to construct a surrogate for the inverse, and then use this to construct your iteration.
You then prove the Lagrange multiplier theorem from this, essentially following the standard proof (presumably the one you had in mind above).
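In the finite-dimensional case the conclusion $f'(x)=\lambda\circ g'(x)$ is easy to check numerically. A toy sketch (example mine, not from the answer): maximize $f(x,y)=x+y$ on the circle $g(x,y)=x^2+y^2-1=0$, whose constrained maximizer is $(1/\sqrt2,\,1/\sqrt2)$:

```python
import math

# Toy check of the multiplier condition grad f = lam * grad g at the optimum.
x = y = 1 / math.sqrt(2)                  # known constrained maximizer

grad_f = (1.0, 1.0)
grad_g = (2 * x, 2 * y)

lam = grad_f[0] / grad_g[0]               # lambda read off from the first component
residual = grad_f[1] - lam * grad_g[1]    # must vanish for the second component too

print(lam)       # 0.7071... = 1/sqrt(2)
print(residual)  # ~0.0 (up to rounding)
```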
https://kb.osu.edu/dspace/handle/1811/15755 | A NEW HYDRIDE SPECTRUM NEAR 7666 Å.
Title: A NEW HYDRIDE SPECTRUM NEAR 7666 Å.
Creators: Johns, J. W. C.
Issue Date: 1969
Abstract: Whilst searching for emission spectra of $AlH^{+}$ an aluminum hollow cathode lamp was run with a mixture of deuterium and argon. The lamp was found to emit a simple band with wide rotational structure near 7666 Å. The band has P, Q and R branches which are clearly resolved into doublets; the magnitude of the doubling is approximately independent of the rotational quantum number. A similar band was also observed when the lamp was run with hydrogen and argon but the lines were broad and consequently no doublet splitting was resolved. Preliminary rotational analysis gives B values which seem to be too large for $AlH^{+}$ or $AlD^{+}$. The identity of the emitter will be discussed.
URI: http://hdl.handle.net/1811/15755
Other Identifiers: 1969-L-7
https://www.physicsforums.com/threads/faraday-rotation-and-permittivity-tensor.783814/ | # Faraday rotation and permittivity tensor
1. Nov 24, 2014
### Hassan2
Dear all,
In text books about optics in magneto-optic materials, we often come across a Hermitian permittivity tensor with off-diagonal imaginary components. These components are relevant to the Faraday rotation of plane of polarization of light through the material.
Now my question is: Is the knowledge of the tensor enough to solve the wave equation and calculate the wave propagation ( including rotation)?
I ask this because they usually talk about breaking the incident linearly polarized light into left and right circularly polarized components, for which the refractive indices differ. If the knowledge of the permittivity tensor is enough for the calculations, why would we need such a non-easy-to-understand trick?
2. Nov 24, 2014
### DrDu
The point is that the wave equation with the permittivity being a tensor depending on wavenumber has two solutions (for given direction of the k vector and frequency) which turn out to correspond to left and right circularly polarized waves. Basically, you have to find a basis where the permittivity tensor is diagonal. This is a matrix eigenvalue problem which you can solve with the usual methods.
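That eigenvalue problem can be carried out explicitly. A minimal numpy sketch (the permittivity values are made up) for propagation along $z$, keeping only the transverse $2\times2$ block of the Hermitian tensor:

```python
import numpy as np

# Transverse block of a Hermitian permittivity tensor for propagation along z.
# The numerical values are made up for illustration.
eps1, eps2 = 2.25, 0.01
eps_t = np.array([[eps1, 1j * eps2],
                  [-1j * eps2, eps1]])

vals, vecs = np.linalg.eigh(eps_t)   # Hermitian eigenvalue problem
n_minus, n_plus = np.sqrt(vals)      # the two refractive indices

ratio = vecs[1, 0] / vecs[0, 0]      # component ratio of the first eigenmode
print(vals)    # [2.24, 2.26], i.e. eps1 - eps2 and eps1 + eps2
print(ratio)   # ~1j: equal-magnitude components 90 degrees out of phase, i.e. circular
```

The two eigenvalues give two refractive indices $n_\pm=\sqrt{\varepsilon_1\pm\varepsilon_2}$, and it is their difference that accumulates into the Faraday rotation of a linear polarization as it propagates.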
3. Nov 25, 2014
### Hassan2
Very clear. Thanks.
https://kb.osu.edu/dspace/handle/1811/19961 | # ANALYSIS OF SELF-BROADENED SPECTRA IN THE $\nu_{5}$ AND $\nu_{6}$ FUNDAMENTAL BANDS OF $^{12}CH_{3}D$
Title: ANALYSIS OF SELF-BROADENED SPECTRA IN THE $\nu_{5}$ AND $\nu_{6}$ FUNDAMENTAL BANDS OF $^{12}CH_{3}D$
Creators: Brown, L. R.; Devi, V. Malathy; Benner, D. Chris; Smith, M. A. H.; Rinsland, C. P.; Sams, Robert L.
Issue Date: 2000
Publisher: Ohio State University
Abstract: A multispectrum nonlinear least-squares fitting technique$^{a}$ has been applied to determine accurate line center positions, absolute line intensities, Lorentz self-broadening coefficients and self-induced pressure-shift coefficients for a large number of transitions in the two perpendicular fundamental bands of $^{12}CH_{3}D$ near 1160 and $1470\ cm^{-1}$. We analyzed together high-resolution room temperature absorption spectra recorded with two Fourier transform spectrometers (FTS). Three spectra were recorded using the Bruker IFS 120 HR at PNNL at $0.002\ cm^{-1}$ resolution, and fourteen spectra were obtained with the McMath-Pierce FTS ($0.006\ cm^{-1}$ resolution) at the National Solar Observatory on Kitt Peak. Self-broadening coefficients for over 1000 transitions and self-shift coefficients for more than 800 transitions were determined. The measurements include transitions with rotational quantum numbers over $J^{\prime\prime} = 15$ and $K^{\prime\prime} = 15$ and some forbidden transitions. Measurements were made in all sub-bands ($^{P}P$, $^{P}Q$, $^{P}R$, $^{R}P$, $^{R}Q$ and $^{R}R$). The measured broadening coefficients vary from 0.040 to $0.096\ cm^{-1}\ atm^{-1}$ at 296 K. Self-shift coefficients vary from about $-0.014$ to $+0.004\ cm^{-1}\ atm^{-1}$. Less than 5% of the measured shift coefficients are positive, and the majority of these positive shifts are associated with the $J^{\prime\prime} = K^{\prime\prime}$ transitions in the $^{P}Q$ sub-bands. The values for the two perpendicular bands are compared and discussed.
Description: Author Institution: Jet Propulsion Laboratory, California Institute of Technology; Department of Physics, The College of William and Mary; Atmospheric Sciences, NASA Langley Research Center, Mail Stop 401A; Atmospheric Sciences, Pacific Northwest National Laboratory (PNNL)
URI: http://hdl.handle.net/1811/19961
Other Identifiers: 2000-WG-11
http://mathhelpforum.com/calculus/116481-integrals-2-a.html | # Math Help - Integrals : 2
1. ## Integrals : 2
Calculate integral for the function :
$\begin{array}{l} \ln (\sin (x)) \\ \ln (\cos (x)) \end{array}$
2. Originally Posted by dhiab
Calculate integral for the function :
$\begin{array}{l} \ln (\sin (x)) \\ \ln (\cos (x)) \end{array}$
Mathematica got a nasty result for $\int \ln(\sin(x))dx$ involving i and non-elementary functions, so I don't know if I am misunderstanding what you are asked to do. If you meant a fraction, Mathematica can't solve it.
3. Originally Posted by dhiab
Calculate integral for the function :
$\begin{array}{l} \ln (\sin (x)) \\ \ln (\cos (x)) \end{array}$
Originally Posted by Jameson
Mathematica got a nasty result for $\int \ln(\sin(x))dx$ involving i and non-elementary functions, so I don't know if I am misunderstanding what you are asked to do. If you meant a fraction, Mathematica can't solve it.
Did the OP perhaps mean $\int\frac{\ln(\sin(x))}{\ln(\cos(x))}dx$?
4. Originally Posted by Drexel28
Did the OP perhaps mean $\int\frac{\ln(\sin(x))}{\ln(\cos(x))}dx$?
I plugged that in as well and Mathematica couldn't find a solution. I haven't tried to work it out myself.
5. Originally Posted by Jameson
I plugged that in as well and Mathematica couldn't find a solution. I haven't tried to work it out myself.
Maybe we can make that a tad more apparent? Let $x=\arcsin(z)\implies dx=\frac{dz}{\sqrt{1-z^2}}$ so our integral becomes $\int\frac{\ln(z)}{\sqrt{1-z^2}\,\ln\left(\sqrt{1-z^2}\right)}\,dz$. Let $\sqrt{1-z^2}=\tau\implies \frac{-\tau}{\sqrt{1-\tau^2}}d\tau=dz$. So then our integral becomes $\frac{-1}{2}\int\frac{\ln\left(1-\tau^2\right)}{\sqrt{1-\tau^2}\,\ln\left(\tau\right)}d\tau$. That looks almost doable. I am not feeling it right now. Maybe the OP can continue.
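As a sanity check on those substitutions (this is mine, not from the thread), one can compare the original and the fully transformed integrals numerically over an interval where both integrands are well-behaved — here with a simple composite trapezoid rule in Python:

```python
import numpy as np

def trapz(y, x):
    # composite trapezoid rule; handles a decreasing grid as a signed integral
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Original integrand ln(sin x)/ln(cos x) on [0.5, 1.0], inside (0, pi/2)
x = np.linspace(0.5, 1.0, 100001)
I_orig = trapz(np.log(np.sin(x)) / np.log(np.cos(x)), x)

# After both substitutions (tau = cos x): -1/2 * ln(1 - tau^2) / (sqrt(1 - tau^2) ln tau)
# The limits map as x in [0.5, 1.0]  ->  tau running from cos(0.5) down to cos(1.0)
t = np.linspace(np.cos(0.5), np.cos(1.0), 100001)
I_sub = trapz(-0.5 * np.log(1 - t**2) / (np.sqrt(1 - t**2) * np.log(t)), t)

print(I_orig, I_sub)  # the two values agree to quadrature accuracy
```

This does not help find a closed form, but it confirms the change of variables was carried out correctly.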
http://pillars.che.pitt.edu/student/slide.cgi?course_id=12&slide_id=87.0 | # McCabe-Thiele Analysis
McCabe-Thiele analysis for (almost) arbitrary reflux ratios is similar to total reflux, if slightly harder.
In this case the two operating equations (lines), an equilibrium expression (line) using the relative volatility, and a feed expression (line) do not simplify, and thus must be used in their normal forms:
EQUILIBRIUM LINE: $y=\frac{\alpha_{AB}x}{1+(\alpha_{AB}-1)x}$
RECTIFYING LINE: $y = \frac{L}{V}\,x + \left(1-\frac{L}{V}\right)x_D$
STRIPPING LINE: $y = \frac{L'}{V'}\,x - \left(\frac{L'}{V'}-1\right)x_B$
FEED LINE: $y = \frac{q}{q-1}\,x - \frac{z}{q-1}$
We first plot the equilibrium line on an x-y diagram. We then plot the rectifying and stripping operating lines -- using the xD and xB points on the y=x line as our first points and the slopes L/V and L'/V', respectively.
##### NOTE
The feed line will start at the xF composition on the y=x line and go to the intersection of the two operating lines. It is possible that we might need to use this info rather than the distillate and bottoms compositions and slopes. One option would be to set the feed equation equal to one of the other operating equations to analytically find the intersection point.
The steps in this case still go horizontally (left) for equilibrium, but now when we go vertically (down) for our material balance, we go to the rectifying operating line (prior to the intersection point) and down to the stripping operating line (after the intersection point):
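The stepping procedure just described can be sketched numerically. The following Python fragment is illustrative only — the relative volatility, product compositions, and section slopes are made-up values, and a real design would also account for the feed line and stage efficiencies:

```python
alpha = 2.5            # assumed constant relative volatility
xD, xB = 0.95, 0.05    # assumed distillate and bottoms compositions
LV, LpVp = 0.75, 1.30  # assumed slopes L/V (rectifying) and L'/V' (stripping)

def equil_x(y):
    # invert the equilibrium line y = alpha*x / (1 + (alpha - 1)*x)
    return y / (alpha - (alpha - 1.0) * y)

# x-coordinate where the two operating lines intersect (set them equal and solve)
x_int = (xD * (1.0 - LV) + xB * (LpVp - 1.0)) / (LpVp - LV)

stages, x, y = 0, xD, xD          # start on the y = x line at x_D
while x > xB and stages < 100:
    x = equil_x(y)                # horizontal step (left) to the equilibrium curve
    if x > x_int:                 # vertical step (down) to the active operating line
        y = LV * x + xD * (1.0 - LV)          # rectifying section
    else:
        y = LpVp * x - xB * (LpVp - 1.0)      # stripping section
    stages += 1

print(stages)                     # number of equilibrium stages stepped off
```

The last stage generally overshoots $x_B$, which is where the fractional "2.9 stages" style of answer comes from.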
##### NOTE
It is possible that we will not match our distillate and bottoms compositions both exactly. In this case, we say we need 2.9 stages, for example. In reality we obviously can only have an integer number of stages (but would achieve a better separation than expected).
If we think about it, we can see that there is a limit to how small our reflux ratio (L/D) can be. As L/D decreases, we also decrease our slope L/V (prove this to yourself with an overall material balance on the rectifying section). In this case, we might obtain a graph like the following:
Clearly we cannot operate the column in this way, as we would need to move horizontally beyond the equilibrium point, which is physically impossible. Instead there is a minimum value of the reflux ratio that can be determined by finding the conditions under which the intersection point just "pinches" the equilibrium line:
##### DEFINITION:
The minimum reflux ratio is the ratio of L/D that leads to the intersection of the rectifying and stripping operating lines falling on the equilibrium curve (rather than inside it).
This condition, however, would require an infinite number of stages (can you see why?).
##### NOTE
In reality, each stage will not quite achieve the equilibrium compositions (due to poor mixing, or finite contact times), therefore our predictions from a McCabe-Thiele analysis will be slightly low relative to the actual number of stages required.
##### OUTCOMES:
Determine the number of stages required to achieve a separation at minimum or higher reflux ratios using the McCabe-Thiele method | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8489944338798523, "perplexity": 1304.8930794769935}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867374.98/warc/CC-MAIN-20180526073410-20180526093410-00483.warc.gz"} |
http://jdh.hamkins.org/tag/hugh-woodin/ | # A conference in honor of W. Hugh Woodin’s 60th birthday, March 2015
I am pleased to announce the upcoming conference at Harvard celebrating the 60th birthday of W. Hugh Woodin. See the conference web site for more information. Click on the image below for a large-format poster.
# The ground axiom is consistent with $V\ne{\rm HOD}$
• J. D. Hamkins, J. Reitz, and W. Woodin, “The ground axiom is consistent with $V\ne{\rm HOD}$,” Proc.~Amer.~Math.~Soc., vol. 136, iss. 8, pp. 2943-2949, 2008.
@ARTICLE{HamkinsReitzWoodin2008:TheGroundAxiomAndVequalsHOD,
AUTHOR = {Hamkins, Joel David and Reitz, Jonas and Woodin, W.~Hugh},
TITLE = {The ground axiom is consistent with {$V\ne{\rm HOD}$}},
JOURNAL = {Proc.~Amer.~Math.~Soc.},
FJOURNAL = {Proceedings of the American Mathematical Society},
VOLUME = {136},
YEAR = {2008},
NUMBER = {8},
PAGES = {2943--2949},
ISSN = {0002-9939},
CODEN = {PAMYAR},
MRCLASS = {03E35 (03E45 03E55)},
MRNUMBER = {2399062 (2009b:03137)},
MRREVIEWER = {P{\'e}ter Komj{\'a}th},
DOI = {10.1090/S0002-9939-08-09285-X},
URL = {http://dx.doi.org/10.1090/S0002-9939-08-09285-X},
file = F
}
Abstract. The Ground Axiom asserts that the universe is not a nontrivial set-forcing extension of any inner model. Despite the apparent second-order nature of this assertion, it is first-order expressible in set theory. The previously known models of the Ground Axiom all satisfy strong forms of $V=\text{HOD}$. In this article, we show that the Ground Axiom is relatively consistent with $V\neq\text{HOD}$. In fact, every model of ZFC has a class-forcing extension that is a model of $\text{ZFC}+\text{GA}+V\neq\text{HOD}$. The method accommodates large cardinals: every model of ZFC with a supercompact cardinal, for example, has a class-forcing extension with $\text{ZFC}+\text{GA}+V\neq\text{HOD}$ in which this supercompact cardinal is preserved.
# The necessary maximality principle for c.c.c. forcing is equiconsistent with a weakly compact cardinal
• J. D. Hamkins and W. H. Woodin, “The necessary maximality principle for c.c.c.\ forcing is equiconsistent with a weakly compact cardinal,” MLQ Math.~Log.~Q., vol. 51, iss. 5, pp. 493-498, 2005.
@ARTICLE{HamkinsWoodin2005:NMPccc,
AUTHOR = {Hamkins, Joel D.~and Woodin, W.~Hugh},
TITLE = {The necessary maximality principle for c.c.c.\ forcing is equiconsistent with a weakly compact cardinal},
JOURNAL = {MLQ Math.~Log.~Q.},
FJOURNAL = {MLQ.~Mathematical Logic Quarterly},
VOLUME = {51},
YEAR = {2005},
NUMBER = {5},
PAGES = {493--498},
ISSN = {0942-5616},
MRCLASS = {03E65 (03E55)},
MRNUMBER = {2163760 (2006f:03082)},
MRREVIEWER = {Tetsuya Ishiu},
DOI = {10.1002/malq.200410045},
URL = {http://dx.doi.org/10.1002/malq.200410045},
eprint = {math/0403165},
archivePrefix = {arXiv},
primaryClass = {math.LO},
file = F,
}
The Necessary Maximality Principle for c.c.c. forcing asserts that any statement about a real in a c.c.c. extension that could become true in a further c.c.c. extension and remain true in all subsequent c.c.c. extensions, is already true in the minimal extension containing the real. We show that this principle is equiconsistent with the existence of a weakly compact cardinal.
See related article on the Maximality Principle
# Small forcing creates neither strong nor Woodin cardinals
• J. D. Hamkins and W. Woodin, “Small forcing creates neither strong nor Woodin cardinals,” Proc.~Amer.~Math.~Soc., vol. 128, iss. 10, pp. 3025-3029, 2000.
@article {HamkinsWoodin2000:SmallForcing,
AUTHOR = {Hamkins, Joel David and Woodin, W.~Hugh},
TITLE = {Small forcing creates neither strong nor {W}oodin cardinals},
JOURNAL = {Proc.~Amer.~Math.~Soc.},
FJOURNAL = {Proceedings of the American Mathematical Society},
VOLUME = {128},
YEAR = {2000},
NUMBER = {10},
PAGES = {3025--3029},
ISSN = {0002-9939},
CODEN = {PAMYAR},
MRCLASS = {03E35 (03E55)},
MRNUMBER = {1664390 (2000m:03121)},
MRREVIEWER = {Carlos A.~Di Prisco},
DOI = {10.1090/S0002-9939-00-05347-8},
URL = {http://dx.doi.org/10.1090/S0002-9939-00-05347-8},
eprint = {math/9808124},
archivePrefix = {arXiv},
primaryClass = {math.LO},
}
After small forcing, almost every strongness embedding is the lift of a strongness embedding in the ground model. Consequently, small forcing creates neither strong nor Woodin cardinals. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8503391146659851, "perplexity": 3682.38327629325}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218190234.0/warc/CC-MAIN-20170322212950-00039-ip-10-233-31-227.ec2.internal.warc.gz"} |
https://cs.stackexchange.com/questions/91538/why-is-adding-log-probabilities-considered-numerically-stable/91546 | # Why is adding log probabilities considered "numerically stable"?
Every once in a while, I come across the term numerical stability, which I don't really understand. In particular, I have seen a description of the practice of "adding logs rather than multiplying numbers" as "numerically stable." I would like to know why this is considered numerically stable.
By "adding logs rather than multiplying numbers" I mean that if you have several numbers, for example 0.01, 0.001, 0.0001, and you want to get their product, you can instead add the logs of each term. In this case assuming log base 10, the result would be -2 -3 -4 = -9. This doesn't give you the same output as multiplying, but it's a good way to get something like the product so that you don't experience numerical underflow.
My question is that I'm a bit confused because the definitions of numerical stability I've found on the Internet don't seem to apply to this case. The definitions of "numerical stability" I've found are that it occurs when a "malformed input" doesn't affect the performance of an algorithm, see for example here. In this case I don't really see how we would consider the numbers 0.01 etc "malformed," they are what they are. It would be more accurate to say that the algorithm (of multiplying them) is bad in this context since the computer can't handle it, so we choose a better algorithm. So why do people say this is "numerically stable"?
• Investigate how non-integers are represented in most computers (IEEE floats), the limitations of that format, and relevant error-estimating techniques.
– Raphael
May 6 '18 at 7:14
• I suspect you meant "the definition" (not "definitions"). I've never seen a definition that uses "malformed" or anything like it. That MathWorld "definition" is nonsensical precisely because "malformed" is a bizarre word to use. Presumably, "malformed" was just meant to mean "is (slightly) inaccurate", but even this isn't correct. Numerical stability is relevant even if the input is exactly represented. May 7 '18 at 14:29
A "numerically stable" method calculates a result in a way that will not produce excessive rounding errors.
Given a small number x, let's say x = 0.00079, it is possible to calculate log (1 + x) with much higher precision than 1 + x. (Of course calculating log (1 + x) avoids adding 1 + x). In both cases the relative error will be small and about equal size, but since the logarithm is very small, it's absolute error is much smaller.
So if you want to calculate the product of values $1 + x_i$ for many small i, then you use a clever method to calculate $log (1 + x_i)$, add those values, and calculate the exponential. With probabilities, it is possible that you have say 1,000,000 events each with small probability $p_i$, and the probability that none of the events happens is the product of $1 - p_i$, and calculating that probability is more precisely done by adding logarithms (you would also keep the rounding errors down by always adding the two smallest values).
In other situations, using logarithms can be less precise. For multiplication, the relative error is independent of the size of the numbers. But if you add logarithms, and say their sum is around 100, then each addition loses precision because 7 bits of the mantissa are used to store the 100. The precision of your result will be much less. So it's quite the opposite of numerically stable.
Start with x = 1, y = 0. Then for i = 1 to 100, multiply x by i, and add log i to y. Then for i = 1 to 100, divide x by i, and subtract log i from y. Except for rounding errors, you should end up with x = 1 and y = 0. Check how much the difference is.
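That experiment is easy to run — here is my transcription of it into Python (the exact size of the residues depends on the platform's rounding, so the comments only claim typical behaviour):

```python
import math

x, y = 1.0, 0.0
for i in range(1, 101):
    x *= i                 # build up 100! by multiplication
    y += math.log(i)       # build up ln(100!) ~ 363.7 by addition
for i in range(1, 101):
    x /= i                 # undo the multiplications
    y -= math.log(i)       # undo the additions

# x returns to 1 with only a tiny *relative* rounding residue;
# y typically keeps a larger absolute residue, because each addition
# near 363.7 spends mantissa bits on the integer part
print(x - 1.0, y)
```

This is the point of the paragraph above: the multiplicative path accumulates only relative error, while the additive log path loses precision to the size of the running sum.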
PS. How do we calculate log (1 + x) without calculating 1 + x? There is the power series around 1, $\ln(x) = \frac{x-1}{1} - \frac{(x-1)^2}{2} + \frac{(x-1)^3}{3} - \dots$. If we substitute 1 + x, then $\ln(1 + x) = \frac{x}{1} - \frac{x^2}{2} + \frac{x^3}{3} - \dots$, which is obviously calculated without ever computing 1 + x.
That's an important lesson: Given a problem that suggests an obvious sequence of steps to solve it, you will very often find a better and less obvious solution. To calculate log (1+x), the obvious sequence of steps is to calculate y = 1+x, z = log (y), but there is a much better solution that completely avoids the first step.
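As an illustration of that lesson (mine, not from the original answer): Python's standard library exposes the better solution as `math.log1p`, and the difference is easy to measure.

```python
import math

x = 1e-12
naive  = math.log(1.0 + x)   # forming 1 + x discards ~12 digits of x
better = math.log1p(x)       # evaluates log(1+x) without ever forming 1 + x

rel_naive  = abs(naive  - x) / x   # dominated by the rounding of 1 + x
rel_better = abs(better - x) / x   # ~ x/2, the true size of the next series term
```

The naive route rounds `1 + x` to the nearest double before the logarithm ever sees it, so most of the information in `x` is already gone; `log1p` keeps the full relative precision.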
• In your last paragraph I think you mean "divide x by i" right? May 6 '18 at 16:57
• I like your definition of "numerically stable" in your first paragraph, and I think I see overall what you're getting at. May 6 '18 at 17:04
• But your second paragraph doesn't make much sense to me: "it is possible to calculate log (1 + x) with much higher precision than 1 + x" but we can calculate 1 + x perfectly, it is just 1.00079, whereas log(1.00079) can only be imperfectly represented since its mantissa never ends. "Of course calculating log (1 + x) avoids adding 1 + x" but of course we have to calculate 1 + x before we can take the log of that. May 6 '18 at 17:04
• "In both cases the relative error will be small and about equal size, but since the logarithm is very small, it's absolute error is much smaller." again I don't see any error in just 1.00079, it's an exact number. Only the logarithm has an error. But I'm not very familiar with the discussion of numerical errors, so I'm probably missing something fundamental. May 6 '18 at 17:06
• @Stephen: 0.00079 is a bit above 1/2048. Adding 1, the exponent increases by 11, eleven mantissa bits will be lost, so if you had a typical 53 bit mantissa, only 42 bits of the 0.00079 are left in the result 1 + x. The error has just increased by a factor 2048. Using binary floating point, 1.00079 is absolutely not an exact number. May 6 '18 at 17:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8400328159332275, "perplexity": 408.6650174934996}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056348.59/warc/CC-MAIN-20210918062845-20210918092845-00393.warc.gz"} |
https://computergraphics.stackexchange.com/questions/5140/does-smooth-lighting-work-with-gouraud-shading-on-single-triangles | # Does smooth lighting work with Gouraud shading on single triangles?
I'm currently working on a project where a 3D model gets computed through an isosurface algorithm. This algorithm outputs single triangles with vertices and normals, but without indices. So therefore I have a lot of duplicates which can't be avoided, because of a software restriction.
I was asked to implement smooth lighting for this model by sending the proper normals to the visualisation software, which uses OpenGL and runs under Windows. What I tried was to set the normals of duplicated vertices to the same direction, but this still results in unsmoothed lighting, as visible here:
Is it possible to have smooth Gouraud shading on a 3D Model which consists of many single unconnected triangles?
• As long as all duplicates of a vertex have the same normal, you should get smooth shading. Check that (i) the duplicated normals are indeed identical, and (b) the shader is actually doing Gouraud shading and not, say, flat shading (incorrect setting of glShadeModel or interpolation qualifier). – Rahul May 23 '17 at 2:06
• If you just want to know whether this is possible, this seems ready to answer, and @Rahul's comment contains a good start on such an answer. If you want to know why your specific code isn't working, we'd need to see it in order to investigate it. – trichoplax May 23 '17 at 10:17
• @Rahul Thank you for your hints. I check the normals and the glShadeModel and both are set correctly. I also wrote the model into an Wavefront obj file and this also results in strange lightning. Here is an image of the model imgur.com/a/JjLxp. – Tim Rolff May 23 '17 at 12:11
• Oh..could it be that you are seeing "Mach band-ish" related" artefacts that appear when you get a discontinuity in the 1st derivative of the shading? Basically, the human visual system amplifies these discontinuities (probably to save our ancestors being eaten by lions). – Simon F May 23 '17 at 16:12
• Going by your image, it looks like you have Gouraud shading working perfectly well. The problem is a combination of skinny triangles and poorly estimated normals, making it look like the shading is not smooth when really it is just varying rapidly over a narrow region. – Rahul May 24 '17 at 5:06
After thinking about it for some days, I came up with a proof sketch (which is hopefully correct) that it is possible to use Gouraud shading on disconnected triangles, provided they share an edge with the same normals, as in the picture. The idea is simply to check that there is no discontinuity in intensity between the triangles by approaching the shared edge from each side.

Given two triangles $T_1: (P_1,P_2,P_3)$ and $T_2: (P_1', P_2', P_3')$ which share an edge, assume w.l.o.g. that this edge is formed by $(P_2,P_3)$ and $(P_2', P_3')$. Because Gouraud shading only computes the light intensity at the vertex positions and interpolates in between, it is safe to assume that the intensity at $P_2$ is the same as at $P_2'$, and likewise for $P_3$ and $P_3'$. Since the values inside each triangle are computed correctly, it only remains to show that there is no discontinuity across the edge. Interpolating inside a triangle with barycentric coordinates, the intensity at any point is given by
\begin{align*} I &= aI_2 + bI_3 + (1 - (a+b)) I_1\\ I' &= a'I_2' + b'I_3' + (1 - (a'+b')) I_1' \end{align*}
To move from $I_1$ (respectively $I_1'$) toward the edge, construct a sequence by setting $a_n = \frac{c}{n+1}, b_n=\frac{d}{n+1}$ with $c + d = 1$, and analogously for $a_n', b_n'$.

Then the intensity is
\begin{align*} I_n &= \frac{c}{n+1} I_2 + \frac{d}{n+1} I_3 + \left(1 - \frac{c+d}{n+1}\right) I_1\\ I_n' &= \frac{c'}{n+1}I_2' + \frac{d'}{n+1}I_3' + \left(1 - \frac{c'+d'}{n+1}\right) I_1' \end{align*}
Approaching the edge,
\begin{align*} &\lim\limits_{n\rightarrow 0^+} I_n = cI_2 + dI_3\\ &\lim\limits_{n\rightarrow 0^+} I_n' = c'I_2' + d'I_3' \end{align*}
Now use the requirement that $I_n$ and $I_n'$ are evaluated at the same position $P_n$ when it lies on the edge, with
\begin{align*} P_n &= \frac{c}{n+1} P_2 + \frac{d}{n+1} P_3 + \left(1 - \frac{c+d}{n+1}\right) P_1\\ P_n' &= \frac{c'}{n+1}P_2' + \frac{d'}{n+1}P_3' + \left(1 - \frac{c'+d'}{n+1}\right) P_1' \end{align*}
Together with the fact that $P_2 = P_2'$ and $P_3 = P_3'$, it follows that
\begin{align*} &\lim\limits_{n\rightarrow 0^+} P_n = cP_2 + dP_3\\ &\lim\limits_{n\rightarrow 0^+} P_n' = c'P_2 + d'P_3 \end{align*}
and, using $d = 1 - c$ and $d' = 1 - c'$,
\begin{align*} &\lim\limits_{n\rightarrow 0^+} P_n = cP_2 + (1-c)P_3\\ &\lim\limits_{n\rightarrow 0^+} P_n' = c'P_2 + (1-c')P_3 \end{align*}
Equating the two limit positions gives $c = c'$, and therefore the final result:
\begin{align*} \lim\limits_{n\rightarrow 0^+} I_n = cI_2 + (1-c)I_3 = c'I_2' + (1-c')I_3' = \lim\limits_{n\rightarrow 0^+}I_n' \end{align*}
Thereby it is possible to use Gouraud shading over disconnected triangles, as in my case, because the intensity on the edge is the same on both triangles.
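To complement the proof sketch, here is a small numerical check (mine — the positions and intensities are arbitrary made-up values): barycentric interpolation evaluated from either triangle gives identical intensities at points on the shared edge.

```python
import numpy as np

def bary_interp(P, A, B, C, Ia, Ib, Ic):
    # solve P = a*A + b*B + (1 - a - b)*C for the barycentric weights (a, b)
    M = np.column_stack([A - C, B - C])
    a, b = np.linalg.solve(M, P - C)
    return a * Ia + b * Ib + (1.0 - a - b) * Ic

P2, P3 = np.array([0.0, 0.0]), np.array([1.0, 0.0])   # shared edge
P1  = np.array([0.3,  1.0])                           # apex of T1
P1p = np.array([0.7, -1.0])                           # apex of T2, on the other side
I1, I2, I3, I1p = 0.2, 0.9, 0.4, 0.6                  # vertex intensities; I2, I3 shared

for c in np.linspace(0.0, 1.0, 11):
    P = c * P2 + (1.0 - c) * P3                       # a point on the shared edge
    i1 = bary_interp(P, P1,  P2, P3, I1,  I2, I3)     # evaluated inside T1
    i2 = bary_interp(P, P1p, P2, P3, I1p, I2, I3)     # evaluated inside T2
    assert abs(i1 - i2) < 1e-12                       # no jump across the edge
```

On the edge the weight of the opposite vertex vanishes, so the result depends only on the shared vertex intensities — exactly the $cI_2 + (1-c)I_3$ limit above.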
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8755357265472412, "perplexity": 1066.5739653945618}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256980.46/warc/CC-MAIN-20190522223411-20190523005411-00042.warc.gz"} |
https://www.jiskha.com/users?name=Randy | # Randy
Popular questions and responses by Randy
1. ## Math
Find the exact value of sec(7π/4) without a calculator.
2. ## Chemistry
A selenium atom (Se) would form its most stable ion by the 1. loss of 2 electrons. 2. gain of 1 electron. 3. loss of 1 electron. 4. loss of 7 electrons. 5. gain of 2 electrons. Help please and tell me how you got the answer. Thanks!
3. ## Statistics
A box contains five numbered balls (1,2,2,3 and 4). We will randomly select two balls from the box (without replacement). (a) The outcome of interest is the number on each of the two balls we select. List the complete sample space of outcomes. (b) What is
4. ## math
the absolute value of an integer is always greater than the integer. True or False?
5. ## Physics
Two horizontal forces, $\vec{F_1}$ and $\vec{F_2}$, are acting on a box, but only $\vec{F_1}$ is shown in the drawing. $\vec{F_2}$ can point either to the right or to the left. The box moves only along the x axis. There is no friction between the box and the surface. Suppose that
6. ## ALGEBRA
What are the odds in favor of drawing face card from an ordinary deck of cards? to
7. ## ALGEBRA
The product of two consecutive positive even numbers is 528. What are the numbers? (Enter solutions from smallest to largest.) and
8. ## geometry
The altitude of an equilateral triangle is $7\sqrt{3}$ units long. The length of one side of the triangle is ____ units. Choices: $7$; $14$; $14\sqrt{3}$; $\frac{7\sqrt{3}}{2}$
9. ## chemistry
Isotonic saline is 0.89% NaCl (w/v). Suppose you wanted to make 1.0 L of isotonic solution of NH4Cl. What mass of NH4Cl would you need? Question 23 options: 1.6 g 8.1 g 8.9 g 54 g
10. ## Trig / Calc
What angle (in degrees) corresponds to 18.96 rotations around the unit circle? Enter the exact decimal answer.
11. ## geometry
Ray OX bisects angle AOC and angle AOX =42 degrees, angle AOC is: 42 degrees 84 degrees 21 degrees 68 degrees
12. ## Chemistry
What is the percent by mass of C in methyl acetate (C3H6O2)? The molecular weight of carbon is 12.0107 g/mol, of hydrogen 1.00794 g/mol, and of oxygen 15.9994 g/mol. Answer in units of %. Please work it out for me so I totally understand it. Thanks!
13. ## Word Problem
Assume that the mathematical model C(x) = 16x + 130 represents the cost C, in hundreds of dollars, for a certain manufacturer to produce x items. How many items x can be manufactured while keeping costs between $525,000 and $781,000? Thanks in advance!
14. ## Statistics/Probability
So I tried solving the first one and apparently failed miserably, attempted both twice and got it wrong each time and I have one submission attempt left, so any help is definitely appreciated! 1. Hotels R Us has kept the following recordes concerning the
15. ## physics
A golfer hits a shot to a green that is elevated 3.0 m above the point where the ball is struck. The ball leaves the club at a speed of 16.8 m/s at an angle of 30.0¢ª above the horizontal. It rises to its maximum height and then falls down to the green.
16. ## Physics
To move a large crate across a rough floor, you push on it with a force at an angle of 21 below the horizontal, as shown in the figure. Find the acceleration of the crate if the applied force is 400 , the mass of the crate is 32 and the coefficient of
17. ## chemistry
The solubility of Na3(PO4) is 0.025 M. What is the Ksp of sodium phosphate?
18. ## math
order the numbers from least to greatest. _ 1 4/5, 1.78, 1 5/6, 7/4, 1.7, 1 8/11 I am confused on going back and forth between decimals and fractions.
19. ## Chemistry
Help with lab report Need Help finishing my lab report.KNOWN VALUES ARE V OF HCL=50ML,M=2.188.V OF NAOH=55.3ML,M=2.088.INTIAL TEMP FOR HCL=22.5C,NAOH IS 23C.this is for part A: Mass of final NACL solution assuming that the density of a 1 M Nacl solution is
20. ## Statistics
How do frequency tables, relative frequencies, and histograms showing relative frequencies help us understand sampling distributions? A. They help us to measure or estimate of the likelihood of a certain statistic falling within the class bounds. B. They
21. ## physics
Consider a pair of planets. If the distance between them is decreased by a factor of 5, show that the force between them becomes 25 times as strong.
22. ## Chemistry
What mass of oxygen is consumed when 1.73L of carbon dioxide (STP) are produced in the following equation. C4H8 + 6 O2 > 4 CO2 + 4 H2O please help and show your work. My answer for this problem is 11.12g/O2. I have no confidence in my answer.
23. ## Chemistry
What volume of carbon monoxide gas (at STP) is needed when 1.35 Moles of oxygen gas react completely in the following equation. 2CO + O2 > 2CO2 please show your work I'm having trouble with Gas Stoichiometry.
24. ## ALGEBRA
Select all statements that are true. $\frac{\log_b(A)}{\log_b(B)}=\log_b(A-B)$. If $\log_{1.5}(8)=x$, then $x^{1.5}=8$. $\log(500)$ is the exponent on $10$ that gives $500$. In $\log_b(N)$, the exponent is $N$. If
25. ## math
Ray OX bisects angle AOC and angle AOX =42 degrees, angle AOC is: 42 degrees 84 degrees 21 degrees 68 degrees
26. ## statistics
A USU today survey found that of the gun owners surveyed 275 favor stricter gun laws. Test the claim that the majority (more than 50%) of gun owners favor stricter gun laws. Use a .05 significance level.
27. ## chemistry
How many moles of Cu are needed to react with 3.50 moles of AgNO3?
28. ## Geometry
How do you find the coordinates of the image of J(-7, -3) after the translation (x, y) → (x - 4, y + 6)?
29. ## Pre-Calculous (Trig)
The Singapore Flyer, currently the world's largest Ferris wheel, completes one rotation every 37 minutes. Measuring 150 m in diameter, the Flyer is set atop a terminal building, with a total height of 165 m from the ground to the top of the wheel. When
30. ## chemistry
While in Europe, if you drive 105km per day, how much money would you spend on gas in one week if gas costs 1.10 euros per liter and your car's gas mileage is 22.0mi/gal? Assume that 1euro = 1.26 dollars.
62. ## chemistry
how many grams of NaOH would be needed to make 896mL of a 139 M solution
63. ## chemistry
calculate the molarity of an acetic acid solution if 39.96 mL of the solution is needed to neutralize 136mL of 1.41 M sodium hydroxide. the equation for the reaction is HC2H3O2(aq) + NaOH(aq) > Na+(aq) + C2H3O2(aq) +H2O(aq)
64. ## chemistry
calculate the molarity of an acetic acid solution if 39.96 mL of the solution is needed to neutralize 136mL of 1.41 M sodium hydroxide. the equation for the reaction is HC2H3O2(aq) + NaOH(aq) > Na+(aq) + C2H3O2(aq) +H2O(aq)
65. ## Chemistry
the heat of fusion of water is 335 J/g, the heat of vaporization of water is 2.26 kJ/g, and the specific heat of water is 4.184 J/deg/g. How many grams of ice at 0 degrees C could be converted to steam at 100 degrees C by 9,574 J?
66. ## ALGEBRA
Graph the equation. x = -6
67. ## ALGEBRA
Use the definition of logarithm to simplify each expression. (a) $\log_{3b}(3b)$ (b) $\log_{4b}\left((4b)^6\right)$ (c) $\log_{7b}\left((7b)^{-11}\right)$
68. ## chemistry
the heat of fusion of water is 335 J/g. The heat of vaporization of water is 2.26 kJ/g, the specific heat of ice is 2.05 J/deg/g, the specific heat of steam is 2.08 J/deg/g, and the specific heat of liquid water is 4.184 J/deg/g. How much heat would be needed
69. ## chemistry
the heat of vaporization of water is 2.26kJ/g. how much heat is needed to change 2.55 g of water to steam?
70. ## Chemistry
what volume of ammonia gas will be produced when 8.01 L of hydrogen react completely in the following equation: N2 + 3H2 → 2NH3? This stuff is so confusing because one little word seems to change the entire process for solving.
71. ## Chemistry
what volume of ammonia gas will be produced when 1.26 L of nitrogen react completely in the following equation: N2 + 3H2 → 2NH3? Please, if someone can just give me the order of operations, it will be much appreciated because chemistry is just not my forte.
72. ## ALGEBRA
3x^2 = 17x + 6
73. ## ALGEBRA
Consider the following expression. 3(x - 2) - 9(x^2 + 7x + 4) - 5(x + 8) (a) Simplify the expression.
74. ## ALGEBRA
The Greek God Zeus ordered his blacksmith Hephaestus to create a perpetual water-making machine to fill Zeus' mighty chalice. The volume of Zeus' chalice was reported to hold about one hundred and fifty sextillion gallons (that is a fifteen followed by
75. ## ALGEBRA
Evaluate the given expressions (to two decimal places). (a) log(23.0) (b) log_2(128) (c) log_9(1)
76. ## ALGEBRA
Use the definition of logarithm to simplify each expression. (a) log_(3b)(3b) (b) log_(8b)((8b)^6) (c) log_(10b)((10b)^(-13))
78. ## ALGEBRA
Find a simplified value for x by inspection. Do not use a calculator. (a) log5(25) = x (b) log2(16) = x
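A quick numeric confirmation (the by-inspection answers are 2 and 4, since 5^2 = 25 and 2^4 = 16):

```python
import math

# By inspection: 5**2 = 25 and 2**4 = 16.
print(math.log(25, 5))   # ~2.0
print(math.log(16, 2))   # ~4.0
```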
79. ## ALGEBRA
Contract the expressions. That is, use the properties of logarithms to write each expression as a single logarithm with a coefficient of 1. (a) ln(3) - 2 ln(4) + ln(8) (b) ln(3) - 2 ln(4+8) (c) ln(3) - 2(ln(4) + ln(8))
80. ## ALGEBRA
Solve the equations by finding the exact solution. ln(x) - ln(9) = 3
81. ## ALGEBRA
ln(e) = ln(sqrt(2)/x) - ln(e)
82. ## ALGEBRA
(1/2) log(x) - log(10000) = 4
84. ## ALGEBRA
A seismograph 300 km from the epicenter of an earthquake recorded a maximum amplitude of 5.4 × 10^2 µm. Find this earthquake's magnitude on the Richter scale. (Round your answer to the nearest tenth.)
85. ## ALGEBRA
Find the number of decibels for the power of the sound. Round to the nearest decibel. A rock concert, 5.21 × 10^-6 watts/cm^2
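A hedged sketch of the decibel computation (the threshold intensity I0 = 1e-16 watts/cm^2 is an assumption, the value these textbook problems usually use; it is not given in the question):

```python
import math

I = 5.21e-6   # watts/cm^2, rock concert
I0 = 1e-16    # assumed threshold of hearing, watts/cm^2
dB = 10 * math.log10(I / I0)
print(round(dB))   # ~107 dB
```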
86. ## Algebra II
if y varies directly as x, if y=80 when x=32, find x if y=100
87. ## geometry
The altitude of an equilateral triangle is 7√3 units long. The length of one side of the triangle is ____ units. Choices: 7; 14; 14√3; (7√3)/2
88. ## math
1. find the lateral area of a right prism whose altitude measures 20 cm and whose base is a square with sides 7 cm long. 2. the volume of a rectangular solid is 5376 cubic meters, and the base is 24 meters by 16 meters. find the height of the solid. 3. a
89. ## physics
A jetliner can fly 5.57 hours on a full load of fuel. Without any wind it flies at a speed of 2.43 × 10^2 m/s. The plane is to make a round-trip by heading due west for a certain distance, turning around, and then heading due east for the return trip.
90. ## physics
Relative to the ground, a car has a velocity of 14.4 m/s, directed due north. Relative to this car, a truck has a velocity of 24.8 m/s, directed 52.0° north of east. What is the magnitude of the truck's velocity relative to the ground?
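A sketch of the vector addition for this question (components assumed east = x, north = y; the truck's ground velocity is the sum of its velocity relative to the car and the car's ground velocity):

```python
import math

# v_truck/ground = v_truck/car + v_car/ground
theta = math.radians(52.0)
vtx = 24.8 * math.cos(theta)          # east component of truck rel. car
vty = 24.8 * math.sin(theta) + 14.4   # north component plus car's velocity
speed = math.hypot(vtx, vty)
print(round(speed, 1))   # ~37.2 m/s
```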
91. ## physics
A ball is thrown upward at a speed v0 at an angle of 59.0° above the horizontal. It reaches a maximum height of 8.7 m. How high would this ball go if it were thrown straight upward at speed v0?
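A possible shortcut, hedged: both throws share the same v0, and maximum height scales with the square of the initial vertical velocity component, so H_straight = h / sin^2(theta). A quick numeric sketch:

```python
import math

h_angled = 8.7                 # m, max height when thrown at 59.0 degrees
s = math.sin(math.radians(59.0))
h_straight = h_angled / s**2   # v0^2/(2g) = h / sin^2(theta)
print(round(h_straight, 1))    # ~11.8 m
```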
92. ## physics
A speedboat starts from rest and accelerates at +1.68 m/s2 for 5.43 s. At the end of this time, the boat continues for an additional 8.16 s with an acceleration of +0.475 m/s2. Following this, the boat accelerates at -0.866 m/s2 for 7.54 s. (a) What is the
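The question text is cut off after "(a) What is the", but the kinematics can still be tabulated phase by phase (a sketch, assuming the boat starts from rest and the three accelerations apply back to back):

```python
# Piecewise constant-acceleration kinematics for the three phases.
v, x = 0.0, 0.0
for a, t in [(1.68, 5.43), (0.475, 8.16), (-0.866, 7.54)]:
    x += v * t + 0.5 * a * t**2   # displacement over this phase
    v += a * t                    # velocity at the end of this phase
print(round(v, 2), round(x, 1))   # ~6.47 m/s final velocity, ~188.4 m total
```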
93. ## algebra
what is the slope of the line that passes through (-3,-8) and (0,6)?
94. ## physics
Driving in your car with a constant speed of 12 m/s, you encounter a bump in the road that has a circular cross-section. If the radius of curvature of the bump is 35 m, find the apparent weight of a 63-kg person in your car as you pass over the top of the
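One way to sketch the apparent-weight calculation (assuming g = 9.81 m/s^2; at the top of the bump, gravity minus the normal force supplies the centripetal acceleration, so N = m(g - v^2/r)):

```python
# Apparent weight at the top of a circular bump.
m, v, r, g = 63.0, 12.0, 35.0, 9.81   # kg, m/s, m, m/s^2 (g assumed)
N = m * (g - v**2 / r)
print(round(N))   # ~359 N
```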
95. ## Math
What is the surface area of a right circular cylinder with base circle of radius of 5m and height of the cylinder 10m?
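The total surface area formula 2*pi*r^2 + 2*pi*r*h gives 150*pi square meters here; a one-liner check:

```python
import math

r, h = 5.0, 10.0
area = 2 * math.pi * r * (r + h)   # 2*pi*r^2 + 2*pi*r*h
print(round(area, 2))              # 150*pi, ~471.24 m^2
```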
96. ## statistics
Find z such that 20.3% of the standard normal curve lies to the right of z. Choices: 0.831; 0.533; -0.533; -0.257; 0.257
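A quick check with Python's standard library (20.3% to the right of z means 79.7% to the left, so z is the inverse normal CDF at 0.797):

```python
from statistics import NormalDist

z = NormalDist().inv_cdf(1 - 0.203)
print(round(z, 3))   # ~0.831
```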
97. ## statistics
Assume that about 45% of all U.S. adults try to pad their insurance claims. Suppose that you are the director of an insurance adjustment office. Your office has just received 110 insurance claims to be processed in the next few days. What is the
99. ## Pre-cal
Please determine the following limit if it exists. If the limit does not exist, put DNE. lim_(x→infinity) 2x^3 / (x^2 + 10x - 12). Thanks.
http://math.stackexchange.com/tags/multilinear-algebra/hot | # Tag Info
The symmetrizer $S: \bigotimes^k V \to \bigotimes^k V$ is idempotent. Hence, $\ker(S) = \mathrm{im}(\mathrm{id}-S)$. This is generated by elements of the form $\alpha-{}^\sigma \alpha$, where $\sigma$ is some permutation.
Remember: for finite dimensional spaces, two vectors spaces (over the same field) are isomorphic if and only if they have the same dimension. So, for all these, it suffices to simply find a basis, and therefore conclude the dimension. In particular, we have $$\dim(\mathcal L(V,W)) = \dim(V \otimes W) = \dim(V) \dim(W)$$ When $V$ and $W$ are finite ...
SECTION A: The linearly independent elements of a totally symmetric tensor $\;T_{i_{1}i_{2}\cdots i_{p-1}i_{p}}\;$ (important for the interpretation of the quark theory of baryons in particle physics): $\boldsymbol{3}\boldsymbol{\otimes}\boldsymbol{3}\boldsymbol{\otimes}\boldsymbol{3}= \ldots$
Multiply by $g^{\sigma\tau}$ and use that $$g^{ab}g_{bc} = \delta^a_c,$$ so the right-hand side becomes $d^{\sigma}$, which is what you want. I'll leave the left-hand side to you, since you don't say if you mind raising the index on $A_{\mu\nu\tau}$.
$0$-tensors are just scalars, so the tensor product in this case is just scalar multiplication.
Symbolically, it is an $n\times n$ matrix. Don't expand the $\vec{e}_i$ into coordinates. Just take the determinant according to however you normally do so, and whenever multiplication involves the scalars from below, multiply accordingly, and when it involves a scalar times one of these vectors from the top row, multiply the scalar times the vector ...
I assume that by transposition you mean the dual mapping $T^*$. That is, if $T:V\rightarrow V$ then $T^*:V^*\rightarrow V^*$ in the following manner: $$\left[T^*\alpha\right](v):=\alpha(T(v)).$$ Let $\alpha_1,\dots,\alpha_n\in V^*.$ Fix arbitrary $v_1,\dots,v_n\in V.$ Then $$\left[\bigwedge^n T^*(\alpha_1\wedge\dots\wedge\alpha_n)\right](v_1\wedge\dots\wedge v_n) = \dots$$
https://en.m.wikibooks.org/wiki/Fundamentals_of_Transportation/Mode_Choice/Solution2 | # Fundamentals of Transportation/Mode Choice/Solution2
Problem:
Prior to the collapse, there were two modes serving the Marcytown-Rivertown corridor: driving alone (d) and carpool (c), which takes advantage of an uncongested carpool lane. The utilities of the modes are as given below.
${\displaystyle U_{d}=-t_{d}\,\!}$
${\displaystyle U_{c}=-12-t_{c}\,\!}$
where t is the travel time
Assuming a multinomial logit model, and
A) That the congested time by driving alone was 15 minutes and time by carpool was 5 minutes. What was the modeshare prior to the collapse?
B) How would you interpret the constant of -12 in the expression for Uc?
C) After the collapse, because of a shift in travelers from other bridges, the travel time by both modes increased by 12 minutes. What is the post-collapse modeshare?
D) The transit agency decides to run a bus to help out the commuters after the collapse. Again assuming the multinomial logit model holds, without knowing how many travelers take the bus, what proportion of travelers on the bus previously took the car? Why? Comment on this result. Does it seem plausible?
Solution:
A) That the congested time by driving alone was 15 minutes and time by carpool was 5 minutes. What was the modeshare prior to the collapse?
Ud = -15
Uc = -17
Pd = 0.88
Pc = 0.12
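The arithmetic behind these shares, as a small sketch of the multinomial logit formula:

```python
import math

# Multinomial logit shares from the utilities U_d = -15, U_c = -17.
ud, uc = -15.0, -17.0
pd = math.exp(ud) / (math.exp(ud) + math.exp(uc))
print(round(pd, 2), round(1 - pd, 2))   # 0.88 0.12
```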
B) How would you interpret the constant of -12 in the expression for Uc?
The constant -12 is an alternative-specific constant. In this example, even if the travel times for the drive and carpool modes were the same, the utility of the carpool mode would be lower than that of driving alone because of the constant -12. This means an individual is less likely to choose the carpool mode, since it has the lower utility.
C) After the collapse, because of a shift in travelers from other bridges, the travel time by both modes increased by 12 minutes. What is the post-collapse modeshare?
In this question, since the travel time for both modes increases by the same 12 minutes, the post-collapse mode share is the same as before: logit choice probabilities depend only on differences in utilities, and those differences are unchanged.
D) The transit agency decides to run a bus to help out the commuters after the collapse. Again assuming the multinomial logit model holds, without knowing how many travelers take the bus, what proportion of travelers on the bus previously took the car? Why? Comment on this result. Does it seem plausible?
The logit model will indicate that 88% of the bus riders previously drove alone, due to the underlying Independence of Irrelevant Alternatives (IIA) property. The brief implication of IIA is that when you add a new mode, it draws from the existing modes in proportion to their existing shares. This doesn't seem plausible, since the bus is more likely to draw from the carpool mode than from the drive-alone mode.
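A small sketch of the IIA point (the bus utility U_b = -16 used here is an arbitrary assumption, not from the text): whatever U_b is, the drive-to-carpool ratio is unchanged by adding the bus.

```python
import math

# Adding a bus mode leaves the drive/carpool ratio at e^(-15)/e^(-17) = e^2.
u = {'drive': -15.0, 'carpool': -17.0, 'bus': -16.0}
denom = sum(math.exp(v) for v in u.values())
p = {k: math.exp(v) / denom for k, v in u.items()}
ratio = p['drive'] / p['carpool']
print(round(ratio, 2))   # e^2, ~7.39, same as before the bus was added
```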
https://en.wikipedia.org/wiki/Mason_Equation | # Mason equation
The Mason equation is an approximate analytical expression for the growth (due to condensation) or evaporation of a water droplet—it is due to the meteorologist B. J. Mason.[1] The expression is found by recognising that mass diffusion towards the water drop in a supersaturated environment transports energy as latent heat, and this has to be balanced by the diffusion of sensible heat back across the boundary layer (and by the heating of the drop itself, but for a cloud-sized drop this last term is usually small).
## Equation
In Mason's formulation the changes in temperature across the boundary layer can be related to the changes in saturated vapour pressure by the Clausius–Clapeyron relation; the two energy transport terms must be nearly equal but opposite in sign and so this sets the interface temperature of the drop. The resulting expression for the growth rate is significantly lower than that expected if the drop were not warmed by the latent heat.
Thus if the drop has a size r, the inward mass flow rate is given by[1]
${\displaystyle {\frac {dM}{dt}}=4\pi r_{p}D_{v}(\rho _{0}-\rho _{w})\,}$
and the sensible heat flux by[1]
${\displaystyle {\frac {dQ}{dt}}=4\pi r_{p}K(T_{0}-T_{w})\,}$
and the final expression for the growth rate is[1]
${\displaystyle r{\frac {dr}{dt}}={\frac {(S-1)}{[(L/RT-1)\cdot L\rho _{l}/KT_{0}+(\rho _{l}RT_{0})/(D\rho _{v})]}}}$
where:
* ${\displaystyle S}$ is the supersaturation ratio far from the drop,
* ${\displaystyle L}$ is the latent heat of vaporization of water,
* ${\displaystyle K}$ is the thermal conductivity of air,
* ${\displaystyle D}$ (written ${\displaystyle D_{v}}$ above) is the diffusion coefficient of water vapour in air,
* ${\displaystyle R}$ is the gas constant of water vapour,
* ${\displaystyle T_{0}}$ and ${\displaystyle T_{w}}$ are the temperatures far from the drop and at its surface,
* ${\displaystyle \rho _{0}}$ and ${\displaystyle \rho _{w}}$ are the water-vapour densities far from the drop and at its surface,
* ${\displaystyle \rho _{l}}$ and ${\displaystyle \rho _{v}}$ are the densities of liquid water and of the ambient water vapour, and
* ${\displaystyle r_{p}}$ is the radius of the drop.
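If the whole bracketed denominator is treated as a single constant, so that r dr/dt = ξ, the equation integrates to the parabolic growth law r(t) = sqrt(r0^2 + 2ξt). A minimal numeric sketch (the values ξ = 1e-12 m^2/s and r0 = 1 µm are illustrative assumptions only, not taken from the article):

```python
import math

xi = 1e-12    # m^2/s, illustrative lumped value of (S-1)/[...]
r0 = 1e-6     # m, assumed initial drop radius
r = lambda t: math.sqrt(r0**2 + 2 * xi * t)   # r dr/dt = xi integrated
print(r(100) * 1e6)   # radius in micrometres after 100 s, ~14.2 um
```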
## References
1. ^ a b c d B. J. Mason, The Physics of Clouds (1957), Oxford Univ. Press.
http://www.logic.univie.ac.at/2017/Talk_03-09_a.html | # 2017 seminar talk: Projective sets and inner models
Talk held by Yizheng Zhu (Universität Münster, Germany) at the KGRC seminar on 2017-03-09.
### Abstract
The collection of projective sets of reals is the smallest one containing all the Borel sets and closed under complements and continuous images. The Axiom of Projective Determinacy (PD) is the correct axiom that settles the regularity properties of projective sets. Inner model theory provides a systematic way of studying the projective sets under PD. In this talk, we describe some recent progress in this direction. A key theorem is the following inner-model-theoretic characterization of the canonical model associated to $\Sigma^1_3$:
Let $\mathcal{O}_{\Sigma^1_{3}}$ be the universal $\Sigma^1_3$ subset of $u_\omega$ in the sharp codes for ordinals in $u_\omega$. Let $M_{1,\infty}$ be the direct limit of iterates of $M_1$ via countable trees and let $\delta_{1,\infty}$ be the Woodin cardinal of $M_{1,\infty}$. Then $M_{1,\infty}|\delta_{1,\infty} = L_{u_\omega}[\mathcal{O}_{\Sigma^1_{3}}]$.
This theorem paves the way for further study of $\Sigma^1_3$ sets using inner model theory. It also generalizes to arbitrary $\Sigma^1_{2n+1}$ and $M_{2n-1,\infty}$.
https://www.physicsforums.com/threads/ising-cell-hamiltonian.627842/ | # Ising cell hamiltonian
1. Aug 13, 2012
### LagrangeEuler
I don't understand this idea. For example, we have a cubic crystal with many unit cells. We define the spin variable at the center of a cell as $$S_c$$, and the spin variables of the nearest-neighbour cells as $$S_{c+r}$$. So the cell hamiltonian is
$$\hat{H}=\frac{1}{2}J\sum_{c}\sum_{r}(S_c-S_{c+r})^2+\sum_cU(S_c^2)$$
This model simulates a uniaxial ferromagnet.
I have three questions:
1. What's the difference between the Ising model and the 1d Heisenberg model?
2. Why is this model better than the Ising model without cells, where we just have interacting spins?
$$\hat{H}=-J\sum_iS_{i}S_{i+1}$$
3. What does $$\sum_cU(S_c^2)$$ mean physically?
Thanks.
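(A quick numeric aside on question 2, not from the thread itself: for fixed-length Ising spins S = ±1, the identity (S_c − S_{c+r})² = 2 − 2 S_c S_{c+r} makes the cell Hamiltonian differ from the plain Ising one only by a constant, so the cell form adds something only once S_c may vary continuously and the on-site term U(S_c²) controls its magnitude. A sketch, assuming J = 1, an open 1d chain, and dropping U:)

```python
import itertools

J = 1.0

def e_ising(s):                      # -J * sum s_i s_{i+1}, open chain
    return -J * sum(a * b for a, b in zip(s, s[1:]))

def e_cell(s):                       # (J/2) * sum (s_i - s_{i+1})^2
    return 0.5 * J * sum((a - b) ** 2 for a, b in zip(s, s[1:]))

# For every +-1 configuration the two energies differ by the same constant
# (one unit of J per bond, i.e. 4 for a 5-site open chain).
diffs = {e_cell(s) - e_ising(s) for s in itertools.product((-1, 1), repeat=5)}
print(diffs)   # {4.0}
```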
2. Aug 13, 2012
### cgk
If I am not mistaken, in the Heisenberg model the spins can have an arbitrary polarization, not just up or down (and there are Heisenberg models with different J parameters in different directions, like the XXZ Heisenberg model). The "1d" in the model then applies to the /lattice/ dimension. That is, for the 1d Heisenberg model, you might think of a one-dimensional lattice in 3d space, where the spins themselves are not one-dimensional.
I have little knowledge of spin models, but the model you wrote down looks to me like it includes some variant of a "Hubbard U" term. In electron models, such terms represent a strong local interaction which gives a penalty for two electrons occupying the same lattice site (a kind of screened Coulomb interaction, if you wish). In such models the U is used to tune a system between weakly correlated limits (simple metals) to strongly correlated limits (Mott-insulating anti-ferromagnet), and maybe other phases depending on the lattice type.
In the Hubbard case, the U term is normally written as $$U\cdot n_{c\uparrow}\cdot n_{c\downarrow}$$ where the n are the up/down spin occupation number operators of electrons (thus giving only a contribution if there are both up and down electrons on the same site). But this form can be re-formulated into a similar form involving the total electron number operator and squared spins: $$\langle n_{c\uparrow}\cdot n_{c\downarrow}\rangle=\langle n\rangle - 2/3 \langle S_c^2\rangle$$ (or something to that degree..don't nail me on the prefactors). In spin models the site occupation is of course normally fixed at one spin per site, but maybe the squared spin term still can fulfill a similar role.
3. Aug 13, 2012
### LagrangeEuler
Thanks for the answer. So I can say that in the 1-dimensional Heisenberg model the spin can point in any direction.
I'm not quite sure about the Hubbard model, but I will look into it.
Do you maybe know the answer to the second question?
http://math-doc.ujf-grenoble.fr/cgi-bin/sps?kwd=MARTIN+BOUNDARY&kwd_op=contains | Browse by: Author name - Classification - Keywords - Nature
3 matches found
II: 11, 175-199, LNM 51 (1968)
MEYER, Paul-André
Compactifications associées à une résolvante (Potential theory)
Let $E$ be a locally compact space, $(U_p)$ be a submarkovian resolvent, with a potential kernel $U=U_0$ which maps $C_k$ (the continuous functions with compact support) into continuous bounded functions. Let $F$ be a compact space containing $E$ as a dense subset, but inducing possibly a coarser topology. It is assumed that all potentials $Uf$ with $f\in C_k$ extend to continuous functions on $F$, and that points of $F$ are separated by continuous functions on $F$ whose restriction to $E$ is supermedian. Then it is shown how to extend the resolvent to $F$ and imitate the construction of a Ray semigroup and a strong Markov process. This was an attempt to compactify the space using only supermedian functions, not $p$-supermedian for all $p>0$. An application to Markov chains is given
Comment: This method of compactification suggested by Chung's boundary theory for Markov chains (similarly Doob, Trans. Amer. Math. Soc., 149, 1970) never superseded the standard Ray-Knight approach
Keywords: Resolvents, Ray compactification, Martin boundary, Boundary theory
Nature: Original
V: 19, 196-208, LNM 191 (1971)
MEYER, Paul-André
Représentation intégrale des fonctions excessives. Résultats de Mokobodzki (Markov processes, Potential theory)
Main result: the convex cone of excessive functions for a resolvent which satisfies the absolute continuity hypothesis is the union of convex compact metrizable "hats" in a suitable topology, and therefore has the integral representation property. The original proof of Mokobodzki, self-contained and unpublished, is given here
Comment: See Mokobodzki's work on cones of potentials, Séminaire Bourbaki, May 1970
Keywords: Minimal excessive functions, Martin boundary, Integral representations
Nature: Exposition
IX: 13, 305-317, LNM 465 (1975)
FÖLLMER, Hans
Phase transition and Martin boundary (Miscellanea)
To be completed
Comment: To be completed
Keywords: Random fields, Martin boundary
Nature: Original
https://mathoverflow.net/questions/254391/does-the-p-part-of-the-level-of-a-newform-appear-in-its-attached-p-adic-repr | Does the $p$-part of the level of a newform appear in its attached $p$-adic representation?
Let $f$ be a newform of weight $2$ on $\Gamma_0(Np^r)$, with $N$ coprime to $p$, and consider its $p$-adic Galois representation $$\rho:G_{\mathbb Q}\longrightarrow GL_2(\bar{\mathbb Q}_p)$$ It's a theorem of Carayol that the prime-to-$p$ conductor $N(\rho)$ of $\rho$ equals $N$. Hence, one can recover $N$ from $\{\rho\vert_{I_q}\}_{q\mid N}$.
The question is:
Can $r$ be read off from $\rho\vert_{I_p}$?
• Could $p^r$ be the conductor of the Weil-Deligne representation attached to $\rho|G_{\mathbb{Q_p}}$ by Fontaine? – Aurel Nov 11 '16 at 0:59
• What Aurel says is true -- a theorem of Takeshi Saito. – wrigley Nov 11 '16 at 17:19
• @wrigley It was a guess; thanks for confirming it. Do you have a precise reference? – Aurel Nov 14 '16 at 18:51
• "modular forms and p-adic Hodge theory" 1997 Inventiones. – wrigley Nov 25 '16 at 12:04
The answer to the question in the title is yes, as explained in the last paragraph below.
However, under a literal interpretation of "can" (implying actual feasibility), I believe the answer to the question in the body of the text is no.
Assume for instance that $f$ and $g$ are two $p$-ordinary eigencuspforms ($p$-ordinary means that, under a fixed embedding of $\bar{\mathbb Q}$ into $\bar{\mathbb Q}_{p}$, the $p$-adic valuation of $a_{p}(f)$ and of $a_{p}(g)$ is zero), that $\pi(f)_p$ (the automorphic representation of $\operatorname{GL}_{2}(\mathbb Q_{p})$ attached to $f$) is unramified principal series and that $\pi(g)_{p}$ (same notation) is unramified Steinberg.
Then the conductor of $f$ at $p$ is trivial ($r=0$) whereas the conductor at $p$ of $g$ is $p$ ($r=1$). However, after restriction to $I_{p}$, both $\rho_f$ and $\rho_g$ are equivalent to $$\begin{pmatrix} 1&*\\ 0&\chi^{-1} \end{pmatrix}$$ where $\chi$ is the cyclotomic character. I don't know how to distinguish between them using the class of the extension of $\chi^{-1}$ by $1$ (the $*$, so to speak) and it seems hard to me though I admit I also don't know that it is definitely not possible.
One can construct many such examples of ambiguous $I_{p}$-representation, so I doubt one can reconstruct $p^{r}$ in general. As more generally the representation $\rho_f|G_{\mathbb Q_{p}}$ is the representation $V_{2,a_p}$ in the notation of C.Breuil Sur quelques représentations modulaires et $p$-adiques de $\operatorname{GL}_{2}(\mathbb Q_{p})$ II (Journal de l'IMJ, 2003) it might be a good idea to have a look at this article if you want a definite answer.
As Aurel points out, $p^{r}$ is the conductor of the Weil-Deligne representation attached to $D_{\operatorname{pst}}(\rho_f|G_{\mathbb Q_{p}})$ so you certainly can reconstruct $r$ from $\rho_{f}|G_{\mathbb Q_{p}}$ and what you are missing in your setting are the eigenvalues of the image of $\operatorname{Fr}(p)$ through $\rho_f$. In the case above for instance, both eigenvalues would have the same $p$-adic valuations in the first case and different valuations in the second.
• That's really not that hard. Such a class is an element of $H^1(\mathbb Q_p, \chi)$, i.e., by Kummer theory, an element of $\mathbb Q_p^ \times \otimes \mathbb Q_p = \mathbb Q_p \times \mathbb Q_p$ where the first coordinate comes from the valuation and the second comes from the logarithm. The unramified extensions necessarily form a one-dimensional subspace. I claim the image of the logarithm map is the unramified one. – Will Sawin Nov 11 '16 at 3:55
• A really silly way to check this is to note that every other one-dimensional subspace contains the image of an element $q$ in $\mathbb Q_p$ with positive valuation, and thus is the $p$-adic Tate module of the elliptic curve with uniformization $\mathbb Q_p^\times / q$, which has multiplicative reduction at $p$ and hence has ramified Weil-Deligne representation. – Will Sawin Nov 11 '16 at 3:56
• Oh, I see. I'll let my example stand for the moment though, and maybe someone can give a definite answer. – Olivier Nov 11 '16 at 4:55
https://www.physicsforums.com/threads/trig-problem.162369/ | # Trig Problem
1. Mar 24, 2007
### ][nstigator
Hey guys, got a small problem and need some help
The problem statement, all variables and given/known data
Show that
$$\arctan{\left( \frac{x}{2} \right)} = \arccos{\left(\frac{2}{\sqrt{4+x^2}}\right)} \quad \mbox{for } x \in \mathbb{R}$$
The attempt at a solution
Honestly Im pretty stumped from the very beginning....
The only thing I can currently think of to do is go...
$$\arctan{\frac{x}{2}} = \frac{\arcsin{\frac{x}{2}}}{\arccos{\frac{x}{2}}}$$
but Im not sure if that is even correct....
Even still, if that is valid, Im still pretty unsure what Im meant to do next..
Any hints to point me in the right direction would be much appreciated
I hope I did the Latex stuff right, its my first time using it..
Last edited: Mar 25, 2007
2. Mar 24, 2007
### symbolipoint
Draw a right-triangle and label the sides until you can form a triangle which give s the relationship that you are looking for in your equation. This may give you another formulable relationship which permits you to solve the problem.
3. Mar 24, 2007
### symbolipoint
I misunderstood the meaning of the problem. You are probably looking for identity relationships to PROVE that your given relation is an identity. Of course, when you draw a right-triangle, you will be able to derive the relationship but you are trying to use a trail of identities to prove this. I wish I could offer better help.
The best that I could do right now is to draw a triangle; I label one of the non-right angles; the side opposite I give as "x"; the side between the referenced angle and the right angle I give as length 2; the Pythagorean theorem gives the hypotenuse as (4 + x^2)^(1/2). Continued reference to this triangle gives the arccos expression which you wanted -------- I am not well with being able to prove as you wanted, but maybe you might be able to now?
4. Mar 25, 2007
### VietDao29
Are you sure you've copied the problem correctly?
What if $$x = -2$$?
$$\arctan \left( \frac{-2}{2} \right) = \arctan (-1) = -\frac{\pi}{4}$$
Whereas:
$$\arccos \left( \frac{2}{\sqrt{4 + (-2) ^ 2}} \right) = \arccos \left( \frac{1}{\sqrt{2}} \right) = \frac{\pi}{4}$$
So:
$$\arctan \left( \frac{-2}{2} \right) \neq \arccos \left( \frac{2}{\sqrt{4 + (-2) ^ 2}} \right)$$ (Q.E.D)
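VietDao29's counterexample is easy to check numerically. The following quick sketch (an editor's addition, not part of the original thread) uses Python's `math` module:

```python
import math

x = -2.0
lhs = math.atan(x / 2)                    # arctan(-1) = -pi/4
rhs = math.acos(2 / math.sqrt(4 + x**2))  # arccos(1/sqrt(2)) = +pi/4
print(lhs, rhs)                           # opposite signs, so the identity fails here

x = 2.0
lhs = math.atan(x / 2)
rhs = math.acos(2 / math.sqrt(4 + x**2))
print(math.isclose(lhs, rhs))             # True: the identity does hold for x >= 0
```

Since arccos always returns values in $[0,\pi]$ while $\arctan(x/2)$ is negative for $x<0$, the claimed identity can only hold for $x \ge 0$.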
5. Mar 25, 2007
### ][nstigator
Yup, I definitely copied the problem down correctly... weird huh :(
6. Mar 25, 2007
### f(x)
Either you are not working in principal values or the question is copied down incorrectly,
because $\cos \frac{\pi}{4} = \cos\left(-\frac{\pi}{4}\right)$
but the inverse doesn't hold, as $\cos^{-1}$ has principal range $[0,\pi]$.
7. Mar 25, 2007
### ][nstigator
Show that $$\arctan\left( \frac{x}{2} \right) = \arccos\left(\frac{2}{\sqrt{4+x^2}}\right) \quad \text{for } x \in \mathbb{R}$$
...is the question, character for character
8. Mar 25, 2007
### VietDao29
Well, then, the problem cannot be proven. Because, it's... you know, false.
http://motls.blogspot.co.uk/2012/04/royal-status-of-11-dimensional.html

## Saturday, April 07, 2012
### The royal status of 11-dimensional supergravity
A few days ago, I discussed the maximally supersymmetric gauge theory, the Darth Vader theory, which is exceptional.
I mentioned that the $\mathcal{N}=4$ gauge theory in $d=4$ dimensions is a beautiful, fully consistent (at the quantum level) dimensional reduction of the 10-dimensional supersymmetric gauge theory which has many hidden virtues and symmetries. All these theories have 16 supercharges.
A portrait of eleven-dimensional supergravity. As Dilaton knows, "elf" means "eleven". ;-)
There's one more royal family that is, in some sense, comparably fundamental as the family descended from the ten-dimensional supersymmetric gauge theory. This additional family of theories has 32 real supercharges and not just 16 of them.
The master theory's spacetime has $d=11$ dimensions rather than $d=10$. It is not renormalizable and none of its compactifications is fully consistent at the quantum level, either. These theories are only meaningful at a classical level (or in some low-energy expansion). All of these theories contain gravity and they are limits of string/M-theory which had to be the case because string/M-theory is the only consistent quantum theory of gravity.
Even though the eleven-dimensional gravity is just a low-energy limit of the full string/M-theory, it (and its compactifications) already knows quite something about the wisdom of string/M-theory in its most symmetric incarnation.
First of all, the previous text looked at gauge theories. We saw that supersymmetry implied some pairing between fermionic and bosonic degrees of freedom. Because the fermions have to be organized as $j=1/2$ spinors whose number of components grows exponentially, it's only possible to match them for $d=3,4,6,10$. In particular, $d=10$ was the maximum dimension in which a supersymmetric and pure gauge theory may exist.
#### Allowing gravity
Now we will allow massless fields with spin up to $j\leq 2$. We know that Einstein's gravity predicts gravitational waves. Their energy has to be quantized i.e. these waves have to be composed of gravitons of energy $E=\hbar\omega$. The spin of the gravitons is $j=2$ essentially because the metric field $g_{\mu\nu}$ has two indices.
If we linearize the metric tensor around a background, e.g. the flat Minkowski background, we obtain a dynamical spin-2 field. Much like spin-1 fields, it contains timelike components (in particular, components with an odd number of indices being timelike) which create negative-norm excitations. Much like in the Yang-Mills case, there has to be a gauge symmetry that decouples these dangerous excitations that could make the probabilities of many processes negative if they weren't killed.
For spin-1 fields, the possible gauge symmetries that do this job are Yang-Mills symmetries; we may choose the gauge group which also influences the number of components of the gauge fields (via the dimension of the gauge group). However, the gauge field of gravity – the metric tensor – has a spin that is greater by one. So the conserved charges – the generators of the symmetries – have to carry an additional Lorentz index. The generator of this symmetry can't be an "electric charge" or its non-Abelian generalization; it has to be a spacetime vector.
The only conserved vector that still allows realistic theories is the energy-momentum vector, i.e. the integral of the stress-energy tensor. Its conservation is linked to the translational symmetries of the spacetime via Emmy Noether's theorem; however, we are making this symmetry local or gauge so the translational symmetry is going to be promoted to all diffeomorphisms. If we required the conservation of yet another vector or even higher-spin tensors, the theory would be so constrained by the conservation laws (so many components) that it would essentially have to be non-interacting.
(Winding numbers of macroscopic strings and membranes may be a loophole and may be additional conserved quantities aside from the energy-momentum vector; this loophole is possible because pointlike excitations of the theory – and those are studied by the Coleman-Mandula theorem – carry vanishing values of these charges.)
If you think about these words, you will see that $j=2$ is the maximum spin of a field for which you will be able to propose a reasonable gauge symmetry that kills the negative-norm excitations while allowing at least some interactions in the theory. Moreover, the $j=2$ massless field has to be unique. There's only one translational symmetry of the spacetime so there's only one stress-energy tensor and it can only couple to one "gauge field" for this symmetry, the metric tensor.
The spin $j=5/2$ would already give too many components.
#### Maximizing the number of supercharges
Fine. So we want to construct a supersymmetric theory of massless particles that has the maximum spacetime dimension and the maximum number of supercharges. Let me say in advance that we will be led to $d=11$ dimensions and $N=32$ supercharges by this maximization procedure. Why?
Imagine a supermultiplet of massless particles in such a theory. One-half of the supercharges i.e. $N/2$ of them will annihilate the whole supermultiplet; all of the components will be invariant under one-half of the supercharges. The rest of the supercharges, $N/2$ real supercharges, may be recombined into $N/4$ raising operators and $N/4$ lowering operators; the latter group may be composed of the Hermitian conjugates of the former group. Each of these complexified supersymmetry generators changes a chosen projection of the spin by $\Delta j=\pm 1/2$.
We want to maximize the number of supercharges so we must allow the set of $N/4$ raising operators to be able to climb from the minimum allowed component of the spin, e.g. $j_{12}$, i.e. from $j_{12}=-2$ (graviton), to the maximum allowed one, $j_{12}=+2$, by those $\Delta j=1/2$ steps. We therefore see that $\frac{N}{4} = \frac{(+2)-(-2)}{\frac{1}{2}}=8,\qquad N=32.$ If the supersymmetry algebra is capable of climbing from one extreme component of the graviton to the opposite one, we must have $N=32$ real supercharges. Now, recall that the Dirac spinor in $d=10$ has $2^{10/2}=32$ components so in $d=10$, we actually find a 32-component spinor. However, the Dirac spinor is reducible, to the left-handed and right-handed chiral components, if you wish. That's not really a problem but it's a sign that we may go higher.
And indeed, the spinor in $d=11$ has 32 real components. There's no chirality in $d=11$ because eleven is an odd number. The $d=11$ spinor simply becomes the Dirac spinor if you dimensionally reduce the theory by one dimension. So $d=11$ is the right number of dimensions in which we should expect a nice maximally supersymmetric theory of gravity or supergravity.
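The counting above is simple enough to verify mechanically; here is a small sketch of the arithmetic (an editor's check, not part of the original post):

```python
# Climb from helicity -2 to +2 in half-integer steps: that fixes N.
j_max, j_min, step = 2.0, -2.0, 0.5

raising = int((j_max - j_min) / step)   # 8 complexified raising operators
N = 4 * raising                         # N/4 raising operators => N = 32 supercharges

dirac_d10 = 2 ** (10 // 2)              # components of the Dirac spinor in d = 10
spinor_d11 = 2 ** ((11 - 1) // 2)       # real components of the d = 11 spinor

print(N, dirac_d10, spinor_d11)         # 32 32 32
```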
#### Representation of the massless multiplet
So far I haven't said anything about interactions and even in this section, we will assume that we're considering the free limit of the theory only. The question we want to answer is how do the excitations of the massless gravitational supermultiplet transform under spacetime rotations.
In $d=11$, the massless particles move in a direction. Their nonzero energy prevents us from mixing time with other coordinates while preserving the energy-momentum vector of the particle; the direction of the particle's motion removes one spatial dimension as well. So there are only $d-2=9$ transverse dimensions that may be rotated into each other so that the energy-momentum vector is unchanged. (There are actually some light-like boosts as well but I only want to discuss compact groups here.)
So the little group is $SO(9)$. As we have already said, the multiplet is annihilated by 16 supercharges while the remaining 16 "active" supercharges play the role of 8 raising and 8 lowering operators. The algebra satisfied by these 16 "active" Hermitian supercharges is $\{Q_a,Q_b\} = \delta_{ab} E, \qquad a,b\in\{1,2,\dots , 16\}$ if we normalize the supercharges so that there's no extra coefficient. The Kronecker delta arises from a Dirac gamma matrix reduced to the simpler 16-dimensional space relevant for the little group. My point is that these 16 "active" supercharges transform as a spinor of the little group, $SO(9)$, because they're a remnant of a spinor of $SO(10,1)$, the Lorentz group of the 11-dimensional theory.
Do you know another way how to look at the anticommutators above? Well, you should. They're the same anticommutators as the algebra of Dirac gamma matrices: $\{\Gamma_a,\Gamma_b\} = \delta_{ab}, \qquad a,b\in\{1,2,\dots, 16\}$ up to the obvious change of the normalization. But we know what we get from the algebra above if we "quantize it", right? We get a 256-dimensional spinor of $SO(16)$ or $Spin(16)$, to be more accurate. It's 256-dimensional because each of the 8 raising operators (which exist besides the 8 lowering operators) may be either applied to the "ground state" or not. The mathematical problem we're solving here is completely $Spin(16)$-symmetric so we know that the states we get by "quantizing" the raising and lowering operators must produce a nice representation of $Spin(16)$. Clearly, it has to be $2^8$-dimensional and it's nothing else than the spinor of $SO(16)$.
Moreover, one may define the operator of chirality $\Gamma_{17}$, to use a notation analogous to $\Gamma_5$ in $d=4$, that anticommutes with the sixteen $\Gamma_a$ matrices. This operator makes it clear that the 256-dimensional representation is reducible; it decomposes at least to 128 components that have $\Gamma_{17}=+1$ and 128 components with $\Gamma_{17}=-1$.
If we return from the $\Gamma_a$ notation to $Q_a$, it's clear that $\Gamma_{17}$ is nothing else than the operator remembering whether a state of the representation is bosonic or fermionic. That's the only conclusion from the fact that $\Gamma_{17}$ anticommutes with all $Q_a$ operators and those are fermionic ones.
We have just determined that the gravitational supermultiplet contains 128 bosonic and 128 fermionic states. We know that they transform as the full Dirac spinor (or the sum of the two chiral spinors) under $SO(16)$. However, the little group $SO(9)$ is "bizarrely" embedded into this $SO(16)$ in such a way that the "vector" ${\bf 16}$ of $SO(16)$ is the "spinor" ${\bf 16}$ of $SO(9)$ whose vector is ${\bf 9}$, of course. So how do the 128-dimensional chiral spinors of $SO(16)$ decompose under $SO(9)$ which is a subgroup of $SO(16)$?
#### Getting the three fields of $d=11$ SUGRA
Let's start with the fermionic ones. We should get a representation of $Spin(9)$ which is 128-dimensional. This representation inevitably contains some states with $j_{12}=3/2$, as expected for gravitinos. What are the representations of $Spin(9)$ with this property?
Well, the answer is that there is a unique 128-dimensional representation of this kind and it is irreducible. Why? Take the tensor product of the vector and the spinor of $Spin(9)$, ${\bf 9} \otimes {\bf 16}$. It's 144-dimensional, which is too many for us. Is it irreducible? Well, it is not. We may require $\chi_{ia}\gamma^i_{ab}=0.$ That's the only linear, rotationally invariant condition we may demand from the spin-3/2 object $\chi_{ia}$ which doesn't make it identically vanish. The condition above has a free $b$ index so it eliminates sixteen components of the tensor with one vector index and one spinor index. So the number of independent surviving components is $8\times 16=128$ rather than $9\times 16=144$ and that's exactly what we need. The gravitino in $d=11$ forms an irreducible 128-dimensional representation of the little group $Spin(9)$.
The 128 bosonic states must contain the graviton which comes from transverse excitations of the metric tensor, $h_{ij}$, where $i,j=1,2,\dots,9$. This would have $\frac{9\times 10}{2\times 1}=45$ components; count the number of squares in a symmetric matrix or a triangle that includes the diagonal. However, this 45-dimensional representation isn't irreducible. Again, we may demand an extra condition, namely tracelessness, $\sum_{i=1}^9 h_{ii} = 0$, and general relativity actually does imply that the physical states are traceless, which means that the $d=11$ graviton only has $\frac{9\times 10}{2\times 1} - 1 = 44$ components. Where are the remaining 84 components?
I have already mentioned that the spin-2 states have to be unique. But the gravitational multiplet may still contain $j=1$ and $j=0$ states. Well, there have to be some additional $j=1$ states as well. I originally wrote an explanation in terms of weights but it would be too annoyingly technical so I erased it and you will only be told the sketched result. The representations with $j=1$ don't have to be just vectors; $p$-forms are fine, too. If you analyze the dimensions of the antisymmetric tensor representations, you will find out that $C_{ijk}$ with three $SO(9)$ indices has $\frac{9\times 8 \times 7}{3\times 2 \times 1} = 84$ components, which is exactly what you need. So the 128 bosonic states decompose to 44 states of the graviton and 84 states of the 3-form. Everything seems to work at the level of the little group and its representations.
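All the dimensions quoted in this section can be cross-checked with a few lines of arithmetic (an editor's sketch, not from the original post):

```python
from math import comb

fock = 2 ** 8                    # 8 raising operators => 256 states in the multiplet
bosons = fermions = fock // 2    # Gamma_17 splits them into 128 + 128

graviton = 9 * 10 // 2 - 1       # traceless symmetric h_ij of SO(9): 44
three_form = comb(9, 3)          # antisymmetric C_ijk: 84
gravitino = 9 * 16 - 16          # gamma-traceless vector-spinor: 128

print(graviton, three_form, gravitino)   # 44 84 128
assert graviton + three_form == bosons
assert gravitino == fermions
```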
#### Writing the Lorentz, diff-invariant Lagrangian
The $d=11$ theory of supergravity should therefore contain the metric tensor $g_{\mu\nu}$, a gravitino Rarita-Schwinger field $\chi_{\mu\alpha}$, and a three-form $C_{\lambda\mu\nu}$. Let's use the convention in which the metric and the three-form potential are dimensionless and the fermionic field has the dimension ${\rm mass}^{1/2}$. What is the Lagrangian?
You combine the usual Einstein-Hilbert action $\mathcal{L}_{EH} = \frac{1}{16\pi G} R$, i.e. the Ricci scalar, with other terms. Note that its dimension is ${\rm mass}^2$ and when divided by Newton's constant, which is ${\rm length}^9$ (e.g. because $A/4G$ should be the dimensionless entropy), we get ${\rm mass}^{11}$, as expected from an 11-dimensional Lagrangian density. Aside from the Einstein-Hilbert action, there will also be some nice Maxwell-like kinetic term for the three-form potential, $\mathcal{L}_{EM} = \frac{C}{G} F_{\lambda\mu\nu\pi} F^{\lambda\mu\nu\pi}$, where the 4-form $F$ is obtained as the antisymmetrized derivative, $F=dC$, much like in the electromagnetic case. Finally, there have to be kinetic terms for the gravitino, $\mathcal{L}_{RS} \sim \frac{C}{G} \chi^{\lambda \beta} (\Gamma\delta)_{\lambda}^{\mu\nu}{}^\alpha_\beta \partial_\nu \chi_{\mu\alpha}$, where I don't want to write all the required combinations of contractions of the indices and their right relative normalizations, so I have unified the gamma matrices and a Lorentz-vector Kronecker delta into a "hybrid" object. Just like the Dirac Lagrangian, this fermionic kinetic term is linear in derivatives. Note that all the terms in the Lagrangian have the right units; and all of them are proportional to $1/G$.
You will find out that the theory above isn't supersymmetric. However, it's possible to systematically deduce all the additional interaction terms and the supersymmetry suddenly starts to hold. It's a kind of miracle. You will have to add gauge-fermion-like interactions $F\psi\psi$, the Chern-Simons term $C\wedge F\wedge F$ with the right coefficient (an 11-form), and a $\chi^4$ term as well. But when you add all of them, the total action may be verified to be locally supersymmetric! It's also diffeomorphism symmetric and symmetric under $\delta C = d\lambda$ where $\lambda$ is a 2-form parameter of a gauge transformation for the 3-form that generalizes the electromagnetic gauge symmetry (but is still Abelian).
#### Why does the supersymmetry work?
It's a kind of a miracle that the supersymmetry holds when you adjust a few coefficients. I guess that only dozens of people in the world have verified the supersymmetry of the 11-dimensional supergravity's action explicitly, without any help of a computer, and roughly 100 people have done so with the help of a computer or some other improvements of their brains.
Is there an intuitive explanation why it works? I admit that, paradoxically enough, the most straightforward and heavy-algebra-free approach to prove that this limit exists could actually start from (type IIA) string theory. If any reader has an explanation why such an interacting theory with such a huge symmetry principle exists at all, an explanation will be highly welcome.
#### M2-branes, M5-branes
The field content of the theory includes a 3-form bosonic field. One may naturally include another term in the action, $S = \int \mathrm{d}\Sigma_3\, C_{(3)}$, that simply integrates the 3-form potential over some 3-dimensional manifold in the spacetime. Such privileged manifolds that do contribute to the action in this way may exist; they're of course the world volumes of the two-dimensional branes or membranes. The membranes in 11-dimensional supergravity or M-theory are known as M2-branes. They carry some specific "charge density" in the right direction. However, in electromagnetism, you know that you may also consider $*F$ instead of $F$ and study magnetic charges, too.
The same thing may be done here. The field strength $F_{(4)}$ which is a 4-form may be Hodge dualized in 11 dimensions to a 7-form and this 7-form may be, at least in regions without sources, written as $F_{(7)}=d\tilde C_{(6)}$. And this dual 6-potential may be integrated over 6-dimensional manifolds in spacetime: they're nothing else than the world volumes of M5-branes.
So the 11-dimensional theory predicts gravitational waves, generalizations of electromagnetic waves, black holes, M2-branes, and M5-branes, among other objects.
We're going to look at some dimensional reductions, perhaps oxidations, and dimensional chemistry. (Embedded video: the song "Chemistry", sung by Mr Xindl X.)
#### Surprising richness of the compactifications
If you wanted the theory to be well-defined at the quantum level including the quantum corrections – which is the same as corrections that become strong at high energies – you would have to replace the 11-dimensional supergravity by its unique ultraviolet completion, M-theory. I don't want to do that in this article.
However, it's interesting to look at compactifications. The simplest compactifications are the toroidal ones, i.e. ones in which a couple of dimensions are periodically identified, i.e. points are identified with other points that are shifted by an element of a lattice.
$\mathbb{R}^n / \Gamma^n=T^n$ is a torus. If its size is comparable to the Planck length, the length scale at which the quantum M-theory corrections become really important, it's obvious that the compactification will be a compactification with extremely short, microscopic periodicities that look like zero from the viewpoint of long-distance probes. So if you only consider SUGRA and not the full M-theory, these compactifications will be just dimensional reductions and the size and angles in the torus will be physically inconsequential (for all effects at low energies).
And indeed, this fact will manifest itself as a noncompact symmetry of the dimensionally reduced supergravity theory: changing the shape and size of the torus is actually changing nothing about the low-energy physics. The dimensional reductions still have 32 real supercharges but they're organized as extended supersymmetry in the lower-dimensional spacetimes. You will find out that there is a noncompact version of the $E_k$ symmetry if you dimensionally reduce the theory to $11-k$ large dimensions of spacetime. That's true for $k=6,7$ and, when you properly interpret some fields in 3 dimensions, even for $k=8$. Well, one could argue that some of those statements may even be done for the Kač-Moody case $k=9$ in $d=2$ if not the hyperbolic case $E_{10}$ in a 1-dimensional spacetime.
The groups $E_5, E_4, E_3, E_2, E_1$ should be understood as $SO(5,5)$, $SL(5,\mathbb{R})$, $SL(2,\mathbb{R})\times SL(3,\mathbb{R})$, $SL(2,\mathbb{R})$, $\mathbb{R}$. If you study the full M-theory, using probes that can feel the Planckian physics, the shape of the torus matters and these noncompact symmetries are reduced to their discrete versions such as $E_{7(7)}(\mathbb{Z})$ in the case of the $\mathcal{N}=8$ $d=4$ supergravity, i.e. to the so-called U-duality groups. The quotient of the continuous group and its discrete subgroup spans the moduli space. Various charges and scalar fields etc. know about the noncompact group or its maximal compact subgroup.
It's a shocking set of mathematical facts. There exist mathematical explanations why the moduli spaces have to be quotients, why the exceptional groups are the only solutions, and so on. But I think that no one knows of any "truly conceptual" explanation why the exceptional groups appear in the discussion of a maximally supersymmetric theory of gravity which didn't start with any components that would resemble exceptional groups. The exceptional groups were spat out as one of the amazingly surprising outputs.
All these mathematical facts become even more stunning if you study those theories beyond the low-energy approximation i.e. if you investigate the whole structure of string/M-theory. The 11-dimensional spacetime of M-theory, the completion of the 11-dimensional supergravity, is the maximum-dimensional spacetime in string/M-theory that may exist which is why people often say that it's a "more fundamental" limit than others. Of course, a more politically correct assertion is that it's just another limit, on par with many others such as the five 10-dimensional string "theories" (vacua). Still, by its having a higher number of dimensions, the description in terms of 11-dimensional theory is "more geometric" than others.
(F-theory has 12 dimensions in some counting but 2 of them have to be infinitesimal and they're a bit different than the remaining 10 dimensions. From some perspectives, F-theory is more geometrical and higher-dimensional than M-theory; from others, it's the other way around. This discussion is similar to the question whether mothers or fathers are more fundamental. There's no unique answer. After all, it's not just an analogy because M-theory stands for Mother while F-theory stands for Father. M-theory compactified on a circle easily produces type IIA string theory; F-theory is a toolkit to construct sophisticated nonperturbative type IIB string vacua.)
This text has been proofread once and quickly.
https://www.physicsforums.com/threads/review-q-3.430933/

# Homework Help: Review Q #3
1. Sep 21, 2010
### TeenieWeenie
1. The problem statement, all variables and given/known data
A ball thrown horizontally at 2.2 m/s from the roof of a building lands 36 m from the base of the building. Calculate the height of the building.
Vox = 2.2 m/s
Xo = 0m
Vx = 0 m/s
X = 36m
2. Relevant equations
y = Vo sin(ao)·t - (0.5)(g)(t^2)
3. The attempt at a solution
I tried plugging it in but then I have a missing time...?
Last edited: Sep 21, 2010
2. Sep 21, 2010
### Thaakisfox
This is a horizontal projection, hence you don't need any sine etc. Just calculate the time of descent by dividing the horizontal displacement by the velocity. Then the vertical component is simple freefall.
3. Sep 21, 2010
### TeenieWeenie
I'm kinda confused.
What formula would I use then?
x = Vo * cos(Ao) * t
4. Sep 21, 2010
### Thaakisfox
No. It is a horizontal projection. You don't need any of those sine or cosine functions.
You throw the ball with horizontal velocity Vo. Since it has no acceleration in the horizontal direction, the distance it travels is simply x = Vo*t, where x is given (the distance from the base of the building). From here you can get t. Now in the vertical plane it is just simple freefall, so use the formula which gives the distance when freefall takes place.
5. Sep 21, 2010
### TeenieWeenie
36 m = 2.2 m/s * t
36/2.2 s = t
16.36 s = t
Formula for free fall: h(t) = Vo*t + 1/2 at^2
h(16.36) = 0*t + (0.5)(-9.8m/s^2)(16.36^2)
= -1311.48304 m ?
I think something went wrong... :(
6. Sep 21, 2010
### Thaakisfox
The calculation is correct. The given data seem insensible.
7. Sep 21, 2010
### TeenieWeenie
Insensible?
8. Sep 21, 2010
### Thaakisfox
That happens many times in textbooks, that the given data for a problem don't make sense. For example, there probably aren't many buildings taller than 1 km, and especially not with someone throwing a ball off the roof.
But your calculation is correct. (That minus sign doesn't matter; it just means that you took the y axis to point upwards, and that's why the acceleration has a negative sign. But you are searching for the absolute value of the height anyway, so just take that minus sign away.)
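For readers who want to reproduce the numbers, here is the whole calculation in a few lines of Python (an editor's addition; the thread itself used a calculator):

```python
g = 9.8      # gravitational acceleration, m/s^2
v0 = 2.2     # horizontal launch speed, m/s
x = 36.0     # horizontal distance travelled, m

t = x / v0           # time of flight: ~16.36 s
h = 0.5 * g * t**2   # height fallen during that time
print(round(t, 2), round(h, 1))   # 16.36 1312.1
```

The small difference from the thread's 1311.48 m comes from rounding t to 16.36 s before squaring it.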
9. Sep 21, 2010
### TeenieWeenie
Oh! Good observation.
Problem solved! Thanks again Thaakisfox!
https://www.gregschool.org/new-blog-20/2017/8/30/capacitance

# Capacitance
The capacitance of a system (represented by $$C$$) is a measure of how efficiently and quickly that system accumulates an amount of charge $$Q$$, and is defined as
$$C≡\frac{Q}{ΔV_{ab}}.\tag{1}$$
If we're charging a capacitor by an amount $$Q$$, the voltage $$ΔV_{ab}=\frac{ΔU}{q}$$ measures how much potential energy is transferred to the capacitor every time an amount of charge $$q$$ is transferred from one conductor in the capacitor to another. In Equation (1), $$Q$$ is the total amount of charge that gets stored in the capacitor.
(Let's, for argument's sake, just assume for the moment that the capacitor is charged to the amount $$Q=q$$.) Now, the important thing to know is that the capacitance $$C$$ is a number that we measure which depends only on the geometry of the conductors and on the insulating material between them. The value of $$C$$ does not depend on $$Q$$ or $$ΔV_{ab}$$. So, if $$C$$ has, say, a small value ($$C=\text{'small value'}$$), then when we charge the capacitor to the amount $$Q=q$$, the voltage $$ΔV_{ab}=\frac{ΔU}{q}=\frac{q}{C}=\frac{q}{\text{'small value'}}$$ is large, and so is the potential energy $$ΔU=q\,ΔV_{ab}$$ transferred to the capacitor. If the capacitor is, however, built so that $$C=\text{'big value'}$$, then charging it to the same amount $$Q=q$$ gives $$ΔV_{ab}=\frac{q}{\text{'big value'}}$$, which is small, and the energy stored in the capacitor is correspondingly small. So, in a certain sense, the capacitance of a capacitor can be viewed as a measure of how much energy must be supplied to it as you charge it. Capacitors with a lower capacitance accumulate energy faster and more efficiently as you charge them up, whereas capacitors with a higher capacitance accumulate energy much more slowly and less efficiently as you charge them.
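To make the comparison concrete, here are illustrative numbers (an editor's sketch; the specific capacitance and charge values are invented for the example, not taken from the article):

```python
q = 1e-6                          # charge transferred, coulombs
results = {}

for C in (1e-9, 1e-6):            # a "small" and a "big" capacitance, farads
    dV = q / C                    # Delta V_ab = q / C, from Eq. (1)
    dU = q * dV                   # energy transferred, Delta U = q * Delta V_ab
    results[C] = (dV, dU)

print(results)
# For the same charge, the 1 nF capacitor sits at ~1000 V and has absorbed ~1e-3 J,
# while the 1 uF capacitor sits at ~1 V and has absorbed only ~1e-6 J.
```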
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9654832482337952, "perplexity": 172.29631749411234}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202723.74/warc/CC-MAIN-20190323040640-20190323062640-00234.warc.gz"} |
http://www-old.newton.ac.uk/programmes/MLC/seminars/2013010910001.html | # MLC
## Seminar
### On the cubic instability in the Q-tensor theory of nematics
Zarnescu, A (University of Sussex)
Wednesday 09 January 2013, 10:00-10:40
Seminar Room 1, Newton Institute
#### Abstract
Symmetry considerations, as well as compatibility with the Oseen-Frank theory, require the presence of a cubic term (involving spatial derivatives) in the Q-tensor energy functional used for describing variationally the nematics. However the presence of the cubic term makes the energy functional unbounded from below.
We propose a dynamical approach for addressing this issue, namely to consider the L^2 gradient flow generated by the energy functional and show that the energy is dynamically bounded: if one starts with suitable initial data of bounded energy, then the energy stays bounded in time. We discuss notions of suitability, which are related to the preservation of a physical constraint on the eigenvalues of the Q-tensors (without using the Ball-Majumdar singular potential).
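The L^2 gradient flow mentioned above can be sketched schematically as follows. This is the generic form of such a flow, given here as an illustration only, not the authors' exact system; \(\mathcal{F}[Q]\) denotes the Q-tensor energy functional:

```latex
% Schematic L^2 gradient flow generated by an energy functional F[Q]
% (illustrative generic form; the system studied in the talk may differ)
\partial_t Q = -\frac{\delta \mathcal{F}}{\delta Q}[Q],
\qquad
\frac{d}{dt}\,\mathcal{F}[Q(t)]
  = -\bigl\lVert \partial_t Q \bigr\rVert_{L^2}^2 \le 0 .
```

Formally the energy is non-increasing along solutions; the difficulty addressed in the talk is that the functional itself is unbounded from below, so this monotonicity alone does not yield a bound on the energy.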
This is joint work with G. Iyer and X. Xu (Carnegie-Mellon).