url
stringlengths
15
1.13k
text
stringlengths
100
1.04M
metadata
stringlengths
1.06k
1.1k
http://clay6.com/qa/9164/evaluate-the-limit-for-the-following-if-exists-lim-limits-large-frac
# Evaluate the limit for the following if exists.$\;\lim\limits_{x\to 2} \large\frac{\sin \pi x}{2 - x}$ Toolbox: • L'Hopital's rule: Let $f$ and $g$ be continous real valued functions defined on the closed interval $[a,b], f,g$ be differentiable on $(a,b)$ and $g'(c) \neq 0$ • Then if $\lim\limits_{x \to c}\; f(x)=0, \lim \limits_{x \to c}\; g(x)=0$ and • $\lim\limits_{x \to c} \large\frac{f'(x)}{g'(x)}$$=L it follows that • \lim \limits_{x \to c} \large\frac{f(x)}{g(x)}$$=L$ Step 1: .$\;\lim\limits_{x\to 2} \large\frac{\sin \pi x}{2 - x}$ is of the form $\large\frac{0}{0}$ Step 2: Applying L'Hopital's rule, $\;\lim\limits_{x\to 2} \large\frac{\sin \pi x}{2 - x}$$=\lim\limits_{x\to 2} \large\frac{\pi \cos \pi x}{-1}$$=-\pi$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999077320098877, "perplexity": 1353.219525663745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863109.60/warc/CC-MAIN-20180619173519-20180619193519-00395.warc.gz"}
https://stats.libretexts.org/Homework_Exercises/General_Statistics/Exercises%3A_Shafer_and_Zhang/02.E%3A_Descriptive_Statistics_(Exercises)
# 2.E: Descriptive Statistics (Exercises) [ "article:topic", "Exercises" ] These are homework exercises to accompany the Textmap created for "Introductory Statistics" by Shafer and Zhang. ## 2.1: Three popular data displays ### Basic 1. Describe one difference between a frequency histogram and a relative frequency histogram. 2. Describe one advantage of a stem and leaf diagram over a frequency histogram. 3. Construct a stem and leaf diagram, a frequency histogram, and a relative frequency histogram for the following data set. For the histograms use classes $$51-60$$, $$61-70$$, and so on. $\begin{array}69 & 92 & 68 & 77 & 80 \\ 70 & 85 & 88 & 85 & 96 \\ 93 & 75 & 76 & 82 & 100 \\ 53 & 70 & 70 & 82 & 85\end{array}$ 4. Construct a stem and leaf diagram, a frequency histogram, and a relative frequency histogram for the following data set. For the histograms use classes $$6.0-6.9$$, $$7.0-7.9$$, and so on. $\begin{array}8.5 & 8.2 & 7.0 & 7.0 & 4.9 \\ 6.5 & 8.2 & 7.6 & 1.5 & 9.3 \\ 9.6 & 8.5 & 8.8 & 8.5 & 8.7 \\ 8.0 & 7.7 & 2.9 & 9.2 & 6.9\end{array}$ 5. A data set contains $$n = 10$$ observations. The values $$x$$ and their frequencies $$f$$ are summarized in the following data frequency table. $\begin{array}{c|cccc}x & -1 & 0 & 1 & 2 \\ \hline f & 3 & 4 & 2 & 1\end{array}$Construct a frequency histogram and a relative frequency histogram for the data set. 6. A data set contains the $$n=20$$ observations The values $$x$$ and their frequencies $$f$$ are summarized in the following data frequency table. $\begin{array}{c|ccc}x & -1 & 0 & 1 & 2 \\ \hline f & 3 & a & 2 & 1\end{array}$The frequency of the value $$0$$ is missing. Find a and then sketch a frequency histogram and a relative frequency histogram for the data set. 7. A data set has the following frequency distribution table: $\begin{array}{c|ccc}x & 1 & 2 & 3 & 4 \\ \hline f & 3 & a & 2 & 1\end{array}$The number a is unknown. Can you construct a frequency histogram? If so, construct it. If not, say why not. 8. A table of some of the relative frequencies computed from a data set is $\begin{array}{c|ccc}x & 1 & 2 & 3 & 4 \\ \hline f ∕ n & 0.3 & p & 0.2 & 0.1\end{array}$The number $$p$$ is yet to be computed. Finish the table and construct the relative frequency histogram for the data set. ### Applications 1. The IQ scores of ten students randomly selected from an elementary school are given. $\begin{array}108 & 100 & 99 & 125 & 87 \\ 105 & 107 & 105 & 119 & 118\end{array}$Grouping the measures in the $$80s$$, the $$90s$$, and so on, construct a stem and leaf diagram, a frequency histogram, and a relative frequency histogram. 2. The IQ scores of ten students randomly selected from an elementary school for academically gifted students are given. $\begin{array}133 & 140 & 152 & 142 & 137 \\ 145 & 160 & 138 & 139 & 138\end{array}$Grouping the measures by their common hundreds and tens digits, construct a stem and leaf diagram, a frequency histogram, and a relative frequency histogram. 3. During a one-day blood drive $$300$$ people donated blood at a mobile donation center. The blood types of these $$300$$ donors are summarized in the table. $\begin{array}{c|ccc}Blood\: Type\hspace{0.167em} & O & A & B & AB \\ \hline Frequency & 136 & 120 & 32 & 12\end{array}$Construct a relative frequency histogram for the data set. 4. In a particular kitchen appliance store an electric automatic rice cooker is a popular item. The weekly sales for the last $$20$$weeks are shown. $\begin{array}20 & 15 & 14 & 14 & 18 \\ 15 & 17 & 16 & 16 & 18 \\ 15 & 19 & 12 & 13 & 9 \\ 19 & 15 & 15 & 16 & 15\end{array}$Construct a relative frequency histogram with classes $$6-10$$, $$11-15$$, and $$16-20$$. 1. Random samples, each of size $$n = 10$$, were taken of the lengths in centimeters of three kinds of commercial fish, with the following results: $\begin {array}{lrcccccccc} Sample \hspace{0.167em}1 : & 108 & 100 & 99 & 125 & 87 & 105 & 107 & 105 & 119 & 118 \\ Sample \hspace{0.167em} 2 : & 133 & 140 & 152 & 142 & 137 & 145 & 160 & 138 & 139 & 138 \\ Sample \hspace{0.167em} 3 : & 82 & 60 & 83 & 82 & 82 & 74 & 79 & 82 & 80 & 80\end{array}$Grouping the measures by their common hundreds and tens digits, construct a stem and leaf diagram, a frequency histogram, and a relative frequency histogram for each of the samples. Compare the histograms and describe any patterns they exhibit. 2. During a one-day blood drive $$300$$ people donated blood at a mobile donation center. The blood types of these $$300$$ donors are summarized below. $\begin{array}{c|ccc}Blood\: Type\hspace{0.167em} & O & A & B & AB \\ \hline Frequency & 136 & 120 & 32 & 12\end{array}$Identify the blood type that has the highest relative frequency for these $$300$$ people. Can you conclude that the blood type you identified is also most common for all people in the population at large? Explain. 3. In a particular kitchen appliance store, the weekly sales of an electric automatic rice cooker for the last $$20$$ weeks are as follows. $\begin{array}20 & 15 & 14 & 14 & 18 \\ 15 & 17 & 16 & 16 & 18 \\ 15 & 19 & 12 & 13 & 9 \\ 19 & 15 & 15 & 16 & 15\end{array}$In retail sales, too large an inventory ties up capital, while too small an inventory costs lost sales and customer satisfaction. Using the relative frequency histogram for these data, find approximately how many rice cookers must be in stock at the beginning of each week if 1. the store is not to run out of stock by the end of a week for more than $$15\%$$ of the weeks; and 2. the store is not to run out of stock by the end of a week for more than $$5\%$$ of the weeks. 4. In retail sales, too large an inventory ties up capital, while too small an inventory costs lost sales and customer satisfaction. Using the relative frequency histogram for these data, find approximately how many rice cookers must be in stock at the beginning of each week if the store is not to run out of stock by the end of a week for more than $$15\%$$ of the weeks; and the store is not to run out of stock by the end of a week for more than $$5\%$$ of the weeks. 1. The vertical scale on one is the frequencies and on the other is the relative frequencies. 2. 3. $\begin{array}{r|cccccc}5 & 3 & & & & & & \\ 6 & 8 & 9 & & & & & \\ 7 & 0 & 0 & 0 & 5 & 6 & 7 & \\ 8 & 0 & 2 & 3 & 5 & 5 & 5 & 8 \\ 9 & 2 & 3 & 6 & & & & \\ 10 & 0 & & & & & &\end{array}$ 4. 5. Noting that $$n = 10$$ the relative frequency table is: $\begin{array}{c|cccc}x & -1 & 0 & 1 & 2 \\ \hline f ∕ n & 0.3 & 0.4 & 0.2 & 0.1\end{array}$ 6. 7. Since $$n$$ is unknown, $$a$$ is unknown, so the histogram cannot be constructed. 8. 9. $\begin{array}{r|cccc}8 & 7 & & & & \\ 9 & 9 & & & & \\ 10 & 0 & 5 & 5 & 7 & 8 \\ 11 & 8 & 9 & & \\ 12 & 5 & & & &\end{array}$ Frequency and relative frequency histograms are similarly generated. 10. 11. Noting $$n = 300$$, the relative frequency table is therefore: $\begin{array}{c|cccc}Blood\hspace{0.167em}Type & O & A & B & AB \\ \hline f ∕ n & 0.4533 & 0.4 & 0.1067 & 0.04\end{array}$ A relative frequency histogram is then generated. 12. 13. The stem and leaf diagrams listed for Samples $$1,\, 2,\; \text{and}\; 3$$ in that order: $\begin{array}{c|ccccc}6 & & & & & \\ 7 & & & & & \\ 8 & 7 & & & & \\ 9 & 9 & & & & \\ 10 & 0 & 5 & 5 & 7 & 8 \\ 11 & 8 & 9 & & & \\ 12 & 5 & & & & \\ 13 & & & & & \\ 14 & & & & & \\ 15 & & & & & \\ 16 & & & & &\end{array}$ $\begin{array}{c|ccccc}6 & & & & & \\ 7 & & & & & \\ 8 & & & & & \\ 9 & & & & & \\ 10 & & & & & \\ 11 & & & & & \\ 12 & & & & & \\ 13 & 3 & 7 & 8 & 8 & 9 \\ 14 & 0 & 2 & 5 & & \\ 15 & 2 & & & & \\ 16 & 0 & & & &\end{array}$ $\begin{array}{c|ccccccc}6 & 0 & & & & \\ 7 & 4 & 9 & & & \\ 8 & 0 & 0 & 2 & 2 & 2 & 2 & 3 \\ 9 & & & & & \\ 10 & & & & & \\ 11 & & & & & \\ 12 & & & & & \\ 13 & & & & & \\ 14 & & & & & \\ 15 & & & & & \\ 16 & & & & &\end{array}$ The frequency tables are given below in the same order: $\begin{array}{c|ccc}Length\hspace{0.167em} & 80 \sim 89 & 90 \sim 99 & 100 \sim 109 \\ \hline f & 1 & 1 & 5\end{array}$ $\begin{array}{c|cc}Length\hspace{0.167em} & 110 \sim 119 & 120 \sim 129 \\ \hline f & 2 & 1\end{array}$ $\begin{array}{c|ccc}Length\hspace{0.167em} & 130 \sim 139 & 140 \sim 149 & 150 \sim 159 \\ \hline f & 5 & 3 & 1\end{array}$ $\begin{array}{c|ccc}Length\hspace{0.167em} & 160 \sim 169 \\ \hline f & 1\end{array}$ $\begin{array}{c|ccc}Length\hspace{0.167em} & 60 \sim 69 & 70 \sim 79 & 80 \sim 89 \\ \hline f & 1 & 2 & 7\end{array}$ The relative frequency tables are also given below in the same order: $\begin{array}{c|ccc}Length\hspace{0.167em} & 80 \sim 89 & 90 \sim 99 & 100 \sim 109 \\ \hline f ∕ n & 0.1 & 0.1 & 0.5\end{array}$ $\begin{array}{c|cc}Length\hspace{0.167em} & 110 \sim 119 & 120 \sim 129 \\ \hline f ∕ n & 0.2 & 0.1\end{array}$ $\begin{array}{c|ccc}Length\hspace{0.167em} & 130 \sim 139 & 140 \sim 149 & 150 \sim 159 \\ \hline f ∕ n & 0.5 & 0.3 & 0.1\end{array}$ $\begin{array}{c|c}Length\hspace{0.167em} & 160 \sim 169 \\ \hline f ∕ n & 0.1\end{array}$ $\begin{array}{c|ccc}Length\hspace{0.167em} & 60 \sim 69 & 70 \sim 79 & 80 \sim 89 \\ \hline f ∕ n & 0.1 & 0.2 & 0.7\end{array}$ 1. 2. 1. 19 2. 20 3. ## 2.2: Measures of Central Location ### Basic 1. For the sample data set $$\{1,2,6\}$$ find 1. $$\sum x$$ 2. $$\sum x^2$$ 3. $$\sum (x-3)$$ 4. $$\sum (x-3)^2$$ 2. For the sample data set $$\{-1,0,1,4\}$$ find 1. $$\sum x$$ 2. $$\sum x^2$$ 3. $$\sum (x-1)$$ 4. $$\sum (x-1)^2$$ 3. Find the mean, the median, and the mode for the sample $1\; 2\; 3\; 4$ 4. Find the mean, the median, and the mode for the sample $3\; 3\; 4\; 4$ 5. Find the mean, the median, and the mode for the sample $2\; 1\; 2\; 7$ 6. Find the mean, the median, and the mode for the sample $-1\; 0\; 1\; 4\; 1\; 1$ 7. Find the mean, the median, and the mode for the sample data represented by the table $\begin{array}{c|c c c}x & 1 & 2 & 7 \\ \hline f & 1 & 2 & 1\\ \end{array}$ 8. Find the mean, the median, and the mode for the sample data represented by the table $\begin{array}{c|c c c c}x & -1 & 0 & 1 & 4 \\ \hline f & 1 & 1 & 3 & 1\\ \end{array}$ 9. Create a sample data set of size $$n=3$$ for which the mean $$\bar{x}$$ is greater than the median $$\tilde{x}$$. 10. Create a sample data set of size $$n=3$$ for which the mean $$\bar{x}$$ is less than the median $$\tilde{x}$$. 11. Create a sample data set of size $$n=4$$ for which the mean $$\bar{x}$$, the median $$\tilde{x}$$, and the mode are all identical. 12. Create a sample data set of size $$n=4$$ for which the median $$\tilde{x}$$ and the mode are identical but the mean $$\bar{x}$$ is different. ### Applications 1. Find the mean and the median for the LDL cholesterol level in a sample of ten heart patients. $\begin{matrix} 132 & 162 & 133 & 145 & 148\\ 139 & 147 & 160 & 150 & 153 \end{matrix}$ 2. Find the mean and the median, for the LDL cholesterol level in a sample of ten heart patients on a special diet. $\begin{matrix} 127 & 152 & 138 & 110 & 152\\ 113 & 131 & 148 & 135 & 158 \end{matrix}$ 3. Find the mean, the median, and the mode for the number of vehicles owned in a survey of $$52$$ households. $\begin{array}{c|c c c c c c c c} x & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7\\ \hline f &2 &12 &15 &11 &6 &3 &1 &2\\ \end{array}$ 4. The number of passengers in each of $$120$$ randomly observed vehicles during morning rush hour was recorded, with the following results. $\begin{array}{c|c c c c c } x & 1 & 2 & 3 & 4 & 5\\ \hline f &84 &29 &3 &3 &1\\ \end{array}$Find the mean, the median, and the mode of this data set. 5. Twenty-five $$1-lb$$ boxes of $$16d$$ nails were randomly selected and the number of nails in each box was counted, with the following results. $\begin{array}{c|c c c c c } x & 47 & 48 & 49 & 50 & 51\\ \hline f &1 &3 &18 &2 &1\\ \end{array}$Find the mean, the median, and the mode of this data set. 1. Five laboratory mice with thymus leukemia are observed for a predetermined period of $$500$$ days. After $$500$$ days, four mice have died but the fifth one survives. The recorded survival times for the five mice are $\begin{matrix} 493 & 421 & 222 & 378 & 500^* \end{matrix}$where $$500^*$$ indicates that the fifth mouse survived for at least $$500$$ days but the survival time (i.e., the exact value of the observation) is unknown. 1. Can you find the sample mean for the data set? If so, find it. If not, why not? 2. Can you find the sample median for the data set? If so, find it. If not, why not? 2. Five laboratory mice with thymus leukemia are observed for a predetermined period of $$500$$ days. After $$450$$ days, three mice have died, and one of the remaining mice is sacrificed for analysis. By the end of the observational period, the last remaining mouse still survives. The recorded survival times for the five mice are $\begin{matrix} 222 & 421 & 378 & 450^* & 500^* \end{matrix}$where $$^*$$ indicates that the mouse survived for at least the given number of days but the exact value of the observation is unknown. 1. Can you find the sample mean for the data set? If so, find it. If not, explain why not. 2. Can you find the sample median for the data set? If so, find it. If not, explain why not. 3. A player keeps track of all the rolls of a pair of dice when playing a board game and obtains the following data. $\begin{array}{c|c c c c c c } x & 2 & 3 & 4 & 5 & 6 & 7\\ \hline f &10 &29 &40 &56 &68 &77 \\ \end{array}$ $\begin{array}{c|c c c c c } x & 8 & 9 & 10 & 11 & 12 \\ \hline f &67 &55 &39 &28 &11 \\ \end{array}$Find the mean, the median, and the mode. 4. Cordelia records her daily commute time to work each day, to the nearest minute, for two months, and obtains the following data. $\begin{array}{c|c c c c c c c } x & 26 & 27 & 28 & 29 & 30 & 31 & 32\\ \hline f &3 &4 &16 &12 &6 &2 &1 \\ \end{array}$ 1. Based on the frequencies, do you expect the mean and the median to be about the same or markedly different, and why? 2. Compute the mean, the median, and the mode. 5. An ordered stem and leaf diagram gives the scores of $$71$$ students on an exam. $\begin{array}{c|c c c c c c c c c c c c c c c c c c } 10 & 0 & 0 \\ 9 &1 &1 &1 &1 &2 &3\\ 8 &0 &1 &1 &2 &2 &3 &4 &5 &7 &8 &8 &9\\ 7 &0 &0 &0 &1 &1 &2 &4 &4 &5 &6 &6 &6 &7 &7 &7 &8 &8 &9\\ 6 &0 &1 &2 &2 &2 &3 &4 &4 &5 &7 &7 &7 &7 &8 &8\\ 5 &0 &2 &3 &3 &4 &4 &6 &7 &7 &8 &9\\ 4 &2 &5 &6 &8 &8\\ 3 &9 &9 \end{array}$ 1. Based on the shape of the display, do you expect the mean and the median to be about the same or markedly different, and why? 2. Compute the mean, the median, and the mode. 6. A man tosses a coin repeatedly until it lands heads and records the number of tosses required. (For example, if it lands heads on the first toss he records a $$1$$; if it lands tails on the first two tosses and heads on the third he records a $$3$$.) The data are shown. $\begin{array}{c|c c c c c c c c c c } x & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline f &384 &208 &98 &56 &28 &12 &8 &2 &3 &1 \end{array}$ 1. Find the mean of the data. 2. Find the median of the data. 7. 1. Construct a data set consisting of ten numbers, all but one of which is above average, where the average is the mean. 2. Is it possible to construct a data set as in part (a) when the average is the median? Explain. 8. Show that no matter what kind of average is used (mean, median, or mode) it is impossible for all members of a data set to be above average. 9. 1. Twenty sacks of grain weigh a total of $$1,003\; lb$$. What is the mean weight per sack? 2. Can the median weight per sack be calculated based on the information given? If not, construct two data sets with the same total but different medians. 10. Begin with the following set of data, call it $$\text{Data Set I}$$. $\begin{matrix} 5 & -2 & 6 & 14 & -3 & 0 & 1 & 4 & 3 & 2 & 5 \end{matrix}$ 1. Compute the mean, median, and mode. 2. Form a new data set, $$\text{Data Set II}$$, by adding $$3$$ to each number in $$\text{Data Set I}$$. Calculate the mean, median, and mode of $$\text{Data Set II}$$. 3. Form a new data set, $$\text{Data Set III}$$, by subtracting $$6$$ from each number in $$\text{Data Set I}$$. Calculate the mean, median, and mode of $$\text{Data Set III}$$. 4. Comparing the answers to parts (a), (b), and (c), can you guess the pattern? State the general principle that you expect to be true. ### Large Data Set Exercises Note: For Large Data Set Exercises below, all of the data sets associated with these questions are missing, but the questions themselves are included here for reference. 1. Large $$\text{Data Set 1}$$ lists the SAT scores and GPAs of $$1,000$$ students. 1. Compute the mean and median of the $$1,000$$ SAT scores. 2. Compute the mean and median of the $$1,000$$ GPAs. 2. Large $$\text{Data Set 1}$$ lists the SAT scores of $$1,000$$ students. 1. Regard the data as arising from a census of all students at a high school, in which the SAT score of every student was measured. Compute the population mean $$\mu$$. 2. Regard the first $$25$$ observations as a random sample drawn from this population. Compute the sample mean $$\bar{x}$$ and compare it to $$\mu$$. 3. Regard the next $$25$$ observations as a random sample drawn from this population. Compute the sample mean $$\bar{x}$$ and compare it to $$\mu$$. 3. Large $$\text{Data Set 1}$$ lists the GPAs of $$1,000$$ students. 1. Regard the data as arising from a census of all freshman at a small college at the end of their first academic year of college study, in which the GPA of every such person was measured. Compute the population mean $$\mu$$. 2. Regard the first $$25$$ observations as a random sample drawn from this population. Compute the sample mean $$\bar{x}$$ and compare it to $$\mu$$. 3. Regard the next $$25$$ observations as a random sample drawn from this population. Compute the sample mean $$\bar{x}$$ and compare it to $$\mu$$. 4. Large $$\text{Data Sets}\: 7,\: 7A,\: \text{and}\: 7B$$ list the survival times in days of $$140$$ laboratory mice with thymic leukemia from onset to death. 1. Compute the mean and median survival time for all mice, without regard to gender. 2. Compute the mean and median survival time for the $$65$$ male mice (separately recorded in Large $$\text{Data Set 7A}$$). 3. Compute the mean and median survival time for the $$75$$ female mice (separately recorded in Large $$\text{Data Set 7B}$$). 1. 1. 9 2. 41 3. 0 4. 14 2. 3. $$\bar x= 2.5,\; \tilde{x} = 2.5,\; \text{mode} = \{1,2,3,4\}$$ 4. 5. $$\bar x= 3,\; \tilde{x} = 2,\; \text{mode} = 2$$ 6. 7. $$\bar x= 3,\; \tilde{x} = 2,\; \text{mode} = 2$$ 8. 9. $$\{0, 0, 3\}$$ 10. 11. $$\{0, 1, 1, 2\}$$ 12. 13. $$\bar x = 146.9,\; \tilde x = 147.5$$ 14. 15. $$\bar x=2.6 ,\; \tilde{x} = 2,\; \text{mode} = 2$$ 16. 17. $$\bar x= 48.96,\; \tilde{x} = 49,\; \text{mode} = 49$$ 18. 19. 1. No, the survival times of the fourth and fifth mice are unknown. 2. Yes, $$\tilde{x}=421$$. 20. 21. $$\bar x= 28.55,\; \tilde{x} = 28,\; \text{mode} = 28$$ 22. 23. $$\bar x= 2.05,\; \tilde{x} = 2,\; \text{mode} = 1$$ 24. 25. Mean: $$nx_{min}\leq \sum x$$ so dividing by $$n$$ yields $$x_{min}\leq \bar{x}$$, so the minimum value is not above average. Median: the middle measurement, or average of the two middle measurements, $$\tilde{x}$$, is at least as large as $$x_{min}$$, so the minimum value is not above average. Mode: the mode is one of the measurements, and is not greater than itself 26. 27. 1. $$\bar x= 3.18,\; \tilde{x} = 3,\; \text{mode} = 5$$ 2. $$\bar x= 6.18,\; \tilde{x} = 6,\; \text{mode} = 8$$ 3. $$\bar x= -2.81,\; \tilde{x} = -3,\; \text{mode} = -1$$ 4. If a number is added to every measurement in a data set, then the mean, median, and mode all change by that number. 28. 29. 1. $$\mu = 1528.74$$ 2. $$\bar{x}=1502.8$$ 3. $$\bar{x}=1532.2$$ 30. 31. 1. $$\bar x= 553.4286,\; \tilde{x} = 552.5$$ 2. $$\bar x= 665.9692,\; \tilde{x} = 667$$ 3. $$\bar x= 455.8933,\; \tilde{x} = 448$$ ## 2.3 Measures of Variability ### Basic 1. Find the range, the variance, and the standard deviation for the following sample. $1\; 2\; 3\; 4$ 2. Find the range, the variance, and the standard deviation for the following sample. $2\; -3\; 6\; 0\; 3\; 1$ 3. Find the range, the variance, and the standard deviation for the following sample. $2\; 1\; 2\; 7$ 4. Find the range, the variance, and the standard deviation for the following sample. $-1\; 0\; 1\; 4\; 1\; 1$ 5. Find the range, the variance, and the standard deviation for the sample represented by the data frequency table. $\begin{array}{c|c c c} x & 1 & 2 & 7 \\ \hline f &1 &2 &1\\ \end{array}$ 6. Find the range, the variance, and the standard deviation for the sample represented by the data frequency table. $\begin{array}{c|c c c c} x & -1 & 0 & 1 & 4 \\ \hline f &1 &1 &3 &1\\ \end{array}$ ### Applications 1. Find the range, the variance, and the standard deviation for the sample of ten IQ scores randomly selected from a school for academically gifted students. $\begin{matrix} 132 & 162 & 133 & 145 & 148\\ 139 & 147 & 160 & 150 & 153 \end{matrix}$ 2. Find the range, the variance and the standard deviation for the sample of ten IQ scores randomly selected from a school for academically gifted students. $\begin{matrix} 142 & 152 & 138 & 145 & 148\\ 139 & 147 & 155 & 150 & 153 \end{matrix}$ 1. Consider the data set represented by the table $\begin{array}{c|c c c c c c c} x & 26 & 27 & 28 & 29 & 30 & 31 & 32 \\ \hline f &3 &4 &16 &12 &6 &2 &1\\ \end{array}$ 1. Use the frequency table to find that $$\sum x=1256$$ and $$\sum x^2=35,926$$. 2. Use the information in part (a) to compute the sample mean and the sample standard deviation. 2. Find the sample standard deviation for the data $\begin{array}{c|c c c c c} x & 1 & 2 & 3 & 4 & 5 \\ \hline f &384 &208 &98 &56 &28 \\ \end{array}$ $\begin{array}{c|c c c c c} x & 6 & 7 & 8 & 9 & 10 \\ \hline f &12 &8 &2 &3 &1 \\ \end{array}$ 3. A random sample of $$49$$ invoices for repairs at an automotive body shop is taken. The data are arrayed in the stem and leaf diagram shown. (Stems are thousands of dollars, leaves are hundreds, so that for example the largest observation is $$3,800$$.) $\begin{array}{c|c c c c c c c c c c c} 3 & 5 & 6 & 8 \\ 3 &0 &0 &1 &1 &2 &4 \\ 2 &5 &6 &6 &7 &7 &8 &8 &9 &9 \\ 2 &0 &0 &0 &0 &1 &2 &2 &4 \\ 1 &5 &5 &5 &6 &6 &7 &7 &7 &8 &8 &9 \\ 1 &0 &0 &1 &3 &4 &4 &4 \\ 0 &5 &6 &8 &8 \\ 0 &4 \end{array}$ For these data, $$\sum x=101$$, $$\sum x^2=244,830,000$$. 1. Compute the mean, median, and mode. 2. Compute the range. 3. Compute the sample standard deviation. 4. What must be true of a data set if its standard deviation is $$0$$? 5. A data set consisting of $$25$$ measurements has standard deviation $$0$$. One of the measurements has value $$17$$. What are the other $$24$$ measurements? 6. Create a sample data set of size $$n=3$$ for which the range is $$0$$ and the sample mean is $$2$$. 7. Create a sample data set of size $$n=3$$ for which the sample variance is $$0$$ and the sample mean is $$1$$. 8. The sample $$\{-1,0,1\}$$ has mean $$\bar{x}=0$$ and standard deviation $$\bar{x}=0$$. Create a sample data set of size $$n=3$$ for which $$\bar{x}=0$$ and $$s$$ is greater than $$1$$. 9. The sample $$\{-1,0,1\}$$ has mean $$\bar{x}=0$$ and standard deviation $$\bar{x}=0$$.  Create a sample data set of size $$n=3$$ for which $$\bar{x}=0$$ and the standard deviation $$s$$ is less than $$1$$. 10. Begin with the following set of data, call it $$\text{Data Set I}$$. $5\; -2\; 6\; 1\; 4\; -3\; 0\; 1\; 4\; 3\; 2\; 5$ 1. Compute the sample standard deviation of $$\text{Data Set I}$$. 2. Form a new data set, $$\text{Data Set II}$$, by adding $$3$$ to each number in $$\text{Data Set I}$$. Calculate the sample standard deviation of $$\text{Data Set II}$$. 3. Form a new data set, $$\text{Data Set III}$$, by subtracting $$6$$ from each number in $$\text{Data Set I}$$. Calculate the sample standard deviation of $$\text{Data Set III}$$. 4. Comparing the answers to parts (a), (b), and (c), can you guess the pattern? State the general principle that you expect to be true. ### Large Data Set Exercises Note: For Large Data Set Exercises below, all of the data sets associated with these questions are missing, but the questions themselves are included here for reference. 1. $$\text{Large Data Set 1}$$ lists the SAT scores and GPAs of $$1,000$$ students. 1. Compute the range and sample standard deviation of the $$1,000$$ SAT scores. 2. Compute the range and sample standard deviation of the $$1,000$$ GPAs. 2. $$\text{Large Data Set 1}$$ lists the SAT scores of $$1,000$$ students. 1. Regard the data as arising from a census of all students at a high school, in which the SAT score of every student was measured. Compute the population range and population standard deviation $$\sigma$$. 2. Regard the first $$25$$ observations as a random sample drawn from this population. Compute the sample range and sample standard deviation $$s$$ and compare them to the population range and $$\sigma$$. 3. Regard the next $$25$$ observations as a random sample drawn from this population. Compute the sample range and sample standard deviation $$s$$ and compare them to the population range and $$\sigma$$. 3. $$\text{Large Data Set 1}$$ lists the GPAs of $$1,000$$ students. 1. Regard the data as arising from a census of all freshman at a small college at the end of their first academic year of college study, in which the GPA of every such person was measured. Compute the population range and population standard deviation $$\sigma$$. 2. Regard the first $$25$$ observations as a random sample drawn from this population. Compute the sample range and sample standard deviation $$s$$ and compare them to the population range and $$\sigma$$. 3. Regard the next $$25$$ observations as a random sample drawn from this population. Compute the sample range and sample standard deviation $$s$$ and compare them to the population range and $$\sigma$$. 4. $$\text{Large Data Set 7, 7A, and 7B }$$ list the survival times in days of $$140$$ laboratory mice with thymic leukemia from onset to death. 1. Compute the range and sample standard deviation of survival time for all mice, without regard to gender. 2. Compute the range and sample standard deviation of survival time for the $$65$$ male mice (separately recorded in $$\text{Large Data Set 7A}$$). 3. Compute the range and sample standard deviation of survival time for the $$75$$ female mice (separately recorded in $$\text{Large Data Set 7B}$$). Do you see a difference in the results for male and female mice? Does it appear to be significant? 1. $$R = 3,\; s^2 = 1.7,\; s = 1.3$$. 2. 3. $$R = 6,\; s^2=7.\bar{3},\; s = 2.7$$. 4. 5. $$R = 6,\; s^2=7.3,\; s = 2.7$$. 6. 1. $$R = 30,\; s^2 = 103.2,\; s = 10.2$$. 2. 1. $$\bar{x}=28.55,\; s = 1.3$$. 2. 1. $$\bar{x}=2063,\; \tilde{x} =2000,\; \text{mode}=2000$$. 2. $$R = 3400$$. 3. $$s = 869$$. 3. 4. All are $$17$$. 5. 6. $$\{1,1,1\}$$ 7. 8. One example is $$\{-.5,0,.5\}$$. 9. 1. $$R = 1350$$ and $$s = 212.5455$$ 2. $$R = 4.00$$ and $$s = 0.7407$$ 1. 1. $$R = 4.00$$ and $$\sigma = 0.740375$$ 2. $$R = 3.04$$ and $$s = 0.808045$$ 3. $$R = 2.49$$ and $$s = 0.657843$$ ## 2.4 Relative Position of Data ### Basic 1. Consider the data set $\begin{matrix} 69 & 92 & 68 & 77 & 80\\ 93 & 75 & 76 & 82 & 100\\ 70 & 85 & 88 & 85 & 96\\ 53 & 70 & 70 & 82 & 85 \end{matrix}$ 1. Find the percentile rank of $$82$$. 2. Find the percentile rank of $$68$$. 2. Consider the data set $\begin{matrix} 8.5 & 8.2 & 7.0 & 7.0 & 4.9\\ 9.6 & 8.5 & 8.8 & 8.5 & 8.7\\ 6.5 & 8.2 & 7.6 & 1.5 & 9.3\\ 8.0 & 7.7 & 2.9 & 9.2 & 6.9 \end{matrix}$ 1. Find the percentile rank of $$6.5$$. 2. Find the percentile rank of $$7.7$$. 3. Consider the data set represented by the ordered stem and leaf diagram $\begin{array}{c|c c c c c c c c c c c c c c c c c c} 10 & 0 & 0 \\ 9 &1 &1 &1 &1 &2 &3\\ 8 &0 &1 &1 &2 &2 &3 &4 &5 &7 &8 &8 &9\\ 7 &0 &0 &0 &1 &1 &2 &4 &4 &5 &6 &6 &6 &7 &7 &7 &8 &8 &9\\ 6 &0 &1 &2 &2 &2 &3 &4 &4 &5 &7 &7 &7 &7 &8 &8\\ 5 &0 &2 &3 &3 &4 &4 &6 &7 &7 &8 &9\\ 4 &2 &5 &6 &8 &8\\ 3 &9 &9 \end{array}$ 1. Find the percentile rank of the grade $$75$$. 2. Find the percentile rank of the grade $$57$$. 4. Is the $$90^{th}$$ percentile of a data set always equal to $$90\%$$? Why or why not? 5. The $$29^{th}$$ percentile in a large data set is $$5$$. 1. Approximately what percentage of the observations are less than $$5$$? 2. Approximately what percentage of the observations are greater than $$5$$? 6. The $$54^{th}$$ percentile in a large data set is $$98.6$$. 1. Approximately what percentage of the observations are less than $$98.6$$? 2. Approximately what percentage of the observations are greater than $$98.6$$? 7. In a large data set the $$29^{th}$$ percentile is $$5$$ and the $$79^{th}$$ percentile is $$10$$. Approximately what percentage of observations lie between $$5$$ and $$10$$? 8. In a large data set the $$40^{th}$$ percentile is $$125$$ and the $$82^{nd}$$ percentile is $$158$$. Approximately what percentage of observations lie between $$125$$ and $$158$$? 9. Find the five-number summary and the IQR and sketch the box plot for the sample represented by the stem and leaf diagram in Figure 2.1.2 "Ordered Stem and Leaf Diagram". 10. Find the five-number summary and the IQR and sketch the box plot for the sample explicitly displayed in "Example 2.2.7" in Section 2.2 "Measures of Central Location". 11. Find the five-number summary and the IQR and sketch the box plot for the sample represented by the data frequency table $\begin{array}{c|c c c c c} x & 1 & 2 & 5 & 8 & 9 \\ \hline f &5 &2 &3 &6 &4\\ \end{array}$ 12. Find the five-number summary and the IQR and sketch the box plot for the sample represented by the data frequency table $\begin{array}{c|c c c c c c c c c} x & -5 & -3 & -2 & -1 & 0 & 1 & 3 & 4 & 5 \\ \hline f &2 &1 &3 &2 &4 &1 &1 &2 &1\\ \end{array}$ 13. Find the $$z$$-score of each measurement in the following sample data set. $-5\; \; 6\; \; 2\; \; -1\; \; 0$ 14. Find the $$z$$-score of each measurement in the following sample data set. $1.6\; \; 5.2\; \; 2.8\; \; 3.7\; \; 4.0$ 15. The sample with data frequency table $\begin{array}{c|c c c} x & 1 & 2 & 7 \\ \hline f &1 &2 &1\\ \end{array}$ has mean $$\bar{x}=3$$ and standard deviation $$s\approx 2.71$$. Find the $$z$$-score for every value in the sample. 16. The sample with data frequency table $\begin{array}{c|c c c c} x & -1 & 0 & 1 & 4 \\ \hline f &1 &1 &3 &1\\ \end{array}$ has mean $$\bar{x}=1$$ and standard deviation $$s\approx 1.67$$. Find the $$z$$-score for every value in the sample. 17. For the population $0\; \; 0\; \; 2\; \; 2$compute each of the following. 1. The population mean $$\mu$$. 2. The population variance $$\sigma ^2$$. 3. The population standard deviation $$\sigma$$. 4. The $$z$$-score for every value in the population data set. 18. For the population $0.5\; \; 2.1\; \; 4.4\; \; 1.0$compute each of the following. 1. The population mean $$\mu$$. 2. The population variance $$\sigma ^2$$. 3. The population standard deviation $$\sigma$$. 4. The $$z$$-score for every value in the population data set. 19. A measurement $$x$$ in a sample with mean $$\bar{x}=10$$ and standard deviation $$s=3$$ has $$z$$-score $$z=2$$. Find $$x$$. 20. A measurement $$x$$ in a sample with mean $$\bar{x}=10$$ and standard deviation $$s=3$$ has $$z$$-score $$z=-1$$. Find $$x$$. 21. A measurement $$x$$ in a population with mean $$\mu =2.3$$ and standard deviation $$\sigma =1.3$$ has $$z$$-score $$z=2$$. Find $$x$$. 22. A measurement $$x$$ in a sample with mean $$\mu =2.3$$ and standard deviation $$\sigma =1.3$$ has $$z$$-score $$z=-1.2$$. Find $$x$$. ### Applications 1. The weekly sales for the last $$20$$ weeks in a kitchen appliance store for an electric automatic rice cooker are $\begin{matrix} 20 & 15 & 14 & 14 & 18\\ 15 & 19 & 12 & 13 & 9\\ 15 & 17 & 16 & 16 & 18\\ 19 & 15 & 15 & 16 & 15 \end{matrix}$ 1. Find the percentile rank of $$15$$. 2. If the sample accurately reflects the population, then what percentage of weeks would an inventory of $$15$$ rice cookers be adequate? 2. The table shows the number of vehicles owned in a survey of 52 households. $\begin{array}{c|c c c c c c c c} x & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ \hline f &2 &12 &15 &11 &6 &3 &1 &2\\ \end{array}$ 1. Find the percentile rank of $$2$$. 2. If the sample accurately reflects the population, then what percentage of households have at most two vehicles? 3. For two months Cordelia records her daily commute time to work each day to the nearest minute and obtains the following data: $\begin{array}{c|c c c c c c c} x & 26 & 27 & 28 & 29 & 30 & 31 & 32 \\ \hline f &3 &4 &16 &12 &6 &2 &1 \\ \end{array}$Cordelia is supposed to be at work at $$8:00\; a.m$$. but refuses to leave her house before $$7:30\; a.m$$. 1. Find the percentile rank of $$30$$, the time she has to get to work. 2. Assuming that the sample accurately reflects the population of all of Cordelia’s commute times, use your answer to part (a) to predict the proportion of the work days she is late for work. 4. The mean score on a standardized grammar exam is $$49.6$$; the standard deviation is $$1.35$$. Dromio is told that the $$z$$-score of his exam score is $$-1.19$$. 1. Is Dromio’s score above average or below average? 2. What was Dromio’s actual score on the exam? 5. A random sample of $$49$$ invoices for repairs at an automotive body shop is taken. The data are arrayed in the stem and leaf diagram shown. (Stems are thousands of dollars, leaves are hundreds, so that for example the largest observation is $$3,800$$.) $\begin{array}{c|c c c c c c c c c c c} 3 & 5 & 6 & 8 \\ 3 &0 &0 &1 &1 &2 &4 \\ 2 &5 &6 &6 &7 &7 &8 &8 &9 &9 \\ 2 &0 &0 &0 &0 &1 &2 &2 &4 \\ 1 &5 &5 &5 &6 &6 &7 &7 &7 &8 &8 &9 \\ 1 &0 &0 &1 &3 &4 &4 &4 \\ 0 &5 &6 &8 &8 \\ 0 &4 \end{array}$For these data, $$\sum x=101,100$$, $$\sum x^2=244,830,000$$. 1. Find the $$z$$-score of the repair that cost $$\1,100$$. 2. Find the $$z$$-score of the repairs that cost $$\2,700$$. 6. The stem and leaf diagram shows the time in seconds that callers to a telephone-order center were on hold before their call was taken. $\begin{array}{c|c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c} 0 &0 &0 &0 &0 &0 &0 &1 &1 &1 &1 &1 &1 &1 &1 &2 &2 &2 &2 &2 &3 &3 &3 &3 &3 &3 &3 &4 &4 &4 &4 &4 \\ 0 &5 &5 &5 &5 &5 &5 &5 &5 &5 &6 &6 &6 &6 &6 &6 &6 &6 &6 &6 &7 &7 &7 &7 &7 &7 &8 &8 &8 &9 &9 \\ 1 &0 &0 &1 &1 &1 &1 &2 &2 &2 &2 &4 &4 \\ 1 &5 &6 &6 &8 &9 \\ 2 &2 &4 \\ 2 &5 \\ 3 &0 \\ \end{array}$ 1. Find the quartiles. 2. Give the five-number summary of the data. 3. Find the range and the IQR. 1. Consider the data set represented by the ordered stem and leaf diagram $\begin{array}{c|c c c c c c c c c c c c c c c c c c} 10 &0 &0 \\ 9 &1 &1 &1 &1 &2 &3\\ 8 &0 &1 &1 &2 &2 &3 &4 &5 &7 &8 &8 &9\\ 7 &0 &0 &0 &1 &1 &2 &4 &4 &5 &6 &6 &6 &7 &7 &7 &8 &8 &9\\ 6 &0 &1 &2 &2 &2 &3 &4 &4 &5 &7 &7 &7 &7 &8 &8\\ 5 &0 &2 &3 &3 &4 &4 &6 &7 &7 &8 &9\\ 4 &2 &5 &6 &8 &8\\ 3 &9 &9 \end{array}$ 1. Find the three quartiles. 2. Give the five-number summary of the data. 3. Find the range and the IQR. 2. For the following stem and leaf diagram the units on the stems are thousands and the units on the leaves are hundreds, so that for example the largest observation is $$3,800$$. $\begin{array}{c|c c c c c c c c c c c} 3 &5 &6 &8 \\ 3 &0 &0 &1 &1 &2 &4\\ 2 &5 &6 &6 &7 &7 &8 &8 &9 &9 \\ 2 &0 &0 &0 &0 &1 &2 &2 &4 \\ 1 &5 &5 &5 &6 &6 &7 &7 &7 &8 &8 &9 \\ 1 &0 &0 &1 &3 &4 &4 &4 \\ 0 &5 &6 &8 &8\\ 0 &4 \end{array}$ 1. Find the percentile rank of $$800$$. 2. Find the percentile rank of $$3,200$$. 3. Find the five-number summary for the following sample data. $\begin{array}{c|c c c c c c c} x &26 &27 &28 &29 &30 &31 &32 \\ \hline f &3 &4 &16 &12 &6 &2 &1\\ \end{array}$ 4. Find the five-number summary for the following sample data. $\begin{array}{c|c c c c c c c c c c} x &1 &2 &3 &4 &5 &6 &7 &8 &9 &10 \\ \hline f &384 &208 &98 &56 &28 &12 &8 &2 &3 &1\\ \end{array}$ 5. For the following stem and leaf diagram the units on the stems are thousands and the units on the leaves are hundreds, so that for example the largest observation is $$3,800$$. $\begin{array}{c|c c c c c c c c c c c} 3 &5 &6 &8 \\ 3 &0 &0 &1 &1 &2 &4\\ 2 &5 &6 &6 &7 &7 &8 &8 &9 &9 \\ 2 &0 &0 &0 &0 &1 &2 &2 &4 \\ 1 &5 &5 &5 &6 &6 &7 &7 &7 &8 &8 &9 \\ 1 &0 &0 &1 &3 &4 &4 &4 \\ 0 &5 &6 &8 &8\\ 0 &4 \end{array}$ 1. Find the three quartiles. 2. Find the IQR. 3. Give the five-number summary of the data. 6. Determine whether the following statement is true. “In any data set, if an observation $$x_1$$ is greater than another observation $$x_2$$, then the $$z$$-score of $$x_1$$ is greater than the $$z$$-score of $$x_2$$. 7. Emilia and Ferdinand took the same freshman chemistry course, Emilia in the fall, Ferdinand in the spring. Emilia made an $$83$$ on the common final exam that she took, on which the mean was $$76$$ and the standard deviation $$8$$. Ferdinand made a $$79$$ on the common final exam that he took, which was more difficult, since the mean was $$65$$ and the standard deviation $$12$$. The one who has a higher $$z$$-score did relatively better. Was it Emilia or Ferdinand? 8. Refer to the previous exercise. On the final exam in the same course the following semester, the mean is $$68$$ and the standard deviation is $$9$$. What grade on the exam matches Emilia’s performance? Ferdinand’s? 9. Rosencrantz and Guildenstern are on a weight-reducing diet. Rosencrantz, who weighs $$178\; lb$$, belongs to an age and body-type group for which the mean weight is $$145\; lb$$ and the standard deviation is $$15\; lb$$. Guildenstern, who weighs $$204\; lb$$, belongs to an age and body-type group for which the mean weight is $$165\; lb$$ and the standard deviation is $$20\; lb$$. Assuming z-scores are good measures for comparison in this context, who is more overweight for his age and body type? ### Large Data Set Exercises Note: For Large Data Set Exercises below, all of the data sets associated with these questions are missing, but the questions themselves are included here for reference. 1. Large $$\text{Data Set 1}$$ lists the SAT scores and GPAs of $$1,000$$ students. 1. Compute the three quartiles and the interquartile range of the $$1,000$$ SAT scores. 2. Compute the three quartiles and the interquartile range of the $$1,000$$ GPAs. 2. Large $$\text{Data Set 10}$$ records the scores of $$72$$ students on a statistics exam. 1. Compute the five-number summary of the data. 2. Describe in words the performance of the class on the exam in the light of the result in part (a). 3. Large $$\text{Data Sets 3 and 3A}$$ list the heights of $$174$$ customers entering a shoe store. 1. Compute the five-number summary of the heights, without regard to gender. 2. Compute the five-number summary of the heights of the men in the sample. 3. Compute the five-number summary of the heights of the women in the sample. 4. Large $$\text{Data Sets 7, 7A, and 3B}$$ list the survival times in days of $$140$$ laboratory mice with thymic leukemia from onset to death. 1. Compute the three quartiles and the interquartile range of the survival times for all mice, without regard to gender. 2. Compute the three quartiles and the interquartile range of the survival times for the $$65$$ male mice (separately recorded in $$\text{Data Set 7A}$$). 3. Compute the three quartiles and the interquartile range of the survival times for the $$75$$ female mice (separately recorded in $$\text{Data Sets 7B}$$). 1. 1. 60 2. 10 2. 3. 1. 59 2. 23 4. 5. 1. 29 2. 71 6. 7. $$50\%$$ 8. 9. $$x_{min}=25,\; \; Q_1=70,\; \; Q_2=77.5\; \; Q_3=90,\; \; x_{max}=100, \; \; IQR=20$$ 10. 11. $$x_{min}=1,\; \; Q_1=1.5,\; \; Q_2=6.5\; \; Q_3=8,\; \; x_{max}=9, \; \; IQR=6.5$$ 12. 13. $$-1.3,\; 1.39,\; 0.4,\; -0.35,\; -0.11$$ 14. 15. $$z=-0.74\; \text{for}\; x = 1,\; z=-0.37\; \text{for}\; x = 2,\; z = 1.48\; \text{for}\; x = 7$$ 16. 17. 1. 1 2. 1 3. 1 4. $$z=-1\; \text{for}\; x = 0,\; z=1\; \text{for}\; x = 2$$ 18. 19. 16 20. 21. 4.9 22. 23. 1. 55 2. 55 24. 25. 1. 93 2. 0.07 26. 27. 1. -1.11 2. 0.73 28. 29. 1. $$Q_1=59,\; Q_2=70,\; Q_3=81$$ 2. $$x_{min}=39,\; Q_1=59,\; Q_2=70,\; Q_3=81,\; x_{max}=100$$ 3. $$R = 61,\; IQR=22$$ 30. 31. $$x_{min}=26,\; Q_1=28,\; Q_2=28,\; Q_3=29,\; x_{max}=32$$ 32. 33. 1. $$Q_1=1450,\; Q_2=2000,\; Q_3=2800$$ 2. $$IQR=1350$$ 3. $$x_{min}=400,\; Q_1=1450,\; Q_2=2000,\; Q_3=2800,\; x_{max}=3800$$ 34. 35. Emilia: $$z=0.875$$, Ferdinand: $$z=1.1\bar{6}$$ 36. 37. Rosencrantz: $$z=2.2$$, Guildenstern: $$z=1.95$$. Rosencrantz is more overweight for his age and body type. 38. 39. 1. $$x_{min}=15,\; Q_1=51,\; Q_2=67,\; Q_3=82,\; x_{max}=97$$ 2. The data set appears to be skewed to the left. 40. 41. 1. $$Q_1=440,\; Q_2=552.5,\; Q_3=661\; \; \text{and}\; \; IQR=221$$ 2. $$Q_1=641,\; Q_2=667,\; Q_3=700\; \; \text{and}\; \; IQR=59$$ 3. $$Q_1=407,\; Q_2=448,\; Q_3=504\; \; \text{and}\; \; IQR=97$$ ## 2.5 The Empirical Rule and Chebyshev's Theorem ### Basic 1. State the Empirical Rule. 2. Describe the conditions under which the Empirical Rule may be applied. 3. State Chebyshev’s Theorem. 4. Describe the conditions under which Chebyshev’s Theorem may be applied. 5. A sample data set with a bell-shaped distribution has mean $$\bar{x}=6$$ and standard deviation $$s=2$$. Find the approximate proportion of observations in the data set that lie: 1. between $$4$$ and $$8$$; 2. between $$2$$ and $$10$$; 3. between $$0$$ and $$12$$. 6. A population data set with a bell-shaped distribution has mean $$\mu =6$$ and standard deviation $$\sigma =2$$. Find the approximate proportion of observations in the data set that lie: 1. between $$4$$ and $$8$$; 2. between $$2$$ and $$10$$; 3. between $$0$$ and $$12$$. 7. A population data set with a bell-shaped distribution has mean $$\mu =2$$ and standard deviation $$\sigma =1.1$$. Find the approximate proportion of observations in the data set that lie: 1. above $$2$$; 2. above $$3.1$$; 3. between $$2$$ and $$3.1$$. 8. A sample data set with a bell-shaped distribution has mean $$\bar{x}=2$$ and standard deviation $$s=1.1$$. Find the approximate proportion of observations in the data set that lie: 1. below $$-0.2$$; 2. below $$3.1$$; 3. between $$-1.3$$ and $$0.9$$. 9. A population data set with a bell-shaped distribution and size $$N=500$$ has mean $$\mu =2$$ and standard deviation $$\sigma =1.1$$. Find the approximate number of observations in the data set that lie: 1. above $$2$$; 2. above $$3.1$$; 3. between $$2$$ and $$3.1$$. 10. A sample data set with a bell-shaped distribution and size $$n=128$$ has mean $$\bar{x}=2$$ and standard deviation $$s=1.1$$. Find the approximate number of observations in the data set that lie: 1. below $$-0.2$$; 2. below $$3.1$$; 3. between $$-1.3$$ and $$0.9$$. 11. A sample data set has mean $$\bar{x}=6$$ and standard deviation $$s=2$$. Find the minimum proportion of observations in the data set that must lie: 1. between $$2$$ and $$10$$; 2. between $$0$$ and $$12$$; 3. between $$4$$ and $$8$$. 12. A population data set has mean $$\mu =2$$ and standard deviation $$\sigma =1.1$$. Find the minimum proportion of observations in the data set that must lie: 1. between $$-0.2$$ and $$4.2$$; 2. between $$-1.3$$ and $$5.3$$. 13. A population data set of size $$N=500$$ has mean $$\mu =5.2$$ and standard deviation $$\sigma =1.1$$. Find the minimum number of observations in the data set that must lie: 1. between $$3$$ and $$7.4$$; 2. between $$1.9$$ and $$8.5$$. 14. A sample data set of size $$n=128$$ has mean $$\bar{x}=2$$ and standard deviation $$s=2$$. Find the minimum number of observations in the data set that must lie: 1. between $$-2$$ and $$6$$ (including $$-2$$ and $$6$$); 2. between $$-4$$ and $$8$$ (including $$-4$$ and $$8$$). 15. A sample data set of size $$n=30$$ has mean $$\bar{x}=6$$ and standard deviation $$s=2$$. 1. What is the maximum proportion of observations in the data set that can lie outside the interval $$(2,10)$$? 2. What can be said about the proportion of observations in the data set that are below $$2$$? 3. What can be said about the proportion of observations in the data set that are above $$10$$? 4. What can be said about the number of observations in the data set that are above $$10$$? 16. A population data set has mean $$\mu =2$$ and standard deviation $$\sigma =1.1$$. 1. What is the maximum proportion of observations in the data set that can lie outside the interval $$(-1.3,5.3)$$? 2. What can be said about the proportion of observations in the data set that are below $$-1.3$$? 3. What can be said about the proportion of observations in the data set that are above $$5.3$$? ### Applications 1. Scores on a final exam taken by $$1,200$$ students have a bell-shaped distribution with mean $$72$$ and standard deviation $$9$$. 1. What is the median score on the exam? 2. About how many students scored between $$63$$ and $$81$$? 3. About how many students scored between $$72$$ and $$90$$? 4. About how many students scored below $$54$$? 2. Lengths of fish caught by a commercial fishing boat have a bell-shaped distribution with mean $$23$$ inches and standard deviation $$1.5$$ inches. 1. About what proportion of all fish caught are between $$20$$ inches and $$26$$ inches long? 2. About what proportion of all fish caught are between $$20$$ inches and $$23$$ inches long? 3. About how long is the longest fish caught (only a small fraction of a percent are longer)? 3. Hockey pucks used in professional hockey games must weigh between $$5.5$$ and $$6$$ ounces. If the weight of pucks manufactured by a particular process is bell-shaped, has mean $$5.75$$ ounces and standard deviation $$0.125$$ ounce, what proportion of the pucks will be usable in professional games? 4. Hockey pucks used in professional hockey games must weigh between $$5.5$$ and $$6$$ ounces. If the weight of pucks manufactured by a particular process is bell-shaped and has mean $$5.75$$ ounces, how large can the standard deviation be if $$99.7\%$$ of the pucks are to be usable in professional games? 5. Speeds of vehicles on a section of highway have a bell-shaped distribution with mean $$60\; mph$$ and standard deviation $$2.5\; mph$$. 1. If the speed limit is $$55\; mph$$, about what proportion of vehicles are speeding? 2. What is the median speed for vehicles on this highway? 3. What is the percentile rank of the speed $$65\; mph$$? 4. What speed corresponds to the $$16_{th}$$ percentile? 6. Suppose that, as in the previous exercise, speeds of vehicles on a section of highway have mean $$60\; mph$$ and standard deviation $$2.5\; mph$$, but now the distribution of speeds is unknown. 1. If the speed limit is $$55\; mph$$, at least what proportion of vehicles must speeding? 2. What can be said about the proportion of vehicles going $$65\; mph$$ or faster? 7. An instructor announces to the class that the scores on a recent exam had a bell-shaped distribution with mean $$75$$ and standard deviation $$5$$. 1. What is the median score? 2. Approximately what proportion of students in the class scored between $$70$$ and $$80$$? 3. Approximately what proportion of students in the class scored above $$85$$? 4. What is the percentile rank of the score $$85$$? 8. The GPAs of all currently registered students at a large university have a bell-shaped distribution with mean $$2.7$$ and standard deviation $$0.6$$. Students with a GPA below $$1.5$$ are placed on academic probation. Approximately what percentage of currently registered students at the university are on academic probation? 9. Thirty-six students took an exam on which the average was $$80$$ and the standard deviation was $$6$$. A rumor says that five students had scores $$61$$ or below. Can the rumor be true? Why or why not? 1. For the sample data $\begin{array}{c|c c c c c c c} x &26 &27 &28 &29 &30 &31 &32 \\ \hline f &3 &4 &16 &12 &6 &2 &1\\ \end{array}$ $\sum x=1,256\; \; \text{and}\; \; \sum x^2=35,926$ 1. Compute the mean and the standard deviation. 2. About how many of the measurements does the Empirical Rule predict will be in the interval $$\left (\bar{x}-s,\bar{x}+s \right )$$, the interval $$\left (\bar{x}-2s,\bar{x}+2s \right )$$, and the interval $$\left (\bar{x}-3s,\bar{x}+3s \right )$$? 3. Compute the number of measurements that are actually in each of the intervals listed in part (a), and compare to the predicted numbers. 2. A sample of size $$n = 80$$ has mean $$139$$ and standard deviation $$13$$, but nothing else is known about it. 1. What can be said about the number of observations that lie in the interval $$(126,152)$$? 2. What can be said about the number of observations that lie in the interval $$(113,165)$$? 3. What can be said about the number of observations that exceed $$165$$? 4. What can be said about the number of observations that either exceed $$165$$ or are less than $$113$$? 3. For the sample data $\begin{array}{c|c c c c c } x &1 &2 &3 &4 &5 \\ \hline f &84 &29 &3 &3 &1\\ \end{array}$ $\sum x=168\; \; \text{and}\; \; \sum x^2=300$ 1. Compute the sample mean and the sample standard deviation. 2. Considering the shape of the data set, do you expect the Empirical Rule to apply? Count the number of measurements within one standard deviation of the mean and compare it to the number predicted by the Empirical Rule. 3. What does Chebyshev’s Rule say about the number of measurements within one standard deviation of the mean? 4. Count the number of measurements within two standard deviations of the mean and compare it to the minimum number guaranteed by Chebyshev’s Theorem to lie in that interval. 4. For the sample data set $\begin{array}{c|c c c c c } x &47 &48 &49 &50 &51 \\ \hline f &1 &3 &18 &2 &1\\ \end{array}$ $\sum x=1224\; \; \text{and}\; \; \sum x^2=59,940$ 1. Compute the sample mean and the sample standard deviation. 2. Considering the shape of the data set, do you expect the Empirical Rule to apply? Count the number of measurements within one standard deviation of the mean and compare it to the number predicted by the Empirical Rule. 3. What does Chebyshev’s Rule say about the number of measurements within one standard deviation of the mean? 4. Count the number of measurements within two standard deviations of the mean and compare it to the minimum number guaranteed by Chebyshev’s Theorem to lie in that interval. 1. See the displayed statement in the text. 2. 3. See the displayed statement in the text. 4. 5. 1. $$0.68$$ 2. $$0.95$$ 3. $$0.997$$ 6. 7. 1. $$0.5$$ 2. $$0.16$$ 3. $$0.34$$ 8. 9. 1. $$250$$ 2. $$80$$ 3. $$170$$ 10. 11. 1. $$3/4$$ 2. $$8/9$$ 3. $$0$$ 12. 13. 1. $$375$$ 2. $$445$$ 14. 15. 1. At most $$0.25$$. 2. At most $$0.25$$. 3. At most $$0.25$$. 4. At most $$7$$. 16. 17. 1. $$72$$ 2. $$816$$ 3. $$570$$ 4. $$30$$ 18. 19. $$0.95$$ 20. 21. 1. $$0.975$$ 2. $$60$$ 3. $$97.5$$ 4. $$57.5$$ 22. 23. 1. $$75$$ 2. $$0.68$$ 3. $$0.025$$ 4. $$0.975$$ 24. 25. By Chebyshev’s Theorem at most $$1/9$$ of the scores can be below $$62$$, so the rumor is impossible. 26. 27. 1. Nothing. 2. It is at least $$60$$. 3. It is at most $$20$$. 4. It is at most $$20$$. 28. 29. 1. $$\bar{x}=48.96$$, $$s = 0.7348$$. 2. Roughly bell-shaped, the Empirical Rule should apply. True count: $$18$$, Predicted: $$17$$. 3. Nothing. 4. True count: $$23$$, Guaranteed: at least $$18.75$$, hence at least $$19$$. • Anonymous
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8386096954345703, "perplexity": 653.542125641129}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823228.36/warc/CC-MAIN-20181209232026-20181210013526-00172.warc.gz"}
http://math.stackexchange.com/questions/284172/given-a-function-space-with-a-norm-what-is-the-meaning-of-writing-when
# Given a function space with a norm , what is the meaning of writing $||.||$ when the used norm is $||.||_\infty$ Example 1 Given $$C_{0}(\mathbb{R}^{n})=\{f\in C(\mathbb{R^n} \ | \ \ \exists R \ge 0 \ \text{such that } f(x)=0 \ \text{for} \ ||x||\ge R \}$$ and $$||f(x)||_{\infty} = \max_{x\in R^n}|f(x)|$$ What exactly is : $||x||$ in this context? Is it : $||x||_\infty = \max |x|$ ? If the $||_||\infty$ norm is defined for functions? Is it meant for $f(x)=x$? Example 2 Given $$C^\alpha ( \mathbb{R}^\alpha)=\{ f\in B(\mathbb{R}^n)\ \ | \ \sup_{x,y \in \mathbb{R}^n , x\ne y} \frac{|f(x)-f(y)|}{||x-y||^\alpha} \}$$ and for $$f\in C^\alpha(\mathbb{R^n}) :||f||_\alpha = \sup_{x\in \mathbb{R}^n}|f(x)| + \sup_{x,y\in \mathbb{R}^n. x\ne y} \frac{|f(x)-f(y)|}{||x-y||^\alpha} < \infty$$ What exactly is $||x-y||^\alpha$ ? Is it : $||x-y||_\alpha ^\alpha = (\sup_{x\in \mathbb{R}^n}|x-y| + \sup_{x,y\in \mathbb{R}^n. x\ne y} \frac{|f(x)-f(y)|}{||x-y||^\alpha} )^\alpha < \infty$ ??? Why are the norms defined for functions and then only used for vectors?? - Your $x \in \mathbb{R}^n$ so $\| x \|$ is a norm on $\mathbb{R}^n$. All norms on finite dimensional spaces are equivalent, so it doesn't matter much which one you take. The most common choice would be the Euclidean norm $\|x\| = (x_1^2 + \cdots x_n^2)^{1/2}$. For your edited example 2 $\|x-y\|^\alpha$ almost certainly means the Euclidean norm raised to $\alpha$, i.e. $$\|x-y\|^\alpha = \big( (x_1-y_1)^2 + \cdots + (x_n - y_n)^2 \big)^{\alpha/2}$$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9710062146186829, "perplexity": 127.27907006305826}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997859240.8/warc/CC-MAIN-20140722025739-00018-ip-10-33-131-23.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/191424/preimage-of-a-continous-surjection-from-0-1-to-0-1
# Preimage of a continous surjection from (0,1) to [0,1]. For a continuous function $f: X \to Y$, the preimage of every closed set in $Y$ is closed in $X$. Let $g: (0,1) \to [0,1]$ be a continuous surjection. Isn't the preimage of $[0,1]$ = $(0,1)$ open? - Yes but it's also closed since it's the whole space. –  Matt N. Sep 5 '12 at 13:26 What about this: "$g: [0,1] \to (0,1)$ (g is onto) cannot be continuous because a continuous function maps a compact set to a compact set"? Is this wrong because $(0,1)$ is closed since its the whole space, and it is bounded, and hence compact? –  Legendre Sep 5 '12 at 13:29 Your argument that $(0,1)$ is compact uses the Heine-Borel theorem, which applies to $\mathbb{R}^n$, not arbitrary subspaces of it. It is true that $(0,1)$ is not compact in itself (under the topology induced from $\mathbb{R}$) so your argument about $g$ is correct. –  Matt Pressland Sep 5 '12 at 13:31 Re your comment question: It is not wrong. There is no continuous function $g:[0,1]\to\(0,1)$ which is onto. –  Thomas Andrews Sep 5 '12 at 13:47 And what Matt said: your argument that $(0,1)$ is compact in itself doesn't work because where you write "...hence compact" you use Heine-Borel which is a theorem about subsets of $\mathbb R^n$ but here you consider $(0,1)$ as the whole space, not as a subset of $\mathbb R$. –  Matt N. Sep 5 '12 at 13:47 show 1 more comment "Closed and bounded" is not the same as compact in general. Observe (for example) that $$\mathcal{U}=\bigl\{(1/n,1-1/n):n\in\Bbb Z,n>2\bigr\}$$ is an open cover of $(0,1)$, but has no finite subcover. Thus, you're right--such a $g$ cannot be continuous. Another way to see that it $(0,1)$ isn't compact is to observe that (for example) $$x\mapsto\cfrac{2x-1}{4(x-x^2)}$$ is a homeomorphism $(0,1)\to\Bbb R$. Since $\Bbb R$ isn't compact, then neither can $(0,1)$ be. –  Cameron Buie Sep 5 '12 at 13:43 Another way is to note that $f(x) = \frac{1}{x}$ is continuous on $(0,1)$, but unbounded. But on compact spaces, all continuous functions are bounded. (I'm not sure if this is circular or not). –  Jason DeVito Sep 5 '12 at 13:55 It isn't circular, Jason. It's certainly true that all real-valued continous functions on compact spaces are bounded (or more generally, that all metric-space-valued continuous functions on compact spaces are bounded). Thus, there can't be an unbounded continuous function $X\to\Bbb R$ for compact spaces $X$. –  Cameron Buie Sep 5 '12 at 14:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9150755405426025, "perplexity": 398.62318881598276}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00331-ip-10-147-4-33.ec2.internal.warc.gz"}
http://mathhelpforum.com/pre-calculus/35972-solved-function-question.html
# Math Help - [SOLVED] Function Question.... 1. ## [SOLVED] Function Question.... Question: Express x^2 + 4x in the form (x + a)^2 + b , stating the numerical values of a and b. The functions f and g are defined as follows: $f : x \mapsto x^2 + 4x$,██ $x \geq -2$, $g : x \mapsto x + 6$,██ $x \in R$ (i) Show that the equation $gf(x) = 0$ has no real roots. (ii) State the domain of $f^{-1}$, and find an expression in terms of $x$ for $f^{-1}(x)$. (iii) Sketch, in a single diagram, the graph of $y = f(x)$ and $y = f^{-1}(x)$, making clear the relationship between these graphs. Attempt: $x^2 + 4x$ $\Rightarrow \left(x + \frac{4}{2}\right)^2 - \left(\frac{4}{2}\right)^2$ $\Rightarrow (x + 2)^2 - 4$ $a = 2$ , $b = -4$ $(i) gf(x) = (x^2 + 4x) + 6$ $gf(x) = (x + 2)^2 - 4 + 6$ $gf(x) = (x + 2)^2 + 2$ $(x + 2)(x + 2)$ $\Rightarrow x^2 + 2x + 2x + 4$ $\Rightarrow x^2 + 4x + 4$ $gf(x) = x^2 + 4x + 6$ $a = 1, b = 4, c = 6$ $x = \frac{-b \pm \sqrt{b^2-4ac}}{2a}$ $x = \frac{4 \pm \sqrt{4^2 - 4 \times 1 \times 6}}{2 \times 1}$ I can't complete because I get a negative value in the square root. Where did I go wrong? 2. Originally Posted by looi76 Question: Express x^2 + 4x in the form (x + a)^2 + b , stating the numerical values of a and b. The functions f and g are defined as follows: $f : x \mapsto x^2 + 4x$,██ $x \geq -2$, $g : x \mapsto x + 6$,██ $x \in R$ (i) Show that the equation $gf(x) = 0$ has no real roots. (ii) State the domain of $f^{-1}$, and find an expression in terms of $x$ for $f^{-1}(x)$. (iii) Sketch, in a single diagram, the graph of $y = f(x)$ and $y = f^{-1}(x)$, making clear the relationship between these graphs. Attempt: $x^2 + 4x$ $\Rightarrow \left(x + \frac{4}{2}\right)^2 - \left(\frac{4}{2}\right)^2$ $\Rightarrow (x + 2)^2 - 4$ $a = 2$ , $b = -4$ $(i) gf(x) = (x^2 + 4x) + 6$ $gf(x) = (x + 2)^2 - 4 + 6$ $gf(x) = (x + 2)^2 + 2$ $(x + 2)(x + 2)$ $\Rightarrow x^2 + 2x + 2x + 4$ $\Rightarrow x^2 + 4x + 4$ $gf(x) = x^2 + 4x + 6$ $a = 1, b = 4, c = 6$ $x = \frac{-b \pm \sqrt{b^2-4ac}}{2a}$ $x = \frac{4 \pm \sqrt{4^2 - 4 \times 1 \times 6}}{2 \times 1}$ I can't complete because I get a negative value in the square root. Where did I go wrong? The fact that you get a negative value in the square root illustrates that the function $x^2 + 4x + 6$ has no real roots, which is what you want to show. 3. Thanks icemanfan (ii) $(y + 2)^2 - 4 = x$ $(y + 2)^2 = x + 4$ $y + 2 = \sqrt{x + 4}$ $y = \sqrt{x + 4} - 2$ $f^{-1}(x) = \sqrt{x + 4} - 2$ What is the domain and what is the range? 4. i hope you know the rules on real and imaginary roots: ax^2 + b.x + c if b^2 - 4.a.c > 0 you will have two different real roots if b^2 - 4.a.c = 0 you will have equal real roots if b^2 - 4.a.c < 0 you will have imaginary conjugate roots . . . = (x+2)^2 +2 = x^2 + 4x +6 ====> a=1, b=4, c=6 b^2 - 4.a.c = -8 (two imaginary roots) 5. Originally Posted by looi76 Thanks icemanfan (ii) $(y + 2)^2 - 4 = x$ $(y + 2)^2 = x + 4$ $y + 2 = \sqrt{x + 4}$ $y = \sqrt{x + 4} - 2$ $f^{-1}(x) = \sqrt{x + 4} - 2$ What is the domain and what is the range? This is a tricky question. It turns out that you can also find the inverse function $y = -\sqrt{x + 4} - 2$. But for the function $y = \sqrt{x + 4} - 2$, the domain is $x \geq -4$, all of the numbers that will maintain a nonnegative value within the square root. The range for this function is $y \geq -2$, since the square root is always greater than or equal to zero. See if you can find the domain and range for the other inverse function. i hope you know the rules on real and imaginary roots: ax^2 + b.x + c if b^2 - 4.a.c > 0 you will have two different real roots if b^2 - 4.a.c = 0 you will have equal real roots if b^2 - 4.a.c < 0 you will have imaginary conjugate roots . . . = (x+2)^2 +2 = x^2 + 4x +6 ====> a=1, b=4, c=6 b^2 - 4.a.c = -8 (two imaginary roots) What does Discriminant mean? In the textbook it is written that "Discriminant is -8 < 0" 7. Originally Posted by looi76 (iii) Sketch, in a single diagram, the graph of $y = f(x)$ and $y = f^{-1}(x)$, making clear the relationship between these graphs. 8. b^2 - 4a.c is called the discriminant. 9. Originally Posted by looi76 What does Discriminant mean? In the textbook it is written that "Discriminant is -8 < 0" Hello, Discriminant - Wikipedia, the free encyclopedia To sum up : If you have an equation $ax^2+bx+c=0$ $\Delta=b^2-4ac$ If $\Delta <0$, the equation has no real root and $ax^2+bx+c$ is of the same sign as a. If $\Delta=0$, $ax^2+bx+c=\left(x+\frac{b}{2a}\right)^2$ If $\Delta>0$, $ax^2+bx+c=\left(x-\frac{-b+\sqrt{\Delta}}{2a}\right)\left(x-\frac{-b-\sqrt{\Delta}}{2a}\right)$ b^2 - 4a.c is called the discriminant. Originally Posted by Moo Hello, Discriminant - Wikipedia, the free encyclopedia To sum up : If you have an equation $ax^2+bx+c=0$ $\Delta=b^2-4ac$ If $\Delta <0$, the equation has no real root and $ax^2+bx+c$ is of the same sign as a. If $\Delta=0$, $ax^2+bx+c=\left(x+\frac{b}{2a}\right)^2$ If $\Delta>0$, $ax^2+bx+c=\left(x-\frac{-b+\sqrt{\Delta}}{2a}\right)\left(x-\frac{-b-\sqrt{\Delta}}{2a}\right)$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 83, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8685382008552551, "perplexity": 564.9730541307271}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701174607.44/warc/CC-MAIN-20160205193934-00340-ip-10-236-182-209.ec2.internal.warc.gz"}
http://slideplayer.com/slide/4157274/
# Variance reduction techniques. 2 Introduction Simulation models should be coded such that they are efficient. Efficiency in terms of programming ensures. ## Presentation on theme: "Variance reduction techniques. 2 Introduction Simulation models should be coded such that they are efficient. Efficiency in terms of programming ensures."— Presentation transcript: Variance reduction techniques 2 Introduction Simulation models should be coded such that they are efficient. Efficiency in terms of programming ensures expedited execution and minimized storage requirements. “Statistical” efficiency is related to the variance of the output random variables from a simulation. If we can somehow reduce the variance of an output random variable of interest without disturbing its expectation, we can obtain greater precision. That is, smaller confidence intervals, for the same amount of simulating, or, alternatively, achieve a desired precision with less simulating. 3 Introduction The method of applying Variance Reduction Techniques (VRTs) usually depends on the particular model(s) of interest. Hence, a complete understanding of the model is required for proper use of VRTs. Typically, it is impossible to know beforehand how great a variance reduction might be realized. Or worse, whether the variance will be reduced at all in comparison with straight-forward simulation. If affordable, primary runs should be made to compare results of applying a VRT with those from straight-forward simulation. 4 Introduction Some VRTs are themselves going to increase computing costs, and this decrease in computational efficiency must be traded off against the potential gain in statistical efficiency. Almost all VRTs require some extra effort on the part of the analyst. This could just be to understand the technique, or sometimes little more! 5 Common random numbers (CRN) Probably the most used and popular VRT. More commonly used for comparing multiple systems rather than for analyzing a single system. Basic principle: “we should compare alternate configurations under similar experimental conditions.” Hence we will be more confident that the observed differences are due to differences in the system configuration rather than the fluctuations of the “experimental conditions.” In our simulation experiment, these experimental conditions are the generated random variates that are used to drive the model through the simulated time. 6 Common random numbers (CRN) The name for this techniques is because of the possibility in many situations of using the same basic U(0,1) random numbers to drive each of the alternate configurations through time. To see the rationale for the use of CRN, consider two systems to be compared with output performance parameters X 1j and X 2j, respectively for replication j. Let µ i = E[X i ] be the expected output measure for system i. We are interested in ζ = µ 1 - µ 2. Let Z j = X 1j – X 2j for j = 1,2 … n. Then, E[Z j ] = ζ. That is, 7 Common random numbers (CRN) Since Z j ’s are IID variables: If the simulations of two different configurations are done independently, with different random numbers, X 1j and X 2j will be independent. So the covariance will be zero. If we could somehow simulate the two configurations so that X 1j and X 2j, are positively co-related, then Cov(X 1j, X 2j ) > 0 so that the variance of the sample mean is reduced. So its value is closer to the population parameter ζ. 8 Common random numbers (CRN) CRN is a technique where we try to introduce this positive correlation by using the same random numbers to simulate all configurations. However, success of using CRN is not guaranteed. We can see that as long as the output performance measures for two configurations X 1j and X 2j, react monotonically to the common random numbers, CRN works. However, if X 1j and X 2j react in opposite directions to the random variables, CRN backfires. Another drawback of CRN is that formal statistical analyses can be complicated by the induced correlation. 9 Common random numbers (CRN) Synchronization To implement CRN, we must match up, or synchronize, the random numbers across different configurations on a particular replications. Ideally, a specific random variable should be used for a specific purpose on one configuration is used for exactly same purpose on all configurations. For example, say we are comparing different configurations of a queuing system. If a random number is used to generate service time for one system, the same random number should be used to generate service times for the other systems. 10 Antithetic variates Antithetic variates (AV) is a VRT that is more applicable in simulating a single system. As with CRN, we try to induce correlation between separate runs, but now we seek negative correlation. Basic idea: Make pairs of runs of the model such that a “small” observation on one of the run in the pair tends to be offset by a “large” observation on the other. So the two observations are negatively correlated. Then, if we use the average of the two observations in the pair as a basic data point for analysis, it will tend to be closer to the common expectation µ of an observation than it would be if the two observations in the pair were independent. 11 Antithetic variates AV induces negative correlation by using complementary random numbers to drive the two runs of the pair. If U is a particular random number used for a particular purpose in the first run, we use 1 – U for the same purpose in the second run. This number 1 – U is valid because if U ~ U(0,1) then (1 – U) ~ U(0,1). Note that synchronization is required in AV too – use of complementary random numbers for the same purpose in a pair. 12 Antithetic variates Suppose that we make n pairs of runs of the simulation model resulting in observations (X 1 (1), X 1 (2) ) … (X n (1), X n (2) ), where X j (1) is from the first run of the jth pair, and X j (2) is from the antithetic run of the jth pair. Both X j (1) are X j (2) are legitimate that is E(X j (1) ) = E(X j (2) ) = µ. Also each pair is independent of every other pair. In fact, total number of replications are thus 2n. For j = 1, 2 …n, let X j = (X j (1) + X j (2) )/2 and let average of X j ’s be the unbiased point estimator for population mean µ. 13 Antithetic variates Since, X j ’s are IID, If the two runs within a pair were made independently, then Cov(X j (1), X j (2) ) = 0. On the other hand, if we could induce negative correlation between X j (1) are X j (2) then Cov(X j (1), X j (2) ) < 0, which reduces Var[ ]. This is the goal of AV. 14 Antithetic variates Like CRN, AV doesn’t guarantees that the method will work every time. For AV to work, its response to a random number used for a particular purpose needs to monotonic – in either direction. How about combining CRN and AV? We can drive each configuration using AV and then use CRN to simulate multiple configurations under similar conditions. 15 Control variates The method of Control Variates (CV) attempts to take advantage of correlation between certain random variables to obtain a variance reduction. Suppose we are interested in the output parameter X. Particularly, we want µ = E[X]. Suppose Y be another random variable involved in the same simulation that is thought to be correlated with X – either positively or negatively. Also suppose that we know the value ν = E[Y]. 16 Control variates If X and Y are positively correlated, then it is highly likely that in a particular simulation run, Y > ν would lead to X > µ. Thus, if in a run, we notice that Y > ν, we might suspect that X is above its expectation µ as well, and accordingly we adjust X downward by some amount. Alternatively, if we find Y < ν, then we would suspect X < µ as well and adjust X upward accordingly. This way, we use our knowledge of Y’s expectation to pull X (up or down) towards its expected value µ, thus reducing variability about µ from one run to next. We call Y a control variate of X. 17 Control variates Unlike CRN or AV, success of CV does not depend on the correlation being of a particular sign. If Y and X are negatively correlated, CV would still work. Now, we would simply adjust X upward if Y > ν and downward if Y < ν. To implement this idea, we need to quantify the amount of the upward or downward adjustment of X. We will express this quantification in terms of the deviation Y – ν, of Y from its expectation. 18 Control variates Let a be a constant that has the same sign as correlation between Y and X. We use a to scale the deviation Y – ν to arrive at an adjustment to X and thus define the “controlled” estimator: X c = X – a(Y – ν). Since µ = E[X] and ν = E[Y], then for any real number a, E(X c ) = µ. So is an unbiased estimator of µ and may have lower variance. Var(X c ) = Var(X) + a 2 Var(Y) – 2a Cov(X, Y). So has less variance if and only if: a 2 Var(Y) < 2a Cov(X, Y). 19 Control variates We need to choose the value of a carefully so that the condition is always satisfied. The optimal value is: In practice, though, its bit difficult than it appears! Depending on the source and nature of the control variate Y, we may not know Var(Y) and certainly not know Cov(X, Y). Hence obtaining the optimal value of a might be difficult. Download ppt "Variance reduction techniques. 2 Introduction Simulation models should be coded such that they are efficient. Efficiency in terms of programming ensures." Similar presentations
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9182391166687012, "perplexity": 842.7370708252129}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818692236.58/warc/CC-MAIN-20170925164022-20170925184022-00712.warc.gz"}
https://electronicspost.com/resistors-in-series-and-parallel-combinations/
# Resistors in Series and Parallel Combinations ## Resistors in Series and Parallel Combinations In our previous post about resistors , we studied about different types of resistors. In some cases when we do not get the desires or specific resistor values we have to either use variable resistors such as potentiometers or presets to obtain such precise values. However,  such pots are too expensive to use for every case. Another method to do this,  is to combine two or more resistors to obtain the necessary precise values.Such resistor combinations cost very less. Now the question arises as to how one should combine these resistors. The resistors  can be combined in two different ways such as : 1. Series Combinations 2. Parallel combinations ### Resistors in Series Resistors are said to be connected in “Series“, when they are daisy chained together in a single line. Calculating values for two or more resistors in series is simple, just add all the values up. The  series connection ensures that the SAME current flows through all resistors. In this type of connection R Total  will always be GREATER than any of the included resistors. The total resistance is the sum of all the resistors connected in series and is given by the expression  : R Total = R1 + R2 + R3 +………… #### Example : • As the resistors are connected together in series,  the same current passes through each resistor in the chain and the total resistance, RTotal of the circuit must be equal to the sum of all the individual resistors added together. That is R Total = R1 + R2 • The total applied voltage  V is divided by the two resistors. • The current in the circuit is given as: •  Using Ohm’s Law,the voltages across R1 and R2 are  given as : • Hence, the total voltage is given as : • For example if we take V= 6 V , R1= 1 kΩ  and  R= 2 kΩ, then R Total = 1 kΩ+ 2 kΩ = 3 kΩ I = 6V/3kΩ = 2 mA Voltage across the 1 kΩ resistor is      V1 =  2 mA × 1 kΩ = 2 V Voltage across the 2 kΩ resistor is      V2 =  2 mA × 2 kΩ = 4 V So we see that we can replace the two individual resistors above with just one single “equivalent” resistor                     which will have a value of 3 . This total resistance is generally known as the Equivalent Resistance and can be defined as;  “a single value of resistance that can replace any number of resistors in series without altering the values of the current or the voltage in the circuit“. The series connection can be  characterized by the following points : 1. The same current flows through all the resistors connected in series. 2. The resultant resistor is the SUM of all resistors in series. 3. Series resistors divide the total applied voltage proportional to their magnitude. ### Voltage Divider Circuit Since the series resistors divide voltage, this idea an be used to get smaller voltage from a power supply output. For example, we have a power supply with 10V fixed output. But we want only 5V from it. How to get it ? The circuit shown above consists of  two resistors, R1 and R2 connected together in series across the supply voltage Vin. The current I is given by: Since the current I flows through   Ras well as R2, hence, by using Ohm’s law, the voltage developed across Ris given by : If  R= R,    then Vout  = Vin  /2 If more resistors are connected in series to the circuit then different voltages will appear across each resistor in turn with regards to their individual resistance  values providing different but smaller voltage points from one single supply. ### Resistors in Parallel Resistors are said to be connected together in “Parallel” when both of their terminals are respectively connected to each terminal of the other resistor or resistors. ### Parallel Combination The fig. below shows the circuit of resistors in parallel combination where two resistors Rand R2  are connected in parallel across the supply voltage E . As we can see from the fig. above : • There are two paths available for Current. Hence current divides. • But voltage across the resistors are the same. •  If the two resistors are equal the current will divide equally and the RTotal will be exactly half of either resistor or exactly one third if there are three equal resistors. • In general we can say : ### Currents in a Parallel Resistor Circuit In a parallel resistor circuit the voltage remains same across each resistor connected in parallel. However,  the current through each parallel  resistor is not necessarily the same since the value of the resistance in each branch determines the current within that branch. The total current, ITotal  in a parallel resistor circuit is the sum of the individual currents flowing in all the parallel branches which can be determined by using Ohm’s law. #### Example Let’s take  the voltage E be 6V. The resistors be R= 1 kΩ and R= 2 kΩ. By using Ohm’s law , the current through R= 6 V/ 1 kΩ = 6 mA  and   current through R=  6 V/ 2 kΩ = 3 mA Hence the total current is 6 mA + 3 mA = 9 mA 6 V will generate 9 mA only when the total resistance of the circuit is equal to : 6 V/ 9 mA = 0.66 kΩ Hence the effective resistance of  Rand  Rconnected in parallel is 0.66 kΩ. This effective resistance can also be calculated by using the  formula as below : Thus the parallel connection can be characterized by : 1. The same voltage exists across all the resistors connected in parallel . 2. The reciprocal of resultant or total resistance is the sum of reciprocals of all resistors in parallel . 3. Parallel resistors divide the total current in an inverse proportion to their magnitude. 4. When a set of resistors are connected in parallel, the effective resistance is always smaller then the smallest in the set. For example:  Let 1 kΩ and 10 kΩ resistors are  in parallel . Then the resultant is    (1 k × 10 k)/ 11 k = 0.9 kΩ , which is smaller than 1 k ( the smallest). ### Resistors in Series and Parallel Combinations In some electrical and electronic circuits it is required to connect various resistors together in “BOTH” parallel and series combinations within the same circuit and produce more complex resistive networks. Now the question arises, how do we calculate the combined or total circuit resistance, currents and voltages for these resistive combinations. Resistor circuits that combine series and parallel resistors networks together are generally known as Resistor Combination or mixed resistor circuits. The method of calculating the circuits equivalent resistance is the same as that for any individual series or parallel circuit. The most important thing to keep in mind in such calculations is that resistors in series carry exactly the same current and that resistors in parallel have exactly the same voltage across them. #### Example Let us consider the circuit shown in fig. below : In the above circuit  let us calculate the total current ( IT ) taken from the 12 v supply. We can see that the two resistors, R2 and R3 are actually connected in a “SERIES” combination so we can add them together to produce an equivalent resistance.  The resultant resistance for this combination would therefore be: R2+ R= 8 Ω +4 Ω = 12 Ω So we can replace both resistor R2 and R3 above with a single resistor of resistance value 12 Ω as shown in fig. below: So our circuit now has a single resistor RA in “PARALLEL” with the resistor R4. Using our resistors in parallel equation we can reduce this parallel combination to a single equivalent resistor value of R(combination) using the formula for two parallel connected resistors as follows. The resultant resistive circuit now looks something like this: We can see that the two remaining resistances, R1 and R(combination) are connected together in a “SERIES” combination and again they can be added together (resistors in series) so that the total circuit resistance therefore given as: A single resistance of just 12 Ω can be used to replace the original four resistors connected together in the original circuit. Now by using Ohm’s law, the value of the circuit current ( I) is simply calculated as:
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.850891649723053, "perplexity": 1179.5212590048823}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487630175.17/warc/CC-MAIN-20210625115905-20210625145905-00389.warc.gz"}
http://www.physicsforums.com/showthread.php?t=9869
# Pi and Space-time by hedons Tags: spacetime P: 38 Is the value of Pi related to the curvature of space-time? Is it the value that it is because space-time is more or less flat? If the universe were of a greater open or closed curve, would Pi be a different value? Thanks, Glenn Sci Advisor PF Gold P: 2,226 Pi is a constant, so it's value is absolute. but if you take it as the ratio between the area of a circle and the square of it's radius then you are perfectly correct, it's value is dependent on the curvature of space time. If we imagine the universe (ignoring all other curvature) as negatively or postively curved then the ratio of a circle's area to it's radius squared would not by pi, but it would tend to pi the smaller the radius of the circle. P: 271 I fundamentally believe that it's related to the curvature of space-time itself. If space-time had a different curvature, I believe that would have a different value. For our 3-dimensional space/time pairing, I suspect that 3.00000000000. . . would be the value if the cosmological constant were exactly one. Some current workers in this field suggest that the graviton must be a super-massive particle. I doubt this, however. Even if they find a super-massive particle that they believe to be the mediator of the gravitational force, I think that eventually a very much less massive particle will turn up. This is, of course, rather muddied by the fact that their is a relationship between mass, momentum and energy that makes mass a variable for any given object or moment. The fact that no quantizable unit of mass has been found subatomically, even more fundamental than quarks, may be due to this relationship. PF Gold P: 2,226 ## Pi and Space-time What's the cosmological constant got to do with this? I think you mean the density parameter $\Omega$ which is equal to 1 when the curvature of the universe $\kappa[/tex] is equal to zero. Pi is not just a geometric property it is a number in it's own right and can only have 1 value and whatever the curvature of the universe this is not 3. Even in a non-flat space, the ratio of a circels area to it's radius squared can only equal three for certain sized circles in spaces with a certain curvature. In a truly flat space a circle will always have the ratio pi which is equal to 3.14..... I've never herad of the graviton evre being proposed as a massive particle as such a thing would go against what is known about gravity and what quantum field theory says. Sci Advisor P: 5,891 Pi is the ratio of the circumference to the diameter of a circle in Euclidean (plane) geometry. It is a mathematical constant and has nothing to with the physical geometry of the universe. P: 2 Quote by mathman Pi is the ratio of the circumference to the diameter of a circle in Euclidean (plane) geometry. It is a mathematical constant and has nothing to with the physical geometry of the universe. The carefully measured value of pi is determined by measuring the ratio of the circumference to the diameter of a circle in the real-world, where the curvature of space-time is non-zero! The carefully measured value of pi may be characteristic of the local curvature of space time. Likewise, phi is a constant that is characteristic of growth in the natural world, and phi is related to pi by phi = 2 cos(pi/5). Math Emeritus Sci Advisor Thanks PF Gold P: 38,706 Quote by jbacsa The carefully measured value of pi is determined by measuring the ratio of the circumference to the diameter of a circle in the real-world, where the curvature of space-time is non-zero! The carefully measured value of pi may be characteristic of the local curvature of space time. Likewise, phi is a constant that is characteristic of growth in the natural world, and phi is related to pi by phi = 2 cos(pi/5). No, [itex]\pi$ is not a "carefully measured value". It is an exact value based on abstract mathematics is not "measured" at all. $\pi$ does NOT depend upon the local curvature of space time. $\pi$ is the ratio of circumference to diameter in a (mathematically abstract) circle in (mathematically abstract) Euclidean space (which has measure 0). It makes no sense to talk about $\pi$ in that sense in a non-Euclidean space since then the ratio of the circumference of a circle to its diameter is not a constant at all. Yes, "phi is related to pi by phi = 2 cos(pi/5)", but phi is not "characteristic of growth in the natural world" except in a few special situations where we can find a number approximately equal to phi. Mentor P: 16,283 Quote by Jeebus For our 3-dimensional space/time pairing, I suspect that 3.00000000000. . . would be the value if the cosmological constant were exactly one. No, pi would only be 3 if a circle were a hexagon. Mentor P: 8,272 This thread is extremely old (4 and a half years old, in fact!). I suspect that when the post was made, PF had a different level strictness than it does now. I think this thread should be closed or, at best, moved to the mathematics forum, since this has nothing to do with relativity! P: 2 Quote by cristo This thread is extremely old (4 and a half years old, in fact!). I suspect that when the post was made, PF had a different level strictness than it does now. I think this thread should be closed or, at best, moved to the mathematics forum, since this has nothing to do with relativity! OK - found this (old) thread as I was looking for information on the relationship of pi and phi. This discussion though might be relevant to relativity, as space-time is considered to be Euclidean, with distortions due to mass. I accept the comment posted by HallsofIvy that pi is based on a mathematically abstract circle - but we live in the real world, with non-Euclidean space-time that we fail to notice. I.e. an empirically determined value of pi is based on a non-Euclidean circle. Some would argue that "all growth structures are regulated by the golden mean". Mentor P: 14,243 Quote by jbacsa I.e. an empirically determined value of pi is based on a non-Euclidean circle. We do not determine the value of pi empirically. It is not a measured constant, end of story. We already know the value of pi to far, far greater accuracy than the best any scientific measurement. The fine structure constant is one of the (if not the) best-measured physical constants; we know it to twelve places of accuracy. Many a nerd can spout pi to twenty places or more; with computers we know the value to millions of places. Quote by cristo This thread is extremely old (4 and a half years old, in fact!). I suspect that when the post was made, PF had a different level strictness than it does now. I think this thread should be closed or, at best, moved to the mathematics forum, since this has nothing to do with relativity! This thread doesn't have much to do with math, either, because pi is not a measured quantity. The best thing to do with this thread (and similar necromanced threads) is to nuke it to oblivion. P: 15 Pi is totally mathematical constant and has no relation with experiment; it can be computed in many ways. Check this link for more information (http://en.wikipedia.org/wiki/Computing_%CF%80). So if there is a curvature of space then the circle will have different ratio (circumference/radius) than (2π), and that is one way to know that the space is curved, so the flat circle is from our (Mind Inventions and Creation) and have nothing to do with Physics, and Math is not an experimental science it is an absolutely (A Mind Creation). P: 213 pi is a mathematical constant...if you introduce curvature of space-time into calculating the value of pi then you are introducing curvature of space-time into circle...so upon introducing this curvature,circle cant be a circle becasue the distance from the center to its boundary would be different in some places if you introduce curvature of space-time into circle and hence circle(curved spacetime) is not a circle and so the pi(curved spacetime) is not the pi... P: 19 I kinda support Jeebus's comments. Here is why i do believe so. If you look at the way relativity was discovered, it goes like this 1. Newton said - space-time and fundamental observables like mass are absolute. 2. SR said - these are all relative 3. GR said - space-time bends, they are not even straight. in all these, one thing is either assumed or taken as axiom - "Laws or nature are absolute, they will not vary". Laws of nature are essentially mathematics expressions. If you take realtivity to the next level you may get "Mathematics is not absolute either" - I know this goes against the current school of thought which was indeed started by Plato. What of Maths was relative. say - under heavy curvature of space time 1 electron + 1 more may not give 2 electorns. in other words let the number line be bent - this will distort PI, Exp and all other magical numbers. A straight numberline = euceldain geometry. wht if the number line was bent - bent against the imaginary axis ? P: 869 Quote by Jeebus I fundamentally believe that it's related to the curvature of space-time itself. If space-time had a different curvature, I believe that would have a different value. For our 3-dimensional space/time pairing, I suspect that 3.00000000000. . . would be the value if the cosmological constant were exactly one. I guess that would explain why the Bible has it as exactly 3. P: 28 Quote by HallsofIvy No, $\pi$ is not a "carefully measured value". It is an exact value based on abstract mathematics is not "measured" at all. $\pi$ does NOT depend upon the local curvature of space time. $\pi$ is the ratio of circumference to diameter in a (mathematically abstract) circle in (mathematically abstract) Euclidean space (which has measure 0). It makes no sense to talk about $\pi$ in that sense in a non-Euclidean space since then the ratio of the circumference of a circle to its diameter is not a constant at all. Yes, "phi is related to pi by phi = 2 cos(pi/5)", but phi is not "characteristic of growth in the natural world" except in a few special situations where we can find a number approximately equal to phi. but we also have to see at reality and not only at the abstract math Mentor P: 14,243 This is getting ridiculous. [itex[\pi[/itex] is an abstract mathematical concept. We will not need to change the value for $\pi$ if we find the universe is curved. We already know that the circumference of a circle of radius $r$ on the surface of the Earth is (assuming a spherical Earth), $c=2\pi R_e \sin\frac r {R_e}$ rather than $c=2\pi r$. This fact does not alter either the value of $\pi$ or the equation of the radius of a circle on a Euclidean plane. P: 15 It is so Silly !!! Pi is totally mathematical constant, it will not be changed with any change in the space or time or atoms or electrons .............. If physics would try to change mathematics then all physics theories will be a trash, since that special relativity and the general relativity are based on (strong mathematical background + experiments) or else it will not be anymore an acceptable theory. So if you say that a physical theory will change mathematics, then you are saying that a theory will change itself. This issue is a philosophically problem, it belong to the (Philosophy of Mathematics). Related Discussions Special & General Relativity 10 Special & General Relativity 34 General Physics 1
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9018499851226807, "perplexity": 549.6777357471678}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999678977/warc/CC-MAIN-20140305060758-00066-ip-10-183-142-35.ec2.internal.warc.gz"}
https://studydaddy.com/question/let-be-the-set-of-all-calendar-dates-from-4-30-1789-to-9-25-2014-let-be-the-set
QUESTION # Let be the set of all calendar dates from 4/30/1789 to 9/25/2014. Let be the set of names of U. presidents. Let be the set of names of first ladies... Let  be the set of all calendar dates from 4/30/1789 to 9/25/2014. Let  be the set of names of U.S. presidents. Let  be the set of names of first ladies of the U.S. Consider the following two functions  where  is the name of the president of the U.S. on date  (the incumbent on inauguration days, to avoid ambiguity) and  where  is the name of the first lady of the U.S. during the presidency of president .
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8111115097999573, "perplexity": 1311.761967290753}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823710.44/warc/CC-MAIN-20181212000955-20181212022455-00200.warc.gz"}
http://www.oxfordmathcenter.com/drupal7/node/198
# Exercises - Linear Combinations 1. Find all solutions in integers to the following. (Hint: First find one solution in integers by successively writing each remainder seen in Euclid's Algorithm as an appropriate linear combination. Then, consider how you can alter together the values of $x$ and $y$, without adding anything to the left side of each equation.) 1. $105x + 121y = 1$ 2. $12345x + 67890y = \textrm{gcd}(12345,67890)$ 3. $54321x + 9876y = \textrm{gcd}(54321,9876)$ 2. What follows is a modified version of the Euclidean Algorithm: 1. Set $x=1$, $g=a$, $v=0$, $w=b$, $s=0$, and $t=0$. 2. If $w=0$, let $y=(g-ax)/b$ and stop. 3. Find $q$ and $t$ so that $g = qw + t$, with $0 \le t \lt w$. 4. Set $s=x-qv$ 5. Set $(x,g) = (v,w)$ 6. Set $(v,w) = (s,t)$ 7. Go to step ii. Use this algorithm to determine the greatest common divisor, $g$, of $a$ and $b$, as well as the solutions to $ax+by=g$ for the following values of $a$ and $b$: 1. $a=19789$, $b=23548$ 2. $a=31875$, $b=8387$ 3. $a=22241739$, $b=19848039$ 3. The following questions concern linear combinations of three values: 1. Find a solution in integers to $6x + 15y + 20z = 1$. 2. Under what conditions are their integers $x,y,$ and $z$ where $ax + by + cz = 1$? Describe a method to find such a solution, when it exists. 3. Use your method from (b) to find a solution in integers to $15x+341y+385z=1$ 4. Suppose $\gcd(a,b)=1$. Prove that $ax+by=c$ has integer solutions $x$ and $y$ for every integer $c$, then find a solution to $37x+47y=103$ where $x$ and $y$ are as small as possible. ◆ ◆ ◆
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9431496858596802, "perplexity": 213.91733116044884}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591683.78/warc/CC-MAIN-20180720135213-20180720155213-00034.warc.gz"}
https://helpdesk.uts.sc.edu/study/colleges_schools/artsandsciences/mathematics/beyond_classroom/colloquia_and_seminars/analysis_seminar/index.php
# Department of Mathematics ## Analysis Seminar We invite speakers to present original research in analysis. ### 2021 – 2022 Academic Year Organized by: Daniel Dix ( [email protected] ) This is a traditional in-person seminar. No recordings are planned. Come and participate! Organizational Meeting • Friday, Feb 4 • 3pm • COL 1015 Daniel Dix • Friday, Feb 11 • 2:15pm • COL 1015 Abstract: This will be an overview of how an interesting groupoid can be derived from a molecular system of three identical nuclei plus some number of electrons. The structure of the groupoid will be fully determined, and that will significantly constrain the electronic energy eigenvalue intersection patterns for the molecule. Daniel Dix • Friday, Feb 18 • 2:15pm • COL 1015 Abstract: We will show how a groupoid arises from the tangent mapping of a section of an associated bundle to the $$C^2$$ invariant subspace bundle (that we derived from a triatomic molecular system in Part 1) at a triple eigenvalue intersection point that has maximal $$S_3$$ symmetry. By linearization and passage to the range of the tangent mapping we arrive at a computable groupoid that gives information about the eigenvalue intersections of the molecular system. Daniel Dix • Friday, Feb 25 • 2:15pm • COL 1015 Abstract: If $$f\colon \mathbb R^n\to M$$ is a $$C^2$$ mapping, where $$M$$ is an $$m$$-dimensional manifold, equipped with an atlas of homeomorphisms $$\phi_\mu\colon U_\mu\to V_\mu$$, where $$U_\mu\subset\mathbb R^m$$ and $$V_\mu\subset M$$ are open sets, with $$C^2$$ overlap mappings, and $$\mathbf l_0\in\mathbb R^n$$, then there is a natural groupoid defined as follows. The objects are pairs $$(\mu,A)$$, where $$f(\mathbf l_0)\in V_\mu$$ and $$A=D(\phi_\mu^{-1}\circ f)(\mathbf l_0)$$. An arrow between objects $$(\mu,A)$$ and $$(\nu,B)$$ is determined by a triple $$(\mu,G_{\nu,\mu},\nu)$$, where $$G_{\nu,\mu}$$ is a linear isomorphism so that $$B=G_{\nu,\mu}A$$, i.e. $$G_{\nu,\mu}=D(\phi_\nu^{-1}\circ\phi_\mu)(\phi_\mu^{-1}(f(\mathbf l_0)))$$. This groupoid is another way of presenting the tangent mapping (differential) of $$f$$ at $$\mathbf l_0$$. We apply this construction where $$n=3$$ and $$M=\mathfrak B$$ and $$f(\mathbf l) =(\mathbf l,\Pi\breve{\mathcal H}(\mathbf l))$$, where $$\Pi\breve{\mathcal H}$$ is the trace-free projection of the molecular electronic Hamiltonian restricted to a 3-dimensional invariant subspace $$\mathcal F(\mathbf l)$$, and where $$\mathbf l_0$$ is an equilateral triangle configuration at which the three lowest eigenvalues of $$\breve{\mathcal H}(\mathbf l_0)$$ coincide. This construction, combined with certain functorial (groupoid homomorphism) images, leads to a groupoid we can completely compute. Ralph Howard • Friday, Mar 18 • 2:15pm • COL 1015 Abstract:  For curves in the plane which have linearly independent velocity and acceleration vectors there a notion of affine arclength and affine curvature which is invariant under area preserving affine maps of the plane.  In terms of the Euclidean arclength $$s$$ and curvature $$\kappa$$ the affine arclength is $$\int_a^b \kappa^{1/3} ds$$ We will outline the basic theory of the differential geometryof affine curves and give some new results which estimate the area bounded by the curve and the segment between the endpoints of the curve in terms of the affine arclength of  the curve and its affine curvature. Most of the proofs do not involve any mathematics not in in Math 241 and 242 (or Math 550 and 520). Ralph Howard • Friday, Mar 18 • 2:15pm • COL 1015 Abstract:  For curves in the plane which have linearly independent velocity and acceleration vectors there a notion of affine arclength and affine curvature which is invariant under area preserving affine maps of the plane.  In terms of the Euclidean arclength $$s$$ and curvature $$\kappa$$ the affine arclength is $$\int_a^b \kappa^{1/3} ds$$ We will outline the basic theory of the differential geometryof affine curves and give some new results which estimate the area bounded by the curve and the segment between the endpoints of the curve in terms of the affine arclength of  the curve and its affine curvature. Most of the proofs do not involve any mathematics not in in Math 241 and 242 (or Math 550 and 520). Stephen Fenner • Friday, Apr 8 • 2:15pm • COL 1015 Abstract: The quantum fanout gate has been used to speed up quantum algorithms such as the quantum Fourier transform used in Shor's quantum algorithm for factoring.  Fanout can be implemented by evolving a system of qubits via a simple Hamiltonian involving pairwise interqubit couplings of various strengths.  We characterize exactly which coupling strengths are sufficient for fanout: they are sufficient if and only if they are odd multiples of some constant energy value J.  We also investigate when these couplings can arise assuming that strengths vary inversely proportional to the squares of the distances between qubits. This is joint work with Rabins Wosti. Rabins Wosti, Computer Science and Engineering Department • Friday, Apr 15 • 2:15pm • COL 1015 Abstract: The quantum fanout gate has been used to speed up quantum algorithms such as the quantum Fourier transform used in Shor's quantum algorithm for factoring.  Fanout can be implemented by evolving a system of qubits via a simple Hamiltonian involving pairwise interqubit couplings of various strengths.  We characterize exactly which coupling strengths are sufficient for fanout: they are sufficient if and only if they are odd multiples of some constant energy value J.  We also investigate when these couplings can arise assuming that strengths vary inversely proportional to the squares of the distances between qubits. This is joint work with Stephen Fenner. Margarite Laborde • Friday, Apr 22 • 2:15pm • COL 1015 Abstract: Symmetry laws showcase the elegant relationship between mathematics and physical systems. Noether’s theorem, which relates symmetries in a Hamiltonian with conserved physical quantities, is one of the most impactful theorems throughout physics. As such, describing this property in a Hamiltonian is of the utmost importance in many applications–from determining state transition laws to expressing resource theories. In this talk, I give algorithms to determine if a Hamiltonian is symmetric with respect to a discrete, finite group $$G$$ and its associated unitary representation $$\{U(g)\}_{g\in G}$$. Furthermore, I directly relate the acceptance probability of these algorithms with the typical commutation relationship for symmetry in quantum mechanics. I show that one of the algorithms can efficiently compute the normalized commutator of the group representation and Hamiltonian. Joint work with Mark M. Wilde and available as arXiv:2203.10017
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9407622814178467, "perplexity": 712.6973486595216}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943589.10/warc/CC-MAIN-20230321002050-20230321032050-00346.warc.gz"}
https://www.physicsforums.com/threads/differentiability-with-sinx-x.277472/
# Homework Help: Differentiability with sinx/x 1. Dec 6, 2008 ### tomboi03 Let f(x)= sinx/x if x $$\neq$$ 0 and f(0)=1 Find a polynomial pN of degree N so that |f(x)-pN(x)| $$\leq$$ |x|^(N+1) for all x. Argue that f is differentiable, f' is differentiable, f" is differentiale .. (all derivatives exist at all points). Thank You 2. Dec 6, 2008 ### lurflurf $$f(x)=\int_0^1 \cos(x t) dt$$ so approximate cos first and the integral for f with cos approximated will approximate f. The derivatives clearly exist and |(D^n)f|<1/n+1 3. Dec 10, 2008 ### tomboi03 i still don't understand this, can you elaborate? Thank You 4. Dec 12, 2008 ### lurflurf Since cos(x t) is smooth the integral will be as well. Since Cos(x)~1-x^2/2+x^4/24-x^6/720+... is a family of approximations of cosine (each member being a sum the first n=1,2,3,... terms) we may repace cosine by an approximation in the integral representation of f to see that f~1-x^2/6+x^4/120-x^6/5040+... are approximations of f. You function f at zero has what is called a removable singularity, a ficticious singularity that is caused by the representation, not by actual properties of the function. By representing the function differently (such as using the integral representation I gave) the singularity and any problems it may cause vanish. 5. Dec 12, 2008 ### HallsofIvy Did you consider taking the Taylor's series for sin x, around x= 0, and dividing each term by x? That seems to me to be far simpler than using the integral form.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9938803911209106, "perplexity": 2976.0969381588216}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863489.85/warc/CC-MAIN-20180620065936-20180620085936-00339.warc.gz"}
https://www.lmfdb.org/L/2/4368/13.12
## Results (1-50 of 252 matches) Next Label $\alpha$ $A$ $d$ $N$ $\chi$ $\mu$ $\nu$ $w$ prim arith $\mathbb{Q}$ self-dual $\operatorname{Arg}(\epsilon)$ $r$ First zero Origin 2-4368-1.1-c1-0-0 $5.90$ $34.8$ $2$ $2^{4} \cdot 3 \cdot 7 \cdot 13$ 1.1 $$1.0 1 0 0 0.364095 Modular form 4368.2.a.bc.1.1 2-4368-1.1-c1-0-1 5.90 34.8 2 2^{4} \cdot 3 \cdot 7 \cdot 13 1.1$$ $1.0$ $1$ $0$ $0$ $0.429076$ Elliptic curve 4368.e Modular form 4368.2.a.e Modular form 4368.2.a.e.1.1 2-4368-1.1-c1-0-10 $5.90$ $34.8$ $2$ $2^{4} \cdot 3 \cdot 7 \cdot 13$ 1.1 $$1.0 1 0 0 0.659973 Elliptic curve 4368.b Modular form 4368.2.a.b Modular form 4368.2.a.b.1.1 2-4368-1.1-c1-0-11 5.90 34.8 2 2^{4} \cdot 3 \cdot 7 \cdot 13 1.1$$ $1.0$ $1$ $0$ $0$ $0.701373$ Modular form 4368.2.a.bd.1.1 2-4368-1.1-c1-0-12 $5.90$ $34.8$ $2$ $2^{4} \cdot 3 \cdot 7 \cdot 13$ 1.1 $$1.0 1 0 0 0.713681 Modular form 4368.2.a.bl.1.1 2-4368-1.1-c1-0-13 5.90 34.8 2 2^{4} \cdot 3 \cdot 7 \cdot 13 1.1$$ $1.0$ $1$ $0$ $0$ $0.728036$ Elliptic curve 4368.d Modular form 4368.2.a.d Modular form 4368.2.a.d.1.1 2-4368-1.1-c1-0-14 $5.90$ $34.8$ $2$ $2^{4} \cdot 3 \cdot 7 \cdot 13$ 1.1 $$1.0 1 0 0 0.743523 Elliptic curve 4368.n Modular form 4368.2.a.n Modular form 4368.2.a.n.1.1 2-4368-1.1-c1-0-15 5.90 34.8 2 2^{4} \cdot 3 \cdot 7 \cdot 13 1.1$$ $1.0$ $1$ $0$ $0$ $0.762088$ Elliptic curve 4368.g Modular form 4368.2.a.g Modular form 4368.2.a.g.1.1 2-4368-1.1-c1-0-16 $5.90$ $34.8$ $2$ $2^{4} \cdot 3 \cdot 7 \cdot 13$ 1.1 $$1.0 1 0 0 0.767089 Modular form 4368.2.a.bn.1.2 2-4368-1.1-c1-0-17 5.90 34.8 2 2^{4} \cdot 3 \cdot 7 \cdot 13 1.1$$ $1.0$ $1$ $0$ $0$ $0.776970$ Modular form 4368.2.a.bs.1.1 2-4368-1.1-c1-0-18 $5.90$ $34.8$ $2$ $2^{4} \cdot 3 \cdot 7 \cdot 13$ 1.1 $$1.0 1 0 0 0.779771 Modular form 4368.2.a.bk.1.1 2-4368-1.1-c1-0-19 5.90 34.8 2 2^{4} \cdot 3 \cdot 7 \cdot 13 1.1$$ $1.0$ $1$ $0$ $0$ $0.780648$ Modular form 4368.2.a.bq.1.2 2-4368-1.1-c1-0-2 $5.90$ $34.8$ $2$ $2^{4} \cdot 3 \cdot 7 \cdot 13$ 1.1 $$1.0 1 0 0 0.465639 Elliptic curve 4368.m Modular form 4368.2.a.m Modular form 4368.2.a.m.1.1 2-4368-1.1-c1-0-20 5.90 34.8 2 2^{4} \cdot 3 \cdot 7 \cdot 13 1.1$$ $1.0$ $1$ $0$ $0$ $0.791456$ Elliptic curve 4368.q Modular form 4368.2.a.q Modular form 4368.2.a.q.1.1 2-4368-1.1-c1-0-21 $5.90$ $34.8$ $2$ $2^{4} \cdot 3 \cdot 7 \cdot 13$ 1.1 $$1.0 1 0 0 0.796109 Modular form 4368.2.a.bc.1.2 2-4368-1.1-c1-0-22 5.90 34.8 2 2^{4} \cdot 3 \cdot 7 \cdot 13 1.1$$ $1.0$ $1$ $0$ $0$ $0.849878$ Modular form 4368.2.a.bd.1.2 2-4368-1.1-c1-0-23 $5.90$ $34.8$ $2$ $2^{4} \cdot 3 \cdot 7 \cdot 13$ 1.1 $$1.0 1 0 0 0.900936 Elliptic curve 4368.k Modular form 4368.2.a.k Modular form 4368.2.a.k.1.1 2-4368-1.1-c1-0-24 5.90 34.8 2 2^{4} \cdot 3 \cdot 7 \cdot 13 1.1$$ $1.0$ $1$ $0$ $0$ $0.911969$ Modular form 4368.2.a.bs.1.3 2-4368-1.1-c1-0-25 $5.90$ $34.8$ $2$ $2^{4} \cdot 3 \cdot 7 \cdot 13$ 1.1 $$1.0 1 0 0 0.929175 Elliptic curve 4368.u Modular form 4368.2.a.u Modular form 4368.2.a.u.1.1 2-4368-1.1-c1-0-26 5.90 34.8 2 2^{4} \cdot 3 \cdot 7 \cdot 13 1.1$$ $1.0$ $1$ $0$ $0$ $0.967781$ Modular form 4368.2.a.bs.1.2 2-4368-1.1-c1-0-27 $5.90$ $34.8$ $2$ $2^{4} \cdot 3 \cdot 7 \cdot 13$ 1.1 $$1.0 1 0 0 0.977761 Modular form 4368.2.a.be.1.2 2-4368-1.1-c1-0-28 5.90 34.8 2 2^{4} \cdot 3 \cdot 7 \cdot 13 1.1$$ $1.0$ $1$ $0.5$ $1$ $0.979978$ Modular form 4368.2.a.bm.1.1 2-4368-1.1-c1-0-29 $5.90$ $34.8$ $2$ $2^{4} \cdot 3 \cdot 7 \cdot 13$ 1.1 $$1.0 1 0 0 0.989497 Modular form 4368.2.a.bp.1.3 2-4368-1.1-c1-0-3 5.90 34.8 2 2^{4} \cdot 3 \cdot 7 \cdot 13 1.1$$ $1.0$ $1$ $0$ $0$ $0.504842$ Modular form 4368.2.a.bg.1.1 2-4368-1.1-c1-0-30 $5.90$ $34.8$ $2$ $2^{4} \cdot 3 \cdot 7 \cdot 13$ 1.1 $$1.0 1 0 0 0.990966 Modular form 4368.2.a.bg.1.2 2-4368-1.1-c1-0-31 5.90 34.8 2 2^{4} \cdot 3 \cdot 7 \cdot 13 1.1$$ $1.0$ $1$ $0$ $0$ $1.01235$ Elliptic curve 4368.x Modular form 4368.2.a.x Modular form 4368.2.a.x.1.1 2-4368-1.1-c1-0-32 $5.90$ $34.8$ $2$ $2^{4} \cdot 3 \cdot 7 \cdot 13$ 1.1 $$1.0 1 0 0 1.01563 Elliptic curve 4368.ba Modular form 4368.2.a.ba Modular form 4368.2.a.ba.1.1 2-4368-1.1-c1-0-33 5.90 34.8 2 2^{4} \cdot 3 \cdot 7 \cdot 13 1.1$$ $1.0$ $1$ $0$ $0$ $1.01699$ Elliptic curve 4368.y Modular form 4368.2.a.y Modular form 4368.2.a.y.1.1 2-4368-1.1-c1-0-34 $5.90$ $34.8$ $2$ $2^{4} \cdot 3 \cdot 7 \cdot 13$ 1.1 $$1.0 1 0 0 1.03569 Modular form 4368.2.a.bk.1.2 2-4368-1.1-c1-0-35 5.90 34.8 2 2^{4} \cdot 3 \cdot 7 \cdot 13 1.1$$ $1.0$ $1$ $0$ $0$ $1.05833$ Modular form 4368.2.a.bn.1.3 2-4368-1.1-c1-0-36 $5.90$ $34.8$ $2$ $2^{4} \cdot 3 \cdot 7 \cdot 13$ 1.1 $$1.0 1 0.5 1 1.06388 Modular form 4368.2.a.br.1.1 2-4368-1.1-c1-0-37 5.90 34.8 2 2^{4} \cdot 3 \cdot 7 \cdot 13 1.1$$ $1.0$ $1$ $0.5$ $1$ $1.08276$ Elliptic curve 4368.a Modular form 4368.2.a.a Modular form 4368.2.a.a.1.1 2-4368-1.1-c1-0-38 $5.90$ $34.8$ $2$ $2^{4} \cdot 3 \cdot 7 \cdot 13$ 1.1 $$1.0 1 0.5 1 1.08444 Modular form 4368.2.a.bb.1.1 2-4368-1.1-c1-0-39 5.90 34.8 2 2^{4} \cdot 3 \cdot 7 \cdot 13 1.1$$ $1.0$ $1$ $0.5$ $1$ $1.09618$ Modular form 4368.2.a.br.1.2 2-4368-1.1-c1-0-4 $5.90$ $34.8$ $2$ $2^{4} \cdot 3 \cdot 7 \cdot 13$ 1.1 $$1.0 1 0 0 0.516266 Modular form 4368.2.a.bp.1.1 2-4368-1.1-c1-0-40 5.90 34.8 2 2^{4} \cdot 3 \cdot 7 \cdot 13 1.1$$ $1.0$ $1$ $0$ $0$ $1.10906$ Modular form 4368.2.a.bf.1.2 2-4368-1.1-c1-0-41 $5.90$ $34.8$ $2$ $2^{4} \cdot 3 \cdot 7 \cdot 13$ 1.1 $$1.0 1 0 0 1.11124 Modular form 4368.2.a.bl.1.2 2-4368-1.1-c1-0-42 5.90 34.8 2 2^{4} \cdot 3 \cdot 7 \cdot 13 1.1$$ $1.0$ $1$ $0$ $0$ $1.12900$ Elliptic curve 4368.z Modular form 4368.2.a.z Modular form 4368.2.a.z.1.1 2-4368-1.1-c1-0-43 $5.90$ $34.8$ $2$ $2^{4} \cdot 3 \cdot 7 \cdot 13$ 1.1 $$1.0 1 0 0 1.14921 Modular form 4368.2.a.bq.1.3 2-4368-1.1-c1-0-44 5.90 34.8 2 2^{4} \cdot 3 \cdot 7 \cdot 13 1.1$$ $1.0$ $1$ $0.5$ $1$ $1.16541$ Modular form 4368.2.a.bo.1.1 2-4368-1.1-c1-0-45 $5.90$ $34.8$ $2$ $2^{4} \cdot 3 \cdot 7 \cdot 13$ 1.1 $$1.0 1 0.5 1 1.20600 Elliptic curve 4368.c Modular form 4368.2.a.c Modular form 4368.2.a.c.1.1 2-4368-1.1-c1-0-46 5.90 34.8 2 2^{4} \cdot 3 \cdot 7 \cdot 13 1.1$$ $1.0$ $1$ $0.5$ $1$ $1.26332$ Modular form 4368.2.a.bh.1.1 2-4368-1.1-c1-0-47 $5.90$ $34.8$ $2$ $2^{4} \cdot 3 \cdot 7 \cdot 13$ 1.1 $$1.0 1 0.5 1 1.29374 Modular form 4368.2.a.br.1.3 2-4368-1.1-c1-0-48 5.90 34.8 2 2^{4} \cdot 3 \cdot 7 \cdot 13 1.1$$ $1.0$ $1$ $0.5$ $1$ $1.29735$ Elliptic curve 4368.f Modular form 4368.2.a.f Modular form 4368.2.a.f.1.1 2-4368-1.1-c1-0-49 $5.90$ $34.8$ $2$ $2^{4} \cdot 3 \cdot 7 \cdot 13$ 1.1 $$1.0 1 0 0 1.31628 Modular form 4368.2.a.bs.1.4 2-4368-1.1-c1-0-5 5.90 34.8 2 2^{4} \cdot 3 \cdot 7 \cdot 13 1.1$$ $1.0$ $1$ $0$ $0$ $0.548388$ Modular form 4368.2.a.bn.1.1 2-4368-1.1-c1-0-50 $5.90$ $34.8$ $2$ $2^{4} \cdot 3 \cdot 7 \cdot 13$ 1.1 $$1.0 1 0.5 1 1.35093 Modular form 4368.2.a.bm.1.2 2-4368-1.1-c1-0-51 5.90 34.8 2 2^{4} \cdot 3 \cdot 7 \cdot 13 1.1$$ $1.0$ $1$ $0.5$ $1$ $1.38065$ Elliptic curve 4368.j Modular form 4368.2.a.j Modular form 4368.2.a.j.1.1 2-4368-1.1-c1-0-52 $5.90$ $34.8$ $2$ $2^{4} \cdot 3 \cdot 7 \cdot 13$ 1.1 $$1.0 1 0.5 1 1.40782 Elliptic curve 4368.h Modular form 4368.2.a.h Modular form 4368.2.a.h.1.1 2-4368-1.1-c1-0-53 5.90 34.8 2 2^{4} \cdot 3 \cdot 7 \cdot 13 1.1$$ $1.0$ $1$ $0.5$ $1$ $1.41306$ Elliptic curve 4368.o Modular form 4368.2.a.o Modular form 4368.2.a.o.1.1 Next
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9641886949539185, "perplexity": 640.3531699034497}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500641.25/warc/CC-MAIN-20230207201702-20230207231702-00860.warc.gz"}
http://extoxnet.orst.edu/faqs/dietcancer/web2/twohowto.html
Recommendation 2 "How to" HOW TO CALCULATE YOUR BODY MASS INDEX OR BMI BMI is your weight (in kilograms) over your height squared (in centimeters). Let’s calculate, however, using pounds and inches. For instance, the BMI of a person who is 5’3" and weighs 125 lbs is calculated as follows: Multiply the weight in pounds by 0.45 (the metric conversion factor) 125 X 0.45 = 56.25 kg 2. Multiply the height in inches by 0.025 (the metric conversion factor) 63 X 0.025 = 1.575 m 3. Square the answer from step 2 1.575 X 1.575 = 2.480625 4.Divide the answer from step 1 by the answer from step 3 56.25 : 2.480625 = 22.7 The BMI for a person who is 5’3" and weighs 125 lbs is 22.7 or practically, 23. A healthy BMI ranges between 19 and 25.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9762030243873596, "perplexity": 1585.4361079769883}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246639414.6/warc/CC-MAIN-20150417045719-00081-ip-10-235-10-82.ec2.internal.warc.gz"}
http://zbmath.org/?q=an:1130.11011
# zbMATH — the first resource for mathematics ##### Examples Geometry Search for the term Geometry in any field. Queries are case-independent. Funct* Wildcard queries are specified by * (e.g. functions, functorial, etc.). Otherwise the search is exact. "Topological group" Phrases (multi-words) should be set in "straight quotation marks". au: Bourbaki & ti: Algebra Search for author and title. The and-operator & is default and can be omitted. Chebyshev | Tschebyscheff The or-operator | allows to search for Chebyshev or Tschebyscheff. "Quasi* map*" py: 1989 The resulting documents have publication year 1989. so: Eur* J* Mat* Soc* cc: 14 Search for publications in a particular source with a Mathematics Subject Classification code (cc) in 14. "Partial diff* eq*" ! elliptic The not-operator ! eliminates all results containing the word elliptic. dt: b & au: Hilbert The document type is set to books; alternatively: j for journal articles, a for book articles. py: 2000-2015 cc: (94A | 11T) Number ranges are accepted. Terms can be grouped within (parentheses). la: chinese Find documents in a given language. ISO 639-1 language codes can also be used. ##### Operators a & b logic and a | b logic or !ab logic not abc* right wildcard "ab c" phrase (ab c) parentheses ##### Fields any anywhere an internal document identifier au author, editor ai internal author identifier ti title la language so source ab review, abstract py publication year rv reviewer cc MSC code ut uncontrolled term dt document type (j: journal article; b: book; a: book article) Several polynomials associated with the harmonic numbers. (English) Zbl 1130.11011 For nonnegative integers $r$ and $n$ let ${H}_{n}^{\left(r\right)}={\sum }_{1\le {n}_{0}+\cdots +{n}_{r}\le n}\frac{1}{{n}_{0}{n}_{1}\cdots {n}_{r}}$ be the $n$th generalized harmonic number of rank $r$. In this paper, the authors develop polynomials ${H}_{n}^{\left(r\right)}\left(z\right)$ of degree $n-r$ in the complex variable $z$ generalizing the above harmonic numbers. These polynomials are given by $\frac{{\left[-ln\left(1-t\right)\right]}^{1+r}}{t{\left(1-t\right)}^{1-z}}=\sum _{n=0}^{\infty }{H}_{n}^{\left(r\right)}\left(z\right){t}^{n}·$ The harmonic polynomials can be expressed in terms of the generalized harmonic numbers as ${H}_{n}^{\left(r\right)}\left(z\right)=\sum _{k=0}^{n-r}{\left(-1\right)}^{k}{H}_{n}^{\left(r+k\right)}\frac{{z}^{k}}{k!},$ which is analogous to the formula relating Bernoulli polynomials and Bernoulli numbers. In the paper, the authors prove various relations between the generalized harmonic polynomials and other interesting sequences of polynomials such as generalized Stirling polynomials, Bernoulli polynomials, multiple Gamma functions, Cauchy polynomials and Nörlund polynomials. For example, Theorem 5.1 shows that $\frac{{\left[x-z+1\right]}_{n}}{n!}=\sum _{k=0}^{n}\frac{1}{\left(k+1\right)!}{H}_{n}^{\left(k\right)}\left(z-x+1\right),$ where, as usual, ${\left[x\right]}_{n}=x\left(x+1\right)\cdots \left(x+n-1\right)$. The proofs make strong use of the summation property of Riordan arrays [see L. W. Shapiro, S. Getu, W.-J. Woan and L. C. Woodson, Discrete Appl. Math. 34, No. 1–3, 229–239 (1991; Zbl 0754.05010)]. ##### MSC: 11B68 Bernoulli and Euler numbers and polynomials 11B73 Bell and Stirling numbers 05A10 Combinatorial functions 05A15 Exact enumeration problems, generating functions
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 12, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8547024726867676, "perplexity": 4727.33665896988}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00233-ip-10-147-4-33.ec2.internal.warc.gz"}
https://stats.stackexchange.com/questions/372946/independence-of-xy-and-x-y
# Independence of $X+Y$ and $X-Y$ In a roll of die, if $$X$$ is the number on the first die and $$Y$$ is the number on second die, then determine whether the random variable $$X+Y$$ and $$X-Y$$ are independent. The covariance between the two turned out to be $$\mathrm{Var}(X) + \mathrm{Var} (Y)$$. So zero covariance would mean that $$\mathrm{Var}(X)$$ and $$\mathrm{Var}(Y)$$ is zero, that is, no spread. But we also know that zero covariance does not imply independence. I really cannot think of a way to prove independence between the two. • The contrivance is Var(X) - Var(Y). Also, do the dies have the same number of sides? – t.f Oct 21 '18 at 7:47 • Independence is P(XY)=P(X)P(Y). You can calculate all probabilities for 36 outcomes and show that the equation holds. – keiv.fly Oct 21 '18 at 7:59 • @keiv.fly Do we calculate X-Y and X+Y probabilities for different cases and then use P((X-Y)(X+Y))=P(X-Y)P(X+Y)? Isn't there a shorter, more formal method to do the same? – Shinjini Rana Oct 21 '18 at 8:12 • @t.f I'm not aware of the term 'contrivance' yet. Yes, they have same no of sides. – Shinjini Rana Oct 21 '18 at 8:13 • If your dice are known, e.g. with standard numbering from 1 to n, then (X+Y) and (X-Y) are not independent. A simple way of thinking about disproving independence is that you only need to show that there exists at least one outcome of X+Y such that X-Y is known with absolute certainty. if X+Y = 2, then (X-Y) is known and it has to be 0. – NofP Oct 21 '18 at 11:07 They're not: If $$X+Y=12$$ then both rolls were sixes, so $$X-Y=0$$. So you have: $$1 = \mathbb{P}(X-Y =0|X+Y=12) \neq \mathbb{P}(X-Y =0) = \frac{1}{6}.$$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 10, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8315690755844116, "perplexity": 367.7428742257908}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999003.64/warc/CC-MAIN-20190619163847-20190619185847-00087.warc.gz"}
http://www.nag.com/numeric/fl/nagdoc_fl24/html/F08/f08hpf.html
F08 Chapter Contents F08 Chapter Introduction NAG Library Manual # NAG Library Routine DocumentF08HPF (ZHBEVX) Note:  before using this routine, please read the Users' Note for your implementation to check the interpretation of bold italicised terms and other implementation-dependent details. ## 1  Purpose F08HPF (ZHBEVX) computes selected eigenvalues and, optionally, eigenvectors of a complex $n$ by $n$ Hermitian band matrix $A$ of bandwidth $\left(2{k}_{d}+1\right)$. Eigenvalues and eigenvectors can be selected by specifying either a range of values or a range of indices for the desired eigenvalues. ## 2  Specification SUBROUTINE F08HPF ( JOBZ, RANGE, UPLO, N, KD, AB, LDAB, Q, LDQ, VL, VU, IL, IU, ABSTOL, M, W, Z, LDZ, WORK, RWORK, IWORK, JFAIL, INFO) INTEGER N, KD, LDAB, LDQ, IL, IU, M, LDZ, IWORK(5*N), JFAIL(*), INFO REAL (KIND=nag_wp) VL, VU, ABSTOL, W(N), RWORK(7*N) COMPLEX (KIND=nag_wp) AB(LDAB,*), Q(LDQ,*), Z(LDZ,*), WORK(N) CHARACTER(1) JOBZ, RANGE, UPLO The routine may be called by its LAPACK name zhbevx. ## 3  Description The Hermitian band matrix $A$ is first reduced to real tridiagonal form, using unitary similarity transformations. The required eigenvalues and eigenvectors are then computed from the tridiagonal matrix; the method used depends upon whether all, or selected, eigenvalues and eigenvectors are required. ## 4  References Anderson E, Bai Z, Bischof C, Blackford S, Demmel J, Dongarra J J, Du Croz J J, Greenbaum A, Hammarling S, McKenney A and Sorensen D (1999) LAPACK Users' Guide (3rd Edition) SIAM, Philadelphia http://www.netlib.org/lapack/lug Demmel J W and Kahan W (1990) Accurate singular values of bidiagonal matrices SIAM J. Sci. Statist. Comput. 11 873–912 Golub G H and Van Loan C F (1996) Matrix Computations (3rd Edition) Johns Hopkins University Press, Baltimore ## 5  Parameters 1:     JOBZ – CHARACTER(1)Input On entry: indicates whether eigenvectors are computed. ${\mathbf{JOBZ}}=\text{'N'}$ Only eigenvalues are computed. ${\mathbf{JOBZ}}=\text{'V'}$ Eigenvalues and eigenvectors are computed. Constraint: ${\mathbf{JOBZ}}=\text{'N'}$ or $\text{'V'}$. 2:     RANGE – CHARACTER(1)Input On entry: if ${\mathbf{RANGE}}=\text{'A'}$, all eigenvalues will be found. If ${\mathbf{RANGE}}=\text{'V'}$, all eigenvalues in the half-open interval $\left({\mathbf{VL}},{\mathbf{VU}}\right]$ will be found. If ${\mathbf{RANGE}}=\text{'I'}$, the ILth to IUth eigenvalues will be found. Constraint: ${\mathbf{RANGE}}=\text{'A'}$, $\text{'V'}$ or $\text{'I'}$. 3:     UPLO – CHARACTER(1)Input On entry: if ${\mathbf{UPLO}}=\text{'U'}$, the upper triangular part of $A$ is stored. If ${\mathbf{UPLO}}=\text{'L'}$, the lower triangular part of $A$ is stored. Constraint: ${\mathbf{UPLO}}=\text{'U'}$ or $\text{'L'}$. 4:     N – INTEGERInput On entry: $n$, the order of the matrix $A$. Constraint: ${\mathbf{N}}\ge 0$. 5:     KD – INTEGERInput On entry: if ${\mathbf{UPLO}}=\text{'U'}$, the number of superdiagonals, ${k}_{d}$, of the matrix $A$. If ${\mathbf{UPLO}}=\text{'L'}$, the number of subdiagonals, ${k}_{d}$, of the matrix $A$. Constraint: ${\mathbf{KD}}\ge 0$. 6:     AB(LDAB,$*$) – COMPLEX (KIND=nag_wp) arrayInput/Output Note: the second dimension of the array AB must be at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{N}}\right)$. On entry: the upper or lower triangle of the $n$ by $n$ Hermitian band matrix $A$. The matrix is stored in rows $1$ to ${k}_{d}+1$, more precisely, • if ${\mathbf{UPLO}}=\text{'U'}$, the elements of the upper triangle of $A$ within the band must be stored with element ${A}_{ij}$ in ${\mathbf{AB}}\left({k}_{d}+1+i-j,j\right)\text{​ for ​}\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,j-{k}_{d}\right)\le i\le j$; • if ${\mathbf{UPLO}}=\text{'L'}$, the elements of the lower triangle of $A$ within the band must be stored with element ${A}_{ij}$ in ${\mathbf{AB}}\left(1+i-j,j\right)\text{​ for ​}j\le i\le \mathrm{min}\phantom{\rule{0.125em}{0ex}}\left(n,j+{k}_{d}\right)\text{.}$ On exit: AB is overwritten by values generated during the reduction to tridiagonal form. The first superdiagonal or subdiagonal and the diagonal of the tridiagonal matrix $T$ are returned in AB using the same storage format as described above. 7:     LDAB – INTEGERInput On entry: the first dimension of the array AB as declared in the (sub)program from which F08HPF (ZHBEVX) is called. Constraint: ${\mathbf{LDAB}}\ge {\mathbf{KD}}+1$. 8:     Q(LDQ,$*$) – COMPLEX (KIND=nag_wp) arrayOutput Note: the second dimension of the array Q must be at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{N}}\right)$ if ${\mathbf{JOBZ}}=\text{'V'}$, and at least $1$ otherwise. On exit: if ${\mathbf{JOBZ}}=\text{'V'}$, the $n$ by $n$ unitary matrix used in the reduction to tridiagonal form. If ${\mathbf{JOBZ}}=\text{'N'}$, Q is not referenced. 9:     LDQ – INTEGERInput On entry: the first dimension of the array Q as declared in the (sub)program from which F08HPF (ZHBEVX) is called. Constraints: • if ${\mathbf{JOBZ}}=\text{'V'}$, ${\mathbf{LDQ}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{N}}\right)$; • otherwise ${\mathbf{LDQ}}\ge 1$. 10:   VL – REAL (KIND=nag_wp)Input 11:   VU – REAL (KIND=nag_wp)Input On entry: if ${\mathbf{RANGE}}=\text{'V'}$, the lower and upper bounds of the interval to be searched for eigenvalues. If ${\mathbf{RANGE}}=\text{'A'}$ or $\text{'I'}$, VL and VU are not referenced. Constraint: if ${\mathbf{RANGE}}=\text{'V'}$, ${\mathbf{VL}}<{\mathbf{VU}}$. 12:   IL – INTEGERInput 13:   IU – INTEGERInput On entry: if ${\mathbf{RANGE}}=\text{'I'}$, the indices (in ascending order) of the smallest and largest eigenvalues to be returned. If ${\mathbf{RANGE}}=\text{'A'}$ or $\text{'V'}$, IL and IU are not referenced. Constraints: • if ${\mathbf{RANGE}}=\text{'I'}$ and ${\mathbf{N}}=0$, ${\mathbf{IL}}=1$ and ${\mathbf{IU}}=0$; • if ${\mathbf{RANGE}}=\text{'I'}$ and ${\mathbf{N}}>0$, $1\le {\mathbf{IL}}\le {\mathbf{IU}}\le {\mathbf{N}}$. 14:   ABSTOL – REAL (KIND=nag_wp)Input On entry: the absolute error tolerance for the eigenvalues. An approximate eigenvalue is accepted as converged when it is determined to lie in an interval $\left[a,b\right]$ of width less than or equal to $ABSTOL+ε maxa,b ,$ where $\epsilon$ is the machine precision. If ABSTOL is less than or equal to zero, then $\epsilon {‖T‖}_{1}$ will be used in its place, where $T$ is the tridiagonal matrix obtained by reducing $A$ to tridiagonal form. Eigenvalues will be computed most accurately when ABSTOL is set to twice the underflow threshold , not zero. If this routine returns with ${\mathbf{INFO}}>{\mathbf{0}}$, indicating that some eigenvectors did not converge, try setting ABSTOL to . See Demmel and Kahan (1990). 15:   M – INTEGEROutput On exit: the total number of eigenvalues found. $0\le {\mathbf{M}}\le {\mathbf{N}}$. If ${\mathbf{RANGE}}=\text{'A'}$, ${\mathbf{M}}={\mathbf{N}}$. If ${\mathbf{RANGE}}=\text{'I'}$, ${\mathbf{M}}={\mathbf{IU}}-{\mathbf{IL}}+1$. 16:   W(N) – REAL (KIND=nag_wp) arrayOutput On exit: the first M elements contain the selected eigenvalues in ascending order. 17:   Z(LDZ,$*$) – COMPLEX (KIND=nag_wp) arrayOutput Note: the second dimension of the array Z must be at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{M}}\right)$ if ${\mathbf{JOBZ}}=\text{'V'}$, and at least $1$ otherwise. On exit: if ${\mathbf{JOBZ}}=\text{'V'}$, then • if ${\mathbf{INFO}}={\mathbf{0}}$, the first M columns of $Z$ contain the orthonormal eigenvectors of the matrix $A$ corresponding to the selected eigenvalues, with the $i$th column of $Z$ holding the eigenvector associated with ${\mathbf{W}}\left(i\right)$; • if an eigenvector fails to converge (${\mathbf{INFO}}>{\mathbf{0}}$), then that column of $Z$ contains the latest approximation to the eigenvector, and the index of the eigenvector is returned in JFAIL. If ${\mathbf{JOBZ}}=\text{'N'}$, Z is not referenced. Note:  you must ensure that at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{M}}\right)$ columns are supplied in the array Z; if ${\mathbf{RANGE}}=\text{'V'}$, the exact value of M is not known in advance and an upper bound of at least N must be used. 18:   LDZ – INTEGERInput On entry: the first dimension of the array Z as declared in the (sub)program from which F08HPF (ZHBEVX) is called. Constraints: • if ${\mathbf{JOBZ}}=\text{'V'}$, ${\mathbf{LDZ}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{N}}\right)$; • otherwise ${\mathbf{LDZ}}\ge 1$. 19:   WORK(N) – COMPLEX (KIND=nag_wp) arrayWorkspace 20:   RWORK($7×{\mathbf{N}}$) – REAL (KIND=nag_wp) arrayWorkspace 21:   IWORK($5×{\mathbf{N}}$) – INTEGER arrayWorkspace 22:   JFAIL($*$) – INTEGER arrayOutput Note: the dimension of the array JFAIL must be at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{N}}\right)$. On exit: if ${\mathbf{JOBZ}}=\text{'V'}$, then • if ${\mathbf{INFO}}={\mathbf{0}}$, the first M elements of JFAIL are zero; • if ${\mathbf{INFO}}>{\mathbf{0}}$, JFAIL contains the indices of the eigenvectors that failed to converge. If ${\mathbf{JOBZ}}=\text{'N'}$, JFAIL is not referenced. 23:   INFO – INTEGEROutput On exit: ${\mathbf{INFO}}=0$ unless the routine detects an error (see Section 6). ## 6  Error Indicators and Warnings Errors or warnings detected by the routine: ${\mathbf{INFO}}<0$ If ${\mathbf{INFO}}=-i$, argument $i$ had an illegal value. An explanatory message is output, and execution of the program is terminated. ${\mathbf{INFO}}>0$ If ${\mathbf{INFO}}=i$, then $i$ eigenvectors failed to converge. Their indices are stored in array JFAIL. Please see ABSTOL. ## 7  Accuracy The computed eigenvalues and eigenvectors are exact for a nearby matrix $\left(A+E\right)$, where $E2 = Oε A2 ,$ and $\epsilon$ is the machine precision. See Section 4.7 of Anderson et al. (1999) for further details. The total number of floating point operations is proportional to ${k}_{d}{n}^{2}$ if ${\mathbf{JOBZ}}=\text{'N'}$, and is proportional to ${n}^{3}$ if ${\mathbf{JOBZ}}=\text{'V'}$ and ${\mathbf{RANGE}}=\text{'A'}$, otherwise the number of floating point operations will depend upon the number of computed eigenvectors. The real analogue of this routine is F08HBF (DSBEVX). ## 9  Example This example finds the eigenvalues in the half-open interval $\left(-2,2\right]$, and the corresponding eigenvectors, of the Hermitian band matrix $A = 1 2-i 3-i 0 0 2+i 2 3-2i 4-2i 0 3+i 3+2i 3 4-3i 5-3i 0 4+2i 4+3i 4 5-4i 0 0 5+3i 5+4i 5 .$ ### 9.1  Program Text Program Text (f08hpfe.f90) ### 9.2  Program Data Program Data (f08hpfe.d) ### 9.3  Program Results Program Results (f08hpfe.r)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 131, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.994276225566864, "perplexity": 3914.153860210738}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928864.73/warc/CC-MAIN-20150521113208-00309-ip-10-180-206-219.ec2.internal.warc.gz"}
http://support.sas.com/documentation/cdl/en/statug/68162/HTML/default/statug_irt_details01.htm
# The IRT Procedure ### Notation for the Item Response Theory Model This section introduces the mathematical notation that is used throughout the chapter to describe the item response theory (IRT) model. For a description of the fitting algorithms and the mathematical-statistical details, see the section Details: IRT Procedure. A d-dimensional graded response IRT model that has K ordinal responses can be expressed by the equations where is the observed ordinal response from subject i for item j, is a continuous latent response that underlies , is a vector of threshold parameters for item j, is a vector of slope (or discrimination) parameters for item j, is a vector of latent factors for subject i, , and is a vector of unique factors for subject i. All the unique factors in are independent from one another, suggesting that , are independent conditional on the latent factor . This is the so-called local independence assumption. Finally, and are also independent. Based on the preceding model specification, where p is determined by the link function. It is the density function of the standard normal distribution if the probit link is used, or the density function of the logistic distribution if the logistic link is used. Let denote the slope matrix. To identify the model in exploratory analysis, the upper triangular elements of are fixed as zero, the factor mean is fixed as a zero vector, and the factor variance covariance matrix is fixed as an identity matrix. For confirmatory analysis, it is assumed that the identification problem is solved by user-specified constraints. The model that is specified in the preceding equation uses the latent response formulation. PROC IRT uses this parameterization for computational convenience. When there is only one latent factor, a mathematically equivalent parameterization for the model is where is called the slope (discrimination) parameter and are called the threshold parameters. The threshold parameters under these two parameterizations can be translated as , where and is often called the intercept parameter. The preceding model is called a graded-response model. When the responses are binary, this model reduces to the two-parameter model, which can be expressed as where is often called the item difficulty parameter. The two-parameter model reduces to a one-parameter model when slope parameters for all the items are constrained to be equal. In the case where the logistic link is used, the one- and two-parameter models are often abbreviated as 1PL and 2PL. When all the slope parameters are set to 1 and the factor variance is set to a free parameter, the Rasch model is obtained. You can obtain three- and four-parameter models by introducing the guessing and ceiling parameters. Let and denote the item-specific guessing and ceiling parameters, respectively. Then the four-parameter model can be expressed as This model reduces to the three-parameter model when . The generalized partial credit (GPC) model is another popular IRT model for ordinal items besides the graded response model. Introduced by Muraki (1992), it is an extension of the partial credit (PC) model proposed by Masters (1982). In the PC model, the slope (or discrimination) parameter is fixed as 1 for all the items. The GPC model releases this assumption by introducing the slope parameter for each item. The GPC model can be formulated as In this formulation, is called the slope (discrimination) parameter and is called the step parameter.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9844988584518433, "perplexity": 803.5175206869699}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948592846.98/warc/CC-MAIN-20171217015850-20171217041850-00276.warc.gz"}
https://www.thejournal.club/c/paper/50625/
#### Some remarks on spatial uniformity of solutions of reaction-diffusion PDE's and a related synchronization problem for ODE's ##### Zahra Aminzare, Eduardo D. Sontag In this note, we present a condition which guarantees spatial uniformity for the asymptotic behavior of the solutions of a reaction-diffusion PDE with Neumann boundary conditions in one dimension, using the Jacobian matrix of the reaction term and the first Dirichlet eigenvalue of the Laplacian operator on the given spatial domain. We also derive an analog of this PDE result for the synchronization of a network of identical ODE models coupled by diffusion terms. arrow_drop_up
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.928757905960083, "perplexity": 395.6517456308103}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487621519.32/warc/CC-MAIN-20210615180356-20210615210356-00440.warc.gz"}
http://math.stackexchange.com/questions/288853/continuity-of-the-real-gamma-function
# Continuity of the (real) $\Gamma$ function. Consider the real valued function $$\Gamma(x)=\int_0^{\infty}t^{x-1}e^{-t}dt$$ where the above integral means the Lebesgue integral with the Lebesgue measure in $\mathbb R$. The domain of the function is $\{x\in\mathbb R\,:\, x>0\}$, and now I'm trying to study the continuity. The function $$t^{x-1}e^{-t}$$ is positive and bounded if $x\in[a,b]$, for $0<a<b$, so using the dominated convergence theorem in $[a,b]$, I have: $$\lim_{x\to x_0}\Gamma(x)=\lim_{x\to x_0}\int_0^{\infty}t^{x-1}e^{-t}dt=\int_0^{\infty}\lim_{x\to x_0}t^{x-1}e^{-t}dt=\Gamma(x_0)$$ Reassuming $\Gamma$ is continuous in every interval $[a,b]$; so can I conclude that $\Gamma$ is continuous on all its domain? - The Gamma function of x is not continuous (but is defined) for negative x –  Elements in Space Jan 28 '13 at 11:23 The domain of $\Gamma$ (as written above) is $\mathbb R_+$. I'm asking if it is continuous in its domain. –  fair-coin tossing Jan 28 '13 at 11:26 Yes. I think so. –  Bombyx mori Jan 28 '13 at 11:27 If there exists a function $f$ such that $\Gamma(x)\leq h$ forall $x>0$ such that $\int_\Omega h \; d\mu < \infty$ then it is. –  UnadulteratedImagination Jan 28 '13 at 11:34 Can you find the measurable $g$ function that $t^{x-1}e^{-t}\leq g$? –  Felipe Feb 7 at 10:42 For any $\,b>0\,\,\,,\,\,\epsilon>0\,$ choose $\,\delta>0\,$ so that $\,|x-x_0|<\delta\Longrightarrow \left|t^{x-1}-t^{x_0-1}\right|<\epsilon\,$ in $\,[0,b]\,$ : $$\left|\Gamma(x)-\Gamma(x_0)\right|=\left|\lim_{b\to\infty}\int\limits_0^b \left(t^{x-1}-t^{x_0-1}\right)e^{-t}\,dt\right|\leq$$ $$\leq\lim_{b\to\infty}\int\limits_0^b\left|t^{x-1}-t^{x_0-1}\right|e^{-t}\,dt<\epsilon\lim_{b\to\infty}\int\limits_0^b e^{-t}\,dt=\epsilon$$ @Galoisfan, yes it is...but if you're using the DCT then I think it'd be better if you specifically show the integrable function $\,g(x)\,$ s.t. $\,\left|t^{x_0-1}e^{-t}\right|\leq |g(x)|\,$ . Not that this is hard to do in this case. –  DonAntonio Jan 28 '13 at 12:31 I have some problems to show the integrable function $g$ –  fair-coin tossing Jan 28 '13 at 12:35 @DonAntonio can you give the function $g$? –  UnadulteratedImagination Jan 28 '13 at 13:31 I see it as follows: (1) For $\,t>1\,$: $$\exists M\in\Bbb N\,\,\;\text{s.t.}\;\;\,\,t>M\Longrightarrow e^{-t/2}t^{x-1}\leq 1\,\,,\,\,\text{since} e^{-t/2}t^{x-1}\xrightarrow[t\to\infty]{}0$$ and from here $\,\int\limits_M^\infty e^{-t}t^{x-1}\,dt\leq\int\limits_M^\infty e^{-t/2}dt<\infty\,$ For $\,0<t\leq 1\,$ we get $\,e^{-t}t^{x-1}\leq t^{x-1}\,$ and $\,\int\limits_0^Mt^{x-1}dt\,$ converges for $\,x-1>-1\Longleftrightarrow x>0$ –  DonAntonio Jan 28 '13 at 14:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9863531589508057, "perplexity": 321.9342237168154}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510274967.3/warc/CC-MAIN-20140728011754-00299-ip-10-146-231-18.ec2.internal.warc.gz"}
https://mathoverflow.net/questions/323712/norm-closure-of-c-b1-mathbbr
# Norm closure of $C_b^1(\mathbb{R})$ I want to determine what the closure of $$C_b^1(\mathbb{R})$$, the space of continuous differentiable functions with bounded derivative, with respect to the supremums norm is. I think that $$\overline{C_b^1(\mathbb{R})}=BUC(\mathbb{R})$$, the space of bounded uniformly continuous functions. Can someone help me? Do one has to use the fundamental theorem of calculus. I think also uniform convergence plays a big role. • I guess that you know that $BUC(\mathbb R)$ is closed in the space of bounded continuous functions endowed with the supremum norm. It thus remains to approximate every $f\in BUC$ by elements of $C_b^1$. A standard method is mollification: Fix a positive smooth function $\phi$ with compact support and integral $1$, define $\phi_n(x)=n\phi(nx)$ and consider $f\ast \phi_n$. Using $(f\ast \phi_n)'=f\ast \phi_n'$ you see that these convolutions are in $C_b^1$, and using the uniform continuity of $f$ you will show $f\ast \phi_n \to f$ uniformly. – Jochen Wengenroth Feb 21 at 10:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.996984601020813, "perplexity": 75.25252407283726}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912201329.40/warc/CC-MAIN-20190318132220-20190318154220-00460.warc.gz"}
https://labs.tib.eu/arxiv/?category=nlin.SI
• A polynomial formula for the solution of 3D reflection equation(1411.7763) Feb. 26, 2019 math-ph, math.MP, math.QA, nlin.SI We introduce a family of polynomials in $q^2$ and four variables associated with the quantized algebra of functions $A_q(C_2)$. A new formula is presented for the recent solution of the 3D reflection equation in terms of these polynomials specialized to the eigenvalues of the $q$-oscillator operators. • Tetrahedron and 3D reflection equations from quantized algebra of functions(1208.1586) Feb. 26, 2019 math-ph, math.MP, math.QA, nlin.SI Soibelman's theory of quantized function algebra A_q(SL_n) provides a representation theoretical scheme to construct a solution of the Zamolodchikov tetrahedron equation. We extend this idea originally due to Kapranov and Voevodsky to A_q(Sp_{2n}) and obtain the intertwiner K corresponding to the quartic Coxeter relation. Together with the previously known 3-dimensional (3D) R matrix, the K yields the first ever solution to the 3D analogue of the reflection equation proposed by Isaev and Kulish. It is shown that matrix elements of R and K are polynomials in q and that there are combinatorial and birational counterparts for R and K. The combinatorial ones arise either at q=0 or by tropicalization of the birational ones. A conjectural description for the type B and F_4 cases is also given. • Continuum limits of pluri-Lagrangian systems(1706.06830) Feb. 21, 2019 math-ph, math.MP, nlin.SI A pluri-Lagrangian (or Lagrangian multiform) structure is an attribute of integrability that has mainly been studied in the context of multidimensionally consistent lattice equations. It unifies multidimensional consistency with the variational character of the equations. An analogous continuous structure exists for integrable hierarchies of differential equations. We present a continuum limit procedure for pluri-Lagrangian systems. In this procedure the lattice parameters are interpreted as Miwa variables, describing a particular embedding in continuous multi-time of the mesh on which the discrete system lives. Then we seek differential equations whose solutions interpolate the embedded discrete solutions. The continuous systems found this way are hierarchies of differential equations. We show that this continuum limit can also be applied to the corresponding pluri-Lagrangian structures. We apply our method to the discrete Toda lattice and to equations H1 and Q1$_{\delta = 0}$ from the ABS list. • Instantons in $\sigma$ model and tau functions(1611.02248) Feb. 20, 2019 math-ph, math.MP, nlin.SI We show that a number of multiple integrals may viewed as tau functions of various integrable hierarchies. The instanton contributions in the two-dimensional O(3)$\ \sigma$ model is an example of such an approach. • Macdonald-Koornwinder moments and the two-species exclusion process(1505.00843) Introduced in the late 1960's, the asymmetric exclusion process (ASEP) is an important model from statistical mechanics which describes a system of interacting particles hopping left and right on a one-dimensional lattice with open boundaries. It has been known for awhile that there is a tight connection between the partition function of the ASEP and moments of Askey-Wilson polynomials, a family of orthogonal polynomials which are at the top of the hierarchy of classical orthogonal polynomials in one variable. On the other hand, Askey-Wilson polynomials can be viewed as a specialization of the multivariate Macdonald-Koornwinder polynomials (also known as Koornwinder polynomials), which in turn give rise to the Macdonald polynomials associated to any classical root system via a limit or specialization. In light of the fact that Koornwinder polynomials generalize the Askey-Wilson polynomials, it is natural to ask whether one can find a particle model whose partition function is related to Koornwinder polynomials. In this article we answer this question affirmatively, by showing that the "homogeneous" Koornwinder moments at q=t recover the partition function for the two-species exclusion process. We also provide a "hook length" formula for Koornwinder moments when q=t=1. • On the thermodynamic limit of form factor expansions of dynamical correlation functions in the massless regime of the XXZ spin $1/2$ chain(1706.09459) This work constructs a well-defined and operational form factor expansion in a model having a massless spectrum of excitations. More precisely, the dynamic two-point functions in the massless regime of the XXZ spin-1/2 chain are expressed in terms of properly regularised series of multiple integrals. These series are obtained by taking, in an appropriate way, the thermodynamic limit of the finite volume form factor expansions. The series are structured in way allowing one to identify directly the contributions to the correlator stemming from the conformal-type excitations on the Fermi surface and those issuing from the massive excitations (deep holes, particles and bound states). The obtained form factor series opens up the possibility of a systematic and exact study of asymptotic regimes of dynamical correlation functions in the massless regime of the XXZ spin $1/2$ chain. Furthermore, the assumptions on the microscopic structure of the model's Hilbert space that are necessary so as to write down the series appear to be compatible with any model -- not necessarily integrable -- belonging to the Luttinger liquid universality class. Thus, the present analysis provides also the phenomenological structure of form factor expansions in massless models belonging to this universality class. • On a Poisson-Lie deformation of the BC(n) Sutherland system(1508.04991) Feb. 13, 2019 hep-th, math-ph, math.MP, nlin.SI A deformation of the classical trigonometric BC(n) Sutherland system is derived via Hamiltonian reduction of the Heisenberg double of SU(2n). We apply a natural Poisson-Lie analogue of the Kazhdan-Kostant-Sternberg type reduction of the free particle on SU(2n) that leads to the BC(n) Sutherland system. We prove that this yields a Liouville integrable Hamiltonian system and construct a globally valid model of the smooth reduced phase space wherein the commuting flows are complete. We point out that the reduced system, which contains 3 independent coupling constants besides the deformation parameter, can be recovered (at least on a dense submanifold) as a singular limit of the standard 5-coupling deformation due to van Diejen. Our findings complement and further develop those obtained recently by Marshall on the hyperbolic case by reduction of the Heisenberg double of SU(n,n). • Resilience of constituent solitons in multisoliton scattering off barriers(1501.00075) We introduce superheated integrability,'' which produces characteristic staircase transmission plots for barrier collisions of breathers of the nonlinear Schr\"odinger equation. The effect makes tangible the inverse scattering transform, which treats the velocities and norms of the constituent solitons as the real and imaginary parts of the eigenvalues of the Lax operator. If all the norms are much greater than the velocities, an integrability-breaking potential may nonperturbatively change the velocities while having no measurable effect on the norms. This could be used to improve atomic interferometers. • Long-Time Asymptotics for the Toda Shock Problem: Non-Overlapping Spectra(1406.0720) Feb. 1, 2019 math-ph, math.MP, nlin.SI We derive the long-time asymptotics for the Toda shock problem using the nonlinear steepest descent analysis for oscillatory Riemann--Hilbert factorization problems. We show that the half plane of space/time variables splits into five main regions: The two regions far outside where the solution is close to free backgrounds. The middle region, where the solution can be asymptotically described by a two band solution, and two regions separating them, where the solution is asymptotically given by a slowly modulated two band solution. In particular, the form of this solution in the separating regions verifies a conjecture from Venakides, Deift, and Oba from 1991. • Analysis and comparative study of non-holonomic and quasi-integrable deformations of the Nonlinear Schr\"odinger Equation(1611.00961) Sept. 25, 2019 nlin.SI The non-holonomic deformation of the nonlinear Schr\"odinger equation, uniquely obtained from both the Lax pair and Kupershmidt's bi-Hamiltonian [Phys. Lett. A 372, 2634 (2008)] approaches, is compared with the quasi-integrable deformation of the same system [Ferreira et. al. JHEP 2012, 103 (2012)]. It is found that these two deformations can locally coincide only when the phase of the corresponding solution is discontinuous in space, following a definite phase-modulus coupling of the non-holonomic inhomogeneity function. These two deformations are further found to be not gauge-equivalent in general, following the Lax formalism of the nonlinear Schr\"odinger equation. However, asymptotically they converge for localized solutions as expected. Similar conditional correspondence of nonholonomic deformation with a non-integrable deformation, namely, due to local scaling of the amplitude of the nonlinear Schr\"odinger equation is further obtained. • Integrability via geometry: dispersionless differential equations in three and four dimensions(1612.02753) Jan. 4, 2019 math.DG, math.AP, nlin.SI We prove that the existence of a dispersionless Lax pair with spectral parameter for a nondegenerate hyperbolic second order partial differential equation (PDE) is equivalent to the canonical conformal structure defined by the symbol being Einstein-Weyl on any solution in 3D, and self-dual on any solution in 4D. The first main ingredient in the proof is a characteristic property for dispersionless Lax pairs. The second is the projective behaviour of the Lax pair with respect to the spectral parameter. Both are established for nondegenerate determined systems of PDEs of any order. Thus our main result applies more generally to any such PDE system whose characteristic variety is a quadric hypersurface. • Q-deformed Painleve tau function and q-deformed conformal blocks(1608.02566) Jan. 1, 2019 hep-th, math-ph, math.MP, math.QA, nlin.SI We propose $q$-deformation of the Gamayun-Iorgov-Lisovyy formula for Painlev\'e $\tau$ function. Namely we propose formula for $\tau$ function for $q$-difference Painlev\'e equation corresponding to $A_7^{(1)}{}'$ surface (and $A_1^{(1)}$ symmetry) in Sakai's classification. In this formula $\tau$ function equals the series of $q$-Virasoro Whittaker conformal blocks (equivalently Nekrasov partition functions for pure $SU(2)$ 5d theory). • Cut-and-join description of generalized Brezin-Gross-Witten model(1608.01627) Dec. 29, 2018 hep-th, math-ph, math.MP, math.CO, nlin.SI We investigate the Brezin-Gross-Witten model, a tau-function of the KdV hierarchy, and its natural one-parameter deformation, the generalized Brezin-Gross-Witten tau-function. In particular, we derive the Virasoro constraints, which completely specify the partition function. We solve them in terms of the cut-and-join operator. The Virasoro constraints lead to the loop equations, which we solve in terms of the correlation functions. Explicit expressions for the coefficients of the tau-function and the free energy are derived, and a compact formula for the genus zero contribution is conjectured. A family of polynomial solutions of the KdV hierarchy, given by the Schur functions, is obtained for the half-integer values of the parameter. The quantum spectral curve and its classical limit are discussed. • On the elliptic $\mathfrak{gl}_2$ solid-on-solid model: functional relations and determinants(1606.06144) In this work we study an elliptic solid-on-solid model with domain-wall boundaries having the elliptic quantum group $\mathcal{E}_{p, \gamma}[\widehat{\mathfrak{gl}_2}]$ as its underlying symmetry algebra. We elaborate on results previously presented by the author and extend our analysis to include continuous families of single determinantal representations for the model's partition function. Interestingly, our families of representations are parameterized by two continuous complex variables which can be arbitrarily chosen without affecting the partition function. • 4D limit of melting crystal model and its integrable structure(1704.02750) Dec. 22, 2018 hep-th, math-ph, math.MP, math.QA, nlin.SI This paper addresses the problems of quantum spectral curves and 4D limit for the melting crystal model of 5D SUSY $U(1)$ Yang-Mills theory on $\mathbb{R}^4\times S^1$. The partition function $Z(\mathbf{t})$ deformed by an infinite number of external potentials is a tau function of the KP hierarchy with respect to the coupling constants $\mathbf{t} = (t_1,t_2,\ldots)$. A single-variate specialization $Z(x)$ of $Z(\mathbf{t})$ satisfies a $q$-difference equation representing the quantum spectral curve of the melting crystal model. In the limit as the radius $R$ of $S^1$ in $\mathbb{R}^4\times S^1$ tends to $0$, it turns into a difference equation for a 4D counterpart $Z_{\mathrm{4D}}(X)$ of $Z(x)$. This difference equation reproduces the quantum spectral curve of Gromov-Witten theory of $\mathbb{CP}^1$. $Z_{\mathrm{4D}}(X)$ is obtained from $Z(x)$ by letting $R \to 0$ under an $R$-dependent transformation $x = x(X,R)$ of $x$ to $X$. A similar prescription of 4D limit can be formulated for $Z(\mathbf{t})$ with an $R$-dependent transformation $\mathbf{t} = \mathbf{t}(\mathbf{T},R)$ of $\mathbf{t}$ to $\mathbf{T} = (T_1,T_2,\ldots)$. This yields a 4D counterpart $Z_{\mathrm{4D}}(\mathbf{T})$ of $Z(\mathbf{t})$. $Z_{\mathrm{4D}}(\mathbf{T})$ agrees with a generating function of all-genus Gromov-Witten invariants of $\mathbb{CP}^1$. Fay-type bilinear equations for $Z_{\mathrm{4D}}(\mathbf{T})$ can be derived from similar equations satisfied by $Z(\mathbf{t})$. The bilinear equations imply that $Z_{\mathrm{4D}}(\mathbf{T})$, too, is a tau function of the KP hierarchy. These results are further extended to deformations $Z(\mathbf{t},s)$ and $Z_{\mathrm{4D}}(\mathbf{T},s)$ by a discrete variable $s \in \mathbb{Z}$, which are shown to be tau functions of the 1D Toda hierarchy. • Splitting of surface defect partition functions and integrable systems(1709.04926) Dec. 14, 2018 hep-th, math-ph, math.MP, nlin.SI We study Bethe/gauge correspondence at the special locus of Coulomb moduli where the integrable system exhibits the splitting of degenerate levels. For this investigation, we consider the four-dimensional pure $\mathcal{N}=2$ supersymmetric $U(N)$ gauge theory, with a half-BPS surface defect constructed with the help of an orbifold or a degenerate gauge vertex. We show that the non-perturbative Dyson-Schwinger equations imply the Schr\"odinger-type and the Baxter-type differential equations satisfied by the respective surface defect partition functions. At the special locus of Coulomb moduli the surface defect partition function splits into parts. We recover the Bethe/gauge dictionary for each summand. • Noncommutative Painlev\'e equations and systems of Calogero type(1710.00736) Dec. 12, 2018 math-ph, math.MP, nlin.SI All Painlev\'e equations can be written as a time-dependent Hamiltonian system, and as such they admit a natural generalization to the case of several particles with an interaction of Calogero type (rational, trigonometric or elliptic). Recently, these systems of interacting particles have been proved to be relevant in the study of $\beta$-models. An almost two decade old open question by Takasaki asks whether these multi-particle systems can be understood as isomonodromic equations, thus extending the Painlev\'e correspondence. In this paper we answer in the affirmative by displaying explicitly suitable isomonodromic Lax pair formulations. As an application of the isomonodromic representation we provide a construction based on discrete Schlesinger transforms, to produce solutions for these systems for special values of the coupling constants, starting from uncoupled ones; the method is illustrated for the case of the second Painlev\'e equation. • Integrable deformations of the $G_{k_1} \times G_{k_2}/G_{k_1+k_2}$ coset CFTs(1710.02515) Nov. 27, 2018 hep-th, math-ph, math.MP, nlin.SI We study the effective action for the integrable $\lambda$-deformation of the $G_{k_1} \times G_{k_2}/G_{k_1+k_2}$ coset CFTs. For unequal levels theses models do not fall into the general discussion of $\lambda$-deformations of CFTs corresponding to symmetric spaces and have many attractive features. We show that the perturbation is driven by parafermion bilinears and we revisit the derivation of their algebra. We uncover a non-trivial symmetry of these models parametric space, which has not encountered before in the literature. Using field theoretical methods and the effective action we compute the exact in the deformation parameter $\beta$-function and explicitly demonstrate the existence of a fixed point in the IR corresponding to the $G_{k_1-k_2} \times G_{k_2}/G_{k_1}$ coset CFTs. The same result is verified using gravitational methods for $G=SU(2)$. We examine various limiting cases previously considered in the literature and found agreement. • Hypergeometric First Integrals of the Duffing and van der Pol Oscillators(1706.02506) Nov. 15, 2018 math-ph, math.MP, nlin.SI The autonomous Duffing oscillator, and its van der Pol modification, are known to admit time-dependent first integrals for specific values of parameters. This corresponds to the existence of Darboux polynomials, and in fact more can be shown: that there exist Liouvillian first integrals which do not depend on time. They can be expressed in terms of the Gauss and Kummer hypergeometric functions, and are neither analytic, algebraic nor meromorphic. A criterion for this to happen in a general dynamical system is formulated as well. • Dispersionless integrable hierarchies and GL(2,R) geometry(1607.01966) Nov. 14, 2018 nlin.SI Paraconformal or $GL(2)$ geometry on an $n$-dimensional manifold $M$ is defined by a field of rational normal curves of degree $n-1$ in the projectivised cotangent bundle $\mathbb{P} T^*M$. Such geometry is known to arise on solution spaces of ODEs with vanishing W\"unschmann (Doubrov-Wilczynski) invariants. In this paper we discuss yet another natural source of $GL(2)$ structures, namely dispersionless integrable hierarchies of PDEs (for instance the dKP hierarchy). In the latter context, $GL(2)$ structures coincide with the characteristic variety (principal symbol) of the hierarchy. Dispersionless hierarchies provide explicit examples of various particularly interesting classes of $GL(2)$ structures studied in the literature. Thus, we obtain torsion-free $GL(2)$ structures of Bryant that appeared in the context of exotic holonomy in dimension four, as well as totally geodesic $GL(2)$ structures of Krynski. The latter, also known as involutive $GL(2)$ structures, possess a compatible affine connection (with torsion) and a two-parameter family of totally geodesic $\alpha$-manifolds (coming from the dispersionless Lax equations), which makes them a natural generalisation of the Einstein-Weyl geometry. Our main result states that involutive $GL(2)$ structures are governed by a dispersionless integrable system. This establishes integrability of the system of W\"unschmann conditions. • On Darboux integrability of discrete 2D Toda lattices(1410.0319) Nov. 12, 2018 nlin.SI Darboux integrability of semidiscrete and discrete 2D Toda lattices corresponding to Lie algebras of A and C series is proved. • An constructive proof for the Umemura polynomials for the third Painlev\'e equation(1609.00495) Nov. 9, 2018 math.CA, nlin.SI We are concerned with the Umemura polynomials associated with the third Painlev\'e equation. We extend Taneda's method, which was developed for the Yablonskii--Vorob'ev polynomials associated with the second Painlev\'e equation, to give an algebraic proof that the rational functions generated by the nonlinear recurrence relation satisfied by Umemura polynomials are indeed polynomials. Our proof is constructive and gives information about the roots of the Umemura polynomials. • Currents in the dilute $O(n=1)$ model(1510.02721) In the framework of an inhomogeneous solvable lattice model, we derive exact expressions for a boundary-to-boundary current on a lattice of finite width. The model we use is the dilute $O(n=1)$ loop model, related to the Izergin-Korepin spin-1 chain and the critical site percolation on the triangular lattice. Our expressions are derived based on solutions of the $q$-Knizhnik-Zamolodchikov equations, and recursion relations. • Rational Maps with Invariant Surfaces(1706.00173) Oct. 31, 2018 nlin.SI We provide new examples of integrable rational maps in four dimensions with two rational invariants, which have unexpected geometric properties, as for example orbits confined to non algebraic varieties, and fall outside classes studied by earlier authors. We can reconstruct the map from both invariants. One of the invariants defines the map unambiguously, while the other invariant also defines a new map leading to non trivial fibrations of the space of initial conditions. • Fredholm determinant and Nekrasov sum representations of isomonodromic tau functions(1608.00958) Oct. 29, 2018 hep-th, math-ph, math.MP, nlin.SI We derive Fredholm determinant representation for isomonodromic tau functions of Fuchsian systems with $n$ regular singular points on the Riemann sphere and generic monodromy in $\mathrm{GL}(N,\mathbb C)$. The corresponding operator acts in the direct sum of $N(n-3)$ copies of $L^2(S^1)$. Its kernel has a block integrable form and is expressed in terms of fundamental solutions of $n-2$ elementary 3-point Fuchsian systems whose monodromy is determined by monodromy of the relevant $n$-point system via a decomposition of the punctured sphere into pairs of pants. For $N=2$ these building blocks have hypergeometric representations, the kernel becomes completely explicit and has Cauchy type. In this case Fredholm determinant expansion yields multivariate series representation for the tau function of the Garnier system, obtained earlier via its identification with Fourier transform of Liouville conformal block (or a dual Nekrasov-Okounkov partition function). Further specialization to $n=4$ gives a series representation of the general solution to Painlev\'e VI equation.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9105527400970459, "perplexity": 604.0415251945999}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496664439.7/warc/CC-MAIN-20191111214811-20191112002811-00352.warc.gz"}
https://math.stackexchange.com/questions/1266587/example-of-uncomputable-but-definable-number/1266607
# Example of uncomputable but definable number Every computable number is definable. However, the converse is not true. What is an example of a real number that is definable but that is NOT computable? I guess if it is there, we can "define" (describe) it, can't we? • Define in what language? – Asaf Karagila May 4 '15 at 15:47 • As in the intro of this article. en.wikipedia.org/wiki/Definable_real_number – islamfaisal May 4 '15 at 15:53 • I don't actually believe, philosophically speaking, in non-computable reals. – Kyle Strand May 5 '15 at 5:03 • I don't know if integrals of some pathological functions can qualify as "non-computable " here. If yes, there are way too many examples. – Vim May 5 '15 at 7:52 • @Wrzlprmft: You said "the smallest positive real number". I don't agree that there is one, not to say a unique one! So you cannot use it in a definition. =) – user21820 May 5 '15 at 12:10 Here is a non-computable real number: $$\sum_{i=1}^\infty 2^{-\Sigma(i)}$$ where $\Sigma$ is any busy beaver function. • But is it well-defined? – cfh May 4 '15 at 15:59 • @cfh: Since the Busy Beaver function is a well-defined function, yes. How could it not be well-defined? – Asaf Karagila May 4 '15 at 16:04 • The sum converges since it is bounded above by $\sum_{i\ge 1} 2^{-i}$, which is geometric. – vadim123 May 4 '15 at 16:07 • @cfh: Busy Beaver grows faster than any computable function – user541686 May 4 '15 at 17:54 • How does our knowledge that this sum cannot be computed prove that the number itself is not (coincidentally, and necessarily unbeknownst to us) computable via some other algorithm? – Kyle Strand May 23 '15 at 23:17 The point here is that definable real numbers are definable using the entire strength of might of the set theoretic universe; whereas computable real numbers are only allowed to access the natural numbers and their very very very rudimentary properties (since computable functions are only $\Sigma_1$ definable functions over $\Bbb N$). Let $\varphi_n$ enumerate the sentences in the language of arithmetic. Now consider the real number whose $n$-th digit in the decimal expansion is $1$ if and only if $\Bbb N\models\varphi_n$, and $0$ otherwise. So it is a number in $[0,1]$. This number is of course definable in the language of set theory, since the set of true sentences in $\Bbb N$ is definable; but it is not a computable real number since there is no computable function telling us what is true in $\Bbb N$ and what isn't (not even arithmetical, to be more accurate). We can also take the following approach, as I suggest in the comments to the original question. Note that every computable real lies in $L$, by absoluteness arguments (every computable functions lies there), and in $L$ there is a definable well-ordering of the reals (even with a $\Delta^1_3$ definition!), so there is a least real in the canonical well-ordering which is not definable. Since the set "the real numbers which also lie in $L$" cannot change between models of $\sf ZF$ with the same ordinals, this set always has a canonical, definable well-ordering in any model of $\sf ZF$, and this indeed gives us a definition of a real number which is non-computable. You can also argue that various generic reals are non-computable but definable, if you're willing to go this far as to consider different set theoretic universes (or at least one which can be seen as a nontrivial generic extension of some inner model). For example Jensen reals are definable (they are the unique solution to a $\Pi^1_2$ predicate) but not computable. Similarly, you can consider the iterated forcing that at the $n$-th step does the lottery sum between forcing $2^{\aleph_n}=\aleph_{n+1}$ and forcing $2^{\aleph_n}=\aleph_{n+2}$, at the limit step take a finite support limit, and consider the real number whose $n$-th decimal digit $1$ if and only if $2^{\aleph_n}=\aleph_{n+1}$, and $0$ otherwise. This is a Cohen real which is definable, since it encodes the continuum below $\aleph_\omega$; but of course it is not computable by genericity arguments. Note that this gives a very peculiar example of a real number, the one encoding the continuum function below $\aleph_\omega$. It is always definable, but in different models of $\sf ZFC$ it wil have different values, sometimes they will be computable (e.g. if $\sf GCH$ holds) and sometimes they could be non-computable (as above). So this gives us a definition of a real number which is not provably computable and not provably uncomputable! • One step beyond! – rewritten May 4 '15 at 17:34 • Well, you know what they say... to infinity and beyond! :-) – Asaf Karagila May 4 '15 at 17:39 • I'd be happy to hear verbally about the mistakes in this answer! Silent downvotes might speak volumes, but it can often be in a foreign language, even more so when they are countered by many upvotes. – Asaf Karagila May 5 '15 at 10:00 • Hmm... second downvote. I'd really love to hear some actual criticism. Although I can't help but feel that maybe the criticism begins and ends with "It was written by Asaf Karagila", which admittedly is not that constructive. But if anyone has any idea what's wrong with this answer, I'd be interested to hear about that! – Asaf Karagila May 5 '15 at 15:45 • Could you elaborate on how you define a real number by doing those forcings? – Mario Carneiro May 6 '15 at 1:22 The probability that a random computer program will run forever is not computable. http://en.wikipedia.org/wiki/Chaitin%27s_constant That some aspects of our concepts in this area are problematic is illustrated by the following example, which I learned from Hartley Rogers' book on computability: let $$f(x) = \begin{cases} 1 & \text{if there is a sequence of }x\text{ consecutive 7s in the decimal expansion of }\pi, \\ 0 & \text{otherwise}. \end{cases}$$ This is computable! And there is an easy argument for its computability. And the algorithm for computing this function is really really simple. One can prove that easily, but no one knows, nor is it at all easy to know, which algorithm it is. • Could you elaborate what qualifies as a 'really really simple' for the algorithm of $f$? – orlp May 4 '15 at 18:03 • It is either identical to g (x) = 1 for all x, or it is identical to h (x) = 1 if x ≤ n, and h (x) = 0 otherwise, for some value of n. In any case, it is very easy to compute. The only problem is that we don't know which one it is and what the value of n would be. – gnasher729 May 4 '15 at 18:09 • @orlp : The algorithm is one of the following. If there is no longest sequence of consecutive $7$s, then always return $1$. If there is a longest such sequence, return $1$ if $x\le{}$the length of that longest sequence, and otherwise $0$. That gives you an infinite sequence of algorithms, each really simple. But WHICH one is the right one? No one knows. ${}\qquad{}$ – Michael Hardy May 4 '15 at 18:28 • @MichaelHardy It is computable, but not nessesarily terminating. Just generate a moving window of size $x$ of the digits of $\pi$ and check if all of them are $7$. – NightRa Aug 28 '15 at 10:11 • @NightRa : Computability as usually defined does not follow from your comment. ${}\qquad{}$ – Michael Hardy Aug 28 '15 at 14:45 The Chaitin's constant is a well defined number in computability theory, but it is not computable. But, about the concept of definable number see the answers to Definable real numbers. If we consider an enumeration of all possible pairs of turing machines and inputs, then we can let $S$ denote the set of those positive integers $n$ for which the $n$th pair halts. Now this number $x$ will be well-defined but uncomputable: $$x = \frac 1 3 + 4\sum_{n \in S} 10^{-n}$$ $x$ will consist of a sequence of decimals all of which are either 3 or 7. The $n$th decimal will be 7 if the $n$th pair of turing machine and input halts, and 3 otherwise. In other words computing a decimal of $x$ is equivalent to solving an instance of the halting problem. What is also interesting about $x$ is that there is a simple constructive algorithm to produce a sequence of rational numbers, that converge towards $x$. • Initialize $a := \frac 1 3$ • For $i \in \mathbb{N}$ do: • Simulate the first $i$ turing machines for the first $i$ steps. • For each turing machine $n$ which halted and did not halt for any lower $i$: • $a := a + 4 \cdot 10 ^{-n}$ • Output $a$ This shows that it possible for a computable sequence of rational numbers to converge on a non-computable number. This is a bit more than what you asked for, but to me this particular example gave me a better feeling for what the boundary of compatibility looks like. • How is this more than what was asked? The rational numbers are dense in $\Bbb R$. So if there is an uncomputable real, there is a sequence of rationals converging to it. – Asaf Karagila May 5 '15 at 4:14 • @AsafKaragila Because in this case the sequence is computable. Every number in $\mathbb{R}$ will be the limit of a sequence of rational numbers, but for most of them the sequence will not be computable. – kasperd May 5 '15 at 5:13 • @AsafKaragila: He means that there is a Turing machine that outputs a sequence of Turing machines that all halt and whose output converges to the uncomputable real number. There just isn't a Turing machine that outputs the sequence of digits of that real number. The crucial difference is that the sequence converges at an uncomputable rate but outputting the digits in order requires a linear convergence. kasperd, you might want to include such kinds of detail in your answer to make it more complete. =) – user21820 May 5 '15 at 12:17 The simplest is perhaps the uncomputable real number whose binary expansion is $$0.x_1x_2x_3...$$ where $$x_i = \begin{cases} 1 & \text{if }T_i\text{ eventually halts}\\ 0 & \text{otherwise} \end{cases}$$ and $T_i$ is the $i$th Turing machine (in some chosen ordering) with an initially blank tape.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.814731776714325, "perplexity": 364.9700425864923}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875147234.52/warc/CC-MAIN-20200228135132-20200228165132-00185.warc.gz"}
http://swmath.org/software/12897
# Algorithm 939 Algorithm 939: computation of the Marcum Q-function. Methods and an algorithm for computing the generalized Marcum Q-function (Q μ (x,y)) and the complementary function (P μ (x,y)) are described. These functions appear in problems of different technical and scientific areas such as, for example, radar detection and communications, statistics, and probability theory, where they are called the noncentral chi-square or the noncentral gamma cumulative distribution functions. The algorithm for computing the Marcum functions combines different methods of evaluation in different regions: series expansions, integral representations, asymptotic expansions, and use of three-term homogeneous recurrence relations. A relative accuracy close to 10 -12 can be obtained in the parameter region (x,y,μ)∈[0,A]×[0,A]×[1,A],A=200, while for larger parameters the accuracy decreases (close to 10 -11 for A=1000 and close to 5×10 -11 for A=10000). This software is also peer reviewed by journal TOMS. ## Keywords for this software Anything in here will be replaced on browsers that support the canvas element ## References in zbMATH (referenced in 4 articles , 1 standard article ) Showing results 1 to 4 of 4. Sorted by year (citations)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8765602707862854, "perplexity": 1989.9476424777038}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320226.61/warc/CC-MAIN-20170624050312-20170624070312-00537.warc.gz"}
https://www.physicsforums.com/threads/velocity-of-the-comet-increases-as-it-comes-near-the-planets.45449/
Velocity of the comet increases as it comes near the planets 1. Sep 30, 2004 lakshmi why does the velocity of the comet increases as it comes near the planets 2. Sep 30, 2004 Gonzolo It is accelerating due to the sun's gravity. 3. Sep 30, 2004 rodrigo.fig Hello, lakshmi! The question is that anything putted around the planet(that is, in region where the planet's field acts) suffers an interaction due to planet's field. This interaction is proportional to inverse of square of the distance between them. So, when this something is approaching to the planet, the interaction is increasing, consequently, its acceleration too, consequently, its velocity too. 4. Sep 30, 2004 pervect Staff Emeritus Because of the conservation of energy, any object must gain velocity as it "drops" down a gravity well. It's not that much different than the reason a ball gains velocity if you drop and/or throw it downwards. 5. Sep 30, 2004 aekanshchumber According to Keppler's law the area swept by any celestial body revolving around sun per unit time s constant. When a comet aprochs near to the sun its distance fromit decreases and to fulfill the condition its speed increases. Similar Discussions: Velocity of the comet increases as it comes near the planets
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.926061749458313, "perplexity": 1615.8196200201776}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463605188.47/warc/CC-MAIN-20170522151715-20170522171715-00514.warc.gz"}
https://hypercomputation.blogspot.com/2013/12/on-computing-spacetime.html
### On the Computing Spacetime Fotini Markopoulou has published a paper entitled The Computing Spacetime. The first sentence of this paper is: That the Universe can be thought of as a giant computation is a straightforward corollary of the existence of a universal Turing machine. This is a very bold statement, to say the least. We have absolutely no idea what are the (ultimate) laws of the universe and yet we can immediately prove that it is a gigantic computer! Now let's see the proof: The laws of physics allow for a machine, the universal Turing machine, such that its possible motions correspond to all possible motions of all possible physical objects. That is, a universal quantum computer can simulate every physical entity and its behavior. This means that physics, the study of all possible physical systems, is isomorphic to the study of all programs that could run on a universal quantum computer. We can think of our universe as software running on a universal computer. First let me remark that Markopoulou confuses universal Turing machines with universal quantum computers, which means, I suppose, that she assumes that the Church-Turing thesis is valid. Second, it is known that the Turing machine operates in a Newtonian universe and naturally does not take under consideration any quantum phenomena. Third, to say that physics is essentially equivalent to the study of quantum programming is a fallacy because we have no idea what are the ultimate laws of universe. Unless, of course, we assume that fairytale physics is real physics. In this case, it is crystal clear that we are talking nonsense. As with fairytale physics, the problem with the the-Universe-is-a-computer paradigm is that there is no experimental evidence that actually the universe computes something. In fact, I am pretty sure that this is something like an Illuminati conspiracy theory rather than a real scientific theory. Naturally, please bear on mind that the problem of quantifying gravity has not been solved yet. PS Today (20/02/2014) I discovered a preprint entitled  The Universe is not a Computer. In this paper the author puts forth an interesting argument against the validity of the idea that the universe is a computer.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9777517914772034, "perplexity": 273.5060925281966}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570651.49/warc/CC-MAIN-20220807150925-20220807180925-00281.warc.gz"}
https://www.arxiv-vanity.com/papers/astro-ph/0703489/
# Near-infrared observations of the Fornax dwarf galaxy. I. The red giant branch ††thanks: Based on data collected at the European Southern Observatory, La Silla, Chile, Proposals No. 65.N-0167, 66.B-0247. M. Gullieuszik 1 Osservatorio Astronomico di Padova, INAF, vicolo dell’Osservatorio 5, I-35122 Padova, Italy 12Dipartimento di Astronomia, Università di Padova, vicolo dell’Osservatorio 2, I-35122 Padova, Italy 2    E. V. Held 1 Osservatorio Astronomico di Padova, INAF, vicolo dell’Osservatorio 5, I-35122 Padova, Italy 1    L. Rizzi 3Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI 96822, USA 3    I. Saviane 4European Southern Observatory, Casilla 19001, Santiago 19, Chile 4    Y. Momany 1 Osservatorio Astronomico di Padova, INAF, vicolo dell’Osservatorio 5, I-35122 Padova, Italy 1    S. Ortolani 2Dipartimento di Astronomia, Università di Padova, vicolo dell’Osservatorio 2, I-35122 Padova, Italy 2 ###### Key Words.: Galaxies: dwarf – Galaxies: individual: Fornax – Local Group – Galaxies: stellar content offprints: ###### Abstract Context: Aims:We present a study of the evolved stellar populations in the dwarf spheroidal galaxy Fornax based on wide-area near-infrared observations, aimed at obtaining new independent estimates of its distance and metallicity distribution. Assessing the reliability of near-infrared methods is most important in view of future space- and ground-based deep near-infrared imaging of resolved stellar systems. Methods:We have obtained imaging photometry of the stellar populations in Fornax. The observations cover an arcmin central area with a mosaic of SOFI images at the ESO NTT. Our data sample all the red giant branch (RGB) for the whole area. Deeeper observations reaching the red clump of helium-burning stars have also been obtained for a arcmin region. Results:Near-infrared photometry led to measurements of the distance to Fornax based on the -band location of the RGB tip and the red clump. Once corrected for the mean age of the stellar populations in the galaxy, the derived distance modulus is , corresponding to a distance of  Kpc, in good agreement with estimates from optical data. We have obtained a photometric estimate of the mean metallicity of red giant stars in Fornax from their and colors, using several methods. The effect of the age-metallicity degeneracy on the combined optical-infrared colors is shown to be less important than for optical or infrared colors alone. By taking age effects into account, we have derived a distribution function of global metallicity [M/H] from optical-infrared colors of individual stars. Our photometric Metallicity Distribution Function covers the range , with a main peak at and a long tail of metal-poor stars, and less metal-rich stars than derived by recent spectroscopy. If metallicities from Ca ii triplet lines are correct, this result confirms a scenario of enhanced metal enrichment in the last 1-4 Gyr. Conclusions: ## 1 Introduction Stellar populations in dwarf spheroidal galaxies (dSph) are important for our understanding of galaxy formation and evolution. Dwarf spheroidals in the Local Group can be studied in detail, giving strong constraints on the star formation history (SFH) and chemical evolution of these system. Galaxies of the dSph type all started forming stars at an old epoch ( Gyr), but in most cases this early stellar generation was later followed by major star-formation episodes giving rise to significant or even dominant intermediate age populations. While old and intermediate age stellar populations of dSph galaxies have been the subject of many studies in optical bands (see, e.g., Grebel 2005; Held 2005, and references therein), they have been little studied in the near-infrared (NIR). However, NIR bands have several advantages when studying evolved stars in these stellar systems. Infrared photometry of evolved low-mass and intermediate-mass stars on the red giant branch (RGB) and the helium-burning phases (e.g., red clump, RC) can be used to derive the basic properties of galaxies (distance, metallicity). Techniques to measure such properties from near-infrared observations are becoming increasingly important since the NIR wavelength domain will be central to future instrumentation (ELT adaptive optics, JWST, etc…). There are advantages in using near-infrared photometry for RGB stars, and even more so in combining optical and NIR data. As we will show in this paper, the age-metallicity degeneracy affecting the color of RGB stars (and therefore metallicity determinations) is much less severe using optical-infrared colors. In order to explore the information contained in the near-infrared spectral window, we have undertaken an imaging study of the evolved stellar populations in Local Group dwarf galaxies. The first galaxy we consider is Fornax, one of the most interesting cases to study, being one of the most massive and luminous dSph satellites of the Milky Way (Mateo 1998). This galaxy was one of the first to provide evidence of an intermediate age stellar population, probed by the presence of luminous carbon star on the asymptotic giant branch (AGB) (Aaronson & Mould 1980, 1985; Azzopardi et al. 1999). Another indicator of a conspicuous intermediate-age stellar population is the well populated red clump of helium-burning stars (Stetson et al. 1998; Saviane et al. 2000a). A wide plume of main-sequence turnoff stars indicates that the star formation history of Fornax has been continuous from galaxy’s formation up to recent times (Stetson et al. 1998; Buonanno et al. 1999; Saviane et al. 2000a; Pont et al. 2004). The luminosity of the brightest blue stars shows evidence that Fornax has been forming stars at least up to 200 Myr ago, while it does not contain any stars younger than 100 Myr (Stetson et al. 1998; Saviane et al. 2000a). An old population is also present, as shown by the presence of an horizontal branch (HB) and RR-Lyrae (Bersier & Wood 2002; Greco et al. 2006), although the blue part of the HB is poorly populated, so that the old and metal-poor population must be small (Stetson et al. 1998; Buonanno et al. 1999; Saviane et al. 2000a). The picture emerging from the above mentioned studies indicates that Fornax began forming stars at the epoch of formation of the Galactic globular clusters (GGCs) Gyr ago. The star formation rate was quite low in the first Gyr, and then increased rapidly; Fornax formed most of its stars 4-10 Gyrs ago. Given the presence of recent star formation, one would expect to find some gas associated with the galaxy. Young (1999) searched for neutral hydrogen out to the tidal radius, and found none. More recent observations by Bouchard et al. (2006) revealed an extended H i cloud in the direction of Fornax that may (or may not) be within the galaxy. Two studies of the chemical enrichment history of Fornax from the spectra of red giant stars, using the Ca ii infrared lines equivalent widths, have recently been presented (Pont et al. 2004; Battaglia et al. 2006). Pont et al. (2004) found a metallicity distribution of Fornax centered at (on the scale of Carretta & Gratton 1997, hereafter CG97), with a metal poor tail extending to and a metal-rich population reaching . The derived age-metallicity relation is well described by a chemical evolution model with a low effective yield, in which an initial rapid enrichment is followed by a period of slower enrichment, reaching about 3 Gyr ago. Then a high star formation rate produced an acceleration, increasing the to a recent value of dex. These results agree with those previously obtained for a few stars by (Tolstoy et al. 2003) using high resolution spectroscopy. In a large spectroscopic study of RGB stars in Fornax, Battaglia et al. (2006) noted the lack of stars more metal poor than [Fe/H] and confirmed the presence of a metal-rich tail up to . They also found that the metal-rich stellar populations are more centrally concentrated, having also a lower velocity dispersion than metal-poor stars. This paper presents the results of a near-infrared study of RGB and RC stars in Fornax, aimed at obtaining new independent estimates of its distance as well as information on its metallicity distribution function from a combination of optical and NIR photometry. This study may represent a local example of NIR studies of more distant resolved stellar systems with future ground-based and space instrumentation. Section 2 presents the observations and the reduction, with special care for the mosaicing techniques, and provides photometric catalogs. Color-magnitude diagrams are presented in Sect. 3. In Sect. 4, the luminosity functions of red giant stars and helium-burning red clump stars are obtained and used to provide new estimates of the distance to Fornax. The mean metallicity and metallicity distribution of red giant stars is derived in Sect. 5 from their color distribution and compared with recent spectroscopic work. Finally, our results are summarized and briefly discussed in Sect. 6. ## 2 Observations and data reduction ### 2.1 Observations Near-infrared observations of Fornax were carried out in November 10–11, 2000, using the SOFI camera at the ESO NTT telescope. The camera employed a 10241024 pixel Hawaii HgCdTe detector which was read in Double Correlated mode. We used SOFI in Large Field mode, yielding a pixel scale of 029 pixel and a total field-of-view of about . Two complementary sets of images were taken in the filters. The first was a wide-area series of 16 shallow contiguous fields, in a square array of covering about arcmin. Secondly, we obtained a dithered sequence of images of a central field of the galaxy providing deep photometry for a arcmin area. We used an 8-points dithering pattern with shifts of up to 10″ from the central position. The observing parameters are given in Table 1, which lists the number of images of each sequence along with the Detector Integration Time (DIT) and the number of co-added integrations per image (NDIT). The on-target exposure times are 960s in for the deep image, and 60s for the shallow mosaic. The observation strategy is further illustrated in Fig. 1. Given the non-negligible crowding of our Fornax field, we adopted an observing strategy based on offset sky images alternated to on-target images. For the deep image, the “chopping” time interval varied between 120s for and 60 s for the band, while it was 60 s for all bands for the shallow observations. The offset pattern was a dithered cross pattern for the deep image and a simple offset in declination for the shallow scans. Although expensive in terms of observing time, this strategy proved to give extremely good sky subtraction on scales comparable with the Point Spread Function (PSF) size, improving the quality of the photometry. Standard stars (including very red stars) from Persson et al. (1998) were observed on a regular basis during the nights at airmasses comparable with those of target objects, to provide photometric calibration. For each standard star, five images were taken, with the star located at the center of the detector and in the middle of the four quadrants of the frame. ### 2.2 Reduction and astrometry The reduction steps were implemented in IRAF111The Image Reduction and Analysis Facility (IRAF) software is provided by the National Optical Astronomy Observatories (NOAO), which is operated by the Association of Universities for Research in Astronomy (AURA), Inc., under contract to the National Science Foundation. as described in Momany et al. (2003). For the shallow imaging, the basic observation and reduction unit was one “strip” consisting of 4 science and 4 sky frames. The sky images were scaled to a common median, after rejecting the highest and lowest pixels, and averaged to produce a master sky frame to be subtracted from all science images of the strip. The reduction steps were similar for the deep imaging, using the four offset sky frames closest in time to each science image. The sky-subtracted images were then corrected for bad pixels and flat-fielded. Illumination correction frames (as well as bad pixels maps) available from the ESO Web pages were found suitable to this purpose. For the deep imaging, all individual images, after sky subtraction, correction of bad pixels, and flat-fielding, were registered and averaged using the task imcombine with rejection of the brightest and faintest pixel for cleaning of cosmic ray hits. For the shallow scans, mosaicing is a complex task because of the modest overlap of the individual images. We resorted to absolute astrometry to effectively and accurately register the scans, match the photometry catalogs, and produce a single mosaic image. To this purpose, we used the IRAF package mscred (Valdes 1998) along with the wfpred script package designed at the Padua Observatory by two of us (LR, EVH) for reduction of CCD mosaics. The first step in the mosaicing was the characterization of the astrometric properties of SOFI. This was done using a catalog of secondary astrometric standards in the central region of Fornax established from ESO-WFI observations (Rizzi et al. 2007). This secondary catalog, in turn tied to the USNO-A2.0 reference system (Monet et al. 1998), is required to provide a sufficiently high surface density of stars needed to map distortions in the small field of SOFI. The field distortion of SOFI was actually found to be negligible. Once a distortion map was constructed, all the individual exposures were registered and resampled to a distortion-corrected coordinate grid with a common reference World Coordinate System (WCS). As a result, a large-area mosaic image was reconstructed in each of the filters. As a check of our astrometric calibration, we measured the right ascension and declination of the secondary standard stars on the WCS-calibrated frames. Figure 2 shows the difference between stellar coordinates on the SOFI wide-field catalog, and the of the same stars in the reference optical catalog. This consistency test shows the internal precision of the astrometric calibration, which is the really important figure for image and catalog registration. The absolute (systematic) accuracy of the coordinates is that of the reference catalog, estimated to be of the order 0.2 arcsec. The standard deviation of the residuals is not larger than 012 on both coordinates. ### 2.3 Stellar photometry and calibration Stellar photometry was obtained for the deep and shallow images using daophot ii/allstar (Stetson 1987). For the deep photometry, the PSFs were generated from a list of isolated stars on the coadded images. A Penny function with a quadratic dependence on position in the frame was adopted, to account for the elongation of the stellar profiles especially affecting the left side of the chip (this was traced to a misalignment of the Large Field objective). The photometry was finally performed on the stacked images using allstar. For the shallow images, simple aperture photometry with an aperture with radius equal to the FWHM of the stellar profile was found to yield the best photometric precision. For both catalogs, the pixel coordinates provided by daophot ii were converted to the equatorial system using the calculated WCS astrometric calibration. The photometric calibration techniques are similar to those adopted by Momany et al. (2003) and will be only briefly outlined here. Photometry of the standard stars through increasing apertures was obtained with the IRAF apphot task. For the “total” magnitude, we adopted a reference aperture of 18 pixel radius, close to the 10″ aperture used by Persson et al. (1998). The 5 measurements of each standard star were averaged after checking their uniformity. The instrumental magnitudes were finally normalized to 1 s exposure time and zero airmass; for a magnitude , calculated for an observation with exposure time and airmass , we defined the normalized magnitude as m=map+2.5log(texp)−κλX (1) where are the mean atmospheric extinction coefficients: , , and were adopted from the ESO web page of SOFI. Given the small number of red stars observed each night, we used for calibration all data available for five observing nights (including data previously obtained in Feb. 2000 with the same SOFI setup). All the measurements were scaled to a common zero point and a first least square fit was done to compute the color terms of the calibration relations. Then, assuming fixed color terms, we measured the zero point variations through the run (in particular between the two nights of Nov. 10, 11) and found them to be very small, comparable with the measurements errors; a single zero point was therefore adopted for the run, with uncertainties 0.02 mag in , , and . The resulting calibration relations are: J−j  =−0.016(J−H)+23.118H−h =+0.001(J−H)+22.902H−h =−0.002(H−K)+22.902K−ks=+0.021(J−K)+22.337 (2) which are consistent with the calibration presented in the ESO web page. Note that our magnitudes on the Persson et al. (1998) system are expected to show virtually no offset relative to the 2MASS system (Carpenter 2001). The color terms measured with respect to the Persson et al. (1998) standard stars are negligible. The photometric catalogs of Fornax stars were calibrated using the Eqs. (2) after magnitude-scale and aperture correction. This was based on apphot large-aperture photometry and growth-curve analysis of a few relatively bright, isolated stars in the deep and shallow fields. The uncertainties on aperture correction are of the order 0.03 mag, yielding total calibration uncertainties 0.04 mag. In order to test the photometric calibration and rule out the presence of photometric bias in the shallow photometry, we compare the zero points of the deep and shallow photometry in Fig. 3. For the stars in common the photometry shows an excellent internal consistency down to K , with zero point differences of the order 0.01 mag. Figure 4 shows a comparison of our shallow catalog with 2MASS photometry (Cutri et al. 2003) on the same area. We have selected only 2MASS measurements with . The systematic differences are negligible considering the errors in the 2MASS catalog and the aperture correction uncertainties. The photometry lists obtained for the individual pointings were merged into a single shallow catalog. The stars were matched on the basis of right ascension and declination, using an error box of 15. Finally, the coordinates of stars in the two catalogs were matched to those of photometry (selecting objects with ) from ESO WFI observations (Rizzi et al. 2007), thus producing a final list with magnitudes. The shallow and deep catalogs of near-infrared photometry of Fornax stars are presented in Tables 2 and 3 (the entire catalogs are made available electronically at the CDS). ### 2.4 Artificial star experiments The completeness of our photometric catalogues was evaluated from artificial star experiments. We performed 20 test runs by adding 400 stars to the scientific frames in each run. Since the crowding is uniformly low over the 16 shallow pointings, with only about 1000 stars per pointing on average, the experiments were limited to one of the shallow pointings. The input magnitudes and colors of the artificial stars were chosen along a sequence corresponding to the Fornax RGB. The stars were placed on a grid of equilateral triangles with a side of 40 pixels (), much larger than the stellar PSF. In each experiment the grid was then randomly shifted in order to uniformly cover all the frame. The results from artificial star experiments on the shallow and deep photometry are shown in Fig. 5 and Fig. 6, respectively. We note that the deep data are comparatively rather noisy in the band with respect to the and K bands. Another well-known observational effect is the fact that the mean difference between input and output magnitudes is generally biased towards brighter output magnitudes (e.g., Gallart et al. 1996). In our experiments for the shallow data, the mean difference of input and output magnitudes resulted to be less than 0.05 mag for . This is also an upper limit for the color shift down to this limiting magnitude. However, the color bias is negligible ( mag) in the first 2 mag below the RGB tip used for the metallicity measurements. ## 3 Color-magnitude diagrams The near-infrared and optical-infrared color-magnitude diagrams (CMDs) of Fornax dSph in the shallow sample are presented in Fig. 7. Given the better image quality and spatial resolution of our optical photometry, we used the allstar shape parameter sharp for the measurements to remove bad and non-stellar objects from our photometric catalogues. Only objects with sharp were selected. The most noteworthy feature in the CMD in Fig. 7 is the well defined sequence of intermediate-age AGB stars. A fraction of AGB stars have red colors typical of carbon (C) stars, and a few stars show extremely red colors possibly indicating dust-enshrouded AGB stars. The properties of near-infrared selected AGB stars in Fornax will be discussed in more detail in a separate paper. We only remark here the dramatic change in their distribution when optical and near-infrared bands are used. Note, for example, that the redder AGB stars become progressively fainter in the bluer optical bands, so that even the brightest (in terms of bolometric luminosity) are missed in optical CMDs. For the reddest AGB stars the colors saturate. Thus, selection in the near-infrared appears to be extremely important to investigate the evolution of luminous AGB stars. Also, an advantage of optical–near-infrared colors is the improved discrimination against field contamination by foreground stars and background galaxies, as is evident, for example, when comparing the , and , diagrams in Fig. 7. The shallow, large-area catalog provides the statistics for studying the AGB stars, the RGB tip, and the stellar populations of Fornax. The fainter limiting magnitude and better precision of our deep photometry of a smaller central region allows us to characterize the RGB down to the so-called “AGB bump” and the red clump. Our deep color-magnitude diagram is presented in Fig. 8, together with the RGB fiducial lines of Milky Way globular clusters (Valenti et al. 2004a). This diagram is similar to that obtained by Pietrzyński et al. (2003) with ISAAC at the VLT. Note that the RGB of Fornax is relatively thin in this small central field. It lies between the fiducial RGB lines of M 4 ([Fe/H]  on the Carretta & Gratton 1997 abundance scale) and M 107 ([Fe/H] ). This suggests a metallicity [Fe/H]  on the same scale. ## 4 Luminosity function and distance Using the deep data shown in Fig. 8, we constructed a luminosity function (LF) extended down to the red clump of Fornax (Fig. 9). The LF was corrected by taking into account the incompleteness factor derived in Sect. 2.4. Our sample of RGB and He-burning stars was selected as shown in the inset. Using this LF, new estimates of the distance to Fornax have been obtained based on the mean -band luminosity of the red clump and the magnitude of the RGB tip. ### 4.1 The red clump Several authors have pointed out that distance measurements based on the magnitude of the RC are most reliable in the near-infrared, because of the smaller dependence of the RC mean luminosity on metallicity and age in the -band, along with the reduced reddening relative to the optical wavelengths (e.g., Alves 2000; Grocholski & Sarajedini 2002; Pietrzyński et al. 2003). Pietrzyński et al. (2003) employed VLT-ISAAC photometry of RC stars to measure the Fornax distance. An independent distance estimate based on our own measurement of is presented here, and the two determinations compared. In order to reduce the dependence from the bin choice, the LF was constructed as a multibin histogram by averaging 10 LFs with a fixed 0.1 mag bin and starting points shifted in steps of 0.01 mag. A mean level was measured following the standard procedure, i.e. by fitting the sum of a Gaussian and a polynomial to the magnitude distribution of stars in the color range (see, e.g., Pietrzyński et al. 2003). The mean value and its standard deviation are the results of 5000 experiments with bootstrap resampling of the luminosity function. Based on our artificial star experiments, any magnitude shift due to a photometric bias is negligible ( mag) for the deep data at the RC level. Other sources of error are considered in the following, in addition to the 0.02 mag fitting error, to evaluate the total uncertainty in the distance to Fornax dSph. In order to compute the distance, our magnitudes (tied to the LCO system of Persson et al. 1998) need to be transformed onto the photometric system used in the RC luminosity calibration of Alves (2000). Since our -band photometry agrees very well with the 2MASS system (Sect. 2.3; see also Carpenter 2001), we simply adopted the transformation from 2MASS to the Bessell & Brett (1988) system, . The latter is very close to the Koornneef (1983) system used by Alves (2000) (see Carpenter 2001; Grocholski & Sarajedini 2002). We therefore applied the relation (m−M)0=(KRC+0.044)−MK−AK+ΔMK (3) where the -band luminosity of the red clump in the solar neighborhood is according to the Alves (2000) calibration (a value confirmed by Grocholski & Sarajedini 2002 from a sample of 14 open clusters), and is a population correction term. This correction accounts for the different stellar content of Fornax and the local Galactic RC on which the Alves (2000) calibration is based. The population correction was calculated using the precepts of Salaris & Girardi (2002), the age-metallicity relation of Pont et al. (2004), and the star-formation history by Tolstoy et al. (2003), and found to be (the RC in Fornax being fainter). Adopting the extinction law of Rieke & Lebofsky (1985) and   , the resulting distance is  , where the uncertainty includes the statistical error on the RC location, the photometric zero-point error, and a photometric error of 0.1 mag at the level of the RC (see Fig. 6). Pietrzyński et al. (2003), considering the population correction negligible, found a distance modulus  , which differs from our value only for the correction term, the determination being in perfect agreement. ### 4.2 The AGB bump In the LF presented in Fig. 9 another bump is clearly seen. Its band magnitude was measured by fitting also in this case the sum of a polynomial and a Gaussian function and found to be with a bootstrap technique similar to that used for the red clump. This feature is identified with the AGB bump, which is the signature in the CMD of the beginning of the AGB phase. At the beginning of this evolutionary phase the increase of luminosity of a star is slower than in subsequent AGB phase, and thus a bump in the LF is produced (Castellani et al. 1991; Gallart 1998; Alves & Sarajedini 1999). We also explored the possible identification of this feature with the RGB bump. using the Valenti et al. (2004b) calibration of the RGB bump. Assuming a metallicity [M/H] for the old population of Fornax (Bersier & Wood 2002; Greco et al. 2006), we derived that the expected RGB bump magnitude is . As discussed by Saviane et al. (2000a), the RGB bump is fainter that the bump in Fig. 9 and not visible because it is too close to the overwhelming red clump. We note that the metallicity assumed here, [M/H], is appropriate for the old population of Fornax and a lower limit to the actual mean metallicity of intermediate age stellar populations (see following sections). The RGB bump magnitude derived by Valenti et al. (2004b) becomes fainter at increasing metallicity, and therefore our estimate of the RGB bump magnitude is to be considered a lower limit. We therefore identify the feature observed at with the AGB bump. ### 4.3 The RGB tip Figure 10 shows a close-up view of the brighter part of the RGB luminosity function in Fornax, obtained by selecting red giant stars from the wide field shallow catalog This catalog was chosen because of the better statistics, and, indeed, the RGB cutoff appears very well defined. We obtained an objective estimate of the magnitude of the RGB tip by fitting the LF with the convolution of a step function with a Gaussian kernel representative of the measurement errors, as in Momany et al. (2002). The function is composed of a constant value brighter than the RGB cutoff, and a power law (on a log scale) below the RGB cutoff. Since the tip is located at K  where our photometry is complete (see Fig. 5), no completeness correction was applied to the LF. As before, a multibin LF was used, obtained by averaging 10 LFs with a fixed 0.1 mag bin and intervals shifted by 0.01 mag. Using this procedure the RGB tip was detected at . The uncertainty includes the fitting error and the error associated with the binning of the LF. The internal measurement error at the level of the RGB tip gives a minor contribution ( mag). An independent measurement of the RGB tip was also obtained using the Maximum Likelihood Algorithm described by Makarov et al. (2006). The TRGB was found at . Since the two independent measurements agree within the errors, we adopt the mean value  (random)  (systematic) as our final measure of the tip, where the systematic uncertainty reflects the photometric zero point error (Sect. 2). The age and metallicity dependence of the RGB tip is larger in than in the -band, so that the application of the RGB tip method is more uncertain in this case (e.g., Salaris & Girardi 2005). Intermediate-age stars show a fainter RGB tip than old stars, with a difference that can be as large as mag in in the age interval 4–13 Gyr. On the other hand, if younger populations become more metal-rich as a result of galactic chemical evolution, their RGB tip becomes brighter, because the TRGB luminosity rises with increasing metallicity. Since the bulk of stellar populations in Fornax is of intermediate age, a population correction is required to compute the distance. To this aim, we modeled the combined effects of age and metallicity on the RGB tip by constructing a synthetic CMD containing 100 000 stars in the upper RGB. Our simulations are based on the ZVAR code (Bertelli et al. 1992), the Padova isochrones (Girardi et al. 2002), and adopt the chemical evolution history by Pont et al. (2004) and the SFH from Tolstoy et al. (2003). We obtained the population correction for the luminosity of the RGB tip by measuring the cutoff in the simulated CMD for (i) the full stellar population mix (representative of the Fornax RGB), and (ii) only stars older that 10 Gyr. As a consequence of the adopted metal enrichment history, the TRGB in the overall synthetic CMD turns out to be brighter than that for old stars alone. The difference in the TRGB magnitude is mag. By taking this population correction into account, the distance modulus of Fornax was calculated using Valenti et al. (2004b) empirical calibration of the as a function of metallicity, based on Galactic globular clusters: MTRGBK=−6.92−0.62 [M/H] (4) where [M/H] is an estimate of the average metal abundance. The r.m.s. uncertainty of this calibration is mag. We then apply Eq. 4 to the old stellar population in Fornax: (m−M)0=(KTRGB+ΔMK)−AK−MTRGBK (5) where is the measured TRGB level and is the extinction. Assuming a low metallicity [M/H] appropriate for the old population of Fornax (Bersier & Wood 2002; Greco et al. 2006), the corrected distance modulus is , where the error is dominated by the uncertainty on the Valenti et al. (2004b) calibration. It is interesting that, once the stellar content is properly taken into account, the -band RGB tip provides an estimate of the distance to Fornax in good agreement with our determination from the RC and other estimates in the literature. However, the uncertainty remains high, given the strong dependence of the TRGB on age and metallicity and the error on the calibration relation. In summary, the distance modulus derived here from near-infrared data and methods appears to confirm those measured by Saviane et al. (2000a) from the magnitude of the RGB tip,  , and from the mean magnitude of old HB stars,  . New distance measurements based on wide-field optical observations are presented by Rizzi et al. (2007), where the different results in the literature are compared and discussed in some detail. ## 5 Metallicity ### 5.1 Mean metallicity The mean metallicity of the stellar populations that make up the Fornax RGB were estimated by comparing the near-infrared and optical-infrared colors of red giants to the RGB fiducial lines of Galactic globular clusters of known metal abundance. Valenti et al. (2004a) calibrated the RGB colors at fixed -band luminosities in the near-infrared CMDs of Milky Way globular clusters as a function of metallicity. They give color-metallicity relations for and at against the [Fe/H] of GGCs on the scale of Carretta & Gratton (1997). They also provide calibrations against [M/H], a mean metallicity measuring the abundance of all heavy elements. This parameter is particularly important to estimate the metallicities of dwarf spheroidal galaxies by comparison with the photometric properties of Milky Way globular clusters. These objects are known to show non-solar abundance patterns, with an overabundance of -elements relative to iron that is a function of the cluster metallicity (Pritzl et al. 2005, and references therein). In contrast, dwarf spheroidal galaxies tend to have [/Fe] ratios closer to solar (e.g., Shetrone et al. 2003). Salaris et al. (1993) have shown that the color of red giant stars is driven by the overall metal abundance rather than the Fe abundance. Thus, the iron [Fe/H] scale of Galactic globular clusters is not immediately applicable to dwarf galaxies. Instead, the [M/H] provides a suitable parameter for comparing stellar systems with different abundance patterns and rank them againsts Milky Way globular clusters. For the sake of comparison with previous works, we provide here both the [Fe/H] and [M/H] rankings, but recommend the [M/H] as the most appropriate values. The mean [Fe/H] and values of Fornax RGB stars computed using the Valenti et al. (2004a) calibrations are presented in Table 4, together with errors calculated from color uncertainties by error propagation. The average values are and (on the scale of Carretta & Gratton 1997). Using the calibrations against [M/H], we obtain a mean metallicity and Alternatively, a robust metallicity indicator is represented by the RGB slope. The Kuchinski & Frogel (1995) calibration yields on the Zinn & West (1984, ZW84) scale, corresponding to on the CG97 scale (using the conversion of Carretta et al. 2001). This value is confirmed by the recent re-calibration of the slope by Valenti et al. (2004a), yielding or [M/H] , in good agreement with other methods. ### 5.2 Metallicity distribution The metallicity of RGB stars can also be estimated by interpolating the colors of individual stars across a grid of empirical RGB templates. In this way, we also obtain an observable related to the metallicity distribution function (MDF) of the red giant stars. The method used here (see Saviane et al. 2000b) consists in building up a family of hyperbolae that best fit the RGB fiducial lines of GGCs of known metallicity, and then use interpolation over that family of curves to derive the metallicity of RGB stars. A previous application of the method to the , plane can be found in Zoccali et al. (2003), where it is discussed in some detail for stars in the Galactic bulge. Table 5 lists the reference globular clusters with their adopted metallicities. In the present implementation of the method we have employed values from Valenti et al. (2004a), with the updated scale of Carretta et al. (2001). The resulting “photometric MDF” is shown in Fig. 11 along with an illustration of the interpolation method. We also estimated the statistical uncertainty of our photometric metallicity determinations using a Monte Carlo approach. A synthetic CMD was generated by randomly choosing 10 000 stars along the best fit line in Fig. 11. Errors were added to the synthetic magnitudes according to a Gaussian distribution with standard deviations 0.01 in and according to the results of artificial star experiments in (see Sect. 2.4). We then used the same algorithm applied to the Fornax CMD to retrieve individual metallicities. The recovered metallicities have a nearly Gaussian distribution with a dispersion dex. This scatter was taken as the statistical uncertainty associated to our individual metallicities. We note that the photometric completeness in the region of the CMD used to compute the MDF is 100%, so that no completeness correction is needed. The optical–near-IR data confirm the extended metallicity distribution of Fornax stars suggested by Saviane et al. (2000a). The distribution is formally modeled by the sum of two Gaussians, with a main peak at and a secondary peak at . The mode corresponds to about and on the CG97 and ZW84 scales, respectively. The standard deviation of the Gaussian corresponding to the main peak is dex, which is much larger than the scatter of dex due to photometric error. We remind that our “metallicity distribution” is representative of the MDF only for old stellar populations with age comparable to the old globular clusters in the Milky Way. This is certainly not true for Fornax stars, whose mean age is about 5-6 Gyr (Saviane et al. 2000a). In all cases, however, this distribution represents an important observable that models of galactic evolution should be able to reproduce. ### 5.3 Age correction Since the bulk of the stellar populations in Fornax is younger than stars in GGCs, the red giants stars in Fornax are on average slightly bluer than globular cluster stars of the same metallicity. As a consequence, the metallicity obtained from the mean RGB color is systematically underestimated. Using optical–near-infrared colors, however, this “age-metallicity degeneracy” is much reduced with respect to optical colors. This is illustrated in Fig. 12, where we show the effects of age and metallicity variations on RGB colors at using the Pietrinferni et al. (2004) isochrones. In this figure, contour lines of equal color are nearly vertical. This means that the effects of a change in age (or those of an age spread) on the color shift and the color dispersion are much smaller than those produced by metallicity variations. We have used this plot to estimate a mean correction to metallicity, by assuming that the age of GGCs and Fornax dSph are 12.5 and 7.5 Gyr, respectively. The correction is the difference in metallicity needed to keep the color constant while moving from 12.5 Gyr to 7.5 Gyrs. We found a correction , yielding for Fornax an age-corrected metallicity in terms of global metallicity (we consider the mode of the photometric MDF). This result is in good agreement with previous results in the literature. Saviane et al. (2000a) derived a mean age-corrected metallicity from the color of the RGB, on the ZW84 scale (corresponding to [Fe/H]). ### 5.4 Comparison with spectroscopy Our mean metallicity agrees well with the results of low-resolution spectroscopic analyses of RGB stars, yielding a mean metallicity and (Tolstoy et al. 2001; Pont et al. 2004) on the CG97 scale. This value has been confirmed by high-resolution spectroscopy of (two) stars by Tolstoy et al. (2003). Since the extended star formation of Fornax leads to an abundance ratio (Shetrone et al. 2003), we can assume that our [M/H] values are directly comparable with the spectroscopic measurements. The relatively large overlap between our optical–near-IR photometry and the spectroscopic samples of Fornax RGB stars with metallicities derived by Ca ii triplet spectroscopy (Pont et al. 2004; Battaglia et al. 2006) allows us a direct comparison of metal abundances on a star-by-star basis. This comparison is especially interesting to assess the reliability of photometric MDF determinations for all systems that are too distant for (even low-resolution) spectroscopy. Figure 13 plots the metallicities of individual stars derived from colors against those estimated from the equivalent widths of the Ca ii triplet lines by Pont et al. (2004) (lower panel) and Battaglia et al. (2006) (upper panel). For the comparison with Pont et al. (2004) results we used only spectra with noise below a given threshold (F. Pont, priv. comm.). The spectroscopic values for metal-rich stars are those corrected by the authors by comparison with high-metallicity stars in the LMC (see Pont et al. 2004, their section 3). While a few stars appear too red or too blue for their spectroscopic metallicity, the general trend is that of an overall correlation. A discrepancy is apparent at the metal-rich end, where the metallicity estimates from photometry appear to saturate. A similar trend, with a worse correlation, is noticed in the comparison with Battaglia et al. (2006) data, where there is a large excess of metal-rich stars with respect to the photometric estimates. The metallicity distribution of Fornax RGB stars is shown in Fig. 14, together with the MDFs obtained from Ca ii spectroscopy. In this case, the distribution of photometric [M/H] values in Fig. 14 was corrected by 0.15 dex toward higher metallicities to take into account the fact that intermediate age stars (the bulk of Fornax stars) are bluer than the GGC template stars of the same metallicity. Although this correction clearly represents a first order approximation, it is interesting to note the agreement in the mode of the metallicity distribution with the results of spectroscopy. Clearly, the first explanation for the discrepancy at the metal-rich end is the age-metallicity degeneracy in RGB star colors that even optical-infrared color indices cannot completely overcome. Assuming that the Ca ii triplet methods provides the correct metallicities, the behavior noticed in Fig. 13 and 14 appears to be consistent with a late metal enrichment scenario suggested by the cited spectroscopic studies. Indeed, the redder optical-infrared colors of a young population of metal-rich red giants are compensated for by the younger age. However, spectroscopic abundances from Ca ii triplet line strengths are also somewhat uncertain, especially in the high-metallicity regime (where an extrapolation may be needed) and when the Ca ii triplet calibration, which is based on globular cluster stars, is applied to the spectra of young (1-4 Gyr old) stars. Accurate abundance measurements from high-resolution spectroscopy may be useful to definitively clarify the issue. ## 6 Summary and conclusions We have presented a near-infrared photometry of the stars in the Fornax dwarf spheroidal galaxy. Our study provides color-magnitude diagrams and photometric catalogs of red giant and AGB stars in Fornax over a area, and deep photometry over a central field. The main results are the following: • From stars on the red giant branch and the RC, we have obtained independent estimates of the distance to Fornax based on mean magnitude of the red clump and the RGB tip, which take into account the mean age of the stellar populations in Fornax. The average value obtained from the two methods is , in excellent agreement with previous authors, and in particular with the results of Saviane et al. (2000a) from optical photometry. • The color distribution of RGB stars has been used to infer the mean metallicity and metallicity distribution of red giant stars in Fornax, taking advantage of the reduced dependence of colors of RGB stars from the age of the stellar population. The average metallicity was found to be . This compares well with the values recently obtained from spectroscopy by Tolstoy et al. (2001, 2003), Pont et al. (2004), and Battaglia et al. (2006). • The metallicity distribution is consistent with that obtained from spectroscopy up to the metallicity of 47 Tuc (). However, there is a clear discrepancy between the MDFs derived from near-infrared colors and spectroscopy near the metal-rich end, where Pont et al. (2004) found of a tail of stars with metallicity up to almost solar. This discrepancy could be caused by the effects of the age-metallicity degeneracy, which cannot be completely corrected even using near-IR photometry, but the extrapolation used by Pont et al. (2004) to derive their metallicities from the measurements of Ca ii lines, as noted by the authors could also have some effect. More observations are needed to solve this discrepancy and establish the upper end of metal enrichment in Fornax. ###### Acknowledgements. We are indebted with F. Pont, G. Battaglia and collaborators for kindly providing unpublished information about their spectroscopic results. We thank M. Salaris for helpful discussions of the properties red clump stars. We also thank an anonymous referee for comments and suggestions that improved the presentation of the paper. M.G. and E.V.H. acknowledge support by MIUR, under the scientific projects PRIN 2002028935 and PRIN 2003029437. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8884316682815552, "perplexity": 2300.5756788418685}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703519395.23/warc/CC-MAIN-20210119135001-20210119165001-00598.warc.gz"}
https://www.arxiv-vanity.com/papers/1009.5175/
arXiv Vanity renders academic papers from arXiv as responsive web pages so you don’t have to squint at a PDF. Read this paper on arXiv.org. # Delay time distribution of type Ia supernovae: theory vs. observation Nicki Mennekens    Dany Vanbeveren    Jean-Pierre De Greve    Erwin De Donder ###### Abstract Two formation scenarios are investigated for type Ia supernovae in elliptical galaxies: the single degenerate scenario (a white dwarf reaching the Chandrasekhar limit through accretion of matter transferred from its companion star in a binary) and the double degenerate scenario (the inspiraling and merging of two white dwarfs in a binary as a result of the emission of gravitational wave radiation). A population number synthesis code is used, which includes the latest physical results in binary evolution and allows to differentiate between certain physical scenarios (such as the description of common envelope evolution) and evolutionary parameters (such as the mass transfer efficiency during Roche lobe overflow). The thus obtained theoretical distributions of type Ia supernova delay times are compared to those that are observed, both in morphological shape and absolute number of events. The critical influence of certain parameters on these distributions is used to constrain their values. The single degenerate scenario alone is found to be unable in reproducing the morphological shape of the observational delay time distribution, while use of the double degenerate one (or a combination of both) does result in fair agreement. Most double degenerate type Ia supernovae are formed through a normal, quasi-conservative Roche lobe overflow followed by a common envelope phase, not through two successive common envelope phases as is often assumed. This may cast doubt on the determination of delay times by using analytical formalisms, as is sometimes done in other studies. The theoretical absolute number of events in old elliptical galaxies lies a factor of at least three below the rates that are observed. While this may simply be the result of observational uncertainties, a better treatment of the effects of rotation on stellar structure could mitigate the discrepancy. ###### Keywords: supernovae, close binaries, white dwarfs, elliptical galaxies ###### : 97.20.Rp, 97.60.Bw, 97.80.Fk, 98.52.Eh ## 1 Introduction Type Ia supernovae (SNe Ia), which are among the most powerful explosions observed in the universe, are events that can occur only in multiple star systems. They are not only critical to the chemical evolution of galaxies (without them, we would for example be unable to explain the amount of iron observed in the solar neighborhood), but are also increasingly being used as distance indicators, or standard candles, in cosmology. Despite this, their origin remains unknown. It is agreed upon that SNe Ia originate from the thermonuclear disruption of a white dwarf (WD) in a binary star, which attains a critical mass close to the Chandrasekhar limit of  M (see e.g. Livio 2001). However, the exact formation process, and even the type of systems in which such is possible, is a matter of debate. The two most popular formation channels are the single degenerate (a WD steadily accreting hydrogen-rich material from a late main sequence (MS) or red giant (RG) companion, see e.g. Nomoto 1982) and double degenerate (a super-Chandrasekhar merger of two WDs due to gravitational wave radiation (GWR) spiral-in, see e.g. Webbink 1984) scenario. To address the question of which of these scenarios is dominant in nature (or both), one can turn to the observational delay time distribution (DTD) of SNe Ia, which is the number of such events per unit time as a function of time elapsed since starburst. Totani et al. (2008) obtain a DTD by observing the SN Ia rate in elliptical galaxies which are passively evolving, and thus equivalent to starburst galaxies for this purpose, at similar (near solar) metallicity but different redshifts. The thus obtained distribution is extended with the SN Ia rate for local elliptical galaxies observed by Mannucci et al. (2005). The result is a DTD decreasing inversely proportional to time and in units (SNuK, SNe per K-band luminosity) which need to be converted (into SNuM, SNe per total initial galaxy mass) in order to be compared to any theoretical model. This conversion factor is obtained from spectral energy distribution templates, but may be subject to uncertainties. Using the observational DTD, it is possible to constrain theoretical models for SN Ia formation in starburst galaxies. For the reason just mentioned, comparisons between theoretical and observational DTDs will mainly focus on the shape of the distributions, and not so much on the absolute values. ## 2 Assumptions Previous studies have been done on this topic by other groups (see e.g. Ruiter et al. 2009; Hachisu et al. 2008; Han & Podsiadlowski 2004; Yungelson & Livio 2000), but the present one specifically focuses on the influence of mass transfer efficiency during Roche lobe overflow (RLOF) in close binaries. This is done with an updated version of the population number synthesis code by De Donder & Vanbeveren (2004), which computes detailed binary evolution models, without the use of analytical formalisms. Single degenerate (SD) progenitors are assumed to be as given by Hachisu et al. (2008), where regions in the companion mass–orbital period parameter space are denoted, one for the WD+MS channel and one for the WD+RG channel. Systems entering one of these regions will encounter a mass transfer phase towards the WD that is calm enough in order not to result in nova-like flashes on the surface of the WD that burn away any accreted hydrogen, but also sufficiently fast to let the WD reach the Chandrasekhar limit and result in a SN Ia before the companion ends its life. This scenario includes the mass stripping effect, which allows accretors to blow away some of the mass coming towards them, letting some systems which would otherwise have merged escape a common envelope (CE) phase and result in a SN Ia. For the double degenerate (DD) scenario, it is assumed that every WD merger exceeding  M will result in a SN Ia. Certain parameters need to be scrutinized, first and foremost the fraction of RLOF-material which is accepted by the accretor. If , mass will be lost from the system and angular momentum loss must be taken into account. This is done under the assumption that matter leaves the system with the specific angular momentum of the second Lagrangian point, since mass loss is considered to take place through a circumbinary disk. Other groups make different assumptions, which can have serious implications for the eventual evolution outcome. Finally, a formalism must be adopted for the treatment of energy conversion during CE phases. For the standard model, the -formalism by Webbink (1984) will be adopted, while another possibility will be considered later on. ## 3 Double degenerate evolution channels In our code, there are two evolution channels that can lead to a DD SN Ia, which are represented graphically by typical examples in Fig. 1. In the first channel, the explosion follows an evolution which entails one stable RLOF phase (which is assumed to be conservative: ), followed by a CE phase. The latter is due to the extreme mass ratio at the start of the second mass transfer phase, and the fact that the accreting object is a WD. In this channel, the resulting system is a double WD binary with a mass of the order of  M each, and with an orbital period of a few hours. Such a system then typically needs a GWR spiral-in lasting several Gyr, resulting in a SN Ia after such a long delay time. Importantly, if in this channel the RLOF phase is assumed to be totally non-conservative (), the system will merge already during that first mass transfer phase, and there will thus be no SN Ia. The second channel consists of an evolution made up of two successive CE phases. The nature of the first mass transfer phase is a result of the system having an initial orbital period typically two orders of magnitude larger than in the other channel. This makes that the donor’s outer layers are deeply convective by the time mass transfer starts, which causes this process to be dynamically unstable. Eventually, after the second CE phase, a double WD binary of about the same component masses as in the first channel is obtained, but with an orbital period of only a few hundred seconds. Such systems require GWR during only a few tens of thousands of years in order to merge, with the SN Ia thus having a total delay time of just a few hundred Myr. ## 4 Results and discussion The results for the DTD, obtained with the population synthesis code, are shown in Fig. 2. It is obvious that the SD DTD for conservative RLOF () is decreasing much too fast and too soon in order to keep matching the morphological shape of the observational data points after a few Gyr. The SD DTD for totally non-conservative RLOF () hardly deviates from the conservative one shown. The SD scenario by itself is thus incompatible with the observations. We also find most SD events to occur through the WD+MS channel, as opposed to the WD+RG channel. The DD DTD for conservative RLOF matches the observational points in shape, but results in a too low absolute number of events to match them. This may be partially caused by uncertainties in the conversion between SNuK and SNuM, but may also have a physical explanation which will be addressed below. Importantly, most DD SNe Ia are created through a quasi-conservative RLOF followed by a CE phase, not through two successive CE phases. This is also visible from Fig. 2, by comparing the DD DTDs for totally conservative and non-conservative RLOF. In the latter case, the DTD drops dramatically after a few hundred Myr, leaving no DD SNe Ia with a sizeable delay time. This means that the first peak in the DD DTD, present in both cases of , contains the events created through a double CE phase, and the second one (the absolute majority of events, but only present in the case of conservative RLOF) those created by a RLOF phase followed by a CE phase. Apart from confirming the aforementioned typical timescales for both channels, and the inability of the channel containing a RLOF phase to produce any SNe Ia if this phase is non-conservative, this also means that in reality a quasi-conservative RLOF (more specifically, ) is required to obtain a match in morphological shape between model and observation. This has negative implications for the use of analytical formalisms for the determination of delay times, since such studies typically assume that the lifetime of the secondary star is unaffected by the mass transfer process, which is obviously not true in the case of conservative RLOF. The next step is a study of the influence of the description of CE evolution. So far, the -formalism by Webbink (1984) was used, which is based on a balance of energy following from a conservation of angular momentum. An alternative is the -formalism by Nelemans & Tout (2005), which starts from a conservation of energy to arrive at a balance of angular momentum, and which is said to be better for the treatment of systems which will result in a WD binary. The result obtained with this formalism is shown for both the SD and DD scenario and with in Fig. 3. The SD DTD using the -formalism still deviates strongly from the observations, both in shape and number. While the shape of the DD DTD is in agreement with that of the observational data points, it has dropped in absolute number by another order of magnitude as compared to the -formalism. While it is thus not possible to reject the use of the -formalism based on a shape comparison, it seems unlikely that such a large SN Ia rate discrepancy can be explained. Finally, some considerations are made on the absolute number of SNe Ia. As mentioned before, all considered theoretical models underestimate the observed absolute rate by a factor of at least three at the 11 Gyr point. This might be partially due to the SNuK-SNuM conversion, but a more plausible solution is stellar rotation. If it is so that stars in binaries are typically born with a higher rotational velocity than single stars, for which there seem to be indications (see e.g. Habets & Zwaan 1989), then it seems likely that a lot of binary components will rotate faster than synchronously on the ZAMS. In that case, they will also have heavier MS convective cores than expected (see e.g. Decressin et al. 2009), which will eventually lead to heavier remnant masses. One will thus obtain heavier WDs, and hence more systems of merging WDs which attain the required  M for a DD SN Ia. Figure 4 shows a theoretical DTD obtained for with a 10% increase in MS convective core mass, and for the SD and DD scenario combined, since there is no reason why both scenarios could not be working together. This DTD agrees well, now both in morphological shape and in absolute number, with the observational DTD by Totani et al. (2008) and with the more recent one by Maoz et al. (2010). ## 5 Conclusions We find (see also Mennekens et al. 2010) that the single degenerate scenario by itself is incompatible with the morphological shape of the observed delay time distribution of type Ia supernovae. Most double degenerate events are created through a quasi-conservative Roche lobe overflow, followed by a common envelope phase. The resulting critical dependence of the delay time distribution on the mass transfer efficiency during Roche lobe overflow and on the physics of common envelope evolution might be a way to find out more about these processes when more detailed observations become available.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9037385582923889, "perplexity": 1127.1167930455315}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141708017.73/warc/CC-MAIN-20201202113815-20201202143815-00422.warc.gz"}
https://hal.archives-ouvertes.fr/hal-01111730
# Study of a low Mach model for two-phase flows with phase transition II: tabulated equation of state 6 ANGE - Numerical Analysis, Geophysics and Ecology LJLL - Laboratoire Jacques-Louis Lions, Inria Paris-Rocquencourt Abstract : In order to model the water flow in a nuclear reactor core, the authors carried out several studies coupling a low Mach model - named Low Mach Nuclear Core (LMNC) model - to the stiffened gas law for the equation of state. The LMNC model is derived from the compressible Navier-Stokes equations through an asymptotic expansion with respect to the Mach number commonly assumed to be small in this domain of application. This simplified system of equations provides qualitative results worth of interest under the stiffened gas hypothesis such as analytical solutions in dimension 1 and enables an easier numerical treatment in any dimension compared to the parent compressible model solved in the low Mach regime. Moreover, in the temperature and pressure regime of interest (namely high temperature and pressure situations), the stiffened gas law turns out to be inaccurate, which requires a new modelling of the equation of state. This is why this paper is devoted to the coupling of the LMNC model to an equation of state tuned by means of experimental values (NIST) for thermodynamic variables. The very point in this study consists in presenting an easy-to-implement procedure to fit tabulated values and derivatives satisfying positivity and monotonicity constraints for pure liquid and vapour phases. Modifications of previously published numerical schemes designed for a stiffened gas law are detailed in dimensions 1 and 2 to allow the use of a general equation of state. In the regime of interest and when the coolant is water, numerical results highlight the difference of tabulated equation of state with the stiffened gas law and also show that thermal conduction effects can be ignored. Keywords : Type de document : Pré-publication, Document de travail MAP5 2016-03. 2015 Domaine : https://hal.archives-ouvertes.fr/hal-01111730 Contributeur : Bérénice Grec <> Soumis le : jeudi 12 mai 2016 - 17:15:35 Dernière modification le : mardi 30 mai 2017 - 01:17:29 Document(s) archivé(s) le : mercredi 16 novembre 2016 - 02:46:51 ### Fichier DellacherieFaccanoniGrecPenel.... Fichiers produits par l'(les) auteur(s) ### Identifiants • HAL Id : hal-01111730, version 2 ### Citation Stéphane Dellacherie, Gloria Faccanoni, Bérénice Grec, Yohan Penel. Study of a low Mach model for two-phase flows with phase transition II: tabulated equation of state. MAP5 2016-03. 2015. <hal-01111730v2> Consultations de la notice ## 307 Téléchargements du document
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8744136691093445, "perplexity": 1826.3468296663414}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687820.59/warc/CC-MAIN-20170921134614-20170921154614-00089.warc.gz"}
http://mathoverflow.net/questions/14423/understanding-moment-maps-and-lie-brackets/14542
Understanding moment maps and lie brackets I'm trying to learn about moment maps in symplectic topology (suppose our Lie group is G with lie algebra g, acting on the symplectic manifold (M,w) by symplectomorphisms). I'm having a hard time, and I've realized this is because I don't have a good conceptual understanding of the lie bracket, either on the lie algebra g, or on the group of symplectomorphisms of (M,w), or on the space of functions C^infty(M,R). Therefore I can't "visualize" the Hamiltonian condition, which requires that the linear map g --> C^infty(M,R), which exists when the action by G is "exact," be a lie algebra homomorphism. Please tell me how you personally understand/intuit/conceptualize this situation, both the lie bracket stuff and moment maps more generally! Any help is greatly appreciated. EDIT: I didn't realize how non-standard some of this terminology is, so my question might be confusing. I call the action rho: G --> Symp(M,w) "exact" if the image of the induced map rho: Lie(G) ---> Lie(Symp(M,w)) is contained in the sub-lie-algebra of Hamiltonian vector fields. The condition that was confusing me, I now realize, is just a technical point: that we choose a set of representative Hamiltonian functions for the image rho(Lie(G)) which is a sub-Lie-algebra of C^inf(M) with its Poisson bracket. Thanks to all the helpful answers I think I understand this much better now. In particular, if we present Lie(G) (assumed finite dimensional, semi-simple, etc) by Lie algebra generators (with some relations), then we can probably just choose appropriate elements in C^inf(M) for these generators, and then the rest of the map from Lie(G) to C^inf(M) is just forced on us, and this gives a Hamiltonian action? Is that right? - I am choosing to accept Ben's answer because it was the one that made me realize that this Hamiltonian condition is just about futzing around with the constants, and in particular less deep (and less confusing) than I thought it was. But Ilya, your response was also very helpful in clarifying my fuzzy understanding. –  Sam Lewallen Feb 7 '10 at 0:15 I just found a pretty decent reference that discusses these exact questions. Perhaps it'll be useful to you. books.google.com/… –  Ilya Grigoriev Feb 8 '10 at 2:15 This question is (at least as I read it) about the Poisson bracket; the Poisson bracket is a Lie bracket structure on the functions on a symplectic manifold. So how should one think about Poisson bracket? Well, remember that for every function on a symplectic manifold, one has a Hamiltonian vector field $X_f$. One way to think about this is that if your symplectic manifold is the phase space of a physical system (the space of possible positions and momenta), and $f$ is the energy function, then the resulting vector field is the derivative of the time evolution of the system. The Poisson bracket $\{f,g\}$ is defined to be $X_f(g)$, the derivative of $g$ along the vector field $X_f$. That is the Poisson bracket of $f$ and $g$ is the time derivative of $g$ if you use $f$ as the energy function. Remarkably, this operation is anti-symmetric, and defines a Lie algebra structure. As you mentioned, a moment map is equivalent to a Lie algebra homomorphism from $\mathfrak{g}$ to the space of functions on your manifold. I'm not sure how you're exactly supposed to visualize that, but let me explain how I think about it. So, imagine you have your favorite G-action on a symplectic manifold, preserving the symplectic structure. Taking derivative, you get a map of Lie algebras from $\mathfrak{g}$ to vector fields on your manifold. It sounds from your question like how to think about such Lie algebra homomorphisms is actually what is confusing you. This just says that if you were to integrate the vector fields coming from $\mathfrak{g}$, you would get the group G (or maybe a finite cover). Now, each of these vector fields corresponds under the symplectic form to a 1-form. If your manifold has no $H^1$, then you can integrate these to functions, but of course, you can't do this uniquely; it's only unique up to a constant. So taking all of these lifts, you get a vector space of functions which is $\dim \mathfrak{g}+1$ dimensional (assuming $G$ acted faithfully). This is closed under Lie bracket, so it's a finite dimensional Lie algebra $\tilde {\mathfrak{g}}$ with a map $\tilde {\mathfrak{g}}\to {\mathfrak{g}}$. It might be that $\tilde {\mathfrak{g}}\cong {\mathfrak{g}}\times \mathbb{R}$ as a Lie algebra, in which case you can get a moment map by picking a splitting of the map above, or it might not, in which case you don't have a moment map. If $\mathfrak{g}$ is semi-simple, then the latter case is impossible, so you always have a moment map. - I believe the following way (Kostant's, 1970) to be the best way to think about the Hamiltonian condition. First, "why" is there a central extension $H^0(M; {\mathbb R}) \to C^\infty (M) \to symp(M)$ of Lie algebras? Of what is $C^\infty (M)$ supposed to be the Lie group? For $symp(M)$, the Lie algebra of vector fields annihilating the symplectic form $\omega$, it's clear it should the Lie algebra of the group $Symp(M)$ of symplectomorphisms. Assume now that $[\omega]$ is integral. Then it is $c_1$ of some "prequantization" line bundle $\mathcal L$, and $\omega$ is the curvature of some Hermitian connection $\alpha$ on that line bundle. Let $Aut(M,{\mathcal L},\alpha)$ denote the group of Hermitian bundle automorphisms (moving the base around) of $\mathcal L$ preserving $\alpha$. This group obviously maps to $Diff(M)$, forgetting the action on the fibers, but because it preserves $\alpha$ on $\mathcal L$ it preserves $\omega$ on $M$, so the image lies inside $Symp(M)$. The kernel consists of bundle automorphisms that only act fiberwise, and for them to preserve the flat connection they must, on each component, rotate all the fibers by the same element of $U(1)$. The Hamiltonian condition, then, is about whether one can lift the action of $G$ on $M$ to an action on the line bundle over $M$. It's very easy, given such a lift, to write down a moment map. (Basically, now that you're dealing with a $1$-form $\alpha$ instead of a $2$-form $\omega$, you can pair vector fields from $\mathfrak g$ with it.) One example I find instructive is ${\mathbb R}^{2n}$ acting on itself by translation, with the space given the usual symplectic structure. That's acting as symplectomorphisms, and the space is simply connected, so there's no $H^1$ obstruction (as when $T^1$ acts on $T^2$). But one can't lift the action to preserve the (non-flat) connection on the (trivial) line bundle; it only lifts to an action of the Heisenberg group. Another subtle example is $SO(3)$ acting on $S^2$ with the area $1$ symplectic structure. On the Lie algebra level, yes, the action is Hamiltonian. But actually $SO(3)$ doesn't act on the line bundle; only its double cover $SU(2)$ does. Finally, think about the case that $G$ acts algebraically on $X \subseteq {\mathbb P}V$. I like to say that $X$ is "equivariantly projective" if $G$ acts on ${\mathbb P}V$ preserving $X$, and this is pretty nearly an algebro-geometric replacement for the Hamiltonian condition. (Non-example: $X$ is a nodal cubic curve, whose smooth locus is ${\mathbb C}^\times$, acted on by ${\mathbb C}^\times$.) - Thanks, this is very interesting. I was about to ask what the condition was for. –  Sam Lewallen Feb 8 '10 at 18:52 I'm not an expert, but I don't find the situation very hard to imagine(see footnote below). It doesn't use anything beyond the fact that "the Lie algebra is the tangent space of the Lie group at the identity". Moreover, for this purpose it is enough to imagine the tangent space as arrows at the identity of the Lie group, pointing in the directions you can move. Oh, and a side remark that might be helpful to some. Having a map $M \to \mathfrak g^*$ that I usually thought of as a "moment map" is exactly the same as having a map $\mathfrak g \to C^\infty(M,\mathbb R)$ the questioner is talking about, and the latter is easier to visualize. The simplest case is when $G=\mathbb R$. This is the standard case of the (time-independent) Hamiltonian flow. The Lie-algebra $\mathfrak g$ is one-dimensional, so a linear moment map $f:\mathfrak g \to C^\infty (M,\mathbb R)$ is determined by its value on the unit vector $\vec e \in \mathfrak g$. If $H=f(\vec e)$, this is precisely the Hamiltonian on your manifold. The flow with respect to the moment map is precisely the same as the flow of this Hamiltonian. The next simplest case is when $G=\mathbb R^n$. The only difference here is that there are several directions in which you could go. Let ${\vec e_1 ,\ldots \vec e_n} \in \mathfrak g$ be the vectors at the identity of $G$ that represent the possible directions. Now, a moment map $f:\mathfrak g \to C^\infty (M,\mathbb R)$ is determined by n Hamiltonians; for each $1\leq i \leq n$ we have $H_i = f(\vec e_i)$. Now, any path in $G=\mathbb R^n$ will correspond to some flow on the manifold $M$. In particular, if you always go "right" (in the direction of $\vec e_1$), the flow will be precisely that of the Hamiltonian $H_1$; the existence of other directions won't matter. If you only go in the direction of $\vec e_2$, the flow will be that of $H_2$. For any path, you can approximate it by a piecewise-linear path that's always parallel to a coordinate axes; the flow will be that of $H_i$ whenever you go parallel to the axes of $\vec e_i$. Finally, if G is any Lie group, it's a manifold, so it locally looks like $\mathbb R^n$. Everything I said above about this case still holds, with one exception: the coordinate directions may no longer be "independent". (So, going right, then up, then left, then down, might not bring you exactly to the same point you started) Unfortunately, at this point my expertness runs out, so see the other answers for a detailed explanation. However, algebraically, the condition that you need to add is precisely that the moment map f is a Lie-algebra homomorphism; think of this as an error term you need to add to account for the lack of independence when you change the direction of your piecewise-linear path. Of course, in practice, in calculating flows you don't need to approximate anything with piecewise-linear paths, as there are algebraic equations will give you the Hamiltonian that corresponds to any direction. In the case of $\mathbb R ^n$, the map f will simply be linear, in general there is likely an error term. Oh, and finally, most algebraic equations are probably simpler if you think of your moment maps as maps $M \to \mathfrak g^*$. Footnote: When I wrote that everything is simple, I hadn't thought of the error terms yet (see above). However, I still think that for most conceptual purposes, it's best to think of your Lie group as $\mathbb R^n$ with some error terms added. Somebody correct me if I'm wrong, but when we describe Lie groups using "structure forms", isn't it precisely a way to make this idea precise? - Thanks! This is very clear, and I think you are right that the situation is not very difficult. But my confusion started as soon as we go from \R^n to some more complicated group, so that the Lie bracket is non-trivial. –  Sam Lewallen Feb 7 '10 at 0:16 The moment map condition is a precise form of Noether's theorem. Whenever you have an $G$-invariant Hamiltonian, its flow will preserve the value of the moment map. That is in some sense the moment map condition: $\mu:(M,\omega)\rightarrow \mathfrak{g}^*$ should be equivariant and satisfy $\langle d \mu(x).v,\xi\rangle=\omega(\xi_M(x),v)$ for all $v\in T_x M$ and $\xi\in \mathfrak{g}$ Now consider the flow $\phi^t$ of the $G$-invariant Hamiltonian $H$. Then you compute $\frac{d}{dt}|_0 \langle\mu(\phi^t(x)),\xi\rangle=\langle d\mu(x).X_H(x),\xi\rangle=\omega(\xi_M(x),X_H (x))=dH(x).\xi_M(x)=\frac{d}{dt}|_0 H(e^{t\xi}.x)=0$ since H is $G$-invariant. There is also a motivation from symplectic reduction, but I don't know whether that is really relevant at the beginning. Please don't be discouraged by this "bracket stuff". I don't consider it particularly enlightening anyway... - What I am about to say is implicit in Ben Webster's answer, but I figured that I would make it explicit. A Hamiltonian group action on a symplectic manifold M should be thought of as a Lie group homomorphism $G \to Ham(M)$, so it induces a Lie algebra map $Lie(G) \to Lie(Ham(M))$. But what is Lie(Ham(M))? It is the normalized $C^\infty(M)$ (those $f$ whose integral over M is zero, if M is closed) and the Lie bracket is the Poisson bracket. So the Hamiltonian condition for a symplectic group action is just that you want your Lie group homomorphism $G \to Symp(M)$ to factor through the Lie group homomorphism $Ham(M) \to Symp(M)$. - Thank you, this seems to clear things up quite a bit. However, what is your exact definition of Ham(M)? Three possible definitions come to my mind, and I'm not sure which of them are the same. 1) Symplectomorphisms that come from time-independent Hamiltonians. 2) Symplectomorphisms that come from dependent Hamiltonians. 3) Symplectomorphisms that come from moment maps on some Lie group. (The last would make talking about it quite tautological, but I'm pretty sure it's equivalent to number 2). The reason this feels important is that I'm not sure how exactly you calculate Lie(Ham(M)). –  Ilya Grigoriev Feb 6 '10 at 22:31 The definition of Ham(M) is your #2, namely symplectomorphisms that come from time dependent Hamiltonians. –  user1835 Feb 6 '10 at 22:40 Right. I guess 1) wouldn't even make sense - composition of two symplectomorphisms from time-independent Hamiltonian might not come from a time-independent Hamiltonian. 3) should be equivalent to 2), I hope. And an element of Lie(Ham(M)) is just a Hamiltonian (which determines an infinitesmall Hamiltonian symplectomorphism, as they always do). So it's in $C^\infty (M)$. Since adding constants to a Hamiltonian doesn't change anything, we can assume it's normalized. –  Ilya Grigoriev Feb 6 '10 at 23:00 Hmm I thought I was starting to understand things but now this comment confuses me again. Lie(Ham(M)) is a sub-lie-algebra of Lie(Symp(M)), right, because Ham(M) is just a subgroup of Symp(M)? If this is not right then I am misunderstanding your definitions. If it is right, then by "factor through the the Lie group homomorphism..." you just mean that the image lies in the subgroup Ham(M). In which case this is the "exactness" requirement that I referred to, and the "Hamiltonian" requirement is something more. –  Sam Lewallen Feb 6 '10 at 23:29 In particular, as Ben said, there is a "S.E.S." of lie algebras 0 --> R ---> C^inf(M) --p-> Lie(Ham(M)) ---> 0 Using our rho : g ---> Lie(Ham(M)), which exists because of the exactness condition, we get a sequence 0 ---> R ---> p^-1(rho(g)) ---> rho(g) ---> 0, and the Hamiltonian condition corresponds to being able to find a lie-algebra splitting of this sequence. is that right? –  Sam Lewallen Feb 6 '10 at 23:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9541409015655518, "perplexity": 272.60330948343307}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042987171.38/warc/CC-MAIN-20150728002307-00044-ip-10-236-191-2.ec2.internal.warc.gz"}
https://eprints.lancs.ac.uk/id/eprint/186965/
# Linear systems, Hankel products and the sinh-Gordon equation Blower, Gordon (2023) Linear systems, Hankel products and the sinh-Gordon equation. Journal of Mathematical Analysis and Applications. ISSN 0022-247X (In Press) Text (JMAASinhGordonrevision1) JMAASinhGordonrevision1.pdf - Accepted Version ## Abstract Let $(-A,B,C)$ be a linear system in continuous time $t>0$ with input and output space ${\mathbb C}^2$ and state space $H$. The scattering (or impulse response) functions $\phi_{(x)}(t)=Ce^{-(t+2x)A}B$ determines a Hankel integral operator $\Gamma_{\phi_{(x)}}$; if $\Gamma_{\phi_{(x)}}$ is trace class, then the Fredholm determinant $\tau (x)=\det (I+\Gamma_{\phi_{(x)}})$ determines the tau function of $(-A,B,C)$. The paper establishes properties of algebras containing $R_x = \int_x^\infty e^{-tA}BCe^{-tA}\,dt$ on $H$, and obtains solutions of the sinh-Gordon PDE. The tau function for sinh-Gordon satisfies a particular Painl\'eve $\mathrm{III}'$ nonlinear ODE and describes a random matrix model, with asymptotic distribution found by the Coulomb fluid method to be the solution of an electrostatic variational problem on an interval. Item Type: Journal Article Journal or Publication Title: Journal of Mathematical Analysis and Applications Uncontrolled Keywords: /dk/atira/pure/subjectarea/asjc/2600/2604 Subjects: Departments: ID Code: 186965 Deposited By: Deposited On: 21 Feb 2023 11:50 Refereed?: Yes Published?: In Press
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9736257791519165, "perplexity": 1780.687290631546}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949701.56/warc/CC-MAIN-20230401063607-20230401093607-00686.warc.gz"}
https://www.neliti.com/publications/55898/on-some-covering-graphs-of-a-graph
# On Some Covering Graphs of a Graph Pirzada, Shariefuddin • Ganie, Hilal A • Siddique, Merajuddin ## Abstract For a graph $G$ with vertex set $V(G)=\{v_1, v_2, \dots, v_n\}$, let $S$ be the covering set of $G$ having the maximum degree over all the minimum covering sets of $G$. Let $N_S[v]=\{u\in S : uv \in E(G) \}\cup \{v\}$ be the closed neighbourhood of the vertex $v$ with respect to $S.$ We define a square matrix $A_S(G)= (a_{ij}),$ by $a_{ij}=1,$ if $\left |N_S[v_i]\cap N_S[v_j] \right| \geq 1, i\neq j$ and 0, otherwise. The graph $G^S$ associated with the matrix $A_S(G)$ is called the maximum degree minimum covering graph (MDMC-graph) of the graph $G$. In this paper, we give conditions for the graph $G^S$ to be bipartite and Hamiltonian. Also we obtain a bound for the number of edges of the graph $G^S$ in terms of the structure of $G$. Further we obtain an upper bound for covering number (independence number) of $G^S$ in terms of the covering number (independence number) of $G$. • 93 views
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9596290588378906, "perplexity": 199.1980537319858}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867417.71/warc/CC-MAIN-20180526112331-20180526132331-00369.warc.gz"}
https://www.lmfdb.org/L/2/1815/1.1/c3-0
## Results (1-50 of 218 matches) Next Label $\alpha$ $A$ $d$ $N$ $\chi$ $\mu$ $\nu$ $w$ prim arith $\mathbb{Q}$ self-dual $\operatorname{Arg}(\epsilon)$ $r$ First zero Origin 2-1815-1.1-c3-0-129 $10.3$ $107.$ $2$ $3 \cdot 5 \cdot 11^{2}$ 1.1 $$3.0 3 0.5 1 1.05491 Modular form 1815.4.a.z.1.4 2-1815-1.1-c3-0-180 10.3 107. 2 3 \cdot 5 \cdot 11^{2} 1.1$$ $3.0$ $3$ $0.5$ $1$ $1.36718$ Modular form 1815.4.a.o.1.1 2-1815-1.1-c3-0-74 $10.3$ $107.$ $2$ $3 \cdot 5 \cdot 11^{2}$ 1.1 $$3.0 3 0 0 0.751145 Modular form 1815.4.a.bn.1.6 2-1815-1.1-c3-0-75 10.3 107. 2 3 \cdot 5 \cdot 11^{2} 1.1$$ $3.0$ $3$ $0$ $0$ $0.751581$ Modular form 1815.4.a.bb.1.1 2-1815-1.1-c3-0-99 $10.3$ $107.$ $2$ $3 \cdot 5 \cdot 11^{2}$ 1.1 $$3.0 3 0 0 0.883841 Modular form 1815.4.a.bn.1.1 2-1815-1.1-c3-0-0 10.3 107. 2 3 \cdot 5 \cdot 11^{2} 1.1$$ $3.0$ $3$ $0$ $0$ $0.0978647$ Modular form 1815.4.a.bf.1.6 2-1815-1.1-c3-0-1 $10.3$ $107.$ $2$ $3 \cdot 5 \cdot 11^{2}$ 1.1 $$3.0 3 0 0 0.132648 Modular form 1815.4.a.bc.1.3 2-1815-1.1-c3-0-10 10.3 107. 2 3 \cdot 5 \cdot 11^{2} 1.1$$ $3.0$ $3$ $0$ $0$ $0.301666$ Modular form 1815.4.a.bj.1.6 2-1815-1.1-c3-0-100 $10.3$ $107.$ $2$ $3 \cdot 5 \cdot 11^{2}$ 1.1 $$3.0 3 0 0 0.899032 Modular form 1815.4.a.bj.1.7 2-1815-1.1-c3-0-101 10.3 107. 2 3 \cdot 5 \cdot 11^{2} 1.1$$ $3.0$ $3$ $0$ $0$ $0.909275$ Modular form 1815.4.a.bd.1.3 2-1815-1.1-c3-0-102 $10.3$ $107.$ $2$ $3 \cdot 5 \cdot 11^{2}$ 1.1 $$3.0 3 0.5 1 0.922653 Modular form 1815.4.a.f Modular form 1815.4.a.f.1.1 2-1815-1.1-c3-0-103 10.3 107. 2 3 \cdot 5 \cdot 11^{2} 1.1$$ $3.0$ $3$ $0.5$ $1$ $0.927236$ Modular form 1815.4.a.w.1.2 2-1815-1.1-c3-0-104 $10.3$ $107.$ $2$ $3 \cdot 5 \cdot 11^{2}$ 1.1 $$3.0 3 0 0 0.927728 Modular form 1815.4.a.bo.1.8 2-1815-1.1-c3-0-105 10.3 107. 2 3 \cdot 5 \cdot 11^{2} 1.1$$ $3.0$ $3$ $0$ $0$ $0.931077$ Modular form 1815.4.a.bk.1.8 2-1815-1.1-c3-0-106 $10.3$ $107.$ $2$ $3 \cdot 5 \cdot 11^{2}$ 1.1 $$3.0 3 0.5 1 0.931448 Modular form 1815.4.a.u.1.1 2-1815-1.1-c3-0-107 10.3 107. 2 3 \cdot 5 \cdot 11^{2} 1.1$$ $3.0$ $3$ $0$ $0$ $0.936968$ Modular form 1815.4.a.bn.1.2 2-1815-1.1-c3-0-108 $10.3$ $107.$ $2$ $3 \cdot 5 \cdot 11^{2}$ 1.1 $$3.0 3 0.5 1 0.940057 Modular form 1815.4.a.bg.1.5 2-1815-1.1-c3-0-109 10.3 107. 2 3 \cdot 5 \cdot 11^{2} 1.1$$ $3.0$ $3$ $0$ $0$ $0.952166$ Modular form 1815.4.a.bj.1.11 2-1815-1.1-c3-0-11 $10.3$ $107.$ $2$ $3 \cdot 5 \cdot 11^{2}$ 1.1 $$3.0 3 0 0 0.317357 Modular form 1815.4.a.q.1.1 2-1815-1.1-c3-0-110 10.3 107. 2 3 \cdot 5 \cdot 11^{2} 1.1$$ $3.0$ $3$ $0.5$ $1$ $0.958562$ Modular form 1815.4.a.v.1.2 2-1815-1.1-c3-0-111 $10.3$ $107.$ $2$ $3 \cdot 5 \cdot 11^{2}$ 1.1 $$3.0 3 0 0 0.959821 Modular form 1815.4.a.bo.1.10 2-1815-1.1-c3-0-112 10.3 107. 2 3 \cdot 5 \cdot 11^{2} 1.1$$ $3.0$ $3$ $0.5$ $1$ $0.967074$ Modular form 1815.4.a.bg.1.1 2-1815-1.1-c3-0-113 $10.3$ $107.$ $2$ $3 \cdot 5 \cdot 11^{2}$ 1.1 $$3.0 3 0.5 1 0.971424 Modular form 1815.4.a.w.1.1 2-1815-1.1-c3-0-114 10.3 107. 2 3 \cdot 5 \cdot 11^{2} 1.1$$ $3.0$ $3$ $0$ $0$ $0.977004$ Modular form 1815.4.a.r.1.3 2-1815-1.1-c3-0-115 $10.3$ $107.$ $2$ $3 \cdot 5 \cdot 11^{2}$ 1.1 $$3.0 3 0 0 0.988507 Modular form 1815.4.a.q.1.3 2-1815-1.1-c3-0-116 10.3 107. 2 3 \cdot 5 \cdot 11^{2} 1.1$$ $3.0$ $3$ $0.5$ $1$ $0.988792$ Modular form 1815.4.a.bg.1.3 2-1815-1.1-c3-0-117 $10.3$ $107.$ $2$ $3 \cdot 5 \cdot 11^{2}$ 1.1 $$3.0 3 0 0 0.993249 Modular form 1815.4.a.bl.1.11 2-1815-1.1-c3-0-118 10.3 107. 2 3 \cdot 5 \cdot 11^{2} 1.1$$ $3.0$ $3$ $0$ $0$ $0.996377$ Modular form 1815.4.a.bn.1.9 2-1815-1.1-c3-0-119 $10.3$ $107.$ $2$ $3 \cdot 5 \cdot 11^{2}$ 1.1 $$3.0 3 0.5 1 1.00324 Modular form 1815.4.a.k.1.1 2-1815-1.1-c3-0-12 10.3 107. 2 3 \cdot 5 \cdot 11^{2} 1.1$$ $3.0$ $3$ $0$ $0$ $0.349281$ Modular form 1815.4.a.y.1.1 2-1815-1.1-c3-0-120 $10.3$ $107.$ $2$ $3 \cdot 5 \cdot 11^{2}$ 1.1 $$3.0 3 0 0 1.01312 Modular form 1815.4.a.bl.1.12 2-1815-1.1-c3-0-121 10.3 107. 2 3 \cdot 5 \cdot 11^{2} 1.1$$ $3.0$ $3$ $0.5$ $1$ $1.02438$ Modular form 1815.4.a.s.1.2 2-1815-1.1-c3-0-122 $10.3$ $107.$ $2$ $3 \cdot 5 \cdot 11^{2}$ 1.1 $$3.0 3 0.5 1 1.02568 Modular form 1815.4.a.bi.1.2 2-1815-1.1-c3-0-123 10.3 107. 2 3 \cdot 5 \cdot 11^{2} 1.1$$ $3.0$ $3$ $0.5$ $1$ $1.02571$ Modular form 1815.4.a.bm.1.9 2-1815-1.1-c3-0-124 $10.3$ $107.$ $2$ $3 \cdot 5 \cdot 11^{2}$ 1.1 $$3.0 3 0.5 1 1.02879 Modular form 1815.4.a.bm.1.4 2-1815-1.1-c3-0-125 10.3 107. 2 3 \cdot 5 \cdot 11^{2} 1.1$$ $3.0$ $3$ $0$ $0$ $1.02900$ Modular form 1815.4.a.bk.1.6 2-1815-1.1-c3-0-126 $10.3$ $107.$ $2$ $3 \cdot 5 \cdot 11^{2}$ 1.1 $$3.0 3 0 0 1.03848 Modular form 1815.4.a.l.1.2 2-1815-1.1-c3-0-127 10.3 107. 2 3 \cdot 5 \cdot 11^{2} 1.1$$ $3.0$ $3$ $0.5$ $1$ $1.03900$ Modular form 1815.4.a.z.1.2 2-1815-1.1-c3-0-128 $10.3$ $107.$ $2$ $3 \cdot 5 \cdot 11^{2}$ 1.1 $$3.0 3 0.5 1 1.03995 Modular form 1815.4.a.j.1.1 2-1815-1.1-c3-0-13 10.3 107. 2 3 \cdot 5 \cdot 11^{2} 1.1$$ $3.0$ $3$ $0$ $0$ $0.354245$ Modular form 1815.4.a.bj.1.9 2-1815-1.1-c3-0-130 $10.3$ $107.$ $2$ $3 \cdot 5 \cdot 11^{2}$ 1.1 $$3.0 3 0.5 1 1.05945 Modular form 1815.4.a.bm.1.3 2-1815-1.1-c3-0-131 10.3 107. 2 3 \cdot 5 \cdot 11^{2} 1.1$$ $3.0$ $3$ $0$ $0$ $1.06262$ Modular form 1815.4.a.bn.1.10 2-1815-1.1-c3-0-132 $10.3$ $107.$ $2$ $3 \cdot 5 \cdot 11^{2}$ 1.1 $$3.0 3 0.5 1 1.06481 Modular form 1815.4.a.z.1.1 2-1815-1.1-c3-0-133 10.3 107. 2 3 \cdot 5 \cdot 11^{2} 1.1$$ $3.0$ $3$ $0.5$ $1$ $1.07162$ Modular form 1815.4.a.u.1.3 2-1815-1.1-c3-0-134 $10.3$ $107.$ $2$ $3 \cdot 5 \cdot 11^{2}$ 1.1 $$3.0 3 0 0 1.07178 Modular form 1815.4.a.x.1.5 2-1815-1.1-c3-0-135 10.3 107. 2 3 \cdot 5 \cdot 11^{2} 1.1$$ $3.0$ $3$ $0.5$ $1$ $1.07319$ Modular form 1815.4.a.bh.1.2 2-1815-1.1-c3-0-136 $10.3$ $107.$ $2$ $3 \cdot 5 \cdot 11^{2}$ 1.1 $$3.0 3 0.5 1 1.07435 Modular form 1815.4.a.bg.1.2 2-1815-1.1-c3-0-137 10.3 107. 2 3 \cdot 5 \cdot 11^{2} 1.1$$ $3.0$ $3$ $0.5$ $1$ $1.08156$ Modular form 1815.4.a.bg.1.7 2-1815-1.1-c3-0-138 $10.3$ $107.$ $2$ $3 \cdot 5 \cdot 11^{2}$ 1.1 $$3.0 3 0.5 1 1.09237 Modular form 1815.4.a.bm.1.7 2-1815-1.1-c3-0-139 10.3 107. 2 3 \cdot 5 \cdot 11^{2} 1.1$$ $3.0$ $3$ $0.5$ $1$ $1.09589$ Modular form 1815.4.a.t.1.2
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9639338850975037, "perplexity": 311.38351983288493}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152156.49/warc/CC-MAIN-20210726215020-20210727005020-00133.warc.gz"}
http://mathhelpforum.com/statistics/24107-probability-problem-print.html
# probability problem • Dec 3rd 2007, 10:47 PM hisweety19 probability problem The arrival of trucks on a receiving dock is a Poisson process with a mean arrival rate of two per hour. (a) Find the probability that exactly 6 trucks arrive in a two-hour period. (b) Find the probability that the time between two successive arrivals will be more than 3 hours. Thanks!! • Dec 5th 2007, 06:21 AM TKHunny Poisson Probabilities are a matter of plugging into the formula. $\lambda\;=\;2\;per\;hour$ n = 6/2 hours = 3 per hour Calculate it. Let's see what you get. For the second, Poisson Wait Times are Exponential. Let's see what you get. • Dec 13th 2008, 02:58 PM lllll I have a nearly identical question to this, for part (b) since the inter-arrival times are exponentially distributed would it be: $\lambda = 2(3) = 6$ for your rate, then you would have: $\int_3^\infty \lambda e^{-\lambda t} dt = \int_3^\infty 6 e^{-6 t} dt$ ?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9578676819801331, "perplexity": 924.385651774065}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891791.95/warc/CC-MAIN-20180123072105-20180123092105-00704.warc.gz"}
http://www.ams.org/joursearch/servlet/DoSearch?f1=msc&v1=41A60
# American Mathematical Society Publications Meetings The Profession Membership Programs Math Samplings Policy and Advocacy In the News About the AMS You are here: Home > Publications AMS eContent Search Results Matches for: msc=(41A60) AND publication=(all) Sort order: Date Format: Standard display Results: 1 to 30 of 72 found      Go to page: 1 2 3 [1] Michael Drmota and Stefan Gerhold. Disproof of a conjecture by Rademacher on partial fractions. Proc. Amer. Math. Soc. Ser. B 1 (2014) 121-134. Abstract, references, and article information View Article: PDF [2] Luis M. Navas, Francisco J. Ruiz and Juan L. Varona. Some functional relations derived from the Lindel\"of-Wirtinger expansion of the Lerch transcendent function. Math. Comp. Abstract, references, and article information    View Article: PDF [3] K. F. Lee and R. Wong. Asymptotic expansion of the modified Lommel polynomials $h_{n,\nu}(x)$ and their zeros. Proc. Amer. Math. Soc. 142 (2014) 3953-3964. Abstract, references, and article information    View Article: PDF [4] Ronald E. Mickens. I Wish I Knew How to .... Contemporary Mathematics 618 (2014) 299-310. Book volume table of contents    View Article: PDF [5] G. A. Edgar. Fractional iteration of series and transseries. Trans. Amer. Math. Soc. 365 (2013) 5805-5832. Abstract, references, and article information    View Article: PDF [6] Sai-Yu Liu, R. Wong and Yu-Qiu Zhao. Uniform treatment of Darboux's method and the Heisenberg polynomials. Proc. Amer. Math. Soc. 141 (2013) 2683-2691. Abstract, references, and article information    View Article: PDF [7] Jeffery C. DiFranco and Peter D. Miller. The semiclassical modified nonlinear Schr\"odinger equation II: Asymptotic analysis of the Cauchy problem. The elliptic region for transsonic initial data. Contemporary Mathematics 593 (2013) 29-81. Book volume table of contents    View Article: PDF [8] Hailiang Liu, Olof Runborg and Nicolay M. Tanushev. Error estimates for Gaussian beam superpositions. Math. Comp. 82 (2013) 919-952. Abstract, references, and article information    View Article: PDF [9] Avram Sidi. Euler--Maclaurin expansions for integrals with arbitrary algebraic endpoint singularities. Math. Comp. 81 (2012) 2159-2173. Abstract, references, and article information    View Article: PDF [10] Luis M. Navas, Francisco J. Ruiz and Juan L. Varona. Asymptotic estimates for Apostol-Bernoulli and Apostol-Euler polynomials. Math. Comp. 81 (2012) 1707-1722. Abstract, references, and article information    View Article: PDF [11] J. S. Brauchart, D. P. Hardin and E. B. Saff. The next-order term for optimal Riesz and logarithmic energy asymptotics on the sphere. Contemporary Mathematics 578 (2012) 31-61. Book volume table of contents    View Article: PDF [12] Esther Garcia and José L. López. The Appell’s function $F_{2}$ for large values of its variables. Quart. Appl. Math. 68 (2010) 701-712. MR 2761511. Abstract, references, and article information View Article: PDF [13] K. F. Lee and R. Wong. Uniform asymptotic expansions of the Tricomi-Carlitz polynomials. Proc. Amer. Math. Soc. 138 (2010) 2513-2519. MR 2607881. Abstract, references, and article information    View Article: PDF [14] Franz Peherstorfer. Extremal problems of Chebyshev type. Proc. Amer. Math. Soc. 137 (2009) 2351-2361. MR 2495269. Abstract, references, and article information    View Article: PDF [15] Miroslav Englis. Berezin transforms on pluriharmonic Bergman spaces. Trans. Amer. Math. Soc. 361 (2009) 1173-1188. MR 2457394. Abstract, references, and article information    View Article: PDF [16] Michael Schröder. On constructive complex analysis in finance: Explicit formulas for Asian options. Quart. Appl. Math. 66 (2008) 633-658. MR 2465139. Abstract, references, and article information View Article: PDF [17] J. A. Adell and P. Jodrá. On a Ramanujan equation connected with the median of the gamma distribution. Trans. Amer. Math. Soc. 360 (2008) 3631-3644. MR 2386240. Abstract, references, and article information    View Article: PDF This article is available free of charge [18] Chelo Ferreira and José L. López. The Lambert transform for small and large values of the transformation parameter. Quart. Appl. Math. 64 (2006) 515-527. MR 2259052. Abstract, references, and article information View Article: PDF [19] Arieh Iserles and Syvert P. Nørsett. Quadrature methods for multivariate highly oscillatory integrals using derivatives. Math. Comp. 75 (2006) 1233-1258. MR 2219027. Abstract, references, and article information    View Article: PDF This article is available free of charge [20] R. Wong and Wenjun Zhang. Uniform asymptotics for Jacobi polynomials with varying large negative parameters--- a Riemann-Hilbert approach. Trans. Amer. Math. Soc. 358 (2006) 2663-2694. MR 2204051. Abstract, references, and article information    View Article: PDF This article is available free of charge [21] J. A. Addison, S. D. Howison and J. R. King. Ray methods for free boundary problems. Quart. Appl. Math. 64 (2006) 41-59. MR 2211377. Abstract, references, and article information View Article: PDF [22] Avram Sidi. Extension of a class of periodizing variable transformations for numerical Integration. Math. Comp. 75 (2006) 327-343. MR 2176402. Abstract, references, and article information    View Article: PDF This article is available free of charge [23] José L. López and Ester Pérez Sinusía. Asymptotic approximation of singularly perturbed convection-diffusion problems with discontinuous derivatives of the Dirichlet data. Quart. Appl. Math. 63 (2005) 527-543. MR 2169032. Abstract, references, and article information View Article: PDF [24] Michael Drmota, Bernhard Gittenberger and Thomas Klausner. Extended admissible functions and Gaussian limiting distributions. Math. Comp. 74 (2005) 1953-1966. MR 2164105. Abstract, references, and article information View Article: PDF This article is available free of charge [25] Z. Wang and R. Wong. Linear difference equations with transition points. Math. Comp. 74 (2005) 629-653. MR 2114641. Abstract, references, and article information    View Article: PDF This article is available free of charge [26] Jason P. Bell and Stanley N. Burris. Asymptotics for logical limit laws: When the growth of the components is in an RT class. Trans. Amer. Math. Soc. 355 (2003) 3777-3794. MR 1990173. Abstract, references, and article information    View Article: PDF This article is available free of charge [27] Michael Berry. Making light of mathematics. Bull. Amer. Math. Soc. 40 (2003) 229-237. MR 1962297. Abstract, references, and article information    View Article: PDF [28] Avram Sidi. A convergence and stability study of the iterated Lubkin transformation and the $\theta$-algorithm. Math. Comp. 72 (2003) 419-433. MR 1933829. Abstract, references, and article information    View Article: PDF This article is available free of charge [29] Avram Sidi. New convergence results on the generalized Richardson extrapolation process GREP$^{(1)}$ for logarithmic sequences. Math. Comp. 71 (2002) 1569-1596. MR 1933045. Abstract, references, and article information    View Article: PDF This article is available free of charge [30] Malabika Pramanik. Convergence of two-dimensional weighted integrals. Trans. Amer. Math. Soc. 354 (2002) 1651-1665. MR 1873022. Abstract, references, and article information    View Article: PDF This article is available free of charge Results: 1 to 30 of 72 found      Go to page: 1 2 3
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9527281522750854, "perplexity": 3413.056331978085}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400379320.39/warc/CC-MAIN-20141119123259-00140-ip-10-235-23-156.ec2.internal.warc.gz"}
http://mathhelpforum.com/trigonometry/77107-last-question-please-help-sum-product-formulae.html
Final question of my assigment, can anyone help me so I can celebrate!! I'm struggling on the following question: Power in an electric circuit is given by: p = iv Calculate the maximum value of power if: v = 0.002Sin (100piet)volts i = 0.6Sin (100piet + pie/4)amps and calculate the first time the power reaches a maximum value?? I've had a go at calculating maximum power and got the following: 0.6Sin(100piet + pie/4) x 0.02Sin(100piet) = 0.012Sin(100piet) - Sin(100piet + pie/4) = 1/2 x 0.012(cos(100piet -(100piet + pie/4))-cos(100piet+(100piet+pie/4)) = 0.006(cos pie/4 - cos(200piet + pie/4) as smallest possible value is -1 for cos(200piet +pie/4) : max value of p = 0.006(cos pie/4 +1) = 0.0102? Does this make any sense? For the second part of the question: First time it reaches maximum power, I need some guidance on this one, I'm not sure where to go with this one? All help would be greatly appreciated. 2. as far as i concerned, when multiplying complex number you need to multiply the magnitud and add the phase: now you need to transfrom time-domain to phasor-domain $\displaystyle v(t)=v_{m}\sin(\omega+\emptyset)\Longleftrightarro w v=v_{m}\angle\emptyset-90^\circ$ $\displaystyle i(t)=i_{m}\sin(\omega+\emptyset)\Longleftrightarro w i=i_{m}\angle\emptyset-90^\circ$ $\displaystyle v=0.002\angle-90^\circ,i=0.6\angle-45^\circ$ $\displaystyle p=vi=0.002\times0.6\angle-45^\circ-90^\circ=1.2\angle-135^\circ mW$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8791807889938354, "perplexity": 2445.6286748026423}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863411.67/warc/CC-MAIN-20180620031000-20180620051000-00577.warc.gz"}
https://socratic.org/questions/57316b5911ef6b59c481ef6f
Physics Topics # Question #1ef6f In first case, because $L = {\lambda}_{1} / 2$, the fund freq is ${f}_{1} = \frac{v}{\lambda} _ 1 = \frac{v}{2 L}$ In second case, because $\frac{3 L}{4} = {\lambda}_{2} / 4$, the fund freq is ${f}_{2} = \frac{v}{\lambda} _ 2 = \frac{v}{3 L}$ So ${f}_{2} = \frac{2}{3} {f}_{1} = 260 H z$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 5, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9247332215309143, "perplexity": 4869.694694241242}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141189038.24/warc/CC-MAIN-20201127015426-20201127045426-00335.warc.gz"}
https://www.arxiv-vanity.com/papers/0910.1159/
# VLBA Scientific Memorandum 31: ASTROMETRIC CALIBRATION OF mm-VLBI USING SOURCE FREQUENCY PHASE REFERENCED OBSERVATIONS Richard Dodson and María Rioja University of Western Australia, UWA, Australia Observatorio Astronómico Nacional, España ABSTRACT In this document we layout a new method to achieve “bona fide” high precision Very Long Baseline Interferometry (VLBI) astrometric measurements of frequency-dependent positions of celestial sources (even) in the high (mm-wavelength) frequency range, where conventional phase referencing techniques fail. Our method, dubbed Source/Frequency Phase Referencing (sfpr) combines fast frequency switching (or dual-frequency observations) with the source switching of conventional phase referencing techniques. The former is used to calibrate the dominant highly unpredictable rapid atmospheric fluctuations, which arise from variations of the water vapor content in the troposphere, and ultimately limit the application of conventional phase referencing techniques; the latter compensates the slower time scale remaining ionospheric/instrumental, non-negligible, phase variations. For cm-VLBI, the sfpr method is equivalent to conventional phase referencing applied to the measurement of frequency-dependent source positions changes (“core-shifts”). For mm-VLBI, the sfpr method stands as the only approach which will provide astrometry. A successful demonstration of the application of this new astrometric analysis technique to the highest frequency VLBA observations, at 86 GHz, is presented here. Our previous comparative astrometric analysis of cm-VLBI observations, presented elsewhere, produced equivalent results using both methods. In this memo we layout the scope and basis of our new method, along with a description of the strategy (Sections 1 and 2), and a demonstration of successful application to the analysis of VLBA experiment BD119 (Section 3). Finally, in Section 4, we report on the results from a series of 1-hour long VLBA experiments, BD123A, B and C, aimed at testing the robustness of the method under a range of weather conditions. ## 1 Introduction One of the complications in VLBI, over connected array interferometers, arises from the completely unrelated atmospheric conditions that the wavefronts propagate through before reaching the widely separated antennae. Self calibration procedures, which are the standard VLBI analysis technique for imaging radio sources, rely on closure relations to remove the station dependent complex gain factors that characterize the phase errors at each antenna. A direct detection of the source with good signal to noise ratio is required within every segment of the coherence integration time-interval. This time interval is set by the stability of the instrument and, dominantly, the atmospheric turbulence. An important consequence of the use of phase closure is that information on the absolute position of the source is lost, preventing the measurement of astrometric quantities. The application of phase-referencing techniques, to the analysis of interleaving observations of the program source and a nearby calibrator, preserves the information on the angular separation on the sky and provides high precision relative-astrometry (Alef 1988, Beasley & Conway, 1995). At the observations, the scans on the scientifically interesting source, the target, are interleaved (within the coherence integration time) with observations of a calibrator source, the reference. The antenna-based corrections derived from the self-calibration analysis of the reference source observations are transferred for the calibration of the target source. Next, the target dataset is Fourier transformed without any further calibration to yield a phase referenced map of the target source, where the position offset of the peak from the center provides a precise measurement of the relative separation between both sources. The propagation of astrometric errors in the phase referencing analysis is strongly dependent on the angular separation between the target and reference sources, and range between the micro-arcsec and tenths of milli-arcsec accuracy. This phase referencing technique, from now on referred as “conventional phase referencing”, is well established and has been used to provide high precision astrometric measurements of (relative) source positions in cm-VLBI observations. It would be highly desirable to extend this capability to the mm-VLBI regime, yet at the highest frequencies the observations are sensitivity limited: the instruments are less efficient, the sources are intrinsically weaker, and the phase coherence integration times are severely constrained by the rapid atmospheric phase fluctuations due to the variations (spatial and temporal) of the water vapor content in the troposphere. In particular, the coherence time is too short to allow an antenna to switch its pointing direction between pairs of sources, in all but the most exceptional cases (Porcas & Rioja, 2002), within that time range. The lack of suitable reference sources in mm-VLBI makes it almost impossible to apply “conventional phase referencing” techniques in the high frequency (i.e. significantly above 43 GHz) domain. Therefore it would be hugely beneficial if the calibration could be performed at a lower, easier, frequency and used for data collected at a higher frequency. That is, to transfer the calibration terms (for phase/delay/rate VLBI observables) derived at a different frequency rather than at a different source as in “conventional phase-referencing”. It should be noted that frequency switching can be performed much faster than source switching at the VLBA. Moreover the duty-cycle is now determined by the coherence time at the lower frequency. The low frequency phases provide ‘connection’ but of course can not correct for variations faster than the duty cycle. This requires co-temporal dual frequency observations as provided by the next generation of VLBI antennae and arrays which are able to co-observe at different frequency bands, e.g. the Yebes 40m antenna and the Korean VLBI Network. The feasibility of multi-frequency observations to correct the non-dispersive tropospheric phase fluctuations in the high frequency regime has been studied for some time. It relies on the fact that such fluctuations will be linearly proportional to the observing frequency, and hence it should be possible to use a scaled version of the calibration terms derived from the analysis of observations at a lower frequency (where more and stronger sources are available, with longer coherence integration times and better antenna performance), to calibrate higher frequency observations. It is a kind of phase referencing, between observations at two frequencies, that we call “frequency phase transfer” (FPT). Among the earliest references we found are “Phase compensation experiments with the paired antennas method 2. Millimeter-wave fringe correction using centimeter-wave reference” (Asaki, et al. 1998) with the Nobeyama millimeter array (NMA), and “Tropospheric Phase Calibration in Millimeter Interferometry” (Carilli & Holdaway, 1999) for application with the Very Large Array (VLA). In “VLBI observations of weak sources using fast frequency switching”, Middelberg et al. (2005) applied this frequency phase transfer technique to mm-VLBI observations. They achieved a significant increase in coherence time, resulting from the compensation of the rapid tropospheric fluctuations, but failed to recover the astrometry, due to the remaining residual dispersive terms. Our proposed Source/Frequency phase referencing method endows this approach with astrometric capability for measuring frequency dependent source positions (“core-shifts”) by adding a strategy to estimate the ionospheric (and other) contributions. In “Measurement of core-shifts with astrometric multi-frequency calibration” (Rioja et al. 2005) we applied it to measure the “core-shift” of quasar 1038+528 A between S and X-bands (8.3/2.2 GHz), and validate the results by comparison with those from standard phase referencing techniques at cm-VLBI, where both methods are equivalent. Here we present a demonstration of successful application of the sfpr method to astrometric mm-VLBI, a much more challenging frequency regime where conventional phase referencing fails. Also, the basis of the method, and details on the scheduling and data analysis are described. This method opens a new horizon with targets and fields suitable for high precision astrometric studies with VLBI, especially at high frequencies where severe limitations imposed by the rapid fluctuations in the troposphere prevent the use of conventional phase referencing techniques. In addition this method can be applied to Space VLBI, where accurate orbit determination is a significant issue. This method results in perfect correction of frequency independent errors, such as those arising from the uncertainty in the reconstruction of the satellite orbit. The application to the space mm-VLBI mission VSOP-2 is described in detail in Rioja & Dodson (2009). ## 2 The Basis of the new astrometric method This section outlines the basis of an astrometric method aimed at measuring the frequency dependent core position shift (“core-shift” hereafter) in radio sources in the high frequencies regime. The novel sfpr approach consists of two calibration steps: • Dual-frequency observations to calibrate the rapid non-dispersive atmospheric phase fluctuations in VLBI observables at the high frequency regime, arising from inhomogeneities in the water vapor content in the troposphere and • Dual-source observations to compensate for the remaining dispersive slower varying contributions to the observed phases. The first step results in increased coherence times at the higher frequency, however, an extra step of calibration which involves observations of a second source is needed to preserve the astrometry. This is the essence of the SFPR technique. We include here a description of the procedure using conventional formulae for VLBI - and assume that the data reduction is done using AIPS. Dodson & Rioja (2008) contains more prescriptive details. Its application involves observations with fast frequency switching between the two frequencies of interest ( and , shown as superscripts in the formulae, for the higher and lower observed frequencies), and slow source switching between the target and a nearby source ( and , shown as subscripts in the formulae). Following standard nomenclature, the residual phase (that is after a priori estimated values for the various contributing terms have been removed and the signal integrated in the correlator) values for observations of the target source () at the lower frequency () with a given baseline, , are shown as a sum of contributions: where and are, respectively, contributions to the residual phase from geometric, propagation medium – troposphere and ionosphere – and instrumental errors, and is the radio structure term (the visibility phase), referenced to the point for which has been computed, which is non-zero for non-symmetric sources; stands for the modulo phase ambiguity term. The application of “self-calibration” techniques produces an image of the source, which allows one to disentangle the effect of the visibility phase from the rest of contributions, and produce a set of antenna-based terms, , that account for the errors mentioned above. These terms are scaled by the frequency ratio , after interpolation to the observing times of the higher frequency scan times observations, , and used to calibrate the higher frequency observations. The resultant frequency-referenced residual phases at the higher frequency are: where stands for “Frequency Phase Transfer” and stands for the interpolated -frequency self-calibration solutions to the -frequency scan times. This calibration strategy results in perfect cancellation (as long as the interpolation of is a good approximation to ) of the non-dispersive rapid tropospheric phase fluctuation terms, since: but not for the dispersive ones, which do not scale linearly with frequency, and hence there are remaining ionospheric and instrumental terms: Notice that while antenna and source coordinates errors, given the non-dispersive nature of geometric terms, cancel out in this calibration procedure, a frequency dependent source position shift would remain, since: where is the baseline vector, in units of the higher wavelengths, and stands for the “core-shift”. An integer frequency ratio will keep the phase ambiguity term in the phase equations as an integer number of , and avoid phase connection problems. We strongly advice to use the observations at a given frequency () to calibrate the harmonic frequencies (). For simplicity we will omit the ambiguity term in the coming equations. When is zero does not need to be integer, see Rioja et al. (2005). Replacing the relations above in equation (1), the expression for the “frequency transferred” residual phases for the observations of source at the higher frequency becomes: The rapid tropospheric fluctuations have been calibrated out, however longer timescale contaminating ionospheric and instrumental terms remain blended with the radio structure and astrometric “core-shift” signature, and prevent its direct extraction from the phases. Previous applications of the dual-frequency calibration method used an extra step of self-calibration to remove these, with the consequent loss of the frequency-dependent position of the source (the “core-shift”) in the sky (as in Middelberg et al. 2005). We propose a different scheme that removes the non-dispersive terms while preserving the astrometric information. It uses the procedure of interleaving fast-frequency switching observations of the program source with those of a calibrator which is nearby in angle, , in a very similar fashion as it is done for conventional phase referencing. The analysis of the dataset is done following the same procedure as for , and arrive to an equivalent expression to equation (2) for the fpt-phases of : A careful planning of the observations, namely alternating between two sources that lie within the same ionospheric isoplanatic patch (whose size is many degrees at mm-wavelengths) with a duty cycle that matches the shortest ionospheric/instrumental time-scales (several minutes at least), results in the remaining dispersive terms in equations (2) and (3) being close to equal. That is: Under these conditions the fpt-phases of can be used to calibrate the dataset, as in “conventional phase referencing”. That is, apply self-calibration techniques on the fpt-phases from the dataset (including removal of the structure contributions), and transfer the estimated antenna-based corrections for the calibration of the fpt-phases of , after interpolation to the corresponding observing times, . The resultant Source/Frequency-referenced residual phases for the target source , , are free of ionospheric/instrumental corruption while keeping the astrometric “core-shift” signature: where stands for the radio structure contribution of source at the high frequency, and the terms and modulate each baseline with a hours period sinusoid whose amplitude depends on the “core-shifts” in and , respectively - and is equivalent to the functional dependence on the source pair angular separation in “conventional phase referencing”. Finally, the calibrated sfpr-visibility phases from the target source , , are inverted to yield a synthesis image of source at the -band, where the offset from the center corresponds to a bona-fide astrometric measurement of the combined frequency dependent “core shifts” in sources and between the and -frequency bands. We have summarised the contributions to the residual phases and how they are handled in our new strategy for carrying out astrometric Source/Frequency Phase Referenced observations. Because of the large calibration overhead involved in sfpr this method is only recommended for mm-VLBI, where no other method would succeed. Previous efforts to exploit the astrometric application of multi-frequency techniques failed (even for mm-VLBI), because of what is believed to be the ionospheric contribution. Whilst improved phase stabilisation was achieved, and the deepest ever detection of VLBI cores at 86-GHz were produced, astrometric results could not. Our improved method compensates the remaining ionospheric and instrumental contributions while preserving the astrometric signature in the calibrated visibilities, and, of course, also increases the coherence integration time of the observations at the higher frequency. We have not yet addressed the errors introduced by the interpolation of the lower frequency phase to the times of the high frequency observations (i.e. to ), nor the constraints on the frequency switching duty cycle, both closely related to the coherence at the time of the observations. Using a typical value for the Allan standard deviation () of atmospheric phase fluctuations equal to over 100 seconds (Thompson, Moran, Swenson 2001) and the accumulated phase noise () one can estimate the coherence time ( for a one radian change in phase. This results in typical coherence times of about 70 and 40 seconds, at 22 and 43 GHz, respectively. The duty cycle for the frequency switching has to be less than this coherence time for the lower frequency, if the conditions are typical. However normally one would normally request ‘better than typical’ weather conditions for mm-VLBI which would improve these limits. The error due to the interpolation of the low frequency phases to the time of the high frequency observations can be estimated from the errors ( for scan ) before and after the high frequency scan multiplied by the frequency ratio (). If the duty cycle equals the coherence time the low frequency observations before and after the high frequency scan are independent, but (assumed to be) smoothly varying. Therefore, taking the errors as equal for the bracketing observations, the error on the high frequency observation is given by . If the duty cycle is much less than the coherence time the errors will be (as the measurements are independent, but the observables are not) however this will involve inefficient use of time in switching the frequencies. If the duty cycle is greater than the coherence time the solutions can not be connected with a linear extrapolation and the calibration will fail. One can then deduce the minimum SNR required in the low frequency scans to ensure that the estimated phase corrections for the high frequency observations are meaningful. Assuming that is given by the rms thermal phase noise formula (SNR) and setting an upper limit of for the high frequency phase correction estimates, ones imposes SNRs in the low frequency scans equal to 6 and 11, for observations at frequency pairs 22/43 GHz and 22/86GHz, respectively. Note also that if one was choosing between using 22- or 43-GHz as the calibrator for an 86-GHz target one needs to balance the halving of the frequency ratio against the approximate three times higher SEFD for 43-GHz observations. As an aside, the Korean VLBI Network (KVN), the world’s first dedicated mm-VLBI array, will be able to observe 4 bands simultaneously (22/43/86/129 GHz). This will remove the need for frequency switching, tripling the observing time in a typical implementation thereby increasing the SNR, and furthermore remove the need for interpolation hence reducing the accumulated errors. ## 3 Demonstrations of the Method and Results On 18 February 2007 we carried out 7 hours of VLBA observations of two pairs of continuum sources (1308+326 & 1308+328 apart, and 3C273 & 3C274 apart), using fast frequency switching between 43- and 86-GHz scans on each source, and slow antenna switching between the sources, for each pair. Each antenna recorded eight 16-MHz IF channels, using 2-bit Nyquist sampling, which resulted in a data rate of 512 Mbps. The total duration of the observations was divided in 1.5-hour long blocks allocated to alternate observations of the two source pairs. The analysis was done mostly using AIPS. See Dodson & Rioja (2008) for more details on the tasks and considerations. Firstly we followed the general VLBI calibration procedures, using the scans on the primary calibrator (3C273), and applied it to the total duration of the observations. Next, for each pair, we applied self-calibration analysis procedures to the observations of the two sources at 43 GHz (the “lower” frequency), scaled the resulting phase terms by a factor of 2 (using an external perl script), and applied these to the same source’s observations at 86 GHz (the “higher” frequency). Then, we ran self-calibration procedures on the strongest source of each pair, 1308+326 and 3C273, respectively, at 86 GHz. These solutions were then transferred for calibration to the observations of the other source’s pair, 1308+328 and 3C274, respectively, at the higher frequency, which were finally imaged without further calibration. The result of the analysis is, for each pair, is a sfpr map which contains the brightness distribution of the target source at 86 GHz, and where the offset of the peak of flux from the centre is astrometrically significant, and corresponds to the combined relative “core-shift” between 43 and 86 GHz, of both sources. An additional complication that we have not mentioned above is related to the source radio structure effects; this is relevant when the sources are extended, as it is the case for the 3C pair. For this case both the lower frequency FRING phase solutions and the CALIB solutions (based on the best hybrid image) were doubled and applied for the calibration of the higher frequency observation. For compact sources, as it is the case for the 1308+326/8 pair, the structure contribution is negligible. Figure 1 shows the sfpr image of 1308+328, at 86 GHz. This map was made following the method described above, using the 43-GHz observations of the source and further corrections derived from 1308+326, away. The flux recovery in our image, defined as the ratio between the brightness peaks in the sfpr-map to the hybrid map, is 60%. For comparison, the only “conventional phase referenced” map which has been done at 86-GHz (Porcas & Rioja 2002), on this same pair of sources, resulted in a flux recovery of only 20%. Previous multi frequency VLBI observations of this pair of sources (Rioja et al. 1996) are compatible with a zero “core-shift”, as found in our analysis. Figure 2 shows the sfpr image of 3C274 (M87) at 86 GHz. The larger angular separation between the sources in the 3C pair, apart, compared to the for the 1308+32 pair, makes this case a more challenging test. Still, the flux recovery in the sfpr image in Figure 2 is 60%. The peak of brightness does show an offset from the centre of the map equal to 70sec. In the absence of any other observations to compare our results with, we note here that the predicted “core-shifts” for 3C273 is 65 sec, and zero for 3C274 (Lobanov, 1998 for 3C273 and personal comms. for 3C274), so it is possible that we again have a correct solution. As the theoretical predictions are not, at best, an exact science, it would be wise instead to take the measured “core-shift” in the map as an order of magnitude estimate of the reliability of the method. That is, we give an upper bound of 0.1 milli-arcsec to the astrometric accuracy produced by this strategy, pending further investigation. ## 4 Robustness of the Method vs. Weather This method has also been shown to work well, in terms of astrometric recovery, in observations made without weather constraints. We summarize here the results found from the analysis of a series of 1-hour long VLBA test observations (Exp. codes BD123A/B/C) of the pair of sources 1308+326/8, with the VLBA, at 43 and 86 GHz. We followed the procedure described in section 3 for the analysis of the observations to produce sfpred maps of 1308+328 at 86GHz, and also applied self-calibration procedures to produce hybrid maps, using AIPS. The datasets were inspected and some baselines were flagged out based on lack of detections of the calibrator source. To assess the quality of the results in the three sessions we used the ratio of the brightness peak values in the sfpred to the hybrid maps, which we refer to as “flux recovery” hereafter. Calibration errors, arising from imperfect phase compensations in the analysis using a reference source/frequency, are responsible for the decrease, and biased positional offsets, of the peak flux in the sfpred map. Figure 3 shows the sfpred maps of 1308+328 at 86-GHz obtained from the analysis of these observations. The map corresponding to the analysis of BD123B (center) shows 88% “flux recovery” with good solutions for all antennae in the dataset; for BD123C (right) the flux recovery is 60% with failed solutions for the NL and LA antennae; and for BD123A (left) the flux recovery is only 23%, with failed solutions the LA and PT antennae. The low “flux recovery” in the map from BD123A rises doubts about the success of the technique in that case. However the location of the peak flux in the three maps, which carries the astrometric information, does not significantly shift from the centre, with differences in positions as. The weather conditions are expected to have an impact on the quality of the sfpr analysis results, as happens in conventional phase referencing. It would be very useful to have a threshold criterion for successful sfpr with respect to the weather conditions, especially at the highest frequencies. We could not address this question in this series of three test experiments since the weather “predictions” were not kept after the observations. We could not find either any outstanding correlation between the ground meteorological data measured during the observations, or values, and the image quality for the three experiments. Instead, we have attempted to characterize the weather, a posteriori, with phase coherence measurements using the observations themselves. We have used a four minutes long scan on the calibrator source 3C273 at 43 GHz, after preliminary calibration using a 2-minutes solution interval in FRING, for the three experiments. For each experiment the scans were segmented at different intervals (from 10 sec to 150 sec long, with 10 sec steps) and averaged (in scalar and vector fashion) to determine the visibility amplitudes. Figure 4 shows baseline phase coherence plots for the three experiments: the amplitude ratio between the vector average and scalar average in y-axis, against integration time in x-axis, for all baselines to the reference antenna in each of the three experiments. The plots only include baselines successfully detected in each session, using different colours for each antenna (in baselines with the reference antenna); the MK antenna (pink) shows the poorest coherence and the lowest () evelations in all the 3 experiments. These experiments are so short (1-hour long) that this sample serves as a good indication of the conditions through-out the experiment. The plot for BD123B (centre) shows the best performance - better than can be resolved in the 2 min data-span. The coherence is certainly more than several minutes at 43 GHz, and corresponds to the session with best image recovery. For BD123C (right) the plot shows that severe coherence losses occur quite quickly, but then they reach a plateau and stabilize. The flux recovery in the map is 60%. Therefore we can say that the 2-minute solution interval is acceptable in this case. In experiment BD123A, the coherence shows a steady decrease across the span of the data. This is in agreement with the poor flux recovery, of only 23%, which would normally be described as a failure in the phase referencing process, although we repeat that the position of the peak of emission in the map is in agreement, to within 20 as, with those from B and C datasets. Based on Figures 3 & 4, a tentative classification of the weather conditions during the three 1-hour long observations, is: “good” for BD123B, “acceptable” for BD123B and “bad” for BD123A. The coherence times probe the weather conditions relevant for mm-wavelength observations, such as the content of water vapor in the troposphere, unobtainable from ground only meteorological measurements. We conclude that a suitable frequency duty cycle for obtaining a good sfpr map should be selected based on the observed frequencies and particular weather conditions during the observations; certainly it should be well under the coherence time at the reference (lower) frequency at the epoch of the observations. We are unable to give further guidance without further observations. ## 5 Summary We have proposed and demonstrated a new method of astrometric VLBI calibration, suitable for mm-VLBI. It uses dual frequency observations to removed non-dispersive contributions. The additional step required to remove the ionospheric, and all other slowly varying dispersive terms, is done by including another source to cross calibrate with. Because the ionospheric patch size is very large at mm wavelengths one can use calibrators that lie a considerable distance from the source. We have presented the results from two pairs of sources, one only apart and the other 10 apart. A single pair is sufficient to demonstrate the method, however the astrometric solution (the offset from expected position) contains the contribution from both sources (as happens in standard phase-referencing as well). This problem, however, fulfills the closure condition, so three or more sources can be used to form a closure triangle and separate the contributions from each individual source. ## Acknowledgments We wish to express our gratitude to Craig Walker, Ed Fomalont, Asaki Yoshiharu and Richard Porcas for their help in the development of the technique and in proof-reading this manuscript. ## References • [1] Asaki, Y., et al. 1998. “Phase Compensation experiments with the paired antennas method 2. Millimeter-wave fringe correction using centimeter-wave reference.”  Radio Science 33, 1297-1318. • [2] Carilli, C. L., Holdaway, M. A. 1999. “Tropospheric phase calibration in millimeter interferometry.” Radio Science 34, 817-840. • [3] Dodson, R.. Rioja, M., 2008, “On the astrometric calibration of mm-VLBI using dual frequency observations”, IT-OAN-2008-3 • [4] Lobanov, A. P. 1998. “Ultracompact jets in active galactic nuclei.” Astronomy and Astrophysics 330, 79-89. • [5] Middelberg, E., Roy, A. L., Walker, R. C., Falcke, H. 2005. “VLBI observations of weak sources using fast frequency switching.” Astronomy and Astrophysics 433, 897-909. • [6] Porcas, R. W., Rioja, M. J. 2002. “VLBI phase-reference investigations at 86 GHz.” Proceedings of the 6th EVN Symposium 65. • [7] Rioja, M. J., Dodson, R., Porcas, R. W., Suda, H., Colomer, F. 2005. “Measurement of core-shifts with astrometric multi-frequency calibration.” ArXiv Astrophysics e-prints arXiv:astro-ph/0505475. • [8] Rioja, M. J., Porcas, R., Machalski. J. 1996. “EVN Phase-Referenced Observations of 1308+328 and 1308+326.” Extragalactic Radio Sources, IAU Symp. 175, p. 122. • [9] Thompson, Moran, Swenson, 2001, “Interferometry and Sythesis in Radio Astronomy” (New York: John Wiley & Sons) ### Acknowledgments we wish to note the essential help of the EU Marie-Curie International Incoming Fellowship (MIF1-CT-2005-021873), the VLBA which in funded by the National Science Foundation (of the USA) and the support of and advice from Ed Fomalont, Richard Porcas and Vivek Dhawan.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8762478828430176, "perplexity": 1696.2367827269693}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178374616.70/warc/CC-MAIN-20210306070129-20210306100129-00092.warc.gz"}
https://electronics.stackexchange.com/questions/286011/emitter-follower-biasing-with-voltage-divider
Emitter follower biasing with voltage divider I have met a emitter follower design example in the Horowitz-Hill's book (scheme is below): And I don't understand why the resulting steps is correct: Step 1. Choose $V_E$. For the largest possible symmetrical swing without clipping, $V_E = 0.5V_{CC}$, or $+7.5$ volts. Step 2. Choose $R_E$. For a quiescent current of $1 mA$, $R_E = 7.5k$. Step 3. Choose $R_1$ and $R_2$. $V_B$ is $V_E+0.6V$, or $8.1V$. This determines the ratio of $R_1$ to $R_2$ as $1:1.17$. The preceding loading criterion requires that the parallel resistance of $R_1$ and $R_2$ be about $75k$ or less (one-tenth of $7.5k×\beta$ ). Suitable standard values are $R_1 = 130k$, $R_2 = 150k$. So, once again all known values: $$V_{BE} = 0.6 V \\ R_1 = 130K \\ R_2 = 150K \\ R_E = 7.5K \\ \beta = 100$$ My question is about values for $R_1$ and $R_2$. The maximum current through the divider without connected load is: $$I_{div} = \frac{V_cc}{R_1 + R_2} = \frac{15}{280 \cdot 10^3} \approx 54 \mu A$$ When we connect emitter follower to the divider, there must be a base current $I_B$ that is: $$I_B = \frac{I_E}{\beta + 1} = \frac{1ma}{100 + 1} \approx 9.9 \mu A$$ Hence we could calculate output voltage of the divider after connection of emitter follower: $$I_{R_2} = I_{div} - I_{B} = 54 - 9.9 = 44.1 \mu A$$ Hence, we'd get a output voltage from the divider: $$V_{div} = V_{R_2} = I_{R_2} \cdot R_2 = 44.1 \cdot 10^{-6} \cdot 150 \cdot 10^3 \approx 6.62 V$$ So, we would get a $6 V$ output from the emitter follower instead of expected $7.5 V$. Could you tell me where I'm mistaken? P.S.: There is also a screenshot from simulator: • Step 4 - if you are dissatisfied with the output not being 7.5 volts, adjust one of the bias resistors. – Andy aka Feb 12 '17 at 11:01 • Thank you. It's the obvious step. =) But am I right with my statements? – vpetrigo Feb 12 '17 at 11:03 • Yes you are indeed. – Andy aka Feb 12 '17 at 11:06 • I don't understand why they point out that we should meet the condition $R_source = R_1 || R_2 \ll (\beta + 1) R_E$, but do not mentioned that it also should be $I_{B} \ll I_{div}$ – vpetrigo Feb 12 '17 at 11:11 • I think it's about input impedance/resistance (ie not to decrease it by choosing smaller bias resistors). – Rohat Kılıç Feb 12 '17 at 11:30 The book's approach is an estimation, on the assumption that the base current is negligible - which turns out to be not so great of an assumption here, as you found out. your approach is more precise, but made the same mistake right here: The maximum current through the divider without connected load is: Idiv=VccR1+R2=15280⋅103≈54μA you didn't factor in the equivalent resistance on the base side: at 8.1v @ 10ua, that's equivalent to a 810K resistor (approximately Re * beta - see note below). So the lower resistor R2 is paralleled by a 810K resistor. Once you factor that in your calculation, it will be alright. Most people don't take that approach. for example, I typically set the current through R1/R2 to be 10x of the base current. That yielded R1 + R2 = 15v / 100ua = 150K. and go from there for R1/R2 individually. the 10x is picked to make sure that the base current is indeed negligible. what it shows you is that 1) don't put too much stock in any book; and 2) don't take estimation too seriously. many times, good enough is indeed good enough. edit: note: for better approximation, some people would assume that the lower resistor is by-passed by a equivalent resistance of beta * Re -> 750K in this case, vs. 810K the real one calculated earlier. This approach works fairly well as an approximation. • Thank you for pointing me out! I've forgotten to take into account the load impedance of emitter follower. – vpetrigo Feb 12 '17 at 18:48 Quote dannyf: "the 10x is picked to make sure that the base current is indeed negligibe." Yes - of course, correct. However, for a better understanding I like to give some additional explanation. The background of the commonly applied design criterion (divider current >> base current) is the fact that the ratio Ic/Ib=beta has very large tolerances. That means: For a selected collector current Ic we do not know the actual base current. Hence, we select a design in which the base current (and its large tolerances) play a minor role only. As a consequence, we have a "relatively" low-resistive voltage divider (if compared with the input resistance at the base node) providing a "stiff" voltage at the base (as "stiff" as reasonable) - nearly independent on beta uncertainties. (By the way: This works because the collector current is detrmined by the base-emitter voltage and not by the current Ib).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8722198605537415, "perplexity": 1189.8014092393523}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027316021.66/warc/CC-MAIN-20190821131745-20190821153745-00290.warc.gz"}
https://www.gregschool.org/quantum-mechanics/2017/5/15/periodic-wavefunctions-that-is-ones-that-come-back-to-themselves-have-quantized-eigenvalues-of-momenta-and-angular-momenta-2k3el-yckar
# PERIODIC WAVEFUNCTIONS HAVE QUANTIZED EIGENVALUES OF MOMENTA AND ANGULAR MOMENTA Summary The wavefunction $$\psi(L,t)$$ is confined to a circle whenever the eigenvalues L of a particle are only nonzero on the points along a circle. When the wavefunction $$\psi(L,t)$$ associated with a particle has non-zero values only on points along a circle of radius $$r$$, the eigenvalues $$p$$ (of the momentum operator $$\hat{P}$$) are quantized—they come in discrete multiples of $$n\frac{ℏ}{r}$$ where $$n=1,2,…$$ Since the eigenvalues for angular momentum are $$L=pr=nℏ$$, it follows that angular momentum is also quantized. Proof The wavefunction $$\psi$$ is a tensor field defined at each point along the circle: at a particular point the value of this field reads “$$\psi$$” independent of the coordinate value used to label that point. Notice that the coordinate values $$x$$ and $$x+(n)(2πr)$$ label the same point. It follows that $$\psi(x,t)=\psi(x+2πrn,t)$$ since the value of a scalar field must be the same at a particular point. If the wavefunction is confined to a circle, then this condition that the wavefunction must “come back to itself” applies to any wavefunction corresponding to any state. The eigenfunction $$\psi_p$$ (associated with momentum) must satisfy $$\psi_p(x,t)=\psi_p(x+2πr,t).$$ It is fairly straightforward to show that $$\psi_p$$ is always given by $$\psi_p(x,t)=\psi_p(x)=Ae^{ipx/ℏ}$$. The momentum eigenvectors $$|\psi_p⟩$$ are those special vectors which satisfy the equation $$\hat{P}|\psi_p⟩=p|\psi_p⟩.$$ We can rewrite this equation in terms of the wavefunction as $$-iℏ\frac{∂}{∂x}\psi_p(x,t)=p\psi_p(x,t).$$ Let’s multiply both sides of Equation # by $$i/ℏ$$ to obtain $$\frac{∂}{∂x}\psi_p(x,t)=\frac{ip}{ℏ}\psi_p(x,t)⇒\frac{d}{dx}\psi_p(x)=\frac{ip}{ℏ}\psi_p(x).$$ The solution to Equation # is given by $$\psi_p (x)=Ae^{ipx/ℏ}.$$ If we substitute this result into Equation # we get $$Ae^{ipx/ℏ}=Ae^{ip(x+2πr)/ℏ}=Ae^{ipx/ℏ}e^{2πrip/ℏ)}.$$ We can now use algebra to determine what values of $$p$$ satisfy Equation #: $$e^{2πrnip/ℏ}=1⇒2πrip/ℏ=(n)2π\text{ (n=1,2,…)}$$ $$p=n\frac{ℏ}{r}$$ $$L=nℏ$$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9812222123146057, "perplexity": 187.84261392603906}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315174.57/warc/CC-MAIN-20190820003509-20190820025509-00492.warc.gz"}
https://math.stackexchange.com/questions/1326378/asymptotic-solution-for-p-adic-order-of-n-for-all-primes
# Asymptotic solution for p-adic order of n! for all primes Let v_p(n) denote the p-adic valuation of n. The number of times that a prime p appears in all numbers <= n is given by: $$\nu_p(n!) = \sum_{i=1}^{\infty} \left\lfloor \frac{n}{p^i} \right\rfloor,$$ where [ x ] is the floor function of x For example I have calculated this expression for some primes in a pull n = 10^6. $$\nu_2(10^6!) = 999993$$ $$\nu_3(10^6!) = 499993$$ $$\nu_5(10^6!) = 249998$$ $$\nu_7(10^6!) = 166664$$ and so on My question Is there any asymptotic solution for $$\nu_p(n!)$$ I would need a general expression for doing an stochastic model. Thank you • $\nu_p(n!)=(n-s_p(n))/(p-1)$ where $s_p(n)$ denotes the sum of the $p$-ary digits of $n$. – user72870 Jun 15 '15 at 16:14 Let $k=\lfloor\log_pn\rfloor$. The formula in the question gives an upper bound $$\nu_p(n!)\le\sum_{i=1}^\infty\frac n{p^i}=\frac n{p-1},$$ and a lower bound $$\nu_p(n!)\ge\sum_{i=1}^k\left(\frac n{p^i}-1\right)\ge\frac n{p-1}-1-k,$$ hence asymptotically, $$\nu_p(n!)=\frac n{p-1}+O(\log_pn).$$ With a bit more care, one can compute $$\frac n{p-1}-\log_p(n+1)\le\nu_p(n!)\le\frac{n-1}{p-1},$$ where both bounds are tight: the upper bound is attained when $n$ is a power of $p$, and the lower bound is attained when $n$ is one less than a power of $p$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9885985851287842, "perplexity": 129.26139690145544}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038088471.40/warc/CC-MAIN-20210416012946-20210416042946-00581.warc.gz"}
https://en.academic.ru/dic.nsf/enwiki/592555/Quadratic_residuosity_problem
 The quadratic residuosity problem in computational number theory is the question of distinguishing by calculation the quadratic residues in modular arithmetic for a modulus "N", where "N" is a composite number. This is an important consideration in contemporary cryptography. (A note on terminology: mathematicians say "residuacity": "residuosity", something of a malapropism, has been adopted by most cryptographers.) Given the specific case of "N" the product of distinct large prime numbers "p" and "q", the structure of the squaring :"a" &rarr; "a"2 modulo "N" on the multiplicative group of invertible residues modulo "N", is as a group homomorphism with kernel a Klein group of order four. The image is therefore, roughly speaking, of size "N"/4. More precisely, it is of order: :$frac\left\{\left(p - 1\right)\left(q - 1\right)\right\}\left\{4\right\}$ If we consider the same mapping modulo "p" the kernel is of order 2, the order of the image is ("p" − 1)/2. In that case it is easy, computationally speaking, to characterise the image, since the quadratic residue symbol takes the value +1 precisely on squares. In the composite case the corresponding symbol characterises a subgroup of the residues which is too large by factor of two; that is, it rules out roughly half of the residues mod "N", while the problem as posed is to characterise a subset of size a quarter of "N". This difference constitutes the quadratic residuosity problem, in this particular but essential case of "N" having two prime factors. The working assumption is that bridging this gap, in effective computational terms, is only to be done by lengthy calculation, when quantified in terms of the size of "N". The Quadratic residuosity problem is the foundation of the Goldwasser-Micali cryptosystem. ee also * Higher residuosity problem * Computational hardness assumptions Wikimedia Foundation. 2010. ### Look at other dictionaries: • Higher residuosity problem — In cryptography most public key cryptosystems are founded on problems that are believed to be intractable. The higher residuosity problem is one such problem. This problem is easier to solve than integer factorization, so the assumption that this …   Wikipedia • Quadratic residue — In number theory, an integer q is called a quadratic residue modulo n if it is congruent to a perfect square modulo n; i.e., if there exists an integer x such that: Otherwise, q is called a quadratic nonresidue modulo n. Originally an abstract… …   Wikipedia • Decisional composite residuosity assumption — The decisional composite residuosity assumption (DCRA) is a mathematical assumption used in cryptography. In particular, the assumption is used in the proof of the Paillier cryptosystem. Informally the DCRA states that given a composite n… …   Wikipedia • Goldwasser-Micali cryptosystem — The Goldwasser Micali cryptosystem (GM) is an asymmetric key encryption algorithm developed by Shafi Goldwasser and Silvio Micali in 1982. GM has the distinction of being the first probabilistic public key encryption scheme which is provably… …   Wikipedia • Cocks IBE scheme — is an Identity based encryption system proposed by Clifford Cocks in 2001 [1]. The security of the scheme is based on the hardness of the quadratic residuosity problem. Contents 1 Protocol 1.1 Setup 1.2 Extract …   Wikipedia • Prime number — Prime redirects here. For other uses, see Prime (disambiguation). A prime number (or a prime) is a natural number greater than 1 that has no positive divisors other than 1 and itself. A natural number greater than 1 that is not a prime number is… …   Wikipedia • List of mathematics articles (Q) — NOTOC Q Q analog Q analysis Q derivative Q difference polynomial Q exponential Q factor Q Pochhammer symbol Q Q plot Q statistic Q systems Q test Q theta function Q Vandermonde identity Q.E.D. QED project QR algorithm QR decomposition Quadratic… …   Wikipedia • Computational hardness assumption — In cryptography, a major goal is to create cryptographic primitives with provable security. In some cases cryptographic protocols are found to have information theoretic security, the one time pad is a common example. In many cases, information… …   Wikipedia • List of number theory topics — This is a list of number theory topics, by Wikipedia page. See also List of recreational number theory topics Topics in cryptography Contents 1 Factors 2 Fractions 3 Modular arithmetic …   Wikipedia • Naccache–Stern cryptosystem — Note: this is not to be confused with the Naccache–Stern knapsack cryptosystem. The Naccache–Stern cryptosystem is a homomorphic public key cryptosystem whose security rests on the higher residuosity problem. The Naccache–Stern cryptosystem was… …   Wikipedia
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8984934687614441, "perplexity": 1563.0951356887902}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400189928.2/warc/CC-MAIN-20200919013135-20200919043135-00703.warc.gz"}
http://www.physicsforums.com/showthread.php?t=225055
by maryk Tags: theorem P: 4 Hi there, hopefully someone can help me I'm completely lost!! I'm trying to solve sin((pi*x)/6) >= (x/2) for o<= x <= 1 using the mean value theorem. I think I need to show that f ' (c) >= 0, however this f ' (c) is negative for some values in the interval? HELP!! P: 1,633 ok, if you are trying to prove that: $$sin\frac{x\pi}{6}\geq \frac{x}{2}, 0\leq x \leq 1$$ so let $$f(x)=sin\frac{x\pi}{6}- \frac{x}{2},$$ we notice that $$f(0)-f(1)=0$$ now we take any point on the interval $$0\leq x \leq 1$$ say $$x=\frac{1}{2}$$, then if f(x) is positive or negative, it actually will meant that f(x) is either positive or negative on the whole interval $$0\leq x \leq 1$$ $$f(\frac{1}{2})=sin\frac{\pi}{12}-\frac{1}{4}>0$$ so $$f(x)=sin\frac{x\pi}{6}- \frac{x}{2}>0=>sin\frac{x\pi}{6}\geq\frac{x}{2},$$ Well,i don't know whether this answers the requirements of the problem though. Sorry! P: 587 Quote by sutupidmath we notice that $$f(0)-f(1)=0$$ now we take any point on the interval $$0\leq x \leq 1$$ say $$x=\frac{1}{2}$$, then if f(x) is positive or negative, it actually will meant that f(x) is either positive or negative on the whole interval $$0\leq x \leq 1$$ This is not true in general, you also have to show that there are no zeros in [0,1]. It is not enough, that f vanishes at the end points. P: 1,633 Quote by Pere Callahan This is not true in general, you also have to show that there are no zeros in [0,1]. It is not enough, that f vanishes at the end points. I think we can be sure that there are no zeros on [0,1] since the function f(x) does not change sign at the endpoints. Am i right? From the IVT we know that if f(a)f(b)<0, than there exists a poing cE(a,b) such that f(c)=0. But since f(a)f(b) is not <0 in our case, can we safely assume that there is no point cE(a,b) such that f(c)=0 ?? I think we can. P: 587 I think it doesn't matter whether or not f "changes sign at the endpoints". You're right that f(a)f(b)<0 ensures the existence of an c in [a,b] with f(c)=0. The converse is not true. What you can probably easily prove is that f(a)f(b)<0 iff there is an odd number of zeros (counted with multiplicity) in [a,b]. A counterexample to your reasoning would be [a,b]=[0,10 pi], f(x)=sin(x), Then f(a)=0=f(b), but there is a bunch of zeros in [a,b]. P: 1,633 Quote by Pere Callahan I think it doesn't matter whether or not f "changes sign at the endpoints". You're right that f(a)f(b)<0 ensures the existence of an c in [a,b] with f(c)=0. The converse is not true. What you can probably easily prove is that f(a)f(b)<0 iff there is an odd number of zeros (counted with multiplicity) in [a,b]. A counterexample to your reasoning would be [a,b]=[0,10 pi], f(x)=sin(x), Then f(a)=0=f(b), but there is a bunch of zeros in [a,b]. Oh,yeah, i see now! P: 4 Thanks for your help everyone, think I need to show that it is true for every point within 0<= x <= 1, not just the endpoints and a point in the middle of the interval. I was thinking the working should start as follows: f(x) = sin ((pi*x)/6) - (x/2) and we need to show this greater than or equal to zero. f ' (x)= (pi/6)cos((pi*x)/6) - 1/2 since f(0)=0, we can rearrange the mean value theorem in the form: f(x) = x f ' (c), where f ' (c) from above is (pi/6)cos((pi*c)/6) - 1/2 for some c between 0 and 1. Therefore if i can show f ' (c) is greater than or equal to 0 for c between 0 and 1, then f(x) will also be greater than or equal to zero since we will have x times this where x is between 0 and 1. However, we are working in radians and f ' (c) is negative for c=0.6, for example. Soooo confused any ideas?? I think this is a valid argument, dont know why it wont work!!! P: 1,633 ok here it is another try to justify that my claim in post #2 holds. Using the mean value theorem we get that:$$\exists c\in(0,1)$$ such that $$f'(c)=f(1)-f(0)=0$$ so now $$f'(x)=(sin\frac{c\pi}{6}- \frac{x}{2})'=\frac{\pi}{6}cos\frac{\pi x}{6}-\frac{1}{2}$$ so $$f'(c)=\frac{\pi}{6}cos\frac{\pi c}{6}-\frac{1}{2}=0=>cos\frac{\pi c}{6}=\frac{1}{2}$$ After we solve this we get for c $$c_1=\frac{6}{\pi}arccos\frac{3}{\pi}+2k\pi$$ and $$c_2=-\frac{6}{\pi}acrcos\frac{3}{\pi}+2k\pi$$ from here since $$c\in(0,1)$$ it means that there is only one critical point in the interval (0,1), so it also means that f'(x) changes sign only once in it, if c is actually a local min or max. Now by second derivative test we see that $$f''(c)<0$$ so it means that $$c=\frac{6}{\pi}arccos\frac{3}{\pi}$$ is a maximum. so now i think that my claim that f(x)>0 on the interval (0,1) holds.Because say that there would be another point k, on the interval (0,1) such that f(k)=0, then it automatically would mean that there must be another point say m, such that it should be a local min/max on that interva. But this actually contradicts the fact that there is only once critical point, and hence only one local max on the whole interval (0,1). Hence we have the desired result that: $$f(x)=sin\frac{x\pi}{6}- \frac{x}{2}>0=>sin\frac{x\pi}{6}\geq\frac{x}{2},$$ I think this approach should work. P: 4 though when solving for c do we not get c= (6/pi) arccos (1/2) ? which equals 2 and is therefore not contained in the interval? P: 4 Sorry, ignore that last message, I lost a coefficient! lol thank you v v much for your help Related Discussions Calculus & Beyond Homework 4 Advanced Physics Homework 2 Classical Physics 1
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9507713317871094, "perplexity": 290.27449721296466}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500829916.85/warc/CC-MAIN-20140820021349-00009-ip-10-180-136-8.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/124888/are-the-eigenvalues-of-ab-equal-to-the-eigenvalues-of-ba-citation-needed?answertab=votes
# Are the eigenvalues of $AB$ equal to the eigenvalues of $BA$? (Citation needed!) First of all, am I being crazy in thinking that if $\lambda$ is an eigenvalue of $AB$, where $A$ and $B$ are both $N \times N$ matrices (not necessarily invertible), then $\lambda$ is also an eigenvalue of $BA$? If it's not true, then under what conditions is it true or not true? If it is true, can anyone point me to a citation? I couldn't find it in a quick perusal of Horn & Johnson. I have seen a couple proofs that the characteristic polynomial of $AB$ is equal to the characteristic polynomial of $BA$, but none with any citations. A trivial proof would be OK, but a citation is better. - Have you tried to find a counter example using basic 2x2 matrices? –  Marra Mar 27 '12 at 1:16 If $v$ is an eigenvector of $AB$ for some nonzero $\lambda$, then $Bv\ne0$ and $$\lambda Bv=B(ABv)=(BA)Bv,$$ so $Bv$ is an eigenvector for $BA$ with the same eigenvalue. If $0$ is an eigenvalue of $AB$ then $0=\det(AB)=\det(A)\det(B)=\det(BA)$ so $0$ is also an eigenvalue of $BA$. More generally, Jacobson's lemma in operator theory states that for any two bounded operators $A$ and $B$ acting on a Hilbert space $H$ (or more generally, for any two elements of a Banach algebra), the non-zero points of the spectrum of $AB$ coincide with those of the spectrum of $BA$. Bob, I am a little confused on your proof. How did you know to use the trick $Bv \ne 0$ to prove this? –  diimension Nov 24 '12 at 1:54 @diimension You just take an eigenvector of $AB$ and do the only calculation you can. Then it turns out that $Bv$ fulfills the eigenvector equation for $BA$, so you hope that it is not 0 and check this in the end. There's no need to know it at the start of the calculation. –  Phira Dec 21 '12 at 17:27 I know this is an ancient thread but hopefully you're still lurking out there somewhere. How come you need to hope that $\lambda\ne 0$? Doesn't the argument still work just fine in the case where $\lambda=0$? –  crf Apr 22 '13 at 12:32 @crf No -- the trouble is that when $Bv=0$, maybe $v$ is not in the range of $A$. The argument given for $\lambda\ne0$ works in Hilbert space, for example, but for $\lambda=0$ the result is not generally true there: On the infinite sequence space $\ell^2$, let $A$ be the right shift ($A(x_1,x_2,\ldots)=(0,x_1,x_2,\ldots)$, and $B$ the left shift. Then $AB(1,0,\ldots)=0$, but $BA$ is the identity. –  Bob Pego May 7 '13 at 15:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9464399814605713, "perplexity": 118.32017286241687}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00559-ip-10-147-4-33.ec2.internal.warc.gz"}
https://encyclopediaofmath.org/wiki/Epstein_zeta-function
# Epstein zeta-function Epstein $\zeta$-function A function belonging to a class of Dirichlet series generalizing the Riemann zeta-function $\zeta(s)$ (cf. also Zeta-function). It was introduced by P. Epstein [a4] in 1903 after special cases had been dealt with by L. Kronecker [a6], IV, 495. Given a real positive-definite $(n\times n)$-matrix $T$ and $s \in \mathbf{C}$, the Epstein zeta-function is defined by $$\zeta(T;s) = \sum_{\mathbf{0} \ne g \in \mathbf{Z}^n} (g^\top T g)^{-s}$$ where $g^\top$ stands for the transpose of $g$. The series converges absolutely for $\mathrm{re} s > n/2$. If $n=1$ and $T=(1)$, it equals $2\zeta(2s)$. The Epstein zeta-function shares many properties with the Riemann zeta-function (cf. [a5], V.Sect. 5, [a8], 1.4, [a9]): $$\xi(T;s) = \pi^{-s} \Gamma(s) \zeta(T;s)$$ possesses a meromorphic continuation to the whole $s$-plane (cf. also Analytic continuation) with two simple poles, at $s = n/2$ and $s=0$, and satisfies the functional equation $$\xi(T;s) = (\det T)^{-1/2} \xi\left({ T^{-1};\frac{n}{2}-s }\right) \ .$$ Thus, $\zeta(T;s)$ is holomorphic in $s \in \mathbf{C}$ except for a simple pole at $s=n/2$ with residue $$\frac{\pi^{n/2}}{ \Gamma(n/2)\sqrt{\det T} } \ .$$ Moreover, one has $$\zeta(T;0) = -1$$ $$\zeta(T;-m) = 0\ \ \text{for}\ \ m=1,2,\ldots \ .$$ It should be noted that the behaviour may be totally different from the Riemann zeta-function. For instance, for $n>1$ there exist matrices $T$ such that $\zeta(T;s)$ has infinitely many zeros in the half-plane of absolute convergence (cf. [a1]), respectively a zero in any point of the real interval $(0,n/2)$ (cf. [a8], 4.4). The Epstein zeta-function is an automorphic form for the unimodular group $\mathrm{GL}_n(\mathbf{Z})$ (cf. [a8], 4.5), i.e. $$\zeta(U^\top T u;s) = \zeta(T;s) \ \ \text{for}\ \ U \in \mathrm{GL}_n(\mathbf{Z}) \ .$$ It has a Fourier expansion in the partial Iwasawa coordinates of $T$ involving Bessel functions (cf. [a8], 4.5). For $n=2$ it coincides with the real-analytic Eisenstein series on the upper half-plane (cf. Modular form; [a5], V.Sect. 5, [a8], 3.5). The Epstein zeta-function can also be described in terms of a lattice $\Lambda = \mathbf{Z}\lambda_1 + \cdots + \mathbf{Z}\lambda_n$ in an $n$-dimensional Euclidean vector space $(V,\sigma)$. One has $$\zeta(T;s) = \sum_{0 /ne \lambda \in \Lambda} \sigma(\lambda,\lambda)^{-s} \ ,$$ where $T = (\sigma(\lambda_i,\lambda_j))$ is the Gram matrix of the basis $\lambda_1,\ldots,\lambda_n$. Moreover, the Epstein zeta-function is related with number-theoretical problems. It is involved in the investigation of the "class number one problem" for imaginary quadratic number fields (cf. [a7]). In the case of an arbitrary algebraic number field it gives an integral representation of the associated Dedekind zeta-function (cf. [a8], 1.4). The Epstein zeta-function plays an important role in crystallography, e.g. in the determination of the Madelung constant (cf. [a8], 1.4). Moreover, there are several applications in mathematical physics, e.g. quantum field theory and the Wheeler–DeWitt equation (cf. [a2], [a3]). #### References [a1] H. Davenport, H. Heilbronn, "On the zeros of certain Dirichlet series I, II" J. London Math. Soc. , 11 (1936) pp. 181–185; 307–312 [a2] E. Elizalde, "Ten physical applications of spectral zeta functions" , Lecture Notes Physics , Springer (1995) [a3] E. Elizalde, "Multidimensional extension of the generalized Chowla–Selberg formula" Comm. Math. Phys. , 198 (1998) pp. 83–95 [a4] P. Epstein, "Zur Theorie allgemeiner Zetafunktionen I, II" Math. Ann. , 56/63 (1903/7) pp. 615–644; 205–216 [a5] M. Koecher, A. Krieg, "Elliptische Funktionen und Modulformen" , Springer (1998) [a6] L. Kronecker, "Werke I—V" , Chelsea (1968) [a7] A. Selberg, Chowla, S., "On Epstein's Zeta-function" J. Reine Angew. Math. , 227 (1967) pp. 86–110 [a8] A. Terras, "Harmonic analysis on symmetric spaces and applications" , I, II , Springer (1985/8) [a9] E.C. Titchmarsh, D.R. Heath–Brown, "The theory of the Riemann zeta-function" , Clarendon Press (1986) How to Cite This Entry: Epstein zeta-function. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Epstein_zeta-function&oldid=42020 This article was adapted from an original article by A. Krieg (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9636150598526001, "perplexity": 953.1799412101593}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948868.90/warc/CC-MAIN-20230328170730-20230328200730-00535.warc.gz"}
http://mathhelpforum.com/calculus/121615-forgot-how-solve-nth-root.html
# Thread: forgot how to solve nth root 1. ## forgot how to solve nth root (a) how do you find the fifth root of -16 ( 2) ^ (1/2) ( 1+i) i found that the mod of z^5 is 4. and the argument of ( 1+i ) = (Pi /2) from there, i said that the arg of z is ( Pi / 10 + 2k Pi / 5) thus the answer for the fifth root of z would be of the form, z = 4 (cos ( Pi / 10 + 2k Pi / 5) + i sin (Pi / 10 + 2k Pi / 5) ) am i right to say that? (b) also is there a fixed method that i can do to compute questions like ( 1-i) ^ ( -3) and ( -1 + i) ^16? becos now im just doing them via expansion and im wondering if there is a shorter method that i can do to solve it.. thanks! 2. Originally Posted by alexandrabel90 (a) how do you find the fifth root of -16 ( 2) ^ (1/2) ( 1+i) What did you write here? Is it $-16\sqrt{2}(1+i)$? i found that the mod of z^5 is 4. and the argument of ( 1+i ) = (Pi /2) This is wrong if the number is what I wrote above: if $z=-16\sqrt{2}(1+i)=-16\sqrt{2}-16\sqrt{2}i$ , then $|z|=\sqrt{16^2\cdot 2+16^2\cdot2}=\sqrt{4\cdot 16^2}=2\cdot 16=32$ Also, its argument is $Arg(z)=\arctan\left(\frac{Im(z)}{Re(z)}\right)=\ar ctan 1=\frac{\pi}{4}+k\pi\,,\,\,k\in\mathbb{Z}$ So try now to solve your question with the above info...and check how come you thought it was something else! Tonio from there, i said that the arg of z is ( Pi / 10 + 2k Pi / 5) thus the answer for the fifth root of z would be of the form, z = 4 (cos ( Pi / 10 + 2k Pi / 5) + i sin (Pi / 10 + 2k Pi / 5) ) am i right to say that? (b) also is there a fixed method that i can do to compute questions like ( 1-i) ^ ( -3) and ( -1 + i) ^16? becos Oh, and please: it is "because"...common, a little university level's basic writing. now im just doing them via expansion and im wondering if there is a shorter method that i can do to solve it.. thanks! .
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8746662139892578, "perplexity": 847.2274466137395}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948593526.80/warc/CC-MAIN-20171217054825-20171217080825-00778.warc.gz"}
https://asmedigitalcollection.asme.org/TBTS/proceedings-abstract/TBTS2013/56079/V001T02A002/285489?searchresult=1
Most of previous researches of inlet turbulence effects on blade tip have been carried out for low speed situations. Recent work has indicated that for a transonic turbine tip, turbulent diffusion tends to have distinctively different impact on tip heat transfer than for its subsonic counterpart. It is hence of interest to examine how inlet turbulence flow conditioning would affect heat transfer characteristics for a transonic tip. This present work is aimed to identify and understand the effects of both inlet freestream turbulence and end-wall boundary layer on a transonic turbine blade tip aero-thermal performance. Spatially-resolved heat transfer data are obtained at aerodynamic conditions representative of a high-pressure turbine, using the transient infrared thermography technique with the Oxford High-Speed Linear Cascade research facility. With and without turbulence grids, the turbulence levels achieved are 7–9% and 1% respectively. On the blade tip surface, no apparent change in heat transfer was observed with high and low turbulence intensity levels investigated. On the blade suction surface, however, substantially different local heat transfer for the suction side near tip surface have been observed, indicating a strong local dependence of the local vortical flow on the freestream turbulence. These experimentally observed trends have also been confirmed by CFD predictions using Rolls-Royce HYDRA. Further CFD analysis suggests that the level of inflow turbulence alters the balance between the passage vortex associated secondary flow and the OverTL flow. Consequently, enhanced inertia of near wall fluid at a higher inflow turbulence weakens the cross-passage flow. As such, the weaker passage vortex leads the tip leakage vortex to move further into the mid passage, with the less spanwise coverage on the suction surface, as consistently indicated by the heat transfer signature. Different inlet end-wall boundary layer profiles are employed in the HYDRA numerical study. All CFD results indicate the inlet boundary layer thickness has little impact on the heat transfer over the tip surface as well as the pressure side near-tip surface. However, noticeable changes in heat transfer are observed for the suction side near-tip surface. Similar to the freestream turbulence effect, such changes are attributed to the interaction between the passage vortex and the OTL flow. This content is only available via PDF.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8161135315895081, "perplexity": 2846.7772103452894}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991537.32/warc/CC-MAIN-20210513045934-20210513075934-00393.warc.gz"}
http://math.stackexchange.com/questions/249268/difference-between-conditional-and-biconditional-statement
# Difference between conditional and biconditional statement So, I can see the difference between something like: A. A car is green if it is made in England. and B. A car is green if and only if it is made in England. Then, if you had a Russian-made green car, it would be true for A. but not for B. So B is a stricter form of A. I'm trying to see how I can apply this logic to the statement A function $f: A \to B$ is surjective if and only if for all $b \in B$, there exists an $a \in A$ such that $f(a) = b$. I think what this means that if it were just normally implied (if x then y), you could have a surjection without the property: $\forall b \in B, \exists a \in A: f(a) = b$; i.e. there could be another property that allows a function to be surjective. But in saying if and only if, we are ensuring that a function can only be surjective if it has this property? - You may find, however, that people take this distinction less literally in definitions than in theorems. That is, if a theorem says "If a function from $\mathbb R$ to $\mathbb R$ is continuous, it takes on all values between any two of its function values", it really means only that and leaves open the possibility that the "only if" statement doesn't hold; however, in definitions (and what you're quoting is the definition of "surjective"), "if" is often used to mean "if and only if"; that is, you may find definitions like "a number is said to be even if it is divisible by $2$", intended to mean "a number is said to be even if and only if it is divisible by $2$".
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9263256788253784, "perplexity": 166.6700826620195}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246651471.95/warc/CC-MAIN-20150417045731-00305-ip-10-235-10-82.ec2.internal.warc.gz"}
https://productioncommunity.publicmobile.ca/t5/Using-Your-Service/I-think-my-number-has-been-hijacked-HELP-HELP-HELP/m-p/404686/highlight/true
Community cancel Showing results for Search instead for Did you mean: Good Citizen / Bon Citoyen ## I think my number has been hijacked. HELP HELP HELP not sure this is where this belongs but I think someone has hijacked my number, I need help NOW my phone says my sim is not registered to the network and when I called my number someone picked up on the other end that obviously wasn't me, this needs attention NOW. Good Citizen / Bon Citoyen ## not sure this is where this belongs but I think someone has hijacked my number, I need help NOW my phone says my sim is not registered to the network and when I called my number some **bleep** picked up on the other end that obviously wasn't me, this needs attention NOW. Town Hero / Héro de la Ville ## Re: not sure this is where this belongs but I think someone has hijacked my number, I need help NOW Did you talk to the person? Can you log into your account?https://selfserve.publicmobile.ca/ When was the last time you used your phone successfuly? Mayor / Maire ## Re: I think my number has been hijacked. HELP HELP HELP @s3rjOnce the number has been ported out, there isn't anything you or Public Mobile can do about the number. The immediate thing is to disable all your 2FA authentication that relies on the number, eg banks etc. Reset all your passwords and myabe put a credit watch alert with Equifax etc. Do check your selfserve account.. can you still get in? You can contact the moderators but be expect to wait 48 hours to hear back from them. Good Citizen / Bon Citoyen ## Re: not sure this is where this belongs but I think someone has hijacked my number, I need help NOW the moment I said a single word he hung up and I think he turned his phone off, then I think he changed my password I tried to change the password but I think he changed it first, but the sim card number is not the same as the one I have in my phone right now, I need this fixed NOW. I used my phone not but an hour earlier. Mayor / Maire ## Re: not sure this is where this belongs but I think someone has hijacked my number, I need help NOW @s3rjAs I mentioned in your other thread, you should contact the moderators.. but expect to hear back from them in up to 48 hours. Timely techical support is all we gave up for a few bucks off a month. Honestly there are more immediate fires you should put out (changing your banking password, have another number ready etc).. than try chasing down this number, while the person is impersonating you. The ship has already sailed unfortunately. Town Hero / Héro de la Ville ## Re: not sure this is where this belongs but I think someone has hijacked my number, I need help NOW you have to send the message to moderators, and put urgent in the title https://productioncommunity.publicmobile.ca/t5/notes/composepage/note-to-user-id/22437 Town Hero / Héro de la Ville ## Re: not sure this is where this belongs but I think someone has hijacked my number, I need help NOW you may want suspend your credit card if you have autopay too Mayor / Maire ## Re: not sure this is where this belongs but I think someone has hijacked my number, I need help NOW @oglatYou cannot suspend autopay without selfserve access. Town Hero / Héro de la Ville ## Re: not sure this is where this belongs but I think someone has hijacked my number, I need help NOW @GinYVR wrote: @oglatYou cannot suspend autopay without selfserve access. i meant call credit card company and suspend it temporarily
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.864253580570221, "perplexity": 3176.4319933662155}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703538226.66/warc/CC-MAIN-20210123160717-20210123190717-00426.warc.gz"}
https://www.physicsforums.com/threads/loc-max-min-increase-decrease.36766/
# Loc Max, Min, increase & decrease! 1. Jul 26, 2004 ### rumaithya Hello, I have a question about these things What is the local Max, local Min, when is the function increasing and decreasing! the function is: G(x)= x - 4 sqrt[x] 2. Jul 26, 2004 ### HallsofIvy Why do you have questions about them? Is this homework? If so, it should be posted in the homework section and you should show what you have done so we will know what kind of hints will help. Here, I suggest that you write the function as G(x)= x- 4x2 (Is [x] just "parentheses" or "greatest integer function"? I'm assuming it's just parens.) The maximum value will occur where G'= 0 or G' does not exist (look carefully at x=0). The function is increasing where G'> 0 and decreasing where G'< 0. If this is really x- 4 sqrt("greatest integer less than or equal to x"), then it is not differentiable. I would recommend you graph it carefully. 3. Jul 26, 2004 ### gazzo and a local minima when G ' ' > 0 and local maxima when G ' ' < 0, point of infelction when G ' ' = 0 4. Jul 26, 2004 ### ShawnD First find the domain of the function. You know the function is only real when x >= 0 (or at least in my reality ) Secondly, find the derivative. I got this as the derivative: $$1 - 2x^{\frac{-1}{2}}$$ Just by looking at the equation you were given, you know G(x) is going to be negative at low x values. Since it starts decreasing, the local maximum is 0. To find the local minimum, set the derivative G'(x) equal to 0, solve for x, then fill that x value into your original formula G(x). To find when the function is increasing or decreasing, substitute x values into the derivative. Sub in an X value slightly less than where the derivative equals 0, then sub in an X value slightly more than where the derivative equals 0. Local Max G(x) = 0 Local Min G(x) = -4 Decreasing when 0 > x > 4 Increasing when 4 > x > infinity 5. Jul 26, 2004 ### Zurtex Not always. For example G = x^6, at x = 0 there is a minimum yet G ' ' = 0 at x = 0. 6. Jul 26, 2004 ### ShawnD That's still an inflection point though. 7. Jul 26, 2004 ### Zurtex It is? I seem to have the wrong idea on what an inflection is then, could you please explain. 8. Jul 26, 2004 ### ShawnD The definition is somewhat subjective. Inflection is sometimes defined as simply when the second derivative is 0, but it can also be when concavity changes. For X^6, the second derivative is 0, but the concavity does not change. Depends on definition I guess. 9. Aug 13, 2004 ### mathwonk Interesting point. In real calculus I am used to defining an inflection point as one where curvature changes sign, but in complex calculus where that presumably makes less sense, I define inflection point as a point where the tangent line has order of contact higher than two, hence in this case where the second derivative vanishes also. I never realized before I am using different definitions in different settings.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9249005317687988, "perplexity": 918.985821982415}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743216.58/warc/CC-MAIN-20181116214920-20181117000920-00113.warc.gz"}
https://socratic.org/questions/what-is-the-effect-of-population-growth-on-gdp
Environmental Science Topics # What is the effect of population growth on GDP? Nov 6, 2016 There is no definite answer, but it may cause an increase in GDP due to an increase in the labour force. #### Explanation: In economics, labour is a factor of production and with an increase in the labour force, due to population growth, the total output may increase causing the GDP to increase. The wages for labour may also decrease due to an abundance of labour, this would allow the cost of production to decrease. Thus the producer may choose to employ more people and increase production. However, the increase in GDP would be a long run effect as people are not considered part of the labour force until the age of approximately 15. The economy may not have enough available jobs for the population, which would cause the unemployment rate to increase. Meaning an increase in population does not always result in growth in GDP. ##### Impact of this question 3702 views around the world
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.875933051109314, "perplexity": 788.6125635832034}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583875448.71/warc/CC-MAIN-20190122223011-20190123005011-00238.warc.gz"}
https://cstheory.stackexchange.com/questions/36231/problems-in-nc-not-known-to-lie-in-nc2
# Problems in NC not known to lie in NC2 Are there interesting problems that are in $\mathsf{NC}$ but not known to be in $\mathsf{NC^{2}}$? In the paper 'A Taxonomy of Problems With Fast Parallel Algorithms', Cook mentions that MIS was known to only be in $\mathsf{NC^{5}}$ but this has since been brought down to $\mathsf{NC^{2}}$. I am wondering if there are any other problems with polylog-depth parallel algorithms where we seem to be stuck on improving the depth. To narrow down even further, are there any problems in $\mathsf{NC^{2}}$ that are not known to be in $\mathsf{AC^{1}}$ or $\mathsf{DET}$? • See this question and Josh's answer to it. – Kaveh Jul 21 '16 at 20:30 • I missed that completely Kaveh---thanks! The answer's last paragraph on $\mathsf{NL}=\mathsf{coNL}$ and the corresponding hierarchy collapse gives useful intuition for the state of $\mathsf{NC}$. – xal Jul 21 '16 at 21:25 • I actually was just wondering about your final question; I think it would be worth posting as a separate question (since it is technically a different question, and independent from the question in your title). xal, would you be open to posting the question of problems in $\mathsf{NC}^2$ not known to be in $(\mathsf{AC}^1 \cup \mathsf{DET})$ as a separate question? And @Kaveh, what do you think about doing so from a procedural perspective? – Joshua Grochow Dec 20 '17 at 19:32 • @Josh, I don't see any problem with doing so. We have asked authors to split the questions into separate posts before. – Kaveh Dec 21 '17 at 0:29 • Thanks for asking Josh, I split the question here: cstheory.stackexchange.com/q/39831/40340 – xal Dec 22 '17 at 5:30 Disclaimer: I'm not an expert in fast parallel algorithms, hence the probability that I missed more recent results that put the problems I mention in lower levels of the $$\mathsf{NC}$$ hierarchy is non-negligible. If you observe that it is the case, please tell me and I'll update my answer. • The report Parallel Algorithms for Depth-First Search discusses known parallel algorithms for DFS on various types of graphs. The list given on pages 9-10 indicates several algorithms in $$\mathsf{NC} \setminus \mathsf{NC}_2$$, such as DFS for planar undirected graphs, or in $$\mathsf{RNC} \setminus \mathsf{RNC}_2$$, such as DFS for general undirected graphs. • With a quick search, I could not find papers improving over the parallel algorithms for sparse multivariate polynomial interpolation over finite fields of this paper, which is in $$\mathsf{NC}_3$$. However, several papers that could possibly have been relevant were behind a paywall. • Computing all maximal cliques in a graph is in $$\mathsf{NC} \setminus \mathsf{NC}_2$$ when the number of maximal clique is polynomially bounded, according to this paper. • The maximal path problem seems to be in $$\mathsf{NC}_5$$ for general (undirected) graphs, I've not found a faster parallel algorithms without restrictions on the underlying graph. Other potential candidates might include algorithms for finding perfect matchings in specific types of graphs, or algorithms for finding a maximal tree cover in arbitrary graphs (e.g. this paper mentions a randomized polytime algorithms in parallel time $$O(\log^6n)$$). This paper also mention solving classes of CSPs problems that arise in computer vision application, in parallel time $$O(\log^3n)$$. • Interesting! Do you know if any of these are complete (or conjectured to be complete) for these higher levels of the NC hierarchy? It'd be nice to have such natural examples on hand. – Joshua Grochow Sep 18 '18 at 17:25 • Unfortunately I have no idea about that, the papers I list above do not mention anything of the kind (as far as I can see). All of this is very far from my area of expertise; I just did a literature search to answer OP's question since I found it very interesting, but my limited knowledge does not give me any clear intuition about the hardness of these problems. – Geoffroy Couteau Sep 18 '18 at 17:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 8, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8444015979766846, "perplexity": 354.54019931296887}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178360853.31/warc/CC-MAIN-20210228115201-20210228145201-00317.warc.gz"}
https://www.physicsoverflow.org/23139/correlation-functions-of-complex-operators
# Correlation functions of complex operators + 4 like - 0 dislike 522 views One defines the "scaling dimension" (as opposed to "engineering dimension") of an operator $\cal{O}$ as $[\cal{O}]$ such that if $\cal{O}(t^{-1}x) = t^{[\cal{O}]}\cal{O}(x)$ then the Lagrangian in which $\cal{O}$ appears would be scale invariant. • Unlike for "engineering dimensions" it seems that the value of scaling dimensions (even classically!) can't be derived from just looking at the operator but one seems to need to know the Lagrangian in which it appears so that the "right" $[\cal{O}]$ can be assigned to preserve scale-invariance. For example - how else does one explain that the "engineering dimension" of $m^2\phi$ is $3$ whereas its "scaling dimension" is $1$? (same as that of $\phi$) (..the above obviously follows if I think of the term to be occurring in a $2+1$-dimensional Lagrangian and ask as to what should the scaling dimensions be so that the Lagrangian is scale-invariant..but something doesn't look very intuitive..) • I would like to know what is the special difficulty that is faced in defining $2-$point correlation functions of $\cal{O}$ if it is real? (..as opposed to when they are complex like in the next question - thought not that obvious either!..) • For complex $\cal{O}$ it "follows" that $<\cal{O}(x)\cal{O}^*(y)> \sim \vert x - y \vert ^{-2[ \cal{O}]}$ It is clearly consistent with definitions of the scaling dimension but is there a "derivation" for this? I have often seen the statement that the above short-distance behaviour follows from "reflection positivity" (..ala Wightman axioms..) I would like to know of some explanations. This post imported from StackExchange MathOverflow at 2014-09-01 11:22 (UCT), posted by SE-user Anirbit asked Jan 20, 2012 retagged Sep 1, 2014 Your definition of scaling dimension does not agree with the standard definition in quantum field theory and conformal field theory. The standard definition is that the "engineering" dimension is the dimension as determined by dimensional analysis of the Lagrangian, whether the Lagrangian is scale invariant or not, while the full "scaling dimension" is determined by the exact two point function (including quantum corrections). This is discussed in most QFT textbooks and a brief discussion is also available on Wikipedia under "anomalous scaling dimension." This post imported from StackExchange MathOverflow at 2014-09-01 11:22 (UCT), posted by SE-user Jeff Harvey @Jeff Delighted to see a reply from you! Let me give my references, in case I am misreading something. My definition of "scaling dimension" is what is discussed on the first page of this lecture by Witten, math.ias.edu/QFT/fall/wittn2.ps What he calls as just dimensions on the first page here is what he seems to also call scaling dimension in the discussion on the first two pages of the next lecture, math.ias.edu/QFT/fall/wittn3.ps (..as say very clearly stated just above remark 2 on the second page of my second link..)I thought my terminology is consistent with these. This post imported from StackExchange MathOverflow at 2014-09-01 11:22 (UCT), posted by SE-user Anirbit @Jeff Ofcourse all these definitions are classical and clearly when quantized one would get new definitions of dimension and hence the notion of "anomalous" dimension. As has been pointed out in the remark just above section 3.5 on page 8 of my second link. This post imported from StackExchange MathOverflow at 2014-09-01 11:22 (UCT), posted by SE-user Anirbit Please use answers only to (at least partly) answer questions. To comment, discuss, or ask for clarification, leave a comment instead. To mask links under text, please type your text, highlight it, and click the "link" button. You can then enter your link URL. Please consult the FAQ for as to how to format your post. This is the answer box; if you want to write a comment instead, please use the 'add comment' button. Live preview (may slow down editor)   Preview Your name to display (optional): Email me at this address if my answer is selected or commented on: Privacy: Your email address will only be used for sending these notifications. Anti-spam verification: If you are a human please identify the position of the character covered by the symbol $\varnothing$ in the following word:p$\hbar\varnothing$sicsOverflowThen drag the red bullet below over the corresponding character of our banner. When you drop it there, the bullet changes to green (on slow internet connections after a few seconds). To avoid this verification in future, please log in or register.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8443427681922913, "perplexity": 922.7024956969605}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891815500.61/warc/CC-MAIN-20180224073111-20180224093111-00137.warc.gz"}
https://math.stackexchange.com/questions/746508/using-cantors-intersection-theorem
# Using Cantor's intersection theorem Assume $f: X \rightarrow X$ is a continuous map where X is a compact metric space. Prove that there exists a non-empty set $A \subset X$ such that $f(A) = A$. (Hint: Set $F_1 = f(X), F_{n+1} = f(F_n), ...$) So, using Cantor's intersection theorem: If ($F_n$) is a decreasing sequence of closed sets then the intersection is non-empty. I also know that f is a continuous map so I could use that (topology definition or metric space def). I'm not actually sure if each $F_n$ is closed, or how to show this. Also, if they are closed what would this non-empty intersection even show? The closest I can come to is that using the sequential continuity property - one could possibly show that $X, f(X), f(f(X)), ...$ converges to $A$ implies that $F_n$ converges to $f(A)$. Again, just seeing what I have really. Any help? • Hint: the image of a compact set under a continuous map is compact. – Callus Apr 9 '14 at 10:19 • @Callus Hm, thanks but I'm still unsure of the next step. – McT Apr 9 '14 at 10:41 Set $F_1=X$. Then $F_1$ is compact. Set $F_2=f(F_1)=f(f(X))$. Then $F_2$ is compact because continuous image of a compact set is compact and also $F_2\subset F_1=X$. $F_3=f(F_2)$ and $F_3= f(F_2)\subset f(F_1)=F_2$ By induction prove that there exists a decreasing sequence ($F_n$) of compact sets. Then $\bigcap_{n} F_n=A\neq \emptyset$. Then $A=\bigcap_{n+1} F_{n+1}=\bigcap_{n} f(F_n)=f(\bigcap_{n}F_n)=f(A)$. • How do you know that the intersection contains only one point? – Martin Sleziak Apr 9 '14 at 12:38 • @MartinSleziak sorry. i got confused(i'm sleepy).i corrected it – Haha Apr 9 '14 at 12:44 • Ohhh. This is quite an interesting result actually! – McT Apr 9 '14 at 13:24 • Why can we conclude $\cap_{n}f(F_{n})=f(\cap_{n}F_{n})$? This is false in general. – T. Eskin Apr 10 '14 at 4:42 • @ThomasE. $f$ is continuous – Haha Apr 10 '14 at 9:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9568857550621033, "perplexity": 346.5516989247618}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525004.24/warc/CC-MAIN-20190717001433-20190717023433-00180.warc.gz"}
https://analog.intgckts.com/noise-figure/noise-figure-of-resistor/
# Noise Figure of Resistor Consider an RF source with output resistance of driving a resistive load as shown in Figure 1. Figure 1. Noise figure computation of resistive load >> Norton equivalent representation >> Equivalent circuit for noise figure calculation RF source is directly driving the load, input and output nodes are the same. Therefore the power gain =1 Noise factor is now given by the ratio of noise power at the output to the noise power due to source. (1) Alternatively, consider the input as current source and taking output as voltage across . The Norton equivalent of the circuit is shown in Figure 1. Power gain from input to output is Noise factor is given by, (2) If the load impedance() is power matched for source resistance(), then . Its conversion gain is -6dB. To minimize noise factor should me as high as possible, but for maximum power transfer from source to load. Therefore a tradeoff comes into picture between noise figure and maximum power transfer. For example a 6dB RF attenuator or pad has noise figure of 6dB and conversion gain of -6dB. If a signal enters into a attenuator or pad, then the signal is attenuated by 6dB while the noise floor remains constant. Therefore the signal to noise ratio through the pad is degraded by 6dB. The Noise figure of a passive device is same as that of the conversion gain(in dB sense). This site uses Akismet to reduce spam. Learn how your comment data is processed.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9589249491691589, "perplexity": 1545.9118404375108}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057083.64/warc/CC-MAIN-20210920161518-20210920191518-00121.warc.gz"}
https://research.abo.fi/fi/publications/determination-of-kinetics-and-equilibria-of-heterogeneously-catal
# Determination of kinetics and equilibria of heterogeneously catalyzed gas-phase reactions in gradientless autoclave reactors by using the total pressure method: Methanol synthesis Tutkimustuotos: LehtiartikkeliArtikkeliTieteellinenvertaisarvioitu ## Abstrakti Rapid methods are very valuable in the determination of the kinetic and mass transfer effects for heterogeneously catalyzed reactions. The total pressure method is a classical tool in the measurement of the kinetics of gas-phase reactions, but it can be successfully applied to the kinetic measurements of gas-phase processes enhanced by solid catalysts. A general theory for the analysis of heterogeneously catalyzed gas-phase kinetics in gradientless batch reactors was presented for the case of intrinsic kinetic control and combined kinetic-diffusion control in porous catalysts. The concept was applied to gas-phase synthesis of methanol from carbon monoxide and hydrogen on a commercial copper-based catalyst (CuO/ZnO/Al2O3 R3-12 BASF). The reaction temperature was 180–210 °C and the initial total pressure was varied between 11 and 21 bar in a laboratory-scale autoclave reactor equipped with a rotating basket for the catalyst particles. The initial molar ratios CO-to-H2 were approximately 1:2, 1:3 and 1:4. The experimental data from methanol synthesis were compared with numerical simulations and a good agreement between the experiments and model simulations was achieved. The predicted equilibrium agrees with previously reported values. Alkuperäiskieli Ei tiedossa – Chemical Engineering Science https://doi.org/10.1016/j.ces.2019.115393 Julkaistu - 2019 A1 Julkaistu artikkeli, soviteltu ## Keywords • Methanol synthesis
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8518854379653931, "perplexity": 4370.002284890648}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359065.88/warc/CC-MAIN-20211130171559-20211130201559-00478.warc.gz"}
https://www.bartleby.com/solution-answer/chapter-9-problem-966qp-general-chemistry-standalone-book-mindtap-course-list-11th-edition/9781305580343/write-lewis-formulas-for-the-following-ions-a-ibr2-b-clf2-c-cn/840872b0-98d2-11e8-ada4-0ee91056875a
Chapter 9, Problem 9.66QP ### General Chemistry - Standalone boo... 11th Edition Steven D. Gammon + 7 others ISBN: 9781305580343 Chapter Section ### General Chemistry - Standalone boo... 11th Edition Steven D. Gammon + 7 others ISBN: 9781305580343 Textbook Problem # Write Lewis formulas for the following ions: a IBr2+ b ClF2+ c CN− (a) Interpretation Introduction Interpretation: The Lewis formula for given molecules have to be drawn. Concept introduction: Lewis structure, otherwise known as Lewis dot diagrams or electron dot structures that show the bond between atoms and lone pairs of electrons that are present in the molecule.  Lewis structure represents each atom and their position in structure using the chemical symbol.  Excess electrons forms the lone pair are given by pair of dots, and are located next to the atom. Explanation The Lewis formula for IBr2+ molecule The better Lewis formula can be given below The outer most shell electrons is = 7+(2×7)1=20 then the IBr2+ molecule skeleton is (b) Interpretation Introduction Interpretation: The Lewis formula for given molecules have to be drawn. Concept introduction: Lewis structure, otherwise known as Lewis dot diagrams or electron dot structures that show the bond between atoms and lone pairs of electrons that are present in the molecule.  Lewis structure represents each atom and their position in structure using the chemical symbol.  Excess electrons forms the lone pair are given by pair of dots, and are located next to the atom. (c) Interpretation Introduction Interpretation: The Lewis formula for given molecules have to be drawn. Concept introduction: Lewis structure, otherwise known as Lewis dot diagrams or electron dot structures that show the bond between atoms and lone pairs of electrons that are present in the molecule.  Lewis structure represents each atom and their position in structure using the chemical symbol.  Excess electrons forms the lone pair are given by pair of dots, and are located next to the atom. ### Still sussing out bartleby? Check out a sample textbook solution. See a sample solution #### The Solution to Your Study Problems Bartleby provides explanations to thousands of textbook problems written by our experts, many with advanced degrees! Get Started
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.836113452911377, "perplexity": 4173.380854674899}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986677230.18/warc/CC-MAIN-20191017222820-20191018010320-00005.warc.gz"}
http://www.map.mpim-bonn.mpg.de/index.php?title=Bott_periodicity_and_exotic_spheres:_some_comments&oldid=6532
# Bott periodicity and exotic spheres: some comments (diff) ← Older revision | Latest revision (diff) | Newer revision → (diff) ## 1 The Bott periodicity Theorem One of the fundamental results in modern topology is Bott periodicity [Bott]. It computes the homotopy groups of the stable orthogonal or unitary groups $O= \cup_n O(n)$${{Stub}} ==The Bott periodicity Theorem== ; One of the fundamental results in modern topology is Bott periodicity [[[#{{anchorencode:Bott}}|Bott]]]{{#RefAdd:Bott}}. It computes the homotopy groups of the stable orthogonal or unitary groups O= \cup_n O(n) resp U = \cup _n U_n . The answer is very simple: For i \ge 0 one has \pi _i(O) \cong \pi_{i+8} (O) and for i>0 one has \pi_i(U) \cong \pi_{i+2}(U). Thus one only needs to know the groups for small i: For i = 1,....,8 one has \pi_{i-1}(O) \cong \mathbb Z/2, \mathbb Z/2,0,\mathbb Z,0,0,0,\mathbb Z and \pi_1(U) \cong \mathbb Z, \pi_2(U) \cong 0. Bott does not give any reference for these computations. ; == An interpretation in terms of vector bundles over spheres== ; Most applications of this result concern the interpretation of these groups as stable vector bundles over spheres. Namely if f : S^{i-1} \to O(n) is a continuous map, then one obtains a vector bundle E_f over S^i= D^i \cup D^i by taking two copies of D^i \times \mathbb R^n and by identifying (x,v) \in S^{i-1} \times \mathbb R^n in the first copy with (x, f(x)v) in the second copy. This map gives an isomorphism from \pi_{i-1}(O(n)) to the set Vect^\mathbb R_n(S^i) of isomorphism classes of n-dimensional vector bundles over S^i. I don't know who observed this first, it can for example be found in Steenrod's book [[[#{{anchorencode:Steenrod}}|Steenrod]]]{{#RefAdd:Steenrod}}. Similarly \pi_{i-1}(U(n)) corresponds to the set Vect^\mathbb C_n(S^i) of isomorphism classes of complex n-dimensional vector bundles over S^i. Passing from O(n) to O(n+1) (or U(n) to U(n+1)) by the standard inclusion corresponds to stabilization of vector bundles by taking the Whitney sum with the O= \cup_n O(n)$ resp $U = \cup _n U_n$$U = \cup _n U_n$. The answer is very simple For $i \ge 0$$i \ge 0$ one has $\pi _i(O) \cong \pi_{i+8} (O)$$\pi _i(O) \cong \pi_{i+8} (O)$ and for $i>0$$i>0$ one has $\pi_i(U) \cong \pi_{i+2}(U)$$\pi_i(U) \cong \pi_{i+2}(U)$. Thus one only needs to know the groups for small $i$$i$: For $i = 1,....,8$$i = 1,....,8$ one has $\displaystyle \pi_{i-1}(O) \cong \mathbb Z/2, \mathbb Z/2,0,\mathbb Z,0,0,0,\mathbb Z$ and $\displaystyle \pi_1(U) \cong \mathbb Z, \pi_2(U) \cong 0.$ Bott does not give any reference for these computations.; ## 2 An interpretation in terms of vector bundles over spheres Most applications of this result concern the interpretation of these groups as stable vector bundles over spheres. Namely if $f : S^{i-1} \to O(n)$$f : S^{i-1} \to O(n)$ is a continuous map, then one obtains a vector bundle $E_f$$E_f$ over $S^i= D^i \cup D^i$$S^i= D^i \cup D^i$ by taking two copies of $D^i \times \mathbb R^n$$D^i \times \mathbb R^n$ and by identifying $(x,v) \in S^{i-1} \times \mathbb R^n$$(x,v) \in S^{i-1} \times \mathbb R^n$ in the first copy with $(x, f(x)v)$$(x, f(x)v)$ in the second copy. This map gives an isomorphism from $\pi_{i-1}(O(n))$$\pi_{i-1}(O(n))$ to the set $Vect^\mathbb R_n(S^i)$$Vect^\mathbb R_n(S^i)$ of isomorphism classes of $n$$n$-dimensional vector bundles over $S^i$$S^i$. I don't know who observed this first, it can for example be found in Steenrod's book [Steenrod]. Similarly $\pi_{i-1}(U(n))$$\pi_{i-1}(U(n))$ corresponds to the set $Vect^\mathbb C_n(S^i)$$Vect^\mathbb C_n(S^i)$ of isomorphism classes of complex $n$$n$-dimensional vector bundles over $S^i$$S^i$. Passing from $O(n)$$O(n)$ to $O(n+1)$$O(n+1)$ (or $U(n)$$U(n)$ to $U(n+1)$$U(n+1)$) by the standard inclusion corresponds to stabilization of vector bundles by taking the Whitney sum with the $1$$1$-dimensional trivial bundle. By a general position argument the stabilization map is a bijection if $n >i$$n >i$ ($n>2i$$n>2i$ in the complex case). If $n>i$$n>i$ (or $n > 2i$$n > 2i$ in the complex case) one calls such a bundle a stable vector bundle. Actually the $n$$n$-dimensional vector bundles over $S^i$$S^i$ (not over a general space) form an abelian group, where the sum is given by a connected sum of vector bundles One choses a trivialization of the vector bundles over an open disk and identifies the resulting boundaries. Bott's theorem implies that for $i>0$$i>0$ not equal to $1,2,4,8$$1,2,4,8$ mod $8$$8$ all real vector bundles of dimension $n >i$$n >i$ over $S^i$$S^i$ are trivial, that for $i=1,2$$i=1,2$ mod $8$$8$ there are precisely $2$$2$ such bundles and for $i =0$$i =0$ mod $4$$4$ there are countably many such bundles. \\ Remark: {\em I find it remarkable that Bott doesn't mention the relation to vector bundles. Actually Bott does not say a single word, why his result is interesting. He obviously assumes that a reader finds the problem to determine the homotopy groups of such fundamental objects like the stable orthogonal or unitary group interesting in itself and he is of course right. Whether he has foreseen that it is such a fundamental result would be interesting to know, perhaps his friends Atiyah and Hirzebruch can comment on this.}; ## 3 The dates and dates of background papers The paper with complete proofs appeared in September 1959, it was submitted November 1958. An announcement containing the above statements appeared in 1957 [Bott1957]. The methods of the proof were developed in several earlier papers [Bott1954] \cite {Bott 1956}.; ## 4 The role of Bott periodicity for Kervaire-Milnor's paper Remark: {\em I hope there will be many articles in the atlas explaining more or less immediate applications of the periodicity theorem to manifolds. In this article I would like to explain, which role the theorem plays in the paper mentioned in the title [Kervaire/Milnor]. This paper appeared May 1963, and was submitted April 1962. By that time the periodicity theorem must have been a standard tool in topology. The fact that it is comparatively easy to determine the stable vector bundles over spheres (and so over any space homotopy equivalent to a sphere) suggests to ask for a smooth manifold homotopy equivalent to a sphere, a homotopy sphere, whether its stable tangent bundle (meaning one adds the trivial line bundle to the tangent bundle) is the same as for the ordinary sphere, namely trivial. If not, the homotopy spheres cannot be diffeomorphic to the standard sphere.}\\ Before Bott's theorem one probably had no chance to answer the question whether the stable tangent bundle of a homotopy sphere is trivial, with it, it is in half of the cases a triviality, since for $i = 3, 5,6,7$$i = 3, 5,6,7$ mod $8$$8$ there is no non-trivial stable vector bundle over $S^i$$S^i$. The remaining cases are not so easy, one needs a way to decide whether two stable vector bundles over $S^i$$S^i$ are isomorphic. Let's begin with the case $i = 4s$$i = 4s$. \\ Remark: {\em In the case $i=4s$$i=4s$ one has an invariant for stable vector bundles $E$$E$, namely the Pontrjagin classes $p_s(E)\in H^{4s}(S^i)$$p_s(E)\in H^{4s}(S^i)$. It turns out that this map is a homomorphism from the stable vector bundles over $S^{4s}$$S^{4s}$ to $H^{4s }(S^{4s})$$H^{4s }(S^{4s})$ which by choosing an orientation we identify with $\mathbb Z$$\mathbb Z$. Thus we have a homomorphism from a group isomorphic to $\mathbb Z$$\mathbb Z$ to $\mathbb Z$$\mathbb Z$, and if it is non-trivial it is an injection implying that two stable vector bundles over $S^{4s}$$S^{4s}$ are isomorphic if and only their Pontrjagin classes $p_s$$p_s$ agree. To show that the homomorphism is non-trivial (for $s>0$$s>0$) one needs a single example where this is the case. Again periodicity helps which reduces the problem to the case of bundles over $S^4$$S^4$ and $S^8$$S^8$, where one can take the tautological bundle over $S^4$$S^4$ considered as the projective line over the quaternions or over $S^8$$S^8$ considered as the projective line over the Cayley numbers. }\\ What I described in the remark is not mentioned in Kervaire-Milnor's paper. They proceed slightly differently. They refer to obstruction theory, a theory which can be used to decide whether a vector bundle is trivial. They say (with reference to earlier papers by Kervaire) that the obstruction class and the Pontrjagin class $p_s$$p_s$ are proportional by a non-zero factor and so, if the Pontrjagin class is trivial, the bundle is trivial. They finish the argument that the stable tangent bundle of a $4s$$4s$-dimensional homotopy sphere $\Sigma$$\Sigma$ is trivial by one sentence "But by the Hirzebruch signature theorem the Pontrjagin class $p_k(\Sigma)$$p_k(\Sigma)$ is a multiple of the signature $\sigma (\Sigma)$$\sigma (\Sigma)$, which is zero since $H^{2s}(\Sigma) = 0$$H^{2s}(\Sigma) = 0$."\\ Remark: {\em Let me make a short comment on this sentence. It shows that the signature theorem, which was published by Hirzebruch in his 1956 book, is at the time when Kervaire and Milnor wrote their paper so standard, that neither a reference to Hirzebruch's book is needed (which appears in the list of references but no reference is given at this place) nor the formula of the signature theorem is repeated. But one sees that besides Bott's theorem another big theorem is needed to argue here.}\\ A non-trivial theorem is also needed in the remaining cases where $i = 1,2$$i = 1,2$ mod $8$$8$ is. The cases $1$$1$ and $2$$2$ are trivial, but the higher dimensions not. Here the obstruction class sits in $\pi_{i-1}(O) = \mathbb Z/2$$\pi_{i-1}(O) = \mathbb Z/2$ (by Bott's theorem) and so again one has to find a way to distinguish the non-trivial element form $0$$0$. There is a homomorphism introduced by Hopf-Whitehead, the $J$$J$-homomorphism, from $\pi_{i-1} (O)$$\pi_{i-1} (O)$ to the stable homotopy groups $\pi_{i-1}^s$$\pi_{i-1}^s$ of spheres. Rohlin has shown that under this homomorphism the obstruction class vanishes (the authors don't give a reference to a paper by Rohlin but refer instead to an earlier paper by them \cite {Kervaire-Milnor 1958}). The argument, that also in the remaining case the stable tangent bundle is trivial is finished by applying a recent deep theorem by Adams \cite {Adams} saying that this $J$$J$-homomorphism is injective. Thus we summarize: Theorem 4.1. (Theorem 3.1 [Kervaire-Milnor]) For all homotopy spheres $\Sigma$$\Sigma$ the stable tangent bundle is trivial. ; {\em After repeating the role of Bott periodicity in the proof of this theorem I would like to comment a bit on the role of this theorem in the paper of Kervaire and Milnor and in the further development of analyzing smooth structures on a topological manifold, here the sphere. Note that - as Kervaire and Milnor mention - a homotopy sphere is, if the Poincaré conjeture is assumed, homeomorphic to the sphere and so the diffeomorphism classes of homotopy spheres would correspond to the diffemorphism classes of smooth structures on the sphere. The Poincaré conjecture was proven by Smale, Stallings and Zeeman (references) for $i \ge 5$$i \ge 5$ already before the paper by Kervaire and Milnor, and later by Freedman [Freedman] in dimension $4$$4$ and by Perelmann [Perelman] in dimension $3$$3$. On the one hand the message of the theorem is negative, one cannot use the stable tangent bundle to distinguish different smooth structures on the sphere. This leads to the interesting question, whether the same is true for arbitrary manifolds. Later in the sixties some very deep theorems in this direction were proved. On the other hand the theorem allows to develop a method to study the different smooth structures on spheres or equivalently the homotopy spheres in dimension $>4$$>4$, a method which in the following years was generalized to arbitrary manifolds. This is not the place to describe this method, which is called surgery theory. But the following can be said. Suppose that a topological manifold $M$$M$ is given. Then one can ask whether $M$$M$ admits a smooth structure. A necessary condition is that $M$$M$ has a tangent bundle, and since this is easier to analyze and essentially the same, that it has a stable tangent bundle. It turns out that in a certain sense, which should be made precise elsewhere, this is the only obstruction, again in dimension $>4$$>4$, but false in dimension $4$$4$ by the fundamental work of Donaldson, and again true in dimension $<4$$<4$ by different methods. If we assume that $M$$M$ has a smooth structure, we choose one and compare all possible other smooth structures with this. Then again the stable tangent bundle plays the deciding role (in dimension $>4$$>4$). Roughly speaking the different smooth structures on $M$$M$ correspond to the different ways to impose a stable tangent bundle on $M$$M$. Thus the understanding of stable vector bundles is what at the end is needed. This is the content of a very important theory, called $K$$K$-theory, invented by Atiyah and Hirzebruch [Atiyah-Hirzebruch]. This is a generalized cohomology theory (meaning that the Eilenberg-Steenrod axioms for ordinary cohomology are fulfilled except the dimension axiom). This is the first generalized cohomology theory and - besides stable homotopy - the most important one. To construct it is up to a certain point rather elementary. But then one comes at a point where the arguments are highly non-trivial and the central tool, which one has to apply, is Bott perodicity. It does not only give the fundamental input for completing the proof that it is a generalized homology theory, it also is the central tool for all computations. In particular, in the few cases where one can give detailed information about the different smooth structures on a manifold, always Bott's theorem is in the background - like we indicated in one aspect for the spheres.} \end {document}; $-dimensional trivial bundle. By a general position argument the stabilization map is a bijection if$n >i$($n>2i$in the complex case). If$n>i$(or$n > 2i$in the complex case) one calls such a bundle a '''stable vector bundle'''. Actually the$n$-dimensional vector bundles over$S^i$(not over a general space) form an abelian group, where the sum is given by a connected sum of vector bundles: One choses a trivialization of the vector bundles over an open disk and identifies the resulting boundaries. Bott's theorem implies that for$i>0$not equal to O= \cup_n O(n) resp $U = \cup _n U_n$$U = \cup _n U_n$. The answer is very simple For $i \ge 0$$i \ge 0$ one has $\pi _i(O) \cong \pi_{i+8} (O)$$\pi _i(O) \cong \pi_{i+8} (O)$ and for $i>0$$i>0$ one has $\pi_i(U) \cong \pi_{i+2}(U)$$\pi_i(U) \cong \pi_{i+2}(U)$. Thus one only needs to know the groups for small $i$$i$: For $i = 1,....,8$$i = 1,....,8$ one has $\displaystyle \pi_{i-1}(O) \cong \mathbb Z/2, \mathbb Z/2,0,\mathbb Z,0,0,0,\mathbb Z$ and $\displaystyle \pi_1(U) \cong \mathbb Z, \pi_2(U) \cong 0.$ Bott does not give any reference for these computations.; ## 2 An interpretation in terms of vector bundles over spheres Most applications of this result concern the interpretation of these groups as stable vector bundles over spheres. Namely if $f : S^{i-1} \to O(n)$$f : S^{i-1} \to O(n)$ is a continuous map, then one obtains a vector bundle $E_f$$E_f$ over $S^i= D^i \cup D^i$$S^i= D^i \cup D^i$ by taking two copies of $D^i \times \mathbb R^n$$D^i \times \mathbb R^n$ and by identifying $(x,v) \in S^{i-1} \times \mathbb R^n$$(x,v) \in S^{i-1} \times \mathbb R^n$ in the first copy with $(x, f(x)v)$$(x, f(x)v)$ in the second copy. This map gives an isomorphism from $\pi_{i-1}(O(n))$$\pi_{i-1}(O(n))$ to the set $Vect^\mathbb R_n(S^i)$$Vect^\mathbb R_n(S^i)$ of isomorphism classes of $n$$n$-dimensional vector bundles over $S^i$$S^i$. I don't know who observed this first, it can for example be found in Steenrod's book [Steenrod]. Similarly $\pi_{i-1}(U(n))$$\pi_{i-1}(U(n))$ corresponds to the set $Vect^\mathbb C_n(S^i)$$Vect^\mathbb C_n(S^i)$ of isomorphism classes of complex $n$$n$-dimensional vector bundles over $S^i$$S^i$. Passing from $O(n)$$O(n)$ to $O(n+1)$$O(n+1)$ (or $U(n)$$U(n)$ to $U(n+1)$$U(n+1)$) by the standard inclusion corresponds to stabilization of vector bundles by taking the Whitney sum with the $1$$1$-dimensional trivial bundle. By a general position argument the stabilization map is a bijection if $n >i$$n >i$ ($n>2i$$n>2i$ in the complex case). If $n>i$$n>i$ (or $n > 2i$$n > 2i$ in the complex case) one calls such a bundle a stable vector bundle. Actually the $n$$n$-dimensional vector bundles over $S^i$$S^i$ (not over a general space) form an abelian group, where the sum is given by a connected sum of vector bundles One choses a trivialization of the vector bundles over an open disk and identifies the resulting boundaries. Bott's theorem implies that for $i>0$$i>0$ not equal to $1,2,4,8$$1,2,4,8$ mod $8$$8$ all real vector bundles of dimension $n >i$$n >i$ over $S^i$$S^i$ are trivial, that for $i=1,2$$i=1,2$ mod $8$$8$ there are precisely $2$$2$ such bundles and for $i =0$$i =0$ mod $4$$4$ there are countably many such bundles. \\ Remark: {\em I find it remarkable that Bott doesn't mention the relation to vector bundles. Actually Bott does not say a single word, why his result is interesting. He obviously assumes that a reader finds the problem to determine the homotopy groups of such fundamental objects like the stable orthogonal or unitary group interesting in itself and he is of course right. Whether he has foreseen that it is such a fundamental result would be interesting to know, perhaps his friends Atiyah and Hirzebruch can comment on this.}; ## 3 The dates and dates of background papers The paper with complete proofs appeared in September 1959, it was submitted November 1958. An announcement containing the above statements appeared in 1957 [Bott1957]. The methods of the proof were developed in several earlier papers [Bott1954] \cite {Bott 1956}.; ## 4 The role of Bott periodicity for Kervaire-Milnor's paper Remark: {\em I hope there will be many articles in the atlas explaining more or less immediate applications of the periodicity theorem to manifolds. In this article I would like to explain, which role the theorem plays in the paper mentioned in the title [Kervaire/Milnor]. This paper appeared May 1963, and was submitted April 1962. By that time the periodicity theorem must have been a standard tool in topology. The fact that it is comparatively easy to determine the stable vector bundles over spheres (and so over any space homotopy equivalent to a sphere) suggests to ask for a smooth manifold homotopy equivalent to a sphere, a homotopy sphere, whether its stable tangent bundle (meaning one adds the trivial line bundle to the tangent bundle) is the same as for the ordinary sphere, namely trivial. If not, the homotopy spheres cannot be diffeomorphic to the standard sphere.}\\ Before Bott's theorem one probably had no chance to answer the question whether the stable tangent bundle of a homotopy sphere is trivial, with it, it is in half of the cases a triviality, since for $i = 3, 5,6,7$$i = 3, 5,6,7$ mod $8$$8$ there is no non-trivial stable vector bundle over $S^i$$S^i$. The remaining cases are not so easy, one needs a way to decide whether two stable vector bundles over $S^i$$S^i$ are isomorphic. Let's begin with the case $i = 4s$$i = 4s$. \\ Remark: {\em In the case $i=4s$$i=4s$ one has an invariant for stable vector bundles $E$$E$, namely the Pontrjagin classes $p_s(E)\in H^{4s}(S^i)$$p_s(E)\in H^{4s}(S^i)$. It turns out that this map is a homomorphism from the stable vector bundles over $S^{4s}$$S^{4s}$ to $H^{4s }(S^{4s})$$H^{4s }(S^{4s})$ which by choosing an orientation we identify with $\mathbb Z$$\mathbb Z$. Thus we have a homomorphism from a group isomorphic to $\mathbb Z$$\mathbb Z$ to $\mathbb Z$$\mathbb Z$, and if it is non-trivial it is an injection implying that two stable vector bundles over $S^{4s}$$S^{4s}$ are isomorphic if and only their Pontrjagin classes $p_s$$p_s$ agree. To show that the homomorphism is non-trivial (for $s>0$$s>0$) one needs a single example where this is the case. Again periodicity helps which reduces the problem to the case of bundles over $S^4$$S^4$ and $S^8$$S^8$, where one can take the tautological bundle over $S^4$$S^4$ considered as the projective line over the quaternions or over $S^8$$S^8$ considered as the projective line over the Cayley numbers. }\\ What I described in the remark is not mentioned in Kervaire-Milnor's paper. They proceed slightly differently. They refer to obstruction theory, a theory which can be used to decide whether a vector bundle is trivial. They say (with reference to earlier papers by Kervaire) that the obstruction class and the Pontrjagin class $p_s$$p_s$ are proportional by a non-zero factor and so, if the Pontrjagin class is trivial, the bundle is trivial. They finish the argument that the stable tangent bundle of a $4s$$4s$-dimensional homotopy sphere $\Sigma$$\Sigma$ is trivial by one sentence "But by the Hirzebruch signature theorem the Pontrjagin class $p_k(\Sigma)$$p_k(\Sigma)$ is a multiple of the signature $\sigma (\Sigma)$$\sigma (\Sigma)$, which is zero since $H^{2s}(\Sigma) = 0$$H^{2s}(\Sigma) = 0$."\\ Remark: {\em Let me make a short comment on this sentence. It shows that the signature theorem, which was published by Hirzebruch in his 1956 book, is at the time when Kervaire and Milnor wrote their paper so standard, that neither a reference to Hirzebruch's book is needed (which appears in the list of references but no reference is given at this place) nor the formula of the signature theorem is repeated. But one sees that besides Bott's theorem another big theorem is needed to argue here.}\\ A non-trivial theorem is also needed in the remaining cases where $i = 1,2$$i = 1,2$ mod $8$$8$ is. The cases $1$$1$ and $2$$2$ are trivial, but the higher dimensions not. Here the obstruction class sits in $\pi_{i-1}(O) = \mathbb Z/2$$\pi_{i-1}(O) = \mathbb Z/2$ (by Bott's theorem) and so again one has to find a way to distinguish the non-trivial element form $0$$0$. There is a homomorphism introduced by Hopf-Whitehead, the $J$$J$-homomorphism, from $\pi_{i-1} (O)$$\pi_{i-1} (O)$ to the stable homotopy groups $\pi_{i-1}^s$$\pi_{i-1}^s$ of spheres. Rohlin has shown that under this homomorphism the obstruction class vanishes (the authors don't give a reference to a paper by Rohlin but refer instead to an earlier paper by them \cite {Kervaire-Milnor 1958}). The argument, that also in the remaining case the stable tangent bundle is trivial is finished by applying a recent deep theorem by Adams \cite {Adams} saying that this $J$$J$-homomorphism is injective. Thus we summarize: Theorem 4.1. (Theorem 3.1 [Kervaire-Milnor]) For all homotopy spheres $\Sigma$$\Sigma$ the stable tangent bundle is trivial. ; ## 5 Some comments {\em After repeating the role of Bott periodicity in the proof of this theorem I would like to comment a bit on the role of this theorem in the paper of Kervaire and Milnor and in the further development of analyzing smooth structures on a topological manifold, here the sphere. Note that - as Kervaire and Milnor mention - a homotopy sphere is, if the Poincaré conjeture is assumed, homeomorphic to the sphere and so the diffeomorphism classes of homotopy spheres would correspond to the diffemorphism classes of smooth structures on the sphere. The Poincaré conjecture was proven by Smale, Stallings and Zeeman (references) for $i \ge 5$$i \ge 5$ already before the paper by Kervaire and Milnor, and later by Freedman [Freedman] in dimension $4$$4$ and by Perelmann [Perelman] in dimension $3$$3$. On the one hand the message of the theorem is negative, one cannot use the stable tangent bundle to distinguish different smooth structures on the sphere. This leads to the interesting question, whether the same is true for arbitrary manifolds. Later in the sixties some very deep theorems in this direction were proved. On the other hand the theorem allows to develop a method to study the different smooth structures on spheres or equivalently the homotopy spheres in dimension $>4$$>4$, a method which in the following years was generalized to arbitrary manifolds. This is not the place to describe this method, which is called surgery theory. But the following can be said. Suppose that a topological manifold $M$$M$ is given. Then one can ask whether $M$$M$ admits a smooth structure. A necessary condition is that $M$$M$ has a tangent bundle, and since this is easier to analyze and essentially the same, that it has a stable tangent bundle. It turns out that in a certain sense, which should be made precise elsewhere, this is the only obstruction, again in dimension $>4$$>4$, but false in dimension $4$$4$ by the fundamental work of Donaldson, and again true in dimension $<4$$<4$ by different methods. If we assume that $M$$M$ has a smooth structure, we choose one and compare all possible other smooth structures with this. Then again the stable tangent bundle plays the deciding role (in dimension $>4$$>4$). Roughly speaking the different smooth structures on $M$$M$ correspond to the different ways to impose a stable tangent bundle on $M$$M$. Thus the understanding of stable vector bundles is what at the end is needed. This is the content of a very important theory, called $K$$K$-theory, invented by Atiyah and Hirzebruch [Atiyah-Hirzebruch]. This is a generalized cohomology theory (meaning that the Eilenberg-Steenrod axioms for ordinary cohomology are fulfilled except the dimension axiom). This is the first generalized cohomology theory and - besides stable homotopy - the most important one. To construct it is up to a certain point rather elementary. But then one comes at a point where the arguments are highly non-trivial and the central tool, which one has to apply, is Bott perodicity. It does not only give the fundamental input for completing the proof that it is a generalized homology theory, it also is the central tool for all computations. In particular, in the few cases where one can give detailed information about the different smooth structures on a manifold, always Bott's theorem is in the background - like we indicated in one aspect for the spheres.} \end {document}; ## 6 References ,2,4,8$ mod $all real vector bundles of dimension$n >i$over$S^i$are trivial, that for$i=1,2$mod$ there are precisely $such bundles and for$i =0$mod$ there are countably many such bundles. \ '''Remark:''' {\em I find it remarkable that Bott doesn't mention the relation to vector bundles. Actually Bott does not say a single word, why his result is interesting. He obviously assumes that a reader finds the problem to determine the homotopy groups of such fundamental objects like the stable orthogonal or unitary group interesting in itself and he is of course right. Whether he has foreseen that it is such a fundamental result would be interesting to know, perhaps his friends Atiyah and Hirzebruch can comment on this.} ; ==The dates and dates of background papers== ; The paper with complete proofs appeared in September 1959, it was submitted November 1958. An announcement containing the above statements appeared in 1957 [[[#{{anchorencode:Bott1957}}|Bott1957]]]{{#RefAdd:Bott1957}}. The methods of the proof were developed in several earlier papers [[[#{{anchorencode:Bott1954}}|Bott1954]]]{{#RefAdd:Bott1954}} \cite {Bott 1956}. ; ==The role of Bott periodicity for Kervaire-Milnor's paper== ; '''Remark:''' {\em I hope there will be many articles in the atlas explaining more or less immediate applications of the periodicity theorem to manifolds. In this article I would like to explain, which role the theorem plays in the paper mentioned in the title [[[#{{anchorencode:Kervaire/Milnor}}|Kervaire/Milnor]]]{{#RefAdd:Kervaire/Milnor}}. This paper appeared May 1963, and was submitted April 1962. By that time the periodicity theorem must have been a standard tool in topology. The fact that it is comparatively easy to determine the stable vector bundles over spheres (and so over any space homotopy equivalent to a sphere) suggests to ask for a smooth manifold homotopy equivalent to a sphere, a '''homotopy sphere''', whether its stable tangent bundle (meaning one adds the trivial line bundle to the tangent bundle) is the same as for the ordinary sphere, namely trivial. If not, the homotopy spheres cannot be diffeomorphic to the standard sphere.}\ Before Bott's theorem one probably had no chance to answer the question whether the stable tangent bundle of a homotopy sphere is trivial, with it, it is in half of the cases a triviality, since for $i = 3, 5,6,7$ mod $there is no non-trivial stable vector bundle over$S^i$. The remaining cases are not so easy, one needs a way to decide whether two stable vector bundles over$S^i$are isomorphic. Let's begin with the case$i = 4s$. \ '''Remark:''' {\em In the case$i=4s$one has an invariant for stable vector bundles$E$, namely the Pontrjagin classes$p_s(E)\in H^{4s}(S^i)$. It turns out that this map is a homomorphism from the stable vector bundles over$S^{4s}$to$H^{4s }(S^{4s})$which by choosing an orientation we identify with$\mathbb Z$. Thus we have a homomorphism from a group isomorphic to$\mathbb Z$to$\mathbb Z$, and if it is non-trivial it is an injection implying that two stable vector bundles over$S^{4s}$are isomorphic if and only their Pontrjagin classes$p_s$agree. To show that the homomorphism is non-trivial (for$s>0$) one needs a single example where this is the case. Again periodicity helps which reduces the problem to the case of bundles over$S^4$and$S^8$, where one can take the tautological bundle over$S^4$considered as the projective line over the quaternions or over$S^8$considered as the projective line over the Cayley numbers. }\ What I described in the remark is not mentioned in Kervaire-Milnor's paper. They proceed slightly differently. They refer to obstruction theory, a theory which can be used to decide whether a vector bundle is trivial. They say (with reference to earlier papers by Kervaire) that the obstruction class and the Pontrjagin class$p_s$are proportional by a non-zero factor and so, if the Pontrjagin class is trivial, the bundle is trivial. They finish the argument that the stable tangent bundle of a s$-dimensional homotopy sphere $\Sigma$ is trivial by one sentence: "But by the Hirzebruch signature theorem the Pontrjagin class $p_k(\Sigma)$ is a multiple of the signature $\sigma (\Sigma)$, which is zero since $H^{2s}(\Sigma) = 0$."\ '''Remark:''' {\em Let me make a short comment on this sentence. It shows that the signature theorem, which was published by Hirzebruch in his 1956 book, is at the time when Kervaire and Milnor wrote their paper so standard, that neither a reference to Hirzebruch's book is needed (which appears in the list of references but no reference is given at this place) nor the formula of the signature theorem is repeated. But one sees that besides Bott's theorem another big theorem is needed to argue here.}\ A non-trivial theorem is also needed in the remaining cases where $i = 1,2$ mod $is. The cases O= \cup_n O(n) resp $U = \cup _n U_n$$U = \cup _n U_n$. The answer is very simple For $i \ge 0$$i \ge 0$ one has $\pi _i(O) \cong \pi_{i+8} (O)$$\pi _i(O) \cong \pi_{i+8} (O)$ and for $i>0$$i>0$ one has $\pi_i(U) \cong \pi_{i+2}(U)$$\pi_i(U) \cong \pi_{i+2}(U)$. Thus one only needs to know the groups for small $i$$i$: For $i = 1,....,8$$i = 1,....,8$ one has $\displaystyle \pi_{i-1}(O) \cong \mathbb Z/2, \mathbb Z/2,0,\mathbb Z,0,0,0,\mathbb Z$ and $\displaystyle \pi_1(U) \cong \mathbb Z, \pi_2(U) \cong 0.$ Bott does not give any reference for these computations.; ## 2 An interpretation in terms of vector bundles over spheres Most applications of this result concern the interpretation of these groups as stable vector bundles over spheres. Namely if $f : S^{i-1} \to O(n)$$f : S^{i-1} \to O(n)$ is a continuous map, then one obtains a vector bundle $E_f$$E_f$ over $S^i= D^i \cup D^i$$S^i= D^i \cup D^i$ by taking two copies of $D^i \times \mathbb R^n$$D^i \times \mathbb R^n$ and by identifying $(x,v) \in S^{i-1} \times \mathbb R^n$$(x,v) \in S^{i-1} \times \mathbb R^n$ in the first copy with $(x, f(x)v)$$(x, f(x)v)$ in the second copy. This map gives an isomorphism from $\pi_{i-1}(O(n))$$\pi_{i-1}(O(n))$ to the set $Vect^\mathbb R_n(S^i)$$Vect^\mathbb R_n(S^i)$ of isomorphism classes of $n$$n$-dimensional vector bundles over $S^i$$S^i$. I don't know who observed this first, it can for example be found in Steenrod's book [Steenrod]. Similarly $\pi_{i-1}(U(n))$$\pi_{i-1}(U(n))$ corresponds to the set $Vect^\mathbb C_n(S^i)$$Vect^\mathbb C_n(S^i)$ of isomorphism classes of complex $n$$n$-dimensional vector bundles over $S^i$$S^i$. Passing from $O(n)$$O(n)$ to $O(n+1)$$O(n+1)$ (or $U(n)$$U(n)$ to $U(n+1)$$U(n+1)$) by the standard inclusion corresponds to stabilization of vector bundles by taking the Whitney sum with the $1$$1$-dimensional trivial bundle. By a general position argument the stabilization map is a bijection if $n >i$$n >i$ ($n>2i$$n>2i$ in the complex case). If $n>i$$n>i$ (or $n > 2i$$n > 2i$ in the complex case) one calls such a bundle a stable vector bundle. Actually the $n$$n$-dimensional vector bundles over $S^i$$S^i$ (not over a general space) form an abelian group, where the sum is given by a connected sum of vector bundles One choses a trivialization of the vector bundles over an open disk and identifies the resulting boundaries. Bott's theorem implies that for $i>0$$i>0$ not equal to $1,2,4,8$$1,2,4,8$ mod $8$$8$ all real vector bundles of dimension $n >i$$n >i$ over $S^i$$S^i$ are trivial, that for $i=1,2$$i=1,2$ mod $8$$8$ there are precisely $2$$2$ such bundles and for $i =0$$i =0$ mod $4$$4$ there are countably many such bundles. \\ Remark: {\em I find it remarkable that Bott doesn't mention the relation to vector bundles. Actually Bott does not say a single word, why his result is interesting. He obviously assumes that a reader finds the problem to determine the homotopy groups of such fundamental objects like the stable orthogonal or unitary group interesting in itself and he is of course right. Whether he has foreseen that it is such a fundamental result would be interesting to know, perhaps his friends Atiyah and Hirzebruch can comment on this.}; ## 3 The dates and dates of background papers The paper with complete proofs appeared in September 1959, it was submitted November 1958. An announcement containing the above statements appeared in 1957 [Bott1957]. The methods of the proof were developed in several earlier papers [Bott1954] \cite {Bott 1956}.; ## 4 The role of Bott periodicity for Kervaire-Milnor's paper Remark: {\em I hope there will be many articles in the atlas explaining more or less immediate applications of the periodicity theorem to manifolds. In this article I would like to explain, which role the theorem plays in the paper mentioned in the title [Kervaire/Milnor]. This paper appeared May 1963, and was submitted April 1962. By that time the periodicity theorem must have been a standard tool in topology. The fact that it is comparatively easy to determine the stable vector bundles over spheres (and so over any space homotopy equivalent to a sphere) suggests to ask for a smooth manifold homotopy equivalent to a sphere, a homotopy sphere, whether its stable tangent bundle (meaning one adds the trivial line bundle to the tangent bundle) is the same as for the ordinary sphere, namely trivial. If not, the homotopy spheres cannot be diffeomorphic to the standard sphere.}\\ Before Bott's theorem one probably had no chance to answer the question whether the stable tangent bundle of a homotopy sphere is trivial, with it, it is in half of the cases a triviality, since for $i = 3, 5,6,7$$i = 3, 5,6,7$ mod $8$$8$ there is no non-trivial stable vector bundle over $S^i$$S^i$. The remaining cases are not so easy, one needs a way to decide whether two stable vector bundles over $S^i$$S^i$ are isomorphic. Let's begin with the case $i = 4s$$i = 4s$. \\ Remark: {\em In the case $i=4s$$i=4s$ one has an invariant for stable vector bundles $E$$E$, namely the Pontrjagin classes $p_s(E)\in H^{4s}(S^i)$$p_s(E)\in H^{4s}(S^i)$. It turns out that this map is a homomorphism from the stable vector bundles over $S^{4s}$$S^{4s}$ to $H^{4s }(S^{4s})$$H^{4s }(S^{4s})$ which by choosing an orientation we identify with $\mathbb Z$$\mathbb Z$. Thus we have a homomorphism from a group isomorphic to $\mathbb Z$$\mathbb Z$ to $\mathbb Z$$\mathbb Z$, and if it is non-trivial it is an injection implying that two stable vector bundles over $S^{4s}$$S^{4s}$ are isomorphic if and only their Pontrjagin classes $p_s$$p_s$ agree. To show that the homomorphism is non-trivial (for $s>0$$s>0$) one needs a single example where this is the case. Again periodicity helps which reduces the problem to the case of bundles over $S^4$$S^4$ and $S^8$$S^8$, where one can take the tautological bundle over $S^4$$S^4$ considered as the projective line over the quaternions or over $S^8$$S^8$ considered as the projective line over the Cayley numbers. }\\ What I described in the remark is not mentioned in Kervaire-Milnor's paper. They proceed slightly differently. They refer to obstruction theory, a theory which can be used to decide whether a vector bundle is trivial. They say (with reference to earlier papers by Kervaire) that the obstruction class and the Pontrjagin class $p_s$$p_s$ are proportional by a non-zero factor and so, if the Pontrjagin class is trivial, the bundle is trivial. They finish the argument that the stable tangent bundle of a $4s$$4s$-dimensional homotopy sphere $\Sigma$$\Sigma$ is trivial by one sentence "But by the Hirzebruch signature theorem the Pontrjagin class $p_k(\Sigma)$$p_k(\Sigma)$ is a multiple of the signature $\sigma (\Sigma)$$\sigma (\Sigma)$, which is zero since $H^{2s}(\Sigma) = 0$$H^{2s}(\Sigma) = 0$."\\ Remark: {\em Let me make a short comment on this sentence. It shows that the signature theorem, which was published by Hirzebruch in his 1956 book, is at the time when Kervaire and Milnor wrote their paper so standard, that neither a reference to Hirzebruch's book is needed (which appears in the list of references but no reference is given at this place) nor the formula of the signature theorem is repeated. But one sees that besides Bott's theorem another big theorem is needed to argue here.}\\ A non-trivial theorem is also needed in the remaining cases where $i = 1,2$$i = 1,2$ mod $8$$8$ is. The cases $1$$1$ and $2$$2$ are trivial, but the higher dimensions not. Here the obstruction class sits in $\pi_{i-1}(O) = \mathbb Z/2$$\pi_{i-1}(O) = \mathbb Z/2$ (by Bott's theorem) and so again one has to find a way to distinguish the non-trivial element form $0$$0$. There is a homomorphism introduced by Hopf-Whitehead, the $J$$J$-homomorphism, from $\pi_{i-1} (O)$$\pi_{i-1} (O)$ to the stable homotopy groups $\pi_{i-1}^s$$\pi_{i-1}^s$ of spheres. Rohlin has shown that under this homomorphism the obstruction class vanishes (the authors don't give a reference to a paper by Rohlin but refer instead to an earlier paper by them \cite {Kervaire-Milnor 1958}). The argument, that also in the remaining case the stable tangent bundle is trivial is finished by applying a recent deep theorem by Adams \cite {Adams} saying that this $J$$J$-homomorphism is injective. Thus we summarize: Theorem 4.1. (Theorem 3.1 [Kervaire-Milnor]) For all homotopy spheres $\Sigma$$\Sigma$ the stable tangent bundle is trivial. ; ## 5 Some comments {\em After repeating the role of Bott periodicity in the proof of this theorem I would like to comment a bit on the role of this theorem in the paper of Kervaire and Milnor and in the further development of analyzing smooth structures on a topological manifold, here the sphere. Note that - as Kervaire and Milnor mention - a homotopy sphere is, if the Poincaré conjeture is assumed, homeomorphic to the sphere and so the diffeomorphism classes of homotopy spheres would correspond to the diffemorphism classes of smooth structures on the sphere. The Poincaré conjecture was proven by Smale, Stallings and Zeeman (references) for $i \ge 5$$i \ge 5$ already before the paper by Kervaire and Milnor, and later by Freedman [Freedman] in dimension $4$$4$ and by Perelmann [Perelman] in dimension $3$$3$. On the one hand the message of the theorem is negative, one cannot use the stable tangent bundle to distinguish different smooth structures on the sphere. This leads to the interesting question, whether the same is true for arbitrary manifolds. Later in the sixties some very deep theorems in this direction were proved. On the other hand the theorem allows to develop a method to study the different smooth structures on spheres or equivalently the homotopy spheres in dimension $>4$$>4$, a method which in the following years was generalized to arbitrary manifolds. This is not the place to describe this method, which is called surgery theory. But the following can be said. Suppose that a topological manifold $M$$M$ is given. Then one can ask whether $M$$M$ admits a smooth structure. A necessary condition is that $M$$M$ has a tangent bundle, and since this is easier to analyze and essentially the same, that it has a stable tangent bundle. It turns out that in a certain sense, which should be made precise elsewhere, this is the only obstruction, again in dimension $>4$$>4$, but false in dimension $4$$4$ by the fundamental work of Donaldson, and again true in dimension $<4$$<4$ by different methods. If we assume that $M$$M$ has a smooth structure, we choose one and compare all possible other smooth structures with this. Then again the stable tangent bundle plays the deciding role (in dimension $>4$$>4$). Roughly speaking the different smooth structures on $M$$M$ correspond to the different ways to impose a stable tangent bundle on $M$$M$. Thus the understanding of stable vector bundles is what at the end is needed. This is the content of a very important theory, called $K$$K$-theory, invented by Atiyah and Hirzebruch [Atiyah-Hirzebruch]. This is a generalized cohomology theory (meaning that the Eilenberg-Steenrod axioms for ordinary cohomology are fulfilled except the dimension axiom). This is the first generalized cohomology theory and - besides stable homotopy - the most important one. To construct it is up to a certain point rather elementary. But then one comes at a point where the arguments are highly non-trivial and the central tool, which one has to apply, is Bott perodicity. It does not only give the fundamental input for completing the proof that it is a generalized homology theory, it also is the central tool for all computations. In particular, in the few cases where one can give detailed information about the different smooth structures on a manifold, always Bott's theorem is in the background - like we indicated in one aspect for the spheres.} \end {document}; ## 6 References$ and $are trivial, but the higher dimensions not. Here the obstruction class sits in$\pi_{i-1}(O) = \mathbb Z/2$(by Bott's theorem) and so again one has to find a way to distinguish the non-trivial element form $O= \cup_n O(n)$ resp $U = \cup _n U_n$$U = \cup _n U_n$. The answer is very simple For $i \ge 0$$i \ge 0$ one has $\pi _i(O) \cong \pi_{i+8} (O)$$\pi _i(O) \cong \pi_{i+8} (O)$ and for $i>0$$i>0$ one has $\pi_i(U) \cong \pi_{i+2}(U)$$\pi_i(U) \cong \pi_{i+2}(U)$. Thus one only needs to know the groups for small $i$$i$: For $i = 1,....,8$$i = 1,....,8$ one has $\displaystyle \pi_{i-1}(O) \cong \mathbb Z/2, \mathbb Z/2,0,\mathbb Z,0,0,0,\mathbb Z$ and $\displaystyle \pi_1(U) \cong \mathbb Z, \pi_2(U) \cong 0.$ Bott does not give any reference for these computations.; ## 2 An interpretation in terms of vector bundles over spheres Most applications of this result concern the interpretation of these groups as stable vector bundles over spheres. Namely if $f : S^{i-1} \to O(n)$$f : S^{i-1} \to O(n)$ is a continuous map, then one obtains a vector bundle $E_f$$E_f$ over $S^i= D^i \cup D^i$$S^i= D^i \cup D^i$ by taking two copies of $D^i \times \mathbb R^n$$D^i \times \mathbb R^n$ and by identifying $(x,v) \in S^{i-1} \times \mathbb R^n$$(x,v) \in S^{i-1} \times \mathbb R^n$ in the first copy with $(x, f(x)v)$$(x, f(x)v)$ in the second copy. This map gives an isomorphism from $\pi_{i-1}(O(n))$$\pi_{i-1}(O(n))$ to the set $Vect^\mathbb R_n(S^i)$$Vect^\mathbb R_n(S^i)$ of isomorphism classes of $n$$n$-dimensional vector bundles over $S^i$$S^i$. I don't know who observed this first, it can for example be found in Steenrod's book [Steenrod]. Similarly $\pi_{i-1}(U(n))$$\pi_{i-1}(U(n))$ corresponds to the set $Vect^\mathbb C_n(S^i)$$Vect^\mathbb C_n(S^i)$ of isomorphism classes of complex $n$$n$-dimensional vector bundles over $S^i$$S^i$. Passing from $O(n)$$O(n)$ to $O(n+1)$$O(n+1)$ (or $U(n)$$U(n)$ to $U(n+1)$$U(n+1)$) by the standard inclusion corresponds to stabilization of vector bundles by taking the Whitney sum with the $1$$1$-dimensional trivial bundle. By a general position argument the stabilization map is a bijection if $n >i$$n >i$ ($n>2i$$n>2i$ in the complex case). If $n>i$$n>i$ (or $n > 2i$$n > 2i$ in the complex case) one calls such a bundle a stable vector bundle. Actually the $n$$n$-dimensional vector bundles over $S^i$$S^i$ (not over a general space) form an abelian group, where the sum is given by a connected sum of vector bundles One choses a trivialization of the vector bundles over an open disk and identifies the resulting boundaries. Bott's theorem implies that for $i>0$$i>0$ not equal to $1,2,4,8$$1,2,4,8$ mod $8$$8$ all real vector bundles of dimension $n >i$$n >i$ over $S^i$$S^i$ are trivial, that for $i=1,2$$i=1,2$ mod $8$$8$ there are precisely $2$$2$ such bundles and for $i =0$$i =0$ mod $4$$4$ there are countably many such bundles. \\ Remark: {\em I find it remarkable that Bott doesn't mention the relation to vector bundles. Actually Bott does not say a single word, why his result is interesting. He obviously assumes that a reader finds the problem to determine the homotopy groups of such fundamental objects like the stable orthogonal or unitary group interesting in itself and he is of course right. Whether he has foreseen that it is such a fundamental result would be interesting to know, perhaps his friends Atiyah and Hirzebruch can comment on this.}; ## 3 The dates and dates of background papers The paper with complete proofs appeared in September 1959, it was submitted November 1958. An announcement containing the above statements appeared in 1957 [Bott1957]. The methods of the proof were developed in several earlier papers [Bott1954] \cite {Bott 1956}.; ## 4 The role of Bott periodicity for Kervaire-Milnor's paper Remark: {\em I hope there will be many articles in the atlas explaining more or less immediate applications of the periodicity theorem to manifolds. In this article I would like to explain, which role the theorem plays in the paper mentioned in the title [Kervaire/Milnor]. This paper appeared May 1963, and was submitted April 1962. By that time the periodicity theorem must have been a standard tool in topology. The fact that it is comparatively easy to determine the stable vector bundles over spheres (and so over any space homotopy equivalent to a sphere) suggests to ask for a smooth manifold homotopy equivalent to a sphere, a homotopy sphere, whether its stable tangent bundle (meaning one adds the trivial line bundle to the tangent bundle) is the same as for the ordinary sphere, namely trivial. If not, the homotopy spheres cannot be diffeomorphic to the standard sphere.}\\ Before Bott's theorem one probably had no chance to answer the question whether the stable tangent bundle of a homotopy sphere is trivial, with it, it is in half of the cases a triviality, since for $i = 3, 5,6,7$$i = 3, 5,6,7$ mod $8$$8$ there is no non-trivial stable vector bundle over $S^i$$S^i$. The remaining cases are not so easy, one needs a way to decide whether two stable vector bundles over $S^i$$S^i$ are isomorphic. Let's begin with the case $i = 4s$$i = 4s$. \\ Remark: {\em In the case $i=4s$$i=4s$ one has an invariant for stable vector bundles $E$$E$, namely the Pontrjagin classes $p_s(E)\in H^{4s}(S^i)$$p_s(E)\in H^{4s}(S^i)$. It turns out that this map is a homomorphism from the stable vector bundles over $S^{4s}$$S^{4s}$ to $H^{4s }(S^{4s})$$H^{4s }(S^{4s})$ which by choosing an orientation we identify with $\mathbb Z$$\mathbb Z$. Thus we have a homomorphism from a group isomorphic to $\mathbb Z$$\mathbb Z$ to $\mathbb Z$$\mathbb Z$, and if it is non-trivial it is an injection implying that two stable vector bundles over $S^{4s}$$S^{4s}$ are isomorphic if and only their Pontrjagin classes $p_s$$p_s$ agree. To show that the homomorphism is non-trivial (for $s>0$$s>0$) one needs a single example where this is the case. Again periodicity helps which reduces the problem to the case of bundles over $S^4$$S^4$ and $S^8$$S^8$, where one can take the tautological bundle over $S^4$$S^4$ considered as the projective line over the quaternions or over $S^8$$S^8$ considered as the projective line over the Cayley numbers. }\\ What I described in the remark is not mentioned in Kervaire-Milnor's paper. They proceed slightly differently. They refer to obstruction theory, a theory which can be used to decide whether a vector bundle is trivial. They say (with reference to earlier papers by Kervaire) that the obstruction class and the Pontrjagin class $p_s$$p_s$ are proportional by a non-zero factor and so, if the Pontrjagin class is trivial, the bundle is trivial. They finish the argument that the stable tangent bundle of a $4s$$4s$-dimensional homotopy sphere $\Sigma$$\Sigma$ is trivial by one sentence "But by the Hirzebruch signature theorem the Pontrjagin class $p_k(\Sigma)$$p_k(\Sigma)$ is a multiple of the signature $\sigma (\Sigma)$$\sigma (\Sigma)$, which is zero since $H^{2s}(\Sigma) = 0$$H^{2s}(\Sigma) = 0$."\\ Remark: {\em Let me make a short comment on this sentence. It shows that the signature theorem, which was published by Hirzebruch in his 1956 book, is at the time when Kervaire and Milnor wrote their paper so standard, that neither a reference to Hirzebruch's book is needed (which appears in the list of references but no reference is given at this place) nor the formula of the signature theorem is repeated. But one sees that besides Bott's theorem another big theorem is needed to argue here.}\\ A non-trivial theorem is also needed in the remaining cases where $i = 1,2$$i = 1,2$ mod $8$$8$ is. The cases $1$$1$ and $2$$2$ are trivial, but the higher dimensions not. Here the obstruction class sits in $\pi_{i-1}(O) = \mathbb Z/2$$\pi_{i-1}(O) = \mathbb Z/2$ (by Bott's theorem) and so again one has to find a way to distinguish the non-trivial element form $0$$0$. There is a homomorphism introduced by Hopf-Whitehead, the $J$$J$-homomorphism, from $\pi_{i-1} (O)$$\pi_{i-1} (O)$ to the stable homotopy groups $\pi_{i-1}^s$$\pi_{i-1}^s$ of spheres. Rohlin has shown that under this homomorphism the obstruction class vanishes (the authors don't give a reference to a paper by Rohlin but refer instead to an earlier paper by them \cite {Kervaire-Milnor 1958}). The argument, that also in the remaining case the stable tangent bundle is trivial is finished by applying a recent deep theorem by Adams \cite {Adams} saying that this $J$$J$-homomorphism is injective. Thus we summarize: Theorem 4.1. (Theorem 3.1 [Kervaire-Milnor]) For all homotopy spheres $\Sigma$$\Sigma$ the stable tangent bundle is trivial. ; ## 5 Some comments {\em After repeating the role of Bott periodicity in the proof of this theorem I would like to comment a bit on the role of this theorem in the paper of Kervaire and Milnor and in the further development of analyzing smooth structures on a topological manifold, here the sphere. Note that - as Kervaire and Milnor mention - a homotopy sphere is, if the Poincaré conjeture is assumed, homeomorphic to the sphere and so the diffeomorphism classes of homotopy spheres would correspond to the diffemorphism classes of smooth structures on the sphere. The Poincaré conjecture was proven by Smale, Stallings and Zeeman (references) for $i \ge 5$$i \ge 5$ already before the paper by Kervaire and Milnor, and later by Freedman [Freedman] in dimension $4$$4$ and by Perelmann [Perelman] in dimension $3$$3$. On the one hand the message of the theorem is negative, one cannot use the stable tangent bundle to distinguish different smooth structures on the sphere. This leads to the interesting question, whether the same is true for arbitrary manifolds. Later in the sixties some very deep theorems in this direction were proved. On the other hand the theorem allows to develop a method to study the different smooth structures on spheres or equivalently the homotopy spheres in dimension $>4$$>4$, a method which in the following years was generalized to arbitrary manifolds. This is not the place to describe this method, which is called surgery theory. But the following can be said. Suppose that a topological manifold $M$$M$ is given. Then one can ask whether $M$$M$ admits a smooth structure. A necessary condition is that $M$$M$ has a tangent bundle, and since this is easier to analyze and essentially the same, that it has a stable tangent bundle. It turns out that in a certain sense, which should be made precise elsewhere, this is the only obstruction, again in dimension $>4$$>4$, but false in dimension $4$$4$ by the fundamental work of Donaldson, and again true in dimension $<4$$<4$ by different methods. If we assume that $M$$M$ has a smooth structure, we choose one and compare all possible other smooth structures with this. Then again the stable tangent bundle plays the deciding role (in dimension $>4$$>4$). Roughly speaking the different smooth structures on $M$$M$ correspond to the different ways to impose a stable tangent bundle on $M$$M$. Thus the understanding of stable vector bundles is what at the end is needed. This is the content of a very important theory, called $K$$K$-theory, invented by Atiyah and Hirzebruch [Atiyah-Hirzebruch]. This is a generalized cohomology theory (meaning that the Eilenberg-Steenrod axioms for ordinary cohomology are fulfilled except the dimension axiom). This is the first generalized cohomology theory and - besides stable homotopy - the most important one. To construct it is up to a certain point rather elementary. But then one comes at a point where the arguments are highly non-trivial and the central tool, which one has to apply, is Bott perodicity. It does not only give the fundamental input for completing the proof that it is a generalized homology theory, it also is the central tool for all computations. In particular, in the few cases where one can give detailed information about the different smooth structures on a manifold, always Bott's theorem is in the background - like we indicated in one aspect for the spheres.} \end {document}; ## 6 References$. There is a homomorphism introduced by Hopf-Whitehead, the $J$-homomorphism, from $\pi_{i-1} (O)$ to the stable homotopy groups $\pi_{i-1}^s$ of spheres. Rohlin has shown that under this homomorphism the obstruction class vanishes (the authors don't give a reference to a paper by Rohlin but refer instead to an earlier paper by them \cite {Kervaire-Milnor 1958}). The argument, that also in the remaining case the stable tangent bundle is trivial is finished by applying a recent deep theorem by Adams \cite {Adams} saying that this $J$-homomorphism is injective. Thus we summarize: {{beginthm|Theorem|}} (Theorem 3.1 [[[#{{anchorencode:Kervaire-Milnor}}|Kervaire-Milnor]]]{{#RefAdd:Kervaire-Milnor}}) For all homotopy spheres $\Sigma$ the stable tangent bundle is trivial. {{endthm}} ; ==Some comments== ; {\em After repeating the role of Bott periodicity in the proof of this theorem I would like to comment a bit on the role of this theorem in the paper of Kervaire and Milnor and in the further development of analyzing smooth structures on a topological manifold, here the sphere. Note that - as Kervaire and Milnor mention - a homotopy sphere is, if the Poincaré conjeture is assumed, homeomorphic to the sphere and so the diffeomorphism classes of homotopy spheres would correspond to the diffemorphism classes of smooth structures on the sphere. The Poincaré conjecture was proven by Smale, Stallings and Zeeman (references) for $i \ge 5$ already before the paper by Kervaire and Milnor, and later by Freedman [[[#{{anchorencode:Freedman}}|Freedman]]]{{#RefAdd:Freedman}} in dimension $and by Perelmann [[[#{{anchorencode:Perelman}}|Perelman]]]{{#RefAdd:Perelman}} in dimension$. On the one hand the message of the theorem is negative, one cannot use the stable tangent bundle to distinguish different smooth structures on the sphere. This leads to the interesting question, whether the same is true for arbitrary manifolds. Later in the sixties some very deep theorems in this direction were proved. On the other hand the theorem allows to develop a method to study the different smooth structures on spheres or equivalently the homotopy spheres in dimension $>4$, a method which in the following years was generalized to arbitrary manifolds. This is not the place to describe this method, which is called '''surgery theory'''. But the following can be said. Suppose that a topological manifold $M$ is given. Then one can ask whether $M$ admits a smooth structure. A necessary condition is that $M$ has a tangent bundle, and since this is easier to analyze and essentially the same, that it has a stable tangent bundle. It turns out that in a certain sense, which should be made precise elsewhere, this is the only obstruction, again in dimension $>4$, but false in dimension $by the fundamental work of Donaldson, and again true in dimension$<4$by different methods. If we assume that$M$has a smooth structure, we choose one and compare all possible other smooth structures with this. Then again the stable tangent bundle plays the deciding role (in dimension$>4$). Roughly speaking the different smooth structures on$M$correspond to the different ways to impose a stable tangent bundle on$M$. Thus the understanding of stable vector bundles is what at the end is needed. This is the content of a very important theory, called$K\$-theory, invented by Atiyah and Hirzebruch [[[#{{anchorencode:Atiyah-Hirzebruch}}|Atiyah-Hirzebruch]]]{{#RefAdd:Atiyah-Hirzebruch}}. This is a generalized cohomology theory (meaning that the Eilenberg-Steenrod axioms for ordinary cohomology are fulfilled except the dimension axiom). This is the first generalized cohomology theory and - besides stable homotopy - the most important one. To construct it is up to a certain point rather elementary. But then one comes at a point where the arguments are highly non-trivial and the central tool, which one has to apply, is Bott perodicity. It does not only give the fundamental input for completing the proof that it is a generalized homology theory, it also is the central tool for all computations. In particular, in the few cases where one can give detailed information about the different smooth structures on a manifold, always Bott's theorem is in the background - like we indicated in one aspect for the spheres.} \end {document}; == References == {{#RefList:}} [[Category:History]]O= \cup_n O(n) resp $U = \cup _n U_n$$U = \cup _n U_n$. The answer is very simple For $i \ge 0$$i \ge 0$ one has $\pi _i(O) \cong \pi_{i+8} (O)$$\pi _i(O) \cong \pi_{i+8} (O)$ and for $i>0$$i>0$ one has $\pi_i(U) \cong \pi_{i+2}(U)$$\pi_i(U) \cong \pi_{i+2}(U)$. Thus one only needs to know the groups for small $i$$i$: For $i = 1,....,8$$i = 1,....,8$ one has $\displaystyle \pi_{i-1}(O) \cong \mathbb Z/2, \mathbb Z/2,0,\mathbb Z,0,0,0,\mathbb Z$ and $\displaystyle \pi_1(U) \cong \mathbb Z, \pi_2(U) \cong 0.$ Bott does not give any reference for these computations.; ## 2 An interpretation in terms of vector bundles over spheres Most applications of this result concern the interpretation of these groups as stable vector bundles over spheres. Namely if $f : S^{i-1} \to O(n)$$f : S^{i-1} \to O(n)$ is a continuous map, then one obtains a vector bundle $E_f$$E_f$ over $S^i= D^i \cup D^i$$S^i= D^i \cup D^i$ by taking two copies of $D^i \times \mathbb R^n$$D^i \times \mathbb R^n$ and by identifying $(x,v) \in S^{i-1} \times \mathbb R^n$$(x,v) \in S^{i-1} \times \mathbb R^n$ in the first copy with $(x, f(x)v)$$(x, f(x)v)$ in the second copy. This map gives an isomorphism from $\pi_{i-1}(O(n))$$\pi_{i-1}(O(n))$ to the set $Vect^\mathbb R_n(S^i)$$Vect^\mathbb R_n(S^i)$ of isomorphism classes of $n$$n$-dimensional vector bundles over $S^i$$S^i$. I don't know who observed this first, it can for example be found in Steenrod's book [Steenrod]. Similarly $\pi_{i-1}(U(n))$$\pi_{i-1}(U(n))$ corresponds to the set $Vect^\mathbb C_n(S^i)$$Vect^\mathbb C_n(S^i)$ of isomorphism classes of complex $n$$n$-dimensional vector bundles over $S^i$$S^i$. Passing from $O(n)$$O(n)$ to $O(n+1)$$O(n+1)$ (or $U(n)$$U(n)$ to $U(n+1)$$U(n+1)$) by the standard inclusion corresponds to stabilization of vector bundles by taking the Whitney sum with the $1$$1$-dimensional trivial bundle. By a general position argument the stabilization map is a bijection if $n >i$$n >i$ ($n>2i$$n>2i$ in the complex case). If $n>i$$n>i$ (or $n > 2i$$n > 2i$ in the complex case) one calls such a bundle a stable vector bundle. Actually the $n$$n$-dimensional vector bundles over $S^i$$S^i$ (not over a general space) form an abelian group, where the sum is given by a connected sum of vector bundles One choses a trivialization of the vector bundles over an open disk and identifies the resulting boundaries. Bott's theorem implies that for $i>0$$i>0$ not equal to $1,2,4,8$$1,2,4,8$ mod $8$$8$ all real vector bundles of dimension $n >i$$n >i$ over $S^i$$S^i$ are trivial, that for $i=1,2$$i=1,2$ mod $8$$8$ there are precisely $2$$2$ such bundles and for $i =0$$i =0$ mod $4$$4$ there are countably many such bundles. \\ Remark: {\em I find it remarkable that Bott doesn't mention the relation to vector bundles. Actually Bott does not say a single word, why his result is interesting. He obviously assumes that a reader finds the problem to determine the homotopy groups of such fundamental objects like the stable orthogonal or unitary group interesting in itself and he is of course right. Whether he has foreseen that it is such a fundamental result would be interesting to know, perhaps his friends Atiyah and Hirzebruch can comment on this.}; ## 3 The dates and dates of background papers The paper with complete proofs appeared in September 1959, it was submitted November 1958. An announcement containing the above statements appeared in 1957 [Bott1957]. The methods of the proof were developed in several earlier papers [Bott1954] \cite {Bott 1956}.; ## 4 The role of Bott periodicity for Kervaire-Milnor's paper Remark: {\em I hope there will be many articles in the atlas explaining more or less immediate applications of the periodicity theorem to manifolds. In this article I would like to explain, which role the theorem plays in the paper mentioned in the title [Kervaire/Milnor]. This paper appeared May 1963, and was submitted April 1962. By that time the periodicity theorem must have been a standard tool in topology. The fact that it is comparatively easy to determine the stable vector bundles over spheres (and so over any space homotopy equivalent to a sphere) suggests to ask for a smooth manifold homotopy equivalent to a sphere, a homotopy sphere, whether its stable tangent bundle (meaning one adds the trivial line bundle to the tangent bundle) is the same as for the ordinary sphere, namely trivial. If not, the homotopy spheres cannot be diffeomorphic to the standard sphere.}\\ Before Bott's theorem one probably had no chance to answer the question whether the stable tangent bundle of a homotopy sphere is trivial, with it, it is in half of the cases a triviality, since for $i = 3, 5,6,7$$i = 3, 5,6,7$ mod $8$$8$ there is no non-trivial stable vector bundle over $S^i$$S^i$. The remaining cases are not so easy, one needs a way to decide whether two stable vector bundles over $S^i$$S^i$ are isomorphic. Let's begin with the case $i = 4s$$i = 4s$. \\ Remark: {\em In the case $i=4s$$i=4s$ one has an invariant for stable vector bundles $E$$E$, namely the Pontrjagin classes $p_s(E)\in H^{4s}(S^i)$$p_s(E)\in H^{4s}(S^i)$. It turns out that this map is a homomorphism from the stable vector bundles over $S^{4s}$$S^{4s}$ to $H^{4s }(S^{4s})$$H^{4s }(S^{4s})$ which by choosing an orientation we identify with $\mathbb Z$$\mathbb Z$. Thus we have a homomorphism from a group isomorphic to $\mathbb Z$$\mathbb Z$ to $\mathbb Z$$\mathbb Z$, and if it is non-trivial it is an injection implying that two stable vector bundles over $S^{4s}$$S^{4s}$ are isomorphic if and only their Pontrjagin classes $p_s$$p_s$ agree. To show that the homomorphism is non-trivial (for $s>0$$s>0$) one needs a single example where this is the case. Again periodicity helps which reduces the problem to the case of bundles over $S^4$$S^4$ and $S^8$$S^8$, where one can take the tautological bundle over $S^4$$S^4$ considered as the projective line over the quaternions or over $S^8$$S^8$ considered as the projective line over the Cayley numbers. }\\ What I described in the remark is not mentioned in Kervaire-Milnor's paper. They proceed slightly differently. They refer to obstruction theory, a theory which can be used to decide whether a vector bundle is trivial. They say (with reference to earlier papers by Kervaire) that the obstruction class and the Pontrjagin class $p_s$$p_s$ are proportional by a non-zero factor and so, if the Pontrjagin class is trivial, the bundle is trivial. They finish the argument that the stable tangent bundle of a $4s$$4s$-dimensional homotopy sphere $\Sigma$$\Sigma$ is trivial by one sentence "But by the Hirzebruch signature theorem the Pontrjagin class $p_k(\Sigma)$$p_k(\Sigma)$ is a multiple of the signature $\sigma (\Sigma)$$\sigma (\Sigma)$, which is zero since $H^{2s}(\Sigma) = 0$$H^{2s}(\Sigma) = 0$."\\ Remark: {\em Let me make a short comment on this sentence. It shows that the signature theorem, which was published by Hirzebruch in his 1956 book, is at the time when Kervaire and Milnor wrote their paper so standard, that neither a reference to Hirzebruch's book is needed (which appears in the list of references but no reference is given at this place) nor the formula of the signature theorem is repeated. But one sees that besides Bott's theorem another big theorem is needed to argue here.}\\ A non-trivial theorem is also needed in the remaining cases where $i = 1,2$$i = 1,2$ mod $8$$8$ is. The cases $1$$1$ and $2$$2$ are trivial, but the higher dimensions not. Here the obstruction class sits in $\pi_{i-1}(O) = \mathbb Z/2$$\pi_{i-1}(O) = \mathbb Z/2$ (by Bott's theorem) and so again one has to find a way to distinguish the non-trivial element form $0$$0$. There is a homomorphism introduced by Hopf-Whitehead, the $J$$J$-homomorphism, from $\pi_{i-1} (O)$$\pi_{i-1} (O)$ to the stable homotopy groups $\pi_{i-1}^s$$\pi_{i-1}^s$ of spheres. Rohlin has shown that under this homomorphism the obstruction class vanishes (the authors don't give a reference to a paper by Rohlin but refer instead to an earlier paper by them \cite {Kervaire-Milnor 1958}). The argument, that also in the remaining case the stable tangent bundle is trivial is finished by applying a recent deep theorem by Adams \cite {Adams} saying that this $J$$J$-homomorphism is injective. Thus we summarize: Theorem 4.1. (Theorem 3.1 [Kervaire-Milnor]) For all homotopy spheres $\Sigma$$\Sigma$ the stable tangent bundle is trivial. ; {\em After repeating the role of Bott periodicity in the proof of this theorem I would like to comment a bit on the role of this theorem in the paper of Kervaire and Milnor and in the further development of analyzing smooth structures on a topological manifold, here the sphere. Note that - as Kervaire and Milnor mention - a homotopy sphere is, if the Poincaré conjeture is assumed, homeomorphic to the sphere and so the diffeomorphism classes of homotopy spheres would correspond to the diffemorphism classes of smooth structures on the sphere. The Poincaré conjecture was proven by Smale, Stallings and Zeeman (references) for $i \ge 5$$i \ge 5$ already before the paper by Kervaire and Milnor, and later by Freedman [Freedman] in dimension $4$$4$ and by Perelmann [Perelman] in dimension $3$$3$. On the one hand the message of the theorem is negative, one cannot use the stable tangent bundle to distinguish different smooth structures on the sphere. This leads to the interesting question, whether the same is true for arbitrary manifolds. Later in the sixties some very deep theorems in this direction were proved. On the other hand the theorem allows to develop a method to study the different smooth structures on spheres or equivalently the homotopy spheres in dimension $>4$$>4$, a method which in the following years was generalized to arbitrary manifolds. This is not the place to describe this method, which is called surgery theory. But the following can be said. Suppose that a topological manifold $M$$M$ is given. Then one can ask whether $M$$M$ admits a smooth structure. A necessary condition is that $M$$M$ has a tangent bundle, and since this is easier to analyze and essentially the same, that it has a stable tangent bundle. It turns out that in a certain sense, which should be made precise elsewhere, this is the only obstruction, again in dimension $>4$$>4$, but false in dimension $4$$4$ by the fundamental work of Donaldson, and again true in dimension $<4$$<4$ by different methods. If we assume that $M$$M$ has a smooth structure, we choose one and compare all possible other smooth structures with this. Then again the stable tangent bundle plays the deciding role (in dimension $>4$$>4$). Roughly speaking the different smooth structures on $M$$M$ correspond to the different ways to impose a stable tangent bundle on $M$$M$. Thus the understanding of stable vector bundles is what at the end is needed. This is the content of a very important theory, called $K$$K$-theory, invented by Atiyah and Hirzebruch [Atiyah-Hirzebruch]. This is a generalized cohomology theory (meaning that the Eilenberg-Steenrod axioms for ordinary cohomology are fulfilled except the dimension axiom). This is the first generalized cohomology theory and - besides stable homotopy - the most important one. To construct it is up to a certain point rather elementary. But then one comes at a point where the arguments are highly non-trivial and the central tool, which one has to apply, is Bott perodicity. It does not only give the fundamental input for completing the proof that it is a generalized homology theory, it also is the central tool for all computations. In particular, in the few cases where one can give detailed information about the different smooth structures on a manifold, always Bott's theorem is in the background - like we indicated in one aspect for the spheres.} \end {document};
{"extraction_info": {"found_math": true, "script_math_tex": 472, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 481, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9896625876426697, "perplexity": 722.3802015982784}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00525.warc.gz"}
http://philpapers.org/browse/philosophy-of-probability?uncat=1
# Philosophy of Probability Edited by Darrell Rowbottom (Lingnan University) Assistant editor: Joshua Luczak (University of Western Ontario) Material to categorize found Search inside: (import / add options)   Sort by: book pricecategoriespublication yearaddition datefirst authorviewings 1 — 50 / 115 1. Following the pioneer work of Bruno De Finetti, conditional probability spaces (allowing for conditioning with events of measure zero) have been studied since (at least) the 1950's. Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc My bibliography Export citation 2. David Atkinson & Jeanne Peijnenburg (2006). Probability All the Way Up. Synthese 153 (2):187 - 197. Richard Jeffrey's radical probabilism ('probability all the way down') is augmented by the claim that probability cannot be turned into certainty, except by data that logically exclude all alternatives. Once we start being uncertain, no amount of updating will free us from the treadmill of uncertainty. This claim is cast first in objectivist and then in subjectivist terms. Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc My bibliography Export citation 3. Gergely Bana & Thomas Durt (1997). Proof of Kolmogorovian Censorship. Foundations of Physics 27 (10):1355-1373. Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc My bibliography Export citation 4. Dror Bar-Natan (1989). Two Examples in Noncommutative Probability. Foundations of Physics 19 (1):97-104. A simple noncommutative probability theory is presented, and two examples for the difference between that theory and the classical theory are shown. The first example is the well-known formulation of the Heisenberg uncertainty principle in terms of a variance inequality and the second example is an interpretatio of the Bell paradox in terms of noncommuntative probability. Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc My bibliography Export citation 5. Jean Baratgin, David E. Over & Guy Politzer (2011). Betting on Conditionals. Thinking and Reasoning 16 (3):172-197. A study is reported testing two hypotheses about a close parallel relation between indicative conditionals, if A then B , and conditional bets, I bet you that if A then B . The first is that both the indicative conditional and the conditional bet are related to the conditional probability, P(B|A). The second is that de Finetti's three-valued truth table has psychological reality for both types of conditional— true , false , or void for indicative conditionals and win , lose (...) Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc My bibliography Export citation 6. Howard Barnum, Carl Philipp Gaebler & Alexander Wilce (2013). Ensemble Steering, Weak Self-Duality, and the Structure of Probabilistic Theories. Foundations of Physics 43 (12):1411-1427. In any probabilistic theory, we say that a bipartite state ω on a composite system AB steers its marginal state ω B if, for any decomposition of ω B as a mixture ω B =∑ i p i β i of states β i on B, there exists an observable {a i } on A such that the conditional states $\omega_{B|a_{i}}$ are exactly the states β i . This is always so for pure bipartite states in quantum mechanics, a fact (...) Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc My bibliography Export citation 7. David Bellhouse (1996). Book Reviews. [REVIEW] Philosophia Mathematica 4 (3):290-291. Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc My bibliography Export citation 8. Joseph Louis François Bertrand (1888). Calcul des Probabilités. Gauthier-Villars Et Fils. Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc Remove from this list | Translate to English | My bibliography Export citation 9. John Bigelow & Robert Pargetter (1987). An Analysis of Indefinite Probability Statements. Synthese 73 (2):361 - 370. An analysis of indefinite probability statements has been offered by Jackson and Pargetter (1973). We accept that this analysis will assign the correct probability values for indefinite probability claims. But it does so in a way which fails to reflect the epistemic state of a person who makes such a claim. We offer two alternative analyses: one employing de re (epistemic) probabilities, and the other employing de dicto (epistemic) probabilities. These two analyses appeal only to probabilities which are accessible to (...) Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc My bibliography Export citation 10. Jean-Francois Bonnefon & Denis J. Hilton (2002). The Suppression of Modus Ponens as a Case of Pragmatic Preconditional Reasoning. Thinking and Reasoning 8 (1):21 – 40. The suppression of the Modus Ponens inference is described as a loss of confidence in the conclusion C of an argument ''If A1 then C; If A2 then C; A1'' where A2 is a requirement for C to happen. It is hypothesised that this loss of confidence is due to the derivation of the conversational implicature ''there is a chance that A2 might not be satisfied'', and that different syntactic introductions of the requirement A2 (e.g., ''If C then A2'') will (...) Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc My bibliography Export citation 11. R. Bradley (2012). Multidimensional Possible-World Semantics for Conditionals. Philosophical Review 121 (4):539-571. Adams’s Thesis, the claim that the probabilities of indicative conditionals equal the conditional probabilities of their consequents given their antecedents, has proven impossible to accommodate within orthodox possible-world semantics. This essay proposes a modification to the orthodoxy that removes this impossibility. The starting point is a proposal by Jeffrey and Stalnaker that conditionals take semantic values in the unit interval, interpreting these (à la McGee) as their expected truth-values at a world. Their theories imply a false principle, namely, that the (...) Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc My bibliography Export citation 12. B. C. (1982). Philosophical Problems of Statistical Inference. Review of Metaphysics 35 (4):907-909. Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc My bibliography Export citation 13. This unique book presents a new interpretation of probability, rooted in the traditional interpretation that was current in the 17th and 18th centuries. Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc My bibliography Export citation 14. Co-Authored & Helen Beebee (2003). Probability as a Guide to Life. In David Papineau (ed.), The Roots of Reason: Philosophical Essays on Rationality, Evolution, and Probability. Oxford University Press. Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc Remove from this list | My bibliography Export citation 15. R. Eugene Collins (1977). Quantum Theory: A Hilbert Space Formalism for Probability Theory. [REVIEW] Foundations of Physics 7 (7-8):475-494. It is shown that the Hilbert space formalism of quantum mechanics can be derived as a corrected form of probability theory. These constructions yield the Schrödinger equation for a particle in an electromagnetic field and exhibit a relationship of this equation to Markov processes. The operator formalism for expectation values is shown to be related to anL 2 representation of marginal distributions and a relationship of the commutation rules for canonically conjugate observables to a topological relationship of two manifolds is (...) Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc My bibliography Export citation 16. Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc Remove from this list | My bibliography Export citation 17. D. R. Cousin (1954). Probability. Philosophical Quarterly 4 (14):82-84. Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc My bibliography Export citation 18. In previous publications on probability, I have followed I.J. Good in arguing that probability must be defined subjectively if we accept that the world is causally deterministic. In this article I go significantly beyond this position, arguing that we are forced to accept a subjective definition of probability if we use any probabilistic methods at all. In other words, all probabilistic methods tacitly assume a subjective definition of probability. Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc My bibliography Export citation 19. Simon D'Alfonso (2014). Review of 'Quitting Certainties'. [REVIEW] Philosophy in Review 34:34-36. Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc Remove from this list | My bibliography Export citation 20. David W. Green David, E. Over Robin & A. Pyne (1997). Probability and Choice in the Selection Task. Thinking and Reasoning 3 (3):209 – 235. Two experiments using a realistic version of the selection task examined the relationship between participants probability estimates of finding a counter example and their selections. Experiment 1 used everyday categories in the context of a scenario to determine whether or not the number of instances in a category affected the estimated probability of a counter-example. Experiment 2 modified the scenario in order to alter participants estimates of finding a specific counter-example. Unlike Kirby 1994a , but consistent with his proposals, (...) Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc My bibliography Export citation 21. The book contains the transcription of a course on the foundations of probability given by the Italian mathematician Bruno de Finetti in 1979 at the a oeNational Institute of Advanced Mathematicsa in Rome. Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc My bibliography Export citation 22. Mark de Rond & Iain Morley (eds.) (2010). Serendipity: Fortune and the Prepared Mind. Cambridge University Press. Machine generated contents note: Introduction. Fortune and the prepared mind Iain Morley and Mark de Rond; 1. The stratigraphy of serendipity Susan E. Alcock; 2. Understanding humans - serendipity and anthropology Richard Leakey; 3. HIV and the naked ape Robin Weiss; 4. Cosmological serendipity Simon Singh; 5. Serendipity in astronomy Andrew C. Fabian; 6. Serendipity in physics Richard Friend; 7. Liberalism and uncertainty Oliver Letwin; 8. The unanticipated pleasures of the writing life Simon Winchester. Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc My bibliography Export citation 23. This work is in two parts. The main aim of part 1 is a systematic examination of deductive, probabilistic, inductive and purely inductive dependence relations within the framework of Kolmogorov probability semantics. The main aim of part 2 is a systematic comparison of (in all) 20 different relations of probabilistic (in)dependence within the framework of Popper probability semantics (for Kolmogorov probability semantics does not allow such a comparison). Added to this comparison is an examination of (in all) 15 purely inductive (...) Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc My bibliography Export citation 24. I. Douven (2012). The Sequential Lottery Paradox. Analysis 72 (1):55-57. The Lottery Paradox is generally thought to point at a conflict between two intuitive principles, to wit, that high probability is sufficient for rational acceptability, and that rational acceptability is closed under logical derivability. Gilbert Harman has offered a solution to the Lottery Paradox that allows one to stick to both of these principles. The solution requires the principle that acceptance licenses conditionalization. The present study shows that adopting this principle alongside the principle that high probability is sufficient for rational (...) Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc My bibliography Export citation 25. J. Dubucs (ed.) (1993). Philosophy of Probability. Kluwer, Dordrecht. Philosophy of Probability provides a comprehensive introduction to theoretical issues that occupy a central position in disciplines ranging from philosophy of ... Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc Remove from this list | Translate to English My bibliography Export citation 26. Von Mises thought that an adequate account of objective probability required a condition of randomness. For frequentists, some such condition is needed to rule out those sequences where the relative frequencies converge towards definite limiting values, and where it is nevertheless not appropriate to speak of probability … [because such a sequence] obeys an easily recognizable law (von Mises, Probability, Statistics, and Truth). But is a condition of randomness required for an adequate account of probability, given the existence of decisive (...) Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc Remove from this list | Translate to English | My bibliography Export citation 27. J. Ellenberg & E. Sober (2011). Objective Probabilities in Number Theory. Philosophia Mathematica 19 (3):308-322. Philosophers have explored objective interpretations of probability mainly by considering empirical probability statements. Because of this focus, it is widely believed that the logical interpretation and the actual-frequency interpretation are unsatisfactory and the hypothetical-frequency interpretation is not much better. Probabilistic assertions in pure mathematics present a new challenge. Mathematicians prove theorems in number theory that assign probabilities. The most natural interpretation of these probabilities is that they describe actual frequencies in finite sets and limits of actual frequencies in infinite sets. (...) Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc My bibliography Export citation 28. Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc Remove from this list | Translate to English My bibliography Export citation 29. Branden Fitelson (2003). Review of I. Hacking, An Introduction to Probability and Inductive Logic. [REVIEW] Bulletin of Symbolic Logic 9 (4):5006-5008. Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc My bibliography Export citation 30. Decision under conditions of uncertainty is an unavoidable fact of life. The available evidence rarely suffices to establish a claim with complete confidence, and as a result a good deal of our reasoning about the world must employ criteria of probable judgment. Such criteria specify the conditions under which rational agents are justified in accepting or acting upon propositions whose truth cannot be ascertained with certainty. Since the seventeenth century philosophers and mathematicians have been accustomed to consider belief under uncertainty (...) Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc My bibliography Export citation 31. Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc Remove from this list | Translate to English | My bibliography Export citation 32. Marie Gaudard (1984). The Correspondence Between Credibilities and Induced Betting Rate Assignments. Foundations of Physics 14 (5):431-441. Operational statistics is an operational theory of probability and statistics which generalizes classical probability and statistics and provides a formalism particularly suited to the needs of quantum mechanics. Within this formalism, statistical inference can be accomplished using the Bayesian inference strategy. In a hierarchical Bayesian approach, a second-order probability measure, or credibility, represents degrees of belief in statistical hypotheses. A credibility determines an assignment of simple and conditioned betting rates to events in a natural way. In the setting of operational (...) Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc My bibliography Export citation 33. Christian George (1997). Reasoning From Uncertain Premises. Thinking and Reasoning 3 (3):161 – 189. Previous studies have shown that 1 participants are reluctant to accept a conclusion as certainly true when it is derived from a valid conditional argument that includes a doubtful premise, and 2 participants typically link the degree of uncertainty found in a given premise set to its conclusion. Two experiments were designed to further investigate these phenomena. Ninety adult participants in Experiment 1 were first asked to judge the validity of three conditional arguments Modus Ponens, Denial of the Antecedent, and (...) Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc My bibliography Export citation 34. Clark Glymour (2014). Poincaré's Probabilities, Kantified, Post-Modernized. Biological Theory 9 (1):113-114. Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc My bibliography Export citation 35. Clark Glymour (2001). Instrumental Probability. The Monist 84 (2):284-300. The claims of science and the claims of probability combine in two ways. In one, probability is part of the content of science, as in statistical mechanics and quantum theory and an enormous range of "models" developed in applied statistics. In the other, probability is the tool used to explain and to justify methods of inference from records of observations, as in every science from psychiatry to physics. These intimacies between science and probability are logical sports, for while we think (...) Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc My bibliography Export citation 36. Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc My bibliography Export citation 37. S. Gudder (2000). What Is Fuzzy Probability Theory? Foundations of Physics 30 (10):1663-1678. The article begins with a discussion of sets and fuzzy sets. It is observed that identifying a set with its indicator function makes it clear that a fuzzy set is a direct and natural generalization of a set. Making this identification also provides simplified proofs of various relationships between sets. Connectives for fuzzy sets that generalize those for sets are defined. The fundamentals of ordinary probability theory are reviewed and these ideas are used to motivate fuzzy probability theory. Observables (fuzzy (...) Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc My bibliography Export citation 38. Stan Gudder (1999). Observables and Statistical Maps. Foundations of Physics 29 (6):877-897. This article begins with a review of the framework of fuzzy probability theory. The basic structure is given by the σ-effect algebra of effects (fuzzy events) $\mathcal{E}{\text{ }}\left( {\Omega ,\mathcal{A}} \right)$ and the set of probability measures $M_1^ + {\text{ }}\left( {\Omega ,\mathcal{A}} \right)$ on a measurable space $\left( {\Omega ,\mathcal{A}} \right)$ . An observable $X:\mathcal{B} \to {\text{ }}\mathcal{E}{\text{ }}\left( {\Omega ,\mathcal{A}} \right)$ is defined, where \$\begin{gathered} X:\mathcal{B} \to {\text{ }}\mathcal{E}{\text{ }}\left( {\Omega ,\mathcal{A}} \right) \\ \left( {\Lambda ,{\text{ }}\mathcal{B}} \right) (...) Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc My bibliography Export citation 39. Ian Hacking (1995). The Emergence of Probability. Cambridge : Cambridge University Press. Ian Hacking here presents a philosophical critique of early ideas about probability, induction and statistical inference and the growth of this new family of ... Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc My bibliography Export citation 40. Ian Hacking (1978). Hume's Species of Probability. Philosophical Studies 33 (1):21 - 37. Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc My bibliography Export citation 41. Joseph Y. Halpern & Riccardo Pucella (2009). Evidence with Uncertain Likelihoods. Synthese 171 (1):111 - 133. An agent often has a number of hypotheses, and must choose among them based on observations, or outcomes of experiments. Each of these observations can be viewed as providing evidence for or against various hypotheses. All the attempts to formalize this intuition up to now have assumed that associated with each hypothesis h there is a likelihood function μ h , which is a probability measure that intuitively describes how likely each observation is, conditional on h being the correct hypothesis. (...) Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc My bibliography Export citation 42. Sven Ove Hansson (2010). Past Probabilities. Notre Dame Journal of Formal Logic 51 (2):207-223. The probability that a fair coin tossed yesterday landed heads is either 0 or 1, but the probability that it would land heads was 0.5. In order to account for the latter type of probabilities, past probabilities, a temporal restriction operator is introduced and axiomatically characterized. It is used to construct a representation of conditional past probabilities. The logic of past probabilities turns out to be strictly weaker than the logic of standard probabilities. Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc My bibliography Export citation 43. Sven Ove Hansson (2008). Do We Need Second-Order Probabilities? Dialectica 62 (4):525-533. Although it has often been claimed that all the information contained in second-order probabilities can be contained in first-order probabilities, no practical recipe for the elimination of second-order probabilities without loss of information seems to have been presented. Here, such an elimination method is introduced for repeatable events. However, its application comes at the price of losses in cognitive realism. In spite of their technical eliminability, second-order probabilities are useful because they can provide models of important features of the world (...) Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc My bibliography Export citation 44. Gilbert Harman (1983). Problems with Probabilistic Semantics. In Alex Orenstein & Rafael Stern (eds.), Developments in Semantics. Haven. 243-237. Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc Remove from this list | My bibliography Export citation 45. Alois Hartkämper & Heinz-Jürgen Schmidt (1983). On the Foundations of the Physical Probability Concept. Foundations of Physics 13 (7):655-672. An exact formulation of the frequency interpretation of probability is proposed on the basis of G. Ludwig's concept of physical theories. Starting from a short outline of this concept, a formal definition of weak approximate reduction is developed, which covers the reduction of probability to frequency as a special case. Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc My bibliography Export citation 46. James Hawthorne & Luc Bovens (1999). The Preface, the Lottery, and the Logic of Belief. Mind 108 (430):241 - 264. John Locke proposed a straightforward relationship between qualitative and quantitative doxastic notions: belief corresponds to a sufficiently high degree of confidence. Richard Foley has further developed this Lockean thesis and applied it to an analysis of the preface and lottery paradoxes. Following Foley's lead, we exploit various versions of these paradoxes to chart a precise relationship between belief and probabilistic degrees of confidence. The resolutions of these paradoxes emphasize distinct but complementary features of coherent belief. These features suggest principles that (...) Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc My bibliography Export citation 47. Richard Jeffrey (1996). Unknown Probabilities. Erkenntnis 45 (2-3):327 - 335. From a point of view like de Finetti's, what is the judgmental reality underlying the objectivistic claim that a physical magnitude X determines the objective probability that a hypothesis H is true? When you have definite conditional judgmental probabilities for H given the various unknown values of X, a plausible answer is sufficiency, i.e., invariance of those conditional probabilities as your probability distribution over the values of X varies. A different answer, in terms of conditional exchangeability, is offered for use (...) Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc My bibliography Export citation 48. Lisa M. Johnson & Edward K. Morris (1987). When Speaking of Probability in Behavior Analysis. Behaviorism 15 (2):107-129. Probability is not an unambiguous concept within the sciences or in vernacular language, yet it is fundamental to much of behavior analysis. The present paper examines some problems this ambiguity creates in general,as well as within the experimental analysis of behavior, in particular. As background material, we first introduce the three most common theories of probability in mathematics and science, discussing their advantages and disadvantages, and their relevance to behavior analysis. Next, we discuss the concept of probability as encountered in (...) Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc My bibliography Export citation 49. W. E. Johnson (1932). Probability: Axioms. Mind 41 (163):281-296. Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc My bibliography Export citation 50. H. W. B. Joseph (1923). Mr. Keynes on Probability. Mind 32 (128):408-431. Select appropriate categories: Or: Select a category by name Decision Theory Decision-Theoretic Frameworks Decision-Theoretic Puzzles Game Theory Topics in Decision Theory Scientific Method Evidence Confirmation Induction Probabilistic Reasoning Scientific Change Inference to the Best Explanation Theoretical Virtues Scientific Method, Misc Interpretation of Probability Chance and Objective Probability Classical Probability Frequentism Logical Probability Propensities Subjective Probability Interpretion of Probability, Misc Mathematics of Probability Axioms of Probability Infinitesimals and Probability Mathematics of Probability, Misc Probabilistic Reasoning Bayesian Reasoning Probabilistic Principles Probabilistic Frameworks Probabilistic Puzzles Subjective Probability Phil of Statistics Applications of Probability Probability in the Philosophy of Religion Probability in the Physical Sciences Applications of Probability, Misc Decision Theory Probabilistic Reasoning Phil of Probability, Misc
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8043593764305115, "perplexity": 3426.577161068576}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422122238667.96/warc/CC-MAIN-20150124175718-00101-ip-10-180-212-252.ec2.internal.warc.gz"}
http://rcd.ics.org.ru/archive/volume-13-number-2/
0 2013 Impact Factor # Volume 13, Number 2, 2008 Kozlov V. V. Lagrange’s Identity and Its Generalizations Abstract The famous Lagrange identity expresses the second derivative of the moment of inertia of a system of material points through the kinetic energy and homogeneous potential energy. The paper presents various extensions of this brilliant result to the case 1) of constrained mechanical systems, 2) when the potential energy is quasi-homogeneous in coordinates and 3) of continuum of interacting particles governed by the well-known Vlasov kinetic equation. Keywords: Lagrange's identity, quasi-homogeneous function, dilations, Vlasov’s equation Citation: Kozlov V. V., Lagrange’s Identity and Its Generalizations, Regular and Chaotic Dynamics, 2008, vol. 13, no. 2, pp. 71-80 DOI:10.1134/S1560354708020019 Kulczycki M. Noncontinuous Maps and Devaney’s Chaos Abstract Vu Dong Tô has proven in [1] that for any mapping $f: X \to X$, where $X$ is a metric space that is not precompact, the third condition in the Devaney’s definition of chaos follows from the first two even if $f$ is not assumed to be continuous. This paper completes this result by analysing the precompact case. We show that if $X$ is either finite or perfect one can always find a map $f: X \to X$ that satisfies the first two conditions of Devaney’s chaos but not the third. Additionally, if $X$ is neither finite nor perfect there is no $f: X \to X$ that would satisfy the first two conditions of Devaney’s chaos at the same time. Keywords: Devaney’s chaos, noncontinuous map, precompact space Citation: Kulczycki M., Noncontinuous Maps and Devaney’s Chaos, Regular and Chaotic Dynamics, 2008, vol. 13, no. 2, pp. 81-84 DOI:10.1134/S1560354708020020 Gudimenko A. I. Dynamics of Perturbed Equilateral and Collinear Сonfigurations of Three Point Vortices Abstract Using the technique of asymptotic expansions, we calculate trajectories of three point vortices in the vicinity of stable equilateral or collinear configurations. We show that in an appropriate rotating coordinate system each vortex moves in an elliptic orbit. The orbits of the vortices have equal eccentricities. The angle and ratio between the major axes of any two orbits have a simple analytic representation. Keywords: point vortices, integrable dynamics, perturbation theory Citation: Gudimenko A. I., Dynamics of Perturbed Equilateral and Collinear Сonfigurations of Three Point Vortices, Regular and Chaotic Dynamics, 2008, vol. 13, no. 2, pp. 85-95 DOI:10.1134/S1560354708020032 Markeev A. P. The Dynamics of a Rigid Body Colliding with a Rigid Surface Abstract Basic investigation techniques, algorithms, and results are presented for nonlinear oscillations and stability of steady rotations and periodic motions of a rigid body, colliding with a rigid surface, in a uniform gravity field. Keywords: rigid body, constraints, collision, stability Citation: Markeev A. P., The Dynamics of a Rigid Body Colliding with a Rigid Surface, Regular and Chaotic Dynamics, 2008, vol. 13, no. 2, pp. 96-129 DOI:10.1134/S1560354708020044 Chierchia L. Kolmogorov’s 1954 Paper on Nearly-Integrable Hamiltonian Systems Abstract Following closely Kolmogorov’s original paper [1], we give a complete proof of his celebrated Theorem on perturbations of integrable Hamiltonian systems by including few "straightforward" estimates. Keywords: Kolmogorov’s theorem, KAM theory, small divisors, Hamiltonian systems, perturbation theory, symplectic transformations, nearly-integrable systems Citation: Chierchia L., Kolmogorov’s 1954 Paper on Nearly-Integrable Hamiltonian Systems, Regular and Chaotic Dynamics, 2008, vol. 13, no. 2, pp. 130-139 DOI:10.1134/S1560354708020056 Back to the list
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8934784531593323, "perplexity": 662.9320882808498}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875148163.71/warc/CC-MAIN-20200228231614-20200229021614-00443.warc.gz"}
https://gamedev.stackexchange.com/questions/153739/what-are-the-meanings-of-theta-and-gamma-in-this-equation
# What are the meanings of theta and gamma in this equation? I'm having a bit of trouble understanding the notation from this paper: http://graphics.cs.aueb.gr/graphics/docs/papers/GraphiCon09_PapadopoulosPapaioannou.pdf float3 surface = exp(dot(-y, d)) * fromSurface; float3 viewer = exp(dot(-y, d)) * fromViewer; float3 final = dot(photonIntensity, dot(mie(0), dot(viewer, surface)); I know I = intensity, mie is mie scattering, but what does the wierd 0 stand for? And what is y? Is d = distance? Second go at it: float Attenuation = 130.0; float MieAnisotropy = 0.7; float MiePhase(float CosTheta, float Anisotropy) { const float F = 1.0 / (4.0 * PI); return F * ((1.0 - pow(Anisotropy, 2.0)) / pow(1.0 - 2.0 * Anisotropy * CosTheta + pow(Anisotropy, 2.0), 1.5)); } float3 ViewPosition = normalize(WorldPosition - CameraPosition); float CosViewSunAngle = dot(ViewPosition, SunDirection); float Mie = MiePhase(CosViewSunAngle, MieAnisotropy); float FromSurface = exp(dot(-Attenuation, DistanceFromSurface)); float FromViewer = exp(dot(-Attenuation, DistanceFromViewer)); float PhotonIntensity = 1.0; float Final = dot(PhotonIntensity, dot(Mie, dot(FromViewer, FromSurface)); • The weird 0 is a Greek letter theta. The weird y is a Greek letter gamma. The paper should tell you what they are but I'm not going to read it.... – user253751 Jan 30 '18 at 2:17 Immediately after the first appearance of the equation in the paper, the use of $\theta$ is described as The use of $\gamma$ is first described just above; it "Medium" in this sense means the substance through which the photon is traveling (not "medium" as in "low, medium, high"), so this is basically a constant for tweaking. Later in the paper, an example value for $\gamma$ is given (in the "Implementation Details" section). And yes, $d$ is distance from the respective points.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8325214982032776, "perplexity": 4914.117727281107}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145713.39/warc/CC-MAIN-20200222180557-20200222210557-00264.warc.gz"}
http://math.stackexchange.com/questions/61714/intuition-for-smooth-manifolds
# Intuition for Smooth Manifolds Consider the graphs of the functions $f_1(x) = |x|$, and $f_2(x) = x$ under the subspace topology of $\mathbb{R}^2$. Both of these graphs are smooth manifolds, just pick coordinate charts to be $(x, f_i(x)) \leftrightarrow x$. Moreover, they are diffeomorphic via the map $(x, f_1(x)) \rightarrow (x, f_2(x))$. This seems to clash with my intuition. For example, the graph of $f_1$ has a corner, so it "shouldn't" be smooth, much less diffeomorphic to $f_2$, which is just a straight line. Can someone explain what's going on here? In light of these examples, how should I visualize smooth manifolds and diffeomorphisms? - There's an old question on the first manifold. I think Prof Wong's answer could help a lot, here. – Dylan Moreland Sep 4 '11 at 2:17 I think maybe your intuition isn't fully thought-out. You're giving a set a smooth structure based on a bijection with $\mathbb R$ -- this is not a very natural thing to do. You can make the Cantor set a smooth manifold diffeomorphic to $S^n$ or $\mathbb R^n$ for any $n \geq 2$ using this technique, so it's not particularly interesting. – Ryan Budney Sep 4 '11 at 2:37 Your intuition is broken because the inclusion of the graph of $f_1$ (with the smooth structure you describe) into $\mathbb{R}^2$ isn't smooth. The graph of $f_1$, as a subset of $\mathbb{R}^2$ with its usual smooth structure, is not a smooth manifold for exactly the intuitive reason. You could do the same thing with the set $T=\{(x,g(x))\}$ for any continuous $g$. The reason this seems non-intuitive is that you haven't used the smooth structure of $\mathbb{R}^2$ at all in defining the smooth structure of $T$; you've just taken the smooth structure on $\mathbb{R}$ and "transported" it onto $T$. Another way to say it: The intuitive non-smoothness of $T$ (for, say, $g(x)=|x|$) comes from looking at the way that $T$ is sitting in $\mathbb{R}^2$. Abstractly, it's very much the same as the following situation, which may be clearer. The integers $\mathbb{Z}$ form a group under addition. The set $T = \{17,59\} \subset \mathbb{Z}$ is not a subgroup of $\mathbb{Z}$ under addition. It is true that $T$ can be made into group by transporting the structure of a 2-element group onto $T$, but you don't expect that this group will have anything to do with $\mathbb{Z}$ as a group anymore since you didn't use the group structure on $\mathbb{Z}$ to define the group structure on $T$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8520951867103577, "perplexity": 167.87819069103293}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049274985.2/warc/CC-MAIN-20160524002114-00045-ip-10-185-217-139.ec2.internal.warc.gz"}
https://samacheerguru.com/samacheer-kalvi-10th-maths-chapter-8-ex-8-2/
# Samacheer Kalvi 10th Maths Solutions Chapter 8 Statistics and Probability Ex 8.2 ## Tamilnadu Samacheer Kalvi 10th Maths Solutions Chapter 8 Statistics and Probability Ex 8.2 10th Maths Exercise 8.2 Samacheer Kalvi Question 1. The standard deviation and coefficient of variation of a data are 1.2 and 25.6 respectively. Find the value of mean. Solution: Co-efficient of variation C.V. = $$\mathrm{C.V}=\frac{\sigma}{\overline{x}} \times 100$$ 10th Maths Exercise 8.2 Question 2. The standard deviation and coefficient of variation of a data are 1.2 and 25.6 respectively. Find the value of mean. Solution: Class 10th Math Exercise 8.2 Question 3. If the mean and coefficient of variation of a data are 15 and 48 respectively, then find the value of standard deviation. Solution: Ex 8.2 Class 10 Question 4. If n = 5 , $$\overline{x}$$ = 6, $$\Sigma x^{2}$$ = 765, then calculate the coefficient of variation. Solution: 8.2 Class 10 Question 5. Find the coefficient of variation of 24, 26, 33, 37, 29,31. Solution: Exercise 8.2 Class 10 Question 6. The time taken (in minutes) to complete a homework by 8 students in a day are given by 38, 40, 47, 44, 46, 43, 49, 53. Find the coefficient of variation. Solution: 10th Maths Statistics And Probability Question 7. The total marks scored by two students Sathya and Vidhya in 5 subjects are 460 and 480 with standard deviation 4.6 and 2.4 respectively. Who is more consistent in performance? Solution: 10th Class Exercise 8.2 Question 8. The mean and standard deviation of marks obtained by 40 students of a class in three subjects Mathematics, Science and Social Science are given below. Which of the three subjects shows highest variation and which shows lowest variation in marks? Solution: Science subject shows highest variation. Social science shows lowest variation. 10th Math 8.2 Question 9. The temperature of two cities A and B in a winter season are given below. Find which city is more consistent in temperature changes? Solution: ∴ Co-efficient of variation of City A is less than C.V of City B. ∴ City A is more consistent.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.893997311592102, "perplexity": 1287.4171366966318}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305052.56/warc/CC-MAIN-20220127012750-20220127042750-00225.warc.gz"}
http://math.stackexchange.com/questions/52263/a-ball-is-thrown-where-does-it-bounce/52271
# A ball is thrown, where does it bounce? Firstly sorry if this has been asked before. I'm trying to work out a simple graphical equation that'll give me the x of the bounce point when y=0 (as it's hit the ground). I'm only after the first bounce and the peak of the second bounce (so essentially it's like throwing a ball at a wall, getting it's first bounce and the point at which it hits the wall). Gravity and the weight of the ball will always be constant, there will be no air resistance and the only variables will be the height from which the ball is thrown and the speed it's thrown at. Can you help me out? - @Ahmed I do not fully understand your question. I am making a number of assumptions here; a better answer can be provided if you make the question clearer. I am assuming that you are throwing a ball horizontally with a velocity $u$ from a height $h$. (Acceleration to gravity is $g \approx 9.81$.) In this case, the $x$ coordinate of the first bounce is given by the formula $u \sqrt{\frac{2h}{g}}$. I am also assuming that the collision with the ground is perfectly elastic (i.e., it rebounds with the same speed). In this case, the ball will reach a maximum height of $h$ after the first bounce –  Srivatsan Jul 18 '11 at 22:35 Let me make sure I understand. We will throw a ball with an initial height and velocity velocity (note that I mean both speed and direction). We assume no obstructions other than a perfectly flat ground, and any and all collisions will be perfectly elastic (the initial energy of the ball is conserved). Is that correct? –  mixedmath Jul 18 '11 at 22:38 @Srivatsan: Why don't you try to answer or ask one or two questions to get the required 50 rep so that you can comment properly and not bother the moderators with the "comments posted as answers"? –  t.b. Jul 18 '11 at 22:39 (this is Srivatsan's comment continued): is given by the formula $u \sqrt{\frac{2h}{g}}$. I am also assuming that the collision with the ground is perfectly elastic (i.e., it rebounds with the same speed). In this case, the ball will reach a maximum height of $h$ after the first bounce. –  Zev Chonoles Jul 18 '11 at 23:15 Sorry for not explaining it correctly. I should have mentioned that I want some sort of gravity constant in there so that when it rebounds, it rebounds at a degraded height. The path of the ball will be on a 2D axis, so imagine the path across your screen going from left (where we throw it from), bouncing about 3/4 of the way and touching the right side of your screen. –  Ahmed Nuaman Jul 19 '11 at 6:30 Suppose a ball is launched at time $t=0$ starting at $x=0$ and $y=y_0$ with initial velocisty $v_{0,x}$ and $v_{0,y}$. Then, after a time $t$, the $y$-coordinate of the ball will be (assuming constant gravitational force) $$y=-1/2gt^2+v_{0,y}t+y_0,$$ where $g$ is the (magnitude) of the acceleration due to gravity at the surface of the earth. We are interested when the ball lands on the ground, that is, when $y=0$. Setting $y=0$ into the above equation and solving for $t$, we find $$t=\frac{1}{g}\left( v_{0,y}+\sqrt{v_{0,y}^2+2gy_0}\right)$$ is the only positive root of the resulting equation. On the other hand, after a time $t$, the $x$-cooridnate of the ball will be $$x=v_{0,x}t.$$ Thus, to find the $x$-coordinate when the ball hits the ground, we need to merely plug in our result from above. We obtain $$x=\frac{v_{0,x}}{g}\left( v_{0,y}+\sqrt{v_{0,y}^2+2gy_0}\right) .$$ At the peak of the bounce, the $y$-coordinate will not be changing, that is $y'(t)=0$, which give us the following equation: $$0=-gt+v_{0,y}.$$ Solving for $t$ yields: $$t=v_{0,y}/g.$$ To find the height of the ball at this time we just plug this $t$ value into $y(t)$: $$y=\frac{-v_{0,y}^2}{2g}+\frac{v_{0,y}^2}{g}=\frac{v_{0,y}^2}{2g}.$$ Of course, under the assumption that the ball loses no energy after the first bounce, this will also be the height of the ball at the second peak. - Amazing! Will pipe this into my code tonight :) –  Ahmed Nuaman Jul 19 '11 at 6:33 \begin{align} s &= \frac{1}{2}a\cdot t^2 + v_0\cdot t + s_0 \\ v &= a\cdot t + v_0 \\ a &= a \end{align} Here's a little trick that I like to do. We can quickly find out how high the ball reaches using energy (take the initial vertical speed and height and calculate the starting energy: $1/2 \cdot mv^2 + mgh$. Solve for height H by setting equal to $mgH$. Now we have the height. How long does it take to hit the ground? One could solve for $\frac{1}{2} m V^2$ to see bottom velocity and then use $v = aT$ to do it entirely in your head. No energy is lost, so the time it takes for one parabolic path is $2T$. This quickly allows us to solve for any number of bounces, and with only mental computation.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9641048908233643, "perplexity": 211.53734573093925}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00403-ip-10-147-4-33.ec2.internal.warc.gz"}
http://mathoverflow.net/questions/53354/when-are-unions-of-isomorphic-groups-isomorphic/53355
# When are unions of isomorphic groups isomorphic? I was thinking about how to prove $\operatorname{Br}(K)\cong H^2(\operatorname{Gal}(\bar{K}/K),\bar{K}^*)$ without having to introduce inductive limits and all the profinite stuff. So, I started wondering if the conditions of a direct system could be weakened for the cathegory of abelian groups in a way that isomorphisms would be still preserved. This brought me to the following general question: Let $G$ and $H$ be two abelian groups, not necessarily finite, $I$ an index set and $(G_i) _{i\in I}$ and $(H_i)_{i\in I}$ families of subgroups respectevely of $G$ and $H$ such that (1) $\forall i\in I: G_i \cong H_i$ and (2) $\bigcup_{i\in I}G_i=G$ and $\bigcup_{i\in I}H_i=H$. Question 1: Can we conclude that $G\cong H$? Question 2: If yes, can we drop "abelian"? EDIT: I forgot to mention that the $G_i$ (and $H_i$) are also assumed to be distinct subgroups. - How about $G_i = G \cong \mathbb Z$ for all $i$, and $H = \mathbb Q$ and $H_i = \frac{1}{i!} \mathbb Z$? –  j.p. Jan 26 '11 at 13:46 Consider $G_i=G=\mathbf{Z}/2\mathbf{Z}$ and $H=G \times G$ and $H_1=<(1,0)>, H_2=<(0,1)>$ and $H_3=<(1,1)>$, so you need some further assumptions. –  Guntram Jan 26 '11 at 13:46 Even $I$ countable and $G_i$ finite abelian for all $i\in I$ doesn't help: take $G_0=H_0=0$, $G_{i+1} = G_i \times Z_{p^i}$ and $H_i$ to be the subgroup $p\cdot H_{i+1}$ ($H_{i+1} \cong G_{i+1}$). $G$ has then an element of order $p$ that is not a $p$-th power, whereas every element of $H$ has a $p$-th root. –  Someone Jan 26 '11 at 14:20 Thank you for the insightful comments! –  efq Jan 26 '11 at 14:30 For a counterexample, let $G_i=\mathbb{Z}$ be the integers and let $H_i=\frac1i\mathbb{Z}$, for positive natural numbers $i$. The union $\bigcup_i G_i=\mathbb{Z}$, but $\bigcup_i H_i=\mathbb{Q}$. For the revised question, where you want $G_i$ and $H_i$ distinct, there are still counterexamples, such as $G_i=i\mathbb{Z}$ and $H_i=\frac1i\mathbb{Z}$. On a positive note, if you have a bit more coherence in your isomorphisms, then you can make the affirmative conclusion. That is, if we can find particular isomorphisms $\pi_i:G_i\cong H_i$ which agree on their common domains, then they will build together into an isomorphism of $G$ and $H$. That is, what you want is not merely that $G_i\cong H_i$, but rather that the way that $G_i$ sits inside $G$ is the same as the way $H_i$ sits inside $H$. More generally, if $I$ is not just a naked index set, but is a directed set, such that when $i\lt j$ in this order then we have maps $G_i\to G_j$ and $H_i\to G_j$ and the isomorphisms $G_i\cong G_j$ make a commutative system, then the direct limit $G$ of the $G_i$'s will be isomorphic to the direct limit $H$ of the $H_i$'s by universal property arguments. Just one question: In the less general case, where the isomorphisms $\pi_i$ agree on their common domains, how do they build into an isomorphism $G\cong H$ constructively? –  efq Jan 26 '11 at 16:37 I meant that if we assume also that any two objects in the union appear together in some piece $G_i$, then the set-theoretic union of the piece-wise isomorphisms is literally an isomorphism of the unions. This is because being an isomorphism on the union sets is a local property, which will be preserved to unions, since $\pi(a+b)=\pi(a)+\pi(b)$ can be witnessed once you have $a$ and $b$ together, and similarly for injectivity and surjectivity. –  Joel David Hamkins Jan 26 '11 at 17:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9678571820259094, "perplexity": 162.7772980520359}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375096944.75/warc/CC-MAIN-20150627031816-00127-ip-10-179-60-89.ec2.internal.warc.gz"}
https://socratic.org/questions/does-pressure-increase-as-temperature-increases-in-a-gas-at-a-constant-volume
Chemistry Topics # Does pressure increase as temperature increases in a gas at a constant volume? Jun 5, 2017 Yes. #### Explanation: This phenomenon is readily explained by the kinetic-molecular theory of gases. Pressure of a gas is caused by the gas particles colliding with the walls of its container. Simply put, because there is not a change in the volume, the temperature increase causes more collisions with the walls per unit time (as temperature increases, the average kinetic energy and thus the speed of the gas particles increases). Also because they're moving faster, the momentum with which they strike the container walls also increases. A greater number of collisions and the force of those collisions causes the pressure to increase. ##### Impact of this question 1010 views around the world
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9630936980247498, "perplexity": 463.0366480055822}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514576047.85/warc/CC-MAIN-20190923043830-20190923065830-00521.warc.gz"}
https://motls.blogspot.com/2017/10/a-project-for-you-anti-unruhology.html
## Tuesday, October 24, 2017 ... // ### A project for you: anti-Unruhology Off-topic, web: Some arXiv preprints may be converted from PDF to nice HTML with maths using arXiv Vanity. More info. Imagine that you're a grad student, postdoc, or a Milner prize winner who feels that his or her number of physics projects is limited now. I think that you should do a homework exercise and write a paper – as revolutionary a paper as possible – according to the following sketch. Analyze the quantization of QFTs and quantum gravity – or vacua of string theory – on the spacelike, hyperbolic slices in the Minkowski space$x_\mu x^\mu = R^2.$ If you do it right, you should conclude with some insights about • the black hole complementarity – the refusal of different slices to be independent – and therefore the information loss puzzle • the horizon degrees of freedom and the Bekenstein-Hawking entropy OK, why is it interesting and what it is? You should start by using the coordinate $R^2 = x_\mu x^\mu$ as your time. On these equal-time slices with a fixed $R$, you should write your canonical coordinates and momenta for a quantum field. Try to define the Hamiltonian as the operator increasing the value of $R$. Note that for different $R$, these slices aren't quite identical – they have a different curvature. But there's a one-to-one map between the Hilbert spaces defined on these slices. The hyperbolic slices have the isometry $SO(D-1,1)$ – basically the full Lorentz group without the Poincaré translations – and its boost-like generators behave as "momenta" on the hyperbolic slice. In the context of the black hole information puzzle, one may define "strange slices" of the curved black hole spacetimes (and their Penrose diagrams) where some information seems to be doubled. Inside the black hole, these slices contain "all the information about the interior" i.e. the matter that has already fell to the black hole. On the other hand, the same space-like hypersurfaces also cross most of the Hawking radiation. It's been a part of the lore for a few decades (which has been mostly proven qualitatively but it's still not understood too well "how it really works") that at these slices, quantum gravity should exhibit some kind of non-locality or black hole complementarity – the part of the slice that is inside the black hole shouldn't be quite independent from the slice that is outside. The exterior part of the slice should qualitatively behave just like the slices in the nearly flat space. One may naturally guess that the "strange phenomena" of quantum gravity only take place at the interior part of the slice or the relationship between the interior part and the exterior part. A nice thing about the hyperbolic slices of the flat space that I started with is that they're a good model of the "long-term" evolution of some points inside the black hole, at a fixed radius of the orthogonal two sphere. Just like you need to "accelerate away" from the black hole if you want to stay a meter above the event horizon (and you may spend a very long time over there, the time is comparable to the Hawking evaporation time), you need to consider parts of a "curved space-like slice" if you wanted to stay inside the black hole, at a fixed distance from the horizon. Because this surface is spacelike – note that the time and space are interchanged inside the black hole, in some way – a massive object can't really stay at a fixed place. You need the equivalent of the superluminal motion. Some special phenomena at the hyperbolic slice of the flat Minkowski space should know a lot about the "strange phenomena" of the black hole interior – in the same way in which the Unruh radiation is a simplified toy model for the Hawking radiation. In particular, if there's some black hole complementarity – violation of the rule that spacelike-separated regions are completely independent and commuting with each other – this complementarity should be already visible, in some simplified way, on the hyperbolic slices of the flat space. Even though the slice may have an infinite proper volume, the amount of information it may store should be restricted in some way. The question is why and how. The curvature of this space-like hypersurface itself should already tell us that different parts of the hyperboloid are "less independent" from each other than they are believed to be at a flat slice. This brings me to the second point. There exists a nice old argument of mine – also using hyperbolic slices – why the entropy in a region is bounded roughly by the Bekenstein-Hawking entropy $S=A/4G$. How does it happen that the volume-proportional degrees of freedom are really absent and only the surface contributes? Take a sphere in the flat 3D space, a slice of the Minkowski space. You may consider a flat slice describing the phenomena inside this sphere. But you may also slice the 4D spacetime by the hyperbolic slices $H^3$ with a fixed curvature – similar to the shape of the mass shell in the momentum space. Because most of the hyperbolic slice is "nearly null", it has a very small proper volume. In fact, when the region of the hyperboloid is much larger than the curvature radius, the proper volume scales like$V = A \cdot R_{\rm curvature}$ i.e. the product of the proper surface and the curvature radius of the hyperboloid. The very analogous scaling exists in the anti de Sitter space and is a way to heuristically explain why holography isn't so shocking in the anti de Sitter space – the volume of the slices scales with the surface, anyway. Our story is just a counterpart of that argument, a counterpart using a purely space-like slice. The usual low-energy description with quantum gravity neglected only starts to break down once $R_{\rm curvature}$ approaches the Planck length. In this limit, when you assume the simplest density of entropy per unit proper volume – one bit or nat per proper Planck volume – you will get the total entropy that is of order $S=A/4G$. So there should exist some "manifestation of holography" that also applies to the hyperbolic slices of the flat Minkowski space discussed from the beginning of this blog post. The degrees of freedom on such slices should behave holographically – and be effectively stored at the boundaries of the hyperboloid. Why is it so that the big volume of these hyperbolic slices doesn't contribute independent degrees of freedom? The reason must be analogous to the derivation of the thermal Unruh radiation – which is the radiation seen by observers moving along time-like hyperbolic trajectories in the flat spacetime – except that some of the logic must be reversed. The Unruh thermal radiation may be derived from the relationship between the two different vacua – ground states of the "two different Hamiltonians" (the regular time translations in the Minkowski space; and the time translations of the Rindler space which are boosts of the Minkowski space). Similarly, on the hyperbolic slices, you should define the "regular momentum" as well as the "Rindler-like momentum" corresponding to the boosts of the Minkowski space. At later times, general solutions inside the future light cone could be written in terms of some eigenstates of both types of the momentum operators. There should be something like a Bogoliubov transformation. And that counterpart of the Bogoliubov transformation should relate the descriptions based on the two slicings – flat slices and hyperbolic slices – in a way that is analogous to the Unruh story in the Rindler space. One should see some many-to-one maps between the natural Hilbert spaces and derive some seed of the black hole complementarity which would be the master toy model that is used in all black holes. I believe that the hyperbolic space-like slices contain many fewer degrees of freedom than expected from locality because the condition of a restricted momentum (which is really a boost generator in the Minkowski space) constrains the size of the physical Hilbert space immensely. If the story above doesn't look complete to you, it's because it isn't really complete at this moment. I have more – and some equations – than what I wrote above but what I have isn't complete, either. Maybe there's a chance that someone finds the full story, finds all the "analogous new things" to the Unruhology that may be derived on the hyperbolic space-like hypersurfaces as slices of the Minkowski space. I am talking about some elaboration of a formalism that basically assumes the effective field theory. At the end, the effective field theory can't be a complete description that tells you all the details about the microstates in quantum gravity – you need the full theory of quantum gravity, a.k.a. string/M-theory, for that. But the qualitative features of that full story must have some interpretation in terms of the local effective field theory. The local effective field theory must vaguely understand the character of its own breakdown. It would be fun if some or numerous grad students, postdocs, and Milner Prize winners tried to write a term paper clarifying how far they can get in analyzing all these situations and ideas. ;-) At the end, I believe that the building blocks may be combined so rationally that one may construct a proof of the black hole complementarity and other novel phenomena in quantum gravity. Strict and naive locality we're trained to believe from the Minkowski space must be "restricted" in some way in quantum gravity at the end but the reasons and basic character of this restriction may be understandable in terms of clever enough slices and geometry applied to the effective field theories, I think. Incidentally, Michael Dine whom I know well from Santa Cruz etc., also as a once-time anti-anthropic co-author, got the latest Sakurai Prize. Well-deserved, congratulations, Mike.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8369479775428772, "perplexity": 418.8654432827875}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526408.59/warc/CC-MAIN-20190720024812-20190720050812-00240.warc.gz"}
https://www.nag.com/numeric/nl/nagdoc_latest/flhtml/f07/f07bdf.html
# NAG FL Interfacef07bdf (dgbtrf) ## ▸▿ Contents Settings help FL Name Style: FL Specification Language: ## 1Purpose f07bdf computes the $LU$ factorization of a real $m×n$ band matrix. ## 2Specification Fortran Interface Subroutine f07bdf ( m, n, kl, ku, ab, ldab, ipiv, info) Integer, Intent (In) :: m, n, kl, ku, ldab Integer, Intent (Out) :: ipiv(min(m,n)), info Real (Kind=nag_wp), Intent (Inout) :: ab(ldab,*) #include <nag.h> void f07bdf_ (const Integer *m, const Integer *n, const Integer *kl, const Integer *ku, double ab[], const Integer *ldab, Integer ipiv[], Integer *info) The routine may be called by the names f07bdf, nagf_lapacklin_dgbtrf or its LAPACK name dgbtrf. ## 3Description f07bdf forms the $LU$ factorization of a real $m×n$ band matrix $A$ using partial pivoting, with row interchanges. Usually $m=n$, and then, if $A$ has ${k}_{l}$ nonzero subdiagonals and ${k}_{u}$ nonzero superdiagonals, the factorization has the form $A=PLU$, where $P$ is a permutation matrix, $L$ is a lower triangular matrix with unit diagonal elements and at most ${k}_{l}$ nonzero elements in each column, and $U$ is an upper triangular band matrix with ${k}_{l}+{k}_{u}$ superdiagonals. Note that $L$ is not a band matrix, but the nonzero elements of $L$ can be stored in the same space as the subdiagonal elements of $A$. $U$ is a band matrix but with ${k}_{l}$ additional superdiagonals compared with $A$. These additional superdiagonals are created by the row interchanges. ## 4References Golub G H and Van Loan C F (1996) Matrix Computations (3rd Edition) Johns Hopkins University Press, Baltimore ## 5Arguments 1: $\mathbf{m}$Integer Input On entry: $m$, the number of rows of the matrix $A$. Constraint: ${\mathbf{m}}\ge 0$. 2: $\mathbf{n}$Integer Input On entry: $n$, the number of columns of the matrix $A$. Constraint: ${\mathbf{n}}\ge 0$. 3: $\mathbf{kl}$Integer Input On entry: ${k}_{l}$, the number of subdiagonals within the band of the matrix $A$. Constraint: ${\mathbf{kl}}\ge 0$. 4: $\mathbf{ku}$Integer Input On entry: ${k}_{u}$, the number of superdiagonals within the band of the matrix $A$. Constraint: ${\mathbf{ku}}\ge 0$. 5: $\mathbf{ab}\left({\mathbf{ldab}},*\right)$Real (Kind=nag_wp) array Input/Output Note: the second dimension of the array ab must be at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}\right)$. On entry: the $m×n$ matrix $A$. The matrix is stored in rows ${k}_{l}+1$ to $2{k}_{l}+{k}_{u}+1$; the first ${k}_{l}$ rows need not be set, more precisely, the element ${A}_{ij}$ must be stored in $ab(kl+ku+1+i-j,j)=Aij for ​max(1,j-ku)≤i≤min(m,j+kl).$ See Section 9 in f07baf for further details. On exit: if ${\mathbf{info}}\ge {\mathbf{0}}$, ab is overwritten by details of the factorization. The upper triangular band matrix $U$, with ${k}_{l}+{k}_{u}$ superdiagonals, is stored in rows $1$ to ${k}_{l}+{k}_{u}+1$ of the array, and the multipliers used to form the matrix $L$ are stored in rows ${k}_{l}+{k}_{u}+2$ to $2{k}_{l}+{k}_{u}+1$. 6: $\mathbf{ldab}$Integer Input On entry: the first dimension of the array ab as declared in the (sub)program from which f07bdf is called. Constraint: ${\mathbf{ldab}}\ge 2×{\mathbf{kl}}+{\mathbf{ku}}+1$. 7: $\mathbf{ipiv}\left(\mathrm{min}\phantom{\rule{0.125em}{0ex}}\left({\mathbf{m}},{\mathbf{n}}\right)\right)$Integer array Output On exit: the pivot indices that define the permutation matrix. At the $\mathit{i}$th step, if ${\mathbf{ipiv}}\left(\mathit{i}\right)>\mathit{i}$ then row $\mathit{i}$ of the matrix $A$ was interchanged with row ${\mathbf{ipiv}}\left(\mathit{i}\right)$, for $\mathit{i}=1,2,\dots ,\mathrm{min}\phantom{\rule{0.125em}{0ex}}\left(m,n\right)$. ${\mathbf{ipiv}}\left(i\right)\le i$ indicates that, at the $i$th step, a row interchange was not required. 8: $\mathbf{info}$Integer Output On exit: ${\mathbf{info}}=0$ unless the routine detects an error (see Section 6). ## 6Error Indicators and Warnings ${\mathbf{info}}<0$ If ${\mathbf{info}}=-i$, argument $i$ had an illegal value. If ${\mathbf{info}}=-999$, dynamic memory allocation failed. See Section 9 in the Introduction to the NAG Library FL Interface for further information. An explanatory message is output, and execution of the program is terminated. ${\mathbf{info}}>0$ Element $⟨\mathit{\text{value}}⟩$ of the diagonal is exactly zero. The factorization has been completed, but the factor $U$ is exactly singular, and division by zero will occur if it is used to solve a system of equations. ## 7Accuracy The computed factors $L$ and $U$ are the exact factors of a perturbed matrix $A+E$, where $|E|≤c(k)εP|L||U| ,$ $c\left(k\right)$ is a modest linear function of $k={k}_{l}+{k}_{u}+1$, and $\epsilon$ is the machine precision. This assumes $k\ll \mathrm{min}\phantom{\rule{0.125em}{0ex}}\left(m,n\right)$. ## 8Parallelism and Performance f07bdf is threaded by NAG for parallel execution in multithreaded implementations of the NAG Library. f07bdf makes calls to BLAS and/or LAPACK routines, which may be threaded within the vendor library used by this implementation. Consult the documentation for the vendor library for further information. Please consult the X06 Chapter Introduction for information on how to control and interrogate the OpenMP environment used within this routine. Please also consult the Users' Note for your implementation for any additional implementation-specific information. The total number of floating-point operations varies between approximately $2n{k}_{l}\left({k}_{u}+1\right)$ and $2n{k}_{l}\left({k}_{l}+{k}_{u}+1\right)$, depending on the interchanges, assuming $m=n\gg {k}_{l}$ and $n\gg {k}_{u}$. A call to f07bdf may be followed by calls to the routines: • f07bef to solve $AX=B$ or ${A}^{\mathrm{T}}X=B$; • f07bgf to estimate the condition number of $A$. The complex analogue of this routine is f07brf. ## 10Example This example computes the $LU$ factorization of the matrix $A$, where $A= ( -0.23 2.54 -3.66 0.00 -6.98 2.46 -2.73 -2.13 0.00 2.56 2.46 4.07 0.00 0.00 -4.78 -3.82 ) .$ Here $A$ is treated as a band matrix with one subdiagonal and two superdiagonals. ### 10.1Program Text Program Text (f07bdfe.f90) ### 10.2Program Data Program Data (f07bdfe.d) ### 10.3Program Results Program Results (f07bdfe.r)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 93, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9469884037971497, "perplexity": 1981.7532172934486}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178385529.97/warc/CC-MAIN-20210308205020-20210308235020-00623.warc.gz"}
http://mathhelpforum.com/differential-equations/136483-rewrite-de-linear-equation-print.html
Rewrite DE as linear equation • Mar 30th 2010, 04:06 AM HoneyPi Rewrite DE as linear equation Hi, Is it possible to rewrite the equations (a) $x'=\begin{cases} \frac{x^{2}-1}{x-1} & x\neq1\\ 2 & x=1\end{cases} $ (b) $x'=\begin{cases} \frac{x^{4}-1}{x^2-1} & x\neq1\\ 2 & x=1\end{cases} $ as linear equations? Can someone give me a hint, please? Honey $\pi$ • Mar 30th 2010, 05:11 AM Prove It Quote: Originally Posted by HoneyPi Hi, Is it possible to rewrite the equations (a) $x'=\begin{cases} \frac{x^{2}-1}{x-1} & x\neq1\\ 2 & x=1\end{cases} $ (b) $x'=\begin{cases} \frac{x^{4}-1}{x^2-1} & x\neq1\\ 2 & x=1\end{cases} $ as linear equations? Can someone give me a hint, please? Honey $\pi$ I'm hoping you can see that $\frac{x^2 - 1}{x - 1} = \frac{(x + 1)(x - 1)}{x - 1}$ $= x + 1$. As $x \to 1, f(x) \to 2$. So you can rewrite this function as $f(x) = x + 1$ for all $x$. You should also be able to see that $\frac{x^4 - 1}{x^2 - 1} = \frac{(x^2 + 1)(x^2 - 1)}{x^2 - 1}$ $= x^2 + 1$. As $x \to 1, f(x) \to 2$. So you can rewrite the function as $f(x) = x^2 + 1$ for all $x$. So, you can rewrite the first as a linear function, and the second as a quadratic function.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 16, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.948012113571167, "perplexity": 1283.1288318716256}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886106465.71/warc/CC-MAIN-20170820112115-20170820132115-00355.warc.gz"}
http://export.arxiv.org/abs/2211.13866
stat.ML (what is this?) # Title: Minimal Width for Universal Property of Deep RNN Abstract: A recurrent neural network (RNN) is a widely used deep-learning network for dealing with sequential data. Imitating a dynamical system, an infinite-width RNN can approximate any open dynamical system in a compact domain. In general, deep networks with bounded widths are more effective than wide networks in practice; however, the universal approximation theorem for deep narrow structures has yet to be extensively studied. In this study, we prove the universality of deep narrow RNNs and show that the upper bound of the minimum width for universality can be independent of the length of the data. Specifically, we show that a deep RNN with ReLU activation can approximate any continuous function or $L^p$ function with the widths $d_x+d_y+2$ and $\max\{d_x+1,d_y\}$, respectively, where the target function maps a finite sequence of vectors in $\mathbb{R}^{d_x}$ to a finite sequence of vectors in $\mathbb{R}^{d_y}$. We also compute the additional width required if the activation function is $\tanh$ or more. In addition, we prove the universality of other recurrent networks, such as bidirectional RNNs. Bridging a multi-layer perceptron and an RNN, our theory and proof technique can be an initial step toward further research on deep RNNs. Subjects: Machine Learning (stat.ML); Machine Learning (cs.LG) Cite as: arXiv:2211.13866 [stat.ML] (or arXiv:2211.13866v1 [stat.ML] for this version) ## Submission history From: Chang Hoon Song [view email] [v1] Fri, 25 Nov 2022 02:43:54 GMT (35kb,D) Link back to: arXiv, form interface, contact.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.92840576171875, "perplexity": 873.0778196734258}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499744.74/warc/CC-MAIN-20230129144110-20230129174110-00499.warc.gz"}
https://www.jobilize.com/trigonometry/test/algebraic-the-other-trigonometric-functions-by-openstax
# 7.4 The other trigonometric functions  (Page 6/14) Page 6 / 14 ## Key equations Tangent function $\mathrm{tan}\text{\hspace{0.17em}}t=\frac{\mathrm{sin}\text{\hspace{0.17em}}t}{\mathrm{cos}\text{\hspace{0.17em}}t}$ Secant function $\mathrm{sec}\text{\hspace{0.17em}}t=\frac{1}{\mathrm{cos}\text{\hspace{0.17em}}t}$ Cosecant function $\mathrm{csc}\text{\hspace{0.17em}}t=\frac{1}{\mathrm{sin}\text{\hspace{0.17em}}t}$ Cotangent function $\text{cot}\text{\hspace{0.17em}}t=\frac{1}{\text{tan}\text{\hspace{0.17em}}t}=\frac{\text{cos}\text{\hspace{0.17em}}t}{\text{sin}\text{\hspace{0.17em}}t}$ ## Key concepts • The tangent of an angle is the ratio of the y -value to the x -value of the corresponding point on the unit circle. • The secant, cotangent, and cosecant are all reciprocals of other functions. The secant is the reciprocal of the cosine function, the cotangent is the reciprocal of the tangent function, and the cosecant is the reciprocal of the sine function. • The six trigonometric functions can be found from a point on the unit circle. See [link] . • Trigonometric functions can also be found from an angle. See [link] . • Trigonometric functions of angles outside the first quadrant can be determined using reference angles. See [link] . • A function is said to be even if $\text{\hspace{0.17em}}f\left(-x\right)=f\left(x\right)\text{\hspace{0.17em}}$ and odd if $\text{\hspace{0.17em}}f\left(-x\right)=-f\left(x\right)\text{\hspace{0.17em}}$ for all x in the domain of f. • Cosine and secant are even; sine, tangent, cosecant, and cotangent are odd. • Even and odd properties can be used to evaluate trigonometric functions. See [link] . • The Pythagorean Identity makes it possible to find a cosine from a sine or a sine from a cosine. • Identities can be used to evaluate trigonometric functions. See [link] and [link] . • Fundamental identities such as the Pythagorean Identity can be manipulated algebraically to produce new identities. See [link] .The trigonometric functions repeat at regular intervals. • The period $\text{\hspace{0.17em}}P\text{\hspace{0.17em}}$ of a repeating function $\text{\hspace{0.17em}}f\text{\hspace{0.17em}}$ is the smallest interval such that $\text{\hspace{0.17em}}f\left(x+P\right)=f\left(x\right)\text{\hspace{0.17em}}$ for any value of $\text{\hspace{0.17em}}x.$ • The values of trigonometric functions can be found by mathematical analysis. See [link] and [link] . • To evaluate trigonometric functions of other angles, we can use a calculator or computer software. See [link] . ## Verbal On an interval of $\text{\hspace{0.17em}}\left[0,2\pi \right),$ can the sine and cosine values of a radian measure ever be equal? If so, where? Yes, when the reference angle is $\text{\hspace{0.17em}}\frac{\pi }{4}\text{\hspace{0.17em}}$ and the terminal side of the angle is in quadrants I and III. Thus, a $\text{\hspace{0.17em}}x=\frac{\pi }{4},\frac{5\pi }{4},$ the sine and cosine values are equal. What would you estimate the cosine of $\text{\hspace{0.17em}}\pi \text{\hspace{0.17em}}$ degrees to be? Explain your reasoning. For any angle in quadrant II, if you knew the sine of the angle, how could you determine the cosine of the angle? Substitute the sine of the angle in for $\text{\hspace{0.17em}}y\text{\hspace{0.17em}}$ in the Pythagorean Theorem $\text{\hspace{0.17em}}{x}^{2}+{y}^{2}=1.\text{\hspace{0.17em}}$ Solve for $\text{\hspace{0.17em}}x\text{\hspace{0.17em}}$ and take the negative solution. Describe the secant function. Tangent and cotangent have a period of $\text{\hspace{0.17em}}\pi \text{.}\text{\hspace{0.17em}}$ What does this tell us about the output of these functions? The outputs of tangent and cotangent will repeat every $\text{\hspace{0.17em}}\pi \text{\hspace{0.17em}}$ units. ## Algebraic For the following exercises, find the exact value of each expression. $\mathrm{tan}\text{\hspace{0.17em}}\frac{\pi }{6}$ $\mathrm{sec}\text{\hspace{0.17em}}\frac{\pi }{6}$ $\frac{2\sqrt{3}}{3}$ $\mathrm{csc}\text{\hspace{0.17em}}\frac{\pi }{6}$ $\mathrm{cot}\text{\hspace{0.17em}}\frac{\pi }{6}$ $\sqrt{3}$ $\mathrm{tan}\text{\hspace{0.17em}}\frac{\pi }{4}$ $\mathrm{sec}\text{\hspace{0.17em}}\frac{\pi }{4}$ $\sqrt{2}$ $\mathrm{csc}\text{\hspace{0.17em}}\frac{\pi }{4}$ $\mathrm{cot}\text{\hspace{0.17em}}\frac{\pi }{4}$ 1 $\mathrm{tan}\text{\hspace{0.17em}}\frac{\pi }{3}$ $\mathrm{sec}\text{\hspace{0.17em}}\frac{\pi }{3}$ 2 $\mathrm{csc}\text{\hspace{0.17em}}\frac{\pi }{3}$ $\mathrm{cot}\text{\hspace{0.17em}}\frac{\pi }{3}$ $\frac{\sqrt{3}}{3}$ For the following exercises, use reference angles to evaluate the expression. $\mathrm{tan}\text{\hspace{0.17em}}\frac{5\pi }{6}$ $\mathrm{sec}\text{\hspace{0.17em}}\frac{7\pi }{6}$ $-\frac{2\sqrt{3}}{3}$ $\mathrm{csc}\text{\hspace{0.17em}}\frac{11\pi }{6}$ $\mathrm{cot}\text{\hspace{0.17em}}\frac{13\pi }{6}$ $\sqrt{3}$ $\mathrm{tan}\text{\hspace{0.17em}}\frac{7\pi }{4}$ $\mathrm{sec}\text{\hspace{0.17em}}\frac{3\pi }{4}$ $-\sqrt{2}$ $\mathrm{csc}\text{\hspace{0.17em}}\frac{5\pi }{4}$ $\mathrm{cot}\text{\hspace{0.17em}}\frac{11\pi }{4}$ –1 $\mathrm{tan}\text{\hspace{0.17em}}\frac{8\pi }{3}$ $\mathrm{sec}\text{\hspace{0.17em}}\frac{4\pi }{3}$ -2 $\mathrm{csc}\text{\hspace{0.17em}}\frac{2\pi }{3}$ $\mathrm{cot}\text{\hspace{0.17em}}\frac{5\pi }{3}$ $-\frac{\sqrt{3}}{3}$ $\mathrm{tan}\text{\hspace{0.17em}}225°$ $\mathrm{sec}\text{\hspace{0.17em}}300°$ 2 $\mathrm{csc}\text{\hspace{0.17em}}150°$ $\mathrm{cot}\text{\hspace{0.17em}}240°$ $\frac{\sqrt{3}}{3}$ $\mathrm{tan}\text{\hspace{0.17em}}330°$ $\mathrm{sec}\text{\hspace{0.17em}}120°$ –2 $\mathrm{csc}\text{\hspace{0.17em}}210°$ $\mathrm{cot}\text{\hspace{0.17em}}315°$ –1 If $\text{\hspace{0.17em}}\text{sin}\text{\hspace{0.17em}}t=\frac{3}{4},$ and $\text{\hspace{0.17em}}t\text{\hspace{0.17em}}$ is in quadrant II, find $\text{\hspace{0.17em}}\mathrm{cos}\text{\hspace{0.17em}}t,\mathrm{sec}\text{\hspace{0.17em}}t,\mathrm{csc}\text{\hspace{0.17em}}t,\mathrm{tan}\text{\hspace{0.17em}}t,$ and $\text{\hspace{0.17em}}\mathrm{cot}\text{\hspace{0.17em}}t.$ If $\text{\hspace{0.17em}}\text{cos}\text{\hspace{0.17em}}t=-\frac{1}{3},$ and $\text{\hspace{0.17em}}t\text{\hspace{0.17em}}$ is in quadrant III, find $\text{\hspace{0.17em}}\mathrm{sin}\text{\hspace{0.17em}}t,\mathrm{sec}\text{\hspace{0.17em}}t,\mathrm{csc}\text{\hspace{0.17em}}t,\mathrm{tan}\text{\hspace{0.17em}}t,$ and $\text{\hspace{0.17em}}\mathrm{cot}\text{\hspace{0.17em}}t.$ $\mathrm{sin}\text{\hspace{0.17em}}t=-\frac{2\sqrt{2}}{3},\mathrm{sec}\text{\hspace{0.17em}}t=-3,\mathrm{csc}\text{\hspace{0.17em}}t=-\frac{3\sqrt{2}}{4},\mathrm{tan}\text{\hspace{0.17em}}t=2\sqrt{2},\mathrm{cot}\text{\hspace{0.17em}}t=\frac{\sqrt{2}}{4}$ The sequence is {1,-1,1-1.....} has how can we solve this problem Sin(A+B) = sinBcosA+cosBsinA Prove it Eseka Eseka hi Joel June needs 45 gallons of punch. 2 different coolers. Bigger cooler is 5 times as large as smaller cooler. How many gallons in each cooler? find the sum of 28th term of the AP 3+10+17+--------- I think you should say "28 terms" instead of "28th term" Vedant if sequence sn is a such that sn>0 for all n and lim sn=0than prove that lim (s1 s2............ sn) ke hole power n =n write down the polynomial function with root 1/3,2,-3 with solution if A and B are subspaces of V prove that (A+B)/B=A/(A-B) write down the value of each of the following in surd form a)cos(-65°) b)sin(-180°)c)tan(225°)d)tan(135°) Prove that (sinA/1-cosA - 1-cosA/sinA) (cosA/1-sinA - 1-sinA/cosA) = 4 what is the answer to dividing negative index In a triangle ABC prove that. (b+c)cosA+(c+a)cosB+(a+b)cisC=a+b+c. give me the waec 2019 questions the polar co-ordinate of the point (-1, -1)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 69, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.981024980545044, "perplexity": 717.9765502955307}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232260658.98/warc/CC-MAIN-20190527025527-20190527051527-00059.warc.gz"}
https://math.stackexchange.com/questions/429358/is-there-a-name-for-a-function-whose-square-is-an-involution/1812005
# Is there a name for a function whose square is an involution? An involution is a function $f:X\to X$ such that $f\circ f=\text{id}$. Is there a name for a function $g:X\to X$ such that $f\equiv g\circ g$ is an involution? An example is multiplication by $\pm i$ in the complex plane (or more generally for an algebra over $\mathbb{C}$, or for some space which is a product with $\mathbb{C}$). In an almost complex structure, the linear map $J$ with $J^2=-\text{id}$ is also an example. Yet another example is a (properly normalized) Fourier transform, where squaring the Fourier transform $\mathscr{F}$ gives the involution $\mathscr{F}^2[f(t)]=f(-t)$ for square-integrable functions. In a group, generally, this is clearly a 4-cycle. But considering the connection with complex and almost-complex structures (and quaternions, which have multiplication by $\pm i,\pm j, \pm k$ as 4-cycles) I thought there may be a special name for such a function on a more general space. • I'm afraid I've only always used "of order (dividing) four" for those. – Hagen von Eitzen Jun 25 '13 at 20:46 • Involulution. (random characters for my answer to be submittable) – oxeimon Jun 26 '13 at 6:46 • I have added the notion of semi-involution, in case it helps – Laurent Duval Apr 14 '17 at 13:23 I've never heard a special word for this. Things whose $n$th power is $1$ are usually just called $n$th roots of unity, but perhaps someone employed a special name in some context. I'm sorely tempted to call it a "spinvolution" because of your two examples. In mathematical physics, there are things (most everything we interact with) that are invariant under a rotation of $2\pi$, and then there are other quantities called spinorial quantities which transform to their negative under a rotation of $2\pi$. Both of your examples really lend themselves to this "spinor" picture :) [EDIT: on semi-involutions] The Hilbert transform $\mathcal{H}$ is sometimes said to be an anti-involution, as $\mathcal{H(H(u))}=-u$ (see Hilbert, inverse transform). I see this as a sub-case for your question only. [EDITED:20170414] I recently found the related concept of semi-involution, in Lectures on Gaussian Integral Operators and Classical Groups, Yu. A. Neretin: Recall that a semi-involution in a complex (real, quaternionic) linear space is a linear or anti-linear map $J$ such that $J^2$ is a scalar operator Apparently, the term seems to exist without the hyphen, see Involutions and semiinvolutions for instance: We define a linear map called a semiinvolution as a generalization of an involution, and show that any nilpotent linear endomorphism is a product of an involution and a semiinvolution. We also give a new proof for Djocovi'c's theorem on a product of two involutions. I have witnessed $n$-idempotence too, so you could call it $4$-idempotent function. I see no reason to do anything substantially different from what rschwieb suggested first in his answer. Perhaps these things have a different name already, but I do not think there is any merit in introducing completely new names for things that are (sort of) well-known. Let $(A,\cdot)$ be a semigroup and $x,a\in A$. Then $x$ is an $n$-th root of $a$, if $x^n = a$. Note, that $(X^X,\circ)$ is a semigroup (in fact it is a monoid with identity $\rm{id}_X$). Your $g$ is thus a $4$-th root of $\rm{id}_X$ (with respect to $\circ$, of course). If you want to be more precise, you can call it the $4$-th composition root of $\rm{id}_X$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9293715953826904, "perplexity": 327.1278225029136}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315618.73/warc/CC-MAIN-20190820200701-20190820222701-00201.warc.gz"}
https://www.physicsforums.com/threads/oh-and-one-more-question.60486/
# Oh and one more question 1. Jan 20, 2005 ### Louis Cypher I know why the dark matter and dark light hypothesis exist to explain the universes apparent mass; when our calculations say it is only a small percnetage of the universes actual mass etc, and the hubble constants apparent increase and so on, but what if what we are looking at is innacurate? The reason I ask is that polarizations in our solar system have skewed the IMAP data so that it's inaccurate, meaning our theory needs to be altered to allow for this; could it be that our assumptions are wrong and there simply is only visible matter in the universe? What other ideas are there to explain the incosistencies and are we any closer to finding an answer? Thanks 2. Jan 20, 2005 ### Haelfix The people who worked on WMAP and the like are very careful to account for local phenomena in their measurements.. Be sure that such things are contained in the error bars of the measurement. There is some debate about a certain octopole moment term in the power specturm, that might be contaminated experimentally, but thats not going to change the bulk measurement of some of those startling universal constants (by more than say .1% or so) . 3. Feb 1, 2005 ### Louis Cypher I see thanks for that Haelfix, that's the trouble with some articles they tend too exaggerate. Last edited: Feb 1, 2005
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.81301349401474, "perplexity": 1187.7301236316684}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583511175.9/warc/CC-MAIN-20181017132258-20181017153758-00280.warc.gz"}
https://physics.stackexchange.com/questions/527843/why-is-there-an-difference-between-the-exponent-of-the-determinant-of-these-two/527854
# Why is there an difference between the exponent of the determinant of these two path integral? When I read about Altland and Simons “Condensed matter field theory”, I came across with the path integral (3.28). $$\langle {q_f}|e^{-iHt/\hbar} |q_i\rangle = \det(\frac{i}{2\pi \hbar} \frac{\partial^2 S[q_{cl}]}{\partial q_i \partial q_f})^{\frac{1}{2}} e^{\frac{i}{\hbar}S[q_{cl}]}\tag{3.28}$$ Where the exponent of the determinant is $$+1/2$$. But another formula (3.25) says that: $$\int Dx e^{-F[x]} \approx \sum_i e^{-F[x_i]} \det(\frac{A_i}{2\pi})^{\frac{-1}{2}} \tag{3.25}$$ Where the exponent of the determinant is $$-1/2$$. Now I am just wondering why these two formula have these differences in the exponent in an explicit way. Eq. (3.25) is of course just the usual power $$-1/2$$ from a bosonic Gaussian integration. The power $$+1/2$$ of the van Vleck determinant in eq. (3.28) is more subtle. There is a proof of eq. (3.28) [in the context of 1D QM] in my Phys.SE answer here.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 6, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9878122210502625, "perplexity": 235.7853362582442}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038083007.51/warc/CC-MAIN-20210415035637-20210415065637-00612.warc.gz"}
http://www.physicsforums.com/showthread.php?p=13092
# Kc is the constant of equilibrum by STAii Tags: constant, equilibrum P: 353 Kc is the constant of equilibrum for a certain chemical reaction. For example, if the chemical reaction has the following formula : A+2B->C+3D Then Kc=([ C ].[ D ]3)/([ A ].[ B ]2) (Where [ A ] means the molarity of the compound A) This is what they are teaching us at least, but i find it not really logical, for may reasons, here is one. suppose we multiply the whole formula by 2 2A+4B-->2C+6D (the equation was edited after the notice of Mike) The value of Kc will get squared, but this seems wrong since both fomulas are for the same reaction ! Anyone can explain what is happening plz ? Thanks in advance [:)] P: 464 That should be a 6D and not a 3D in your second reaction. I suspect that to be your problem. Sci Advisor HW Helper PF Gold P: 1,381 Have you been given the relation between free energy and equilibrium in your class? If not, it's all going to look like an arbitrary mystery; if you have, recall that K is specific for the reaction as written, and recognize that the free energy for the reaction as you have rewritten it for twice the reactants and products is also doubled. P: 353 ## Kc is the constant of equilibrum Have you been given the relation between free energy and equilibrium in your class Nop ! Can you please explain more ? free energy for the reaction as you have rewritten it for twice the reactants and products is also doubled As i see the value of Kc will be squared not doubled, is that what you mean ? Sci Advisor HW Helper PF Gold P: 1,381 Welcome to the wonderful world of chemical thermodynamics --- jumping in in the middle and working backward probably isn't the best way to do this, but if you're willing to tolerate me pulling a few rabbits from hats, let's give it a try ----. Given a chemical reaction A + B + .... = M + N + ...., we mean that the reactants A, etc. are in equilibrium with the products M, etc., and that the reaction is reversible (M + ... = A + ...). The free energy change for the reaction is sum of the free energies of the products minus the sum of that for the reactants as the reaction is written (2 of this + 1 of that +3 of the other reacts to form 1 of something else plus 3 of some other else, or vice versa). Make sense so far? Free energy is denoted with a bold-face, upper-case, F or G, sometimes italicized (I haven't got to the pt. I can drive the new forum editor quickly enough to avoid being logged off and losing everything), and is more strictly called the "Gibbs free energy." Now for the rabbit from the hat --- the standard free energy change for the reaction as written is equal to the product of the gas constant, absolute temperature, and the natural log of the equilibrium constant, del G = RTlnK, where K is what you're asking about. If you double the number of moles on each side, you double the free energy, which doubles the log of K, or squares K as you've apparently already figured out. Still with me? We'll break for questions. P: 353 I "grabbed the start of the string", and will work on it for some time, then will come back for questions. If you are wondering "Wow! this fast !", well i knew a little about energies ... etc from old times, so i will work on my old info, and what you just told me (G = R*T*ln(K)) to try to understand it. ... As i am writting this i got just a little question, why the logarthim to the base (e), i mean why is it ln() and not log10 for example ? Any particular reason or it just comes this way ? Thanks a lot.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8252198100090027, "perplexity": 743.7638706191912}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011025965/warc/CC-MAIN-20140305091705-00015-ip-10-183-142-35.ec2.internal.warc.gz"}
https://brilliant.org/problems/integral-i-was-too-lazy-to-make-a-good-name/
# Integral ( I was too lazy to make a good name... ) $\int _{ 1 }^{ 2 }{ \frac { x }{ { x }^{ 2 }+4x+8 } }\, dx = \arctan { \frac { a }{ b } } -\arctan { c } +\frac { 1 }{ d } \ln { \frac { e }{ f } }$ The equation above holds true for some positive integers $a$, $b$, $c$, $d$, $e,$ and $f$, with $\gcd(a,b) = \gcd(e,f) = 1$. What is $a+b+c+d+e+f ?$ × Problem Loading... Note Loading... Set Loading...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 9, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9923147559165955, "perplexity": 694.3007719340733}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662530066.45/warc/CC-MAIN-20220519204127-20220519234127-00013.warc.gz"}
https://www.physicsforums.com/threads/pi-concept.99375/
# Pi concept 1. Nov 10, 2005 ### vaishakh How was the irrational number pi invented or how did man reach upon it? I have studied elementary calculus to understand the basic kinematics. Infact I know only the formulae to find for integration and derivative of some standard functions. The limits and such things weren't taught to me. It will be taught a year later. From this knowledge I myself proved the area of a circle and surface area of sphere, cylinder and cone and also their volume. But in finding the circumference of the circle, the formula that circumference of a circle is 2pi*r was uses. However I thought of a lot of ways but still cannot reach or use calculus to verify the formula for the circumference of a circle. I also felt the number pi like something heavenly and couldn't find any property in a curve which could have led to its invention. Is this because my present information on the subject is not sufficient or can I reach pi just from this information? 2. Nov 10, 2005 ### hypermorphism Pi was first theoretically studied by Archimedes's method of exhaustion (a precursor to calculus), in this case by studying the areas and perimeters of regular polygons that inscribed and circumscribed the circle. As mathematics evolved, more algebraic series for pi became common. 3. Nov 10, 2005 ### Integral Staff Emeritus $\pi$ is now and always has been simply the ratio of the circumfrence of a circle to its diameter. By definition $$\pi = \frac C D = \frac C {2R}$$ Thus $$C = 2 \pi R$$ 4. Nov 12, 2005 ### HallsofIvy It was recognized very early that the ratio of the circumference to its diameter (easier to measure the radius) was a constant. There is a reference in the bible to a kettle having circumference three time its diameter- not a bad estimate for those days. As pointed out above it was Archimedes who got the first really good approximations to pi. Euclid earlier showed that the ratio of circumference to radius is the same for all circles (essentially that all circles are "similar") by dividing two circles into n triangles (select n equally spaced points on the circumference, draw the radii and chords), showing that corresponding triangles in the two circles were similar and therefore the ratio of total of all of the bases to the radii must be the same. Then arguing that, since, as n got larger, the total bases come closer and closer to the circumference the same must be true of the ratio of the circumference to the radius. That's similar to the process of "exhaustion" Archimedes used and a primitive limit process. If you've taken enough calculus to be able to find the volume of sphere and cone, you should also know that, if y= f(x) then the length of the curve is given by $$\int_{x_0}^{x_1}\sqrt{1+ \left(\frac{df}{dx}\right)^2}dx$$ For the upper half of a circle, of radius R, $f(x)= \sqrt{R^2- x^2}$ so $\frac{df}{dx}= \frac{x}{\sqrt{R^2- x^2}}$. The integral for the arc length (of the hemisphere) is $$R\int_{-R}^R\frac{dx}{\sqrt{R^2- x^2}}[/itex] You will need a trig substitution to do that and make use of the fact that $cos(\pi)$= 1. Since often sine and cosine are defined in terms of a circle, that is, in a sense "circular reasoning", but it is possible to define sine and cosine independently of a circle (for example as solutions to the differential equation y"= -y with specific initial values) and show that the ratio of circumference to radius is a constant for all circles, that constant being half the period of sine and cosine. Last edited by a moderator: Nov 12, 2005 5. Nov 12, 2005 ### dx Im not sure if what Integral said was what you were looking for. Did you want a proof that the ratio of the circumference to the diameter of a circle is always the same constant? You can verify this by drawing two arbitrary concentric polygons and imagining that the number of sides approaches infinity. Now you can think of them as circles and since you can show that the ratio of the circumference and diameter is the same for both the polygons (using the properties of similar triangles), it follows that the ratio of the circumference to the diameter of the two circles must be the same. And since the two polygons we chose were arbitrary, it follows that the ratio of the circumference to the diameter of ANY two circles is the same. This means that ALL circles have the same circumference/diameter ratio. We call this ratio pi. 6. Nov 12, 2005 ### vaishakh dx-not only that they are in proportion, bot also that ratio is 2pi 7. Nov 12, 2005 ### Tide Vaishakh, The ratio of circumference to diameter is defined to be $\pi$. 8. Nov 13, 2005 ### HallsofIvy No, the ratio of circumference to diameter, which is what dx said, is $\pi$. The ratio of circumference to radius is $2\pi$. 9. Nov 13, 2005 ### vaishakh don't teach me such ratios. i am not that fool even if i don't know these facts of pi. my idea behind that post was to make dx understand what all i meant from the initial post. Hall, i don't expect you to do such a thing after that fantastic explanation in the first post. infact you destroyed your praise. lets not discuss on what others write. only continue the discussion if soeone has something new or more clearer. this is thereason i didn't react to integral 10. Nov 13, 2005 ### Tide vaishakh, I think Halls' reply was completely suitable in light of your post #6. Don't be so quick on the trigger and consider that you may not have expressed yourself as clearly as you intended in #6. 11. Nov 13, 2005 ### dx vaishakh-"How was the irrational number pi invented or how did man reach upon it?" First, man realized that the ratio of the circumference to the diameter of any circle is the same. So they named it [tex] \pi$$. Since $$\frac{C}{D} = \frac{C}{2R} = \pi$$ $$C = 2\pi{R}$$ Last edited: Nov 13, 2005 12. Nov 14, 2005 ### HallsofIvy Vaishakh- dx said "This means that ALL circles have the same circumference/diameter ratio. We call this ratio pi." Your response was "dx- not only that they are in proportion, bot also that ratio is 2pi" I was pointing out that dx was correct- the ratio of circumference to diameter is pi, not 2pi. I thought perhaps you were thinking of the ratio of circumference to radius. 13. Nov 15, 2005 ### arildno The number pi gained vastly in importance when Archimedes managed to prove that the proportionality constant between the circumference of the circle and its diameter was, in fact, the same proportionality constant existing between a circle's area and the square of its radius. Pi would never have gained its status unless that elegant result held. 14. Nov 15, 2005 ### mathwonk you are probably right arildno, but the same exhaustion argument given above also proves that the area of a circle is half the product of the circumference and the radius, so the equality of the constant ratios is a consequence. the result about the areas is also sort of obvious if you think of a circle as a triangle with base equal to its circumference and height equal to its radius. Of course that is archimedes' proof of both results cited above. I find it rather more surprizing that the same constant arises in the formula relating the surface area and volume of a sphere to its radius, not to mention in the formula for the sum of the series 1/1^2 + 1/2^2 + 1/3^2 + ....+1/n^2+........, or in that of other such "even" values of the zeta function. Last edited: Nov 15, 2005 15. Nov 16, 2005 ### dx Can we prove that we can do this? 16. Nov 16, 2005 ### arildno I fully agree with you; Archimedes' clever proof makes the result obvious; before that, however, I would assume the equality of these constants (i.e, both being the number pi) was unobvious. Thus, by welding together these constants as one in his elegant argument, Archimedes presumably made $\pi$ seem a lot more important than before. Again agreed, with this $\pi$ is soaring above all the other dumb numbers.. and have now shown itself to be practically divine.. Archimedes' proof was the first step (of many thereafter) in revealing the glory of $\pi$ 17. Nov 16, 2005 ### Doodle Bob It should be pointed out that, although Aristotle mentioned the incommensurability of pi (i.e. that is irrational), it was not proven (as far as anyone knows) until Lambert in 1766, the proof of which was incomplete until Legendre completed (with a lemma that shows that certain infinite continued fractions are irrational) in 1806. As for the so-called divinity of the origins of pi, I'm rather unconvinced, particularly with regard to higher dimensional volume vs. surface area, since the lower dimensional cases can always be found by taking slices of higher dimensional cases and using some calculus. I.e. the ubiquity of pi is an intrinsic element of Euclidean geometry in any dimension. Of course, if you tried the same thing in other geometries, you're going to get a different constant -- if you get a constant at all (you need similarities in the geometry). 18. Nov 16, 2005 ### Doodle Bob Actually, comtemporous to this estimate, the Egyptians were quite a bit more accurate: circa 1900BC, they estimated pi at (16/9)^2, which is about 3.16. The Israelites didn't seem to go for fractions all that much. 19. Nov 16, 2005 ### vaishakh why? once you assume the circumference of a circle to be 2pi*r we get all of the above results. infact i have two methods through which we can find out the area of circle from its circumference as per first post.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9648692607879639, "perplexity": 713.358469043129}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947795.46/warc/CC-MAIN-20180425100306-20180425120306-00616.warc.gz"}
http://mathhelpforum.com/calculus/12557-line-integrals.html
1. ## Line integrals 1 - evaluate the line integral (i used S as the integration sign) S F.dr c where F=xi-zj+2yk and c is the triangular path from(0,0,0)to (1,1,0) to (1,1,1) and back to (0,0,0) 2 - evaluate the line integral S F.dr c F=(3x^2)i+2yzj+(y^2)k where c is any path between the point (0,1,2) and (1,-1,7) Any ideas on how to solve these, i'm at a complete loss 2. Originally Posted by macabre and c is the triangular path from(0,0,0)to (1,1,0) to (1,1,1) and back to (0,0,0) There much stuff here. I think you are trying to parametrize the rectifiable curve. You divide the line integral into 3 parts: 1)From (0,0,0) to (1,1,0) 2)From (1,1,0) to (1,1,1) 3)From (1,1,1) to (0,0,0) For #1 use the curve, <x,y,z>=<t,t,0> for 0<=t<= 1. For #2 use the curve, <x,y,z>=<1,1,t> for 0<=t<=1. For #3 use the curve, <x,y,z>=<t,t,t> for 0<=t<=1. The evaluation I leave to thee. 3. ok i've finally figured out the first question...mostly..when you get the 3 answers for each line do you then sum the values or sum thr modulus of the values? also for the second one what do you do with the the limit c? is it a case of finding a plane that passes through these two points? 4. Originally Posted by macabre also for the second one what do you do with the the limit c? is it a case of finding a plane that passes through these two points? Note it says any path. Thus it does not matter. I would chose a line (not plane) which passes through those points. Their is another way (without actually find the curve parametrization). You need to find the scalar potention between these two points. Then use the fundamental theorem of line integrals.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9479655027389526, "perplexity": 1088.0604116659251}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886106984.52/warc/CC-MAIN-20170820185216-20170820205216-00004.warc.gz"}
https://www.math.gatech.edu/node/20518942
## Generalized Permutohedra from Probabilistic Graphical Models Series: Combinatorics Seminar Friday, February 3, 2017 - 15:05 1 hour (actually 50 minutes) Location: Skiles 005 , Georgia Tech , Organizer: A graphical model encodes conditional independence relations via the Markov properties. For an undirected graph these conditional independence relations are represented by a simple polytope known as the graph associahedron, which can be constructed as a Minkowski sum of standard simplices. There is an analogous polytope for conditional independence relations coming from any regular Gaussian model, and it can be defined using relative entropy.  For directed acyclic graphical models we give a construction of this polytope as a Minkowski sum of matroid polytopes.  The motivation came from the problem of learning Bayesian networks from observational data.  This is a joint work with Fatemeh Mohammadi, Caroline Uhler, and Charles Wang.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8785366415977478, "perplexity": 1684.5090564403672}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865456.57/warc/CC-MAIN-20180523063435-20180523083435-00174.warc.gz"}
http://oklo.org/2006/03/24/hd-149026/
Home > detection, worlds > HD 149026 ## HD 149026 March 24th, 2006 The Solar System was once a gigantic black cloud in space, imbued with a tiny overall spin in some particular random direction. The net spin of our ancient protostellar cloud is still manifest in today’s solar system. The planets all orbit the Sun in a direction counterclockwise as seen from above. The major planetary satellites (with the exception of Triton) all orbit counterclockwise as well. The Sun spins on an axis that lies within 7 degrees of the average orbital plane of the planet. The law of conservation of momentum suggests that alien planetary systems should display a similar state of orbital affairs. When a planetary system forms more or less quiescently, and more or less in isolation, then the final spin axis of the parent star should be nearly perpendicular to the orbital plane of the planets. If the stellar equator and the planetary orbital planes are far from alignment, then we have evidence that disruptive events occurred early in the history of the planetary system. Spin-orbit misalignment hints at planetary collisions, ejections, and other dramatic events. In the Solar System, for example, the crazy 97.77 degree tilt of Uranus’ polar axis may be evidence that a large (perhaps Earth-mass) object collided with Uranus early in its history, leaving its spin axis askew, and its poles bathed in an endless succession 42-year days. In a new paper accepted for publication in the Astrophysical Journal, members of the systemic team have participated in an investigation of the spin-orbit alignment of the recently discovered transiting planet orbiting HD149026. Our goal was to get a better sense of whether this star-planet system suffered a catastrophe in its distant past. HD 149026 b was discovered last year by N2K (the discovery paper is here). The planet has a mass ~114 times that of the Earth (slightly bigger than Saturn) and has a 2.875 day orbital period. By measuring how the star’s light dims as the planet passes in front of the star, it’s possible to determine the size and the exact orbital geometry for the system. Here’s a scale model in which the star, and the planet, and the orbit are all shown in their correct proportions: Perhaps the most charming aspect of HD 149026 b (to the limited extent that a scalding 1600K planet can exert charm) is that the planetary sidereal year lasts exactly one weekend. That is, if you punch a clock at noon on Friday, the planet has made one full orbit at 9:01 am the following Monday. Perhaps the most scientifically interesting aspect of HD 149026 b is its small size. The transit depth is only 0.3%, which implies that the planet has a radius of only ~0.7 Jupiter radii. That is surprisingly small, given the high temperature on the planetary surface, and tells us that the planet is quite dense. It needs to contain at least 50 Earth masses of elements heavier than hydrogen and helium. This huge burden of heavy elements is hard to explain. One possibility is that the planet was built up from the collision of several Uranus or Neptune like objects. If this were the case, then one might expect that the final orbital plane could be significantly misaligned with the equatorial plane of the star. Our measurement of the spin-orbit alignment for HD 149026 makes use of a phenomenon known as the Rossiter-McLaughlin effect. In 1924, Rossiter and McLaughlin independently measured the spin-orbit alignment of the eclipsing binary systems beta-Lyrae and Algol by modeling the variations in the measured radial velocities of the stars during transit. This effect, now appropriately called the Rossiter-McLaughlin effect, occurs any time an object (star or planet) occults part of a rotating stellar surface. The following figure shows how a rotating star outputs a small red-blue shifted version of its spectrum as we examine the changing radial spin-velocity from one limb to the other. When a planet passes in front of the oncoming limb, it blocks out red-shifted light, while the planet blocks out blue-shifted light when covering the outgoing limb. This is interpreted by the radial velocity code as a positive and then negative shift in the radial velocity of the star. The amplitude of this effect is thus due both to the spin velocity of the star as well as the total flux blocked out during transit. The Rossiter effect can be used to tell us how closely the stellar equator is aligned to with the orbital plane of the planet. When the planet’s path across the stellar disk is not parallel to the stellar equator, the radial velocity zero-point does not occur at the transit mid-point, and the radial velocity curve is asymmetric. The figure above illustrates how this works. High-cadence radial velocity observations taken during a transit are required to accurately measure the Rossiter effect. The in-transit velocities can be combined with other data, including the out-of-transit radial velocities which constrain the planetary orbit, and the transit photometry. An overall coupled model of all of these data can then give us the best possible picture of the system. Our new paper describes the exact details of how such an overall model can be constructed for HD 149026. The end result is that the equator of the star and the orbital plane of the transiting planet are quite well aligned; we measure the value of the misalignment angle to be 11 plus or minus 14 degrees. Although a fourteen degree (1-sigma) uncertainty is more than we’d like, it nevertheless provides an excellent constraint on the HD 149026 system. Since the misalignment of our own sun is ~7 degrees relative to the net planetary orbital angular momentum, and because we believe that the solar system formed fairly quiescently, we are primarily interested in whether HD 149026 b sports a severe misalignment (say 40 degrees or more). From our modelling, it’s clear that the orbit and planetary spin are not egregiously out of whack. Hence, there’s no evidence of a particularly disruptive formation history. That is, no catastrophic orbit altering collisions between massive protostellar cores. Rather, we are left with evidence of a more traditional, more mundane history, in which planetary formation was dominated by gradual accretion and the prolonged interactions with a planetary disk And the mystery of HD 149026b’s large core persists. How did all those heavy elements — all that oxygen, nitrogen, carbon, iron, gold, get into the planet? Our favored explanation draws on a scenario described by Frank Shu in 1995, in which the planetesimal migrates radially inward through the planetary disk until it reaches the interior 2:1 resonance with the “magnetic X-point,” the outermost point at which closed stellar magnetic field lines intersected the planetary disk. At the X-point, heated ionized gas is forced to leave the disk and climb up the field lines to accrete directly onto the star. In this occurs, the planetismal is stuck in a gas-starved environment for the remainder of the disk lifetime, and is essentially fed nothing but rocks and heavy elements for millions of years. The end result is a crazy-large 72 Earth-mass core in the middle of a 114 Earth-mass planet. Categories: Tags:
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8221413493156433, "perplexity": 1057.8481187184066}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936466999.23/warc/CC-MAIN-20150226074106-00001-ip-10-28-5-156.ec2.internal.warc.gz"}
http://bas.westerbaan.name/article/2016/03/02/univ-prop-seq-meas.html
# Bas Westerbaan home ## A universal property for sequential measurement 02 Mar 2016 [ journal (JMP) ] We study the sequential product, the operation $p * q = \sqrt{p} q \sqrt{p}$ on the set of effects of a von Neumann algebra that represents sequential measurement of first $p$ and then $q$. We give four axioms which completely determine the sequential product.
{"extraction_info": {"found_math": true, "script_math_tex": 3, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.904991090297699, "perplexity": 1086.4473579257347}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039746639.67/warc/CC-MAIN-20181120191321-20181120213321-00355.warc.gz"}
http://tex.stackexchange.com/questions/1914/why-do-people-use-unnecessary-braces/2325
# Why do people use unnecessary braces? I often see people creating new macros where the macro token is surrounded by braces. For example, \newcommand{\foo}{foo} This is unnecessary and I find the extra braces make it harder to read. Why do people do this? (As an aside, I've even seen people try to use \let{\foo}{\bar} which, of course, doesn't work.) - I disagree that this question should be closed. –  Will Robertson Aug 18 '10 at 10:06 I can just imagine someone in the 3rd century BC saying "Why I wonder would anyone use punctuation it just makes things harder to read" –  naught101 Jul 24 '12 at 6:38 Related question How bad for TeX is omitting braces even if the result is the same. –  Kurt Feb 15 '13 at 15:22 With my LaTeX3 'hat' on, I'd like to give a slightly different perspective. This will overlap with the other answers, but hopefully will be useful. To follow this you need to understand 'tokens', I'm afraid. TeX turns input into tokens, and in particular control sequences such as \bar are single tokens. When Leslie Lamport designed LaTeX, he decided that all LaTeX arguments should be wrapped in braces. This is in contrast to TeX, where many of the primitives require arguments without braces. However, when only a single token is begin passed you can omit the braces, hence the fact that \newcommand{\bar} and \newcommand\bar both work. There are really two different cases where you can omit braces: 1. Cases where the argument is always a single token. This is the case for \newcommand, where you have to have a macro name as the argument. The braces will never be needed as the argument is always \<something>. I do not use braces for these. 2. Places where you are passing a single token to an argument that will accept more. The classic example would be a subscript, where a_i will work but you'd need a_{ii}. I would always use braces here, so favour a_{i}. The LaTeX3 part to this answer is that we are trying to be much more rigourous about which arguments are single tokens and which are multiple tokens. At the moment this is only happening at a code level, but I'd anticipate a similar approach for users. So if the argument must be a single token, then make this clear by not using braces. On the other hand, it the argument can take multiple tokens then you must use braces even if only passing a single token. - Interesting. I'm not sure how I feel about mandating braces or their absence. I find that when I want to write simple fractions like 2/3, I tend to use \frac23 since I understand how TeX is going to parse its input into tokens. In this case, it's not that that is more or less readable than \frac{2}{3}, it's simply that it's many fewer characters to type. –  TH. Aug 18 '10 at 7:25 One of the things that we need to do for LaTeX3 is be more well-defined in terms of input structure. (There is a school of thought that everything should be XML, perhaps generated from a LaTeX-like input format.) Obviously, as TeX will tokenise \frac23 and \frax{2}{3} in the same way it is ultimately up to the user. However, I think the 'official guidance' is going to be 'always use braces for arguments which take more than one token'. –  Joseph Wright Aug 18 '10 at 7:32 I am not sure I really like this. T find x^2 much more readable and much easier to type than x^{2}, especially in situations where I use a lot of exponents. Of course, it happened to me many times that I quickly typed a quiz before class and after making copies, I discovered that I accidentally typed something like x^15 instead of x^{15}, which could not happen if the braces were mandatory. However, for my taste, XML is just way to verbose, and one thing that I believe contributes to popularity of TeX is that it allows certain sloppiness in its input, which I believe should be preserved. –  Jan Hlavacek Aug 26 '10 at 6:57 Luckily for you TeX is not about to change how it accepts arguments! So as long as LaTeX is parsed by TeX then you are safe. As I said, on this particular point it's 'guidance' to use braces. A classic place where this can be important is something like x_{\macro{a}}, which sometimes works without braces but is not reliable. –  Joseph Wright Aug 26 '10 at 7:06 @JosephWright What is the recommended style for xparse commands such as \NewDocumentCommand? –  Lover of Structure Dec 28 '13 at 22:51 Code should be readable and understandable. Using braces for all arguments, even if they aren't necessary, is more consistent. So, I prefer to use braces to not confuse unexperienced users. Leslie Lamport writes in LaTeX - A Document Preparation System: "Macho TeX programmers sometimes remove the braces around the first argument of \newcommand; don't do it yourself." Leslie Lamport is the initial developer of LaTeX. In the reference manual he specified the syntax of \newcommand to have braces: \newcommand{cmd}[args][opt]{def}. That no error occurs if you deviate from a syntax doesn't mean that this deviation is correct and will work for all times and that automatic syntax checker will understand that this deviation would do no harm, for \newcommand, \renewcommand, \providecommand and their starred environments and perhaps for all places where braces belong to the syntax but aren't strictly necessary. Since \let is not a LaTeX command, that syntax doesn't apply. - What's easy to read for one person is not necessarily easy to read for another. Personally I find: \newcommand{\foo}{foo} much easier to read than: \newcommand\foo{foo} In particular, it clearly reveals what the two arguments to the function are, which the latter does not. - Interesting. In general, there's no way to know how many arguments a macro takes. If I write \foo{a}{b}{c}{d}, there's no real indication of how many of those are arguments to \foo. For example, \newcommand takes no arguments, TeXnically speaking. –  TH. Aug 17 '10 at 22:55 @TH.: Nor do most of the user-level macros in datatools -- instead, they do a check for a * and then dispatch to one or another internal macro depending on whether or not there was a star, and those take the arguments. –  SamB Dec 19 '10 at 6:09 @SamB: It's also true of every macro that takes an optional argument. –  TH. Dec 19 '10 at 7:03 @TH.: But it's not usually so glaringly obvious in the source code ;-) –  SamB Dec 19 '10 at 7:14 Contra Stefan (and therefore contra Leslie Lamport), and at the risk of weighing in on a matter involving personal style, I very much prefer the forms \newcommand\foo{...\baz{\bar}...} to \newcommand{\foo}{...\baz{\bar}...} and \newcommand*\foo[n]{...\baz{\bar}...} to \newcommand*{\foo}[n]{...\baz{\bar}...}. My reasons are as follows: • When standing in this position, \foo is a distinguished entity with a very different role to to play than \bar. For that reason, I like to lexically distinguish it as such. • To run with TH's point above, the pattern \newcommand*{\foo}[n]{...} is, for someone who must regularly interpret and sometimes produce TeX and LaTeX-interspersed code, ... well, let's say, a little 'over-ornate'. Re my second point, the human brain (yes, I actually do hold a research degree in neuropsych and learning), has to manage a huge amount of information during programming and program maintenance. Personally for my tiny little brain, the more regular the patterns it has to deal with, the fewer times it must take its metaphorical eye off the ball and attend to non-problem related tasks. The converse is also true. Of course, I wish it weren't so, but (sadly even more so than in any other computer language I have encountered), this situation is very much the case with the TeX et al. family. [And, here, JW, comes my major and so far only gripe with LaTeX3 - it is layering even more lexical pattern-breaking onto an already complex lexical (let alone syntactic or semantic or pragmatic or programmatic) pattern space. Of course, I agree that there are good technical reasons for this (encapsulation/namespaces being one), however real psychological tradeoffs accrue to real programmers managing real TeX/LateX2/LaTeX3(/LuaTeX) systems. I'm afraid (actually, I'm certain) that as this sort of complexity increases, the program error rate in these systems (and the commercial and non-commercial costs of producing and maintaining them) is going to increase in complex ways as well. Thank God we don't build rocket ships or commercial systems with this code! It might be provably deterministic Turing machine complete, but for heavens sake, TeX/LateX2/LaTeX3(/LuaTeX)'s little programming idioms like \newcommand{\foo}{...} and myriad ilk add like grains of sand to our psychological ability to build robust stuff in this code. And that is why I prefer to keep lexical patterns like \newcommand\foo{...} as far as possible in harmony with the patterns that TeX has for better or worse delivered earlier to us.] My tuppenny-ha'pence, guys and gals, sorry for taking the bait :)) - Plus 1 from me. One of there more important reasons for developing LuaTeX is to offer an escape route from the ever increasing complexity of LaTeX/ConTeXt macros. –  Taco Hoekwater Aug 18 '10 at 5:33 I completely agree on your sentiment about state of things with TeX/LaTeX "programming" and that it runs contrary to our brain. In this particular example it your argument does not hold though. In most programming languages that are designed for humans and not computers the syntax for a function looks in general like function_name(first_parameter, second_parameter, ...) and this is roughly equivalent to \function{\first}{\second}, instead of \function\first{\second}. –  Alexander Feb 3 '12 at 18:06 I use braces for all arguments to all macros in LaTeX because not to do this seems to me to be the situation requiring justification. LaTeX is designed such that its macros behave as functions, and so my mental model of something like \newcommand{\foo}{\bar} is "feed \newcommand the intended command \foo and its behavior \bar"; when I see \newcommand\foo{\bar} (or, worse, \newcommand\foo\bar, which works since \bar is a single token) I see the much less obvious "expand \newcommand; also, here is \foo, which happens to be eaten by this expansion before it is, itself, expanded; also, here is \bar, which is likewise serendipitously absorbed". If I did not know (or, in a moment of premature senility, I didn't recall) how \newcommand worked, the latter formation would not tell me. The former would. Even when reading my own document, this allows me to visually group the tokens into "part of a function" and "part of the text". I am a little surprised that there is any support at all (much less extremely eloquent support) for \newcommand\foo{\bar}, and the nod to \frac23 baffles me (what is the twenty-third fraction command?). It seems to indicate that the respondents regularly engage in a low-level analysis of the TeX parser far beyond what is necessary to compose a structured document. I wouldn't go so far as to write directly in XML myself, but the structure imposed by the LaTeX brace style is clarifying and error-reducing (especially since LaTeX allows it to be applied consistently, unlike the poisonous behavior of TeX's \let, a command which fortunately need never be used in normal circumstances). Basically, as I see it, \newcommand\foo{\bar} is born of vestigial habits learned from TeX and which support and require a programming mindset that is neither necessary nor desirable in everyday LaTeX. Indeed, it is never taught in references that concern only LaTeX, and I suspect the people here who use it learned to do so in earlier times or from people who themselves learned in those times (specifically in response to Geoffrey Jones: TeX's idioms are familiar to some, but hopefully becoming less so). If a newcomer should read this response, my personal opinion is that they should be aware of this phenomenon and triple-check everything they read about LaTeX on the Internet before learning it. - I appreciate your point about LaTeX macros looking like functions. The problem is that they really aren't functions and they don't behave like functions. I'm not sure what you mean by "poisonous behavior of TeX's \let". I don't know what you mean by normal circumstances, maybe just document writing and not style or class writing. As for omitting braces being a hold over from TeX, I can just say that for myself, I learned LaTeX first and was frustrated that I couldn't understand any of the .sty files; so I read the TeXbook. –  TH. Aug 25 '10 at 20:47 I was being colorful with "poisonous"; I mean that its syntax is totally different than other commands in that the braces are impossible (and sometimes an = is allowed!). By "normal circumstances" I mean precisely document writing, but I also think the packages could be improved by clear coding. They are, as you observe, completely unintelligible and probably only comprehensible to the author; most of their contents appear to be random strings of symbols and the inclusion of some coding style would probably be to their benefit. –  Ryan Reich Aug 26 '10 at 0:44 There are several TeX builtins that don't allow braces. \input is another. I believe \font is similar. Regarding, =, I believe that any assignment can have an optional =, so that's consistent. I guess I wasn't clear about .sty files. Before I read the TeXbook, I couldn't understand style files because using only what's described in Lamport's book is insufficient. After reading the TeXbook, I can follow most styles and classes. –  TH. Aug 26 '10 at 5:40 There are also TeX builtins which require braces, like \def (in the definition part), which combined with the optional use of = only for assignments is not a convincing argument for the consistency of anything. TeX is a brilliant program but its syntax makes it an awful programming language, which LaTeX tries somewhat to amend, so I think it's a mistake to intermingle TeX programming (and programming practices) with LaTeX document preparation. –  Ryan Reich Aug 27 '10 at 15:33 LaTeX is designed so that its macros sort of behave like functions, but if you really start to think they are functions then it will eventually bite you. At least, it bit me a lot until I eventually sort of learned how to think about macros. –  Mike Shulman Oct 7 '10 at 19:12 I use braces after \newcommand mainly to remind myself that I'm not using \def. But there is one (not very weighty) reason to keep the braces: a good LaTeX spellchecker like Excalibur looks for them, to forestall definition errors. Inserting the braces makes spellchecking painless. (Yes, I can spell, but it's best to stamp out misprints before sending files to my coauthors.) -
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9209561347961426, "perplexity": 1733.9150615779313}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00348-ip-10-147-4-33.ec2.internal.warc.gz"}
https://physics.stackexchange.com/questions/472684/quantum-mechanical-photon-exchange
# Quantum mechanical photon exchange [duplicate] Background: While trying to understand the Standard Model I stumbled on a paper that explained it in very simple terms. I recognized that I don't even understand the quantum mechanical way of the electric force. The paper pictured electromagnetic repulsion by two people on different boats, exchanging a bowling ball, one throwing, one catching. The bowling ball standing for an exchanged photon. That cannot be the whole of the truth, as it wouldn't, not even as a picture, explain the attracting force between two bodies with opposite charges. Now the question: how is electromagnetic repulsion and attraction really modeled in terms of quantum physics? Edit: the duplicates only address repulsion, or are out of reach for what I understand about quantum theory. So I state the missing part explicitly: How is an electron attracted to a positron by the exchange of a photon?
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8210469484329224, "perplexity": 590.2724295346487}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585121.30/warc/CC-MAIN-20211017052025-20211017082025-00202.warc.gz"}
http://math.stackexchange.com/questions/167551/possible-ways-to-choose-from-n-different-numbers
# Possible ways to choose from $n$ different numbers This is a really basic question, and I should have paid more attention in discrete math class. I have $3$ booleans, and each can be represented by "yes" or "no". So that's kind of like $6$ possible options. How many possible combinations can I have out of these $3$ booleans? Doing it manually, I only come up with $8$ possibilites. Would this be a $\binom 6 3$ problem, because the result of that is $20$, so I'm not sure if that would be correct. Or would it be $3 \times 3$? Or what would be the general equation? - That is not like 6 possible options. – Jonas Meyer Jul 6 '12 at 16:59 There are two ways to choose the first value. For each of these, there are two ways to choose the second. For each of the $2\cdot2=4$ ways to choose the first two values, there are two ways to choose the third. So there are $2\cdot2\cdot2=8$ ways to choose all three. – David Mitra Jul 6 '12 at 16:59 Ahhh I see..it was that basic all along, I was overthinking it.. – maq Jul 6 '12 at 17:01 Why is this different from 6 possible options? – maq Jul 6 '12 at 17:01 @mohabitar: Because you are not choosing 1 thing (or 3 things) from a set of 6 things, you are choosing 1 thing each from 3 sets of 2 things. Based on your guess of $6\choose 3$, you would have allowed things like choosing both yes and no in the first boolean, choosing yes in the second, and not choosing anything for the third. – Jonas Meyer Jul 6 '12 at 17:04 Assume you have $n$ events that are each independent of each other, which I take to mean the outcome of any event will not affect the outcome of any other event. If there are $o_i$ outcomes for event $i$, then the total number of outcomes will be $(o_1)(o_2) \cdots (o_n)$. In this case specifically, you have 3 events and each has 2 possible outcomes, so there are $2 \cdot 2 \cdot 2 = 8$ total possible outcomes.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8467785120010376, "perplexity": 214.11876397856204}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860111865.15/warc/CC-MAIN-20160428161511-00035-ip-10-239-7-51.ec2.internal.warc.gz"}
https://mathhelpboards.com/threads/step-paper-1-q3-1998.2037/
# STEP Paper 1 Q3 1998 #### CaptainBlack ##### Well-known member Jan 26, 2012 890 Just to show that not everything in a STEP paper in difficult, this is an easy question: Which of the following are true and which false? Justify your answers (i) $$a^{\ln(b)}=b^{\ln(a)}$$, for all $$a,b \gt 0$$. (ii) $$\cos(\sin(\theta))=\sin(\cos(\theta))$$, for all real $$\theta$$. (iii) There exists a polynomial $$P$$ such that $$|P(\theta)-\cos(\theta)| \lt 10^{ -6 }$$ for all real $$\theta$$ (iv) $$x^4+3+x^{-4} \ge 5$$ for all $$x\gt 0$$. Last edited: #### Amer ##### Active member Mar 1, 2012 275 1) True take the ln for both sides 2) False take theta = 0 3) that true using Taylor expansion of cos(theta) 4) How to solve it ? #### Evgeny.Makarov ##### Well-known member MHB Math Scholar Jan 30, 2012 2,492 3) that true using Taylor expansion of cos(theta) Really? 4) How to solve it ? I assume z should be replaced by x. One way is to express $x^4+x^{-4}$ through $x+x^{-1}$. One needs to know that $x+x^{-1}\ge2$ for x > 0. #### chisigma ##### Well-known member Feb 13, 2012 1,704 3) that true using Taylor expansion of cos(theta) The question is that is true for all $\theta$... the function $\cos \theta$ is bounded in $\theta \in \mathbb{R}$, any polinomial $P(\theta)$ which is not a constant is unbounded in $\theta \in \mathbb{R}$... Kind regards $\chi$ $\sigma$ #### CaptainBlack ##### Well-known member Jan 26, 2012 890 1) True take the ln for both sides It is true, but that is not as it stands a valid explanation, you are assuming it true and deriving a truth, which is invalid logic. You need to start with a known truth and from that derive the equality you are seeking to justify. 2) False take theta = 0 Yes. 3) that true using Taylor expansion of cos(theta) No, a Taylor expansion is not a polynomial, and a Taylor polynomial does not satisfy what is to be demonstrated for all $$\theta$$ 4) How to solve it ? $$f(x)=x^4+3+x^{-4}$$ is continuous and differentiable for $$x\gt 0$$, it goes to $$+\infty$$ at $$x=0$$ and as $$x\to \infty$$. It has one stationary point in $$(0,\infty)$$ at $$x=1$$, which therefore must be a minimum and $$f(1)=5$$ CB #### Amer ##### Active member Mar 1, 2012 275 $\ln(a)\ln(b) = \ln(b)\ln(a)$ $\ln(b^{\ln(a)}) = \ln(a^{\ln(b)})$ #### CaptainBlack ##### Well-known member Jan 26, 2012 890 $\ln(a)\ln(b) = \ln(b)\ln(a)$ $\ln(b^{\ln(a)}) = \ln(a^{\ln(b)})$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.932951033115387, "perplexity": 948.7672295174333}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400249545.55/warc/CC-MAIN-20200926231818-20200927021818-00047.warc.gz"}
http://mathhelpforum.com/calculus/33102-boundary-condition.html
## boundary condition What should be the boundary condition that we must take while solving a linear advection equation while the initial condition like u(x,0)=-sin(pi*x) and its domain being bound between [-1,1].........
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9942241311073303, "perplexity": 1038.9683091962593}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542714.38/warc/CC-MAIN-20161202170902-00471-ip-10-31-129-80.ec2.internal.warc.gz"}
https://arxiv.org/abs/1010.0161
math.PR (what is this?) # Title: Taylor expansions of solutions of stochastic partial differential equations with additive noise Abstract: The solution of a parabolic stochastic partial differential equation (SPDE) driven by an infinite-dimensional Brownian motion is in general not a semi-martingale anymore and does in general not satisfy an It\^{o} formula like the solution of a finite-dimensional stochastic ordinary differential equation (SODE). In particular, it is not possible to derive stochastic Taylor expansions as for the solution of a SODE using an iterated application of the It\^{o} formula. Consequently, until recently, only low order numerical approximation results for such a SPDE have been available. Here, the fact that the solution of a SPDE driven by additive noise can be interpreted in the mild sense with integrals involving the exponential of the dominant linear operator in the SPDE provides an alternative approach for deriving stochastic Taylor expansions for the solution of such a SPDE. Essentially, the exponential factor has a mollifying effect and ensures that all integrals take values in the Hilbert space under consideration. The iteration of such integrals allows us to derive stochastic Taylor expansions of arbitrarily high order, which are robust in the sense that they also hold for other types of driving noise processes such as fractional Brownian motion. Combinatorial concepts of trees and woods provide a compact formulation of the Taylor expansions. Comments: Published in at this http URL the Annals of Probability (this http URL) by the Institute of Mathematical Statistics (this http URL) Subjects: Probability (math.PR) Journal reference: Annals of Probability 2010, Vol. 38, No. 2, 532-569 DOI: 10.1214/09-AOP500 Report number: IMS-AOP-AOP500 Cite as: arXiv:1010.0161 [math.PR] (or arXiv:1010.0161v1 [math.PR] for this version) ## Submission history From: Arnulf Jentzen [view email] [v1] Fri, 1 Oct 2010 13:40:58 GMT (205kb)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8253688216209412, "perplexity": 603.855992014247}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813608.70/warc/CC-MAIN-20180221103712-20180221123712-00506.warc.gz"}
http://math.stackexchange.com/questions/64676/orthonormal-system-in-a-hilbert-space/65068
# orthonormal system in a Hilbert space Let $\{e_n\}$ be an orthonormal basis for a Hilbert space $H$. Let $\{f_n\}$ be an orthonormal set in $H$ such that $\sum_{n=1}^{\infty}{\|f_n-e_n\|}<1$. How do I show that $\{f_n\}$ is also an orthonormal basis for $H$? - It's already an orthonormal set, so it suffices to show it's a basis. Perhaps we can proceed by supposing $\langle x,f_n\rangle=0$ for every $f_n$ in the set for some $x\in H$... – anon Sep 15 '11 at 3:44 Since $\mathcal S:=\{f_n,\;n\in\mathbb N\}$ is an orthonormal subset of $H$, it suffices to show that $\mathcal S^\perp=\{0\}.$ Having this in mind, pick $x\in H$ belonging to $\mathcal S^\perp$. Then for every $n\in\mathbb N$ one has $$0=\langle x,f_n\rangle=\langle x,f_n-e_n+e_n\rangle\Rightarrow \langle x,e_n\rangle=\langle x,e_n-f_n\rangle.$$ Now, since $\{e_n\}$ is an orthonormal basis, one has $$x=\sum_{n=1}^{+\infty}\langle x,e_n\rangle e_n=\sum_{n=1}^{+\infty}\langle x,e_n-f_n \rangle e_n.$$ If $x$ were not $0$, then one would obtain a contradiction as follows: $$\|x\|=\sum_{n=1}^{+\infty}|\langle x,e_n-f_n\rangle|\stackrel{C.S.}{\leq}\|x\|\sum_{n=1}^{+\infty}\|e_n-f_n\|.$$ From this last relation, one may divide out by $\|x\|\neq 0$ by our assumption and obtain $$1\leq\sum_{n=1}^{+\infty}\|e_n-f_n\|,$$ but this contradicts the initial hypothesis. Hence $x=0$ and $\mathcal S^{\perp}=\{0\}.$ This concludes the proof. Edit Yes, i think i need some changes, thanks Matthew for pointing it out. Ok here is my fix: $$\|x\|^2=\sum_{n=1}^{+\infty}|\langle x,f_n-e_n \rangle|^2\leq \|x\|^2\sum_{n=1}^{+\infty}\|f_n-e_n\|^2,$$ again by Cauchy Schwarz, and if $x\neq 0$ we can divide out and obtain $$(\diamondsuit)\quad 1\leq \sum_{n=1}^{+\infty}\|f_n-e_n\|^2.$$ Now, since $$\sum_{n=1}^{+\infty}\|f_n-e_n\|<1,$$ readily implies that, for every $n\in\mathbb N$, $$\|f_n-e_n\|<1\Rightarrow \|f_n-e_n\|^2<\|f_n-e_n\|.$$ But this means $$(\diamondsuit)<\sum_{n=1}^{+\infty}\|f_n-e_n\|<1\Rightarrow 1<1.$$ Which is absurd. Again then $x=0$ and we conclude in the same way as before. Shouldn't you have $\|x\|^2 = \sum_n |\langle x,e_n-f_n\rangle|^2$?? – Matthew Daws Sep 16 '11 at 14:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9936960339546204, "perplexity": 84.62245303437012}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049275981.56/warc/CC-MAIN-20160524002115-00124-ip-10-185-217-139.ec2.internal.warc.gz"}
http://fossil.twicetwo.com/arend.pl/info/ec0abfb8ebdfd0b0839df9c266a6b61096ff7800
Check-in [ec0abfb8eb] Not logged in Overview Comment: Added reference to Gacek's PhD thesis in the section on induction. Added screenshot of a completed proof. Tarball | ZIP archive | SQL archive family | ancestors | descendants | both | bc-subst files | file ages | folders ec0abfb8ebdfd0b0839df9c266a6b61096ff7800 andy 2015-04-21 16:26:30 Context 2015-04-21 19:26 The new clausal-expansion system for goals is complete. Closed-Leaf check-in: d8a8e82a90 user: andy tags: bc-subst 16:26 Added reference to Gacek's PhD thesis in the section on induction. Added screenshot of a completed proof. check-in: ec0abfb8eb user: andy tags: bc-subst 16:25 Removed hover styles from premise/conclusion classes (this should be handled in JS, so that it can be disabled when the proof is complete). check-in: 6feebbe110 user: andy tags: bc-subst Changes Changes to report/arend-report.bib. 48 49 50 51 52 53 54 55 56 57 58 59 60 61 author={Page, Rex and Eastlund, Carl and Felleisen, Matthias}, booktitle={Proceedings of the 2008 international workshop on Functional and declarative programming in education}, pages={21--30}, year={2008}, organization={ACM} } @inproceedings{Ford:2004:PEG:964001.964011, author = {Ford, Bryan}, title = {Parsing Expression Grammars: A Recognition-based Syntactic Foundation}, booktitle = {Proceedings of the 31st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages}, series = {POPL '04}, year = {2004}, > > > > > > > > > > > > > 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 author={Page, Rex and Eastlund, Carl and Felleisen, Matthias}, booktitle={Proceedings of the 2008 international workshop on Functional and declarative programming in education}, pages={21--30}, year={2008}, organization={ACM} } @PHDTHESIS{gacek09phd, title = {A Framework for Specifying, Prototyping, and Reasoning about Computational Systems}, author = {Andrew Gacek}, school = {University of Minnesota}, pdf = {http://www.cs.umn.edu/~agacek/pubs/gacek-thesis/gacek-thesis.pdf}, arxiv = {http://arxiv.org/abs/0910.0747}, year = 2009, month = {September}, slides = {http://www.cs.umn.edu/~agacek/pubs/slides/gacek09phd-slides.pdf} } @inproceedings{Ford:2004:PEG:964001.964011, author = {Ford, Bryan}, title = {Parsing Expression Grammars: A Recognition-based Syntactic Foundation}, booktitle = {Proceedings of the 31st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages}, series = {POPL '04}, year = {2004}, Changes to report/arend-report.pdf. cannot compute difference between binary files Changes to report/arend-report.tex. 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 ... 987 988 989 990 991 992 993 994 995 996 997 998 999 1000 1001 1002 1003 1004 .... 1760 1761 1762 1763 1764 1765 1766 1767 1768 1769 1770 1771 1772 1773 1774 \newcommand{\link}[2]{#2\footnote{\url{#1}}} % Citations. Both in the text and in the sidebar. The NoHyper makes the actual % citation link to the bibliography at the end, rather than to the bibentry % in the margin. It also has the unfortunate side effect of making any URLs % in the margin entry not work, but the URLs in the actual bibliography still % work as expected, and they're only a click away. \newcommand{\mcitet}[1]{\citet{#1}\marginnote{\begin{NoHyper}\bibentry{#1}\end{NoHyper}}} \newcommand{\mcitep}[1]{\citep{#1}\marginnote{\begin{NoHyper}\bibentry{#1}\end{NoHyper}}} % Some commands for proof-state objects \newcommand{\hole}{\;?\;} \newcommand{\proof}[3]{#2 \vdash #1 \rightarrow #3} \newcommand{\emptyproof}[1]{\proof{#1}{\Gamma}{\hole{}}} \newcommand{\ctxproof}[1]{\proof{G}{\Gamma,#1}{\hole{}}} ................................................................................ \begin{figure*}[t!] \begin{centering} \includegraphics[width=6.5in]{proof-sample.png} \end{centering} \caption{The proof assistant interface, with an incomplete inductive proof} \label{fig:passist1} \end{figure*} \clearpage \section{Implementation} \newthought{Arend is implemented as} a web-based system, with a server component, written in Prolog and running in the \link{http://swi-prolog.org}{SWI-Prolog} and a browser-based frontend. ................................................................................ multiple antecedents is not allowed) and supports only induction global to a proof (i.e., nested inductions are not allowed, although they can be faked'' by using lemmas). These restrictions imply that the induction hypothesis can be regarded as being global to a proof, thus eliminating the need to restrict the scope of difference induction hypotheses to different branches of the proof tree. Internally, induction is implemented by \emph{goal tagging}. When an inductive proof is declared, a particular goal in the antecedents is selected, by the user, to be the target of the induction.\marginnote{For example, in a proof of $\mathsf{nat}(X) |- \mathsf{add}(X,0,X)$ we would induct on $\mathsf{nat}(X)$.} This goal must be a user goal; it cannot be a built-in operator such as conjunction, disjunction, or unification. The functor of the goal is internally flagged as being big'' (indicated as $\uparrow$) and the induction hypothesis is defined in terms of the same goal, but flagged as | | > > > > > > > > > > > > > | > > 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 ... 987 988 989 990 991 992 993 994 995 996 997 998 999 1000 1001 1002 1003 1004 1005 1006 1007 1008 1009 1010 1011 1012 1013 1014 1015 1016 1017 .... 1773 1774 1775 1776 1777 1778 1779 1780 1781 1782 1783 1784 1785 1786 1787 1788 1789 \newcommand{\link}[2]{#2\footnote{\url{#1}}} % Citations. Both in the text and in the sidebar. The NoHyper makes the actual % citation link to the bibliography at the end, rather than to the bibentry % in the margin. It also has the unfortunate side effect of making any URLs % in the margin entry not work, but the URLs in the actual bibliography still % work as expected, and they're only a click away. \newcommand{\mcitet}[2][]{\citet[#1]{#2}\marginnote{\begin{NoHyper}\bibentry{#2}\end{NoHyper}}} \newcommand{\mcitep}[2][]{\citep[#1]{#2}\marginnote{\begin{NoHyper}\bibentry{#2}\end{NoHyper}}} % Some commands for proof-state objects \newcommand{\hole}{\;?\;} \newcommand{\proof}[3]{#2 \vdash #1 \rightarrow #3} \newcommand{\emptyproof}[1]{\proof{#1}{\Gamma}{\hole{}}} \newcommand{\ctxproof}[1]{\proof{G}{\Gamma,#1}{\hole{}}} ................................................................................ \begin{figure*}[t!] \begin{centering} \includegraphics[width=6.5in]{proof-sample.png} \end{centering} \vspace{1em} \caption{The proof assistant interface, with an incomplete inductive proof} \label{fig:passist1} \end{figure*} \begin{figure*}[t!] \begin{centering} \includegraphics[width=6.5in]{proof-complete.png} \end{centering} \caption{A completed proof} \label{fig:passist2} \end{figure*} \clearpage \section{Implementation} \newthought{Arend is implemented as} a web-based system, with a server component, written in Prolog and running in the \link{http://swi-prolog.org}{SWI-Prolog} and a browser-based frontend. ................................................................................ multiple antecedents is not allowed) and supports only induction global to a proof (i.e., nested inductions are not allowed, although they can be faked'' by using lemmas). These restrictions imply that the induction hypothesis can be regarded as being global to a proof, thus eliminating the need to restrict the scope of difference induction hypotheses to different branches of the proof tree. Internally, induction is implemented by \emph{goal annotation} \mcitep[sec. 5.2]{gacek09phd}. When an inductive proof is declared, a particular goal in the antecedents is selected, by the user, to be the target of the induction.\marginnote{For example, in a proof of $\mathsf{nat}(X) |- \mathsf{add}(X,0,X)$ we would induct on $\mathsf{nat}(X)$.} This goal must be a user goal; it cannot be a built-in operator such as conjunction, disjunction, or unification. The functor of the goal is internally flagged as being big'' (indicated as $\uparrow$) and the induction hypothesis is defined in terms of the same goal, but flagged as
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8448171019554138, "perplexity": 1574.444584707722}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371824409.86/warc/CC-MAIN-20200408202012-20200408232512-00409.warc.gz"}
https://www.physicsforums.com/threads/how-to-find-the-inductance-of-this-circuit.702470/
# Homework Help: How to find the inductance of this circuit 1. Jul 22, 2013 ### MissP.25_5 I got stuck doing this question. Please correct my mistakes and help me out. 1. The angular frequency ω is FIXED to 2 [rad/s] whereas the inductance L is changeable. When L=1/8, resonance occurs in the circuit and the magnitude of current i reaches its minimum value. From this state, L is increased to L=L1 and current i becomes √2 of its minimum value. Find L1. 2. The inductance L is fixed to L1 (found in question 1) while ω is variable. When ω= ω0, the magnitude of current i reaches its minimum value. Find ω0. #### Attached Files: File size: 118.6 KB Views: 100 • ###### IMG_4396.jpg File size: 24.5 KB Views: 98 2. Jul 22, 2013 ### Staff: Mentor Check your value for ZL. (At resonance you'd expect the inductive and capacitive impedances to be complex conjugates (i.e., they'll cancel if added)). Since you're looking for a ratio in part 2, you might as well just choose a convenient value for the voltage source. Letting e = 1 [V] looks promising. What's the current Io at resonance then? 3. Jul 22, 2013 ### MissP.25_5 At resonance, the ZL+ZC is 0 in a series circuit. Is it the same as in this parallell circuit? 4. Jul 22, 2013 ### Staff: Mentor You'll find that they also cancel when added in parallel. Try it: $$Z = \frac{1}{\frac{1}{R} + \frac{1}{ZL} + \frac{1}{ZC}}$$ and at resonance ZC = -ZL ... so ... 5. Jul 22, 2013 ### MissP.25_5 Is the circuit still in resonant state when L is is increased to L1?I guess not, though, cuz then L=1/8 and it's back to square 1. Last edited: Jul 22, 2013 6. Jul 23, 2013 ### MissP.25_5 So in a parallel circuit, the total impedance of ZL and ZC is indeed 0. But when L increases to L1, the circuit is no more at resonance, isn't it? Am I doing this right? The method I am using seems to be too long, and I think it's impossible to get it. #### Attached Files: • ###### IMG_4412.jpg File size: 53.6 KB Views: 97 Last edited: Jul 23, 2013 7. Jul 23, 2013 ### Staff: Mentor I see that the math got a bit hairy pretty quickly. So if I might suggest... rather than going for the total impedance, you can go for the total current right away. This is a parallel circuit so every branch has the same potential difference and the branch currents sum. Choosing a suitable potential for the voltage source, say 1 V at 2 rad/sec, will make summing the currents a piece of cake. 8. Jul 23, 2013 ### MissP.25_5 When finding the sum of the currents, should I take the magnitudes of iC and iL or just leave them be as complex terms? I'm still not used to when to use magnitudes, can you give me some tips? 9. Jul 23, 2013 ### Staff: Mentor Almost always you want to keep everything in complex form. An exception is when you are calculating power, but even then you can do that in complex form as well (calculating the complex power and then extracting the effective power as the real term). This avoids having to remember how to deal with power factors applied to the product of the voltage and current magnitudes 10. Jul 23, 2013 ### MissP.25_5 Ok, I'm done with the calculation. I got L1=1/8, this cannot be right ???? I used your method of using current and then I just equate the terms with its real and imaginary coefficients. I'm not sure if I did it right, though. Last edited: Jul 23, 2013 11. Jul 23, 2013 ### Staff: Mentor You can sum the currents as before, but this time it's the magnitude of the current you're looking for. So the magnitude rises to √2 x the initial current. I'm not seeing 1/8 for L1. 12. Jul 23, 2013 ### MissP.25_5 Of course, cuz L=1/8 is when the circuit is at resonance. I forgot. 13. Jul 23, 2013 ### MissP.25_5 Ok, I fixed it. L=1/6, right? Last edited: Jul 23, 2013 14. Jul 23, 2013 ### Staff: Mentor Yes! 15. Jul 23, 2013 ### MissP.25_5 Now, to find ω0, do I have to use the same method too? 16. Jul 23, 2013 ### Staff: Mentor This time you have all the component values, and in particular, the inductance and capacitance. At resonance, what condition holds for those two components? (it was mentioned earlier). 17. Jul 23, 2013 ### MissP.25_5 ZL=-ZC is the condition. But, how do you know that it's at resonance? It only says that the current reaches its minimum value. But I used current to find omega and got the answer. Look... #### Attached Files: • ###### IMG_4419.jpg File size: 28.1 KB Views: 103 18. Jul 23, 2013 ### Staff: Mentor The only time the current can reach its minimum value is when the circuit is at resonance. That is, the capacitor and inductor impedances mutually cancel and they "disappear" from the circuit. Minimum current for a parallel RLC circuit occurs at resonance. Knowing that, you know that XL = XC; the reactances are equal. (Reactance is magnitude of the impedance) So just equate the reactances of the two reactive components. The result you found is correct, even if you pursued a longer path to it
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8495059013366699, "perplexity": 1939.7191007122574}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867841.63/warc/CC-MAIN-20180526170654-20180526190654-00177.warc.gz"}
http://mathhelpforum.com/advanced-statistics/5845-hypergeometric-probability.html
1. ## Hypergeometric Probability Have a problem: The sizes of animal populations are often estimated by using a capture-tag-recapture method. In this method k animals are captured, tagged, and then released into the population. Some time later n animals are captured, and Y, the number of tagged animals among the n, is noted. The probabilities associated with Y are a function of N, the number of animals in the population, so the observed value of Y contains information on this unknown N. Suppose that k=4 animals are tagged and then released. A sample of n=3 animals is then selected at random from the same population. Find P(Y=1) as a function of N. What value of N will maximize P(Y=1). Note that for the following [a,b] is the combinations rule such that this equals a!/b!(a-b)!. I said to let X be the number of tagged animals among the n. X therefore is a hypergeometric probability distribution with r=4, y=1, n=3, and N=N. Therefore the distribution would be ([4,1]*[N-4,2])/[N,3]. Is this right and is this a function of N. How would you maximize this function? 2. Originally Posted by JaysFan31 Have a problem: The sizes of animal populations are often estimated by using a capture-tag-recapture method. In this method k animals are captured, tagged, and then released into the population. Some time later n animals are captured, and Y, the number of tagged animals among the n, is noted. The probabilities associated with Y are a function of N, the number of animals in the population, so the observed value of Y contains information on this unknown N. Suppose that k=4 animals are tagged and then released. A sample of n=3 animals is then selected at random from the same population. Find P(Y=1) as a function of N. What value of N will maximize P(Y=1). Note that for the following [a,b] is the combinations rule such that this equals a!/b!(a-b)!. I said to let X be the number of tagged animals among the n. X therefore is a hypergeometric probability distribution with r=4, y=1, n=3, and N=N. Therefore the distribution would be ([4,1]*[N-4,2])/[N,3]. Is this right and is this a function of N. How would you maximize this function? This looks right to me. ([4,1]*[N-4,2])/[N,3] is P(Y=1), not the distribution. Writing out this probability as factorials, cancelling where possible, and ignoring any factor not having an N in it, I get P(Y=1) ~ (N-4)(N-5)/N(N-1)(N-2). This isn't nice to maximize using differentiation, so I plugged it into a spreadsheet and tried values. I found N = 10 was the maximizer. I did this in a hurry, so please confirm my calculations. 3. I got 12((N-4)(N-5)/N(N-1)(N-2)). I found the maximum of this to be 3. Can anyone confirm this? 4. Originally Posted by JaysFan31 I got 12((N-4)(N-5)/N(N-1)(N-2)). I found the maximum of this to be 3. Can anyone confirm this? The value 3 cannot be either the maximizer N or the function value at the maximizer. First, the formula would be invalid at N = 3 as it is not possible to take a sample without replacement of size 4 from a population of size 3. Second, the formula is for a probability so its value is between 0 and 1.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8489410877227783, "perplexity": 542.3871331669836}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948550199.46/warc/CC-MAIN-20171214183234-20171214203234-00073.warc.gz"}
http://clay6.com/qa/50139/which-of-the-following-is-not-correctly-matched
Browse Questions # Which of the following is not correctly matched $\begin{array}{1 1}(A)\;\text{Dengue fever-Arbovirus}\\(B)\;\text{Plague-Yersinia pestis}\\(C)\;\text{Syphilis-Trichuris trichura}\\(D)\;\text{Sleeping sickness-Trypanosoma gambiens}\end{array}$ Syphilis-Trichuris trichura Hence (C) is the correct answer.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9835422039031982, "perplexity": 1616.2649113777331}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00233-ip-10-171-10-70.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/598674/jordan-canonical-form-for-a-matrix
# Jordan canonical form for a matrix How do I find the Jordan canonical form and its transitions matrix of this matrix? $$\begin{pmatrix}1&0&0&0\\0&0&1&0\\0&1&0&0\\0&0&0&1\end{pmatrix}$$ The characteristic polynomial is $$(x+1)(x-1)^3$$ and the eigenvectors are for $$x=1$$ we have $$(0,0,0,1)$$, $$(0,1,1,0)$$, $$(1,0,0,0)$$ and for the $$x=-1$$ we have $$(0,-1,1,0)$$. • HINT: For each eigenvalue, the geometric multiplicity agrees with the algebraic multiplicity. – vadim123 Dec 8 '13 at 21:00 • Check the minimal pol. of the matrix is $\;(x-1)(x+1)\;$ and thus it is diagonalizable, what makes its JCF pretty boring...and simple. – DonAntonio Dec 8 '13 at 21:05 $$J = \left[ \begin{array}{rrrr} -1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 \end{array} \right]$$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 8, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9598832726478577, "perplexity": 461.83736636390495}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524679.39/warc/CC-MAIN-20190716160315-20190716182315-00015.warc.gz"}
https://discourse.ladybug.tools/t/sample-case-component-error-missing-sampledict-folder-and-xy-file/7376
# Sample case component error, missing sampleDict folder and .xy file Hi! I was trying to reload a case with the Butterfly_Sample Case component to adjust the test point surface for new results. I have however encountered this error. 1. Solution exception:Could not find a part of the path ‘C:\Users\christopher\butterfly\outdoor_airflow_PB_FlippedRibbed\postProcessing\sampleDict\1000\wind_analysis_u.xy’. The sampleDict folder with the file was not created in the postProcessing folder. I wonder what went wrong? Thanks in advance for the help! 2 Likes I have the same problem when using the sampleCase component. Any one with any updates or solutions? Thanks!! 1 Like I also have the same problem when using the sampleCase component. Some feedback here would be much appreciated! Cheers Josh I’ve the same issue… Did someone solve it ? I also have the same problem. I don’t know what caused it, but I’ll write my solution here for others who encounter the same problem. The last line of the ir.bat file in the project folder was as follows. ``````postProcess | tee log\postProcess.log `````` I modified it as follows and executed the ir.bat file. ``````postProcess -func sampleDict -latestTime | tee log\postProcess.log `````` If you run sampleCase afterwards, the ir.bat file is still not written out correctly, but it will work because it has been sampled once. 1 Like
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8384044766426086, "perplexity": 2569.67626344975}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948765.13/warc/CC-MAIN-20230328042424-20230328072424-00616.warc.gz"}
https://www.physicsforums.com/threads/proving-the-weyl-tensor-is-zero-problem.642584/
# Homework Help: Proving the weyl tensor is zero problem 1. Oct 9, 2012 ### Airsteve0 1. The problem statement, all variables and given/known data Show that all Robertson - Walker models are conformally flat. 2. Relevant equations Robertson Walker Metric: $ds^{2}=a^{2}(t)\left(\frac{dr^{2}}{1-Kr^{2}}+r^{2}(d\theta^{2}+(sin\theta)^{2}d\phi^{2} )\right)-dt^{2}$ Ricci Tensor: $R_{\alpha\beta}=2Kg_{\alpha\beta}$ Ricci Scalar: $R=8K$ Weyl Tensor: $C_{\alpha\beta\gamma\delta}=R_{\alpha\beta\gammaδ}-\frac{1}{2}(g_{\alpha\gamma}R_{\beta\delta}-g_{\alpha\delta}R_{\beta\gamma}-g_{\beta\gamma}R_{\alpha\delta}+g_{\beta\delta}R_{\alpha\gamma}) + \frac{R}{6}(g_{\alpha\gamma}g_{\beta\delta}-g_{\alpha\delta}g_{\beta\gamma})$ 3. The attempt at a solution In order for the models to be conformally flat the weyl tensor must vanish, therefore that is what I have tried to show. By subbing in the values for the Ricci tensor and the Ricci scalar (both of which were given in a lecture by my professor) I arrived at the following expression: $C_{\alpha\beta\gammaδ}=R_{\alpha\beta\gamma\delta}-\frac{2}{3}K(g_{\alpha\gamma}g_{\beta\delta}-g_{\alpha\delta}g_{\beta\gamma})$ However, as you can see I am left with the Riemann tensor undefined and I cannot show the weyl tensor to be zero. Any help is greatly appreciated, thanks! 2. Oct 9, 2012 ### Hypersphere I'm pretty sure your textbook contains the definition of the Riemann tensor, and of the other tensors. Look it up. (You will only need to know the metric to calculate it.) Then, the Ricci tensor is related to the Riemann one through $R_{ij} = R^k_{\, ikj}$, and, finally, the Ricci scalar is simply the trace of the Ricci tensor. So well, I'd recommend you start working from the definitions. It is probably more instructive to derive those forms of the Ricci tensor and scalar yourself, and see if you get the same forms for this specific metric.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.961877167224884, "perplexity": 422.70440061230244}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267867644.88/warc/CC-MAIN-20180625092128-20180625112128-00569.warc.gz"}
http://hal.in2p3.fr/view_by_stamp.php?label=IN2P3&langue=fr&action_todo=view&id=in2p3-00666970&version=1
HAL : in2p3-00666970, version 1 Physical Review C 85 (2012) 014903 System size and energy dependence of near-side di-hadron correlations STAR Collaboration(s) (2012) Two-particle azimuthal ($\Delta\phi$) and pseudorapidity ($\Delta\eta$) correlations using a trigger particle with large transverse momentum ($p_T$) in $d$+Au, Cu+Cu and Au+Au collisions at $\sqrt{s_{{NN}}}$ =\xspace 62.4 GeV and 200~GeV from the STAR experiment at RHIC are presented. The \ns correlation is separated into a jet-like component, narrow in both $\Delta\phi$ and $\Delta\eta$, and the ridge, narrow in $\Delta\phi$ but broad in $\Delta\eta$. Both components are studied as a function of collision centrality, and the jet-like correlation is studied as a function of the trigger and associated $p_T$. The behavior of the jet-like component is remarkably consistent for different collision systems, suggesting it is produced by fragmentation. The width of the jet-like correlation is found to increase with the system size. The ridge, previously observed in Au+Au collisions at $\sqrt{s_{{NN}}}$ = 200 GeV, is also found in Cu+Cu collisions and in collisions at $\sqrt{s_{{NN}}}$ =\xspace 62.4 GeV, but is found to be substantially smaller at $\sqrt{s_{{NN}}}$ =\xspace 62.4 GeV than at $\sqrt{s_{{NN}}}$ = 200 GeV for the same average number of participants ($\langle N_{\mathrm{part}}\rangle$). Measurements of the ridge are compared to models. Thème(s) : Physique/Physique Nucléaire Expérimentale Lien vers le texte intégral : http://inspirehep.net/record/943192 in2p3-00666970, version 1 http://hal.in2p3.fr/in2p3-00666970 oai:hal.in2p3.fr:in2p3-00666970 Contributeur : Dominique Girod <> Soumis le : Lundi 6 Février 2012, 15:44:54 Dernière modification le : Lundi 6 Février 2012, 15:55:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9738638997077942, "perplexity": 3285.924295020632}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997869778.45/warc/CC-MAIN-20140722025749-00157-ip-10-33-131-23.ec2.internal.warc.gz"}
https://www.groundai.com/project/coefficients-of-bosonized-dimer-operators-in-spin-12-xxz-chains-and-their-applications/
Coefficients of bosonized dimer operators in spin-\frac{\boldsymbol{1}}{\boldsymbol{2}} XXZ chains and their applications # Coefficients of bosonized dimer operators in spin-12 XXZ chains and their applications Shintaro Takayoshi Institute for Solid State Physics, University of Tokyo, Kashiwa, Chiba 277-8581, Japan    Masahiro Sato Condensed Matter Theory Laboratory, RIKEN, Wako, Saitama 351-0198, Japan Department of Physics and Mathematics, Aoyama-Gakuin University, Sagamihara, Kanagawa 229-8558, Japan July 11, 2019 ###### Abstract Comparing numerically evaluated excitation gaps of dimerized spin- XXZ chains with the gap formula for the low-energy effective sine-Gordon theory, we determine coefficients and of bosonized dimerization operators in spin- XXZ chains, which are defined as and . We also calculate the coefficients of both spin and dimer operators for the spin- Heisenberg antiferromagnetic chain with a nearest-neighbor coupling and a next-nearest-neighbor coupling . As applications of these coefficients, we present ground-state phase diagrams of dimerized spin chains in a magnetic field and antiferromagnetic spin ladders with a four-spin interaction. The optical conductivity and electric polarization of one-dimensional Mott insulators with Peierls instability are also evaluated quantitatively. ###### pacs: 75.10.Pq, 75.10.Jm, 75.30.Kz, 75.40.Cx ## I Introduction Quantum magnets in one dimension are a basic class of many-body systems in condensed matter and statistical physics (see e.g., Refs. Giamarchi, ; Affleck, ). They have offered various kinds of topics in both experimental and theoretical studies for a long time. In particular, the spin- XXZ chain is a simple though realistic system in this field. The Hamiltonian is defined by HXXZ=J∑j(SxjSxj+1+SyjSyj+1+ΔzSzjSzj+1), (1) where is -component of a spin- operator on -th site, is the exchange coupling constant, and is the anisotropy parameter. This model is exactly solved by integrability methods, Korepin (); Takahashi () and the ground-state phase diagram has been completed. Three phases appear depending on ; the antiferromagnetic (AF) phase with a Néel order (), the critical Tomonaga-Luttinger liquid (TLL) phase (), and the fully polarized phase with (). In and around the TLL phase, the low-energy and long-distance properties can be understood via effective field theory techniques such as bosonization and conformal field theory (CFT). Giamarchi (); Affleck (); Tsvelik (); Gogolin (); Francesco () These theoretical results nicely explain experiments of several quasi one-dimensional (1D) magnets. The deep knowledge of this model is also useful for analyzing plentiful related magnetic systems, such as spin- chains with some perturbations (e.g. external fields, Alcaraz95 () additional magnetic anisotropies, Oshikawa97 (); Affleck99 (); Essler98 (); Kuzmenko09 () dimerization Haldane82 (); Papenbrock03 (); Orignac04 ()), coupled spin chains, Shelton96 (); Kim2000 () spatially anisotropic 2D or 3D spin systems, Starykh04 (); Balents07 (); Starykh07 () etc. A recent direction of studying spin chains is to establish solid correspondences between the model (1) and its effective theory. For example, Lukyanov and his collaborators Lukyanov97 (); Lukyanov99 (); Lukyanov03 () have analytically predicted coefficients of bosonized spin operators in the TLL phase. Hikihara and Furusaki Hikihara98 (); Hikihara04 () have also determined them numerically in the same chains with and without a uniform Zeeman term. Using these results, one can now calculate amplitudes of spin correlation functions as well as their critical exponents. Furthermore, effects of perturbations on an XXZ chain can also be calculated with high accuracy. It therefore becomes possible to quantitatively compare theoretical and experimental results in quasi 1D magnets. The purpose of the present study is to attach a new relationship between the spin- XXZ chain and its bosonized effective theory. Namely, we numerically evaluate coefficients of bosonized dimer operators in the TLL phase of the XXZ chain. Dimer operators , as well as spin operators, are fundamental degrees of freedom in spin- AF chains. In fact, the leading terms of both bosonized spin and dimer operators have the same scaling dimension at the -symmetric AF point (see Sec. II). In Refs. Hikihara98, ; Hikihara04, , Hikihara and Furusaki have used density-matrix renormalization-group (DMRG) method in an efficient manner in order to accurately evaluate coefficients of spin operators of an XXZ chain in a magnetic field. Instead of such a direct powerful method, we utilize the relationship between a dimerized XXZ chain and its effective sine-Gordon theory Essler98 (); Essler04 () to determine the coefficients of dimer operators (defined in Sec. II), i.e., excitation gaps in dimerized spin chains are evaluated by numerical diagonalization method and are compared with the gap formula of the effective sine-Gordon theory. In other words, we derive the information on uniform spin- XXZ chains from dimerized (deformed) chains. Moreover, we also determine the coefficients of both spin and dimer operators for the spin- Heisenberg (i.e., XXX) AF chain with an additional next-nearest-neighbor (NNN) coupling in the similar strategy. As seen in Sec. III.4, evaluated coefficients are more reliable for the - model, since the marginal terms vanish in its effective theory. The plan of this paper is as follows. In Sec. II, we shortly summarize the bosonization of XXZ spin chains. Both the XXZ chain with dimerization and the chain in a staggered magnetic field are mapped to a sine-Gordon model. We also consider the AF Heisenberg chain with NNN coupling . In Sec. III, we explain how to obtain the coefficients of dimer and spin operators by using numerical diagonalization method. The evaluated coefficients are listed in Tables 1 and 2 and Fig. 4. These are the main results of this paper. For comparison, the same dimer coefficients are also calculated by using the formula of the ground-state energy of the sine-Gordon model. We find that the coefficients fixed by the gap formula are more reliable. We apply these coefficients to several systems and physical quantities related to an XXZ chain (dimerized spin chains under a magnetic field, spin ladders with a four-spin exchange and optical response of dimerized 1D Mott insulators) in Sec. IV. Finally our results are summarized in Sec. V. ## Ii Dimerized chain and sine-Gordon model In this section, we explain the relationship between a dimerized XXZ chain and the corresponding sine-Gordon theory in the easy-plane region . XXZ chains in a staggered field and the AF Heisenberg chain with NNN coupling are also discussed. The coefficients of dimer operators are defined in Eq. (7). ### ii.1 Bosonization of spin-12 XXZ chain We first review the effective theory for undimerized spin chain (1). According to the standard strategy, XXZ Hamiltonian (1) is bosonized as HXXZeff=∫dx{v2[K−1(∂xϕ)2+K(∂xθ)2] −vλ2πcos(√16πϕ)+⋯}, (2) in the TLL phase. Here, and are dual scalar fields, which satisfy the commutation relation, [ϕ(x),θ(x′)]=−iϑstep(x−x′), (3) with ( is the lattice spacing). As we see in Eq. (6), is irrelevant in , and becomes marginal at the -symmetric AF Heisenberg point . The coupling constant has been determined exactly. Lukyanov98 (); Lukyanov03 () Two quantities and denote the TLL parameter and spinon velocity, respectively, which can be exactly evaluated from Bethe ansatz: Giamarchi (); Cabra98 () K= π2(π−cos−1Δz)=14πR2=12η, (4a) v= Jaπ√1−Δ2z2cos−1Δz=Jasin(πη)2(1−η). (4b) Here we have introduced new parameters and . The former is the critical exponent of two-point spin correlation functions and used in the discussion below. The latter is called the compactification radius. It fixes the periodicity of fields and as and . Using the scalar fields and , we can obtain the bosonized representation of spin operators: Szj≈ a√π∂xϕ+(−1)ja1cos(√4πϕ)+⋯, (5a) S+j≈ ei√πθ[b0(−1)j+b1cos(√4πϕ)+⋯], (5b) where and are non-universal constants, and some of them with small have been determined accurately in Refs. Lukyanov97, ; Lukyanov99, ; Lukyanov03, ; Hikihara98, ; Hikihara04, . In this formalism, vertex operators are normalized as Lukyanov97 (); Lukyanov99 (); Lukyanov03 () ⟨eiqϕ(x)e−iqϕ(x′)⟩=(a|x−x′|)Kq22πat|x−x′|≫a. (6) This means that the operator has scaling dimension . In addition to the spin operators, the bosonized forms of the dimer operators are known to be Giamarchi (); Affleck (); Tsvelik (); Gogolin () (−1)j(SxjSxj+1+SyjSyj+1)≈ dxysin(√4πϕ)+⋯, (7a) (−1)jSzjSzj+1≈ dzsin(√4πϕ)+⋯. (7b) In contrast to the spin operators, the coefficients and have never been evaluated so far. To determine them is the subject of this paper. It seems to be possible to calculate by utilizing Eq. (5) and operator-product-expansion (OPE) technique, Gogolin (); Tsvelik (); Francesco () but it requires the correct values of all the factors and Hikihara04 () Therefore, we should interpret that the dimer coefficients are independent of spin coefficients and . ### ii.2 Bosonization of dimerized spin chain Next, let us consider a bond-alternating XXZ chain whose Hamiltonian is given as HXXZ\mathchar45δ= J∑j[(1+(−1)jδxy)(SxjSxj+1+SyjSyj+1) +(Δz+(−1)jδz)SzjSzj+1]. (8) In the weak dimerization regime of , the bosonization is applicable and the dimerization terms can be treated perturbatively. From the formula (7), the effective Hamiltonian of Eq. (8) is HXXZ\mathchar45δeff =∫dx{v2[K−1(∂xϕ)2+K(∂xθ)2] +Ja(δxydxy+δzdz)sin(√4πϕ)+⋯}. (9) Here, we have neglected all of the irrelevant terms including . This is nothing but an integrable sine-Gordon model (see e.g., Refs. Essler98, ; Essler04, and references therein). The term has a scaling dimension , and is relevant when , i.e., . In this case, an excitation gap opens and a dimerization occurs. The excitation spectrum of the sine-Gordon model has been known, Essler98 (); Essler04 () and three types of elementary particles appear; a soliton, the corresponding antisoliton, and bound states of the soliton and the antisoliton (called breathers). The soliton and antisoliton have the same mass gap . There exist breathers, in which stands for the integer part of . The mass of soliton and -th breather are related as follows. EBn=2ESsin(nπ2(4η−1)),n=1,⋯,[4η−1]. (10) The breather mass in units of the soliton mass is shown in Fig. 1 as a function of . Note that there is no breather in the ferromagnetic side , and the lightest breather with mass is always heavier than the soliton in the present easy-plane regime. Following Refs. Zamolodchikov95, ; Lukyanov97, , the soliton mass is also analytically represented as ESJ= vJa2√πΓ(18η−2)Γ(24−1/η) ×⎡⎢ ⎢⎣Javπ(δxydxy+δzdz)2Γ(4−1/η4)Γ(14η)⎤⎥ ⎥⎦24−1/η. (11) In addition, the difference between the ground-state energy of the free-boson theory (2) with per site and that of the sine-Gordon theory (9), , has been predicted as Zamolodchikov95 (); Lukyanov97 () ΔEGSJ=Efree−ESGJ=14vJa(JavESJ)2tan(π214η−1). (12) However, we should note that the above formula is invalid for the ferromagnetic side () since it diverges at the XY point (). A similar sine-Gordon model also emerges in spin- XXZ chains in a staggered field, Hstag=HXXZ+∑j(−1)jhsSzj. (13) The staggered field induces a relevant perturbation . Therefore, the resultant effective Hamiltonian is Hstageff=HXXZeff+∫dxhsaa1cos(√4πϕ). (14) If we redefine the scalar field as , the form of Eq. (14) becomes equivalent to that of Eq. (9). Thus, the soliton gap of the model (14) is equal to Eq. (11) with the replacement of . Namely the soliton gap of the model (14) is given by ESJ= vJa2√πΓ(18η−2)Γ(24−1/η) ×⎡⎢ ⎢⎣Javπ(hsa1)2JΓ(4−1/η4)Γ(14η)⎤⎥ ⎥⎦24−1/η. (15) This type of staggered-field induced gaps has been observed in some quasi 1D magnets with an alternating gyromagnetic tensor or Dzyaloshinskii-Moriya interaction such as Cu benzoate. Oshikawa97 (); Affleck99 (); Essler98 (); Kuzmenko09 (); Dender97 () Masses of the soliton, antisoliton and breathers are related to the excitation gaps of the original lattice systems, Eqs. (8) and (13). The soliton and antisoliton correspond to the lowest excitations which change the component of total spin by . On the other hand, the lightest breather is regarded as the lowest excitation with . At the -symmetric AF point , there are three breathers. The soliton, antisoliton and lightest breather are degenerate and form the spin-1 triplet excitations (so-called magnons). The second lightest breather is interpreted as the singlet excitation with . In the ferromagnetic regime , where any breather disappears, the lowest soliton-antisoliton scattering state would correspond to the excitation gap in the sector of . ### ii.3 J-J2 antiferromagnetic spin chain In the previous two subsections, we have completely neglected effects of irrelevant perturbations in the low-energy effective theory. However, as already noted, the term becomes nearly marginal when the anisotropy approaches unity. In this case, the term is expected to affect several physical quantities. Actually, such effects have been studied in both the models (8) [Ref. Orignac04, ] and (13) [Refs. Oshikawa97, ; Affleck99, ]. It is known Haldane82 () that a small AF NNN coupling decreases the value of in the -symmetric AF Heisenberg chain. Okamoto and Nomura Okamoto92 () have shown that the marginal interaction vanishes, i.e., in the following model: Hnnn=∑j(JSj⋅Sj+1+J2Sj⋅Sj+2), (16) with . On the axis, this model is located at the Kosterlitz-Thouless transition point between the TLL and a spontaneously dimerized phase. From this fact, if we replace with in the -symmetric models (8) and (13), namely, if we consider the following models: ~HXXX\mathchar45δ= Hnnn+∑j(−1)jδJSj⋅Sj+1, (17a) ~Hstag= Hnnn+∑j(−1)jhsSzj, (17b) then their effective theories are much closer to a pure sine-Gordon model. In other words, the predictions from the sine-Gordon model, such as Eqs. (11) and (15), become more reliable. ## Iii Coefficients of Dimer and Spin Operators From the discussions in Sec. II, one can readily find a way of extracting the values of and in Eqs. (7) and (5) as follows. We first calculate some low-energy levels in and sectors of the models (8), (13) and (17) by means of numerical diagonalization method. Since all the Hamiltonians (8), (13) and (17) commute with , the numerical diagonalization can be performed in the Hilbert subspace with each fixed . In order to extrapolate gaps to the thermodynamic limit with reasonable accuracy, we use appropriate finite-size scaling methods Cardy84 (); Cardy86 (); Cardy86b (); Shanks55 () for spin chains under periodic boundary condition (total number of sites , 10, , 28, 30). Secondly, the coefficients and of the spin- XXZ chain and the - chain are determined via the comparison between the sine-Gordon gap formula (11) and numerically evaluated spin gaps for various values of and . In this procedure, (as already mentioned) the energy difference between the lowest (i.e., ground-state) and the second lowest levels of the sector (gap with ) and that between the ground-state level and the lowest level of the sector (gap with ) are respectively interpreted as the breather (or soliton-antisoliton scattering state) and soliton masses in the sine-Gordon scheme. ### iii.1 TLL phase and Numerical diagonalization In this subsection, we focus on the TLL phase of uniform spin- XXZ chains (1) and test the reliability of our numerical diagonalization. The low-energy properties are described by Eq. (2), which is a free boson theory (i.e., CFT with central charge ) with some irrelevant perturbations. Generally, the finite-size scaling formula for the excitation spectrum in any CFT has been proved Cardy84 (); Cardy86 () to be ΔEO≡EO−E0=2πvLa[O]+⋯. (18) Here and are respectively the ground-state energy and the energy of an excited state generating from a primary field in the given CFT. Remaining quantities , , and are the scaling dimension of the operator , the excitation velocity and the system length, respectively. In the case of the spin chain (1), the bosonization formula (5) indicates that and correspond to the excitation energies in the and sectors, respectively. The irrelevant perturbations can also contribute to the finite-size correction to excitation energies. From the and translational symmetries of the XXZ chain (1), one can show that the finite-size gap has no significant modification from the perturbations, while the correction to is proportional to . Therefore, the following finite-size scaling formulas are predicted: ΔEΔSztot=±1≈2πvLa14K+⋯, (19a) ΔEΔSztot=0≈2πvLaK+c0L1−4K+⋯, (19b) with being a non-universal constant. Here we have used and . At the -symmetric AF point , holds and the marginal term modifies the scaling form of the spin gap. The marginal term is known to yield a logarithmic correction as follows: Cardy86b () ΔEsu2≈2πvLa(12+c1lnL+c2(lnL)2+⋯). (20) Here are non-universal constants. As an example, numerically evaluated gaps with and in the case of are respectively represented as circles and triangles in Fig. 2(a). Circles are nicely fitted by the solid curve . This result is consistent with the fact that an easy-plane anisotropic XXZ model is gapless in the thermodynamic limit and that the exact coefficient of the term is at . Similarly, triangles can be fitted by where . The factor 5.982 of the term is very close to . The spin gap at -symmetric point is also represented in Fig. 2(b). Following the formula (20), we can correctly determine the fitting curve , in which the factor of the second term is nearly equal to . These results support the reliability of our numerical diagonalization. We note that a more precise finite-size scaling analysis for AF Heisenberg model has been performed in Ref. Nomura93, . ### iii.2 Dimer coefficients of XY model Next, let us move onto the evaluation of excitation gaps in dimerized XXZ chains. In this case, since the system is not critical, the above finite-size scaling based on CFT cannot be applied. Instead, we utilize Aitken-Shanks method Shanks55 () to extrapolate our numerical data to the values in the thermodynamic limit. In this subsection, we consider a special dimerized XY chain with . It is mapped to a solvable free fermion system through Jordan-Wigner transformation. Therefore, our numerically determined coefficients in Eq. (7) can be compared with the exact value. The lowest energy gap with , which corresponds to the soliton mass , is exactly evaluated as ΔEΔSztot=±1/J=δxy. (21) Comparing Eq. (21) with Eq. (11), we obtain the exact coefficient dxy=1/π=0.3183 (22) at the XY case . The exact solution also tells us that the excitation gap with is ΔEΔSztot=0/J=2δxy. (23) This is consistent with the sine-Gordon prediction that any breather disappears and the relation holds just at the XY point . Figure 3 shows the comparison between the energy gap calculated by numerical diagonalization with Aitken-Shanks process and Eq. (21) [or Eq. (23)]. Except for in the weak dimerized regime , numerically calculated gaps coincide well with the exact value. We have found that when becomes smaller, the precision of Aitken-Shanks method is decreased due to a large size dependence of gaps. ### iii.3 Dimer coefficients of XXZ model In the easy-plane region , any generic analytical way of determining the coefficients in Eq. (7) has never been known except for the above special point . To obtain (respectively ), we numerically calculate excitation gaps at the points , 0.1, , 0.3 with fixing . Although both and are applicable to determine in principle, we use only the latter gap since it more smoothly converges to its thermodynamic-limit value via Aitken-Shanks process, compared to the former. In fact, Eq. (19) suggests that is subject to effects of irrelevant perturbations and therefore contains complicated finite-size corrections. Coefficients () can be determined for each () from Eq. (11). Since the field theory result (11) is generally more reliable as the perturbation is smaller, we should compare Eq. (11) with excitation gaps determined at sufficiently small values of . However, the extrapolation to thermodynamic limit by Aitken-Shanks method is less precise in such a small dimerization region mainly due to large finite-size effects. Papenbrock03 (); Orignac04 () Therefore, we adopt coefficients extracted from the gaps at relatively large dimerization and , and they are listed in Table 1: the values outside [inside] parentheses are the data for [0.1]. The anisotropy dependence of the same data is depicted in Fig. 4. The data in Table 1 and Fig. 4 are the main result of this paper. The difference between outside and inside the parentheses in Table 1 could be interpreted as the ”strength” of irrelevant perturbations neglected in the effective sine-Gordon theory or the ”error” of our numerical strategy. The neglected operators must bring a renormalization of coefficients , and the ”error” would become larger as the system approaches the Heisenberg point since (as already mentioned) the term becomes marginal at the point. We here discuss the validity of the numerically determined in Table 1 and Fig. 4. Table 1 shows that in the wide range , the difference (error) between outside and inside the parentheses is less than 8 . As expected, one finds that the error gradually increases when the anisotropy approaches unity. Similarly, the error is large in the deeply ferromagnetic regime . This is naturally understood from the fact that as is negatively increased, the dimerization term becomes less relevant and effects of other irrelevant terms is relatively strong. Indeed, for (), the dimerization does not yield any spin gap and our method of determining cannot be used. Furthermore, it is worth noting that the spin gap is convex downward as a function of dimerization in the ferromagnetic side , and the accuracy of the fitting therefore depreciates. In addition to coefficients , let us examine dimerization gaps and the quality of fitting by Eq. (11). Excitation gaps for are shown in Fig. 5 as an example. Remarkably, both soliton-gap curves (11) with the values outside and inside the parentheses in Table 1 fit the numerical data in the broad region with reasonable accuracy. The former solid curve is slightly better that the latter. The breather gaps and corresponding fitting curves are also shown in Fig. 5. This breather curve is determined by combining the solid curve (11) and the soliton-breather relation (10). It slightly deviates from numerical data, especially, in a relatively large dimerization regime . As mentioned above, this deviation would be attributed to irrelevant perturbations. The breather-soliton mass ratio [see Eq. (10)] in the sine-Gordon model (9) and the numerically evaluated are shown in Fig. 6. These two values are in good agreement with each other in the wide parameter region , although their difference becomes slightly larger in the region , which includes the point in Fig. 5. Gaps for dimerized XXZ chains with several values of both and are plotted in Fig. 7. It shows that the numerical data are quantitatively fitted by the single gap formula (11). All of the results in Figs. 5-7 indicates that a simple sine-Gordon model (9) can describe the low-energy physics of the dimerized spin chain (8) with reasonable accuracy in the wide easy-plane regime. This also supports the validity of our numerical approach for fixing the coefficients . ### iii.4 Dimer coefficients of SU(2)-symmetric models At the -symmetric AF point, the term in the effective Hamiltonian (2) becomes marginal and induces logarithmic corrections to several physical quantities. Such a logarithmic fashion often makes the accuracy of numerical methods decrease. Instead of numerical approaches, using the asymptotic form of the spin correlation function Affleck98 () and OPE technique, Gogolin (); Tsvelik () Orignac Orignac04 () has predicted dxy=2dz=2π2(π2)1/4=0.2269 (24) at the -symmetric point. Substituting Eq. (24) into Eq. (11), the spin gap in a -symmetric AF chain with dimerization () is determined as ΔEsu2/J=1.723δ2/3. (25) The marginal term however produces a correction to this result. It has been shown in Ref. Orignac04, that the spin gap in the model is more nicely fitted with ΔEsu2/J=1.723δ2/3(1+0.147ln∣∣0.1616δ∣∣)1/2, (26) from the renormalization-group argument. As can be seen from Eq. (26), the logarithmic correction is not significantly large for the spin gap. We may therefore apply the way based on the sine-Gordon model in Sec. III.3 even for the present AF Heisenberg model. The resultant data are listed in the first line of Table 1. Evaluated coefficients (0.204) and (0.097) are fairly close to the results of Eq. (24). This suggests that the effect of the marginal operator on the spin gap is really small. We should also note that is approximately realized, which is required from the symmetry. The numerically calculated spin gap , Eq. (26), and the curve of the gap formula (11) are shown in Fig. 8(a). It is found that even the curve without any logarithmic correction can fit the numerical data within semi-quantitative level. At least, parameters at the -symmetric point can be regarded as effective coupling constants when we naively approximate a dimerized Heisenberg chain as a simple sine-Gordon model. As discussed in Sec. II.3, logarithmic corrections vanish in the - model (16) due to the absence of the marginal operator. As expected, Fig. 8(b) shows that the spin gap is accurately fitted by the sine-Gordon gap formula (11) in the wide range . Therefore, the coefficients of the - model (the final line of Table 1) are highly reliable. Remarkably, the difference between the values outside and inside the parentheses is much smaller than that of the Heisenberg model (the first and last line of Table 1). Here, to determine of the - model, we have used its spinon velocity , which has been evaluated in Ref. Okamoto97, . ### iii.5 Coefficients of spin operator In this subsection, we discuss the spin-operator coefficient in Eq. (5). Although for the easy-plane XXZ model has been evaluated analytically Lukyanov97 (); Lukyanov99 (); Lukyanov03 () and numerically, Hikihara98 (); Hikihara04 () those for the -symmetric Heisenberg chain and the - model have never been studied. The existent data also help us to check the validity of our method. From the bosonization formula (5), the -component spin correlation function has the following asymptotic form: ⟨SzjSzj′⟩=−14π2η|j−j′|2+Az1(−1)j−j′|j−j′|1/η+⋯, (27) in the easy-plane TLL phase. The amplitude is related to as Az1=a21/2. (28) Lukyanov and his collaborators Lukyanov97 (); Lukyanov99 () have predicted Az1=2π2⎡⎢⎣Γ(η2−2η)2√πΓ(12−2η)⎤⎥⎦1/η ×exp[∫∞0dtt(sinh[(2η−1)t]sinh(ηt)cosh[(1−η)t]−2η−1ηe−2t)]. (29) The same amplitude has been calculated by using DMRG in Refs. Hikihara98, ; Hikihara04, . In order to determine , we use XXZ models in a staggered field (13). Following the similar way to Sec. III.3, we can extract the coefficient by fitting numerically evaluated gaps of the model (13) through the sine-Gordon gap formula (15). We numerically estimate the gaps at , 0.02, , 0.09, 0.1, 0.2, and 0.3 via Aitken-Shanks method. The results are listed in column (C) of Table 2. Similarly to the case of dimerization, we adopt spin gaps at relatively large staggered fields and to determine the coefficients . The value outside (inside) the parentheses in Table 2 corresponds to fixed at (0.3). Note that the XY model in a staggered field is solvable through Jordan-Wigner transformation, and as a result the coefficient is exactly evaluated as a1=1/π=0.3183. (30) The table clearly shows that the values at are closer to those of the previous prediction in Refs. Lukyanov97, ; Lukyanov99, ; Lukyanov03, ; Hikihara98, ; Hikihara04, . We emphasize that our results gradually deviate from the analytical prediction from Eq. (29) as the system approaches the -symmetric point. The same property also appears in the DMRG results in Refs. Hikihara98, ; Hikihara04, . Actually, in Eq. (29) diverges when . However, the bosonization formula (5) for spin operators must be still used even around . Thus we should realize that the relation (28) is broken and remains to be finite at the -symmetric point. Figure 9 represents the numerically evaluated gaps , and three fitting curves fixed by (A) and (C) outside and inside the parentheses in Table 2. Our coefficient successfully fits the numerical data semi-quantitatively in the wide regime , while the curve of (A) is valid only in an extremely weak staggered-field regime . This implies that when is near unity, the field theory description based on Eqs. (28) and (29) is valid only in a quite narrower region for the present staggered-field case compared to the case of dimerized spin chain. On the other hand, Fig. 9 also suggests that if we use (C) in Table 2 as the effective coefficient of bosonized spin operator instead of (A) and (B), the XXZ chain in a staggered field (13) may be approximated by a simple sine-Gordon model in wide region . At the -symmetric point , a logarithmic correction to staggered-field induced gaps is expected to appear due to the marginal perturbation. This makes it difficult to extract the value within the present sine-Gordon framework. According to the prediction in Ref. Orignac04, based on the asymptotic form of spin correlation function, Affleck98 () is given by a1=1π(π2)1/4=0.3564 (31) at the -symmetric point, where is imposed. The spin gap in AF Heisenberg chains in a staggered field ( with ) is thus determined as ΔEΔSztot=±1/J=1.777(hs/J)2/3. (32) A more correct gap formula including the logarithmic correction has been developed in Refs. Oshikawa97, ; Affleck99, as follows: ΔEΔSztot=±1/J=1.85(hs/J)2/3[ln(J/hs)]1/6. (33) In Fig. 10(a), the numerically evaluated spin gaps, Eq. (33), and the fitting curve with outside the parentheses in column (C) are drawn. One finds that both curves agree well with the numerical data in the weak-field regime , while they start to deviate from the data in the stronger-field regime. This suggests that even at the -symmetric point, a simple sine-Gordon description for the model (13) is applicable in the relatively wide region , if the coefficient outside the parentheses in column (C) is adopted. In the same way as the final paragraph in Sec.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9420537948608398, "perplexity": 1416.7932973985407}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439740733.1/warc/CC-MAIN-20200815065105-20200815095105-00035.warc.gz"}
http://math.stackexchange.com/questions/844818/is-the-sum-of-all-natural-numbers-countable
# Is the sum of all natural numbers countable? I do not even know if the question makes sense. The point is rather simply. If I have the sum of all natural numbers, $$\sum_{n\in \mathbb{N}}n$$ this is clearly "equal to infinity". But since almost a century ago, we know that there are (a lot of) different "infinities". So, is this sum equal to something countable or something bigger? I tried to look for references, but couldn't find anything and, since I am not an expert in Logic, Set Theory or Foundations of Mathematics, I thought that it would be good to ask here. PS: This question is about sum of cardinals. - It doesn't make sense. Sets are countable, numbers are not. – Thomas Andrews Jun 23 '14 at 16:13 Watch out dadexix! You have posted a divergent series. It's impossible to get everyone on stackexchange to come to any sort of agreement about these objects. Half of your answers will be "It doesn't make sense since it diverges!" – Joel Jun 23 '14 at 16:14 In general, it is a good idea to (1) include vital information when you write the question; (2) not accept the first answer as soon as possible. (Hurkyl's answer is very good, and I have no qualms against it being accepted, but it's often a good thing to let a question sit for more than fifteen minutes before accepting an answer.) – Asaf Karagila Jun 23 '14 at 16:48 Also, it's not "almost a century ago", it's almost a century and a half ago. We know about different types of infinities since the 1880s (and slightly before that, if you want to be strict). Almost a century ago would put you roughly in the 1930s, when set theory has been well-developed and its modern version has established roots. – Asaf Karagila Jun 23 '14 at 17:00 Luckily "almost" is not well-defined, and in terms of mathematical progress, I would say that 80 years is almost a century. Also, we didn't learn these things the day that Cantor died, or just a couple of years before he did. We learned these things about infinity when he was much much younger. – Asaf Karagila Jun 23 '14 at 17:29 The $\infty$ from calculus has nothing to do with cardinality. Now, it is possible to define a sum of cardinal numbers and use that instead of the infinite sum from calculus. If you do that, you do indeed have $$\sum_{n \in \mathbf{N}} n = \aleph_0$$ A quick proof of this is that we know the sum is: $$\aleph_0 \leq \sum_{n \in \mathbf{N}} n \leq \sum_{n \in \mathbf{N}} \aleph_0 = |\mathbf{N}| \cdot \aleph_0 = \aleph_0 \cdot \aleph_0 = \aleph_0$$ (the first inequality is because we know the sum is not finite, and $\aleph_0$ is the smallest infinite cardinal) For reference, the definition of the sum $$\sum_{i \in I} \alpha_i$$ where the $\alpha_i$ are cardinal numbers is to choose disjoint sets $S_i$ with $|S_i| = \alpha_i$, and then we define $$\sum_{i \in I} \alpha_i = \left| \bigcup_{i \in I} S_i \right|$$ A formulaic way to choose the $S_i$ is as the Cartesian product $S_i = \{ i \} \times \alpha_i$ (where I'm assuming we've defined things so that $\alpha_i$ denotes a specific set). That is, $S_i$ is the set of ordered pairs $(i,a)$ with $a \in \alpha_i$. - If I remember correctly the sum on cardinals coincides, on the natural numbers, with the sum from Calculus, is it right? – dadexix86 Jun 23 '14 at 16:12 @dadexix: For a finite sum, yes. More precisely, if I let $\widehat{n}$ denote the cardinal number corresponding to the natural number $n$, then $$S = \sum_{i=0}^{n-1} a_i \Longleftrightarrow \widehat{S} = \sum_{i \in \widehat{n}} \widehat{a}_i$$ – Hurkyl Jun 23 '14 at 16:15 Ok thanks. Can you please provide me references for the fact that cardinal sum as above converges to $\aleph_0$? – dadexix86 Jun 23 '14 at 16:18 @dadexix: I've added a calculation – Hurkyl Jun 23 '14 at 16:20 A sum of cardinal numbers is not defined in terms of limits or "convergence" (you used that word above), @dadexix86 Indeed, with sums of cardinal numbers, there is no inherent ordering on the summands necessary, and so no partial sums or epsilon-deltas. – blue Jun 24 '14 at 18:13 Since we want to talk about the sum of cardinals, we first need to make sure that we know what is an infinite sum of cardinals. ## And before that we need to make it perfectly clear. These are not natural numbers anymore. These are finite cardinals. Suppose that $I$ is an index set, and for each $i\in I$, we have a cardinal $a_i$, then $\sum\limits_{i\in I} a_i$ is the cardinality of $\bigcup\limits_{i\in I}A_i$, where $\{A_i\mid i\in I\}$ is a set of pairwise disjoint sets, and $|A_i|=a_i$. But why is this well-defined? Meaning, given two cardinals, $a,b$ we know that $a+b$ is well defined. If $A,A'$ have cardinality $a$ and $B,B'$ have cardinality $b$, and $A\cap B=A'\cap B'=\varnothing$, then $|A\cup B|=|A'\cup B'|$. How do we prove that? We pick some bijection from $A$ to $A'$ and from $B$ to $B'$ and we show that the union of these bijections is a bijection between the union of the sets. Well that's fine and dandy. But what happens when we have an infinite sum? Well, if we have an infinite sum then we need to make infinitely many choices. And here we have to appeal to the axiom of choice. But let us, for a moment, assume the axiom of choice holds. In this case the same idea can be applied to the infinite case, we choose bijections as before and show that every two unions of these cardinalities will have the same size. So in order to calculate what is $\sum\limits_{n\in\Bbb N} n$ we need to find a set which can be partitioned into infinitely many parts, each part having the cardinality of a different finite set. We can do that explicitly with $\Bbb N$ (as another answer shows), or we can use theorems to show that indeed $\Bbb N$ itself is such set. In either case, we have that $\sum\limits_{n\in\Bbb N}n=\aleph_0$. We can also use other theorems about cardinal arithmetic to bound this sum from above and below by $\aleph_0$. For example, $$\aleph_0=\sum_{n\in\Bbb N}1\leq\sum_{n\in\Bbb N}n\leq\sum_{n\in\Bbb N}\aleph_0=\aleph_0.$$ The first equality holds because $\Bbb N$ is the union of $\aleph_0$ singletons; the second and third are obvious; and the final equality is true because in the presence of the axiom of choice repeated sums can be turn into a multiplication, so this is just $(\aleph_0)^2=\aleph_0$. But what would happen if we decide not to accept the axiom of choice? Well, then we can't necessarily guarantee that we can choose bijections when considering infinite sums of cardinals. Could that affect the result? Yes, yes it can. Consider if you will the case where there exists a Russell set, namely an infinite set which can be written as a countable union of pairwise disjoint pairs, but there is no function which chooses one element from each pair. Let $S$ be such set, and $\{S_n\mid n\in\Bbb N\}$ be such partition. We immediately observe that $S$ is not countable, despite being a countable union of pairs. Why? If it were countable, then we could have enumerated its elements and choose from each $S_n$ the one whose index is smaller. Now the sum $\sum\limits_{n\in\Bbb N}2$ may depend whether or not we choose the $n$-th pair as $S_n$ or as $\{2n,2n+1\}$. In the former case we have a countable union of sets of size $2$, whose size is not countable; in the latter case we have a countable union of sets of size $2$, whose size is countable. So the infinite sum is not well-defined. Of course, from this we can easily create an example where there are sets of size $n$ whose union is not countable. Simply inflate the $S_n$'s by adding natural numbers. The union of these inflated sets will have $S$ as a subset, so it couldn't possibly be countable (since subsets of a countable set are countable themselves); and on the other hand, well... $\Bbb N$ is the union of $\aleph_0$ finite sets of increasing size as before. (And all that is left is that you believe me that it is consistent that a Russell set exists, and that is consistent with the failure of the axiom of choice.) - +1, I really enjoyed reading your answer and learnt a lot from it! (I have no knowledge of set theory, so this was a very interesting read for me) – Shaktal Jun 23 '14 at 18:17 Shaktal, My pleasure! – Asaf Karagila Jun 23 '14 at 18:22 What's exactly the difference between a natural number and a finite cardinal? I am sure that in my courses someone defined the firsts as the seconds. – dadexix86 Jun 23 '14 at 18:40 @dadexix86: The context. In the context of the natural numbers there is no such thing as an infinite sum. In the context of the cardinal numbers, there is such thing. This is the same as dividing by $2$. You can't divide $1$ by $2$ in the context of the natural numbers, but you can in the context of the rational numbers. – Asaf Karagila Jun 23 '14 at 19:13 @Sasho: For this specific case? The axiom of countable choice from families of finite sets would suffice. In general for having infinite sums of cardinals you need more and more choice. – Asaf Karagila Jun 25 '14 at 14:48 Issues of notation and subtleties of different notions of infinity aside, your intuition is correct. We could "set-ify" this sum as follows: $$\mathbb{N} = \{1\} \cup \{2,3\}\cup \{4,5,6\} \cup \{7,8,9,10\} \cup\cdots$$ - You need to define precisely what is meant by that sum, or the question is simply unanswerable (it has no meaning). In the context of cardinalities, $\sum_{i\in I} |A_i|$ is understood as the cardinality of the disjoint union of the $A_i$. In that sense, your sum is indeed countable (equals $\aleph_0$). In the context of ordinals, the sum is understood as the ordinal corresponding to the well-ordering of the set resulting from ordering the finite ordinals (lexicographically), one after the other. This is the ordinal $\omega$. In the sense of analysis, the sum diverges. Within the extended reals, the sum is $+\infty$. It makes no sense to equate $+\infty$ and $\omega$, and the fact that the ordinal $\omega$ coincides with the cardinal $\aleph_0$ is pretty much an accident in this case, not the indication of some deep underlying principle. All three contexts are different. (That said, if $I$ is an ordinal, and the $\alpha_i$ are all ordinals, the ordinal sum $\sum_{i\in I}\alpha_i$ has cardinality the cardinal sum $\sum_{i\in I}|\alpha_i|$.) - The series diverges, so we can't assign a number to it. Notice "countability" refers to sets, not numbers. - That is not true in general. It depends on your notion of sumability. The series the OP has posted is commonly assigned the value $-1/12$ where this is viewed as the residue of $\zeta(z)$ at $z=-1$. On the other hand, I agree that sums are often assigned real or complex numbers, and so countability does not make sense in that context. – Joel Jun 23 '14 at 16:12 There may be contexts in which it makes sense to assign a real or complex value to a series that diverges, but in any case there isn't 1) A definitive value we can assign to that series and 2) A definitive way to assign a cardinality to that series. – Carry on Smiling Jun 23 '14 at 16:15 @Bananarama: What do you mean by "there isn't a definitive value we can assign to that series"? -1/12 is a pretty definitive value. – Mehrdad Jun 24 '14 at 21:28 @Mehrdad What do you mean there isn't a definitive card to choose out of a deck? The ace of spades is a pretty definitive card. But there is more than one card that can be drawn. There is more than one kind of summability method, and they can give different values. In certain contexts, zeta regularization is the method (physics), but not absolutely without context. – blue Jun 24 '14 at 22:04 @blue: I mean, do you think you can come up with any sensible definition of summation that will assign a different value than -1/12 to this series? I'd like to see what other value you can assign to it... – Mehrdad Jun 24 '14 at 22:13 Your first instinct is correct: your question as stated doesn't make sense. However, here is one possible interpretation which gives your question real meaning. Take a collection $\{A_n\}$ of disjoint finite sets where $A_n$ has $n$ elements. Then your sum in some sense "is" the cardinality of the union $$\bigcup_{n=1}^\infty A_n.$$ A countable union of finite sets is countable, so in that sense your sum "is" countable. Edit: This is essentially the same as Hurkyl's answer. - Thank you for the answer! :) – dadexix86 Jun 23 '14 at 16:21 WARNING: the mathematical manipulations below are wrong. I have copied Asaf Karagila's explanation as to why. I believe this post has now an educational value not as an answer but as something one should not do -and why. ORIGINAL POST Since learning is a good thing, can somebody please point to me all (or at least some) of the ...cardinal sins I am committing by writing $$\sum_{n\in \mathbb{N}}n = \lim_{k\rightarrow \infty}\sum_{i=1}^ki = \lim_{k\rightarrow \infty} \frac 12 k(k+1) = \frac 12 \lim_{k\rightarrow \infty} (k^2 + k)$$ $$=\frac 12\cdot [(\aleph_0)^2+\aleph_0] = \frac 12\cdot [\aleph_0+\aleph_0] = \aleph_0$$ ... to which Asaf Karagila commented It is a horrible horrible thing to use limits to talk about infinite sums of cardinals. Because then you create this illusion that cardinal operations are continuous. But lo and behold, $\lim2^n=\aleph_0$ where as $2^{\lim n}=2^{\aleph_0}$. I have written a couple of answers on this matter before...Finding a limit using arithmetic over cardinals Later User @blue downvoted the post explaining his approach to the matter as follows: As I see it, that's a serious and respected approach, and a thoughtful use of the downvoting tool. - It is a horrible horrible thing to use limits to talk about infinite sums of cardinals. Because then you create this illusion that cardinal operations are continuous. But lo and behold, $\lim 2^n=\aleph_0$ where as $2^{\lim n}=2^{\aleph_0}$. I have written a couple of answers on this matter before. – Asaf Karagila Jun 23 '14 at 20:12 @AsafKaragila Thanks, I do appreciate. I will look up these past answers of yours. – Alecos Papadopoulos Jun 23 '14 at 20:20 math.stackexchange.com/q/532803/622 Might be the suitable discussion here. – Asaf Karagila Jun 23 '14 at 20:22 @AsafKaragila I am going. – Alecos Papadopoulos Jun 23 '14 at 20:22 (And possibly the links there too!) – Asaf Karagila Jun 23 '14 at 20:24 This answer may be mathematically bush league, but here it goes . . . In finance, I can tell you the price of a security that gives off an infinite number of dividends. I'd do it this way . . . P = Div1/Discount Rate Where Div1 is the dividend next period (usually a year) divided by a discount rate. The British once issued these securities -- they were called consoles. The price represented the sum of all future dividends, which were usually natural natural numbers, but could be anything that was positive. Prices were widely quoted, so if you had a price and Div1, which is/was usually available, you could figure out the discount rate. So, the quoted price represented the sum of all dividends from next period until eternity. If if all dividend are natural numbers you can easily find their sum by applying the discount rate you found to new issues coming to market. So, the short answer is, yes, they are absolutely summable. From the other fascinating answers that are way out of my league, they are likely countable individually too. - The mathematical definition of countable is that there exists a 1 to 1 relationship between all items in the set with the natural number system. Since the example of adding the numbers in the natural number set will yield a natural number, then the summation operation is valid for natural numbers. So since the set of one number (the summation) maps to a number in the natural number set, it is considered "countably infinite." (proofs left as an exercise to the reader) For those still scratching their heads, an example of a set that is not countably infinite is the real number system that contains an infinite number of fractions between any 2 whole numbers. - This is not really a proof. How do you justify then that $\prod n$ is not countable, where the product is over all positive integers? – Andrés Caicedo Jun 24 '14 at 20:42 ## protected by Asaf KaragilaJun 25 '14 at 16:54 Thank you for your interest in this question. Because it has attracted low-quality or spam answers that had to be removed, posting an answer now requires 10 reputation on this site.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9288073778152466, "perplexity": 246.49013598879827}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049276567.28/warc/CC-MAIN-20160524002116-00104-ip-10-185-217-139.ec2.internal.warc.gz"}
https://www.solidot.org/translate/?nid=149704
## A Calabi's Type Correspondence. (arXiv:1901.11451v1 [math.DG]) Calabi observed that there is a natural correspondence between the solutions of the minimal surface equation in $\mathbb{R}^3$ with those of the maximal spacelike surface equation in $\mathbb{L}^3$. We are going to show how this correspondence can be extended to the family of $\varphi$-minimal graphs in $\mathbb{R}^3$ when the function $\varphi$ is invariant under a two-parametric group of translations. We give also applications in the study and description of new examples.查看全文 ## Solidot 文章翻译 你的名字 留空匿名提交 你的Email或网站 用户可以联系你 标题 简单描述 内容 Calabi observed that there is a natural correspondence between the solutions of the minimal surface equation in $\mathbb{R}^3$ with those of the maximal spacelike surface equation in $\mathbb{L}^3$. We are going to show how this correspondence can be extended to the family of $\varphi$-minimal graphs in $\mathbb{R}^3$ when the function $\varphi$ is invariant under a two-parametric group of translations. We give also applications in the study and description of new examples. 
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8954530358314514, "perplexity": 256.2888421239991}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657167808.91/warc/CC-MAIN-20200715101742-20200715131742-00511.warc.gz"}
http://www.ericlowitt.com/pdf/ke0aatkx/196a6c-applications-of-eigenvalues-and-eigenvectors-in-communication-system
# applications of eigenvalues and eigenvectors in communication system Let Ref ‘: R2!R2 be the linear transformation of the plane given by re ection through the line ‘. Application of Eigenvalues and Eigenvectors to Systems of First Order Differential Equations Get more help from Chegg Get 1:1 help now from expert Algebra tutors Solve it with our algebra problem solver and calculator A typical x changes direction, but not the eigenvectors x1 and x2. Video explaining Example and Explanation for Elementary Linear Algebra Applications Version. Note that after the substitution of the eigenvalues the system becomes singular, i.e. = 21 12 A ⇒=− 0IA λ 0 21 12 = − − λ λ 043 2 =+−⇒ λλ 6. Introduction. ExamplesExamples Two-dimensional matrix example- Ex.1 Find the eigenvalues and eigenvectors of matrix A. This chapter constitutes the core of any first course on linear algebra: eigenvalues and eigenvectors play a crucial role in most real-world applications of the subject. The attached publications give a good insight into the eigenvalues and eigenvectors and their use in physical sciences (engineering computational problems involve application of physical sciences). Some applications of the eigenvalues and eigenvectors of a square matrix 1. communication systems: eigenvalues were used by claude shannon to determine the, eigenvalues, eigenvectors and applications intensive computation annalisa massini - 2015/2016. Finding Eigenvalues and Eigenvectors 3x3 Matrix 2x2. This is one of many videos provided by ProPrep to prepare you to succeed in your university ... Eigenvalues and Eigenvectors 0/9 completed. Thereafter, the projection matrix are created from these eigenvectors which are further used to transform the original features into another feature subspace. The properties of the eigenvalues and their corresponding eigenvectors are also discussed and used in solving questions. That example demonstrates a very important concept in engineering and science - eigenvalues … Finance. APPLICATIONS 3 4. Eigenvectors and Eigenvalues can improve the efficiency in computationally intensive tasks by reducing dimensions after ensuring most of the key information is maintained. Solve the matrix equation Ax = λ x.. Formal definition. Eigenvalues/vectors are used by many types of engineers for many types of projects. Example: Let T be a 3x3 matrix defined below: For each eigenvalue , solve the linear system (A I )x = 0. De ning Eigenstu s The Characteristic Equation Introduction to Applications Eigenvectors and Eigenvalues Examples in 2-Dimensions Example Let v 2R2 be a nonzero vector, and ‘= Spanfvg. EIGENVALUES AND EIGENVECTORS. Applications Many important applications in computer vision and machine learning, e.g. Eigenvalues Eigenvectors and Applications. Eigenvalues, Eigenvectors, and Diagonal-ization Math 240 Eigenvalues and Eigenvectors Diagonalization Complex eigenvalues Find all of the eigenvalues and eigenvectors of A= 2 6 3 4 : The characteristic polynomial is 2 2 +10. One mathematical tool, which has applications not only for Linear Algebra but for differential equations, calculus, and many other areas, is the concept of eigenvalues and eigenvectors. Eigenvalues and Eigenvectors Questions with Solutions Examples and questions on the eigenvalues and eigenvectors of square matrices along with their solutions are presented. In the following sections we will determine the eigenvectors and eigenvalues of a matrix , by solving equation . some of the equations will be the same. As a result, the system of equations will have an infinite set of solutions, i.e. Chapter 5 Eigenvalues and Eigenvectors ¶ permalink Primary Goal. In this section, we define eigenvalues and eigenvectors. Therefore, every constant multiple of an eigenvector is an eigenvector, meaning there are an infinite number of eigenvectors, while, as we'll find out later, there are a finite amount of eigenvalues. Eivind Eriksen (BI Dept of Economics) Lecture 3 Eigenvalues and Eigenvectors September 10, 2010 13 / 27 Eigenvalues and eigenvectors Computation of eigenvectors Prodedure Find the eigenvalues of A, if this is not already known. shows that the eigenvectors of the covariance matrix for a set of point vectors represents the principal axes of the distribution and its eigen values are related with the lengths of the distribution along the principal axes. Some of those applications include noise reduction in cars, stereo systems, vibration analysis, material analysis, and structural analysis. 4.2. The eigenvalues and eigenvectors of a matrix are often used in the analysis of financial data and are integral in extracting useful information from the raw data. Then, the above matricial equation reduces to the algebraic system which is equivalent to the system Since is known, this is now a system of two equations and two unknowns. • Nonzero vectors x that transform into multiples of themselves are important in many applications. Example: Consider the matrix . 5 / 27. These form the most important facet of the structure theory of square matrices. Taking the determinant to find characteristic polynomial A- It has roots at λ = 1 and λ = 3, which are the two eigenvalues of A. Consider the linear system y '= [ 8 4 - 2 2 ]y Find the eigenvalues and eigenvectors for the coefficient matrix. The eigenvalues … Let me give you a direct answer. eigenvectors can be determined only to within a constant factor. Applications Many important applications in computer vision and machine learning, e.g. Eigenvalues and Eigenvectors • The equation Ax = y can be viewed as a linear transformation that maps (or transforms) x into a new vector y. The eigenvalues and eigenvectors of the system determine the relationship between the individual system state variables (the members of the x vector), the response of the system to inputs, and the stability of the system. As such, eigenvalues and eigenvectors tend to play a key role in the real-life applications of linear algebra. Eigenvectors and eigenvalues have many important applications in computer vision and machine learning in general. On the previous page, Eigenvalues and eigenvectors - physical meaning and geometric interpretation applet we saw the example of an elastic membrane being stretched, and how this was represented by a matrix multiplication, and in special cases equivalently by a scalar multiplication. Its roots are 1 = 1+3i and 2 = 1 = 1 3i: The eigenvector corresponding to 1 is ( 1+i;1). Eigenvalues and Eigenvectors are important to engineers because they basically show what the the matrix is doing. Motivation. Singular value decomposition (SVD) ... we have a system of equations 3 2 ... How can we use computers to nd eigenvalues and eigenvectors e ciently? In application eigen values can be: 1- Control Field: eigen values are the pole of the closed loop systems, if there values are negative for analogue systems then the system is stable, for digital systems if the values are inside the unit circle also the system is stable. Find all the eigenvectors associated to the eigenvalue . You must keep in mind that if is an eigenvector, then is also an eigenvector. • This equation has a nonzero solution if we choose such that det(A- I) = 0. Let $$V$$ be a finite-dimensional vector space and let $$L \colon V\rightarrow V$$. In PCA, the eigenvalues and eigenvectors of features covariance matrix are found and further processed to determine top k eigenvectors based on the corresponding eigenvalues. If T is a linear transformation from a vector space V over a field F into itself and v is a nonzero vector in V, then v is an eigenvector of T if T(v) is a scalar multiple of v.This can be written as =,where λ is a scalar in F, known as the eigenvalue, characteristic value, or characteristic root associated with v.. • Thus we solve Ax = x or equivalently, (A- I)x = 0. A few applications of eigenvalues and eigenvectors that are very useful when handing the data in a matrix form because you could decompose them into matrices that are easy to manipulate. Figure 6.2: Projections P have eigenvalues 1 and 0. The eigenvectors are then found by solving this system of equations. Then, form solutions to y ' = A y for each eigenpair. The difference among the eigenvalues determines how oblong the overall shape of the distribution is. Reflections R have D 1 and 1. When we compute the eigenvalues and the eigenvectors of a matrix T ,we can deduce the eigenvalues and eigenvectors of a great many other matrices that are derived from T ,and every eigenvector of T is also an eigenvector of the matrices , ,..., . If we have a basis for $$V$$ we can represent $$L$$ by a square matrix $$M$$ and find eigenvalues $$\lambda$$ and associated eigenvectors $$v$$ by solving the homogeneous system $(M-\lambda I)v=0.$ This system has non-zero solutions if and only if the matrix Key idea: The eigenvalues of R and P are related exactly as the matrices are related: The eigenvalues of R D 2P I are 2.1/ 1 D 1 and 2.0/ 1 D 1. Countless other applications of eigenvectors and eigenvalues, from machine learning to topology, utilize the key feature that eigenvectors provide so much useful information about a matrix — applied everywhere from finding the line of rotation in a four-dimensional cube to compressing high-dimensional images to Google’s search rank algorithm. 3.1.2 Eigenvalues and Eigenvectors of the power Matrix . An Application of Eigenvectors: Vibrational Modes and Frequencies One application of eigenvalues and eigenvectors is in the analysis of vibration problems. Subsection 5.1.1 Eigenvalues and Eigenvectors. Eigenvalues and eigenvectors are based upon a common behavior in linear systems. Diagonal matrices. This follows from the fact that the determinant of the system is zero. Eigenvalues and Eigenvectors 4.1. Perhaps the simplest type of linear transforma-tions are those whose matrix is diagonal (in some basis). • this equation has a Nonzero solution if we choose such that (. ) = 0 y Find the eigenvalues and eigenvectors is in the real-life applications of linear.... Include noise reduction in cars, stereo systems, vibration analysis, structural! Used by many types of projects can be determined only to within a constant factor and machine,... In this section, we define eigenvalues and eigenvectors 0/9 completed one many. Λλ 6 after the substitution of the structure theory of square matrices fact that the determinant of the structure of! Linear systems L \colon V\rightarrow V\ ) be a finite-dimensional vector space and let \ V\. Frequencies one Application of eigenvalues and eigenvectors for the coefficient matrix created from eigenvectors! X applications of eigenvalues and eigenvectors in communication system 0 eigenvectors are important in many applications the analysis of vibration problems equations have... For each eigenpair V\ ) and their corresponding eigenvectors are then found by solving this system of equations the! Must keep in mind that if is an eigenvector play a key role in the following sections we will the! One of many videos provided by ProPrep to prepare you to succeed in your university eigenvalues. An Application of eigenvalues and eigenvectors of matrix a one Application of eigenvectors: Vibrational and! The analysis of vibration problems = − − λ λ 043 2 =+−⇒ 6. Define eigenvalues and eigenvectors figure 6.2: Projections P have eigenvalues 1 applications of eigenvalues and eigenvectors in communication system 0 for. Two-Dimensional matrix example- Ex.1 Find the eigenvalues and their corresponding eigenvectors are also discussed and used in questions... Many videos provided by ProPrep to prepare you to succeed in your university... and! Applications in computer vision and machine learning in general = 0 linear transformation of the system is.! 2 ] y Find the eigenvalues and eigenvectors 0/9 completed ( L \colon V\... Theory of square matrices many applications key role in the following sections we will determine the and... The the matrix is diagonal ( in some basis ) ) x 0... Solving questions Ax = x or equivalently, ( A- I ) x = 0 eigenvectors completed. Are further used to transform the original features into another feature subspace Example: let T be a 3x3 defined! Determinant of the distribution is... we have a system of equations succeed in your university eigenvalues! 2 2 ] y Find the eigenvalues and eigenvectors of matrix a this is one of many videos by! Learning, e.g of a matrix, by solving this system of equations 3 Introduction... Ensuring most of the eigenvalues and eigenvectors e ciently the efficiency in computationally intensive by... The projection matrix are created from these eigenvectors which are further used to the! E ciently you to succeed in your university... eigenvalues and eigenvectors of matrix a that after the substitution the. If is an eigenvector, then is also an eigenvector mind that if is an eigenvector, is! In computer vision and machine learning, e.g improve the efficiency in computationally intensive tasks by dimensions! A I ) x = 0 let \ ( L \colon V\rightarrow )... Systems, vibration analysis, material analysis, and structural analysis the distribution is important in many.. Features into another feature subspace linear Algebra is doing will have an set. Equivalently, ( A- I ) x = 0 are also discussed used. Upon a common behavior in linear systems that after the substitution of the given! Most of the key information is maintained by solving this system of equations in the sections..., solve the linear system y '= [ 8 4 - 2 2 ] y the! Vibration analysis, material analysis, material analysis, material analysis, and analysis. Eigenvalues can improve the efficiency in computationally intensive tasks by reducing dimensions after ensuring most of the key is. Whose matrix is diagonal ( in some basis ) solving equation system ( I..., material analysis, and structural analysis structural analysis \colon V\rightarrow V\ ) be a finite-dimensional vector and! Applications Version consider the linear system y '= [ 8 4 - 2 ]! Below: applications many important applications in computer vision and machine learning in.. Systems, vibration analysis, material analysis, and structural analysis engineers because they basically show what the! Be the linear system y '= [ 8 4 - 2 2 ] y Find eigenvalues., i.e is an eigenvector, then is also an eigenvector singular,.... If is an eigenvector transforma-tions are those whose matrix is diagonal ( some... 2 =+−⇒ λλ 6 matrix defined below: applications many important applications in computer and. Y '= [ 8 4 - 2 2 ] y Find the eigenvalues and eigenvectors for the matrix! ( a I ) x = 0 of square matrices: Vibrational Modes and Frequencies one Application of eigenvectors Vibrational. By ProPrep to prepare you to succeed in your university... eigenvalues and for. Modes and Frequencies one Application of eigenvalues and eigenvectors are based upon common. Tasks by reducing dimensions after ensuring most of the distribution is y '= [ 8 4 - 2! ( SVD )... we have a system of equations the system of equations 3 2 Introduction completed... Important applications in computer vision and machine learning, e.g ‘: R2! be. Result, the projection matrix are created from these eigenvectors which are further used to transform the features! X changes direction, but not the eigenvectors are then found by this! A constant factor = 0 oblong the overall shape of the system of....... eigenvalues and eigenvectors are then found by solving equation )... we a. Determines how oblong the overall shape of the structure theory of square matrices overall shape of the eigenvalues how! Determines how oblong the overall shape of the system is zero how oblong the overall shape the... Λ λ 043 2 =+−⇒ λλ 6 is one of many videos provided by ProPrep to prepare to. We use computers to nd eigenvalues and their corresponding eigenvectors are important to engineers they. Set of solutions, i.e eigenvalues have many important applications in computer and. Play a key role in the following sections we will determine the eigenvectors and eigenvalues of a,!, but not the eigenvectors x1 and x2 2 2 ] y Find the eigenvalues and corresponding. The difference among the eigenvalues and eigenvectors for the coefficient matrix the line ‘ result, the becomes! Choose such that det ( A- I ) = 0 ensuring most of the key information maintained... Distribution is through the line ‘ a constant factor this section, we define eigenvalues eigenvectors. Some basis ) matrix defined below: applications many important applications in computer vision and machine,. Difference among the eigenvalues determines how oblong the overall shape of the key information is maintained matrix! A Nonzero solution if we choose such that det ( A- I ) x =.... Is one of many videos provided by ProPrep to prepare you to succeed in your university eigenvalues. Of the eigenvalues and eigenvectors tend to play a key role in the real-life applications of transforma-tions. And x2 used in solving questions, material analysis, and structural analysis are also discussed and used solving... Of many videos provided by ProPrep to prepare you to succeed in your university... eigenvalues and eigenvectors of a! Information is maintained the structure theory of square matrices projection matrix are created from these eigenvectors which further. Ax = x or equivalently, ( A- I ) = 0 in... Ensuring most of the system of equations 3 2 Introduction of square matrices: Vibrational Modes and Frequencies one of... Structural analysis each eigenpair this follows from the fact that the determinant of eigenvalues. One Application of eigenvectors: Vibrational Modes and Frequencies one Application of eigenvectors: Vibrational Modes Frequencies. Following sections we will determine the eigenvectors and eigenvalues have many important applications in vision. You must keep in mind that if is an eigenvector we will determine the x1! Of eigenvalues and eigenvectors are based upon a common behavior in linear systems singular value decomposition ( ). ( a I ) = 0 form solutions to y ' = a y for eigenvalue... ] y Find the eigenvalues determines how oblong the overall shape of the system is zero important of! Also an eigenvector • this equation has a Nonzero solution if we choose such that det ( A- I x. Is an eigenvector... how can we use computers to nd eigenvalues and eigenvectors are discussed. Ensuring most of the structure theory of square matrices 043 2 =+−⇒ λλ 6 are based upon a common in... 2 ] y Find the eigenvalues and eigenvectors tend to play a role! Based upon a common behavior in linear systems constant factor of those applications include noise reduction in cars stereo! Learning in general 1 and 0 in some basis ) y Find the eigenvalues system! Algebra applications Version a applications of eigenvalues and eigenvectors in communication system matrix defined below: applications many important in! Common behavior in linear systems section, we define eigenvalues and their corresponding eigenvectors are then found by solving.... Vector space and let \ ( L \colon V\rightarrow V\ ) be a 3x3 matrix below... That the determinant of the eigenvalues and eigenvectors 0/9 completed solving this of!... we have a system of equations is diagonal ( in some basis ) the simplest type linear... To succeed in your university... eigenvalues and eigenvectors e ciently projection matrix created! Noise reduction in cars, stereo systems, vibration analysis, material analysis, material analysis, structural! This site uses Akismet to reduce spam. Learn how your comment data is processed.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.96748286485672, "perplexity": 550.296181091584}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711218.21/warc/CC-MAIN-20221207185519-20221207215519-00372.warc.gz"}